Result: success
Tests: 0 failed / 6 succeeded
Started: 2022-09-03 09:23
Elapsed: 38m49s
Revision
uploader: crier

No Test Failures!


6 Passed Tests

28 Skipped Tests

Error lines from build-log.txt

... skipping 702 lines ...
certificate.cert-manager.io "selfsigned-cert" deleted
# Create secret for AzureClusterIdentity
./hack/create-identity-secret.sh
make[2]: Entering directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make[2]: Nothing to be done for 'kubectl'.
make[2]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
Error from server (NotFound): secrets "cluster-identity-secret" not found
secret/cluster-identity-secret created
secret/cluster-identity-secret labeled
# Create customized cloud provider configs
./hack/create-custom-cloud-provider-config.sh
make[2]: Entering directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make[2]: Nothing to be done for 'kubectl'.
... skipping 141 lines ...
# Wait for the kubeconfig to become available.
timeout --foreground 300 bash -c "while ! /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/kubectl-v1.22.4 get secrets | grep capz-2vuj7s-kubeconfig; do sleep 1; done"
capz-2vuj7s-kubeconfig                 cluster.x-k8s.io/secret   1      1s
# Get kubeconfig and store it locally.
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/kubectl-v1.22.4 get secrets capz-2vuj7s-kubeconfig -o json | jq -r .data.value | base64 --decode > ./kubeconfig
timeout --foreground 600 bash -c "while ! /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/kubectl-v1.22.4 --kubeconfig=./kubeconfig get nodes | grep control-plane; do sleep 1; done"
error: the server doesn't have a resource type "nodes"
capz-2vuj7s-control-plane-f2tt2   NotReady   control-plane   9s    v1.26.0-alpha.0.376+e7192a49552483
run "/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/kubectl-v1.22.4 --kubeconfig=./kubeconfig ..." to work with the new target cluster
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
Waiting for 1 control plane machine(s), 2 worker machine(s), and  windows machine(s) to become Ready
node/capz-2vuj7s-control-plane-f2tt2 condition met
node/capz-2vuj7s-md-0-7jqkw condition met
... skipping 63 lines ...
Dynamic Provisioning 
  should create a storage account with tags [file.csi.azure.com] [Windows]
  /home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/dynamic_provisioning_test.go:73
STEP: Creating a kubernetes client
Sep  3 09:43:05.214: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/kubeconfig
STEP: Building a namespace api object, basename azurefile
Sep  3 09:43:05.908: INFO: Error listing PodSecurityPolicies; assuming PodSecurityPolicy is disabled: the server could not find the requested resource
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
2022/09/03 09:43:06 Check driver pods if restarts ...
check the driver pods if restarts ...
======================================================================================
2022/09/03 09:43:06 Check successfully
... skipping 44 lines ...
Sep  3 09:43:29.784: INFO: PersistentVolumeClaim pvc-q9hm5 found but phase is Pending instead of Bound.
Sep  3 09:43:31.890: INFO: PersistentVolumeClaim pvc-q9hm5 found and phase=Bound (23.264218903s)
STEP: checking the PVC
STEP: validating provisioned PV
STEP: checking the PV
STEP: deploying the pod
STEP: checking that the pods command exits with no error
Sep  3 09:43:32.202: INFO: Waiting up to 15m0s for pod "azurefile-volume-tester-7c5tk" in namespace "azurefile-2540" to be "Succeeded or Failed"
Sep  3 09:43:32.305: INFO: Pod "azurefile-volume-tester-7c5tk": Phase="Pending", Reason="", readiness=false. Elapsed: 103.036701ms
Sep  3 09:43:34.414: INFO: Pod "azurefile-volume-tester-7c5tk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.211941104s
Sep  3 09:43:36.524: INFO: Pod "azurefile-volume-tester-7c5tk": Phase="Pending", Reason="", readiness=false. Elapsed: 4.32226834s
Sep  3 09:43:38.634: INFO: Pod "azurefile-volume-tester-7c5tk": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.432287481s
STEP: Saw pod success
Sep  3 09:43:38.634: INFO: Pod "azurefile-volume-tester-7c5tk" satisfied condition "Succeeded or Failed"
Sep  3 09:43:38.634: INFO: deleting Pod "azurefile-2540"/"azurefile-volume-tester-7c5tk"
Sep  3 09:43:38.754: INFO: Pod azurefile-volume-tester-7c5tk has the following logs: hello world

STEP: Deleting pod azurefile-volume-tester-7c5tk in namespace azurefile-2540
Sep  3 09:43:38.877: INFO: deleting PVC "azurefile-2540"/"pvc-q9hm5"
Sep  3 09:43:38.878: INFO: Deleting PersistentVolumeClaim "pvc-q9hm5"
... skipping 157 lines ...
Sep  3 09:45:37.076: INFO: PersistentVolumeClaim pvc-4qms9 found but phase is Pending instead of Bound.
Sep  3 09:45:39.180: INFO: PersistentVolumeClaim pvc-4qms9 found and phase=Bound (25.356311997s)
STEP: checking the PVC
STEP: validating provisioned PV
STEP: checking the PV
STEP: deploying the pod
STEP: checking that the pods command exits with an error
Sep  3 09:45:39.493: INFO: Waiting up to 15m0s for pod "azurefile-volume-tester-r7t67" in namespace "azurefile-2790" to be "Error status code"
Sep  3 09:45:39.596: INFO: Pod "azurefile-volume-tester-r7t67": Phase="Pending", Reason="", readiness=false. Elapsed: 103.108105ms
Sep  3 09:45:41.729: INFO: Pod "azurefile-volume-tester-r7t67": Phase="Pending", Reason="", readiness=false. Elapsed: 2.235559372s
Sep  3 09:45:43.839: INFO: Pod "azurefile-volume-tester-r7t67": Phase="Failed", Reason="", readiness=false. Elapsed: 4.345670552s
STEP: Saw pod failure
Sep  3 09:45:43.839: INFO: Pod "azurefile-volume-tester-r7t67" satisfied condition "Error status code"
STEP: checking that pod logs contain expected message
Sep  3 09:45:43.946: INFO: deleting Pod "azurefile-2790"/"azurefile-volume-tester-r7t67"
Sep  3 09:45:44.055: INFO: Pod azurefile-volume-tester-r7t67 has the following logs: touch: /mnt/test-1/data: Read-only file system

STEP: Deleting pod azurefile-volume-tester-r7t67 in namespace azurefile-2790
Sep  3 09:45:44.170: INFO: deleting PVC "azurefile-2790"/"pvc-4qms9"
... skipping 181 lines ...
Sep  3 09:47:46.111: INFO: PersistentVolumeClaim pvc-bsj5v found but phase is Pending instead of Bound.
Sep  3 09:47:48.215: INFO: PersistentVolumeClaim pvc-bsj5v found and phase=Bound (2.210403485s)
STEP: checking the PVC
STEP: validating provisioned PV
STEP: checking the PV
STEP: deploying the pod
STEP: checking that the pods command exits with no error
Sep  3 09:47:48.522: INFO: Waiting up to 15m0s for pod "azurefile-volume-tester-htkzk" in namespace "azurefile-4538" to be "Succeeded or Failed"
Sep  3 09:47:48.625: INFO: Pod "azurefile-volume-tester-htkzk": Phase="Pending", Reason="", readiness=false. Elapsed: 102.39204ms
Sep  3 09:47:50.734: INFO: Pod "azurefile-volume-tester-htkzk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.211731565s
Sep  3 09:47:52.847: INFO: Pod "azurefile-volume-tester-htkzk": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.324526538s
STEP: Saw pod success
Sep  3 09:47:52.847: INFO: Pod "azurefile-volume-tester-htkzk" satisfied condition "Succeeded or Failed"
STEP: resizing the pvc
STEP: sleep 30s waiting for resize complete
STEP: checking the resizing result
STEP: checking the resizing PV result
STEP: checking the resizing azurefile result
Sep  3 09:48:23.879: INFO: deleting Pod "azurefile-4538"/"azurefile-volume-tester-htkzk"
... skipping 863 lines ...
I0903 09:37:13.333261       1 tlsconfig.go:200] "Loaded serving cert" certName="Generated self signed cert" certDetail="\"localhost@1662197832\" [serving] validServingFor=[127.0.0.1,127.0.0.1,localhost] issuer=\"localhost-ca@1662197832\" (2022-09-03 08:37:11 +0000 UTC to 2023-09-03 08:37:11 +0000 UTC (now=2022-09-03 09:37:13.333233515 +0000 UTC))"
I0903 09:37:13.333510       1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1662197833\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1662197832\" (2022-09-03 08:37:12 +0000 UTC to 2023-09-03 08:37:12 +0000 UTC (now=2022-09-03 09:37:13.333480616 +0000 UTC))"
I0903 09:37:13.333547       1 secure_serving.go:210] Serving securely on 127.0.0.1:10257
I0903 09:37:13.333940       1 leaderelection.go:248] attempting to acquire leader lease kube-system/kube-controller-manager...
I0903 09:37:13.335243       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0903 09:37:13.335515       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
E0903 09:37:16.955768       1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: leases.coordination.k8s.io "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-system"
I0903 09:37:16.956181       1 leaderelection.go:253] failed to acquire lease kube-system/kube-controller-manager
I0903 09:37:21.319511       1 leaderelection.go:258] successfully acquired lease kube-system/kube-controller-manager
I0903 09:37:21.320318       1 event.go:294] "Event occurred" object="kube-system/kube-controller-manager" fieldPath="" kind="Lease" apiVersion="coordination.k8s.io/v1" type="Normal" reason="LeaderElection" message="capz-2vuj7s-control-plane-f2tt2_cc0db257-5f82-4047-b7d2-6e58bc2f0648 became leader"
W0903 09:37:21.342504       1 plugins.go:131] WARNING: azure built-in cloud provider is now deprecated. The Azure provider is deprecated and will be removed in a future release. Please use https://github.com/kubernetes-sigs/cloud-provider-azure
I0903 09:37:21.343176       1 azure_auth.go:232] Using AzurePublicCloud environment
I0903 09:37:21.343239       1 azure_auth.go:117] azure: using client_id+client_secret to retrieve access token
I0903 09:37:21.343308       1 azure_interfaceclient.go:63] Azure InterfacesClient (read ops) using rate limit config: QPS=1, bucket=5
... skipping 29 lines ...
I0903 09:37:21.345491       1 reflector.go:221] Starting reflector *v1.Node (12h20m3.614702647s) from vendor/k8s.io/client-go/informers/factory.go:134
I0903 09:37:21.345514       1 reflector.go:257] Listing and watching *v1.Node from vendor/k8s.io/client-go/informers/factory.go:134
I0903 09:37:21.345809       1 reflector.go:221] Starting reflector *v1.ServiceAccount (12h20m3.614702647s) from vendor/k8s.io/client-go/informers/factory.go:134
I0903 09:37:21.345833       1 reflector.go:257] Listing and watching *v1.ServiceAccount from vendor/k8s.io/client-go/informers/factory.go:134
I0903 09:37:21.346153       1 reflector.go:221] Starting reflector *v1.Secret (12h20m3.614702647s) from vendor/k8s.io/client-go/informers/factory.go:134
I0903 09:37:21.346310       1 reflector.go:257] Listing and watching *v1.Secret from vendor/k8s.io/client-go/informers/factory.go:134
W0903 09:37:21.379201       1 azure_config.go:53] Failed to get cloud-config from secret: failed to get secret azure-cloud-provider: secrets "azure-cloud-provider" is forbidden: User "system:serviceaccount:kube-system:azure-cloud-provider" cannot get resource "secrets" in API group "" in the namespace "kube-system", skip initializing from secret
I0903 09:37:21.379232       1 controllermanager.go:573] Starting "garbagecollector"
I0903 09:37:21.401068       1 controllermanager.go:602] Started "garbagecollector"
I0903 09:37:21.401506       1 controllermanager.go:573] Starting "attachdetach"
I0903 09:37:21.401072       1 garbagecollector.go:154] Starting garbage collector controller
I0903 09:37:21.402123       1 shared_informer.go:255] Waiting for caches to sync for garbage collector
I0903 09:37:21.402486       1 graph_builder.go:275] garbage controller monitor not synced: no monitors
... skipping 3 lines ...
I0903 09:37:21.413835       1 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/gce-pd"
I0903 09:37:21.414072       1 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/azure-disk"
I0903 09:37:21.414278       1 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/vsphere-volume"
I0903 09:37:21.414517       1 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
I0903 09:37:21.414666       1 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/fc"
I0903 09:37:21.414889       1 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
I0903 09:37:21.415108       1 csi_plugin.go:257] Cast from VolumeHost to KubeletVolumeHost failed. Skipping CSINode initialization, not running on kubelet
I0903 09:37:21.415381       1 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/csi"
I0903 09:37:21.415825       1 controllermanager.go:602] Started "attachdetach"
I0903 09:37:21.416098       1 controllermanager.go:573] Starting "ttl-after-finished"
I0903 09:37:21.416077       1 attach_detach_controller.go:328] Starting attach detach controller
I0903 09:37:21.416370       1 shared_informer.go:255] Waiting for caches to sync for attach detach
I0903 09:37:21.432467       1 controllermanager.go:602] Started "ttl-after-finished"
... skipping 221 lines ...
I0903 09:37:25.726546       1 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/rbd"
I0903 09:37:25.726562       1 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/aws-ebs"
I0903 09:37:25.726624       1 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/gce-pd"
I0903 09:37:25.726646       1 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/azure-disk"
I0903 09:37:25.726661       1 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/azure-file"
I0903 09:37:25.726741       1 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
I0903 09:37:25.726853       1 csi_plugin.go:257] Cast from VolumeHost to KubeletVolumeHost failed. Skipping CSINode initialization, not running on kubelet
I0903 09:37:25.726873       1 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/csi"
I0903 09:37:25.726996       1 controllermanager.go:602] Started "persistentvolume-binder"
I0903 09:37:25.727208       1 pv_controller_base.go:318] Starting persistent volume controller
I0903 09:37:25.727462       1 shared_informer.go:255] Waiting for caches to sync for persistent volume
I0903 09:37:25.729437       1 reflector.go:221] Starting reflector *v1.Namespace (5m0s) from vendor/k8s.io/client-go/informers/factory.go:134
I0903 09:37:25.729596       1 reflector.go:257] Listing and watching *v1.Namespace from vendor/k8s.io/client-go/informers/factory.go:134
... skipping 251 lines ...
I0903 09:37:26.303611       1 shared_informer.go:262] Caches are synced for garbage collector
I0903 09:37:26.303621       1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I0903 09:37:26.340243       1 shared_informer.go:285] caches populated
I0903 09:37:26.340278       1 shared_informer.go:262] Caches are synced for garbage collector
I0903 09:37:26.340290       1 garbagecollector.go:263] synced garbage collector
I0903 09:37:29.047504       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-2vuj7s-control-plane-f2tt2"
W0903 09:37:29.048116       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="capz-2vuj7s-control-plane-f2tt2" does not exist
I0903 09:37:29.047713       1 controller.go:690] Syncing backends for all LB services.
I0903 09:37:29.048532       1 controller.go:728] Running updateLoadBalancerHosts(len(services)==0, workers==1)
I0903 09:37:29.048697       1 controller.go:753] Finished updateLoadBalancerHosts
I0903 09:37:29.048870       1 controller.go:694] Successfully updated 0 out of 0 load balancers to direct traffic to the updated set of nodes
I0903 09:37:29.049010       1 controller.go:686] It took 0.001343105 seconds to finish syncNodes
I0903 09:37:29.047921       1 taint_manager.go:466] "Noticed node update" node={nodeName:capz-2vuj7s-control-plane-f2tt2}
I0903 09:37:29.048002       1 topologycache.go:183] Ignoring node capz-2vuj7s-control-plane-f2tt2 because it is not ready: [{MemoryPressure False 2022-09-03 09:37:01 +0000 UTC 2022-09-03 09:37:01 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2022-09-03 09:37:01 +0000 UTC 2022-09-03 09:37:01 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2022-09-03 09:37:01 +0000 UTC 2022-09-03 09:37:01 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready False 2022-09-03 09:37:01 +0000 UTC 2022-09-03 09:37:01 +0000 UTC KubeletNotReady [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful, container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized, CSINode is not yet initialized]}]
I0903 09:37:29.049532       1 topologycache.go:215] Insufficient node info for topology hints (0 zones, %!s(int64=0) CPU, true)
I0903 09:37:29.049738       1 taint_manager.go:471] "Updating known taints on node" node="capz-2vuj7s-control-plane-f2tt2" taints=[]
I0903 09:37:29.081333       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-2vuj7s-control-plane-f2tt2"
I0903 09:37:29.118266       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-2vuj7s-control-plane-f2tt2"
I0903 09:37:29.122261       1 ttl_controller.go:275] "Changed ttl annotation" node="capz-2vuj7s-control-plane-f2tt2" new_ttl="0s"
I0903 09:37:29.615925       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-2vuj7s-control-plane-f2tt2"
... skipping 25 lines ...
I0903 09:37:30.653835       1 disruption.go:482] No matching pdb for pod "coredns-84994b8c4-4hc6m"
I0903 09:37:30.653892       1 pvc_protection_controller.go:380] "Enqueuing PVCs for Pod" pod="kube-system/coredns-84994b8c4-4hc6m" podUID=884b72ca-d676-437d-9308-a40a411385e3
I0903 09:37:30.653913       1 replica_set.go:394] Pod coredns-84994b8c4-4hc6m created: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"coredns-84994b8c4-4hc6m", GenerateName:"coredns-84994b8c4-", Namespace:"kube-system", SelfLink:"", UID:"884b72ca-d676-437d-9308-a40a411385e3", ResourceVersion:"308", Generation:0, CreationTimestamp:time.Date(2022, time.September, 3, 9, 37, 30, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"84994b8c4"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"apps/v1", Kind:"ReplicaSet", Name:"coredns-84994b8c4", UID:"42efde00-3509-45a8-b402-56fe665b0932", Controller:(*bool)(0xc000ee2f8f), BlockOwnerDeletion:(*bool)(0xc000ee2fc0)}}, Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.September, 3, 9, 37, 30, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc000367848), Subresource:""}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"config-volume", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), 
AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc000c60700), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"kube-api-access-chc8k", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(0xc001293b40), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), 
Containers:[]v1.Container{v1.Container{Name:"coredns", Image:"registry.k8s.io/coredns/coredns:v1.9.3", Command:[]string(nil), Args:[]string{"-conf", "/etc/coredns/Corefile"}, WorkingDir:"", Ports:[]v1.ContainerPort{v1.ContainerPort{Name:"dns", HostPort:0, ContainerPort:53, Protocol:"UDP", HostIP:""}, v1.ContainerPort{Name:"dns-tcp", HostPort:0, ContainerPort:53, Protocol:"TCP", HostIP:""}, v1.ContainerPort{Name:"metrics", HostPort:0, ContainerPort:9153, Protocol:"TCP", HostIP:""}}, EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:178257920, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"170Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:73400320, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"70Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"config-volume", ReadOnly:true, MountPath:"/etc/coredns", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-chc8k", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(0xc000c61700), ReadinessProbe:(*v1.Probe)(0xc000c61740), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc0005d30e0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc000ee3130), 
ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"Default", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"coredns", DeprecatedServiceAccount:"coredns", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc00036aa10), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(0xc000367998), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node-role.kubernetes.io/control-plane", Operator:"", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc000ee31a0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc000ee31c0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-cluster-critical", Priority:(*int32)(0xc000ee31c8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc000ee31cc), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc001af8bd0), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil), OS:(*v1.PodOS)(nil), HostUsers:(*bool)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition(nil), Message:"", Reason:"", NominatedNodeName:"", HostIP:"", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:<nil>, InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus(nil), QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}.
I0903 09:37:30.654287       1 controller_utils.go:240] Lowered expectations &controller.ControlleeExpectations{add:1, del:0, key:"kube-system/coredns-84994b8c4", timestamp:time.Time{wall:0xc0bce5f6a41955a9, ext:19179409469, loc:(*time.Location)(0x6f10040)}}
I0903 09:37:30.654329       1 taint_manager.go:431] "Noticed pod update" pod="kube-system/coredns-84994b8c4-4hc6m"
I0903 09:37:30.654520       1 deployment_controller.go:585] "Finished syncing deployment" deployment="kube-system/coredns" duration="70.316776ms"
I0903 09:37:30.654545       1 deployment_controller.go:497] "Error syncing deployment" deployment="kube-system/coredns" err="Operation cannot be fulfilled on deployments.apps \"coredns\": the object has been modified; please apply your changes to the latest version and try again"
I0903 09:37:30.654600       1 deployment_controller.go:583] "Started syncing deployment" deployment="kube-system/coredns" startTime="2022-09-03 09:37:30.654582698 +0000 UTC m=+19.228352062"
I0903 09:37:30.660110       1 deployment_util.go:775] Deployment "coredns" timed out (false) [last progress check: 2022-09-03 09:37:30 +0000 UTC - now: 2022-09-03 09:37:30.660100519 +0000 UTC m=+19.233869883]
I0903 09:37:30.660729       1 controller_utils.go:581] Controller coredns-84994b8c4 created pod coredns-84994b8c4-4hc6m
I0903 09:37:30.661565       1 event.go:294] "Event occurred" object="kube-system/coredns-84994b8c4" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-84994b8c4-4hc6m"
I0903 09:37:30.708927       1 endpointslicemirroring_controller.go:278] syncEndpoints("kube-system/kube-dns")
I0903 09:37:30.708973       1 endpointslicemirroring_controller.go:313] kube-system/kube-dns Service now has selector, cleaning up any mirrored EndpointSlices
... skipping 347 lines ...
I0903 09:38:03.226729       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:1, del:0, key:"kube-system/metrics-server-76f7667fbf", timestamp:time.Time{wall:0xc0bce5fecd838371, ext:51800492037, loc:(*time.Location)(0x6f10040)}}
I0903 09:38:03.227042       1 replica_set.go:577] "Too few replicas" replicaSet="kube-system/metrics-server-76f7667fbf" need=1 creating=1
I0903 09:38:03.227500       1 event.go:294] "Event occurred" object="kube-system/metrics-server" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-76f7667fbf to 1"
I0903 09:38:03.238144       1 deployment_util.go:775] Deployment "metrics-server" timed out (false) [last progress check: 2022-09-03 09:38:03.22394197 +0000 UTC m=+51.797711334 - now: 2022-09-03 09:38:03.238133487 +0000 UTC m=+51.811902751]
I0903 09:38:03.238692       1 deployment_controller.go:183] "Updating deployment" deployment="kube-system/metrics-server"
I0903 09:38:03.253597       1 deployment_controller.go:585] "Finished syncing deployment" deployment="kube-system/metrics-server" duration="49.56526ms"
I0903 09:38:03.253738       1 deployment_controller.go:497] "Error syncing deployment" deployment="kube-system/metrics-server" err="Operation cannot be fulfilled on deployments.apps \"metrics-server\": the object has been modified; please apply your changes to the latest version and try again"
I0903 09:38:03.253830       1 deployment_controller.go:583] "Started syncing deployment" deployment="kube-system/metrics-server" startTime="2022-09-03 09:38:03.253810206 +0000 UTC m=+51.827579570"
I0903 09:38:03.254509       1 deployment_util.go:775] Deployment "metrics-server" timed out (false) [last progress check: 2022-09-03 09:38:03 +0000 UTC - now: 2022-09-03 09:38:03.254496607 +0000 UTC m=+51.828265971]
I0903 09:38:03.259542       1 disruption.go:479] addPod called on pod "metrics-server-76f7667fbf-9vk4h"
I0903 09:38:03.261383       1 disruption.go:570] No PodDisruptionBudgets found for pod metrics-server-76f7667fbf-9vk4h, PodDisruptionBudget controller will avoid syncing.
I0903 09:38:03.261722       1 disruption.go:482] No matching pdb for pod "metrics-server-76f7667fbf-9vk4h"
I0903 09:38:03.262224       1 endpoints_controller.go:369] Finished syncing service "kube-system/metrics-server" endpoints. (48µs)
... skipping 71 lines ...
I0903 09:38:05.689287       1 taint_manager.go:431] "Noticed pod update" pod="kube-system/calico-kube-controllers-755ff8d7b5-r5bvx"
I0903 09:38:05.696125       1 deployment_controller.go:288] "ReplicaSet updated" replicaSet="kube-system/calico-kube-controllers-755ff8d7b5"
I0903 09:38:05.699028       1 replica_set.go:667] Finished syncing ReplicaSet "kube-system/calico-kube-controllers-755ff8d7b5" (23.072311ms)
I0903 09:38:05.699071       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-kube-controllers-755ff8d7b5", timestamp:time.Time{wall:0xc0bce5ff684e44a6, ext:54249987386, loc:(*time.Location)(0x6f10040)}}
I0903 09:38:05.699418       1 replica_set_utils.go:59] Updating status for : kube-system/calico-kube-controllers-755ff8d7b5, replicas 0->1 (need 1), fullyLabeledReplicas 0->1, readyReplicas 0->0, availableReplicas 0->0, sequence No: 1->1
I0903 09:38:05.699755       1 deployment_controller.go:585] "Finished syncing deployment" deployment="kube-system/calico-kube-controllers" duration="29.354614ms"
I0903 09:38:05.699983       1 deployment_controller.go:497] "Error syncing deployment" deployment="kube-system/calico-kube-controllers" err="Operation cannot be fulfilled on deployments.apps \"calico-kube-controllers\": the object has been modified; please apply your changes to the latest version and try again"
I0903 09:38:05.700521       1 deployment_controller.go:583] "Started syncing deployment" deployment="kube-system/calico-kube-controllers" startTime="2022-09-03 09:38:05.700486234 +0000 UTC m=+54.274255498"
I0903 09:38:05.701065       1 deployment_util.go:775] Deployment "calico-kube-controllers" timed out (false) [last progress check: 2022-09-03 09:38:05 +0000 UTC - now: 2022-09-03 09:38:05.701058634 +0000 UTC m=+54.274827898]
I0903 09:38:05.708939       1 replica_set.go:667] Finished syncing ReplicaSet "kube-system/calico-kube-controllers-755ff8d7b5" (9.871705ms)
I0903 09:38:05.710127       1 disruption.go:494] updatePod called on pod "calico-kube-controllers-755ff8d7b5-r5bvx"
I0903 09:38:05.710168       1 disruption.go:570] No PodDisruptionBudgets found for pod calico-kube-controllers-755ff8d7b5-r5bvx, PodDisruptionBudget controller will avoid syncing.
I0903 09:38:05.710213       1 disruption.go:497] No matching pdb for pod "calico-kube-controllers-755ff8d7b5-r5bvx"
... skipping 241 lines ...
I0903 09:38:25.435747       1 tokencleaner.go:166] Finished syncing secret "kube-system/bootstrap-token-ac1fp0" (9.5µs)
I0903 09:38:25.761273       1 reflector.go:281] vendor/k8s.io/client-go/informers/factory.go:134: forcing resync
I0903 09:38:25.792869       1 reflector.go:281] vendor/k8s.io/client-go/informers/factory.go:134: forcing resync
I0903 09:38:25.827320       1 gc_controller.go:221] GC'ing orphaned
I0903 09:38:25.827397       1 gc_controller.go:290] GC'ing unscheduled pods which are terminating.
I0903 09:38:25.843773       1 pv_controller_base.go:612] resyncing PV controller
I0903 09:38:25.893113       1 node_lifecycle_controller.go:1084] ReadyCondition for Node capz-2vuj7s-control-plane-f2tt2 transitioned from &NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:2022-09-03 09:37:41 +0000 UTC,LastTransitionTime:2022-09-03 09:37:01 +0000 UTC,Reason:KubeletNotReady,Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized,} to &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-09-03 09:38:22 +0000 UTC,LastTransitionTime:2022-09-03 09:38:22 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,}
I0903 09:38:25.893510       1 node_lifecycle_controller.go:1092] Node capz-2vuj7s-control-plane-f2tt2 ReadyCondition updated. Updating timestamp.
I0903 09:38:25.893709       1 node_lifecycle_controller.go:938] Node capz-2vuj7s-control-plane-f2tt2 is healthy again, removing all taints
I0903 09:38:25.893906       1 node_lifecycle_controller.go:1236] Controller detected that some Nodes are Ready. Exiting master disruption mode.
E0903 09:38:25.905153       1 resource_quota_controller.go:417] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0903 09:38:25.905434       1 resource_quota_controller.go:443] syncing resource quota controller with updated resources from discovery: added: [crd.projectcalico.org/v1, Resource=networkpolicies crd.projectcalico.org/v1, Resource=networksets], removed: []
I0903 09:38:25.905721       1 resource_quota_monitor.go:166] QuotaMonitor using a shared informer for resource "crd.projectcalico.org/v1, Resource=networksets"
... skipping 9 lines ...
I0903 09:38:25.906787       1 reflector.go:257] Listing and watching *v1.PartialObjectMetadata from vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90
I0903 09:38:25.910940       1 resource_quota_monitor.go:283] quota monitor not synced: crd.projectcalico.org/v1, Resource=networksets
I0903 09:38:26.012579       1 resource_quota_monitor.go:283] quota monitor not synced: crd.projectcalico.org/v1, Resource=networkpolicies
I0903 09:38:26.111276       1 shared_informer.go:285] caches populated
I0903 09:38:26.111339       1 shared_informer.go:262] Caches are synced for resource quota
I0903 09:38:26.111351       1 resource_quota_controller.go:462] synced quota controller
W0903 09:38:26.400052       1 garbagecollector.go:752] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
I0903 09:38:26.400502       1 garbagecollector.go:220] syncing garbage collector with updated resources from discovery (attempt 1): added: [crd.projectcalico.org/v1, Resource=bgpconfigurations crd.projectcalico.org/v1, Resource=bgppeers crd.projectcalico.org/v1, Resource=blockaffinities crd.projectcalico.org/v1, Resource=caliconodestatuses crd.projectcalico.org/v1, Resource=clusterinformations crd.projectcalico.org/v1, Resource=felixconfigurations crd.projectcalico.org/v1, Resource=globalnetworkpolicies crd.projectcalico.org/v1, Resource=globalnetworksets crd.projectcalico.org/v1, Resource=hostendpoints crd.projectcalico.org/v1, Resource=ipamblocks crd.projectcalico.org/v1, Resource=ipamconfigs crd.projectcalico.org/v1, Resource=ipamhandles crd.projectcalico.org/v1, Resource=ippools crd.projectcalico.org/v1, Resource=ipreservations crd.projectcalico.org/v1, Resource=kubecontrollersconfigurations crd.projectcalico.org/v1, Resource=networkpolicies crd.projectcalico.org/v1, Resource=networksets], removed: []
I0903 09:38:26.400614       1 garbagecollector.go:226] reset restmapper
E0903 09:38:26.424537       1 memcache.go:206] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0903 09:38:26.426439       1 memcache.go:104] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0903 09:38:26.427644       1 graph_builder.go:176] using a shared informer for resource "crd.projectcalico.org/v1, Resource=caliconodestatuses", kind "crd.projectcalico.org/v1, Kind=CalicoNodeStatus"
I0903 09:38:26.427727       1 graph_builder.go:176] using a shared informer for resource "crd.projectcalico.org/v1, Resource=ippools", kind "crd.projectcalico.org/v1, Kind=IPPool"
... skipping 240 lines ...
I0903 09:38:51.528843       1 deployment_controller.go:585] "Finished syncing deployment" deployment="kube-system/calico-kube-controllers" duration="362.102µs"
I0903 09:38:55.762227       1 reflector.go:281] vendor/k8s.io/client-go/informers/factory.go:134: forcing resync
I0903 09:38:55.794011       1 reflector.go:281] vendor/k8s.io/client-go/informers/factory.go:134: forcing resync
I0903 09:38:55.845463       1 pv_controller_base.go:612] resyncing PV controller
E0903 09:38:56.125847       1 resource_quota_controller.go:417] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0903 09:38:56.125938       1 resource_quota_controller.go:432] no resource updates from discovery, skipping resource quota sync
W0903 09:38:57.480083       1 garbagecollector.go:752] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
I0903 09:38:57.640633       1 httplog.go:131] "HTTP" verb="GET" URI="/healthz" latency="181.502µs" userAgent="kube-probe/1.26+" audit-ID="" srcIP="127.0.0.1:57602" resp=200
I0903 09:39:03.403084       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-2vuj7s-control-plane-f2tt2"
I0903 09:39:03.580075       1 disruption.go:494] updatePod called on pod "metrics-server-76f7667fbf-9vk4h"
I0903 09:39:03.580419       1 disruption.go:570] No PodDisruptionBudgets found for pod metrics-server-76f7667fbf-9vk4h, PodDisruptionBudget controller will avoid syncing.
I0903 09:39:03.580585       1 disruption.go:497] No matching pdb for pod "metrics-server-76f7667fbf-9vk4h"
I0903 09:39:03.581004       1 endpoints_controller.go:528] Update endpoints for kube-system/metrics-server, ready: 1 not ready: 0
... skipping 60 lines ...
I0903 09:39:25.832039       1 gc_controller.go:290] GC'ing unscheduled pods which are terminating.
I0903 09:39:25.846599       1 pv_controller_base.go:612] resyncing PV controller
I0903 09:39:26.146938       1 resource_quota_controller.go:432] no resource updates from discovery, skipping resource quota sync
I0903 09:39:27.641472       1 httplog.go:131] "HTTP" verb="GET" URI="/healthz" latency="226.7µs" userAgent="kube-probe/1.26+" audit-ID="" srcIP="127.0.0.1:43292" resp=200
I0903 09:39:37.640905       1 httplog.go:131] "HTTP" verb="GET" URI="/healthz" latency="142.9µs" userAgent="kube-probe/1.26+" audit-ID="" srcIP="127.0.0.1:38722" resp=200
I0903 09:39:38.736767       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-2vuj7s-md-0-pdqzz"
W0903 09:39:38.736811       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="capz-2vuj7s-md-0-pdqzz" does not exist
I0903 09:39:38.741891       1 controller.go:690] Syncing backends for all LB services.
I0903 09:39:38.741917       1 controller.go:728] Running updateLoadBalancerHosts(len(services)==0, workers==1)
I0903 09:39:38.741932       1 controller.go:753] Finished updateLoadBalancerHosts
I0903 09:39:38.741938       1 controller.go:694] Successfully updated 0 out of 0 load balancers to direct traffic to the updated set of nodes
I0903 09:39:38.742098       1 controller.go:686] It took 0.000207501 seconds to finish syncNodes
I0903 09:39:38.742275       1 taint_manager.go:466] "Noticed node update" node={nodeName:capz-2vuj7s-md-0-pdqzz}
I0903 09:39:38.742315       1 taint_manager.go:471] "Updating known taints on node" node="capz-2vuj7s-md-0-pdqzz" taints=[]
I0903 09:39:38.742447       1 topologycache.go:179] Ignoring node capz-2vuj7s-control-plane-f2tt2 because it has an excluded label
I0903 09:39:38.742597       1 topologycache.go:183] Ignoring node capz-2vuj7s-md-0-pdqzz because it is not ready: [{MemoryPressure False 2022-09-03 09:39:38 +0000 UTC 2022-09-03 09:39:38 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2022-09-03 09:39:38 +0000 UTC 2022-09-03 09:39:38 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2022-09-03 09:39:38 +0000 UTC 2022-09-03 09:39:38 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready False 2022-09-03 09:39:38 +0000 UTC 2022-09-03 09:39:38 +0000 UTC KubeletNotReady [container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized, failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "capz-2vuj7s-md-0-pdqzz" not found]}]
I0903 09:39:38.742762       1 topologycache.go:215] Insufficient node info for topology hints (0 zones, %!s(int64=0) CPU, true)
I0903 09:39:38.743755       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0bce5f84249c676, ext:25612158730, loc:(*time.Location)(0x6f10040)}}
I0903 09:39:38.744171       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:1, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0bce616ac5b0add, ext:147317933425, loc:(*time.Location)(0x6f10040)}}
I0903 09:39:38.744568       1 daemon_controller.go:974] Nodes needing daemon pods for daemon set kube-proxy: [capz-2vuj7s-md-0-pdqzz], creating 1
I0903 09:39:38.745734       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc0bce6071fa2f9c3, ext:85104543831, loc:(*time.Location)(0x6f10040)}}
I0903 09:39:38.746007       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:1, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc0bce616ac7710b2, ext:147319769926, loc:(*time.Location)(0x6f10040)}}
... skipping 211 lines ...
I0903 09:39:58.609125       1 controller.go:690] Syncing backends for all LB services.
I0903 09:39:58.609161       1 controller.go:728] Running updateLoadBalancerHosts(len(services)==0, workers==1)
I0903 09:39:58.609180       1 controller.go:753] Finished updateLoadBalancerHosts
I0903 09:39:58.609236       1 controller.go:694] Successfully updated 0 out of 0 load balancers to direct traffic to the updated set of nodes
I0903 09:39:58.609246       1 controller.go:686] It took 0.000231307 seconds to finish syncNodes
I0903 09:39:58.609318       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-2vuj7s-md-0-7jqkw"
W0903 09:39:58.609357       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="capz-2vuj7s-md-0-7jqkw" does not exist
I0903 09:39:58.609746       1 topologycache.go:179] Ignoring node capz-2vuj7s-control-plane-f2tt2 because it has an excluded label
I0903 09:39:58.609937       1 topologycache.go:183] Ignoring node capz-2vuj7s-md-0-pdqzz because it is not ready: [{MemoryPressure False 2022-09-03 09:39:49 +0000 UTC 2022-09-03 09:39:38 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2022-09-03 09:39:49 +0000 UTC 2022-09-03 09:39:38 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2022-09-03 09:39:49 +0000 UTC 2022-09-03 09:39:38 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready False 2022-09-03 09:39:49 +0000 UTC 2022-09-03 09:39:38 +0000 UTC KubeletNotReady container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized}]
I0903 09:39:58.610132       1 topologycache.go:183] Ignoring node capz-2vuj7s-md-0-7jqkw because it is not ready: [{MemoryPressure False 2022-09-03 09:39:58 +0000 UTC 2022-09-03 09:39:58 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2022-09-03 09:39:58 +0000 UTC 2022-09-03 09:39:58 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2022-09-03 09:39:58 +0000 UTC 2022-09-03 09:39:58 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready False 2022-09-03 09:39:58 +0000 UTC 2022-09-03 09:39:58 +0000 UTC KubeletNotReady [container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized, failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "capz-2vuj7s-md-0-7jqkw" not found]}]
I0903 09:39:58.610409       1 topologycache.go:215] Insufficient node info for topology hints (0 zones, %!s(int64=0) CPU, true)
I0903 09:39:58.610393       1 taint_manager.go:466] "Noticed node update" node={nodeName:capz-2vuj7s-md-0-7jqkw}
I0903 09:39:58.610444       1 taint_manager.go:471] "Updating known taints on node" node="capz-2vuj7s-md-0-7jqkw" taints=[]
I0903 09:39:58.611143       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0bce618faf7a601, ext:156563077681, loc:(*time.Location)(0x6f10040)}}
I0903 09:39:58.612032       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:1, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0bce61ba47ac23e, ext:167185794258, loc:(*time.Location)(0x6f10040)}}
I0903 09:39:58.612234       1 daemon_controller.go:974] Nodes needing daemon pods for daemon set kube-proxy: [capz-2vuj7s-md-0-7jqkw], creating 1
... skipping 196 lines ...
I0903 09:40:09.274072       1 controller.go:728] Running updateLoadBalancerHosts(len(services)==0, workers==1)
I0903 09:40:09.274102       1 controller.go:753] Finished updateLoadBalancerHosts
I0903 09:40:09.274110       1 controller.go:694] Successfully updated 0 out of 0 load balancers to direct traffic to the updated set of nodes
I0903 09:40:09.274117       1 controller.go:686] It took 0.002076711 seconds to finish syncNodes
I0903 09:40:09.272272       1 controller_utils.go:205] "Added taint to node" taint=[] node="capz-2vuj7s-md-0-pdqzz"
I0903 09:40:09.272294       1 topologycache.go:179] Ignoring node capz-2vuj7s-control-plane-f2tt2 because it has an excluded label
I0903 09:40:09.274574       1 topologycache.go:183] Ignoring node capz-2vuj7s-md-0-7jqkw because it is not ready: [{MemoryPressure False 2022-09-03 09:40:09 +0000 UTC 2022-09-03 09:39:58 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2022-09-03 09:40:09 +0000 UTC 2022-09-03 09:39:58 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2022-09-03 09:40:09 +0000 UTC 2022-09-03 09:39:58 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready False 2022-09-03 09:40:09 +0000 UTC 2022-09-03 09:39:58 +0000 UTC KubeletNotReady container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized}]
I0903 09:40:09.274663       1 topologycache.go:215] Insufficient node info for topology hints (1 zones, %!s(int64=2000) CPU, true)
I0903 09:40:09.285273       1 controller_utils.go:217] "Made sure that node has no taint" node="capz-2vuj7s-md-0-pdqzz" taint=[&Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoSchedule,TimeAdded:<nil>,}]
I0903 09:40:09.287025       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-2vuj7s-md-0-pdqzz"
I0903 09:40:09.473554       1 azure_vmss.go:370] Can not extract scale set name from providerID (azure:///subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-2vuj7s/providers/Microsoft.Compute/virtualMachines/capz-2vuj7s-md-0-7jqkw), assuming it is managed by availability set: not a vmss instance
I0903 09:40:09.473727       1 azure_vmss.go:370] Can not extract scale set name from providerID (azure:///subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-2vuj7s/providers/Microsoft.Compute/virtualMachines/capz-2vuj7s-md-0-7jqkw), assuming it is managed by availability set: not a vmss instance
I0903 09:40:09.473834       1 azure_instances.go:240] InstanceShutdownByProviderID gets power status "running" for node "capz-2vuj7s-md-0-7jqkw"
... skipping 26 lines ...
I0903 09:40:10.259731       1 daemon_controller.go:1119] Updating daemon set status
I0903 09:40:10.259870       1 daemon_controller.go:1179] Finished syncing daemon set "kube-system/kube-proxy" (1.90381ms)
I0903 09:40:10.427518       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-2vuj7s-md-0-pdqzz"
I0903 09:40:10.796994       1 reflector.go:281] vendor/k8s.io/client-go/informers/factory.go:134: forcing resync
I0903 09:40:10.848754       1 pv_controller_base.go:612] resyncing PV controller
I0903 09:40:10.909923       1 node_lifecycle_controller.go:1092] Node capz-2vuj7s-md-0-7jqkw ReadyCondition updated. Updating timestamp.
I0903 09:40:10.910193       1 node_lifecycle_controller.go:1084] ReadyCondition for Node capz-2vuj7s-md-0-pdqzz transitioned from &NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:2022-09-03 09:39:59 +0000 UTC,LastTransitionTime:2022-09-03 09:39:38 +0000 UTC,Reason:KubeletNotReady,Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized,} to &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-09-03 09:40:09 +0000 UTC,LastTransitionTime:2022-09-03 09:40:09 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,}
I0903 09:40:10.910451       1 node_lifecycle_controller.go:1092] Node capz-2vuj7s-md-0-pdqzz ReadyCondition updated. Updating timestamp.
I0903 09:40:10.925482       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-2vuj7s-md-0-pdqzz"
I0903 09:40:10.926319       1 taint_manager.go:466] "Noticed node update" node={nodeName:capz-2vuj7s-md-0-pdqzz}
I0903 09:40:10.926357       1 taint_manager.go:471] "Updating known taints on node" node="capz-2vuj7s-md-0-pdqzz" taints=[]
I0903 09:40:10.926381       1 taint_manager.go:492] "All taints were removed from the node. Cancelling all evictions..." node="capz-2vuj7s-md-0-pdqzz"
I0903 09:40:10.927492       1 node_lifecycle_controller.go:938] Node capz-2vuj7s-md-0-pdqzz is healthy again, removing all taints
... skipping 131 lines ...
I0903 09:40:29.559964       1 topologycache.go:179] Ignoring node capz-2vuj7s-control-plane-f2tt2 because it has an excluded label
I0903 09:40:29.559990       1 topologycache.go:215] Insufficient node info for topology hints (1 zones, %!s(int64=4000) CPU, true)
I0903 09:40:29.571601       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-2vuj7s-md-0-7jqkw"
I0903 09:40:29.572888       1 controller_utils.go:217] "Made sure that node has no taint" node="capz-2vuj7s-md-0-7jqkw" taint=[&Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoSchedule,TimeAdded:<nil>,}]
I0903 09:40:29.788090       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-2vuj7s-md-0-pdqzz"
I0903 09:40:30.931397       1 node_lifecycle_controller.go:1092] Node capz-2vuj7s-md-0-pdqzz ReadyCondition updated. Updating timestamp.
I0903 09:40:30.931891       1 node_lifecycle_controller.go:1084] ReadyCondition for Node capz-2vuj7s-md-0-7jqkw transitioned from &NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:2022-09-03 09:40:19 +0000 UTC,LastTransitionTime:2022-09-03 09:39:58 +0000 UTC,Reason:KubeletNotReady,Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized,} to &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-09-03 09:40:29 +0000 UTC,LastTransitionTime:2022-09-03 09:40:29 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,}
I0903 09:40:30.932216       1 node_lifecycle_controller.go:1092] Node capz-2vuj7s-md-0-7jqkw ReadyCondition updated. Updating timestamp.
I0903 09:40:30.976524       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-2vuj7s-md-0-7jqkw"
I0903 09:40:30.977186       1 taint_manager.go:466] "Noticed node update" node={nodeName:capz-2vuj7s-md-0-7jqkw}
I0903 09:40:30.977213       1 taint_manager.go:471] "Updating known taints on node" node="capz-2vuj7s-md-0-7jqkw" taints=[]
I0903 09:40:30.977253       1 taint_manager.go:492] "All taints were removed from the node. Cancelling all evictions..." node="capz-2vuj7s-md-0-7jqkw"
I0903 09:40:30.978872       1 node_lifecycle_controller.go:938] Node capz-2vuj7s-md-0-7jqkw is healthy again, removing all taints
... skipping 28 lines ...
I0903 09:40:38.039446       1 deployment_util.go:775] Deployment "csi-azurefile-controller" timed out (false) [last progress check: 2022-09-03 09:40:38.019249117 +0000 UTC m=+206.593018481 - now: 2022-09-03 09:40:38.039414477 +0000 UTC m=+206.613183841]
I0903 09:40:38.047043       1 controller_utils.go:581] Controller csi-azurefile-controller-7847f46f86 created pod csi-azurefile-controller-7847f46f86-f86v4
I0903 09:40:38.048156       1 event.go:294] "Event occurred" object="kube-system/csi-azurefile-controller-7847f46f86" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: csi-azurefile-controller-7847f46f86-f86v4"
I0903 09:40:38.049904       1 disruption.go:479] addPod called on pod "csi-azurefile-controller-7847f46f86-f86v4"
I0903 09:40:38.052686       1 disruption.go:570] No PodDisruptionBudgets found for pod csi-azurefile-controller-7847f46f86-f86v4, PodDisruptionBudget controller will avoid syncing.
I0903 09:40:38.053144       1 disruption.go:482] No matching pdb for pod "csi-azurefile-controller-7847f46f86-f86v4"
I0903 09:40:38.053538       1 replica_set.go:394] Pod csi-azurefile-controller-7847f46f86-f86v4 created: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"csi-azurefile-controller-7847f46f86-f86v4", GenerateName:"csi-azurefile-controller-7847f46f86-", Namespace:"kube-system", SelfLink:"", UID:"5c90b0b7-7d56-4450-8810-7fc30f17502e", ResourceVersion:"1004", Generation:0, CreationTimestamp:time.Date(2022, time.September, 3, 9, 40, 38, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"csi-azurefile-controller", "pod-template-hash":"7847f46f86"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"apps/v1", Kind:"ReplicaSet", Name:"csi-azurefile-controller-7847f46f86", UID:"ea1f4865-742c-4ea5-812d-df72da12d355", Controller:(*bool)(0xc001ee799e), BlockOwnerDeletion:(*bool)(0xc001ee799f)}}, Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.September, 3, 9, 40, 38, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001ecd698), Subresource:""}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"socket-dir", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(0xc001ecd6f8), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), 
Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"azure-cred", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001ecd710), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), 
Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"kube-api-access-lv6df", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(0xc000efd500), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"csi-provisioner", Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-provisioner:v3.2.0", Command:[]string(nil), Args:[]string{"-v=2", "--csi-address=$(ADDRESS)", "--leader-election", "--leader-election-namespace=kube-system", "--timeout=300s", "--extra-create-metadata=true"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, 
Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:524288000, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-lv6df", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"csi-attacher", Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-attacher:v3.5.0", Command:[]string(nil), Args:[]string{"-v=2", "-csi-address=$(ADDRESS)", "-timeout=120s", "--leader-election", "--leader-election-namespace=kube-system"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:524288000, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500Mi", Format:"BinarySI"}}, 
Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-lv6df", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"csi-snapshotter", Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-snapshotter:v5.0.1", Command:[]string(nil), Args:[]string{"-v=2", "-csi-address=$(ADDRESS)", "--leader-election", "--leader-election-namespace=kube-system"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:104857600, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", 
Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-lv6df", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"csi-resizer", Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-resizer:v1.5.0", Command:[]string(nil), Args:[]string{"-csi-address=$(ADDRESS)", "-v=2", "--leader-election", "--leader-election-namespace=kube-system", "-handle-volume-inuse-error=false", "-feature-gates=RecoverVolumeExpansionFailure=true", "-timeout=120s"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:524288000, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, 
v1.VolumeMount{Name:"kube-api-access-lv6df", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"liveness-probe", Image:"mcr.microsoft.com/oss/kubernetes-csi/livenessprobe:v2.7.0", Command:[]string(nil), Args:[]string{"--csi-address=/csi/csi.sock", "--probe-timeout=3s", "--health-port=29612", "--v=2"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:104857600, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-lv6df", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), 
TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"azurefile", Image:"mcr.microsoft.com/k8s/csi/azurefile-csi:latest", Command:[]string(nil), Args:[]string{"--v=5", "--endpoint=$(CSI_ENDPOINT)", "--metrics-address=0.0.0.0:29614", "--user-agent-suffix=OSS-kubectl"}, WorkingDir:"", Ports:[]v1.ContainerPort{v1.ContainerPort{Name:"healthz", HostPort:29612, ContainerPort:29612, Protocol:"TCP", HostIP:""}, v1.ContainerPort{Name:"metrics", HostPort:29614, ContainerPort:29614, Protocol:"TCP", HostIP:""}}, EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"AZURE_CREDENTIAL_FILE", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc000efd620)}, v1.EnvVar{Name:"CSI_ENDPOINT", Value:"unix:///csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:209715200, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"azure-cred", ReadOnly:false, MountPath:"/etc/kubernetes/", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-lv6df", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, 
VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(0xc002510b00), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001ee7de0), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"csi-azurefile-controller-sa", DeprecatedServiceAccount:"csi-azurefile-controller-sa", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc00036a460), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node-role.kubernetes.io/master", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node-role.kubernetes.io/controlplane", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node-role.kubernetes.io/control-plane", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001ee7e50)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001ee7e70)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-cluster-critical", Priority:(*int32)(0xc001ee7e78), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), 
RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc001ee7e7c), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc0025ed4d0), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil), OS:(*v1.PodOS)(nil), HostUsers:(*bool)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition(nil), Message:"", Reason:"", NominatedNodeName:"", HostIP:"", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:<nil>, InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus(nil), QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}.
I0903 09:40:38.055638       1 controller_utils.go:240] Lowered expectations &controller.ControlleeExpectations{add:1, del:0, key:"kube-system/csi-azurefile-controller-7847f46f86", timestamp:time.Time{wall:0xc0bce6258189d68c, ext:206599579936, loc:(*time.Location)(0x6f10040)}}
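The "Lowered expectations" message above comes from the ReplicaSet controller's ControlleeExpectations bookkeeping: after issuing N pod creations it records `add:N`, decrements the count as the informer observes each created pod, and only re-syncs once the count reaches zero, so a stale cache read cannot trigger duplicate creates. The sketch below is a simplified illustration of that idea, not the actual `controller_utils.go` code; the `expectations` type and method names are invented for the example.

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// expectations mirrors the idea behind controller.ControlleeExpectations seen
// in the log: a count of creates/deletes the controller has issued but not yet
// observed back through its informer cache.
type expectations struct {
	add int64
	del int64
}

// expectCreations records that n pod creations were just issued.
func (e *expectations) expectCreations(n int64) { atomic.AddInt64(&e.add, n) }

// creationObserved is called when the informer sees a created pod; this is
// the "Lowered expectations" step in the log.
func (e *expectations) creationObserved() { atomic.AddInt64(&e.add, -1) }

// fulfilled reports whether the controller may trust its cache and re-sync
// (the "Controller expectations fulfilled" message later in the log).
func (e *expectations) fulfilled() bool {
	return atomic.LoadInt64(&e.add) <= 0 && atomic.LoadInt64(&e.del) <= 0
}

func main() {
	var e expectations
	e.expectCreations(2) // controller decides the ReplicaSet needs 2 pods
	fmt.Println("fulfilled before pods appear:", e.fulfilled())
	e.creationObserved() // informer sees pod 1
	e.creationObserved() // informer sees pod 2
	fmt.Println("fulfilled after both observed:", e.fulfilled())
}
```

In the log this shows up as `add:1` being lowered to `add:0` as each `csi-azurefile-controller` pod creation is observed, after which the "expectations fulfilled" sync proceeds.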
I0903 09:40:38.056092       1 taint_manager.go:431] "Noticed pod update" pod="kube-system/csi-azurefile-controller-7847f46f86-f86v4"
I0903 09:40:38.056439       1 pvc_protection_controller.go:380] "Enqueuing PVCs for Pod" pod="kube-system/csi-azurefile-controller-7847f46f86-f86v4" podUID=5c90b0b7-7d56-4450-8810-7fc30f17502e
I0903 09:40:38.056836       1 deployment_controller.go:585] "Finished syncing deployment" deployment="kube-system/csi-azurefile-controller" duration="48.434162ms"
I0903 09:40:38.057670       1 deployment_controller.go:497] "Error syncing deployment" deployment="kube-system/csi-azurefile-controller" err="Operation cannot be fulfilled on deployments.apps \"csi-azurefile-controller\": the object has been modified; please apply your changes to the latest version and try again"
I0903 09:40:38.058050       1 deployment_controller.go:583] "Started syncing deployment" deployment="kube-system/csi-azurefile-controller" startTime="2022-09-03 09:40:38.057946948 +0000 UTC m=+206.631716212"
I0903 09:40:38.064524       1 deployment_util.go:775] Deployment "csi-azurefile-controller" timed out (false) [last progress check: 2022-09-03 09:40:38 +0000 UTC - now: 2022-09-03 09:40:38.064455302 +0000 UTC m=+206.638224566]
I0903 09:40:38.066480       1 disruption.go:479] addPod called on pod "csi-azurefile-controller-7847f46f86-nqcms"
I0903 09:40:38.069172       1 disruption.go:570] No PodDisruptionBudgets found for pod csi-azurefile-controller-7847f46f86-nqcms, PodDisruptionBudget controller will avoid syncing.
I0903 09:40:38.069351       1 disruption.go:482] No matching pdb for pod "csi-azurefile-controller-7847f46f86-nqcms"
I0903 09:40:38.069508       1 pvc_protection_controller.go:380] "Enqueuing PVCs for Pod" pod="kube-system/csi-azurefile-controller-7847f46f86-nqcms" podUID=26b370e0-0c4c-49a3-ae0a-fac93f05644c
I0903 09:40:38.069820       1 replica_set.go:394] Pod csi-azurefile-controller-7847f46f86-nqcms created: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"csi-azurefile-controller-7847f46f86-nqcms", GenerateName:"csi-azurefile-controller-7847f46f86-", Namespace:"kube-system", SelfLink:"", UID:"26b370e0-0c4c-49a3-ae0a-fac93f05644c", ResourceVersion:"1006", Generation:0, CreationTimestamp:time.Date(2022, time.September, 3, 9, 40, 38, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"csi-azurefile-controller", "pod-template-hash":"7847f46f86"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"apps/v1", Kind:"ReplicaSet", Name:"csi-azurefile-controller-7847f46f86", UID:"ea1f4865-742c-4ea5-812d-df72da12d355", Controller:(*bool)(0xc001ee7f9e), BlockOwnerDeletion:(*bool)(0xc001ee7f9f)}}, Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.September, 3, 9, 40, 38, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001ecdcb0), Subresource:""}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"socket-dir", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(0xc001ecdcc8), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), 
Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"azure-cred", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001ecdce0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), 
Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"kube-api-access-6m5px", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(0xc000efd740), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"csi-provisioner", Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-provisioner:v3.2.0", Command:[]string(nil), Args:[]string{"-v=2", "--csi-address=$(ADDRESS)", "--leader-election", "--leader-election-namespace=kube-system", "--timeout=300s", "--extra-create-metadata=true"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, 
Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:524288000, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-6m5px", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"csi-attacher", Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-attacher:v3.5.0", Command:[]string(nil), Args:[]string{"-v=2", "-csi-address=$(ADDRESS)", "-timeout=120s", "--leader-election", "--leader-election-namespace=kube-system"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:524288000, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500Mi", Format:"BinarySI"}}, 
Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-6m5px", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"csi-snapshotter", Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-snapshotter:v5.0.1", Command:[]string(nil), Args:[]string{"-v=2", "-csi-address=$(ADDRESS)", "--leader-election", "--leader-election-namespace=kube-system"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:104857600, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", 
Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-6m5px", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"csi-resizer", Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-resizer:v1.5.0", Command:[]string(nil), Args:[]string{"-csi-address=$(ADDRESS)", "-v=2", "--leader-election", "--leader-election-namespace=kube-system", "-handle-volume-inuse-error=false", "-feature-gates=RecoverVolumeExpansionFailure=true", "-timeout=120s"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:524288000, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, 
v1.VolumeMount{Name:"kube-api-access-6m5px", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"liveness-probe", Image:"mcr.microsoft.com/oss/kubernetes-csi/livenessprobe:v2.7.0", Command:[]string(nil), Args:[]string{"--csi-address=/csi/csi.sock", "--probe-timeout=3s", "--health-port=29612", "--v=2"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:104857600, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-6m5px", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), 
TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"azurefile", Image:"mcr.microsoft.com/k8s/csi/azurefile-csi:latest", Command:[]string(nil), Args:[]string{"--v=5", "--endpoint=$(CSI_ENDPOINT)", "--metrics-address=0.0.0.0:29614", "--user-agent-suffix=OSS-kubectl"}, WorkingDir:"", Ports:[]v1.ContainerPort{v1.ContainerPort{Name:"healthz", HostPort:29612, ContainerPort:29612, Protocol:"TCP", HostIP:""}, v1.ContainerPort{Name:"metrics", HostPort:29614, ContainerPort:29614, Protocol:"TCP", HostIP:""}}, EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"AZURE_CREDENTIAL_FILE", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc000efd860)}, v1.EnvVar{Name:"CSI_ENDPOINT", Value:"unix:///csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:209715200, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"azure-cred", ReadOnly:false, MountPath:"/etc/kubernetes/", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-6m5px", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, 
VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(0xc002511380), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0023d03d0), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"csi-azurefile-controller-sa", DeprecatedServiceAccount:"csi-azurefile-controller-sa", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc00036aaf0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node-role.kubernetes.io/master", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node-role.kubernetes.io/controlplane", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node-role.kubernetes.io/control-plane", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0023d0460)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0023d0480)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-cluster-critical", Priority:(*int32)(0xc0023d0488), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), 
RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0023d048c), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc0025ed6f0), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil), OS:(*v1.PodOS)(nil), HostUsers:(*bool)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition(nil), Message:"", Reason:"", NominatedNodeName:"", HostIP:"", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:<nil>, InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus(nil), QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}.
I0903 09:40:38.071211       1 controller_utils.go:240] Lowered expectations &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/csi-azurefile-controller-7847f46f86", timestamp:time.Time{wall:0xc0bce6258189d68c, ext:206599579936, loc:(*time.Location)(0x6f10040)}}
I0903 09:40:38.071538       1 taint_manager.go:431] "Noticed pod update" pod="kube-system/csi-azurefile-controller-7847f46f86-nqcms"
I0903 09:40:38.072905       1 controller_utils.go:581] Controller csi-azurefile-controller-7847f46f86 created pod csi-azurefile-controller-7847f46f86-nqcms
I0903 09:40:38.073804       1 event.go:294] "Event occurred" object="kube-system/csi-azurefile-controller-7847f46f86" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: csi-azurefile-controller-7847f46f86-nqcms"
I0903 09:40:38.075280       1 replica_set_utils.go:59] Updating status for : kube-system/csi-azurefile-controller-7847f46f86, replicas 0->0 (need 2), fullyLabeledReplicas 0->0, readyReplicas 0->0, availableReplicas 0->0, sequence No: 0->1
I0903 09:40:38.086651       1 disruption.go:494] updatePod called on pod "csi-azurefile-controller-7847f46f86-f86v4"
... skipping 232 lines ...
I0903 09:40:46.912597       1 replica_set_utils.go:59] Updating status for : kube-system/csi-snapshot-controller-84ccd6c756, replicas 0->2 (need 2), fullyLabeledReplicas 0->2, readyReplicas 0->0, availableReplicas 0->0, sequence No: 1->1
I0903 09:40:46.930810       1 deployment_controller.go:288] "ReplicaSet updated" replicaSet="kube-system/csi-snapshot-controller-84ccd6c756"
I0903 09:40:46.931137       1 replica_set.go:667] Finished syncing ReplicaSet "kube-system/csi-snapshot-controller-84ccd6c756" (20.03337ms)
I0903 09:40:46.931312       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/csi-snapshot-controller-84ccd6c756", timestamp:time.Time{wall:0xc0bce627b2b28788, ext:215424330268, loc:(*time.Location)(0x6f10040)}}
I0903 09:40:46.931445       1 replica_set.go:667] Finished syncing ReplicaSet "kube-system/csi-snapshot-controller-84ccd6c756" (142.301µs)
I0903 09:40:46.934922       1 deployment_controller.go:585] "Finished syncing deployment" deployment="kube-system/csi-snapshot-controller" duration="95.675634ms"
I0903 09:40:46.934986       1 deployment_controller.go:497] "Error syncing deployment" deployment="kube-system/csi-snapshot-controller" err="Operation cannot be fulfilled on deployments.apps \"csi-snapshot-controller\": the object has been modified; please apply your changes to the latest version and try again"
I0903 09:40:46.935051       1 deployment_controller.go:583] "Started syncing deployment" deployment="kube-system/csi-snapshot-controller" startTime="2022-09-03 09:40:46.935023399 +0000 UTC m=+215.508792763"
I0903 09:40:46.951102       1 disruption.go:494] updatePod called on pod "csi-snapshot-controller-84ccd6c756-8gljg"
I0903 09:40:46.951232       1 disruption.go:570] No PodDisruptionBudgets found for pod csi-snapshot-controller-84ccd6c756-8gljg, PodDisruptionBudget controller will avoid syncing.
I0903 09:40:46.951243       1 disruption.go:497] No matching pdb for pod "csi-snapshot-controller-84ccd6c756-8gljg"
I0903 09:40:46.951301       1 replica_set.go:457] Pod csi-snapshot-controller-84ccd6c756-8gljg updated, objectMeta {Name:csi-snapshot-controller-84ccd6c756-8gljg GenerateName:csi-snapshot-controller-84ccd6c756- Namespace:kube-system SelfLink: UID:ab9fbf4a-27fc-47b9-84a8-ee97e6604dd1 ResourceVersion:1118 Generation:0 CreationTimestamp:2022-09-03 09:40:46 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[app:csi-snapshot-controller pod-template-hash:84ccd6c756] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:csi-snapshot-controller-84ccd6c756 UID:1c7af77c-932f-4488-a86c-d3f8ddcd87b4 Controller:0xc00228bde7 BlockOwnerDeletion:0xc00228bde8}] Finalizers:[] ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-09-03 09:40:46 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1c7af77c-932f-4488-a86c-d3f8ddcd87b4\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"csi-snapshot-controller\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}} Subresource:}]} -> {Name:csi-snapshot-controller-84ccd6c756-8gljg GenerateName:csi-snapshot-controller-84ccd6c756- Namespace:kube-system SelfLink: UID:ab9fbf4a-27fc-47b9-84a8-ee97e6604dd1 ResourceVersion:1128 Generation:0 CreationTimestamp:2022-09-03 09:40:46 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[app:csi-snapshot-controller pod-template-hash:84ccd6c756] 
Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:csi-snapshot-controller-84ccd6c756 UID:1c7af77c-932f-4488-a86c-d3f8ddcd87b4 Controller:0xc00256d3de BlockOwnerDeletion:0xc00256d3df}] Finalizers:[] ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-09-03 09:40:46 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1c7af77c-932f-4488-a86c-d3f8ddcd87b4\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"csi-snapshot-controller\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}} Subresource:} {Manager:kubelet Operation:Update APIVersion:v1 Time:2022-09-03 09:40:46 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} Subresource:status}]}.
I0903 09:40:46.951561       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/csi-snapshot-controller-84ccd6c756", timestamp:time.Time{wall:0xc0bce627b2b28788, ext:215424330268, loc:(*time.Location)(0x6f10040)}}
... skipping 1481 lines ...
I0903 09:45:53.946618       1 pvc_protection_controller.go:149] "Processing PVC" PVC="azurefile-5356/pvc-pr8gv"
I0903 09:45:53.943880       1 replica_set.go:394] Pod azurefile-volume-tester-xrkv7-dd96cff7c-rkbn4 created: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"azurefile-volume-tester-xrkv7-dd96cff7c-rkbn4", GenerateName:"azurefile-volume-tester-xrkv7-dd96cff7c-", Namespace:"azurefile-5356", SelfLink:"", UID:"60c445e2-9cfb-41e3-b9ad-2adca8d0ac92", ResourceVersion:"2183", Generation:0, CreationTimestamp:time.Date(2022, time.September, 3, 9, 45, 53, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"azurefile-volume-tester-5018949295715050020", "pod-template-hash":"dd96cff7c"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"apps/v1", Kind:"ReplicaSet", Name:"azurefile-volume-tester-xrkv7-dd96cff7c", UID:"612cfdfc-0a4c-4fa7-b8b9-d8d6e8fb8647", Controller:(*bool)(0xc002793fa0), BlockOwnerDeletion:(*bool)(0xc002793fa1)}}, Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.September, 3, 9, 45, 53, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00281f728), Subresource:""}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"test-volume-1", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(0xc00281f740), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), 
CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"kube-api-access-g9j5c", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(0xc0028de7a0), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), 
CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"volume-tester", Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-2", Command:[]string{"/bin/sh"}, Args:[]string{"-c", "echo 'hello world' >> /mnt/test-1/data && while true; do sleep 100; done"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"test-volume-1", ReadOnly:false, MountPath:"/mnt/test-1", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-g9j5c", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002836078), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0007930a0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", 
Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0028360b0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0028360d0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0028360d8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0028360dc), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc0020e6240), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil), OS:(*v1.PodOS)(nil), HostUsers:(*bool)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition(nil), Message:"", Reason:"", NominatedNodeName:"", HostIP:"", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:<nil>, InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus(nil), QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}.
I0903 09:45:53.946688       1 controller_utils.go:240] Lowered expectations &controller.ControlleeExpectations{add:0, del:0, key:"azurefile-5356/azurefile-volume-tester-xrkv7-dd96cff7c", timestamp:time.Time{wall:0xc0bce67477600b7b, ext:522502810639, loc:(*time.Location)(0x6f10040)}}
I0903 09:45:53.944330       1 taint_manager.go:431] "Noticed pod update" pod="azurefile-5356/azurefile-volume-tester-xrkv7-dd96cff7c-rkbn4"
I0903 09:45:53.946887       1 pvc_protection_controller.go:152] "Finished processing PVC" PVC="azurefile-5356/pvc-pr8gv" duration="174.902µs"
I0903 09:45:53.950040       1 deployment_controller.go:585] "Finished syncing deployment" deployment="azurefile-5356/azurefile-volume-tester-xrkv7" duration="32.780563ms"
I0903 09:45:53.950092       1 deployment_controller.go:497] "Error syncing deployment" deployment="azurefile-5356/azurefile-volume-tester-xrkv7" err="Operation cannot be fulfilled on deployments.apps \"azurefile-volume-tester-xrkv7\": the object has been modified; please apply your changes to the latest version and try again"
I0903 09:45:53.950132       1 deployment_controller.go:583] "Started syncing deployment" deployment="azurefile-5356/azurefile-volume-tester-xrkv7" startTime="2022-09-03 09:45:53.950114344 +0000 UTC m=+522.523883608"
I0903 09:45:53.951084       1 deployment_util.go:775] Deployment "azurefile-volume-tester-xrkv7" timed out (false) [last progress check: 2022-09-03 09:45:53 +0000 UTC - now: 2022-09-03 09:45:53.951074252 +0000 UTC m=+522.524843516]
I0903 09:45:53.954013       1 deployment_controller.go:288] "ReplicaSet updated" replicaSet="azurefile-5356/azurefile-volume-tester-xrkv7-dd96cff7c"
I0903 09:45:53.957587       1 replica_set.go:667] Finished syncing ReplicaSet "azurefile-5356/azurefile-volume-tester-xrkv7-dd96cff7c" (28.930232ms)
I0903 09:45:53.957637       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"azurefile-5356/azurefile-volume-tester-xrkv7-dd96cff7c", timestamp:time.Time{wall:0xc0bce67477600b7b, ext:522502810639, loc:(*time.Location)(0x6f10040)}}
I0903 09:45:53.957936       1 replica_set_utils.go:59] Updating status for : azurefile-5356/azurefile-volume-tester-xrkv7-dd96cff7c, replicas 0->1 (need 1), fullyLabeledReplicas 0->1, readyReplicas 0->0, availableReplicas 0->0, sequence No: 1->1
... skipping 127 lines ...
I0903 09:46:01.178797       1 disruption.go:497] No matching pdb for pod "azurefile-volume-tester-xrkv7-dd96cff7c-jgpmh"
I0903 09:46:01.178545       1 replica_set.go:457] Pod azurefile-volume-tester-xrkv7-dd96cff7c-jgpmh updated, objectMeta {Name:azurefile-volume-tester-xrkv7-dd96cff7c-jgpmh GenerateName:azurefile-volume-tester-xrkv7-dd96cff7c- Namespace:azurefile-5356 SelfLink: UID:e2945949-a058-4200-8ef8-9c8702c35d40 ResourceVersion:2233 Generation:0 CreationTimestamp:2022-09-03 09:46:01 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[app:azurefile-volume-tester-5018949295715050020 pod-template-hash:dd96cff7c] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:azurefile-volume-tester-xrkv7-dd96cff7c UID:612cfdfc-0a4c-4fa7-b8b9-d8d6e8fb8647 Controller:0xc00249cd97 BlockOwnerDeletion:0xc00249cd98}] Finalizers:[] ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-09-03 09:46:01 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"612cfdfc-0a4c-4fa7-b8b9-d8d6e8fb8647\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"volume-tester\"}":{".":{},"f:args":{},"f:command":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/mnt/test-1\"}":{".":{},"f:mountPath":{},"f:name":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:volumes":{".":{},"k:{\"name\":\"test-volume-1\"}":{".":{},"f:name":{},"f:persistentVolumeClaim":{".":{},"f:claimName":{}}}}}} Subresource:}]} -> {Name:azurefile-volume-tester-xrkv7-dd96cff7c-jgpmh GenerateName:azurefile-volume-tester-xrkv7-dd96cff7c- Namespace:azurefile-5356 SelfLink: UID:e2945949-a058-4200-8ef8-9c8702c35d40 ResourceVersion:2239 Generation:0 CreationTimestamp:2022-09-03 09:46:01 +0000 UTC DeletionTimestamp:<nil> 
DeletionGracePeriodSeconds:<nil> Labels:map[app:azurefile-volume-tester-5018949295715050020 pod-template-hash:dd96cff7c] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:azurefile-volume-tester-xrkv7-dd96cff7c UID:612cfdfc-0a4c-4fa7-b8b9-d8d6e8fb8647 Controller:0xc0027d2790 BlockOwnerDeletion:0xc0027d2791}] Finalizers:[] ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-09-03 09:46:01 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"612cfdfc-0a4c-4fa7-b8b9-d8d6e8fb8647\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"volume-tester\"}":{".":{},"f:args":{},"f:command":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/mnt/test-1\"}":{".":{},"f:mountPath":{},"f:name":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:volumes":{".":{},"k:{\"name\":\"test-volume-1\"}":{".":{},"f:name":{},"f:persistentVolumeClaim":{".":{},"f:claimName":{}}}}}} Subresource:} {Manager:kubelet Operation:Update APIVersion:v1 Time:2022-09-03 09:46:01 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} Subresource:status}]}.
I0903 09:46:01.179559       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"azurefile-5356/azurefile-volume-tester-xrkv7-dd96cff7c", timestamp:time.Time{wall:0xc0bce6764410fa0c, ext:529641990716, loc:(*time.Location)(0x6f10040)}}
I0903 09:46:01.179716       1 controller_utils.go:938] Ignoring inactive pod azurefile-5356/azurefile-volume-tester-xrkv7-dd96cff7c-rkbn4 in state Running, deletion time 2022-09-03 09:46:31 +0000 UTC
I0903 09:46:01.179844       1 replica_set.go:667] Finished syncing ReplicaSet "azurefile-5356/azurefile-volume-tester-xrkv7-dd96cff7c" (292.502µs)
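The "Ignoring inactive pod ... in state Running, deletion time ..." line shows why a replacement pod is created before the old one is gone: a pod with a non-nil deletion timestamp no longer counts toward the ReplicaSet's replica count even while its phase is still Running. A small sketch of that filter (field names simplified from the real PodSpec/PodStatus):

```python
from dataclasses import dataclass
from typing import Optional

# Sketch of the replica-set controller's active-pod filter: terminal pods
# (Succeeded/Failed) and pods already marked for deletion are ignored.

@dataclass
class Pod:
    name: str
    phase: str
    deletion_timestamp: Optional[str] = None   # non-nil once deletion starts

def is_active(pod: Pod) -> bool:
    return (pod.phase not in ("Succeeded", "Failed")
            and pod.deletion_timestamp is None)

def active_pods(pods):
    """Pods that still count toward a ReplicaSet's observed replicas."""
    return [p for p in pods if is_active(p)]
```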
I0903 09:46:01.183362       1 deployment_controller.go:585] "Finished syncing deployment" deployment="azurefile-5356/azurefile-volume-tester-xrkv7" duration="13.964686ms"
I0903 09:46:01.183611       1 deployment_controller.go:497] "Error syncing deployment" deployment="azurefile-5356/azurefile-volume-tester-xrkv7" err="Operation cannot be fulfilled on deployments.apps \"azurefile-volume-tester-xrkv7\": the object has been modified; please apply your changes to the latest version and try again"
I0903 09:46:01.183982       1 deployment_controller.go:583] "Started syncing deployment" deployment="azurefile-5356/azurefile-volume-tester-xrkv7" startTime="2022-09-03 09:46:01.183960457 +0000 UTC m=+529.757729721"
I0903 09:46:01.223748       1 deployment_controller.go:183] "Updating deployment" deployment="azurefile-5356/azurefile-volume-tester-xrkv7"
I0903 09:46:01.224400       1 deployment_controller.go:585] "Finished syncing deployment" deployment="azurefile-5356/azurefile-volume-tester-xrkv7" duration="40.426147ms"
I0903 09:46:01.224614       1 deployment_controller.go:583] "Started syncing deployment" deployment="azurefile-5356/azurefile-volume-tester-xrkv7" startTime="2022-09-03 09:46:01.224435104 +0000 UTC m=+529.798204468"
I0903 09:46:01.225288       1 progress.go:195] Queueing up deployment "azurefile-volume-tester-xrkv7" for a progress check after 594s
I0903 09:46:01.225519       1 deployment_controller.go:585] "Finished syncing deployment" deployment="azurefile-5356/azurefile-volume-tester-xrkv7" duration="933.406µs"
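The "timed out (false)" and "Queueing up deployment ... for a progress check after 594s" lines are the deployment progress-deadline check: a deployment is only marked failed when no progress has been recorded for `progressDeadlineSeconds` (600s by default), and otherwise the controller requeues a re-check for the time remaining; here roughly 6s had elapsed, leaving 594s. A sketch of that arithmetic:

```python
from datetime import datetime, timedelta

# Hedged sketch of the progress-deadline logic logged above. The real
# controller works from the Progressing condition's lastUpdateTime; this
# just shows the timeout/requeue arithmetic.

def progress_check(last_progress: datetime, now: datetime,
                   deadline_seconds: int = 600):
    """Return (timed_out, requeue_after) for a deployment progress check."""
    elapsed = now - last_progress
    deadline = timedelta(seconds=deadline_seconds)
    if elapsed >= deadline:
        return True, None                # mark ProgressDeadlineExceeded
    return False, deadline - elapsed     # re-check once the deadline could hit
```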
... skipping 1177 lines ...
I0903 09:49:08.425798       1 namespace_controller.go:180] Finished syncing namespace "azurefile-8666" (54.5µs)
2022/09/03 09:49:09 ===================================================

JUnit report was created: /logs/artifacts/junit_01.xml

Ran 6 of 34 Specs in 363.873 seconds
SUCCESS! -- 6 Passed | 0 Failed | 0 Pending | 28 Skipped
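The summary above is also written to `/logs/artifacts/junit_01.xml`, which is how Prow/Spyglass renders the "0 failed / 6 succeeded" result. A sketch of recovering those counts from a JUnit report with the standard library; the attribute names follow the common JUnit schema, and the exact layout of Ginkgo's report is an assumption here:

```python
import xml.etree.ElementTree as ET

# Illustrative testsuite element shaped like the run above:
# 34 specs, 28 skipped, 0 failures -> 6 ran and passed.
SAMPLE = """<testsuite name="e2e" tests="34" failures="0" skipped="28"
                       time="363.873">
  <testcase name="dynamic provisioning" time="42.0"/>
</testsuite>"""

def summarize(xml_text: str) -> dict:
    """Derive ran/failed/skipped counts from a JUnit <testsuite> element."""
    suite = ET.fromstring(xml_text)
    tests = int(suite.get("tests", "0"))
    failures = int(suite.get("failures", "0"))
    skipped = int(suite.get("skipped", "0"))
    return {"ran": tests - skipped, "failed": failures, "skipped": skipped}
```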

You're using deprecated Ginkgo functionality:
=============================================
Ginkgo 2.0 is under active development and will introduce several new features, improvements, and a small handful of breaking changes.
A release candidate for 2.0 is now available and 2.0 should GA in Fall 2021.  Please give the RC a try and send us feedback!
  - To learn more, view the migration guide at https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md
... skipping 44 lines ...
INFO: Creating log watcher for controller capz-system/capz-controller-manager, pod capz-controller-manager-858df9cd95-chf7b, container manager
STEP: Dumping workload cluster default/capz-2vuj7s logs
Sep  3 09:50:40.872: INFO: Collecting logs for Linux node capz-2vuj7s-control-plane-f2tt2 in cluster capz-2vuj7s in namespace default

Sep  3 09:51:40.874: INFO: Collecting boot logs for AzureMachine capz-2vuj7s-control-plane-f2tt2

Failed to get logs for machine capz-2vuj7s-control-plane-tp29m, cluster default/capz-2vuj7s: open /etc/azure-ssh/azure-ssh: no such file or directory
Sep  3 09:51:42.194: INFO: Collecting logs for Linux node capz-2vuj7s-md-0-7jqkw in cluster capz-2vuj7s in namespace default

Sep  3 09:52:42.197: INFO: Collecting boot logs for AzureMachine capz-2vuj7s-md-0-7jqkw

Failed to get logs for machine capz-2vuj7s-md-0-776d6dc6fc-52zjj, cluster default/capz-2vuj7s: open /etc/azure-ssh/azure-ssh: no such file or directory
Sep  3 09:52:42.734: INFO: Collecting logs for Linux node capz-2vuj7s-md-0-pdqzz in cluster capz-2vuj7s in namespace default

Sep  3 09:53:42.735: INFO: Collecting boot logs for AzureMachine capz-2vuj7s-md-0-pdqzz

Failed to get logs for machine capz-2vuj7s-md-0-776d6dc6fc-97tsp, cluster default/capz-2vuj7s: open /etc/azure-ssh/azure-ssh: no such file or directory
STEP: Dumping workload cluster default/capz-2vuj7s kube-system pod logs
STEP: Collecting events for Pod kube-system/calico-node-x6ngm
STEP: Creating log watcher for controller kube-system/calico-node-v5plh, container calico-node
STEP: Fetching kube-system pod logs took 1.100483544s
STEP: Dumping workload cluster default/capz-2vuj7s Azure activity log
STEP: Collecting events for Pod kube-system/calico-node-v5plh
STEP: Creating log watcher for controller kube-system/calico-node-x6ngm, container calico-node
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-2vuj7s-control-plane-f2tt2, container kube-apiserver
STEP: Collecting events for Pod kube-system/csi-azurefile-node-tdc8g
STEP: Creating log watcher for controller kube-system/csi-azurefile-node-pjj26, container azurefile
STEP: Collecting events for Pod kube-system/kube-apiserver-capz-2vuj7s-control-plane-f2tt2
STEP: failed to find events of Pod "kube-apiserver-capz-2vuj7s-control-plane-f2tt2"
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-2vuj7s-control-plane-f2tt2, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/kube-proxy-wwwqm, container kube-proxy
STEP: Collecting events for Pod kube-system/calico-kube-controllers-755ff8d7b5-r5bvx
STEP: Collecting events for Pod kube-system/kube-proxy-85tfk
STEP: Creating log watcher for controller kube-system/csi-snapshot-controller-84ccd6c756-8gljg, container csi-snapshot-controller
STEP: Collecting events for Pod kube-system/kube-controller-manager-capz-2vuj7s-control-plane-f2tt2
STEP: failed to find events of Pod "kube-controller-manager-capz-2vuj7s-control-plane-f2tt2"
STEP: Creating log watcher for controller kube-system/kube-proxy-85tfk, container kube-proxy
STEP: Collecting events for Pod kube-system/csi-azurefile-node-pjj26
STEP: Creating log watcher for controller kube-system/csi-azurefile-node-tdc8g, container liveness-probe
STEP: Collecting events for Pod kube-system/kube-proxy-wwwqm
STEP: Creating log watcher for controller kube-system/kube-proxy-tsr5l, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-2vuj7s-control-plane-f2tt2, container kube-scheduler
... skipping 3 lines ...
STEP: Creating log watcher for controller kube-system/csi-azurefile-node-tdc8g, container node-driver-registrar
STEP: Creating log watcher for controller kube-system/csi-snapshot-controller-84ccd6c756-qbsqr, container csi-snapshot-controller
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-755ff8d7b5-r5bvx, container calico-kube-controllers
STEP: Creating log watcher for controller kube-system/csi-azurefile-node-pjj26, container node-driver-registrar
STEP: Collecting events for Pod kube-system/kube-scheduler-capz-2vuj7s-control-plane-f2tt2
STEP: Collecting events for Pod kube-system/csi-azurefile-controller-7847f46f86-f86v4
STEP: failed to find events of Pod "kube-scheduler-capz-2vuj7s-control-plane-f2tt2"
STEP: Creating log watcher for controller kube-system/calico-node-zqsm4, container calico-node
STEP: Collecting events for Pod kube-system/calico-node-zqsm4
STEP: Creating log watcher for controller kube-system/coredns-84994b8c4-4d48k, container coredns
STEP: Collecting events for Pod kube-system/coredns-84994b8c4-4d48k
STEP: Creating log watcher for controller kube-system/coredns-84994b8c4-4hc6m, container coredns
STEP: Collecting events for Pod kube-system/coredns-84994b8c4-4hc6m
... skipping 17 lines ...
STEP: Collecting events for Pod kube-system/csi-azurefile-controller-7847f46f86-nqcms
STEP: Collecting events for Pod kube-system/etcd-capz-2vuj7s-control-plane-f2tt2
STEP: Collecting events for Pod kube-system/csi-azurefile-node-j6qtd
STEP: Creating log watcher for controller kube-system/csi-azurefile-node-tdc8g, container azurefile
STEP: Creating log watcher for controller kube-system/csi-azurefile-node-pjj26, container liveness-probe
STEP: Creating log watcher for controller kube-system/csi-azurefile-node-j6qtd, container azurefile
STEP: failed to find events of Pod "etcd-capz-2vuj7s-control-plane-f2tt2"
STEP: Fetching activity logs took 4.304007863s
================ REDACTING LOGS ================
All sensitive variables are redacted
cluster.cluster.x-k8s.io "capz-2vuj7s" deleted
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/kind-v0.14.0 delete cluster --name=capz || true
Deleting cluster "capz" ...
... skipping 12 lines ...