Result: success
Tests: 0 failed / 6 succeeded
Started: 2022-09-02 23:43
Elapsed: 36m4s
Revision:
Uploader: crier

No Test Failures!


6 passed tests, 28 skipped tests

Error lines from build-log.txt

... skipping 704 lines ...
certificate.cert-manager.io "selfsigned-cert" deleted
# Create secret for AzureClusterIdentity
./hack/create-identity-secret.sh
make[2]: Entering directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make[2]: Nothing to be done for 'kubectl'.
make[2]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
Error from server (NotFound): secrets "cluster-identity-secret" not found
secret/cluster-identity-secret created
secret/cluster-identity-secret labeled
# Create customized cloud provider configs
./hack/create-custom-cloud-provider-config.sh
make[2]: Entering directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make[2]: Nothing to be done for 'kubectl'.
... skipping 134 lines ...
# Wait for the kubeconfig to become available.
timeout --foreground 300 bash -c "while ! /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/kubectl-v1.22.4 get secrets | grep capz-5ibqsb-kubeconfig; do sleep 1; done"
capz-5ibqsb-kubeconfig                 cluster.x-k8s.io/secret   1      0s
# Get kubeconfig and store it locally.
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/kubectl-v1.22.4 get secrets capz-5ibqsb-kubeconfig -o json | jq -r .data.value | base64 --decode > ./kubeconfig
timeout --foreground 600 bash -c "while ! /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/kubectl-v1.22.4 --kubeconfig=./kubeconfig get nodes | grep control-plane; do sleep 1; done"
error: the server doesn't have a resource type "nodes"
capz-5ibqsb-control-plane-r98ch   NotReady   control-plane   4s    v1.26.0-alpha.0.370+bacd6029b3bac1
run "/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/kubectl-v1.22.4 --kubeconfig=./kubeconfig ..." to work with the new target cluster
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
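The two `timeout --foreground … while ! …` invocations above implement a simple poll-until-ready loop: keep probing until the condition holds, or let `timeout` kill the loop. A minimal self-contained sketch of that pattern follows; the readiness check is a stand-in flag file, not a real `kubectl` call, and the file name is hypothetical.

```shell
# Sketch of the poll-until-ready pattern used above to wait for the
# kubeconfig secret and the control-plane node. The flag file stands in
# for the real check (`kubectl get secrets | grep ...`).
ready_flag=$(mktemp -u)                  # hypothetical flag path
( sleep 1; touch "$ready_flag" ) &       # something becomes ready asynchronously
timeout --foreground 10 bash -c \
  "while ! [ -f $ready_flag ]; do sleep 0.2; done" && echo "condition met"
rm -f "$ready_flag"
```

If the condition never holds, `timeout` exits non-zero after 10 seconds instead of hanging the job, which is why the Makefile wraps both waits this way.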
Waiting for 1 control plane machine(s), 2 worker machine(s), and  windows machine(s) to become Ready
node/capz-5ibqsb-control-plane-r98ch condition met
node/capz-5ibqsb-mp-0000000 condition met
... skipping 53 lines ...
Dynamic Provisioning 
  should create a storage account with tags [file.csi.azure.com] [Windows]
  /home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/dynamic_provisioning_test.go:73
STEP: Creating a kubernetes client
Sep  3 00:02:05.075: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/kubeconfig
STEP: Building a namespace api object, basename azurefile
Sep  3 00:02:05.510: INFO: Error listing PodSecurityPolicies; assuming PodSecurityPolicy is disabled: the server could not find the requested resource
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
2022/09/03 00:02:05 Check driver pods if restarts ...
check the driver pods if restarts ...
======================================================================================
2022/09/03 00:02:06 Check successfully
... skipping 44 lines ...
Sep  3 00:02:27.855: INFO: PersistentVolumeClaim pvc-p8stc found but phase is Pending instead of Bound.
Sep  3 00:02:29.913: INFO: PersistentVolumeClaim pvc-p8stc found and phase=Bound (22.696111678s)
STEP: checking the PVC
STEP: validating provisioned PV
STEP: checking the PV
STEP: deploying the pod
STEP: checking that the pods command exits with no error
Sep  3 00:02:30.086: INFO: Waiting up to 15m0s for pod "azurefile-volume-tester-5mmzs" in namespace "azurefile-2540" to be "Succeeded or Failed"
Sep  3 00:02:30.148: INFO: Pod "azurefile-volume-tester-5mmzs": Phase="Pending", Reason="", readiness=false. Elapsed: 62.617565ms
Sep  3 00:02:32.209: INFO: Pod "azurefile-volume-tester-5mmzs": Phase="Pending", Reason="", readiness=false. Elapsed: 2.123320436s
Sep  3 00:02:34.270: INFO: Pod "azurefile-volume-tester-5mmzs": Phase="Pending", Reason="", readiness=false. Elapsed: 4.184001498s
Sep  3 00:02:36.330: INFO: Pod "azurefile-volume-tester-5mmzs": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.244830207s
STEP: Saw pod success
Sep  3 00:02:36.330: INFO: Pod "azurefile-volume-tester-5mmzs" satisfied condition "Succeeded or Failed"
Sep  3 00:02:36.330: INFO: deleting Pod "azurefile-2540"/"azurefile-volume-tester-5mmzs"
Sep  3 00:02:36.401: INFO: Pod azurefile-volume-tester-5mmzs has the following logs: hello world

STEP: Deleting pod azurefile-volume-tester-5mmzs in namespace azurefile-2540
Sep  3 00:02:36.468: INFO: deleting PVC "azurefile-2540"/"pvc-p8stc"
Sep  3 00:02:36.468: INFO: Deleting PersistentVolumeClaim "pvc-p8stc"
... skipping 156 lines ...
Sep  3 00:04:28.065: INFO: PersistentVolumeClaim pvc-bcrjc found but phase is Pending instead of Bound.
Sep  3 00:04:30.123: INFO: PersistentVolumeClaim pvc-bcrjc found and phase=Bound (22.696495361s)
STEP: checking the PVC
STEP: validating provisioned PV
STEP: checking the PV
STEP: deploying the pod
STEP: checking that the pods command exits with an error
Sep  3 00:04:30.295: INFO: Waiting up to 15m0s for pod "azurefile-volume-tester-gr2dd" in namespace "azurefile-2790" to be "Error status code"
Sep  3 00:04:30.352: INFO: Pod "azurefile-volume-tester-gr2dd": Phase="Pending", Reason="", readiness=false. Elapsed: 56.843055ms
Sep  3 00:04:32.412: INFO: Pod "azurefile-volume-tester-gr2dd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.116170832s
Sep  3 00:04:34.474: INFO: Pod "azurefile-volume-tester-gr2dd": Phase="Failed", Reason="", readiness=false. Elapsed: 4.178099783s
STEP: Saw pod failure
Sep  3 00:04:34.474: INFO: Pod "azurefile-volume-tester-gr2dd" satisfied condition "Error status code"
STEP: checking that pod logs contain expected message
Sep  3 00:04:34.532: INFO: deleting Pod "azurefile-2790"/"azurefile-volume-tester-gr2dd"
Sep  3 00:04:34.591: INFO: Pod azurefile-volume-tester-gr2dd has the following logs: touch: /mnt/test-1/data: Read-only file system

STEP: Deleting pod azurefile-volume-tester-gr2dd in namespace azurefile-2790
Sep  3 00:04:34.660: INFO: deleting PVC "azurefile-2790"/"pvc-bcrjc"
... skipping 180 lines ...
Sep  3 00:06:28.380: INFO: PersistentVolumeClaim pvc-sg8mr found but phase is Pending instead of Bound.
Sep  3 00:06:30.438: INFO: PersistentVolumeClaim pvc-sg8mr found and phase=Bound (2.114451149s)
STEP: checking the PVC
STEP: validating provisioned PV
STEP: checking the PV
STEP: deploying the pod
STEP: checking that the pods command exits with no error
Sep  3 00:06:30.609: INFO: Waiting up to 15m0s for pod "azurefile-volume-tester-hcgdl" in namespace "azurefile-4538" to be "Succeeded or Failed"
Sep  3 00:06:30.665: INFO: Pod "azurefile-volume-tester-hcgdl": Phase="Pending", Reason="", readiness=false. Elapsed: 55.885262ms
Sep  3 00:06:32.725: INFO: Pod "azurefile-volume-tester-hcgdl": Phase="Running", Reason="", readiness=true. Elapsed: 2.116262701s
Sep  3 00:06:34.786: INFO: Pod "azurefile-volume-tester-hcgdl": Phase="Running", Reason="", readiness=false. Elapsed: 4.176709203s
Sep  3 00:06:36.845: INFO: Pod "azurefile-volume-tester-hcgdl": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.236067193s
STEP: Saw pod success
Sep  3 00:06:36.845: INFO: Pod "azurefile-volume-tester-hcgdl" satisfied condition "Succeeded or Failed"
STEP: resizing the pvc
STEP: sleep 30s waiting for resize complete
STEP: checking the resizing result
STEP: checking the resizing PV result
STEP: checking the resizing azurefile result
Sep  3 00:07:07.670: INFO: deleting Pod "azurefile-4538"/"azurefile-volume-tester-hcgdl"
... skipping 863 lines ...
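The resize sequence in the preceding test (patch the claim, sleep 30s, then verify the PVC, the PV, and the Azure file share) amounts to raising the PVC's storage request and letting the CSI driver expand the share. A minimal illustrative claim follows; the name and sizes are hypothetical, not taken from this log, and it assumes a StorageClass with `allowVolumeExpansion: true`.

```yaml
# Hypothetical PVC after resize: only spec.resources.requests.storage changes.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-example              # illustrative name, not from this log
spec:
  accessModes: [ReadWriteMany]
  storageClassName: azurefile-csi  # assumed expandable StorageClass
  resources:
    requests:
      storage: 20Gi              # bumped (e.g. from 10Gi) to trigger expansion
```

The test's "sleep 30s waiting for resize complete" step reflects that expansion is asynchronous: the new size appears on the PV and the backing share only after the controller reconciles the updated request.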
I0902 23:56:41.440829       1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1662163001\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1662163001\" (2022-09-02 22:56:40 +0000 UTC to 2023-09-02 22:56:40 +0000 UTC (now=2022-09-02 23:56:41.440788006 +0000 UTC))"
I0902 23:56:41.440871       1 secure_serving.go:210] Serving securely on 127.0.0.1:10257
I0902 23:56:41.441099       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0902 23:56:41.440841       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/etc/kubernetes/pki/front-proxy-ca.crt"
I0902 23:56:41.441409       1 leaderelection.go:248] attempting to acquire leader lease kube-system/kube-controller-manager...
I0902 23:56:41.441446       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
E0902 23:56:46.442766       1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: Get "https://10.0.0.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=5s": context deadline exceeded
I0902 23:56:46.442823       1 leaderelection.go:253] failed to acquire lease kube-system/kube-controller-manager
E0902 23:56:50.128639       1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: leases.coordination.k8s.io "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-system"
I0902 23:56:50.129055       1 leaderelection.go:253] failed to acquire lease kube-system/kube-controller-manager
I0902 23:56:50.361998       1 httplog.go:131] "HTTP" verb="GET" URI="/healthz" latency="102.6µs" userAgent="kube-probe/1.26+" audit-ID="" srcIP="127.0.0.1:60276" resp=200
I0902 23:56:53.634016       1 leaderelection.go:258] successfully acquired lease kube-system/kube-controller-manager
I0902 23:56:53.634636       1 event.go:294] "Event occurred" object="kube-system/kube-controller-manager" fieldPath="" kind="Lease" apiVersion="coordination.k8s.io/v1" type="Normal" reason="LeaderElection" message="capz-5ibqsb-control-plane-r98ch_8a7eb047-746a-4def-831d-86d7a4b1ecce became leader"
W0902 23:56:53.681479       1 plugins.go:131] WARNING: azure built-in cloud provider is now deprecated. The Azure provider is deprecated and will be removed in a future release. Please use https://github.com/kubernetes-sigs/cloud-provider-azure
I0902 23:56:53.682364       1 azure_auth.go:232] Using AzurePublicCloud environment
I0902 23:56:53.682517       1 azure_auth.go:117] azure: using client_id+client_secret to retrieve access token
... skipping 32 lines ...
I0902 23:56:53.686346       1 reflector.go:221] Starting reflector *v1.Secret (23h58m41.493209383s) from vendor/k8s.io/client-go/informers/factory.go:134
I0902 23:56:53.686359       1 reflector.go:257] Listing and watching *v1.Secret from vendor/k8s.io/client-go/informers/factory.go:134
I0902 23:56:53.686584       1 reflector.go:221] Starting reflector *v1.Node (23h58m41.493209383s) from vendor/k8s.io/client-go/informers/factory.go:134
I0902 23:56:53.686592       1 reflector.go:257] Listing and watching *v1.Node from vendor/k8s.io/client-go/informers/factory.go:134
I0902 23:56:53.786760       1 shared_informer.go:285] caches populated
I0902 23:56:53.786789       1 shared_informer.go:262] Caches are synced for tokens
W0902 23:56:53.834693       1 azure_config.go:53] Failed to get cloud-config from secret: failed to get secret azure-cloud-provider: secrets "azure-cloud-provider" is forbidden: User "system:serviceaccount:kube-system:azure-cloud-provider" cannot get resource "secrets" in API group "" in the namespace "kube-system", skip initializing from secret
I0902 23:56:53.834778       1 controllermanager.go:573] Starting "csrsigning"
I0902 23:56:53.880397       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="csr-controller::/etc/kubernetes/pki/ca.crt::/etc/kubernetes/pki/ca.key"
I0902 23:56:53.880676       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="csr-controller::/etc/kubernetes/pki/ca.crt::/etc/kubernetes/pki/ca.key"
I0902 23:56:53.880953       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="csr-controller::/etc/kubernetes/pki/ca.crt::/etc/kubernetes/pki/ca.key"
I0902 23:56:53.881613       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="csr-controller::/etc/kubernetes/pki/ca.crt::/etc/kubernetes/pki/ca.key"
I0902 23:56:53.881724       1 controllermanager.go:602] Started "csrsigning"
... skipping 24 lines ...
I0902 23:56:53.953098       1 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
I0902 23:56:53.953110       1 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/rbd"
I0902 23:56:53.953118       1 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/aws-ebs"
I0902 23:56:53.953130       1 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/gce-pd"
I0902 23:56:53.953138       1 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/azure-file"
I0902 23:56:53.953757       1 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
I0902 23:56:53.953858       1 csi_plugin.go:257] Cast from VolumeHost to KubeletVolumeHost failed. Skipping CSINode initialization, not running on kubelet
I0902 23:56:53.953879       1 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/csi"
I0902 23:56:53.955643       1 controllermanager.go:602] Started "persistentvolume-binder"
I0902 23:56:53.955715       1 controllermanager.go:573] Starting "root-ca-cert-publisher"
I0902 23:56:53.957411       1 pv_controller_base.go:318] Starting persistent volume controller
I0902 23:56:53.957439       1 shared_informer.go:255] Waiting for caches to sync for persistent volume
I0902 23:56:54.002623       1 controllermanager.go:602] Started "root-ca-cert-publisher"
... skipping 102 lines ...
I0902 23:56:56.390596       1 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/azure-disk"
I0902 23:56:56.391022       1 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/vsphere-volume"
I0902 23:56:56.391186       1 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
I0902 23:56:56.391330       1 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/rbd"
I0902 23:56:56.391475       1 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/fc"
I0902 23:56:56.391500       1 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
I0902 23:56:56.391523       1 csi_plugin.go:257] Cast from VolumeHost to KubeletVolumeHost failed. Skipping CSINode initialization, not running on kubelet
I0902 23:56:56.391538       1 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/csi"
I0902 23:56:56.391735       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-5ibqsb-control-plane-r98ch"
W0902 23:56:56.391902       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="capz-5ibqsb-control-plane-r98ch" does not exist
I0902 23:56:56.392075       1 controllermanager.go:602] Started "attachdetach"
I0902 23:56:56.392099       1 controllermanager.go:573] Starting "pv-protection"
I0902 23:56:56.392347       1 attach_detach_controller.go:328] Starting attach detach controller
I0902 23:56:56.392590       1 shared_informer.go:255] Waiting for caches to sync for attach detach
I0902 23:56:56.540076       1 controllermanager.go:602] Started "pv-protection"
I0902 23:56:56.540112       1 controllermanager.go:573] Starting "podgc"
... skipping 403 lines ...
I0902 23:56:58.752055       1 shared_informer.go:285] caches populated
I0902 23:56:58.752300       1 shared_informer.go:262] Caches are synced for garbage collector
I0902 23:56:58.752520       1 garbagecollector.go:263] synced garbage collector
I0902 23:56:58.763280       1 deployment_util.go:775] Deployment "coredns" timed out (false) [last progress check: 2022-09-02 23:56:58.747547283 +0000 UTC m=+19.378167140 - now: 2022-09-02 23:56:58.763270109 +0000 UTC m=+19.393889966]
I0902 23:56:58.763913       1 deployment_controller.go:183] "Updating deployment" deployment="kube-system/coredns"
I0902 23:56:58.767882       1 deployment_controller.go:585] "Finished syncing deployment" deployment="kube-system/coredns" duration="527.298216ms"
I0902 23:56:58.768123       1 deployment_controller.go:497] "Error syncing deployment" deployment="kube-system/coredns" err="Operation cannot be fulfilled on deployments.apps \"coredns\": the object has been modified; please apply your changes to the latest version and try again"
I0902 23:56:58.768193       1 deployment_controller.go:583] "Started syncing deployment" deployment="kube-system/coredns" startTime="2022-09-02 23:56:58.768155548 +0000 UTC m=+19.398775505"
I0902 23:56:58.768824       1 deployment_util.go:775] Deployment "coredns" timed out (false) [last progress check: 2022-09-02 23:56:58 +0000 UTC - now: 2022-09-02 23:56:58.768816053 +0000 UTC m=+19.399436010]
I0902 23:56:58.773574       1 deployment_controller.go:585] "Finished syncing deployment" deployment="kube-system/coredns" duration="5.403943ms"
I0902 23:56:58.774025       1 deployment_controller.go:183] "Updating deployment" deployment="kube-system/coredns"
I0902 23:56:58.774120       1 deployment_controller.go:583] "Started syncing deployment" deployment="kube-system/coredns" startTime="2022-09-02 23:56:58.774069995 +0000 UTC m=+19.404689952"
I0902 23:56:58.774839       1 deployment_util.go:775] Deployment "coredns" timed out (false) [last progress check: 2022-09-02 23:56:58 +0000 UTC - now: 2022-09-02 23:56:58.774832201 +0000 UTC m=+19.405452058]
... skipping 241 lines ...
I0902 23:57:23.221020       1 replica_set.go:394] Pod calico-kube-controllers-755ff8d7b5-ztk2t created: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"calico-kube-controllers-755ff8d7b5-ztk2t", GenerateName:"calico-kube-controllers-755ff8d7b5-", Namespace:"kube-system", SelfLink:"", UID:"4bbb941d-c7e5-42ee-8b71-c5cae774b0ed", ResourceVersion:"450", Generation:0, CreationTimestamp:time.Date(2022, time.September, 2, 23, 57, 23, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"calico-kube-controllers", "pod-template-hash":"755ff8d7b5"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"apps/v1", Kind:"ReplicaSet", Name:"calico-kube-controllers-755ff8d7b5", UID:"86e81493-dbb8-45c8-a7ef-945abd1a9d94", Controller:(*bool)(0xc0009c774e), BlockOwnerDeletion:(*bool)(0xc0009c774f)}}, Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.September, 2, 23, 57, 23, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0003ef1d0), Subresource:""}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-api-access-mlkd9", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), 
Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(0xc0007a7f40), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"calico-kube-controllers", Image:"docker.io/calico/kube-controllers:v3.23.0", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ENABLED_CONTROLLERS", Value:"node", ValueFrom:(*v1.EnvVarSource)(nil)}, v1.EnvVar{Name:"DATASTORE_TYPE", Value:"kubernetes", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-mlkd9", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(0xc0010f3d40), ReadinessProbe:(*v1.Probe)(0xc0010f3d80), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0009c7830), 
ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"calico-kube-controllers", DeprecatedServiceAccount:"calico-kube-controllers", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0003a9340), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node-role.kubernetes.io/master", Operator:"", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node-role.kubernetes.io/control-plane", Operator:"", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0009c7890)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0009c78b0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-cluster-critical", Priority:(*int32)(0xc0009c78b8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0009c78bc), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc001636c30), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil), OS:(*v1.PodOS)(nil), HostUsers:(*bool)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition(nil), Message:"", Reason:"", NominatedNodeName:"", HostIP:"", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:<nil>, InitContainerStatuses:[]v1.ContainerStatus(nil), 
ContainerStatuses:[]v1.ContainerStatus(nil), QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}.
I0902 23:57:23.223225       1 controller_utils.go:240] Lowered expectations &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-kube-controllers-755ff8d7b5", timestamp:time.Time{wall:0xc0bcc3f8cc3807a1, ext:43835618618, loc:(*time.Location)(0x6f10040)}}
I0902 23:57:23.221400       1 disruption.go:479] addPod called on pod "calico-kube-controllers-755ff8d7b5-ztk2t"
I0902 23:57:23.224943       1 disruption.go:570] No PodDisruptionBudgets found for pod calico-kube-controllers-755ff8d7b5-ztk2t, PodDisruptionBudget controller will avoid syncing.
I0902 23:57:23.225172       1 disruption.go:482] No matching pdb for pod "calico-kube-controllers-755ff8d7b5-ztk2t"
I0902 23:57:23.231437       1 deployment_controller.go:585] "Finished syncing deployment" deployment="kube-system/calico-kube-controllers" duration="47.317049ms"
I0902 23:57:23.231483       1 deployment_controller.go:497] "Error syncing deployment" deployment="kube-system/calico-kube-controllers" err="Operation cannot be fulfilled on deployments.apps \"calico-kube-controllers\": the object has been modified; please apply your changes to the latest version and try again"
I0902 23:57:23.231544       1 deployment_controller.go:583] "Started syncing deployment" deployment="kube-system/calico-kube-controllers" startTime="2022-09-02 23:57:23.23152116 +0000 UTC m=+43.862141117"
I0902 23:57:23.232331       1 deployment_util.go:775] Deployment "calico-kube-controllers" timed out (false) [last progress check: 2022-09-02 23:57:23 +0000 UTC - now: 2022-09-02 23:57:23.232322382 +0000 UTC m=+43.862942239]
I0902 23:57:23.236229       1 node_lifecycle_controller.go:914] Node capz-5ibqsb-control-plane-r98ch is NotReady as of 2022-09-02 23:57:23.236212184 +0000 UTC m=+43.866832041. Adding it to the Taint queue.
I0902 23:57:23.240528       1 pvc_protection_controller.go:380] "Enqueuing PVCs for Pod" pod="kube-system/calico-kube-controllers-755ff8d7b5-ztk2t" podUID=4bbb941d-c7e5-42ee-8b71-c5cae774b0ed
I0902 23:57:23.243928       1 deployment_controller.go:183] "Updating deployment" deployment="kube-system/calico-kube-controllers"
I0902 23:57:23.244027       1 replica_set.go:667] Finished syncing ReplicaSet "kube-system/calico-kube-controllers-755ff8d7b5" (39.933354ms)
... skipping 331 lines ...
I0902 23:57:44.431445       1 replica_set.go:457] Pod coredns-84994b8c4-m7wmg updated, objectMeta {Name:coredns-84994b8c4-m7wmg GenerateName:coredns-84994b8c4- Namespace:kube-system SelfLink: UID:f777c824-4f1c-45a9-8131-4782b7c25db6 ResourceVersion:518 Generation:0 CreationTimestamp:2022-09-02 23:56:59 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[k8s-app:kube-dns pod-template-hash:84994b8c4] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:coredns-84994b8c4 UID:263f78ae-21b7-4880-9d3e-c1a0db0e53c8 Controller:0xc0025aa620 BlockOwnerDeletion:0xc0025aa621}] Finalizers:[] ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-09-02 23:56:59 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"263f78ae-21b7-4880-9d3e-c1a0db0e53c8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":53,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}},"k:{\"containerPort\":53,\"protocol\":\"UDP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}},"k:{\"containerPort\":9153,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}}},"f:readinessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:
memory":{}}},"f:securityContext":{".":{},"f:allowPrivilegeEscalation":{},"f:capabilities":{".":{},"f:add":{},"f:drop":{}},"f:readOnlyRootFilesystem":{}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/etc/coredns\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{},"f:volumes":{".":{},"k:{\"name\":\"config-volume\"}":{".":{},"f:configMap":{".":{},"f:defaultMode":{},"f:items":{},"f:name":{}},"f:name":{}}}}} Subresource:} {Manager:kube-scheduler Operation:Update APIVersion:v1 Time:2022-09-02 23:56:59 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{".":{},"k:{\"type\":\"PodScheduled\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} Subresource:status}]} -> {Name:coredns-84994b8c4-m7wmg GenerateName:coredns-84994b8c4- Namespace:kube-system SelfLink: UID:f777c824-4f1c-45a9-8131-4782b7c25db6 ResourceVersion:524 Generation:0 CreationTimestamp:2022-09-02 23:56:59 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[k8s-app:kube-dns pod-template-hash:84994b8c4] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:coredns-84994b8c4 UID:263f78ae-21b7-4880-9d3e-c1a0db0e53c8 Controller:0xc002538280 BlockOwnerDeletion:0xc002538281}] Finalizers:[] ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-09-02 23:56:59 +0000 UTC FieldsType:FieldsV1 
FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"263f78ae-21b7-4880-9d3e-c1a0db0e53c8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":53,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}},"k:{\"containerPort\":53,\"protocol\":\"UDP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}},"k:{\"containerPort\":9153,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}}},"f:readinessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:securityContext":{".":{},"f:allowPrivilegeEscalation":{},"f:capabilities":{".":{},"f:add":{},"f:drop":{}},"f:readOnlyRootFilesystem":{}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/etc/coredns\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{},"f:volumes":{".":{},"k:{\"name\":\"config-volume\"}":{".":{},"f:configMap":{".":{},"f:defaultMode":{},"f:items":{},"f:name":{}},"f:name":{}}}}} Subresource:} 
{Manager:kube-scheduler Operation:Update APIVersion:v1 Time:2022-09-02 23:56:59 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{".":{},"k:{\"type\":\"PodScheduled\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} Subresource:status} {Manager:kubelet Operation:Update APIVersion:v1 Time:2022-09-02 23:57:44 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} Subresource:status}]}.
I0902 23:57:44.431730       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/coredns-84994b8c4", timestamp:time.Time{wall:0xc0bcc3f2acaaf342, ext:19380020855, loc:(*time.Location)(0x6f10040)}}
I0902 23:57:44.431884       1 replica_set.go:667] Finished syncing ReplicaSet "kube-system/coredns-84994b8c4" (161.507µs)
I0902 23:57:44.432121       1 disruption.go:494] updatePod called on pod "coredns-84994b8c4-m7wmg"
I0902 23:57:44.432154       1 disruption.go:570] No PodDisruptionBudgets found for pod coredns-84994b8c4-m7wmg, PodDisruptionBudget controller will avoid syncing.
I0902 23:57:44.432161       1 disruption.go:497] No matching pdb for pod "coredns-84994b8c4-m7wmg"
I0902 23:57:48.240768       1 node_lifecycle_controller.go:1084] ReadyCondition for Node capz-5ibqsb-control-plane-r98ch transitioned from &NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:2022-09-02 23:57:03 +0000 UTC,LastTransitionTime:2022-09-02 23:56:25 +0000 UTC,Reason:KubeletNotReady,Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized,} to &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-09-02 23:57:44 +0000 UTC,LastTransitionTime:2022-09-02 23:57:44 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,}
I0902 23:57:48.240876       1 node_lifecycle_controller.go:1092] Node capz-5ibqsb-control-plane-r98ch ReadyCondition updated. Updating timestamp.
I0902 23:57:48.240912       1 node_lifecycle_controller.go:938] Node capz-5ibqsb-control-plane-r98ch is healthy again, removing all taints
I0902 23:57:48.240936       1 node_lifecycle_controller.go:1236] Controller detected that some Nodes are Ready. Exiting master disruption mode.
I0902 23:57:49.214358       1 tokencleaner.go:166] Finished syncing secret "kube-system/bootstrap-token-wx647n" (12.1µs)
I0902 23:57:49.243809       1 tokencleaner.go:166] Finished syncing secret "kube-system/bootstrap-token-yz9e8y" (16.1µs)
I0902 23:57:49.847236       1 httplog.go:131] "HTTP" verb="GET" URI="/healthz" latency="122.203µs" userAgent="kube-probe/1.26+" audit-ID="" srcIP="127.0.0.1:40880" resp=200
... skipping 259 lines ...
I0902 23:59:12.558055       1 controller.go:690] Syncing backends for all LB services.
I0902 23:59:12.567045       1 controller.go:728] Running updateLoadBalancerHosts(len(services)==0, workers==1)
I0902 23:59:12.567062       1 controller.go:753] Finished updateLoadBalancerHosts
I0902 23:59:12.567069       1 controller.go:694] Successfully updated 0 out of 0 load balancers to direct traffic to the updated set of nodes
I0902 23:59:12.567078       1 controller.go:686] It took 0.009023575 seconds to finish syncNodes
I0902 23:59:12.558064       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-5ibqsb-mp-0000000"
W0902 23:59:12.567127       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="capz-5ibqsb-mp-0000000" does not exist
I0902 23:59:12.558080       1 topologycache.go:179] Ignoring node capz-5ibqsb-control-plane-r98ch because it has an excluded label
I0902 23:59:12.567156       1 topologycache.go:183] Ignoring node capz-5ibqsb-mp-0000000 because it is not ready: [{MemoryPressure False 2022-09-02 23:59:12 +0000 UTC 2022-09-02 23:59:12 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2022-09-02 23:59:12 +0000 UTC 2022-09-02 23:59:12 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2022-09-02 23:59:12 +0000 UTC 2022-09-02 23:59:12 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready False 2022-09-02 23:59:12 +0000 UTC 2022-09-02 23:59:12 +0000 UTC KubeletNotReady [container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized, failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "capz-5ibqsb-mp-0000000" not found]}]
I0902 23:59:12.567233       1 topologycache.go:215] Insufficient node info for topology hints (0 zones, %!s(int64=0) CPU, true)
I0902 23:59:12.574805       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-5ibqsb-mp-0000000"
I0902 23:59:12.575233       1 ttl_controller.go:275] "Changed ttl annotation" node="capz-5ibqsb-mp-0000000" new_ttl="0s"
I0902 23:59:12.583781       1 pvc_protection_controller.go:380] "Enqueuing PVCs for Pod" pod="kube-system/calico-node-5grhm" podUID=129a7999-0259-4cfd-9b63-5f0a4f0adc3f
I0902 23:59:12.584187       1 taint_manager.go:431] "Noticed pod update" pod="kube-system/calico-node-5grhm"
I0902 23:59:12.584210       1 disruption.go:479] addPod called on pod "calico-node-5grhm"
... skipping 107 lines ...
I0902 23:59:16.789235       1 daemon_controller.go:1036] Pods to delete for daemon set calico-node: [], deleting 0
I0902 23:59:16.789318       1 daemon_controller.go:1119] Updating daemon set status
I0902 23:59:16.789499       1 daemon_controller.go:1179] Finished syncing daemon set "kube-system/calico-node" (4.376393ms)
I0902 23:59:17.179789       1 taint_manager.go:466] "Noticed node update" node={nodeName:capz-5ibqsb-mp-0000001}
I0902 23:59:17.180666       1 taint_manager.go:471] "Updating known taints on node" node="capz-5ibqsb-mp-0000001" taints=[]
I0902 23:59:17.180993       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-5ibqsb-mp-0000001"
W0902 23:59:17.181166       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="capz-5ibqsb-mp-0000001" does not exist
I0902 23:59:17.181343       1 controller.go:690] Syncing backends for all LB services.
I0902 23:59:17.181510       1 controller.go:728] Running updateLoadBalancerHosts(len(services)==0, workers==1)
I0902 23:59:17.181655       1 controller.go:753] Finished updateLoadBalancerHosts
I0902 23:59:17.181794       1 controller.go:694] Successfully updated 0 out of 0 load balancers to direct traffic to the updated set of nodes
I0902 23:59:17.181968       1 controller.go:686] It took 0.000624328 seconds to finish syncNodes
I0902 23:59:17.182134       1 topologycache.go:179] Ignoring node capz-5ibqsb-control-plane-r98ch because it has an excluded label
I0902 23:59:17.184302       1 topologycache.go:183] Ignoring node capz-5ibqsb-mp-0000000 because it is not ready: [{MemoryPressure False 2022-09-02 23:59:12 +0000 UTC 2022-09-02 23:59:12 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2022-09-02 23:59:12 +0000 UTC 2022-09-02 23:59:12 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2022-09-02 23:59:12 +0000 UTC 2022-09-02 23:59:12 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready False 2022-09-02 23:59:12 +0000 UTC 2022-09-02 23:59:12 +0000 UTC KubeletNotReady [container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized, failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "capz-5ibqsb-mp-0000000" not found]}]
I0902 23:59:17.184481       1 topologycache.go:183] Ignoring node capz-5ibqsb-mp-0000001 because it is not ready: [{MemoryPressure False 2022-09-02 23:59:17 +0000 UTC 2022-09-02 23:59:17 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2022-09-02 23:59:17 +0000 UTC 2022-09-02 23:59:17 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2022-09-02 23:59:17 +0000 UTC 2022-09-02 23:59:17 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready False 2022-09-02 23:59:17 +0000 UTC 2022-09-02 23:59:17 +0000 UTC KubeletNotReady [container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized, failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "capz-5ibqsb-mp-0000001" not found]}]
I0902 23:59:17.184656       1 topologycache.go:215] Insufficient node info for topology hints (0 zones, %!s(int64=0) CPU, true)
I0902 23:59:17.182957       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0bcc4152cb1c1b3, ext:157380466921, loc:(*time.Location)(0x6f10040)}}
I0902 23:59:17.184997       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:1, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0bcc4154b06be3f, ext:157815611152, loc:(*time.Location)(0x6f10040)}}
I0902 23:59:17.185140       1 daemon_controller.go:974] Nodes needing daemon pods for daemon set kube-proxy: [capz-5ibqsb-mp-0000001], creating 1
I0902 23:59:17.186743       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc0bcc4152f08e326, ext:157419731447, loc:(*time.Location)(0x6f10040)}}
I0902 23:59:17.187075       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:1, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc0bcc4154b266e52, ext:157817687943, loc:(*time.Location)(0x6f10040)}}
... skipping 336 lines ...
I0902 23:59:43.146866       1 controller.go:690] Syncing backends for all LB services.
I0902 23:59:43.147801       1 controller.go:728] Running updateLoadBalancerHosts(len(services)==0, workers==1)
I0902 23:59:43.147822       1 controller.go:753] Finished updateLoadBalancerHosts
I0902 23:59:43.147854       1 controller.go:694] Successfully updated 0 out of 0 load balancers to direct traffic to the updated set of nodes
I0902 23:59:43.147868       1 controller.go:686] It took 0.001026841 seconds to finish syncNodes
I0902 23:59:43.146914       1 topologycache.go:179] Ignoring node capz-5ibqsb-control-plane-r98ch because it has an excluded label
I0902 23:59:43.147907       1 topologycache.go:183] Ignoring node capz-5ibqsb-mp-0000001 because it is not ready: [{MemoryPressure False 2022-09-02 23:59:37 +0000 UTC 2022-09-02 23:59:17 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2022-09-02 23:59:37 +0000 UTC 2022-09-02 23:59:17 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2022-09-02 23:59:37 +0000 UTC 2022-09-02 23:59:17 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready False 2022-09-02 23:59:37 +0000 UTC 2022-09-02 23:59:17 +0000 UTC KubeletNotReady container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized}]
I0902 23:59:43.151156       1 topologycache.go:215] Insufficient node info for topology hints (1 zones, %!s(int64=2000) CPU, true)
I0902 23:59:43.159694       1 controller_utils.go:217] "Made sure that node has no taint" node="capz-5ibqsb-mp-0000000" taint=[&Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoSchedule,TimeAdded:<nil>,}]
I0902 23:59:43.162802       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-5ibqsb-mp-0000000"
I0902 23:59:43.220103       1 reflector.go:281] vendor/k8s.io/client-go/informers/factory.go:134: forcing resync
I0902 23:59:43.259806       1 node_lifecycle_controller.go:1084] ReadyCondition for Node capz-5ibqsb-mp-0000000 transitioned from &NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:2022-09-02 23:59:32 +0000 UTC,LastTransitionTime:2022-09-02 23:59:12 +0000 UTC,Reason:KubeletNotReady,Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized,} to &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-09-02 23:59:43 +0000 UTC,LastTransitionTime:2022-09-02 23:59:43 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,}
I0902 23:59:43.259886       1 node_lifecycle_controller.go:1092] Node capz-5ibqsb-mp-0000000 ReadyCondition updated. Updating timestamp.
I0902 23:59:43.266637       1 pv_controller_base.go:612] resyncing PV controller
I0902 23:59:43.273921       1 node_lifecycle_controller.go:938] Node capz-5ibqsb-mp-0000000 is healthy again, removing all taints
I0902 23:59:43.273974       1 node_lifecycle_controller.go:1259] Controller detected that zone westus2::0 is now in state Normal.
I0902 23:59:43.274311       1 taint_manager.go:466] "Noticed node update" node={nodeName:capz-5ibqsb-mp-0000000}
I0902 23:59:43.274346       1 taint_manager.go:471] "Updating known taints on node" node="capz-5ibqsb-mp-0000000" taints=[]
... skipping 50 lines ...
I0902 23:59:47.570046       1 controller.go:728] Running updateLoadBalancerHosts(len(services)==0, workers==1)
I0902 23:59:47.570222       1 controller.go:753] Finished updateLoadBalancerHosts
I0902 23:59:47.570406       1 controller.go:694] Successfully updated 0 out of 0 load balancers to direct traffic to the updated set of nodes
I0902 23:59:47.570572       1 controller.go:686] It took 0.001722468 seconds to finish syncNodes
I0902 23:59:47.583904       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-5ibqsb-mp-0000001"
I0902 23:59:47.584417       1 controller_utils.go:217] "Made sure that node has no taint" node="capz-5ibqsb-mp-0000001" taint=[&Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoSchedule,TimeAdded:<nil>,}]
I0902 23:59:48.275062       1 node_lifecycle_controller.go:1084] ReadyCondition for Node capz-5ibqsb-mp-0000001 transitioned from &NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:2022-09-02 23:59:37 +0000 UTC,LastTransitionTime:2022-09-02 23:59:17 +0000 UTC,Reason:KubeletNotReady,Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized,} to &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-09-02 23:59:47 +0000 UTC,LastTransitionTime:2022-09-02 23:59:47 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,}
I0902 23:59:48.275563       1 node_lifecycle_controller.go:1092] Node capz-5ibqsb-mp-0000001 ReadyCondition updated. Updating timestamp.
I0902 23:59:48.291696       1 node_lifecycle_controller.go:938] Node capz-5ibqsb-mp-0000001 is healthy again, removing all taints
I0902 23:59:48.292753       1 node_lifecycle_controller.go:1259] Controller detected that zone westus2::1 is now in state Normal.
I0902 23:59:48.292093       1 taint_manager.go:466] "Noticed node update" node={nodeName:capz-5ibqsb-mp-0000001}
I0902 23:59:48.292159       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-5ibqsb-mp-0000001"
I0902 23:59:48.293502       1 taint_manager.go:471] "Updating known taints on node" node="capz-5ibqsb-mp-0000001" taints=[]
... skipping 52 lines ...
I0902 23:59:52.872675       1 event.go:294] "Event occurred" object="kube-system/csi-azurefile-controller" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set csi-azurefile-controller-7847f46f86 to 2"
I0902 23:59:52.884920       1 deployment_controller.go:183] "Updating deployment" deployment="kube-system/csi-azurefile-controller"
I0902 23:59:52.886481       1 controller_utils.go:581] Controller csi-azurefile-controller-7847f46f86 created pod csi-azurefile-controller-7847f46f86-tq6vt
I0902 23:59:52.887113       1 deployment_util.go:775] Deployment "csi-azurefile-controller" timed out (false) [last progress check: 2022-09-02 23:59:52.872218967 +0000 UTC m=+193.502838924 - now: 2022-09-02 23:59:52.887103252 +0000 UTC m=+193.517723109]
I0902 23:59:52.887420       1 taint_manager.go:431] "Noticed pod update" pod="kube-system/csi-azurefile-controller-7847f46f86-tq6vt"
I0902 23:59:52.887469       1 pvc_protection_controller.go:380] "Enqueuing PVCs for Pod" pod="kube-system/csi-azurefile-controller-7847f46f86-tq6vt" podUID=6627a42a-480e-4fd9-b70e-e44120281eb9
I0902 23:59:52.887488       1 replica_set.go:394] Pod csi-azurefile-controller-7847f46f86-tq6vt created: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"csi-azurefile-controller-7847f46f86-tq6vt", GenerateName:"csi-azurefile-controller-7847f46f86-", Namespace:"kube-system", SelfLink:"", UID:"6627a42a-480e-4fd9-b70e-e44120281eb9", ResourceVersion:"904", Generation:0, CreationTimestamp:time.Date(2022, time.September, 2, 23, 59, 52, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"csi-azurefile-controller", "pod-template-hash":"7847f46f86"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"apps/v1", Kind:"ReplicaSet", Name:"csi-azurefile-controller-7847f46f86", UID:"20c66534-ace5-4006-8e5d-5483e64acd90", Controller:(*bool)(0xc0027d7f97), BlockOwnerDeletion:(*bool)(0xc0027d7f98)}}, Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.September, 2, 23, 59, 52, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00278df50), Subresource:""}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"socket-dir", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(0xc00278df68), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), 
Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"azure-cred", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc00278df80), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), 
Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"kube-api-access-7x8j7", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(0xc0024cd0c0), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"csi-provisioner", Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-provisioner:v3.2.0", Command:[]string(nil), Args:[]string{"-v=2", "--csi-address=$(ADDRESS)", "--leader-election", "--leader-election-namespace=kube-system", "--timeout=300s", "--extra-create-metadata=true"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, 
Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:524288000, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-7x8j7", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"csi-attacher", Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-attacher:v3.5.0", Command:[]string(nil), Args:[]string{"-v=2", "-csi-address=$(ADDRESS)", "-timeout=120s", "--leader-election", "--leader-election-namespace=kube-system"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:524288000, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500Mi", Format:"BinarySI"}}, 
Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-7x8j7", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"csi-snapshotter", Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-snapshotter:v5.0.1", Command:[]string(nil), Args:[]string{"-v=2", "-csi-address=$(ADDRESS)", "--leader-election", "--leader-election-namespace=kube-system"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:104857600, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", 
Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-7x8j7", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"csi-resizer", Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-resizer:v1.5.0", Command:[]string(nil), Args:[]string{"-csi-address=$(ADDRESS)", "-v=2", "--leader-election", "--leader-election-namespace=kube-system", "-handle-volume-inuse-error=false", "-feature-gates=RecoverVolumeExpansionFailure=true", "-timeout=120s"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:524288000, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, 
v1.VolumeMount{Name:"kube-api-access-7x8j7", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"liveness-probe", Image:"mcr.microsoft.com/oss/kubernetes-csi/livenessprobe:v2.7.0", Command:[]string(nil), Args:[]string{"--csi-address=/csi/csi.sock", "--probe-timeout=3s", "--health-port=29612", "--v=2"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:104857600, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-7x8j7", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), 
TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"azurefile", Image:"mcr.microsoft.com/k8s/csi/azurefile-csi:latest", Command:[]string(nil), Args:[]string{"--v=5", "--endpoint=$(CSI_ENDPOINT)", "--metrics-address=0.0.0.0:29614", "--user-agent-suffix=OSS-kubectl"}, WorkingDir:"", Ports:[]v1.ContainerPort{v1.ContainerPort{Name:"healthz", HostPort:29612, ContainerPort:29612, Protocol:"TCP", HostIP:""}, v1.ContainerPort{Name:"metrics", HostPort:29614, ContainerPort:29614, Protocol:"TCP", HostIP:""}}, EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"AZURE_CREDENTIAL_FILE", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc0024cd1e0)}, v1.EnvVar{Name:"CSI_ENDPOINT", Value:"unix:///csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:209715200, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"azure-cred", ReadOnly:false, MountPath:"/etc/kubernetes/", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-7x8j7", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, 
VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(0xc002bf4140), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002848340), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"csi-azurefile-controller-sa", DeprecatedServiceAccount:"csi-azurefile-controller-sa", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0003fb260), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node-role.kubernetes.io/master", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node-role.kubernetes.io/controlplane", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node-role.kubernetes.io/control-plane", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0028483b0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0028483d0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-cluster-critical", Priority:(*int32)(0xc0028483d8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), 
RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0028483dc), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc0029c3d60), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil), OS:(*v1.PodOS)(nil), HostUsers:(*bool)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition(nil), Message:"", Reason:"", NominatedNodeName:"", HostIP:"", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:<nil>, InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus(nil), QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}.
I0902 23:59:52.890106       1 controller_utils.go:240] Lowered expectations &controller.ControlleeExpectations{add:1, del:0, key:"kube-system/csi-azurefile-controller-7847f46f86", timestamp:time.Time{wall:0xc0bcc41e33eab144, ext:193501638777, loc:(*time.Location)(0x6f10040)}}
I0902 23:59:52.888112       1 disruption.go:479] addPod called on pod "csi-azurefile-controller-7847f46f86-tq6vt"
I0902 23:59:52.889000       1 event.go:294] "Event occurred" object="kube-system/csi-azurefile-controller-7847f46f86" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: csi-azurefile-controller-7847f46f86-tq6vt"
I0902 23:59:52.890699       1 disruption.go:570] No PodDisruptionBudgets found for pod csi-azurefile-controller-7847f46f86-tq6vt, PodDisruptionBudget controller will avoid syncing.
I0902 23:59:52.891002       1 disruption.go:482] No matching pdb for pod "csi-azurefile-controller-7847f46f86-tq6vt"
I0902 23:59:52.900824       1 taint_manager.go:431] "Noticed pod update" pod="kube-system/csi-azurefile-controller-7847f46f86-tq6vt"
... skipping 3 lines ...
I0902 23:59:52.904007       1 disruption.go:497] No matching pdb for pod "csi-azurefile-controller-7847f46f86-tq6vt"
I0902 23:59:52.912299       1 controller_utils.go:581] Controller csi-azurefile-controller-7847f46f86 created pod csi-azurefile-controller-7847f46f86-xqs9p
I0902 23:59:52.912652       1 replica_set_utils.go:59] Updating status for : kube-system/csi-azurefile-controller-7847f46f86, replicas 0->0 (need 2), fullyLabeledReplicas 0->0, readyReplicas 0->0, availableReplicas 0->0, sequence No: 0->1
I0902 23:59:52.913516       1 event.go:294] "Event occurred" object="kube-system/csi-azurefile-controller-7847f46f86" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: csi-azurefile-controller-7847f46f86-xqs9p"
I0902 23:59:52.921314       1 taint_manager.go:431] "Noticed pod update" pod="kube-system/csi-azurefile-controller-7847f46f86-xqs9p"
I0902 23:59:52.925464       1 pvc_protection_controller.go:380] "Enqueuing PVCs for Pod" pod="kube-system/csi-azurefile-controller-7847f46f86-xqs9p" podUID=5a5d8944-308c-48df-93c0-1f1d2bb517ab
I0902 23:59:52.925837       1 replica_set.go:394] Pod csi-azurefile-controller-7847f46f86-xqs9p created: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"csi-azurefile-controller-7847f46f86-xqs9p", GenerateName:"csi-azurefile-controller-7847f46f86-", Namespace:"kube-system", SelfLink:"", UID:"5a5d8944-308c-48df-93c0-1f1d2bb517ab", ResourceVersion:"906", Generation:0, CreationTimestamp:time.Date(2022, time.September, 2, 23, 59, 52, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"csi-azurefile-controller", "pod-template-hash":"7847f46f86"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"apps/v1", Kind:"ReplicaSet", Name:"csi-azurefile-controller-7847f46f86", UID:"20c66534-ace5-4006-8e5d-5483e64acd90", Controller:(*bool)(0xc0028d4127), BlockOwnerDeletion:(*bool)(0xc0028d4128)}}, Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.September, 2, 23, 59, 52, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00271aab0), Subresource:""}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"socket-dir", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(0xc00271aac8), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), 
Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"azure-cred", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc00271aae0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), 
Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"kube-api-access-tgqgf", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(0xc0024cd6e0), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"csi-provisioner", Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-provisioner:v3.2.0", Command:[]string(nil), Args:[]string{"-v=2", "--csi-address=$(ADDRESS)", "--leader-election", "--leader-election-namespace=kube-system", "--timeout=300s", "--extra-create-metadata=true"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, 
Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:524288000, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-tgqgf", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"csi-attacher", Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-attacher:v3.5.0", Command:[]string(nil), Args:[]string{"-v=2", "-csi-address=$(ADDRESS)", "-timeout=120s", "--leader-election", "--leader-election-namespace=kube-system"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:524288000, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500Mi", Format:"BinarySI"}}, 
Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-tgqgf", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"csi-snapshotter", Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-snapshotter:v5.0.1", Command:[]string(nil), Args:[]string{"-v=2", "-csi-address=$(ADDRESS)", "--leader-election", "--leader-election-namespace=kube-system"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:104857600, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", 
Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-tgqgf", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"csi-resizer", Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-resizer:v1.5.0", Command:[]string(nil), Args:[]string{"-csi-address=$(ADDRESS)", "-v=2", "--leader-election", "--leader-election-namespace=kube-system", "-handle-volume-inuse-error=false", "-feature-gates=RecoverVolumeExpansionFailure=true", "-timeout=120s"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:524288000, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, 
v1.VolumeMount{Name:"kube-api-access-tgqgf", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"liveness-probe", Image:"mcr.microsoft.com/oss/kubernetes-csi/livenessprobe:v2.7.0", Command:[]string(nil), Args:[]string{"--csi-address=/csi/csi.sock", "--probe-timeout=3s", "--health-port=29612", "--v=2"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:104857600, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-tgqgf", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), 
TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"azurefile", Image:"mcr.microsoft.com/k8s/csi/azurefile-csi:latest", Command:[]string(nil), Args:[]string{"--v=5", "--endpoint=$(CSI_ENDPOINT)", "--metrics-address=0.0.0.0:29614", "--user-agent-suffix=OSS-kubectl"}, WorkingDir:"", Ports:[]v1.ContainerPort{v1.ContainerPort{Name:"healthz", HostPort:29612, ContainerPort:29612, Protocol:"TCP", HostIP:""}, v1.ContainerPort{Name:"metrics", HostPort:29614, ContainerPort:29614, Protocol:"TCP", HostIP:""}}, EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"AZURE_CREDENTIAL_FILE", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc0024cd800)}, v1.EnvVar{Name:"CSI_ENDPOINT", Value:"unix:///csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:209715200, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"azure-cred", ReadOnly:false, MountPath:"/etc/kubernetes/", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-tgqgf", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, 
VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(0xc002bf4fc0), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0028d4760), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"csi-azurefile-controller-sa", DeprecatedServiceAccount:"csi-azurefile-controller-sa", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0003fb880), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node-role.kubernetes.io/master", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node-role.kubernetes.io/controlplane", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node-role.kubernetes.io/control-plane", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0028d47d0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0028d47f0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-cluster-critical", Priority:(*int32)(0xc0028d47f8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), 
RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0028d47fc), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc002c194e0), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil), OS:(*v1.PodOS)(nil), HostUsers:(*bool)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition(nil), Message:"", Reason:"", NominatedNodeName:"", HostIP:"", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:<nil>, InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus(nil), QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}.
I0902 23:59:52.926766       1 controller_utils.go:240] Lowered expectations &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/csi-azurefile-controller-7847f46f86", timestamp:time.Time{wall:0xc0bcc41e33eab144, ext:193501638777, loc:(*time.Location)(0x6f10040)}}
I0902 23:59:52.927051       1 disruption.go:479] addPod called on pod "csi-azurefile-controller-7847f46f86-xqs9p"
I0902 23:59:52.927287       1 disruption.go:570] No PodDisruptionBudgets found for pod csi-azurefile-controller-7847f46f86-xqs9p, PodDisruptionBudget controller will avoid syncing.
I0902 23:59:52.927493       1 disruption.go:482] No matching pdb for pod "csi-azurefile-controller-7847f46f86-xqs9p"
I0902 23:59:52.932970       1 deployment_controller.go:585] "Finished syncing deployment" deployment="kube-system/csi-azurefile-controller" duration="68.694698ms"
I0902 23:59:52.933258       1 deployment_controller.go:497] "Error syncing deployment" deployment="kube-system/csi-azurefile-controller" err="Operation cannot be fulfilled on deployments.apps \"csi-azurefile-controller\": the object has been modified; please apply your changes to the latest version and try again"
I0902 23:59:52.933532       1 deployment_controller.go:583] "Started syncing deployment" deployment="kube-system/csi-azurefile-controller" startTime="2022-09-02 23:59:52.933506774 +0000 UTC m=+193.564126731"
I0902 23:59:52.936099       1 deployment_util.go:775] Deployment "csi-azurefile-controller" timed out (false) [last progress check: 2022-09-02 23:59:52 +0000 UTC - now: 2022-09-02 23:59:52.936090375 +0000 UTC m=+193.566710232]
I0902 23:59:52.949497       1 deployment_controller.go:288] "ReplicaSet updated" replicaSet="kube-system/csi-azurefile-controller-7847f46f86"
I0902 23:59:52.950669       1 taint_manager.go:431] "Noticed pod update" pod="kube-system/csi-azurefile-controller-7847f46f86-xqs9p"
I0902 23:59:52.957818       1 replica_set.go:667] Finished syncing ReplicaSet "kube-system/csi-azurefile-controller-7847f46f86" (86.940914ms)
I0902 23:59:52.958416       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/csi-azurefile-controller-7847f46f86", timestamp:time.Time{wall:0xc0bcc41e33eab144, ext:193501638777, loc:(*time.Location)(0x6f10040)}}
... skipping 1510 lines ...
I0903 00:04:43.316218       1 controller_utils.go:581] Controller azurefile-volume-tester-9p6hn-788d97fc5d created pod azurefile-volume-tester-9p6hn-788d97fc5d-7hss4
I0903 00:04:43.316476       1 replica_set_utils.go:59] Updating status for : azurefile-5356/azurefile-volume-tester-9p6hn-788d97fc5d, replicas 0->0 (need 1), fullyLabeledReplicas 0->0, readyReplicas 0->0, availableReplicas 0->0, sequence No: 0->1
I0903 00:04:43.317388       1 pvc_protection_controller.go:380] "Enqueuing PVCs for Pod" pod="azurefile-5356/azurefile-volume-tester-9p6hn-788d97fc5d-7hss4" podUID=6829fda4-9d4c-4fd6-b9ea-c60e65ed0ed4
I0903 00:04:43.317653       1 pvc_protection_controller.go:149] "Processing PVC" PVC="azurefile-5356/pvc-tl444"
I0903 00:04:43.317833       1 pvc_protection_controller.go:152] "Finished processing PVC" PVC="azurefile-5356/pvc-tl444" duration="7.9µs"
I0903 00:04:43.318468       1 deployment_controller.go:585] "Finished syncing deployment" deployment="azurefile-5356/azurefile-volume-tester-9p6hn" duration="28.16562ms"
I0903 00:04:43.320354       1 deployment_controller.go:497] "Error syncing deployment" deployment="azurefile-5356/azurefile-volume-tester-9p6hn" err="Operation cannot be fulfilled on deployments.apps \"azurefile-volume-tester-9p6hn\": the object has been modified; please apply your changes to the latest version and try again"
I0903 00:04:43.320848       1 deployment_controller.go:583] "Started syncing deployment" deployment="azurefile-5356/azurefile-volume-tester-9p6hn" startTime="2022-09-03 00:04:43.320644093 +0000 UTC m=+483.951263950"
I0903 00:04:43.322958       1 deployment_util.go:775] Deployment "azurefile-volume-tester-9p6hn" timed out (false) [last progress check: 2022-09-03 00:04:43 +0000 UTC - now: 2022-09-03 00:04:43.322936384 +0000 UTC m=+483.953556341]
I0903 00:04:43.323269       1 event.go:294] "Event occurred" object="azurefile-5356/azurefile-volume-tester-9p6hn-788d97fc5d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: azurefile-volume-tester-9p6hn-788d97fc5d-7hss4"
I0903 00:04:43.319729       1 disruption.go:479] addPod called on pod "azurefile-volume-tester-9p6hn-788d97fc5d-7hss4"
I0903 00:04:43.329897       1 disruption.go:570] No PodDisruptionBudgets found for pod azurefile-volume-tester-9p6hn-788d97fc5d-7hss4, PodDisruptionBudget controller will avoid syncing.
I0903 00:04:43.330059       1 disruption.go:482] No matching pdb for pod "azurefile-volume-tester-9p6hn-788d97fc5d-7hss4"
... skipping 1263 lines ...
I0903 00:07:35.945560       1 namespace_controller.go:180] Finished syncing namespace "azurefile-7578" (94.103µs)
2022/09/03 00:07:36 ===================================================

JUnit report was created: /logs/artifacts/junit_01.xml

Ran 6 of 34 Specs in 331.220 seconds
SUCCESS! -- 6 Passed | 0 Failed | 0 Pending | 28 Skipped

You're using deprecated Ginkgo functionality:
=============================================
Ginkgo 2.0 is under active development and will introduce several new features, improvements, and a small handful of breaking changes.
A release candidate for 2.0 is now available and 2.0 should GA in Fall 2021.  Please give the RC a try and send us feedback!
  - To learn more, view the migration guide at https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md
... skipping 41 lines ...
INFO: Creating log watcher for controller capz-system/capz-controller-manager, pod capz-controller-manager-858df9cd95-w488g, container manager
STEP: Dumping workload cluster default/capz-5ibqsb logs
Sep  3 00:09:09.895: INFO: Collecting logs for Linux node capz-5ibqsb-control-plane-r98ch in cluster capz-5ibqsb in namespace default

Sep  3 00:10:09.897: INFO: Collecting boot logs for AzureMachine capz-5ibqsb-control-plane-r98ch

Failed to get logs for machine capz-5ibqsb-control-plane-pf7rm, cluster default/capz-5ibqsb: open /etc/azure-ssh/azure-ssh: no such file or directory
Sep  3 00:10:10.835: INFO: Collecting logs for Linux node capz-5ibqsb-mp-0000000 in cluster capz-5ibqsb in namespace default

Sep  3 00:11:10.837: INFO: Collecting boot logs for VMSS instance 0 of scale set capz-5ibqsb-mp-0

Sep  3 00:11:11.202: INFO: Collecting logs for Linux node capz-5ibqsb-mp-0000001 in cluster capz-5ibqsb in namespace default

Sep  3 00:12:11.206: INFO: Collecting boot logs for VMSS instance 1 of scale set capz-5ibqsb-mp-0

Failed to get logs for machine pool capz-5ibqsb-mp-0, cluster default/capz-5ibqsb: open /etc/azure-ssh/azure-ssh: no such file or directory
STEP: Dumping workload cluster default/capz-5ibqsb kube-system pod logs
STEP: Fetching kube-system pod logs took 734.607582ms
STEP: Creating log watcher for controller kube-system/csi-azurefile-controller-7847f46f86-tq6vt, container csi-resizer
STEP: Creating log watcher for controller kube-system/kube-proxy-p5d5w, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-5ibqsb-control-plane-r98ch, container kube-scheduler
STEP: Creating log watcher for controller kube-system/kube-proxy-7khfw, container kube-proxy
STEP: Collecting events for Pod kube-system/kube-proxy-p5d5w
STEP: Creating log watcher for controller kube-system/coredns-84994b8c4-52rlh, container coredns
STEP: Collecting events for Pod kube-system/csi-azurefile-controller-7847f46f86-tq6vt
STEP: Collecting events for Pod kube-system/kube-proxy-7khfw
STEP: Collecting events for Pod kube-system/kube-controller-manager-capz-5ibqsb-control-plane-r98ch
STEP: failed to find events of Pod "kube-controller-manager-capz-5ibqsb-control-plane-r98ch"
STEP: Creating log watcher for controller kube-system/csi-azurefile-controller-7847f46f86-tq6vt, container csi-snapshotter
STEP: Creating log watcher for controller kube-system/csi-azurefile-controller-7847f46f86-tq6vt, container azurefile
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-755ff8d7b5-ztk2t, container calico-kube-controllers
STEP: Collecting events for Pod kube-system/calico-kube-controllers-755ff8d7b5-ztk2t
STEP: Collecting events for Pod kube-system/calico-node-5grhm
STEP: Creating log watcher for controller kube-system/calico-node-fch9r, container calico-node
... skipping 15 lines ...
STEP: Collecting events for Pod kube-system/csi-azurefile-node-h6wrm
STEP: Creating log watcher for controller kube-system/csi-azurefile-controller-7847f46f86-xqs9p, container csi-resizer
STEP: Creating log watcher for controller kube-system/csi-azurefile-controller-7847f46f86-xqs9p, container liveness-probe
STEP: Creating log watcher for controller kube-system/etcd-capz-5ibqsb-control-plane-r98ch, container etcd
STEP: Creating log watcher for controller kube-system/csi-azurefile-controller-7847f46f86-xqs9p, container azurefile
STEP: Collecting events for Pod kube-system/etcd-capz-5ibqsb-control-plane-r98ch
STEP: failed to find events of Pod "etcd-capz-5ibqsb-control-plane-r98ch"
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-5ibqsb-control-plane-r98ch, container kube-apiserver
STEP: Collecting events for Pod kube-system/csi-azurefile-controller-7847f46f86-xqs9p
STEP: Creating log watcher for controller kube-system/csi-azurefile-node-2kj76, container liveness-probe
STEP: Creating log watcher for controller kube-system/csi-azurefile-node-2kj76, container node-driver-registrar
STEP: Collecting events for Pod kube-system/kube-apiserver-capz-5ibqsb-control-plane-r98ch
STEP: failed to find events of Pod "kube-apiserver-capz-5ibqsb-control-plane-r98ch"
STEP: Creating log watcher for controller kube-system/csi-azurefile-node-2kj76, container azurefile
STEP: Collecting events for Pod kube-system/csi-azurefile-node-2kj76
STEP: Creating log watcher for controller kube-system/csi-azurefile-node-gnt64, container node-driver-registrar
STEP: Creating log watcher for controller kube-system/csi-azurefile-node-gnt64, container azurefile
STEP: Creating log watcher for controller kube-system/csi-azurefile-node-gnt64, container liveness-probe
STEP: Collecting events for Pod kube-system/csi-azurefile-node-gnt64
STEP: Dumping workload cluster default/capz-5ibqsb Azure activity log
STEP: Creating log watcher for controller kube-system/csi-azurefile-controller-7847f46f86-tq6vt, container liveness-probe
STEP: Collecting events for Pod kube-system/kube-scheduler-capz-5ibqsb-control-plane-r98ch
STEP: failed to find events of Pod "kube-scheduler-capz-5ibqsb-control-plane-r98ch"
STEP: Creating log watcher for controller kube-system/kube-proxy-q79wx, container kube-proxy
STEP: Collecting events for Pod kube-system/kube-proxy-q79wx
STEP: Creating log watcher for controller kube-system/csi-azurefile-controller-7847f46f86-xqs9p, container csi-snapshotter
STEP: Fetching activity logs took 4.683110235s
================ REDACTING LOGS ================
All sensitive variables are redacted
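The redaction banner above marks the point where the job scrubs secrets from the collected logs. The snippet below is a minimal, hypothetical sketch of that idea, not the project's actual redaction script: each known sensitive value (here a made-up token `abc123`) is replaced with a `REDACTED` placeholder before the logs are uploaded.

```shell
#!/bin/sh
# Hedged sketch of log redaction: substitute each known secret value
# with REDACTED. The log line and the secret are illustrative only.
log='token=abc123 region=eastus'

# Replace every occurrence of the secret with the placeholder.
redacted=$(printf '%s' "${log}" | sed 's/abc123/REDACTED/g')
echo "${redacted}"   # → token=REDACTED region=eastus
```

A real redactor would loop over every credential and cloud-config value available to the job, but the substitution step is the same shape.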
... skipping 15 lines ...