Result: success
Tests: 0 failed / 6 succeeded
Started: 2022-09-06 09:23
Elapsed: 33m58s
Revision:
Uploader: crier

No Test Failures!


Passed tests: 6

Skipped tests: 28

Error lines from build-log.txt

... skipping 701 lines ...
certificate.cert-manager.io "selfsigned-cert" deleted
# Create secret for AzureClusterIdentity
./hack/create-identity-secret.sh
make[2]: Entering directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make[2]: Nothing to be done for 'kubectl'.
make[2]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
Error from server (NotFound): secrets "cluster-identity-secret" not found
secret/cluster-identity-secret created
secret/cluster-identity-secret labeled
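The two lines above come from hack/create-identity-secret.sh, which creates the AzureClusterIdentity client secret and labels it for clusterctl. As a rough sketch of what such a step does (the environment variable and the move label are assumptions, not read from the script):

# Create the AzureClusterIdentity client secret and label it so clusterctl
# treats it as part of the cluster's move set (assumed label, see note above).
kubectl create secret generic cluster-identity-secret \
  --namespace default \
  --from-literal=clientSecret="${AZURE_CLIENT_SECRET}"
kubectl label secret cluster-identity-secret --namespace default \
  clusterctl.cluster.x-k8s.io/move=true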
# Create customized cloud provider configs
./hack/create-custom-cloud-provider-config.sh
make[2]: Entering directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make[2]: Nothing to be done for 'kubectl'.
... skipping 141 lines ...
# Wait for the kubeconfig to become available.
timeout --foreground 300 bash -c "while ! /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/kubectl-v1.22.4 get secrets | grep capz-qkpsgb-kubeconfig; do sleep 1; done"
capz-qkpsgb-kubeconfig                 cluster.x-k8s.io/secret   1      1s
# Get kubeconfig and store it locally.
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/kubectl-v1.22.4 get secrets capz-qkpsgb-kubeconfig -o json | jq -r .data.value | base64 --decode > ./kubeconfig
timeout --foreground 600 bash -c "while ! /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/kubectl-v1.22.4 --kubeconfig=./kubeconfig get nodes | grep control-plane; do sleep 1; done"
error: the server doesn't have a resource type "nodes"
capz-qkpsgb-control-plane-qdcbm   NotReady   control-plane   7s    v1.26.0-alpha.0.380+67bde9a1023d18
run "/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/kubectl-v1.22.4 --kubeconfig=./kubeconfig ..." to work with the new target cluster
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
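The two commands above wait for Cluster API to publish the capz-qkpsgb-kubeconfig secret and then decode it into a local kubeconfig. A minimal equivalent using jsonpath instead of jq (cluster name taken from the log; the clusterctl alternative is shown as an aside):

# Extract the workload cluster's kubeconfig from the CAPI-managed secret
# and point kubectl at the new cluster.
kubectl get secret capz-qkpsgb-kubeconfig -o jsonpath='{.data.value}' | base64 --decode > ./kubeconfig
kubectl --kubeconfig=./kubeconfig get nodes
# Alternative: clusterctl get kubeconfig capz-qkpsgb > ./kubeconfig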
Waiting for 1 control plane machine(s), 2 worker machine(s), and  windows machine(s) to become Ready
node/capz-qkpsgb-control-plane-qdcbm condition met
node/capz-qkpsgb-md-0-49d8b condition met
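The "condition met" lines indicate the nodes have reported Ready. One way to express that wait directly with kubectl (node names taken from the log; the timeout value is an assumption):

# Block until the control-plane and worker nodes report Ready=True.
kubectl --kubeconfig=./kubeconfig wait --for=condition=Ready \
  node/capz-qkpsgb-control-plane-qdcbm node/capz-qkpsgb-md-0-49d8b --timeout=600s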
... skipping 63 lines ...
Pre-Provisioned 
  should use a pre-provisioned volume and mount it as readOnly in a pod [file.csi.azure.com] [Windows]
  /home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/pre_provisioning_test.go:77
STEP: Creating a kubernetes client
Sep  6 09:41:34.690: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/kubeconfig
STEP: Building a namespace api object, basename azurefile
Sep  6 09:41:34.971: INFO: Error listing PodSecurityPolicies; assuming PodSecurityPolicy is disabled: the server could not find the requested resource
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
2022/09/06 09:41:35 Check driver pods if restarts ...
check the driver pods if restarts ...
======================================================================================
2022/09/06 09:41:35 Check successfully
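The "Check driver pods if restarts" step verifies that the azurefile CSI driver pods did not restart before the tests run. A hedged equivalent of that check (the label selectors are assumptions based on the driver's standard manifests):

# Print restart counts for the CSI controller and node pods; any non-zero
# count would mean the driver restarted and the check should fail.
kubectl -n kube-system get pods -l app=csi-azurefile-controller \
  -o jsonpath='{range .items[*]}{.metadata.name}{" "}{.status.containerStatuses[*].restartCount}{"\n"}{end}'
kubectl -n kube-system get pods -l app=csi-azurefile-node \
  -o jsonpath='{range .items[*]}{.metadata.name}{" "}{.status.containerStatuses[*].restartCount}{"\n"}{end}'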
... skipping 179 lines ...
Sep  6 09:41:59.229: INFO: PersistentVolumeClaim pvc-hfmpw found but phase is Pending instead of Bound.
Sep  6 09:42:01.263: INFO: PersistentVolumeClaim pvc-hfmpw found and phase=Bound (22.412112606s)
STEP: checking the PVC
STEP: validating provisioned PV
STEP: checking the PV
STEP: deploying the pod
STEP: checking that the pods command exits with no error
Sep  6 09:42:01.366: INFO: Waiting up to 15m0s for pod "azurefile-volume-tester-6mdcl" in namespace "azurefile-5194" to be "Succeeded or Failed"
Sep  6 09:42:01.400: INFO: Pod "azurefile-volume-tester-6mdcl": Phase="Pending", Reason="", readiness=false. Elapsed: 33.469982ms
Sep  6 09:42:03.436: INFO: Pod "azurefile-volume-tester-6mdcl": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06958233s
Sep  6 09:42:05.472: INFO: Pod "azurefile-volume-tester-6mdcl": Phase="Running", Reason="", readiness=true. Elapsed: 4.105628006s
Sep  6 09:42:07.508: INFO: Pod "azurefile-volume-tester-6mdcl": Phase="Running", Reason="", readiness=false. Elapsed: 6.141790044s
Sep  6 09:42:09.544: INFO: Pod "azurefile-volume-tester-6mdcl": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.178030674s
STEP: Saw pod success
Sep  6 09:42:09.544: INFO: Pod "azurefile-volume-tester-6mdcl" satisfied condition "Succeeded or Failed"
Sep  6 09:42:09.544: INFO: deleting Pod "azurefile-5194"/"azurefile-volume-tester-6mdcl"
Sep  6 09:42:09.600: INFO: Pod azurefile-volume-tester-6mdcl has the following logs: hello world

STEP: Deleting pod azurefile-volume-tester-6mdcl in namespace azurefile-5194
Sep  6 09:42:09.643: INFO: deleting PVC "azurefile-5194"/"pvc-hfmpw"
Sep  6 09:42:09.643: INFO: Deleting PersistentVolumeClaim "pvc-hfmpw"
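In this test the framework deploys a short-lived volume-tester pod, waits for it to reach "Succeeded or Failed", and then reads its logs. Roughly the same check from the command line, in the polling style the job itself uses (pod and namespace names taken from the log):

# Poll until the tester pod leaves Pending/Running, then read its logs;
# the expected output recorded above is "hello world".
while kubectl -n azurefile-5194 get pod azurefile-volume-tester-6mdcl \
      -o jsonpath='{.status.phase}' | grep -qE 'Pending|Running'; do sleep 2; done
kubectl -n azurefile-5194 logs azurefile-volume-tester-6mdcl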
... skipping 156 lines ...
Sep  6 09:43:58.768: INFO: PersistentVolumeClaim pvc-77d9g found but phase is Pending instead of Bound.
Sep  6 09:44:00.802: INFO: PersistentVolumeClaim pvc-77d9g found and phase=Bound (22.418180448s)
STEP: checking the PVC
STEP: validating provisioned PV
STEP: checking the PV
STEP: deploying the pod
STEP: checking that the pods command exits with an error
Sep  6 09:44:00.904: INFO: Waiting up to 15m0s for pod "azurefile-volume-tester-krwn5" in namespace "azurefile-156" to be "Error status code"
Sep  6 09:44:00.938: INFO: Pod "azurefile-volume-tester-krwn5": Phase="Pending", Reason="", readiness=false. Elapsed: 33.561161ms
Sep  6 09:44:02.974: INFO: Pod "azurefile-volume-tester-krwn5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069283838s
Sep  6 09:44:05.010: INFO: Pod "azurefile-volume-tester-krwn5": Phase="Failed", Reason="", readiness=false. Elapsed: 4.105179211s
STEP: Saw pod failure
Sep  6 09:44:05.010: INFO: Pod "azurefile-volume-tester-krwn5" satisfied condition "Error status code"
STEP: checking that pod logs contain expected message
Sep  6 09:44:05.048: INFO: deleting Pod "azurefile-156"/"azurefile-volume-tester-krwn5"
Sep  6 09:44:05.086: INFO: Pod azurefile-volume-tester-krwn5 has the following logs: touch: /mnt/test-1/data: Read-only file system

STEP: Deleting pod azurefile-volume-tester-krwn5 in namespace azurefile-156
Sep  6 09:44:05.127: INFO: deleting PVC "azurefile-156"/"pvc-77d9g"
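This is the negative case: the pod writes to a volume mounted readOnly, so it is expected to exit with an error and log the read-only failure shown above. A rough command-line equivalent of the assertion (names taken from the log; the pod is deleted right afterwards, so this is illustrative only):

# The pod should be in phase Failed, and its logs should contain the
# "Read-only file system" error the test asserts on.
kubectl -n azurefile-156 get pod azurefile-volume-tester-krwn5 -o jsonpath='{.status.phase}{"\n"}'
kubectl -n azurefile-156 logs azurefile-volume-tester-krwn5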
... skipping 181 lines ...
Sep  6 09:45:58.680: INFO: PersistentVolumeClaim pvc-xzvfm found but phase is Pending instead of Bound.
Sep  6 09:46:00.714: INFO: PersistentVolumeClaim pvc-xzvfm found and phase=Bound (2.068051761s)
STEP: checking the PVC
STEP: validating provisioned PV
STEP: checking the PV
STEP: deploying the pod
STEP: checking that the pods command exits with no error
Sep  6 09:46:00.818: INFO: Waiting up to 15m0s for pod "azurefile-volume-tester-ncrcc" in namespace "azurefile-2546" to be "Succeeded or Failed"
Sep  6 09:46:00.851: INFO: Pod "azurefile-volume-tester-ncrcc": Phase="Pending", Reason="", readiness=false. Elapsed: 33.280637ms
Sep  6 09:46:02.898: INFO: Pod "azurefile-volume-tester-ncrcc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.08015725s
Sep  6 09:46:04.934: INFO: Pod "azurefile-volume-tester-ncrcc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.116427367s
STEP: Saw pod success
Sep  6 09:46:04.934: INFO: Pod "azurefile-volume-tester-ncrcc" satisfied condition "Succeeded or Failed"
STEP: resizing the pvc
STEP: sleep 30s waiting for resize complete
STEP: checking the resizing result
STEP: checking the resizing PV result
STEP: checking the resizing azurefile result
Sep  6 09:46:35.753: INFO: deleting Pod "azurefile-2546"/"azurefile-volume-tester-ncrcc"
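The resize steps above expand the PVC, wait, and then confirm that the PV and the backing Azure file share grew to match. A PVC expansion of this kind can be requested with a merge patch (the new size is a placeholder, not the value the test uses):

# Ask for a larger size on the claim; the CSI resizer then grows the PV and
# the Azure file share, which is what the "checking the resizing ..." steps verify.
kubectl -n azurefile-2546 patch pvc pvc-xzvfm --type merge \
  -p '{"spec":{"resources":{"requests":{"storage":"15Gi"}}}}'
kubectl -n azurefile-2546 get pvc pvc-xzvfm -o jsonpath='{.status.capacity.storage}{"\n"}'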
... skipping 728 lines ...
I0906 09:37:17.968120       1 tlsconfig.go:200] "Loaded serving cert" certName="Generated self signed cert" certDetail="\"localhost@1662457037\" [serving] validServingFor=[127.0.0.1,127.0.0.1,localhost] issuer=\"localhost-ca@1662457036\" (2022-09-06 08:37:15 +0000 UTC to 2023-09-06 08:37:15 +0000 UTC (now=2022-09-06 09:37:17.968095109 +0000 UTC))"
I0906 09:37:17.968386       1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1662457037\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1662457037\" (2022-09-06 08:37:17 +0000 UTC to 2023-09-06 08:37:17 +0000 UTC (now=2022-09-06 09:37:17.968360809 +0000 UTC))"
I0906 09:37:17.968479       1 secure_serving.go:210] Serving securely on 127.0.0.1:10257
I0906 09:37:17.968818       1 leaderelection.go:248] attempting to acquire leader lease kube-system/kube-controller-manager...
I0906 09:37:17.969255       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0906 09:37:17.969488       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
E0906 09:37:22.969706       1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: Get "https://10.0.0.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=5s": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
I0906 09:37:22.969739       1 leaderelection.go:253] failed to acquire lease kube-system/kube-controller-manager
I0906 09:37:25.664994       1 httplog.go:131] "HTTP" verb="GET" URI="/healthz" latency="65.8µs" userAgent="kube-probe/1.26+" audit-ID="" srcIP="127.0.0.1:48654" resp=200
E0906 09:37:28.760359       1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: leases.coordination.k8s.io "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-system"
I0906 09:37:28.760384       1 leaderelection.go:253] failed to acquire lease kube-system/kube-controller-manager
I0906 09:37:30.234127       1 httplog.go:131] "HTTP" verb="GET" URI="/healthz" latency="119.401µs" userAgent="kube-probe/1.26+" audit-ID="" srcIP="127.0.0.1:48850" resp=200
I0906 09:37:32.133853       1 leaderelection.go:258] successfully acquired lease kube-system/kube-controller-manager
I0906 09:37:32.134272       1 event.go:294] "Event occurred" object="kube-system/kube-controller-manager" fieldPath="" kind="Lease" apiVersion="coordination.k8s.io/v1" type="Normal" reason="LeaderElection" message="capz-qkpsgb-control-plane-qdcbm_24aed8aa-ded6-4857-97f1-a056070f9d7d became leader"
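The controller-manager entries above show leader election: the first attempts fail (the API server is briefly unreachable, then the RBAC binding is not yet in place) before the lease is acquired at 09:37:32. The current holder can be inspected through the coordination API:

# Show who currently holds the kube-controller-manager leader-election lease.
kubectl -n kube-system get lease kube-controller-manager -o jsonpath='{.spec.holderIdentity}{"\n"}'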
W0906 09:37:32.167557       1 plugins.go:131] WARNING: azure built-in cloud provider is now deprecated. The Azure provider is deprecated and will be removed in a future release. Please use https://github.com/kubernetes-sigs/cloud-provider-azure
I0906 09:37:32.168161       1 azure_auth.go:232] Using AzurePublicCloud environment
I0906 09:37:32.168211       1 azure_auth.go:117] azure: using client_id+client_secret to retrieve access token
... skipping 30 lines ...
I0906 09:37:32.169375       1 reflector.go:257] Listing and watching *v1.Node from vendor/k8s.io/client-go/informers/factory.go:134
I0906 09:37:32.169411       1 reflector.go:221] Starting reflector *v1.ServiceAccount (22h44m8.09005074s) from vendor/k8s.io/client-go/informers/factory.go:134
I0906 09:37:32.169419       1 reflector.go:257] Listing and watching *v1.ServiceAccount from vendor/k8s.io/client-go/informers/factory.go:134
I0906 09:37:32.169531       1 shared_informer.go:255] Waiting for caches to sync for tokens
I0906 09:37:32.169582       1 reflector.go:221] Starting reflector *v1.Secret (22h44m8.09005074s) from vendor/k8s.io/client-go/informers/factory.go:134
I0906 09:37:32.169590       1 reflector.go:257] Listing and watching *v1.Secret from vendor/k8s.io/client-go/informers/factory.go:134
W0906 09:37:32.187407       1 azure_config.go:53] Failed to get cloud-config from secret: failed to get secret azure-cloud-provider: secrets "azure-cloud-provider" is forbidden: User "system:serviceaccount:kube-system:azure-cloud-provider" cannot get resource "secrets" in API group "" in the namespace "kube-system", skip initializing from secret
I0906 09:37:32.187432       1 controllermanager.go:573] Starting "csrapproving"
I0906 09:37:32.192797       1 controllermanager.go:602] Started "csrapproving"
I0906 09:37:32.192821       1 controllermanager.go:573] Starting "ttl"
I0906 09:37:32.192958       1 certificate_controller.go:112] Starting certificate controller "csrapproving"
I0906 09:37:32.192974       1 shared_informer.go:255] Waiting for caches to sync for certificate-csrapproving
I0906 09:37:32.197880       1 controllermanager.go:602] Started "ttl"
... skipping 7 lines ...
I0906 09:37:32.222723       1 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
I0906 09:37:32.222736       1 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/rbd"
I0906 09:37:32.222749       1 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/aws-ebs"
I0906 09:37:32.222784       1 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/gce-pd"
I0906 09:37:32.222796       1 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/fc"
I0906 09:37:32.222808       1 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
I0906 09:37:32.222846       1 csi_plugin.go:257] Cast from VolumeHost to KubeletVolumeHost failed. Skipping CSINode initialization, not running on kubelet
I0906 09:37:32.222875       1 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/csi"
I0906 09:37:32.223050       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-qkpsgb-control-plane-qdcbm"
W0906 09:37:32.223330       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="capz-qkpsgb-control-plane-qdcbm" does not exist
I0906 09:37:32.235280       1 controllermanager.go:602] Started "attachdetach"
I0906 09:37:32.235309       1 controllermanager.go:573] Starting "podgc"
I0906 09:37:32.235527       1 attach_detach_controller.go:328] Starting attach detach controller
I0906 09:37:32.235539       1 shared_informer.go:255] Waiting for caches to sync for attach detach
I0906 09:37:32.235768       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-qkpsgb-control-plane-qdcbm"
I0906 09:37:32.248840       1 ttl_controller.go:275] "Changed ttl annotation" node="capz-qkpsgb-control-plane-qdcbm" new_ttl="0s"
... skipping 75 lines ...
I0906 09:37:32.411298       1 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/azure-disk"
I0906 09:37:32.411311       1 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/vsphere-volume"
I0906 09:37:32.411328       1 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
I0906 09:37:32.411342       1 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/rbd"
I0906 09:37:32.411353       1 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/azure-file"
I0906 09:37:32.411365       1 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
I0906 09:37:32.411384       1 csi_plugin.go:257] Cast from VolumeHost to KubeletVolumeHost failed. Skipping CSINode initialization, not running on kubelet
I0906 09:37:32.411393       1 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/csi"
I0906 09:37:32.411458       1 controllermanager.go:602] Started "persistentvolume-binder"
I0906 09:37:32.411471       1 controllermanager.go:573] Starting "endpoint"
I0906 09:37:32.411558       1 pv_controller_base.go:318] Starting persistent volume controller
I0906 09:37:32.411568       1 shared_informer.go:255] Waiting for caches to sync for persistent volume
I0906 09:37:32.480111       1 request.go:614] Waited for 68.578635ms due to client-side throttling, not priority and fairness, request: GET:https://10.0.0.4:6443/api/v1/namespaces/kube-system/serviceaccounts/endpoint-controller
... skipping 496 lines ...
I0906 09:37:37.548970       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:1, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0bde31860b87f55, ext:21729074991, loc:(*time.Location)(0x6f10040)}}
I0906 09:37:37.549159       1 daemon_controller.go:974] Nodes needing daemon pods for daemon set kube-proxy: [capz-qkpsgb-control-plane-qdcbm], creating 1
I0906 09:37:37.549601       1 disruption.go:494] updatePod called on pod "kube-controller-manager-capz-qkpsgb-control-plane-qdcbm"
I0906 09:37:37.549779       1 disruption.go:570] No PodDisruptionBudgets found for pod kube-controller-manager-capz-qkpsgb-control-plane-qdcbm, PodDisruptionBudget controller will avoid syncing.
I0906 09:37:37.549896       1 disruption.go:497] No matching pdb for pod "kube-controller-manager-capz-qkpsgb-control-plane-qdcbm"
I0906 09:37:37.550233       1 deployment_controller.go:585] "Finished syncing deployment" deployment="kube-system/coredns" duration="661.499683ms"
I0906 09:37:37.550381       1 deployment_controller.go:497] "Error syncing deployment" deployment="kube-system/coredns" err="Operation cannot be fulfilled on deployments.apps \"coredns\": the object has been modified; please apply your changes to the latest version and try again"
I0906 09:37:37.550515       1 deployment_controller.go:583] "Started syncing deployment" deployment="kube-system/coredns" startTime="2022-09-06 09:37:37.55049724 +0000 UTC m=+21.730610098"
I0906 09:37:37.551567       1 deployment_util.go:775] Deployment "coredns" timed out (false) [last progress check: 2022-09-06 09:37:37 +0000 UTC - now: 2022-09-06 09:37:37.551559144 +0000 UTC m=+21.731672002]
I0906 09:37:37.556588       1 deployment_controller.go:585] "Finished syncing deployment" deployment="kube-system/coredns" duration="6.077225ms"
I0906 09:37:37.556627       1 deployment_controller.go:583] "Started syncing deployment" deployment="kube-system/coredns" startTime="2022-09-06 09:37:37.556613365 +0000 UTC m=+21.736726223"
I0906 09:37:37.556848       1 deployment_controller.go:183] "Updating deployment" deployment="kube-system/coredns"
I0906 09:37:37.557260       1 deployment_util.go:775] Deployment "coredns" timed out (false) [last progress check: 2022-09-06 09:37:37 +0000 UTC - now: 2022-09-06 09:37:37.557255068 +0000 UTC m=+21.737367926]
... skipping 10 lines ...
I0906 09:37:37.562487       1 daemon_controller.go:1036] Pods to delete for daemon set kube-proxy: [], deleting 0
I0906 09:37:37.562615       1 daemon_controller.go:1119] Updating daemon set status
I0906 09:37:37.563024       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-kb5ft"
I0906 09:37:37.563053       1 disruption.go:570] No PodDisruptionBudgets found for pod kube-proxy-kb5ft, PodDisruptionBudget controller will avoid syncing.
I0906 09:37:37.563076       1 disruption.go:482] No matching pdb for pod "kube-proxy-kb5ft"
I0906 09:37:37.565791       1 deployment_controller.go:585] "Finished syncing deployment" deployment="kube-system/coredns" duration="9.166539ms"
I0906 09:37:37.565984       1 deployment_controller.go:497] "Error syncing deployment" deployment="kube-system/coredns" err="Operation cannot be fulfilled on deployments.apps \"coredns\": the object has been modified; please apply your changes to the latest version and try again"
I0906 09:37:37.566148       1 deployment_controller.go:583] "Started syncing deployment" deployment="kube-system/coredns" startTime="2022-09-06 09:37:37.566107705 +0000 UTC m=+21.746220663"
I0906 09:37:37.567992       1 taint_manager.go:431] "Noticed pod update" pod="kube-system/kube-proxy-kb5ft"
I0906 09:37:37.568011       1 disruption.go:494] updatePod called on pod "kube-proxy-kb5ft"
I0906 09:37:37.568027       1 disruption.go:570] No PodDisruptionBudgets found for pod kube-proxy-kb5ft, PodDisruptionBudget controller will avoid syncing.
I0906 09:37:37.568032       1 disruption.go:497] No matching pdb for pod "kube-proxy-kb5ft"
I0906 09:37:37.568048       1 daemon_controller.go:577] Pod kube-proxy-kb5ft updated.
... skipping 247 lines ...
I0906 09:37:57.346546       1 endpointslice_controller.go:319] Finished syncing service "kube-system/metrics-server" endpoint slices. (111.2µs)
I0906 09:37:57.346576       1 taint_manager.go:431] "Noticed pod update" pod="kube-system/metrics-server-76f7667fbf-8pldm"
I0906 09:37:57.346585       1 disruption.go:479] addPod called on pod "metrics-server-76f7667fbf-8pldm"
I0906 09:37:57.347897       1 disruption.go:570] No PodDisruptionBudgets found for pod metrics-server-76f7667fbf-8pldm, PodDisruptionBudget controller will avoid syncing.
I0906 09:37:57.348037       1 disruption.go:482] No matching pdb for pod "metrics-server-76f7667fbf-8pldm"
I0906 09:37:57.350526       1 deployment_controller.go:585] "Finished syncing deployment" deployment="kube-system/metrics-server" duration="35.665833ms"
I0906 09:37:57.350718       1 deployment_controller.go:497] "Error syncing deployment" deployment="kube-system/metrics-server" err="Operation cannot be fulfilled on deployments.apps \"metrics-server\": the object has been modified; please apply your changes to the latest version and try again"
I0906 09:37:57.350896       1 deployment_controller.go:583] "Started syncing deployment" deployment="kube-system/metrics-server" startTime="2022-09-06 09:37:57.350878677 +0000 UTC m=+41.530991535"
I0906 09:37:57.351538       1 deployment_util.go:775] Deployment "metrics-server" timed out (false) [last progress check: 2022-09-06 09:37:57 +0000 UTC - now: 2022-09-06 09:37:57.351530776 +0000 UTC m=+41.531643734]
I0906 09:37:57.354604       1 deployment_controller.go:288] "ReplicaSet updated" replicaSet="kube-system/metrics-server-76f7667fbf"
I0906 09:37:57.363612       1 replica_set.go:457] Pod metrics-server-76f7667fbf-8pldm updated, objectMeta {Name:metrics-server-76f7667fbf-8pldm GenerateName:metrics-server-76f7667fbf- Namespace:kube-system SelfLink: UID:a47b9df4-423f-4c19-bb0c-0608cca2386d ResourceVersion:409 Generation:0 CreationTimestamp:2022-09-06 09:37:57 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[k8s-app:metrics-server pod-template-hash:76f7667fbf] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:metrics-server-76f7667fbf UID:3520f51c-87b4-4e7d-bdae-573b8918013e Controller:0xc00209c25e BlockOwnerDeletion:0xc00209c25f}] Finalizers:[] ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-09-06 09:37:57 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3520f51c-87b4-4e7d-bdae-573b8918013e\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"metrics-server\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":4443,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}}},"f:readinessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{".":{},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:securityContext":{".":{},"f:readOnlyRootFilesystem":{},"f:runAsNonRoot":{},"f:runAsUser":{}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/tmp\"}":{".":{},"f:mountPath":{},"f:name":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{},"f:volumes":{".":{},"k:{\"name\":\"tmp-dir\"}":{".":{},"f:emptyDir":{},"f:name":{}}}}} Subresource:}]} -> {Name:metrics-server-76f7667fbf-8pldm GenerateName:metrics-server-76f7667fbf- Namespace:kube-system SelfLink: UID:a47b9df4-423f-4c19-bb0c-0608cca2386d ResourceVersion:415 Generation:0 CreationTimestamp:2022-09-06 09:37:57 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[k8s-app:metrics-server pod-template-hash:76f7667fbf] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:metrics-server-76f7667fbf UID:3520f51c-87b4-4e7d-bdae-573b8918013e Controller:0xc00203262e BlockOwnerDeletion:0xc00203262f}] Finalizers:[] ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-09-06 09:37:57 +0000 UTC FieldsType:FieldsV1 
FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3520f51c-87b4-4e7d-bdae-573b8918013e\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"metrics-server\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":4443,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}}},"f:readinessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{".":{},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:securityContext":{".":{},"f:readOnlyRootFilesystem":{},"f:runAsNonRoot":{},"f:runAsUser":{}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/tmp\"}":{".":{},"f:mountPath":{},"f:name":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{},"f:volumes":{".":{},"k:{\"name\":\"tmp-dir\"}":{".":{},"f:emptyDir":{},"f:name":{}}}}} Subresource:} {Manager:kube-scheduler Operation:Update APIVersion:v1 Time:2022-09-06 09:37:57 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{".":{},"k:{\"type\":\"PodScheduled\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} Subresource:status}]}.
I0906 09:37:57.363947       1 pvc_protection_controller.go:380] "Enqueuing PVCs for Pod" pod="kube-system/metrics-server-76f7667fbf-8pldm" podUID=a47b9df4-423f-4c19-bb0c-0608cca2386d
I0906 09:37:57.363988       1 disruption.go:494] updatePod called on pod "metrics-server-76f7667fbf-8pldm"
... skipping 40 lines ...
I0906 09:37:59.092583       1 pvc_protection_controller.go:380] "Enqueuing PVCs for Pod" pod="kube-system/calico-kube-controllers-755ff8d7b5-h5rsm" podUID=32d29c5b-a528-4478-8425-3f0ff666b0e0
I0906 09:37:59.092943       1 controller_utils.go:581] Controller calico-kube-controllers-755ff8d7b5 created pod calico-kube-controllers-755ff8d7b5-h5rsm
I0906 09:37:59.092995       1 replica_set_utils.go:59] Updating status for : kube-system/calico-kube-controllers-755ff8d7b5, replicas 0->0 (need 1), fullyLabeledReplicas 0->0, readyReplicas 0->0, availableReplicas 0->0, sequence No: 0->1
I0906 09:37:59.093316       1 event.go:294] "Event occurred" object="kube-system/calico-kube-controllers-755ff8d7b5" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: calico-kube-controllers-755ff8d7b5-h5rsm"
I0906 09:37:59.096633       1 deployment_controller.go:288] "ReplicaSet updated" replicaSet="kube-system/calico-kube-controllers-755ff8d7b5"
I0906 09:37:59.100459       1 deployment_controller.go:585] "Finished syncing deployment" deployment="kube-system/calico-kube-controllers" duration="36.324327ms"
I0906 09:37:59.100774       1 deployment_controller.go:497] "Error syncing deployment" deployment="kube-system/calico-kube-controllers" err="Operation cannot be fulfilled on deployments.apps \"calico-kube-controllers\": the object has been modified; please apply your changes to the latest version and try again"
I0906 09:37:59.100940       1 deployment_controller.go:583] "Started syncing deployment" deployment="kube-system/calico-kube-controllers" startTime="2022-09-06 09:37:59.100923308 +0000 UTC m=+43.281036266"
I0906 09:37:59.101520       1 deployment_util.go:775] Deployment "calico-kube-controllers" timed out (false) [last progress check: 2022-09-06 09:37:59 +0000 UTC - now: 2022-09-06 09:37:59.101512607 +0000 UTC m=+43.281625465]
I0906 09:37:59.100739       1 replica_set.go:667] Finished syncing ReplicaSet "kube-system/calico-kube-controllers-755ff8d7b5" (18.209763ms)
I0906 09:37:59.102003       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-kube-controllers-755ff8d7b5", timestamp:time.Time{wall:0xc0bde31dc4ee834c, ext:43262853002, loc:(*time.Location)(0x6f10040)}}
I0906 09:37:59.102210       1 replica_set_utils.go:59] Updating status for : kube-system/calico-kube-controllers-755ff8d7b5, replicas 0->1 (need 1), fullyLabeledReplicas 0->1, readyReplicas 0->0, availableReplicas 0->0, sequence No: 1->1
I0906 09:37:59.107154       1 disruption.go:448] add DB "calico-kube-controllers"
... skipping 136 lines ...
I0906 09:38:06.880260       1 reflector.go:221] Starting reflector *v1.PartialObjectMetadata (14h58m26.416501114s) from vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90
I0906 09:38:06.884191       1 reflector.go:257] Listing and watching *v1.PartialObjectMetadata from vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90
I0906 09:38:06.894323       1 node_lifecycle_controller.go:914] Node capz-qkpsgb-control-plane-qdcbm is NotReady as of 2022-09-06 09:38:06.894306622 +0000 UTC m=+51.074419480. Adding it to the Taint queue.
I0906 09:38:06.980236       1 shared_informer.go:285] caches populated
I0906 09:38:06.980259       1 shared_informer.go:262] Caches are synced for resource quota
I0906 09:38:06.980268       1 resource_quota_controller.go:462] synced quota controller
W0906 09:38:07.211324       1 garbagecollector.go:752] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
I0906 09:38:07.211651       1 garbagecollector.go:220] syncing garbage collector with updated resources from discovery (attempt 1): added: [crd.projectcalico.org/v1, Resource=bgpconfigurations crd.projectcalico.org/v1, Resource=bgppeers crd.projectcalico.org/v1, Resource=blockaffinities crd.projectcalico.org/v1, Resource=caliconodestatuses crd.projectcalico.org/v1, Resource=clusterinformations crd.projectcalico.org/v1, Resource=felixconfigurations crd.projectcalico.org/v1, Resource=globalnetworkpolicies crd.projectcalico.org/v1, Resource=globalnetworksets crd.projectcalico.org/v1, Resource=hostendpoints crd.projectcalico.org/v1, Resource=ipamblocks crd.projectcalico.org/v1, Resource=ipamconfigs crd.projectcalico.org/v1, Resource=ipamhandles crd.projectcalico.org/v1, Resource=ippools crd.projectcalico.org/v1, Resource=ipreservations crd.projectcalico.org/v1, Resource=kubecontrollersconfigurations crd.projectcalico.org/v1, Resource=networkpolicies crd.projectcalico.org/v1, Resource=networksets], removed: []
I0906 09:38:07.211665       1 garbagecollector.go:226] reset restmapper
E0906 09:38:07.220461       1 memcache.go:206] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0906 09:38:07.221262       1 memcache.go:104] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0906 09:38:07.222907       1 graph_builder.go:176] using a shared informer for resource "crd.projectcalico.org/v1, Resource=kubecontrollersconfigurations", kind "crd.projectcalico.org/v1, Kind=KubeControllersConfiguration"
I0906 09:38:07.222986       1 graph_builder.go:176] using a shared informer for resource "crd.projectcalico.org/v1, Resource=ipamconfigs", kind "crd.projectcalico.org/v1, Kind=IPAMConfig"
... skipping 164 lines ...
I0906 09:38:14.539311       1 replica_set.go:667] Finished syncing ReplicaSet "kube-system/calico-kube-controllers-755ff8d7b5" (121.799µs)
I0906 09:38:15.562751       1 tokencleaner.go:166] Finished syncing secret "kube-system/bootstrap-token-awkk0y" (14.7µs)
I0906 09:38:15.678250       1 httplog.go:131] "HTTP" verb="GET" URI="/healthz" latency="117.999µs" userAgent="kube-probe/1.26+" audit-ID="" srcIP="127.0.0.1:44692" resp=200
I0906 09:38:15.702824       1 tokencleaner.go:166] Finished syncing secret "kube-system/bootstrap-token-5yu4n1" (14.2µs)
I0906 09:38:16.870023       1 gc_controller.go:221] GC'ing orphaned
I0906 09:38:16.870047       1 gc_controller.go:290] GC'ing unscheduled pods which are terminating.
I0906 09:38:16.896552       1 node_lifecycle_controller.go:1084] ReadyCondition for Node capz-qkpsgb-control-plane-qdcbm transitioned from &NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:2022-09-06 09:37:33 +0000 UTC,LastTransitionTime:2022-09-06 09:37:03 +0000 UTC,Reason:KubeletNotReady,Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized,} to &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-09-06 09:38:14 +0000 UTC,LastTransitionTime:2022-09-06 09:38:14 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,}
I0906 09:38:16.896657       1 node_lifecycle_controller.go:1092] Node capz-qkpsgb-control-plane-qdcbm ReadyCondition updated. Updating timestamp.
I0906 09:38:16.896684       1 node_lifecycle_controller.go:938] Node capz-qkpsgb-control-plane-qdcbm is healthy again, removing all taints
I0906 09:38:16.896701       1 node_lifecycle_controller.go:1236] Controller detected that some Nodes are Ready. Exiting master disruption mode.
I0906 09:38:18.423574       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-qkpsgb-control-plane-qdcbm"
I0906 09:38:18.460374       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-qkpsgb-control-plane-qdcbm"
I0906 09:38:18.522871       1 disruption.go:494] updatePod called on pod "calico-node-fgkvr"
... skipping 175 lines ...
I0906 09:38:37.617012       1 deployment_controller.go:583] "Started syncing deployment" deployment="kube-system/calico-kube-controllers" startTime="2022-09-06 09:38:37.616940627 +0000 UTC m=+81.797053585"
I0906 09:38:37.617077       1 replica_set.go:667] Finished syncing ReplicaSet "kube-system/calico-kube-controllers-755ff8d7b5" (278.703µs)
I0906 09:38:37.621541       1 deployment_controller.go:183] "Updating deployment" deployment="kube-system/calico-kube-controllers"
I0906 09:38:37.621873       1 deployment_controller.go:585] "Finished syncing deployment" deployment="kube-system/calico-kube-controllers" duration="4.919945ms"
I0906 09:38:37.622009       1 deployment_controller.go:583] "Started syncing deployment" deployment="kube-system/calico-kube-controllers" startTime="2022-09-06 09:38:37.621972573 +0000 UTC m=+81.802085531"
I0906 09:38:37.622471       1 deployment_controller.go:585] "Finished syncing deployment" deployment="kube-system/calico-kube-controllers" duration="487.604µs"
W0906 09:38:37.641012       1 garbagecollector.go:752] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
I0906 09:38:44.410463       1 tokencleaner.go:166] Finished syncing secret "kube-system/bootstrap-token-awkk0y" (11.5µs)
I0906 09:38:44.648511       1 tokencleaner.go:166] Finished syncing secret "kube-system/bootstrap-token-5yu4n1" (13.4µs)
I0906 09:38:45.672964       1 httplog.go:131] "HTTP" verb="GET" URI="/healthz" latency="96.801µs" userAgent="kube-probe/1.26+" audit-ID="" srcIP="127.0.0.1:49082" resp=200
I0906 09:38:51.671209       1 reflector.go:281] vendor/k8s.io/client-go/informers/factory.go:134: forcing resync
I0906 09:38:51.814787       1 pv_controller_base.go:612] resyncing PV controller
I0906 09:38:54.879250       1 replica_set.go:457] Pod metrics-server-76f7667fbf-8pldm updated, objectMeta {Name:metrics-server-76f7667fbf-8pldm GenerateName:metrics-server-76f7667fbf- Namespace:kube-system SelfLink: UID:a47b9df4-423f-4c19-bb0c-0608cca2386d ResourceVersion:634 Generation:0 CreationTimestamp:2022-09-06 09:37:57 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[k8s-app:metrics-server pod-template-hash:76f7667fbf] Annotations:map[cni.projectcalico.org/containerID:866c55943175f0f795bafca186efd3fc944a9296ccaaf26322769c4bd02491b4 cni.projectcalico.org/podIP:192.168.70.195/32 cni.projectcalico.org/podIPs:192.168.70.195/32] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:metrics-server-76f7667fbf UID:3520f51c-87b4-4e7d-bdae-573b8918013e Controller:0xc00229c817 BlockOwnerDeletion:0xc00229c818}] Finalizers:[] ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-09-06 09:37:57 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3520f51c-87b4-4e7d-bdae-573b8918013e\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"metrics-server\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":4443,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}}},"f:readinessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{".":{},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:securityContext":{".":{},"f:readOnlyRootFilesystem":{},"f:runAsNonRoot":{},"f:runAsUser":{}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/tmp\"}":{".":{},"f:mountPath":{},"f:name":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{},"f:volumes":{".":{},"k:{\"name\":\"tmp-dir\"}":{".":{},"f:emptyDir":{},"f:name":{}}}}} Subresource:} {Manager:kube-scheduler Operation:Update APIVersion:v1 Time:2022-09-06 09:37:57 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{".":{},"k:{\"type\":\"PodScheduled\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} Subresource:status} {Manager:Go-http-client Operation:Update APIVersion:v1 Time:2022-09-06 09:38:26 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} Subresource:status} {Manager:kubelet Operation:Update APIVersion:v1 Time:2022-09-06 09:38:31 +0000 UTC FieldsType:FieldsV1 
FieldsV1:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.70.195\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} Subresource:status}]} -> {Name:metrics-server-76f7667fbf-8pldm GenerateName:metrics-server-76f7667fbf- Namespace:kube-system SelfLink: UID:a47b9df4-423f-4c19-bb0c-0608cca2386d ResourceVersion:687 Generation:0 CreationTimestamp:2022-09-06 09:37:57 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[k8s-app:metrics-server pod-template-hash:76f7667fbf] Annotations:map[cni.projectcalico.org/containerID:866c55943175f0f795bafca186efd3fc944a9296ccaaf26322769c4bd02491b4 cni.projectcalico.org/podIP:192.168.70.195/32 cni.projectcalico.org/podIPs:192.168.70.195/32] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:metrics-server-76f7667fbf UID:3520f51c-87b4-4e7d-bdae-573b8918013e Controller:0xc00271455e BlockOwnerDeletion:0xc00271455f}] Finalizers:[] ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-09-06 09:37:57 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3520f51c-87b4-4e7d-bdae-573b8918013e\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"metrics-server\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":4443,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}}},"f:readinessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{".":{},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:securityContext":{".":{},"f:readOnlyRootFilesystem":{},"f:runAsNonRoot":{},"f:runAsUser":{}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/tmp\"}":{".":{},"f:mountPath":{},"f:name":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{},"f:volumes":{".":{},"k:{\"name\":\"tmp-dir\"}":{".":{},"f:emptyDir":{},"f:name":{}}}}} Subresource:} {Manager:kube-scheduler Operation:Update APIVersion:v1 Time:2022-09-06 09:37:57 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{".":{},"k:{\"type\":\"PodScheduled\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} Subresource:status} {Manager:Go-http-client Operation:Update APIVersion:v1 Time:2022-09-06 09:38:26 +0000 UTC FieldsType:FieldsV1 
FieldsV1:{"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} Subresource:status} {Manager:kubelet Operation:Update APIVersion:v1 Time:2022-09-06 09:38:54 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.70.195\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} Subresource:status}]}.
... skipping 91 lines ...
I0906 09:39:09.792869       1 controller.go:690] Syncing backends for all LB services.
I0906 09:39:09.793040       1 controller.go:728] Running updateLoadBalancerHosts(len(services)==0, workers==1)
I0906 09:39:09.793156       1 controller.go:753] Finished updateLoadBalancerHosts
I0906 09:39:09.793256       1 controller.go:694] Successfully updated 0 out of 0 load balancers to direct traffic to the updated set of nodes
I0906 09:39:09.793377       1 controller.go:686] It took 0.0005127 seconds to finish syncNodes
I0906 09:39:09.793505       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-qkpsgb-md-0-xbzpj"
W0906 09:39:09.793606       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="capz-qkpsgb-md-0-xbzpj" does not exist
I0906 09:39:09.793719       1 topologycache.go:183] Ignoring node capz-qkpsgb-md-0-xbzpj because it is not ready: [{MemoryPressure False 2022-09-06 09:39:09 +0000 UTC 2022-09-06 09:39:09 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2022-09-06 09:39:09 +0000 UTC 2022-09-06 09:39:09 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2022-09-06 09:39:09 +0000 UTC 2022-09-06 09:39:09 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready False 2022-09-06 09:39:09 +0000 UTC 2022-09-06 09:39:09 +0000 UTC KubeletNotReady [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful, container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized, CSINode is not yet initialized]}]
I0906 09:39:09.793870       1 topologycache.go:179] Ignoring node capz-qkpsgb-control-plane-qdcbm because it has an excluded label
I0906 09:39:09.793968       1 topologycache.go:215] Insufficient node info for topology hints (0 zones, %!s(int64=0) CPU, true)
I0906 09:39:09.793935       1 taint_manager.go:466] "Noticed node update" node={nodeName:capz-qkpsgb-md-0-xbzpj}
I0906 09:39:09.794149       1 taint_manager.go:471] "Updating known taints on node" node="capz-qkpsgb-md-0-xbzpj" taints=[]
I0906 09:39:09.795419       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc0bde3257203e8fd, ext:74019229911, loc:(*time.Location)(0x6f10040)}}
I0906 09:39:09.797020       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:1, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc0bde32f6f817489, ext:113977126087, loc:(*time.Location)(0x6f10040)}}
... skipping 131 lines ...
I0906 09:39:15.300808       1 controller.go:690] Syncing backends for all LB services.
I0906 09:39:15.300873       1 controller.go:728] Running updateLoadBalancerHosts(len(services)==0, workers==1)
I0906 09:39:15.300901       1 controller.go:753] Finished updateLoadBalancerHosts
I0906 09:39:15.300921       1 controller.go:694] Successfully updated 0 out of 0 load balancers to direct traffic to the updated set of nodes
I0906 09:39:15.300945       1 controller.go:686] It took 0.000138 seconds to finish syncNodes
I0906 09:39:15.300808       1 topologycache.go:179] Ignoring node capz-qkpsgb-control-plane-qdcbm because it has an excluded label
I0906 09:39:15.300997       1 topologycache.go:183] Ignoring node capz-qkpsgb-md-0-xbzpj because it is not ready: [{MemoryPressure False 2022-09-06 09:39:09 +0000 UTC 2022-09-06 09:39:09 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2022-09-06 09:39:09 +0000 UTC 2022-09-06 09:39:09 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2022-09-06 09:39:09 +0000 UTC 2022-09-06 09:39:09 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready False 2022-09-06 09:39:09 +0000 UTC 2022-09-06 09:39:09 +0000 UTC KubeletNotReady [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful, container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized, CSINode is not yet initialized]}]
I0906 09:39:15.301049       1 topologycache.go:183] Ignoring node capz-qkpsgb-md-0-49d8b because it is not ready: [{MemoryPressure False 2022-09-06 09:39:15 +0000 UTC 2022-09-06 09:39:15 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2022-09-06 09:39:15 +0000 UTC 2022-09-06 09:39:15 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2022-09-06 09:39:15 +0000 UTC 2022-09-06 09:39:15 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready False 2022-09-06 09:39:15 +0000 UTC 2022-09-06 09:39:15 +0000 UTC KubeletNotReady [container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized, failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "capz-qkpsgb-md-0-49d8b" not found]}]
I0906 09:39:15.301092       1 topologycache.go:215] Insufficient node info for topology hints (0 zones, %!s(int64=0) CPU, true)
I0906 09:39:15.301161       1 taint_manager.go:466] "Noticed node update" node={nodeName:capz-qkpsgb-md-0-49d8b}
I0906 09:39:15.301192       1 taint_manager.go:471] "Updating known taints on node" node="capz-qkpsgb-md-0-49d8b" taints=[]
I0906 09:39:15.302088       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-qkpsgb-md-0-49d8b"
W0906 09:39:15.302113       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="capz-qkpsgb-md-0-49d8b" does not exist
I0906 09:39:15.302543       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc0bde3301d3f10f4, ext:116670785230, loc:(*time.Location)(0x6f10040)}}
I0906 09:39:15.302674       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:1, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc0bde330d20a3b87, ext:119482773345, loc:(*time.Location)(0x6f10040)}}
I0906 09:39:15.302572       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0bde32fe8b61145, ext:115863133571, loc:(*time.Location)(0x6f10040)}}
I0906 09:39:15.305573       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:1, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0bde330d23698f4, ext:119485680946, loc:(*time.Location)(0x6f10040)}}
I0906 09:39:15.305675       1 daemon_controller.go:974] Nodes needing daemon pods for daemon set kube-proxy: [capz-qkpsgb-md-0-49d8b], creating 1
I0906 09:39:15.305480       1 daemon_controller.go:974] Nodes needing daemon pods for daemon set calico-node: [capz-qkpsgb-md-0-49d8b], creating 1
... skipping 333 lines ...
I0906 09:39:40.388569       1 azure_vmss.go:370] Can not extract scale set name from providerID (azure:///subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-qkpsgb/providers/Microsoft.Compute/virtualMachines/capz-qkpsgb-md-0-49d8b), assuming it is managed by availability set: not a vmss instance
I0906 09:39:40.388681       1 azure_vmss.go:370] Can not extract scale set name from providerID (azure:///subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-qkpsgb/providers/Microsoft.Compute/virtualMachines/capz-qkpsgb-md-0-49d8b), assuming it is managed by availability set: not a vmss instance
I0906 09:39:40.388800       1 azure_instances.go:240] InstanceShutdownByProviderID gets power status "running" for node "capz-qkpsgb-md-0-49d8b"
I0906 09:39:40.388819       1 azure_instances.go:251] InstanceShutdownByProviderID gets provisioning state "Succeeded" for node "capz-qkpsgb-md-0-49d8b"
I0906 09:39:40.526896       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-qkpsgb-md-0-xbzpj"
I0906 09:39:40.528041       1 controller_utils.go:205] "Added taint to node" taint=[] node="capz-qkpsgb-md-0-xbzpj"
I0906 09:39:40.528156       1 topologycache.go:183] Ignoring node capz-qkpsgb-md-0-49d8b because it is not ready: [{MemoryPressure False 2022-09-06 09:39:35 +0000 UTC 2022-09-06 09:39:15 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2022-09-06 09:39:35 +0000 UTC 2022-09-06 09:39:15 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2022-09-06 09:39:35 +0000 UTC 2022-09-06 09:39:15 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready False 2022-09-06 09:39:35 +0000 UTC 2022-09-06 09:39:15 +0000 UTC KubeletNotReady container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized}]
I0906 09:39:40.528945       1 topologycache.go:179] Ignoring node capz-qkpsgb-control-plane-qdcbm because it has an excluded label
I0906 09:39:40.529129       1 topologycache.go:215] Insufficient node info for topology hints (1 zones, %!s(int64=2000) CPU, true)
I0906 09:39:40.528302       1 controller.go:690] Syncing backends for all LB services.
I0906 09:39:40.529319       1 controller.go:728] Running updateLoadBalancerHosts(len(services)==0, workers==1)
I0906 09:39:40.529337       1 controller.go:753] Finished updateLoadBalancerHosts
I0906 09:39:40.529342       1 controller.go:694] Successfully updated 0 out of 0 load balancers to direct traffic to the updated set of nodes
... skipping 12 lines ...
I0906 09:39:41.152940       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc0bde337491b6ebf, ext:145332905625, loc:(*time.Location)(0x6f10040)}}
I0906 09:39:41.153060       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc0bde337491f1f47, ext:145333147425, loc:(*time.Location)(0x6f10040)}}
I0906 09:39:41.153079       1 daemon_controller.go:974] Nodes needing daemon pods for daemon set calico-node: [], creating 0
I0906 09:39:41.153145       1 daemon_controller.go:1036] Pods to delete for daemon set calico-node: [], deleting 0
I0906 09:39:41.153172       1 daemon_controller.go:1119] Updating daemon set status
I0906 09:39:41.153292       1 daemon_controller.go:1179] Finished syncing daemon set "kube-system/calico-node" (2.219199ms)
I0906 09:39:41.909495       1 node_lifecycle_controller.go:1084] ReadyCondition for Node capz-qkpsgb-md-0-xbzpj transitioned from &NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:2022-09-06 09:39:20 +0000 UTC,LastTransitionTime:2022-09-06 09:39:09 +0000 UTC,Reason:KubeletNotReady,Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized,} to &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-09-06 09:39:40 +0000 UTC,LastTransitionTime:2022-09-06 09:39:40 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,}
I0906 09:39:41.909564       1 node_lifecycle_controller.go:1092] Node capz-qkpsgb-md-0-xbzpj ReadyCondition updated. Updating timestamp.
I0906 09:39:41.918779       1 node_lifecycle_controller.go:938] Node capz-qkpsgb-md-0-xbzpj is healthy again, removing all taints
I0906 09:39:41.918925       1 node_lifecycle_controller.go:1259] Controller detected that zone canadacentral::0 is now in state Normal.
I0906 09:39:41.918988       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-qkpsgb-md-0-xbzpj"
I0906 09:39:41.919124       1 taint_manager.go:466] "Noticed node update" node={nodeName:capz-qkpsgb-md-0-xbzpj}
I0906 09:39:41.919143       1 taint_manager.go:471] "Updating known taints on node" node="capz-qkpsgb-md-0-xbzpj" taints=[]
... skipping 28 lines ...
I0906 09:39:46.776619       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc0bde338ae42a789, ext:150956233059, loc:(*time.Location)(0x6f10040)}}
I0906 09:39:46.776860       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc0bde338ae4de225, ext:150956968959, loc:(*time.Location)(0x6f10040)}}
I0906 09:39:46.777001       1 daemon_controller.go:974] Nodes needing daemon pods for daemon set calico-node: [], creating 0
I0906 09:39:46.777151       1 daemon_controller.go:1036] Pods to delete for daemon set calico-node: [], deleting 0
I0906 09:39:46.777304       1 daemon_controller.go:1119] Updating daemon set status
I0906 09:39:46.777493       1 daemon_controller.go:1179] Finished syncing daemon set "kube-system/calico-node" (3.0191ms)
I0906 09:39:46.919194       1 node_lifecycle_controller.go:1084] ReadyCondition for Node capz-qkpsgb-md-0-49d8b transitioned from &NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:2022-09-06 09:39:35 +0000 UTC,LastTransitionTime:2022-09-06 09:39:15 +0000 UTC,Reason:KubeletNotReady,Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized,} to &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-09-06 09:39:45 +0000 UTC,LastTransitionTime:2022-09-06 09:39:45 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,}
I0906 09:39:46.919501       1 node_lifecycle_controller.go:1092] Node capz-qkpsgb-md-0-49d8b ReadyCondition updated. Updating timestamp.
I0906 09:39:46.949547       1 node_lifecycle_controller.go:938] Node capz-qkpsgb-md-0-49d8b is healthy again, removing all taints
I0906 09:39:46.951166       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-qkpsgb-md-0-49d8b"
I0906 09:39:46.951532       1 taint_manager.go:466] "Noticed node update" node={nodeName:capz-qkpsgb-md-0-49d8b}
I0906 09:39:46.951667       1 taint_manager.go:471] "Updating known taints on node" node="capz-qkpsgb-md-0-49d8b" taints=[]
I0906 09:39:46.951744       1 taint_manager.go:492] "All taints were removed from the node. Cancelling all evictions..." node="capz-qkpsgb-md-0-49d8b"
... skipping 6 lines ...
I0906 09:39:49.816950       1 replica_set.go:577] "Too few replicas" replicaSet="kube-system/csi-azurefile-controller-7847f46f86" need=2 creating=2
I0906 09:39:49.817234       1 deployment_controller.go:222] "ReplicaSet added" replicaSet="kube-system/csi-azurefile-controller-7847f46f86"
I0906 09:39:49.817719       1 event.go:294] "Event occurred" object="kube-system/csi-azurefile-controller" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set csi-azurefile-controller-7847f46f86 to 2"
I0906 09:39:49.837402       1 deployment_controller.go:183] "Updating deployment" deployment="kube-system/csi-azurefile-controller"
I0906 09:39:49.837806       1 deployment_util.go:775] Deployment "csi-azurefile-controller" timed out (false) [last progress check: 2022-09-06 09:39:49.817472675 +0000 UTC m=+153.997585533 - now: 2022-09-06 09:39:49.837796765 +0000 UTC m=+154.017909723]
I0906 09:39:49.853353       1 controller_utils.go:581] Controller csi-azurefile-controller-7847f46f86 created pod csi-azurefile-controller-7847f46f86-9rss8
I0906 09:39:49.854445       1 replica_set.go:394] Pod csi-azurefile-controller-7847f46f86-9rss8 created: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"csi-azurefile-controller-7847f46f86-9rss8", GenerateName:"csi-azurefile-controller-7847f46f86-", Namespace:"kube-system", SelfLink:"", UID:"a9f41868-f0ec-404e-b3bd-b4bac0798d70", ResourceVersion:"918", Generation:0, CreationTimestamp:time.Date(2022, time.September, 6, 9, 39, 49, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"csi-azurefile-controller", "pod-template-hash":"7847f46f86"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"apps/v1", Kind:"ReplicaSet", Name:"csi-azurefile-controller-7847f46f86", UID:"b7ae0b9c-9700-44e2-8a6e-9e1a156c9241", Controller:(*bool)(0xc00201bcf7), BlockOwnerDeletion:(*bool)(0xc00201bcf8)}}, Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.September, 6, 9, 39, 49, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001f16fd8), Subresource:""}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"socket-dir", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(0xc001f16ff0), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"azure-cred", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001f17008), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), 
VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"kube-api-access-r2ffc", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(0xc000cfc2a0), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"csi-provisioner", Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-provisioner:v3.2.0", Command:[]string(nil), Args:[]string{"-v=2", "--csi-address=$(ADDRESS)", "--leader-election", "--leader-election-namespace=kube-system", "--timeout=300s", "--extra-create-metadata=true"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:524288000, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-r2ffc", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", 
SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"csi-attacher", Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-attacher:v3.5.0", Command:[]string(nil), Args:[]string{"-v=2", "-csi-address=$(ADDRESS)", "-timeout=120s", "--leader-election", "--leader-election-namespace=kube-system"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:524288000, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-r2ffc", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"csi-snapshotter", Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-snapshotter:v5.0.1", Command:[]string(nil), Args:[]string{"-v=2", "-csi-address=$(ADDRESS)", "--leader-election", "--leader-election-namespace=kube-system"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:104857600, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-r2ffc", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"csi-resizer", 
Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-resizer:v1.5.0", Command:[]string(nil), Args:[]string{"-csi-address=$(ADDRESS)", "-v=2", "--leader-election", "--leader-election-namespace=kube-system", "-handle-volume-inuse-error=false", "-feature-gates=RecoverVolumeExpansionFailure=true", "-timeout=120s"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:524288000, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-r2ffc", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"liveness-probe", Image:"mcr.microsoft.com/oss/kubernetes-csi/livenessprobe:v2.7.0", Command:[]string(nil), Args:[]string{"--csi-address=/csi/csi.sock", "--probe-timeout=3s", "--health-port=29612", "--v=2"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:104857600, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-r2ffc", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"azurefile", Image:"mcr.microsoft.com/k8s/csi/azurefile-csi:latest", Command:[]string(nil), Args:[]string{"--v=5", "--endpoint=$(CSI_ENDPOINT)", 
"--metrics-address=0.0.0.0:29614", "--user-agent-suffix=OSS-kubectl"}, WorkingDir:"", Ports:[]v1.ContainerPort{v1.ContainerPort{Name:"healthz", HostPort:29612, ContainerPort:29612, Protocol:"TCP", HostIP:""}, v1.ContainerPort{Name:"metrics", HostPort:29614, ContainerPort:29614, Protocol:"TCP", HostIP:""}}, EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"AZURE_CREDENTIAL_FILE", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc000cfc920)}, v1.EnvVar{Name:"CSI_ENDPOINT", Value:"unix:///csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:209715200, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"azure-cred", ReadOnly:false, MountPath:"/etc/kubernetes/", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-r2ffc", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(0xc0024af340), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00226c0e0), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"csi-azurefile-controller-sa", DeprecatedServiceAccount:"csi-azurefile-controller-sa", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000b8e620), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node-role.kubernetes.io/master", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node-role.kubernetes.io/controlplane", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node-role.kubernetes.io/control-plane", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00226c150)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00226c170)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-cluster-critical", Priority:(*int32)(0xc00226c178), 
DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00226c17c), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc002502c80), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil), OS:(*v1.PodOS)(nil), HostUsers:(*bool)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition(nil), Message:"", Reason:"", NominatedNodeName:"", HostIP:"", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:<nil>, InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus(nil), QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}.
I0906 09:39:49.855921       1 controller_utils.go:240] Lowered expectations &controller.ControlleeExpectations{add:1, del:0, key:"kube-system/csi-azurefile-controller-7847f46f86", timestamp:time.Time{wall:0xc0bde33970b0bc8f, ext:153997001933, loc:(*time.Location)(0x6f10040)}}
I0906 09:39:49.855014       1 taint_manager.go:431] "Noticed pod update" pod="kube-system/csi-azurefile-controller-7847f46f86-9rss8"
I0906 09:39:49.855025       1 disruption.go:479] addPod called on pod "csi-azurefile-controller-7847f46f86-9rss8"
I0906 09:39:49.856662       1 disruption.go:570] No PodDisruptionBudgets found for pod csi-azurefile-controller-7847f46f86-9rss8, PodDisruptionBudget controller will avoid syncing.
I0906 09:39:49.855061       1 pvc_protection_controller.go:380] "Enqueuing PVCs for Pod" pod="kube-system/csi-azurefile-controller-7847f46f86-9rss8" podUID=a9f41868-f0ec-404e-b3bd-b4bac0798d70
I0906 09:39:49.855891       1 event.go:294] "Event occurred" object="kube-system/csi-azurefile-controller-7847f46f86" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: csi-azurefile-controller-7847f46f86-9rss8"
I0906 09:39:49.856900       1 disruption.go:482] No matching pdb for pod "csi-azurefile-controller-7847f46f86-9rss8"
I0906 09:39:49.865237       1 deployment_controller.go:585] "Finished syncing deployment" deployment="kube-system/csi-azurefile-controller" duration="53.071475ms"
I0906 09:39:49.865454       1 deployment_controller.go:497] "Error syncing deployment" deployment="kube-system/csi-azurefile-controller" err="Operation cannot be fulfilled on deployments.apps \"csi-azurefile-controller\": the object has been modified; please apply your changes to the latest version and try again"
I0906 09:39:49.865625       1 deployment_controller.go:583] "Started syncing deployment" deployment="kube-system/csi-azurefile-controller" startTime="2022-09-06 09:39:49.865606951 +0000 UTC m=+154.045719809"
I0906 09:39:49.867214       1 replica_set.go:457] Pod csi-azurefile-controller-7847f46f86-9rss8 updated, objectMeta {Name:csi-azurefile-controller-7847f46f86-9rss8 GenerateName:csi-azurefile-controller-7847f46f86- Namespace:kube-system SelfLink: UID:a9f41868-f0ec-404e-b3bd-b4bac0798d70 ResourceVersion:918 Generation:0 CreationTimestamp:2022-09-06 09:39:49 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[app:csi-azurefile-controller pod-template-hash:7847f46f86] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:csi-azurefile-controller-7847f46f86 UID:b7ae0b9c-9700-44e2-8a6e-9e1a156c9241 Controller:0xc00201bcf7 BlockOwnerDeletion:0xc00201bcf8}] Finalizers:[] ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-09-06 09:39:49 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b7ae0b9c-9700-44e2-8a6e-9e1a156c9241\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"azurefile\"}":{".":{},"f:args":{},"f:env":{".":{},"k:{\"name\":\"AZURE_CREDENTIAL_FILE\"}":{".":{},"f:name":{},"f:valueFrom":{".":{},"f:configMapKeyRef":{}}},"k:{\"name\":\"CSI_ENDPOINT\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":29612,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:hostPort":{},"f:name":{},"f:protocol":{}},"k:{\"containerPort\":29614,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:hostPort":{},"f:name":{},"f:protocol":{}}},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}},"k:{\"mountPath\":\"/etc/kubernetes/\"}":{".":{},"f:mountPath":{},"f:name":{}}}},"k:{\"name\":\"csi-attacher\"}":{".":{},"f:args":{},"f:env":{".":{},"k:{\"name\":\"ADDRESS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}}}},"k:{\"name\":\"csi-provisioner\"}":{".":{},"f:args":{},"f:env":{".":{},"k:{\"name\":\"ADDRESS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}}}},"k:{\"name\":\"csi-resizer\"}":{".":{},"f:args":{},"f:env":{".":{},"k:{\"name\":\"ADDRESS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}}}},"k:{\"name\":\"csi-snapshotter\"}":{".":{},"f:args":{},"f:env":{".":{},
"k:{\"name\":\"ADDRESS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}}}},"k:{\"name\":\"liveness-probe\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:hostNetwork":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{},"f:volumes":{".":{},"k:{\"name\":\"azure-cred\"}":{".":{},"f:hostPath":{".":{},"f:path":{},"f:type":{}},"f:name":{}},"k:{\"name\":\"socket-dir\"}":{".":{},"f:emptyDir":{},"f:name":{}}}}} Subresource:}]} -> {Name:csi-azurefile-controller-7847f46f86-9rss8 GenerateName:csi-azurefile-controller-7847f46f86- Namespace:kube-system SelfLink: UID:a9f41868-f0ec-404e-b3bd-b4bac0798d70 ResourceVersion:919 Generation:0 CreationTimestamp:2022-09-06 09:39:49 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[app:csi-azurefile-controller pod-template-hash:7847f46f86] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:csi-azurefile-controller-7847f46f86 UID:b7ae0b9c-9700-44e2-8a6e-9e1a156c9241 Controller:0xc00226d687 BlockOwnerDeletion:0xc00226d688}] Finalizers:[] ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-09-06 09:39:49 +0000 UTC FieldsType:FieldsV1 
FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b7ae0b9c-9700-44e2-8a6e-9e1a156c9241\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"azurefile\"}":{".":{},"f:args":{},"f:env":{".":{},"k:{\"name\":\"AZURE_CREDENTIAL_FILE\"}":{".":{},"f:name":{},"f:valueFrom":{".":{},"f:configMapKeyRef":{}}},"k:{\"name\":\"CSI_ENDPOINT\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":29612,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:hostPort":{},"f:name":{},"f:protocol":{}},"k:{\"containerPort\":29614,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:hostPort":{},"f:name":{},"f:protocol":{}}},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}},"k:{\"mountPath\":\"/etc/kubernetes/\"}":{".":{},"f:mountPath":{},"f:name":{}}}},"k:{\"name\":\"csi-attacher\"}":{".":{},"f:args":{},"f:env":{".":{},"k:{\"name\":\"ADDRESS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}}}},"k:{\"name\":\"csi-provisioner\"}":{".":{},"f:args":{},"f:env":{".":{},"k:{\"name\":\"ADDRESS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}}}},"k:{\"name\":\"csi-resizer\"}":{".":{},"f:args":{},"f:env":{".":{},"k:{\"name\":\"ADDRESS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}}}},"k:{\"name\":\"csi-snapshotter\"}":{".":{},"f:args":{},"f:env":{".":{},"k:{\"name\":\"ADDRESS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}}}},"k:{\"name\":\"liveness-probe\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:hostNetwork":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPoli
cy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{},"f:volumes":{".":{},"k:{\"name\":\"azure-cred\"}":{".":{},"f:hostPath":{".":{},"f:path":{},"f:type":{}},"f:name":{}},"k:{\"name\":\"socket-dir\"}":{".":{},"f:emptyDir":{},"f:name":{}}}}} Subresource:}]}.
I0906 09:39:49.867764       1 deployment_util.go:775] Deployment "csi-azurefile-controller" timed out (false) [last progress check: 2022-09-06 09:39:49 +0000 UTC - now: 2022-09-06 09:39:49.86775685 +0000 UTC m=+154.047869808]
I0906 09:39:49.867846       1 taint_manager.go:431] "Noticed pod update" pod="kube-system/csi-azurefile-controller-7847f46f86-9rss8"
I0906 09:39:49.867854       1 disruption.go:494] updatePod called on pod "csi-azurefile-controller-7847f46f86-9rss8"
I0906 09:39:49.868649       1 disruption.go:570] No PodDisruptionBudgets found for pod csi-azurefile-controller-7847f46f86-9rss8, PodDisruptionBudget controller will avoid syncing.
... skipping 3 lines ...
I0906 09:39:49.891649       1 disruption.go:479] addPod called on pod "csi-azurefile-controller-7847f46f86-kj2vb"
I0906 09:39:49.891840       1 disruption.go:570] No PodDisruptionBudgets found for pod csi-azurefile-controller-7847f46f86-kj2vb, PodDisruptionBudget controller will avoid syncing.
I0906 09:39:49.892010       1 disruption.go:482] No matching pdb for pod "csi-azurefile-controller-7847f46f86-kj2vb"
I0906 09:39:49.892507       1 pvc_protection_controller.go:380] "Enqueuing PVCs for Pod" pod="kube-system/csi-azurefile-controller-7847f46f86-kj2vb" podUID=b54e28ea-4296-4724-bf0a-304217a4bf35
I0906 09:39:49.892719       1 event.go:294] "Event occurred" object="kube-system/csi-azurefile-controller-7847f46f86" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: csi-azurefile-controller-7847f46f86-kj2vb"
I0906 09:39:49.892794       1 taint_manager.go:431] "Noticed pod update" pod="kube-system/csi-azurefile-controller-7847f46f86-kj2vb"
I0906 09:39:49.892151       1 replica_set.go:394] Pod csi-azurefile-controller-7847f46f86-kj2vb created: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"csi-azurefile-controller-7847f46f86-kj2vb", GenerateName:"csi-azurefile-controller-7847f46f86-", Namespace:"kube-system", SelfLink:"", UID:"b54e28ea-4296-4724-bf0a-304217a4bf35", ResourceVersion:"921", Generation:0, CreationTimestamp:time.Date(2022, time.September, 6, 9, 39, 49, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"csi-azurefile-controller", "pod-template-hash":"7847f46f86"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"apps/v1", Kind:"ReplicaSet", Name:"csi-azurefile-controller-7847f46f86", UID:"b7ae0b9c-9700-44e2-8a6e-9e1a156c9241", Controller:(*bool)(0xc00210f607), BlockOwnerDeletion:(*bool)(0xc00210f608)}}, Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.September, 6, 9, 39, 49, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001d419b0), Subresource:""}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"socket-dir", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(0xc001d419c8), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"azure-cred", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001d419e0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), 
VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"kube-api-access-dzh6m", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(0xc001c2e3e0), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"csi-provisioner", Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-provisioner:v3.2.0", Command:[]string(nil), Args:[]string{"-v=2", "--csi-address=$(ADDRESS)", "--leader-election", "--leader-election-namespace=kube-system", "--timeout=300s", "--extra-create-metadata=true"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:524288000, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-dzh6m", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", 
SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"csi-attacher", Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-attacher:v3.5.0", Command:[]string(nil), Args:[]string{"-v=2", "-csi-address=$(ADDRESS)", "-timeout=120s", "--leader-election", "--leader-election-namespace=kube-system"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:524288000, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-dzh6m", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"csi-snapshotter", Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-snapshotter:v5.0.1", Command:[]string(nil), Args:[]string{"-v=2", "-csi-address=$(ADDRESS)", "--leader-election", "--leader-election-namespace=kube-system"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:104857600, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-dzh6m", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"csi-resizer", 
Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-resizer:v1.5.0", Command:[]string(nil), Args:[]string{"-csi-address=$(ADDRESS)", "-v=2", "--leader-election", "--leader-election-namespace=kube-system", "-handle-volume-inuse-error=false", "-feature-gates=RecoverVolumeExpansionFailure=true", "-timeout=120s"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:524288000, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-dzh6m", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"liveness-probe", Image:"mcr.microsoft.com/oss/kubernetes-csi/livenessprobe:v2.7.0", Command:[]string(nil), Args:[]string{"--csi-address=/csi/csi.sock", "--probe-timeout=3s", "--health-port=29612", "--v=2"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:104857600, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-dzh6m", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"azurefile", Image:"mcr.microsoft.com/k8s/csi/azurefile-csi:latest", Command:[]string(nil), Args:[]string{"--v=5", "--endpoint=$(CSI_ENDPOINT)", 
"--metrics-address=0.0.0.0:29614", "--user-agent-suffix=OSS-kubectl"}, WorkingDir:"", Ports:[]v1.ContainerPort{v1.ContainerPort{Name:"healthz", HostPort:29612, ContainerPort:29612, Protocol:"TCP", HostIP:""}, v1.ContainerPort{Name:"metrics", HostPort:29614, ContainerPort:29614, Protocol:"TCP", HostIP:""}}, EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"AZURE_CREDENTIAL_FILE", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc001c2e500)}, v1.EnvVar{Name:"CSI_ENDPOINT", Value:"unix:///csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:209715200, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"azure-cred", ReadOnly:false, MountPath:"/etc/kubernetes/", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-dzh6m", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(0xc0024f1040), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00210fa80), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"csi-azurefile-controller-sa", DeprecatedServiceAccount:"csi-azurefile-controller-sa", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000b88850), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node-role.kubernetes.io/master", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node-role.kubernetes.io/controlplane", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node-role.kubernetes.io/control-plane", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00210faf0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00210fb10)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-cluster-critical", Priority:(*int32)(0xc00210fb18), 
DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00210fb1c), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc0028d0e80), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil), OS:(*v1.PodOS)(nil), HostUsers:(*bool)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition(nil), Message:"", Reason:"", NominatedNodeName:"", HostIP:"", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:<nil>, InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus(nil), QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}.
I0906 09:39:49.893880       1 controller_utils.go:240] Lowered expectations &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/csi-azurefile-controller-7847f46f86", timestamp:time.Time{wall:0xc0bde33970b0bc8f, ext:153997001933, loc:(*time.Location)(0x6f10040)}}
I0906 09:39:49.910508       1 deployment_controller.go:183] "Updating deployment" deployment="kube-system/csi-azurefile-controller"
I0906 09:39:49.910882       1 deployment_controller.go:585] "Finished syncing deployment" deployment="kube-system/csi-azurefile-controller" duration="45.260479ms"
I0906 09:39:49.911068       1 deployment_controller.go:583] "Started syncing deployment" deployment="kube-system/csi-azurefile-controller" startTime="2022-09-06 09:39:49.91104873 +0000 UTC m=+154.091161588"
I0906 09:39:49.912563       1 deployment_util.go:775] Deployment "csi-azurefile-controller" timed out (false) [last progress check: 2022-09-06 09:39:49 +0000 UTC - now: 2022-09-06 09:39:49.912553729 +0000 UTC m=+154.092666587]
I0906 09:39:49.916170       1 progress.go:195] Queueing up deployment "csi-azurefile-controller" for a progress check after 599s
... skipping 220 lines ...
I0906 09:39:54.422473       1 taint_manager.go:431] "Noticed pod update" pod="kube-system/csi-snapshot-controller-84ccd6c756-d2n74"
I0906 09:39:54.422489       1 disruption.go:479] addPod called on pod "csi-snapshot-controller-84ccd6c756-d2n74"
I0906 09:39:54.422515       1 disruption.go:570] No PodDisruptionBudgets found for pod csi-snapshot-controller-84ccd6c756-d2n74, PodDisruptionBudget controller will avoid syncing.
I0906 09:39:54.422521       1 disruption.go:482] No matching pdb for pod "csi-snapshot-controller-84ccd6c756-d2n74"
I0906 09:39:54.422540       1 pvc_protection_controller.go:380] "Enqueuing PVCs for Pod" pod="kube-system/csi-snapshot-controller-84ccd6c756-d2n74" podUID=6cfa99ef-7d73-4431-be46-f1dcfb79d757
I0906 09:39:54.422669       1 deployment_controller.go:585] "Finished syncing deployment" deployment="kube-system/csi-snapshot-controller" duration="16.827298ms"
I0906 09:39:54.422691       1 deployment_controller.go:497] "Error syncing deployment" deployment="kube-system/csi-snapshot-controller" err="Operation cannot be fulfilled on deployments.apps \"csi-snapshot-controller\": the object has been modified; please apply your changes to the latest version and try again"
I0906 09:39:54.422720       1 deployment_controller.go:583] "Started syncing deployment" deployment="kube-system/csi-snapshot-controller" startTime="2022-09-06 09:39:54.422705359 +0000 UTC m=+158.602818217"
I0906 09:39:54.423070       1 deployment_util.go:775] Deployment "csi-snapshot-controller" timed out (false) [last progress check: 2022-09-06 09:39:54 +0000 UTC - now: 2022-09-06 09:39:54.423061759 +0000 UTC m=+158.603174617]
I0906 09:39:54.429493       1 deployment_controller.go:183] "Updating deployment" deployment="kube-system/csi-snapshot-controller"
I0906 09:39:54.429725       1 deployment_controller.go:585] "Finished syncing deployment" deployment="kube-system/csi-snapshot-controller" duration="7.0068ms"
I0906 09:39:54.429754       1 deployment_controller.go:583] "Started syncing deployment" deployment="kube-system/csi-snapshot-controller" startTime="2022-09-06 09:39:54.429739259 +0000 UTC m=+158.609852217"
I0906 09:39:54.430118       1 deployment_util.go:775] Deployment "csi-snapshot-controller" timed out (false) [last progress check: 2022-09-06 09:39:54 +0000 UTC - now: 2022-09-06 09:39:54.430110859 +0000 UTC m=+158.610223817]
... skipping 1556 lines ...
I0906 09:44:13.334715       1 disruption.go:570] No PodDisruptionBudgets found for pod azurefile-volume-tester-tqw4n-b48bbdc59-59mkj, PodDisruptionBudget controller will avoid syncing.
I0906 09:44:13.334721       1 disruption.go:497] No matching pdb for pod "azurefile-volume-tester-tqw4n-b48bbdc59-59mkj"
I0906 09:44:13.334764       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"azurefile-1563/azurefile-volume-tester-tqw4n-b48bbdc59", timestamp:time.Time{wall:0xc0bde37b4b051b39, ext:417364996983, loc:(*time.Location)(0x6f10040)}}
I0906 09:44:13.334816       1 replica_set.go:667] Finished syncing ReplicaSet "azurefile-1563/azurefile-volume-tester-tqw4n-b48bbdc59" (54µs)
I0906 09:44:13.334842       1 deployment_controller.go:288] "ReplicaSet updated" replicaSet="azurefile-1563/azurefile-volume-tester-tqw4n-b48bbdc59"
I0906 09:44:13.337926       1 deployment_controller.go:585] "Finished syncing deployment" deployment="azurefile-1563/azurefile-volume-tester-tqw4n" duration="157.616531ms"
I0906 09:44:13.337971       1 deployment_controller.go:497] "Error syncing deployment" deployment="azurefile-1563/azurefile-volume-tester-tqw4n" err="Operation cannot be fulfilled on deployments.apps \"azurefile-volume-tester-tqw4n\": the object has been modified; please apply your changes to the latest version and try again"
I0906 09:44:13.338007       1 deployment_controller.go:583] "Started syncing deployment" deployment="azurefile-1563/azurefile-volume-tester-tqw4n" startTime="2022-09-06 09:44:13.337989555 +0000 UTC m=+417.518102413"
I0906 09:44:13.343662       1 deployment_controller.go:585] "Finished syncing deployment" deployment="azurefile-1563/azurefile-volume-tester-tqw4n" duration="5.641401ms"
I0906 09:44:13.343699       1 deployment_controller.go:583] "Started syncing deployment" deployment="azurefile-1563/azurefile-volume-tester-tqw4n" startTime="2022-09-06 09:44:13.343684656 +0000 UTC m=+417.523797514"
I0906 09:44:13.344134       1 deployment_controller.go:183] "Updating deployment" deployment="azurefile-1563/azurefile-volume-tester-tqw4n"
I0906 09:44:13.381590       1 deployment_controller.go:585] "Finished syncing deployment" deployment="azurefile-1563/azurefile-volume-tester-tqw4n" duration="37.883608ms"
I0906 09:44:13.381626       1 deployment_controller.go:497] "Error syncing deployment" deployment="azurefile-1563/azurefile-volume-tester-tqw4n" err="Operation cannot be fulfilled on deployments.apps \"azurefile-volume-tester-tqw4n\": the object has been modified; please apply your changes to the latest version and try again"
I0906 09:44:13.381662       1 deployment_controller.go:583] "Started syncing deployment" deployment="azurefile-1563/azurefile-volume-tester-tqw4n" startTime="2022-09-06 09:44:13.381645564 +0000 UTC m=+417.561758422"
I0906 09:44:13.382028       1 deployment_util.go:775] Deployment "azurefile-volume-tester-tqw4n" timed out (false) [last progress check: 2022-09-06 09:44:13 +0000 UTC - now: 2022-09-06 09:44:13.382019764 +0000 UTC m=+417.562132722]
I0906 09:44:13.382057       1 progress.go:195] Queueing up deployment "azurefile-volume-tester-tqw4n" for a progress check after 599s
I0906 09:44:13.382075       1 deployment_controller.go:585] "Finished syncing deployment" deployment="azurefile-1563/azurefile-volume-tester-tqw4n" duration="420.5µs"
I0906 09:44:13.382102       1 disruption.go:494] updatePod called on pod "azurefile-volume-tester-tqw4n-b48bbdc59-59mkj"
I0906 09:44:13.382120       1 disruption.go:570] No PodDisruptionBudgets found for pod azurefile-volume-tester-tqw4n-b48bbdc59-59mkj, PodDisruptionBudget controller will avoid syncing.
... skipping 1180 lines ...
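The repeated "Error syncing deployment ... the object has been modified" entries above are routine optimistic-concurrency conflicts: the controller wrote against a stale resourceVersion, the apiserver rejected the update, and the controller re-synced on the next pass. A minimal client-go sketch of the same conflict-retry pattern, assuming a kubeconfig at ./kubeconfig; the namespace and deployment name mirror ones seen in the log, and the replica count is purely illustrative:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/retry"
)

// bumpReplicas retries whenever the apiserver answers "the object has been
// modified" (HTTP 409 Conflict), re-reading the latest object before each
// attempt -- the same recovery the controller-manager performs above.
func bumpReplicas(ctx context.Context, cs kubernetes.Interface, ns, name string, replicas int32) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		d, err := cs.AppsV1().Deployments(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		d.Spec.Replicas = &replicas
		_, err = cs.AppsV1().Deployments(ns).Update(ctx, d, metav1.UpdateOptions{})
		return err
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "./kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	// Names taken from the log; the target replica count is illustrative only.
	if err := bumpReplicas(context.Background(), cs, "kube-system", "csi-snapshot-controller", 1); err != nil {
		panic(err)
	}
	fmt.Println("deployment updated")
}

Because RetryOnConflict re-reads the object before every attempt, conflicts like the ones logged here are transient noise rather than failures.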
I0906 09:46:55.969572       1 publisher.go:186] Finished syncing namespace "azurefile-8666" (1.7612ms)
2022/09/06 09:46:56 ===================================================

JUnit report was created: /logs/artifacts/junit_01.xml

Ran 6 of 34 Specs in 321.464 seconds
SUCCESS! -- 6 Passed | 0 Failed | 0 Pending | 28 Skipped

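The JUnit report written to /logs/artifacts/junit_01.xml is the machine-readable counterpart of the "6 Passed | 0 Failed" summary above. A small Go sketch of reading it, assuming the root element is a single <testsuite> as Ginkgo's JUnit reporter emits; the fields modeled here are a deliberately minimal subset:

package main

import (
	"encoding/xml"
	"fmt"
	"os"
)

// Minimal shapes for the attributes this sketch reads; real JUnit output
// carries more attributes and child elements than are modeled here.
type testCase struct {
	Name    string    `xml:"name,attr"`
	Failure *struct{} `xml:"failure"`
	Skipped *struct{} `xml:"skipped"`
}

type testSuite struct {
	Tests    int        `xml:"tests,attr"`
	Failures int        `xml:"failures,attr"`
	Cases    []testCase `xml:"testcase"`
}

func main() {
	data, err := os.ReadFile("/logs/artifacts/junit_01.xml") // path reported in the log above
	if err != nil {
		panic(err)
	}
	var suite testSuite
	if err := xml.Unmarshal(data, &suite); err != nil {
		panic(err)
	}
	fmt.Printf("%d specs, %d failures\n", suite.Tests, suite.Failures)
	for _, c := range suite.Cases {
		if c.Failure != nil {
			fmt.Println("FAILED:", c.Name)
		}
	}
}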
You're using deprecated Ginkgo functionality:
=============================================
Ginkgo 2.0 is under active development and will introduce several new features, improvements, and a small handful of breaking changes.
A release candidate for 2.0 is now available and 2.0 should GA in Fall 2021.  Please give the RC a try and send us feedback!
  - To learn more, view the migration guide at https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md
... skipping 44 lines ...
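The deprecation banner above means the e2e suite still runs on Ginkgo v1 APIs. A minimal sketch of the v2-style suite bootstrap described in the linked migration guide; the package name, suite description, and placeholder spec are illustrative, not taken from the repository:

package e2e_test

import (
	"testing"

	. "github.com/onsi/ginkgo/v2" // v2 import path replaces github.com/onsi/ginkgo
	. "github.com/onsi/gomega"
)

// TestE2E wires Ginkgo into `go test`; in v2 the old custom-reporter entry
// points collapse into a single RunSpecs call.
func TestE2E(t *testing.T) {
	RegisterFailHandler(Fail)
	RunSpecs(t, "AzureFile CSI Driver E2E Suite")
}

var _ = Describe("example", func() {
	It("passes", func() {
		Expect(true).To(BeTrue()) // placeholder body for illustration only
	})
})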
INFO: Creating log watcher for controller capz-system/capz-controller-manager, pod capz-controller-manager-858df9cd95-rr8rb, container manager
STEP: Dumping workload cluster default/capz-qkpsgb logs
Sep  6 09:48:31.787: INFO: Collecting logs for Linux node capz-qkpsgb-control-plane-qdcbm in cluster capz-qkpsgb in namespace default

Sep  6 09:49:31.789: INFO: Collecting boot logs for AzureMachine capz-qkpsgb-control-plane-qdcbm

Failed to get logs for machine capz-qkpsgb-control-plane-jt9zc, cluster default/capz-qkpsgb: open /etc/azure-ssh/azure-ssh: no such file or directory
Sep  6 09:49:32.625: INFO: Collecting logs for Linux node capz-qkpsgb-md-0-49d8b in cluster capz-qkpsgb in namespace default

Sep  6 09:50:32.628: INFO: Collecting boot logs for AzureMachine capz-qkpsgb-md-0-49d8b

Failed to get logs for machine capz-qkpsgb-md-0-689b488c49-blb8n, cluster default/capz-qkpsgb: open /etc/azure-ssh/azure-ssh: no such file or directory
Sep  6 09:50:32.906: INFO: Collecting logs for Linux node capz-qkpsgb-md-0-xbzpj in cluster capz-qkpsgb in namespace default

Sep  6 09:51:32.907: INFO: Collecting boot logs for AzureMachine capz-qkpsgb-md-0-xbzpj

Failed to get logs for machine capz-qkpsgb-md-0-689b488c49-g6lcn, cluster default/capz-qkpsgb: open /etc/azure-ssh/azure-ssh: no such file or directory
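The three "Failed to get logs" entries above come from the log collector trying to read an SSH key at /etc/azure-ssh/azure-ssh that is not mounted in this job; boot-log collection is skipped and the rest of the dump continues. A sketch of the kind of pre-flight check a collector can make before dialing SSH (the path is the one from the log; everything else is illustrative, not the CAPZ implementation):

package main

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
)

// loadSSHKey distinguishes "key file simply not provided" (reported as
// fs.ErrNotExist, the situation in the log above) from genuine read failures,
// so the caller can skip boot-log collection instead of treating it as fatal.
func loadSSHKey(path string) ([]byte, bool, error) {
	key, err := os.ReadFile(path)
	if errors.Is(err, fs.ErrNotExist) {
		return nil, false, nil // no key mounted: skip SSH-based collection
	}
	if err != nil {
		return nil, false, err
	}
	return key, true, nil
}

func main() {
	key, ok, err := loadSSHKey("/etc/azure-ssh/azure-ssh")
	switch {
	case err != nil:
		fmt.Println("reading ssh key:", err)
	case !ok:
		fmt.Println("ssh key not mounted; skipping boot-log collection")
	default:
		fmt.Printf("loaded %d-byte ssh key\n", len(key))
	}
}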
STEP: Dumping workload cluster default/capz-qkpsgb kube-system pod logs
STEP: Fetching kube-system pod logs took 395.521857ms
STEP: Collecting events for Pod kube-system/calico-kube-controllers-755ff8d7b5-h5rsm
STEP: Collecting events for Pod kube-system/csi-azurefile-controller-7847f46f86-9rss8
STEP: Creating log watcher for controller kube-system/coredns-84994b8c4-m9kvf, container coredns
STEP: Dumping workload cluster default/capz-qkpsgb Azure activity log
... skipping 14 lines ...
STEP: Creating log watcher for controller kube-system/etcd-capz-qkpsgb-control-plane-qdcbm, container etcd
STEP: Creating log watcher for controller kube-system/csi-azurefile-controller-7847f46f86-9rss8, container csi-attacher
STEP: Creating log watcher for controller kube-system/csi-azurefile-controller-7847f46f86-kj2vb, container csi-snapshotter
STEP: Collecting events for Pod kube-system/calico-node-fgkvr
STEP: Creating log watcher for controller kube-system/csi-azurefile-controller-7847f46f86-9rss8, container csi-resizer
STEP: Collecting events for Pod kube-system/etcd-capz-qkpsgb-control-plane-qdcbm
STEP: failed to find events of Pod "etcd-capz-qkpsgb-control-plane-qdcbm"
STEP: Collecting events for Pod kube-system/calico-node-9xknw
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-qkpsgb-control-plane-qdcbm, container kube-apiserver
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-755ff8d7b5-h5rsm, container calico-kube-controllers
STEP: Collecting events for Pod kube-system/kube-proxy-s4w9p
STEP: Creating log watcher for controller kube-system/csi-azurefile-controller-7847f46f86-kj2vb, container csi-resizer
STEP: Collecting events for Pod kube-system/kube-apiserver-capz-qkpsgb-control-plane-qdcbm
STEP: failed to find events of Pod "kube-apiserver-capz-qkpsgb-control-plane-qdcbm"
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-qkpsgb-control-plane-qdcbm, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/csi-azurefile-controller-7847f46f86-9rss8, container csi-snapshotter
STEP: Creating log watcher for controller kube-system/kube-proxy-ts5gk, container kube-proxy
STEP: Creating log watcher for controller kube-system/csi-azurefile-controller-7847f46f86-9rss8, container liveness-probe
STEP: Creating log watcher for controller kube-system/csi-azurefile-controller-7847f46f86-kj2vb, container liveness-probe
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-qkpsgb-control-plane-qdcbm, container kube-scheduler
STEP: Collecting events for Pod kube-system/kube-scheduler-capz-qkpsgb-control-plane-qdcbm
STEP: Creating log watcher for controller kube-system/csi-azurefile-controller-7847f46f86-9rss8, container azurefile
STEP: Collecting events for Pod kube-system/kube-proxy-ts5gk
STEP: Collecting events for Pod kube-system/kube-controller-manager-capz-qkpsgb-control-plane-qdcbm
STEP: Collecting events for Pod kube-system/csi-azurefile-node-5pq4x
STEP: Creating log watcher for controller kube-system/kube-proxy-kb5ft, container kube-proxy
STEP: Collecting events for Pod kube-system/kube-proxy-kb5ft
STEP: failed to find events of Pod "kube-controller-manager-capz-qkpsgb-control-plane-qdcbm"
STEP: Creating log watcher for controller kube-system/kube-proxy-s4w9p, container kube-proxy
STEP: Creating log watcher for controller kube-system/csi-azurefile-node-pp4v8, container node-driver-registrar
STEP: Creating log watcher for controller kube-system/csi-azurefile-node-pp4v8, container liveness-probe
STEP: Creating log watcher for controller kube-system/csi-azurefile-controller-7847f46f86-kj2vb, container azurefile
STEP: Collecting events for Pod kube-system/csi-azurefile-controller-7847f46f86-kj2vb
STEP: Collecting events for Pod kube-system/metrics-server-76f7667fbf-8pldm
... skipping 4 lines ...
STEP: Collecting events for Pod kube-system/csi-azurefile-node-pp4v8
STEP: Creating log watcher for controller kube-system/csi-azurefile-node-pp4v8, container azurefile
STEP: Creating log watcher for controller kube-system/csi-azurefile-node-zrvz5, container liveness-probe
STEP: Creating log watcher for controller kube-system/csi-azurefile-node-zrvz5, container azurefile
STEP: Collecting events for Pod kube-system/csi-azurefile-node-zrvz5
STEP: Creating log watcher for controller kube-system/metrics-server-76f7667fbf-8pldm, container metrics-server
STEP: failed to find events of Pod "kube-scheduler-capz-qkpsgb-control-plane-qdcbm"
STEP: Fetching activity logs took 4.403970973s
================ REDACTING LOGS ================
All sensitive variables are redacted
cluster.cluster.x-k8s.io "capz-qkpsgb" deleted
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/kind-v0.14.0 delete cluster --name=capz || true
Deleting cluster "capz" ...
... skipping 12 lines ...