Result: success
Tests: 0 failed / 6 succeeded
Started: 2022-09-06 23:46
Elapsed: 31m35s
Revision:
Uploader: crier

No Test Failures!


6 Passed Tests

28 Skipped Tests

Error lines from build-log.txt

... skipping 703 lines ...
certificate.cert-manager.io "selfsigned-cert" deleted
# Create secret for AzureClusterIdentity
./hack/create-identity-secret.sh
make[2]: Entering directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make[2]: Nothing to be done for 'kubectl'.
make[2]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
Error from server (NotFound): secrets "cluster-identity-secret" not found
secret/cluster-identity-secret created
secret/cluster-identity-secret labeled
# Create customized cloud provider configs
./hack/create-custom-cloud-provider-config.sh
make[2]: Entering directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make[2]: Nothing to be done for 'kubectl'.
... skipping 134 lines ...
# Wait for the kubeconfig to become available.
timeout --foreground 300 bash -c "while ! /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/kubectl-v1.22.4 get secrets | grep capz-qby011-kubeconfig; do sleep 1; done"
capz-qby011-kubeconfig                 cluster.x-k8s.io/secret   1      0s
# Get kubeconfig and store it locally.
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/kubectl-v1.22.4 get secrets capz-qby011-kubeconfig -o json | jq -r .data.value | base64 --decode > ./kubeconfig
timeout --foreground 600 bash -c "while ! /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/kubectl-v1.22.4 --kubeconfig=./kubeconfig get nodes | grep control-plane; do sleep 1; done"
error: the server doesn't have a resource type "nodes"
capz-qby011-control-plane-4cfgz   NotReady   control-plane   8s    v1.26.0-alpha.0.389+ed967343f4a035
run "/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/kubectl-v1.22.4 --kubeconfig=./kubeconfig ..." to work with the new target cluster
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
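The makefile steps above poll for the workload cluster's kubeconfig secret and then for a control-plane node, each with a one-second `timeout --foreground` loop. That polling pattern can be sketched as a reusable helper; the `wait_for` name is ours, not from the repo, and the commented example commands assume the same cluster name as the log:

```shell
#!/usr/bin/env bash
# wait_for: retry a command once per second until it succeeds or a
# deadline passes, mirroring the log's
#   timeout --foreground N bash -c "while ! CMD; do sleep 1; done"
wait_for() {
  local timeout_s=$1; shift
  local deadline=$((SECONDS + timeout_s))
  until "$@"; do
    if (( SECONDS >= deadline )); then
      echo "timed out after ${timeout_s}s waiting for: $*" >&2
      return 1
    fi
    sleep 1
  done
}

# Illustrative uses matching the two waits in the log (require a live cluster):
# wait_for 300 bash -c 'kubectl get secrets | grep -q capz-qby011-kubeconfig'
# wait_for 600 bash -c 'kubectl --kubeconfig=./kubeconfig get nodes | grep -q control-plane'
```

The kubeconfig itself is then recovered from the secret exactly as the log shows: `kubectl get secrets <name> -o json | jq -r .data.value | base64 --decode > ./kubeconfig`.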
Waiting for 1 control plane machine(s), 2 worker machine(s), and  windows machine(s) to become Ready
node/capz-qby011-control-plane-4cfgz condition met
node/capz-qby011-mp-0000000 condition met
... skipping 53 lines ...
Pre-Provisioned 
  should use a pre-provisioned volume and mount it as readOnly in a pod [file.csi.azure.com] [Windows]
  /home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/pre_provisioning_test.go:77
STEP: Creating a kubernetes client
Sep  7 00:02:32.183: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/kubeconfig
STEP: Building a namespace api object, basename azurefile
Sep  7 00:02:32.624: INFO: Error listing PodSecurityPolicies; assuming PodSecurityPolicy is disabled: the server could not find the requested resource
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
2022/09/07 00:02:32 Check driver pods if restarts ...
check the driver pods if restarts ...
======================================================================================
2022/09/07 00:02:33 Check successfully
... skipping 180 lines ...
Sep  7 00:03:01.013: INFO: PersistentVolumeClaim pvc-wmxcz found but phase is Pending instead of Bound.
Sep  7 00:03:03.068: INFO: PersistentVolumeClaim pvc-wmxcz found and phase=Bound (24.707491926s)
STEP: checking the PVC
STEP: validating provisioned PV
STEP: checking the PV
STEP: deploying the pod
STEP: checking that the pods command exits with no error
Sep  7 00:03:03.231: INFO: Waiting up to 15m0s for pod "azurefile-volume-tester-gmfhz" in namespace "azurefile-5194" to be "Succeeded or Failed"
Sep  7 00:03:03.284: INFO: Pod "azurefile-volume-tester-gmfhz": Phase="Pending", Reason="", readiness=false. Elapsed: 53.179426ms
Sep  7 00:03:05.342: INFO: Pod "azurefile-volume-tester-gmfhz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.110353872s
Sep  7 00:03:07.398: INFO: Pod "azurefile-volume-tester-gmfhz": Phase="Pending", Reason="", readiness=false. Elapsed: 4.166801158s
Sep  7 00:03:09.455: INFO: Pod "azurefile-volume-tester-gmfhz": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.223666457s
STEP: Saw pod success
Sep  7 00:03:09.455: INFO: Pod "azurefile-volume-tester-gmfhz" satisfied condition "Succeeded or Failed"
Sep  7 00:03:09.455: INFO: deleting Pod "azurefile-5194"/"azurefile-volume-tester-gmfhz"
Sep  7 00:03:09.520: INFO: Pod azurefile-volume-tester-gmfhz has the following logs: hello world

STEP: Deleting pod azurefile-volume-tester-gmfhz in namespace azurefile-5194
Sep  7 00:03:09.586: INFO: deleting PVC "azurefile-5194"/"pvc-wmxcz"
Sep  7 00:03:09.586: INFO: Deleting PersistentVolumeClaim "pvc-wmxcz"
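The test above waits up to 15m0s for the pod to reach a terminal phase: this first case expects `Succeeded`, while the read-only case later in the log expects `Failed`. A minimal sketch of that phase handling, assuming a plain `kubectl` poll; `classify_phase` and the commented loop are illustrative, not the driver's actual test code:

```shell
#!/usr/bin/env bash
# classify_phase maps a pod phase string to the decision a polling loop
# would make: 0 = success (Succeeded), 2 = terminal failure (Failed),
# 1 = still pending, keep polling.
classify_phase() {
  case "$1" in
    Succeeded) return 0 ;;
    Failed)    return 2 ;;
    *)         return 1 ;;
  esac
}

# Hypothetical polling loop around it (requires a live cluster):
# while :; do
#   phase=$(kubectl -n azurefile-5194 get pod azurefile-volume-tester-gmfhz \
#           -o jsonpath='{.status.phase}')
#   classify_phase "$phase"; rc=$?
#   [ "$rc" -eq 1 ] || break
#   sleep 2
# done
```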
... skipping 158 lines ...
Sep  7 00:05:04.729: INFO: PersistentVolumeClaim pvc-lzhvx found but phase is Pending instead of Bound.
Sep  7 00:05:06.784: INFO: PersistentVolumeClaim pvc-lzhvx found and phase=Bound (26.76981743s)
STEP: checking the PVC
STEP: validating provisioned PV
STEP: checking the PV
STEP: deploying the pod
STEP: checking that the pods command exits with an error
Sep  7 00:05:06.947: INFO: Waiting up to 15m0s for pod "azurefile-volume-tester-68c66" in namespace "azurefile-156" to be "Error status code"
Sep  7 00:05:07.000: INFO: Pod "azurefile-volume-tester-68c66": Phase="Pending", Reason="", readiness=false. Elapsed: 53.190301ms
Sep  7 00:05:09.058: INFO: Pod "azurefile-volume-tester-68c66": Phase="Pending", Reason="", readiness=false. Elapsed: 2.110613068s
Sep  7 00:05:11.329: INFO: Pod "azurefile-volume-tester-68c66": Phase="Failed", Reason="", readiness=false. Elapsed: 4.382154845s
STEP: Saw pod failure
Sep  7 00:05:11.329: INFO: Pod "azurefile-volume-tester-68c66" satisfied condition "Error status code"
STEP: checking that pod logs contain expected message
Sep  7 00:05:11.392: INFO: deleting Pod "azurefile-156"/"azurefile-volume-tester-68c66"
Sep  7 00:05:11.447: INFO: Pod azurefile-volume-tester-68c66 has the following logs: touch: /mnt/test-1/data: Read-only file system

STEP: Deleting pod azurefile-volume-tester-68c66 in namespace azurefile-156
Sep  7 00:05:11.512: INFO: deleting PVC "azurefile-156"/"pvc-lzhvx"
... skipping 181 lines ...
Sep  7 00:07:07.299: INFO: PersistentVolumeClaim pvc-tf54f found but phase is Pending instead of Bound.
Sep  7 00:07:09.355: INFO: PersistentVolumeClaim pvc-tf54f found and phase=Bound (2.10961107s)
STEP: checking the PVC
STEP: validating provisioned PV
STEP: checking the PV
STEP: deploying the pod
STEP: checking that the pods command exits with no error
Sep  7 00:07:09.520: INFO: Waiting up to 15m0s for pod "azurefile-volume-tester-6grwn" in namespace "azurefile-2546" to be "Succeeded or Failed"
Sep  7 00:07:09.574: INFO: Pod "azurefile-volume-tester-6grwn": Phase="Pending", Reason="", readiness=false. Elapsed: 53.837627ms
Sep  7 00:07:11.632: INFO: Pod "azurefile-volume-tester-6grwn": Phase="Pending", Reason="", readiness=false. Elapsed: 2.111850539s
Sep  7 00:07:13.691: INFO: Pod "azurefile-volume-tester-6grwn": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.170191804s
STEP: Saw pod success
Sep  7 00:07:13.691: INFO: Pod "azurefile-volume-tester-6grwn" satisfied condition "Succeeded or Failed"
STEP: resizing the pvc
STEP: sleep 30s waiting for resize complete
STEP: checking the resizing result
STEP: checking the resizing PV result
STEP: checking the resizing azurefile result
Sep  7 00:07:44.631: INFO: deleting Pod "azurefile-2546"/"azurefile-volume-tester-6grwn"
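The resize steps above grow the PVC's storage request and then re-check the PVC, the PV, and the backing Azure file share. Assuming the resize boils down to a merge patch on `spec.resources.requests.storage` (the helper name and the 20Gi size are illustrative, not from the log), a sketch:

```shell
#!/usr/bin/env bash
# build_resize_patch: emit the merge-patch body that grows a PVC's
# storage request to the given size.
build_resize_patch() {
  jq -c -n --arg size "$1" '{spec:{resources:{requests:{storage:$size}}}}'
}

# Hypothetical application against the PVC from the log (requires a cluster):
# kubectl patch pvc pvc-tf54f -n azurefile-2546 --type merge \
#   -p "$(build_resize_patch 20Gi)"
# kubectl get pvc pvc-tf54f -n azurefile-2546 \
#   -o jsonpath='{.status.capacity.storage}'
```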
... skipping 728 lines ...
I0906 23:57:33.846636       1 tlsconfig.go:200] "Loaded serving cert" certName="Generated self signed cert" certDetail="\"localhost@1662508653\" [serving] validServingFor=[127.0.0.1,127.0.0.1,localhost] issuer=\"localhost-ca@1662508653\" (2022-09-06 22:57:32 +0000 UTC to 2023-09-06 22:57:32 +0000 UTC (now=2022-09-06 23:57:33.84660692 +0000 UTC))"
I0906 23:57:33.846998       1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1662508653\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1662508653\" (2022-09-06 22:57:33 +0000 UTC to 2023-09-06 22:57:33 +0000 UTC (now=2022-09-06 23:57:33.846968731 +0000 UTC))"
I0906 23:57:33.847103       1 secure_serving.go:210] Serving securely on 127.0.0.1:10257
I0906 23:57:33.847499       1 leaderelection.go:248] attempting to acquire leader lease kube-system/kube-controller-manager...
I0906 23:57:33.848380       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0906 23:57:33.848669       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
E0906 23:57:36.714414       1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: leases.coordination.k8s.io "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-system"
I0906 23:57:36.714732       1 leaderelection.go:253] failed to acquire lease kube-system/kube-controller-manager
I0906 23:57:39.329618       1 leaderelection.go:258] successfully acquired lease kube-system/kube-controller-manager
I0906 23:57:39.330093       1 event.go:294] "Event occurred" object="kube-system/kube-controller-manager" fieldPath="" kind="Lease" apiVersion="coordination.k8s.io/v1" type="Normal" reason="LeaderElection" message="capz-qby011-control-plane-4cfgz_0b4fe40a-0a3d-410d-9e36-6c4cdd067018 became leader"
W0906 23:57:39.351182       1 plugins.go:131] WARNING: azure built-in cloud provider is now deprecated. The Azure provider is deprecated and will be removed in a future release. Please use https://github.com/kubernetes-sigs/cloud-provider-azure
I0906 23:57:39.352747       1 azure_auth.go:232] Using AzurePublicCloud environment
I0906 23:57:39.352987       1 azure_auth.go:117] azure: using client_id+client_secret to retrieve access token
I0906 23:57:39.353120       1 azure_interfaceclient.go:63] Azure InterfacesClient (read ops) using rate limit config: QPS=1, bucket=5
... skipping 29 lines ...
I0906 23:57:39.354937       1 reflector.go:257] Listing and watching *v1.Node from vendor/k8s.io/client-go/informers/factory.go:134
I0906 23:57:39.355146       1 reflector.go:221] Starting reflector *v1.ServiceAccount (20h10m17.182159999s) from vendor/k8s.io/client-go/informers/factory.go:134
I0906 23:57:39.355281       1 reflector.go:257] Listing and watching *v1.ServiceAccount from vendor/k8s.io/client-go/informers/factory.go:134
I0906 23:57:39.355309       1 reflector.go:221] Starting reflector *v1.Secret (20h10m17.182159999s) from vendor/k8s.io/client-go/informers/factory.go:134
I0906 23:57:39.355451       1 reflector.go:257] Listing and watching *v1.Secret from vendor/k8s.io/client-go/informers/factory.go:134
I0906 23:57:39.355160       1 shared_informer.go:255] Waiting for caches to sync for tokens
W0906 23:57:39.416618       1 azure_config.go:53] Failed to get cloud-config from secret: failed to get secret azure-cloud-provider: secrets "azure-cloud-provider" is forbidden: User "system:serviceaccount:kube-system:azure-cloud-provider" cannot get resource "secrets" in API group "" in the namespace "kube-system", skip initializing from secret
I0906 23:57:39.416846       1 controllermanager.go:573] Starting "garbagecollector"
I0906 23:57:39.430202       1 controllermanager.go:602] Started "garbagecollector"
I0906 23:57:39.430229       1 controllermanager.go:573] Starting "service"
I0906 23:57:39.431830       1 garbagecollector.go:154] Starting garbage collector controller
I0906 23:57:39.431854       1 shared_informer.go:255] Waiting for caches to sync for garbage collector
I0906 23:57:39.431868       1 graph_builder.go:275] garbage controller monitor not synced: no monitors
... skipping 140 lines ...
I0906 23:57:41.235314       1 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/azure-disk"
I0906 23:57:41.235322       1 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/vsphere-volume"
I0906 23:57:41.235337       1 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
I0906 23:57:41.235347       1 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/rbd"
I0906 23:57:41.235357       1 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/azure-file"
I0906 23:57:41.235391       1 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
I0906 23:57:41.235414       1 csi_plugin.go:257] Cast from VolumeHost to KubeletVolumeHost failed. Skipping CSINode initialization, not running on kubelet
I0906 23:57:41.235421       1 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/csi"
I0906 23:57:41.235518       1 controllermanager.go:602] Started "persistentvolume-binder"
I0906 23:57:41.235564       1 controllermanager.go:573] Starting "endpoint"
I0906 23:57:41.235662       1 pv_controller_base.go:318] Starting persistent volume controller
I0906 23:57:41.235669       1 shared_informer.go:255] Waiting for caches to sync for persistent volume
I0906 23:57:41.385357       1 controllermanager.go:602] Started "endpoint"
... skipping 78 lines ...
I0906 23:57:43.386116       1 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/azure-disk"
I0906 23:57:43.386235       1 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/vsphere-volume"
I0906 23:57:43.386340       1 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
I0906 23:57:43.386361       1 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/rbd"
I0906 23:57:43.386374       1 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/fc"
I0906 23:57:43.386389       1 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
I0906 23:57:43.386442       1 csi_plugin.go:257] Cast from VolumeHost to KubeletVolumeHost failed. Skipping CSINode initialization, not running on kubelet
I0906 23:57:43.386484       1 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/csi"
I0906 23:57:43.386722       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-qby011-control-plane-4cfgz"
W0906 23:57:43.386746       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="capz-qby011-control-plane-4cfgz" does not exist
I0906 23:57:43.386774       1 controllermanager.go:602] Started "attachdetach"
I0906 23:57:43.386786       1 controllermanager.go:573] Starting "persistentvolume-expander"
I0906 23:57:43.386963       1 attach_detach_controller.go:328] Starting attach detach controller
I0906 23:57:43.387090       1 shared_informer.go:255] Waiting for caches to sync for attach detach
I0906 23:57:43.535070       1 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
I0906 23:57:43.535125       1 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/rbd"
... skipping 319 lines ...
I0906 23:57:44.292912       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-84994b8c4 to 2"
I0906 23:57:44.293521       1 replica_set.go:577] "Too few replicas" replicaSet="kube-system/coredns-84994b8c4" need=2 creating=2
I0906 23:57:44.303539       1 request.go:614] Waited for 366.399467ms due to client-side throttling, not priority and fairness, request: GET:https://10.0.0.4:6443/apis/flowcontrol.apiserver.k8s.io/v1beta2/prioritylevelconfigurations?limit=500&resourceVersion=0
I0906 23:57:44.307184       1 deployment_controller.go:183] "Updating deployment" deployment="kube-system/coredns"
I0906 23:57:44.307505       1 deployment_util.go:775] Deployment "coredns" timed out (false) [last progress check: 2022-09-06 23:57:44.292480731 +0000 UTC m=+11.947716968 - now: 2022-09-06 23:57:44.307495329 +0000 UTC m=+11.962731566]
I0906 23:57:44.311447       1 deployment_controller.go:585] "Finished syncing deployment" deployment="kube-system/coredns" duration="414.319654ms"
I0906 23:57:44.311482       1 deployment_controller.go:497] "Error syncing deployment" deployment="kube-system/coredns" err="Operation cannot be fulfilled on deployments.apps \"coredns\": the object has been modified; please apply your changes to the latest version and try again"
I0906 23:57:44.311513       1 deployment_controller.go:583] "Started syncing deployment" deployment="kube-system/coredns" startTime="2022-09-06 23:57:44.311500396 +0000 UTC m=+11.966736733"
I0906 23:57:44.312021       1 deployment_util.go:775] Deployment "coredns" timed out (false) [last progress check: 2022-09-06 23:57:44 +0000 UTC - now: 2022-09-06 23:57:44.31201393 +0000 UTC m=+11.967250267]
I0906 23:57:44.316095       1 deployment_controller.go:183] "Updating deployment" deployment="kube-system/coredns"
I0906 23:57:44.316648       1 deployment_controller.go:585] "Finished syncing deployment" deployment="kube-system/coredns" duration="5.138441ms"
I0906 23:57:44.316684       1 deployment_controller.go:583] "Started syncing deployment" deployment="kube-system/coredns" startTime="2022-09-06 23:57:44.316671139 +0000 UTC m=+11.971907376"
I0906 23:57:44.317281       1 deployment_util.go:775] Deployment "coredns" timed out (false) [last progress check: 2022-09-06 23:57:44 +0000 UTC - now: 2022-09-06 23:57:44.31727528 +0000 UTC m=+11.972511517]
... skipping 286 lines ...
I0906 23:58:00.128537       1 replica_set.go:667] Finished syncing ReplicaSet "kube-system/calico-kube-controllers-755ff8d7b5" (21.600952ms)
I0906 23:58:00.128571       1 pvc_protection_controller.go:380] "Enqueuing PVCs for Pod" pod="kube-system/calico-kube-controllers-755ff8d7b5-5qphv" podUID=06dda588-da4b-402d-9288-adac9432a302
I0906 23:58:00.129173       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-kube-controllers-755ff8d7b5", timestamp:time.Time{wall:0xc0be15820661828d, ext:27762289946, loc:(*time.Location)(0x6f10040)}}
I0906 23:58:00.129576       1 deployment_controller.go:288] "ReplicaSet updated" replicaSet="kube-system/calico-kube-controllers-755ff8d7b5"
I0906 23:58:00.129718       1 replica_set_utils.go:59] Updating status for : kube-system/calico-kube-controllers-755ff8d7b5, replicas 0->1 (need 1), fullyLabeledReplicas 0->1, readyReplicas 0->0, availableReplicas 0->0, sequence No: 0->1
I0906 23:58:00.133385       1 deployment_controller.go:585] "Finished syncing deployment" deployment="kube-system/calico-kube-controllers" duration="30.581394ms"
I0906 23:58:00.133501       1 deployment_controller.go:497] "Error syncing deployment" deployment="kube-system/calico-kube-controllers" err="Operation cannot be fulfilled on deployments.apps \"calico-kube-controllers\": the object has been modified; please apply your changes to the latest version and try again"
I0906 23:58:00.133583       1 deployment_controller.go:583] "Started syncing deployment" deployment="kube-system/calico-kube-controllers" startTime="2022-09-06 23:58:00.133562237 +0000 UTC m=+27.788798574"
I0906 23:58:00.134081       1 deployment_util.go:775] Deployment "calico-kube-controllers" timed out (false) [last progress check: 2022-09-06 23:58:00 +0000 UTC - now: 2022-09-06 23:58:00.13407212 +0000 UTC m=+27.789308457]
I0906 23:58:00.140440       1 replica_set_utils.go:59] Updating status for : kube-system/calico-kube-controllers-755ff8d7b5, replicas 0->1 (need 1), fullyLabeledReplicas 0->1, readyReplicas 0->0, availableReplicas 0->0, sequence No: 1->1
I0906 23:58:00.140494       1 deployment_controller.go:183] "Updating deployment" deployment="kube-system/calico-kube-controllers"
I0906 23:58:00.140696       1 deployment_controller.go:585] "Finished syncing deployment" deployment="kube-system/calico-kube-controllers" duration="7.125268ms"
I0906 23:58:00.140847       1 deployment_controller.go:583] "Started syncing deployment" deployment="kube-system/calico-kube-controllers" startTime="2022-09-06 23:58:00.140832771 +0000 UTC m=+27.796069108"
... skipping 316 lines ...
I0906 23:58:21.489870       1 replica_set.go:667] Finished syncing ReplicaSet "kube-system/calico-kube-controllers-755ff8d7b5" (89.018µs)
I0906 23:58:21.489895       1 disruption.go:494] updatePod called on pod "calico-kube-controllers-755ff8d7b5-5qphv"
I0906 23:58:21.489912       1 disruption.go:499] updatePod "calico-kube-controllers-755ff8d7b5-5qphv" -> PDB "calico-kube-controllers"
I0906 23:58:21.489953       1 disruption.go:659] Finished syncing PodDisruptionBudget "kube-system/calico-kube-controllers" (24.805µs)
I0906 23:58:23.923637       1 gc_controller.go:221] GC'ing orphaned
I0906 23:58:23.923668       1 gc_controller.go:290] GC'ing unscheduled pods which are terminating.
I0906 23:58:23.927953       1 node_lifecycle_controller.go:1084] ReadyCondition for Node capz-qby011-control-plane-4cfgz transitioned from &NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:2022-09-06 23:58:11 +0000 UTC,LastTransitionTime:2022-09-06 23:57:21 +0000 UTC,Reason:KubeletNotReady,Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized,} to &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-09-06 23:58:21 +0000 UTC,LastTransitionTime:2022-09-06 23:58:21 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,}
I0906 23:58:23.928070       1 node_lifecycle_controller.go:1092] Node capz-qby011-control-plane-4cfgz ReadyCondition updated. Updating timestamp.
I0906 23:58:23.928099       1 node_lifecycle_controller.go:938] Node capz-qby011-control-plane-4cfgz is healthy again, removing all taints
I0906 23:58:23.928145       1 node_lifecycle_controller.go:1236] Controller detected that some Nodes are Ready. Exiting master disruption mode.
I0906 23:58:26.036856       1 disruption.go:494] updatePod called on pod "calico-node-dn5f8"
I0906 23:58:26.036944       1 disruption.go:570] No PodDisruptionBudgets found for pod calico-node-dn5f8, PodDisruptionBudget controller will avoid syncing.
I0906 23:58:26.036953       1 disruption.go:497] No matching pdb for pod "calico-node-dn5f8"
... skipping 220 lines ...
I0906 23:59:31.026818       1 taint_manager.go:471] "Updating known taints on node" node="capz-qby011-mp-0000001" taints=[]
I0906 23:59:31.025285       1 controller.go:686] It took 6.7002e-05 seconds to finish syncNodes
I0906 23:59:31.029175       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc0be1589af384dd4, ext:58447455329, loc:(*time.Location)(0x6f10040)}}
I0906 23:59:31.029294       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:1, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc0be1598c1bee80e, ext:118684524799, loc:(*time.Location)(0x6f10040)}}
I0906 23:59:31.029338       1 daemon_controller.go:974] Nodes needing daemon pods for daemon set calico-node: [capz-qby011-mp-0000001], creating 1
I0906 23:59:31.029744       1 topologycache.go:179] Ignoring node capz-qby011-control-plane-4cfgz because it has an excluded label
I0906 23:59:31.032160       1 topologycache.go:183] Ignoring node capz-qby011-mp-0000001 because it is not ready: [{MemoryPressure False 2022-09-06 23:59:31 +0000 UTC 2022-09-06 23:59:31 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2022-09-06 23:59:31 +0000 UTC 2022-09-06 23:59:31 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2022-09-06 23:59:31 +0000 UTC 2022-09-06 23:59:31 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready False 2022-09-06 23:59:31 +0000 UTC 2022-09-06 23:59:31 +0000 UTC KubeletNotReady [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful, container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized, CSINode is not yet initialized]}]
I0906 23:59:31.032232       1 topologycache.go:215] Insufficient node info for topology hints (0 zones, %!s(int64=0) CPU, true)
I0906 23:59:31.030010       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-qby011-mp-0000001"
W0906 23:59:31.032278       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="capz-qby011-mp-0000001" does not exist
I0906 23:59:31.051084       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-qby011-mp-0000001"
I0906 23:59:31.064039       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-qby011-mp-0000001"
I0906 23:59:31.064445       1 ttl_controller.go:275] "Changed ttl annotation" node="capz-qby011-mp-0000001" new_ttl="0s"
I0906 23:59:31.079538       1 disruption.go:479] addPod called on pod "calico-node-zsrtb"
I0906 23:59:31.080164       1 disruption.go:570] No PodDisruptionBudgets found for pod calico-node-zsrtb, PodDisruptionBudget controller will avoid syncing.
I0906 23:59:31.080183       1 disruption.go:482] No matching pdb for pod "calico-node-zsrtb"
... skipping 203 lines ...
I0906 23:59:48.975626       1 controller.go:728] Running updateLoadBalancerHosts(len(services)==0, workers==1)
I0906 23:59:48.975809       1 controller.go:753] Finished updateLoadBalancerHosts
I0906 23:59:48.975952       1 controller.go:694] Successfully updated 0 out of 0 load balancers to direct traffic to the updated set of nodes
I0906 23:59:48.976074       1 controller.go:686] It took 0.002550776 seconds to finish syncNodes
I0906 23:59:48.983383       1 topologycache.go:179] Ignoring node capz-qby011-control-plane-4cfgz because it has an excluded label
I0906 23:59:48.983619       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-qby011-mp-0000000"
W0906 23:59:48.987101       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="capz-qby011-mp-0000000" does not exist
I0906 23:59:48.974530       1 taint_manager.go:466] "Noticed node update" node={nodeName:capz-qby011-mp-0000000}
I0906 23:59:48.984099       1 topologycache.go:183] Ignoring node capz-qby011-mp-0000001 because it is not ready: [{MemoryPressure False 2022-09-06 23:59:41 +0000 UTC 2022-09-06 23:59:31 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2022-09-06 23:59:41 +0000 UTC 2022-09-06 23:59:31 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2022-09-06 23:59:41 +0000 UTC 2022-09-06 23:59:31 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready False 2022-09-06 23:59:41 +0000 UTC 2022-09-06 23:59:31 +0000 UTC KubeletNotReady container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized}]
I0906 23:59:48.974458       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0be159accec5db6, ext:126872053415, loc:(*time.Location)(0x6f10040)}}
I0906 23:59:48.984050       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc0be159c8f43e378, ext:133911343621, loc:(*time.Location)(0x6f10040)}}
I0906 23:59:48.987512       1 taint_manager.go:471] "Updating known taints on node" node="capz-qby011-mp-0000000" taints=[]
I0906 23:59:48.987631       1 topologycache.go:183] Ignoring node capz-qby011-mp-0000000 because it is not ready: [{MemoryPressure False 2022-09-06 23:59:48 +0000 UTC 2022-09-06 23:59:48 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2022-09-06 23:59:48 +0000 UTC 2022-09-06 23:59:48 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2022-09-06 23:59:48 +0000 UTC 2022-09-06 23:59:48 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready False 2022-09-06 23:59:48 +0000 UTC 2022-09-06 23:59:48 +0000 UTC KubeletNotReady [container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized, failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "capz-qby011-mp-0000000" not found]}]
I0906 23:59:48.988076       1 topologycache.go:215] Insufficient node info for topology hints (0 zones, %!s(int64=0) CPU, true)
I0906 23:59:48.988029       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:1, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc0be159d3ae40484, ext:136643258229, loc:(*time.Location)(0x6f10040)}}
I0906 23:59:48.988136       1 daemon_controller.go:974] Nodes needing daemon pods for daemon set calico-node: [capz-qby011-mp-0000000], creating 1
I0906 23:59:48.988651       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:1, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0be159d3aed8a17, ext:136643882248, loc:(*time.Location)(0x6f10040)}}
I0906 23:59:48.988678       1 daemon_controller.go:974] Nodes needing daemon pods for daemon set kube-proxy: [capz-qby011-mp-0000000], creating 1
I0906 23:59:48.996347       1 ttl_controller.go:275] "Changed ttl annotation" node="capz-qby011-mp-0000000" new_ttl="0s"
... skipping 190 lines ...
I0907 00:00:01.225283       1 controller.go:690] Syncing backends for all LB services.
I0907 00:00:01.225307       1 controller.go:728] Running updateLoadBalancerHosts(len(services)==0, workers==1)
I0907 00:00:01.225321       1 controller.go:753] Finished updateLoadBalancerHosts
I0907 00:00:01.225327       1 controller.go:694] Successfully updated 0 out of 0 load balancers to direct traffic to the updated set of nodes
I0907 00:00:01.225334       1 controller.go:686] It took 5.8502e-05 seconds to finish syncNodes
I0907 00:00:01.225456       1 controller_utils.go:205] "Added taint to node" taint=[] node="capz-qby011-mp-0000001"
I0907 00:00:01.225909       1 topologycache.go:183] Ignoring node capz-qby011-mp-0000000 because it is not ready: [{MemoryPressure False 2022-09-06 23:59:59 +0000 UTC 2022-09-06 23:59:48 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2022-09-06 23:59:59 +0000 UTC 2022-09-06 23:59:48 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2022-09-06 23:59:59 +0000 UTC 2022-09-06 23:59:48 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready False 2022-09-06 23:59:59 +0000 UTC 2022-09-06 23:59:48 +0000 UTC KubeletNotReady container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized}]
I0907 00:00:01.227820       1 topologycache.go:179] Ignoring node capz-qby011-control-plane-4cfgz because it has an excluded label
I0907 00:00:01.227865       1 topologycache.go:215] Insufficient node info for topology hints (1 zones, %!s(int64=2000) CPU, true)
I0907 00:00:01.227759       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-qby011-mp-0000001"
I0907 00:00:01.238033       1 controller_utils.go:217] "Made sure that node has no taint" node="capz-qby011-mp-0000001" taint=[&Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoSchedule,TimeAdded:<nil>,}]
I0907 00:00:01.238296       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-qby011-mp-0000001"
I0907 00:00:02.797824       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-qby011-mp-0000001"
... skipping 14 lines ...
I0907 00:00:03.321197       1 daemon_controller.go:1119] Updating daemon set status
I0907 00:00:03.321264       1 daemon_controller.go:1179] Finished syncing daemon set "kube-system/calico-node" (1.567948ms)
I0907 00:00:03.632803       1 azure_instances.go:240] InstanceShutdownByProviderID gets power status "running" for node "capz-qby011-mp-0000000"
I0907 00:00:03.632855       1 azure_instances.go:251] InstanceShutdownByProviderID gets provisioning state "Creating" for node "capz-qby011-mp-0000000"
I0907 00:00:03.926558       1 gc_controller.go:221] GC'ing orphaned
I0907 00:00:03.926771       1 gc_controller.go:290] GC'ing unscheduled pods which are terminating.
I0907 00:00:03.942161       1 node_lifecycle_controller.go:1084] ReadyCondition for Node capz-qby011-mp-0000001 transitioned from &NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:2022-09-06 23:59:41 +0000 UTC,LastTransitionTime:2022-09-06 23:59:31 +0000 UTC,Reason:KubeletNotReady,Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized,} to &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-09-07 00:00:01 +0000 UTC,LastTransitionTime:2022-09-07 00:00:01 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,}
I0907 00:00:03.942237       1 node_lifecycle_controller.go:1092] Node capz-qby011-mp-0000001 ReadyCondition updated. Updating timestamp.
I0907 00:00:03.950852       1 node_lifecycle_controller.go:938] Node capz-qby011-mp-0000001 is healthy again, removing all taints
I0907 00:00:03.951189       1 node_lifecycle_controller.go:1092] Node capz-qby011-mp-0000000 ReadyCondition updated. Updating timestamp.
I0907 00:00:03.951409       1 node_lifecycle_controller.go:1259] Controller detected that zone westus2::1 is now in state Normal.
I0907 00:00:03.952370       1 taint_manager.go:466] "Noticed node update" node={nodeName:capz-qby011-mp-0000001}
I0907 00:00:03.952399       1 taint_manager.go:471] "Updating known taints on node" node="capz-qby011-mp-0000001" taints=[]
... skipping 180 lines ...
I0907 00:00:29.746901       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-qby011-mp-0000000"
I0907 00:00:29.784534       1 controller_utils.go:217] "Made sure that node has no taint" node="capz-qby011-mp-0000000" taint=[&Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoSchedule,TimeAdded:<nil>,}]
I0907 00:00:29.787287       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-qby011-mp-0000000"
I0907 00:00:30.404006       1 httplog.go:131] "HTTP" verb="GET" URI="/healthz" latency="90.303µs" userAgent="kube-probe/1.26+" audit-ID="" srcIP="127.0.0.1:48196" resp=200
I0907 00:00:31.923379       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-qby011-mp-0000001"
I0907 00:00:33.955936       1 node_lifecycle_controller.go:1092] Node capz-qby011-mp-0000001 ReadyCondition updated. Updating timestamp.
I0907 00:00:33.956026       1 node_lifecycle_controller.go:1084] ReadyCondition for Node capz-qby011-mp-0000000 transitioned from &NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:2022-09-07 00:00:09 +0000 UTC,LastTransitionTime:2022-09-06 23:59:48 +0000 UTC,Reason:KubeletNotReady,Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized,} to &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-09-07 00:00:29 +0000 UTC,LastTransitionTime:2022-09-07 00:00:29 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,}
I0907 00:00:33.956282       1 node_lifecycle_controller.go:1092] Node capz-qby011-mp-0000000 ReadyCondition updated. Updating timestamp.
I0907 00:00:33.968196       1 taint_manager.go:466] "Noticed node update" node={nodeName:capz-qby011-mp-0000000}
I0907 00:00:33.968231       1 taint_manager.go:471] "Updating known taints on node" node="capz-qby011-mp-0000000" taints=[]
I0907 00:00:33.968251       1 taint_manager.go:492] "All taints were removed from the node. Cancelling all evictions..." node="capz-qby011-mp-0000000"
I0907 00:00:33.968817       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-qby011-mp-0000000"
I0907 00:00:33.969103       1 node_lifecycle_controller.go:938] Node capz-qby011-mp-0000000 is healthy again, removing all taints
... skipping 7 lines ...
I0907 00:00:35.296217       1 replica_set.go:577] "Too few replicas" replicaSet="kube-system/csi-azurefile-controller-7847f46f86" need=2 creating=2
I0907 00:00:35.295289       1 event.go:294] "Event occurred" object="kube-system/csi-azurefile-controller" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set csi-azurefile-controller-7847f46f86 to 2"
I0907 00:00:35.295314       1 deployment_controller.go:222] "ReplicaSet added" replicaSet="kube-system/csi-azurefile-controller-7847f46f86"
I0907 00:00:35.304809       1 deployment_util.go:775] Deployment "csi-azurefile-controller" timed out (false) [last progress check: 2022-09-07 00:00:35.294769248 +0000 UTC m=+182.950005485 - now: 2022-09-07 00:00:35.304802656 +0000 UTC m=+182.960038893]
I0907 00:00:35.306344       1 deployment_controller.go:183] "Updating deployment" deployment="kube-system/csi-azurefile-controller"
I0907 00:00:35.311919       1 deployment_controller.go:585] "Finished syncing deployment" deployment="kube-system/csi-azurefile-controller" duration="39.924325ms"
I0907 00:00:35.312144       1 deployment_controller.go:497] "Error syncing deployment" deployment="kube-system/csi-azurefile-controller" err="Operation cannot be fulfilled on deployments.apps \"csi-azurefile-controller\": the object has been modified; please apply your changes to the latest version and try again"
I0907 00:00:35.312308       1 deployment_controller.go:583] "Started syncing deployment" deployment="kube-system/csi-azurefile-controller" startTime="2022-09-07 00:00:35.312289686 +0000 UTC m=+182.967526023"
I0907 00:00:35.315039       1 deployment_util.go:775] Deployment "csi-azurefile-controller" timed out (false) [last progress check: 2022-09-07 00:00:35 +0000 UTC - now: 2022-09-07 00:00:35.31503237 +0000 UTC m=+182.970268607]
I0907 00:00:35.326070       1 disruption.go:479] addPod called on pod "csi-azurefile-controller-7847f46f86-tqnsz"
I0907 00:00:35.326365       1 disruption.go:570] No PodDisruptionBudgets found for pod csi-azurefile-controller-7847f46f86-tqnsz, PodDisruptionBudget controller will avoid syncing.
I0907 00:00:35.330706       1 disruption.go:482] No matching pdb for pod "csi-azurefile-controller-7847f46f86-tqnsz"
I0907 00:00:35.326683       1 replica_set.go:394] Pod csi-azurefile-controller-7847f46f86-tqnsz created: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"csi-azurefile-controller-7847f46f86-tqnsz", GenerateName:"csi-azurefile-controller-7847f46f86-", Namespace:"kube-system", SelfLink:"", UID:"ac7f0e0a-2080-43ca-8586-c1e79d50cc11", ResourceVersion:"911", Generation:0, CreationTimestamp:time.Date(2022, time.September, 7, 0, 0, 35, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"csi-azurefile-controller", "pod-template-hash":"7847f46f86"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"apps/v1", Kind:"ReplicaSet", Name:"csi-azurefile-controller-7847f46f86", UID:"49379917-dbe3-47a2-962d-0f64321fdf6c", Controller:(*bool)(0xc0028bd217), BlockOwnerDeletion:(*bool)(0xc0028bd218)}}, Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.September, 7, 0, 0, 35, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0028be1c8), Subresource:""}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"socket-dir", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(0xc0028be1e0), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), 
Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"azure-cred", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0028be1f8), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), 
Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"kube-api-access-8ntxn", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(0xc0026d7380), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"csi-provisioner", Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-provisioner:v3.2.0", Command:[]string(nil), Args:[]string{"-v=2", "--csi-address=$(ADDRESS)", "--leader-election", "--leader-election-namespace=kube-system", "--timeout=300s", "--extra-create-metadata=true"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, 
Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:524288000, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-8ntxn", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"csi-attacher", Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-attacher:v3.5.0", Command:[]string(nil), Args:[]string{"-v=2", "-csi-address=$(ADDRESS)", "-timeout=120s", "--leader-election", "--leader-election-namespace=kube-system"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:524288000, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500Mi", Format:"BinarySI"}}, 
Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-8ntxn", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"csi-snapshotter", Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-snapshotter:v5.0.1", Command:[]string(nil), Args:[]string{"-v=2", "-csi-address=$(ADDRESS)", "--leader-election", "--leader-election-namespace=kube-system"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:104857600, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", 
Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-8ntxn", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"csi-resizer", Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-resizer:v1.5.0", Command:[]string(nil), Args:[]string{"-csi-address=$(ADDRESS)", "-v=2", "--leader-election", "--leader-election-namespace=kube-system", "-handle-volume-inuse-error=false", "-feature-gates=RecoverVolumeExpansionFailure=true", "-timeout=120s"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:524288000, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, 
v1.VolumeMount{Name:"kube-api-access-8ntxn", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"liveness-probe", Image:"mcr.microsoft.com/oss/kubernetes-csi/livenessprobe:v2.7.0", Command:[]string(nil), Args:[]string{"--csi-address=/csi/csi.sock", "--probe-timeout=3s", "--health-port=29612", "--v=2"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:104857600, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-8ntxn", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), 
TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"azurefile", Image:"mcr.microsoft.com/k8s/csi/azurefile-csi:latest", Command:[]string(nil), Args:[]string{"--v=5", "--endpoint=$(CSI_ENDPOINT)", "--metrics-address=0.0.0.0:29614", "--user-agent-suffix=OSS-kubectl"}, WorkingDir:"", Ports:[]v1.ContainerPort{v1.ContainerPort{Name:"healthz", HostPort:29612, ContainerPort:29612, Protocol:"TCP", HostIP:""}, v1.ContainerPort{Name:"metrics", HostPort:29614, ContainerPort:29614, Protocol:"TCP", HostIP:""}}, EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"AZURE_CREDENTIAL_FILE", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc0026d74a0)}, v1.EnvVar{Name:"CSI_ENDPOINT", Value:"unix:///csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:209715200, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"azure-cred", ReadOnly:false, MountPath:"/etc/kubernetes/", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-8ntxn", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, 
VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(0xc002e5c1c0), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0028bd5c0), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"csi-azurefile-controller-sa", DeprecatedServiceAccount:"csi-azurefile-controller-sa", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0001d3490), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node-role.kubernetes.io/master", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node-role.kubernetes.io/controlplane", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node-role.kubernetes.io/control-plane", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0028bd630)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0028bd650)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-cluster-critical", Priority:(*int32)(0xc0028bd658), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), 
RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0028bd65c), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc002e358f0), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil), OS:(*v1.PodOS)(nil), HostUsers:(*bool)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition(nil), Message:"", Reason:"", NominatedNodeName:"", HostIP:"", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:<nil>, InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus(nil), QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}.
I0907 00:00:35.332878       1 taint_manager.go:431] "Noticed pod update" pod="kube-system/csi-azurefile-controller-7847f46f86-tqnsz"
I0907 00:00:35.333387       1 pvc_protection_controller.go:380] "Enqueuing PVCs for Pod" pod="kube-system/csi-azurefile-controller-7847f46f86-tqnsz" podUID=ac7f0e0a-2080-43ca-8586-c1e79d50cc11
I0907 00:00:35.334720       1 deployment_controller.go:183] "Updating deployment" deployment="kube-system/csi-azurefile-controller"
I0907 00:00:35.335531       1 deployment_controller.go:585] "Finished syncing deployment" deployment="kube-system/csi-azurefile-controller" duration="23.229612ms"
I0907 00:00:35.335580       1 deployment_controller.go:583] "Started syncing deployment" deployment="kube-system/csi-azurefile-controller" startTime="2022-09-07 00:00:35.3355626 +0000 UTC m=+182.990798837"
I0907 00:00:35.336363       1 controller_utils.go:240] Lowered expectations &controller.ControlleeExpectations{add:1, del:0, key:"kube-system/csi-azurefile-controller-7847f46f86", timestamp:time.Time{wall:0xc0be15a8d19dc63c, ext:182950788909, loc:(*time.Location)(0x6f10040)}}
... skipping 12 lines ...
I0907 00:00:35.362305       1 event.go:294] "Event occurred" object="kube-system/csi-azurefile-controller-7847f46f86" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: csi-azurefile-controller-7847f46f86-7w5rv"
I0907 00:00:35.362947       1 disruption.go:479] addPod called on pod "csi-azurefile-controller-7847f46f86-7w5rv"
I0907 00:00:35.363153       1 disruption.go:570] No PodDisruptionBudgets found for pod csi-azurefile-controller-7847f46f86-7w5rv, PodDisruptionBudget controller will avoid syncing.
I0907 00:00:35.363396       1 disruption.go:482] No matching pdb for pod "csi-azurefile-controller-7847f46f86-7w5rv"
I0907 00:00:35.363580       1 taint_manager.go:431] "Noticed pod update" pod="kube-system/csi-azurefile-controller-7847f46f86-7w5rv"
I0907 00:00:35.363785       1 pvc_protection_controller.go:380] "Enqueuing PVCs for Pod" pod="kube-system/csi-azurefile-controller-7847f46f86-7w5rv" podUID=bebb6069-e640-4e57-8f4d-fd890200ccfc
I0907 00:00:35.359703       1 replica_set.go:394] Pod csi-azurefile-controller-7847f46f86-7w5rv created: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"csi-azurefile-controller-7847f46f86-7w5rv", GenerateName:"csi-azurefile-controller-7847f46f86-", Namespace:"kube-system", SelfLink:"", UID:"bebb6069-e640-4e57-8f4d-fd890200ccfc", ResourceVersion:"914", Generation:0, CreationTimestamp:time.Date(2022, time.September, 7, 0, 0, 35, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"csi-azurefile-controller", "pod-template-hash":"7847f46f86"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"apps/v1", Kind:"ReplicaSet", Name:"csi-azurefile-controller-7847f46f86", UID:"49379917-dbe3-47a2-962d-0f64321fdf6c", Controller:(*bool)(0xc0028fe447), BlockOwnerDeletion:(*bool)(0xc0028fe448)}}, Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.September, 7, 0, 0, 35, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0028bf488), Subresource:""}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"socket-dir", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(0xc0028bf4a0), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), 
Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"azure-cred", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0028bf4b8), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), 
Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"kube-api-access-pgf4f", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(0xc0026d7c40), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"csi-provisioner", Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-provisioner:v3.2.0", Command:[]string(nil), Args:[]string{"-v=2", "--csi-address=$(ADDRESS)", "--leader-election", "--leader-election-namespace=kube-system", "--timeout=300s", "--extra-create-metadata=true"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, 
Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:524288000, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-pgf4f", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"csi-attacher", Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-attacher:v3.5.0", Command:[]string(nil), Args:[]string{"-v=2", "-csi-address=$(ADDRESS)", "-timeout=120s", "--leader-election", "--leader-election-namespace=kube-system"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:524288000, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500Mi", Format:"BinarySI"}}, 
Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-pgf4f", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"csi-snapshotter", Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-snapshotter:v5.0.1", Command:[]string(nil), Args:[]string{"-v=2", "-csi-address=$(ADDRESS)", "--leader-election", "--leader-election-namespace=kube-system"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:104857600, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", 
Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-pgf4f", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"csi-resizer", Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-resizer:v1.5.0", Command:[]string(nil), Args:[]string{"-csi-address=$(ADDRESS)", "-v=2", "--leader-election", "--leader-election-namespace=kube-system", "-handle-volume-inuse-error=false", "-feature-gates=RecoverVolumeExpansionFailure=true", "-timeout=120s"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:524288000, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, 
v1.VolumeMount{Name:"kube-api-access-pgf4f", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"liveness-probe", Image:"mcr.microsoft.com/oss/kubernetes-csi/livenessprobe:v2.7.0", Command:[]string(nil), Args:[]string{"--csi-address=/csi/csi.sock", "--probe-timeout=3s", "--health-port=29612", "--v=2"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:104857600, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-pgf4f", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), 
TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"azurefile", Image:"mcr.microsoft.com/k8s/csi/azurefile-csi:latest", Command:[]string(nil), Args:[]string{"--v=5", "--endpoint=$(CSI_ENDPOINT)", "--metrics-address=0.0.0.0:29614", "--user-agent-suffix=OSS-kubectl"}, WorkingDir:"", Ports:[]v1.ContainerPort{v1.ContainerPort{Name:"healthz", HostPort:29612, ContainerPort:29612, Protocol:"TCP", HostIP:""}, v1.ContainerPort{Name:"metrics", HostPort:29614, ContainerPort:29614, Protocol:"TCP", HostIP:""}}, EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"AZURE_CREDENTIAL_FILE", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc0026d7d60)}, v1.EnvVar{Name:"CSI_ENDPOINT", Value:"unix:///csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:209715200, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"azure-cred", ReadOnly:false, MountPath:"/etc/kubernetes/", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-pgf4f", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, 
VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(0xc002ec7a00), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0028fe7f0), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"csi-azurefile-controller-sa", DeprecatedServiceAccount:"csi-azurefile-controller-sa", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000239420), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node-role.kubernetes.io/master", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node-role.kubernetes.io/controlplane", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node-role.kubernetes.io/control-plane", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0028fe860)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0028fe880)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-cluster-critical", Priority:(*int32)(0xc0028fe888), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), 
RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0028fe88c), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc002ed6f80), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil), OS:(*v1.PodOS)(nil), HostUsers:(*bool)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition(nil), Message:"", Reason:"", NominatedNodeName:"", HostIP:"", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:<nil>, InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus(nil), QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}.
I0907 00:00:35.369272       1 controller_utils.go:240] Lowered expectations &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/csi-azurefile-controller-7847f46f86", timestamp:time.Time{wall:0xc0be15a8d19dc63c, ext:182950788909, loc:(*time.Location)(0x6f10040)}}
I0907 00:00:35.371188       1 replica_set.go:457] Pod csi-azurefile-controller-7847f46f86-7w5rv updated, objectMeta {Name:csi-azurefile-controller-7847f46f86-7w5rv GenerateName:csi-azurefile-controller-7847f46f86- Namespace:kube-system SelfLink: UID:bebb6069-e640-4e57-8f4d-fd890200ccfc ResourceVersion:914 Generation:0 CreationTimestamp:2022-09-07 00:00:35 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[app:csi-azurefile-controller pod-template-hash:7847f46f86] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:csi-azurefile-controller-7847f46f86 UID:49379917-dbe3-47a2-962d-0f64321fdf6c Controller:0xc0028fe447 BlockOwnerDeletion:0xc0028fe448}] Finalizers:[] ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-09-07 00:00:35 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"49379917-dbe3-47a2-962d-0f64321fdf6c\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"azurefile\"}":{".":{},"f:args":{},"f:env":{".":{},"k:{\"name\":\"AZURE_CREDENTIAL_FILE\"}":{".":{},"f:name":{},"f:valueFrom":{".":{},"f:configMapKeyRef":{}}},"k:{\"name\":\"CSI_ENDPOINT\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":29612,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:hostPort":{},"f:name":{},"f:protocol":{}},"k:{\"containerPort\":29614,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:hostPort":{},"f:name":{},"f:protocol":{}}},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:vo
lumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}},"k:{\"mountPath\":\"/etc/kubernetes/\"}":{".":{},"f:mountPath":{},"f:name":{}}}},"k:{\"name\":\"csi-attacher\"}":{".":{},"f:args":{},"f:env":{".":{},"k:{\"name\":\"ADDRESS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}}}},"k:{\"name\":\"csi-provisioner\"}":{".":{},"f:args":{},"f:env":{".":{},"k:{\"name\":\"ADDRESS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}}}},"k:{\"name\":\"csi-resizer\"}":{".":{},"f:args":{},"f:env":{".":{},"k:{\"name\":\"ADDRESS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}}}},"k:{\"name\":\"csi-snapshotter\"}":{".":{},"f:args":{},"f:env":{".":{},"k:{\"name\":\"ADDRESS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}}}},"k:{\"name\":\"liveness-probe\"}":{".":{},"f:args":{},"f:image":{},"f:image
PullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:hostNetwork":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{},"f:volumes":{".":{},"k:{\"name\":\"azure-cred\"}":{".":{},"f:hostPath":{".":{},"f:path":{},"f:type":{}},"f:name":{}},"k:{\"name\":\"socket-dir\"}":{".":{},"f:emptyDir":{},"f:name":{}}}}} Subresource:}]} -> {Name:csi-azurefile-controller-7847f46f86-7w5rv GenerateName:csi-azurefile-controller-7847f46f86- Namespace:kube-system SelfLink: UID:bebb6069-e640-4e57-8f4d-fd890200ccfc ResourceVersion:917 Generation:0 CreationTimestamp:2022-09-07 00:00:35 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[app:csi-azurefile-controller pod-template-hash:7847f46f86] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:csi-azurefile-controller-7847f46f86 UID:49379917-dbe3-47a2-962d-0f64321fdf6c Controller:0xc002f0ecd7 BlockOwnerDeletion:0xc002f0ecd8}] Finalizers:[] ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-09-07 00:00:35 +0000 UTC FieldsType:FieldsV1 
FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"49379917-dbe3-47a2-962d-0f64321fdf6c\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"azurefile\"}":{".":{},"f:args":{},"f:env":{".":{},"k:{\"name\":\"AZURE_CREDENTIAL_FILE\"}":{".":{},"f:name":{},"f:valueFrom":{".":{},"f:configMapKeyRef":{}}},"k:{\"name\":\"CSI_ENDPOINT\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":29612,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:hostPort":{},"f:name":{},"f:protocol":{}},"k:{\"containerPort\":29614,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:hostPort":{},"f:name":{},"f:protocol":{}}},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}},"k:{\"mountPath\":\"/etc/kubernetes/\"}":{".":{},"f:mountPath":{},"f:name":{}}}},"k:{\"name\":\"csi-attacher\"}":{".":{},"f:args":{},"f:env":{".":{},"k:{\"name\":\"ADDRESS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}}}},"k:{\"name\":\"csi-provisioner\"}":{".":{},"f:args":{},"f:env":{".":{},"k:{\"name\":\"ADDRESS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests
":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}}}},"k:{\"name\":\"csi-resizer\"}":{".":{},"f:args":{},"f:env":{".":{},"k:{\"name\":\"ADDRESS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}}}},"k:{\"name\":\"csi-snapshotter\"}":{".":{},"f:args":{},"f:env":{".":{},"k:{\"name\":\"ADDRESS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}}}},"k:{\"name\":\"liveness-probe\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:hostNetwork":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{},"f:volumes":{".":{},"k:{\"name\":\"azure-cred\"}":{".":{},"f:hostPath":{".":{},"f:path":{},"f:type":{}},"f:name":{}},"k:{\"name\":\"socket-dir\"}":{".":{},"f:emptyDir":{},"f:name":{}}}}} Subresource:}]}.
I0907 00:00:35.371537       1 disruption.go:494] updatePod called on pod "csi-azurefile-controller-7847f46f86-7w5rv"
I0907 00:00:35.371700       1 disruption.go:570] No PodDisruptionBudgets found for pod csi-azurefile-controller-7847f46f86-7w5rv, PodDisruptionBudget controller will avoid syncing.
I0907 00:00:35.371824       1 disruption.go:497] No matching pdb for pod "csi-azurefile-controller-7847f46f86-7w5rv"
I0907 00:00:35.372019       1 taint_manager.go:431] "Noticed pod update" pod="kube-system/csi-azurefile-controller-7847f46f86-7w5rv"
... skipping 1573 lines ...
I0907 00:05:19.994534       1 replica_set.go:577] "Too few replicas" replicaSet="azurefile-1563/azurefile-volume-tester-lrdjz-76d7bf9df4" need=1 creating=1
I0907 00:05:19.992211       1 event.go:294] "Event occurred" object="azurefile-1563/azurefile-volume-tester-lrdjz" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set azurefile-volume-tester-lrdjz-76d7bf9df4 to 1"
I0907 00:05:19.992236       1 deployment_controller.go:222] "ReplicaSet added" replicaSet="azurefile-1563/azurefile-volume-tester-lrdjz-76d7bf9df4"
I0907 00:05:19.998456       1 deployment_controller.go:183] "Updating deployment" deployment="azurefile-1563/azurefile-volume-tester-lrdjz"
I0907 00:05:19.998813       1 deployment_util.go:775] Deployment "azurefile-volume-tester-lrdjz" timed out (false) [last progress check: 2022-09-07 00:05:19.991414631 +0000 UTC m=+467.646650868 - now: 2022-09-07 00:05:19.998803859 +0000 UTC m=+467.654040096]
I0907 00:05:20.003768       1 deployment_controller.go:585] "Finished syncing deployment" deployment="azurefile-1563/azurefile-volume-tester-lrdjz" duration="16.084933ms"
I0907 00:05:20.004282       1 deployment_controller.go:497] "Error syncing deployment" deployment="azurefile-1563/azurefile-volume-tester-lrdjz" err="Operation cannot be fulfilled on deployments.apps \"azurefile-volume-tester-lrdjz\": the object has been modified; please apply your changes to the latest version and try again"
I0907 00:05:20.003766       1 replica_set.go:394] Pod azurefile-volume-tester-lrdjz-76d7bf9df4-mhkb5 created: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"azurefile-volume-tester-lrdjz-76d7bf9df4-mhkb5", GenerateName:"azurefile-volume-tester-lrdjz-76d7bf9df4-", Namespace:"azurefile-1563", SelfLink:"", UID:"3d828ada-77cf-47e2-b021-9df4ef7fb338", ResourceVersion:"1945", Generation:0, CreationTimestamp:time.Date(2022, time.September, 7, 0, 5, 19, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"azurefile-volume-tester-5199948958991797301", "pod-template-hash":"76d7bf9df4"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"apps/v1", Kind:"ReplicaSet", Name:"azurefile-volume-tester-lrdjz-76d7bf9df4", UID:"4ca52cbe-f381-4990-9468-18b7c95f18d0", Controller:(*bool)(0xc000f27c77), BlockOwnerDeletion:(*bool)(0xc000f27c78)}}, Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.September, 7, 0, 5, 19, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00261f128), Subresource:""}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"test-volume-1", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(0xc00261f140), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), 
CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"kube-api-access-lbfqc", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(0xc002a64320), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), 
CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"volume-tester", Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-2", Command:[]string{"/bin/sh"}, Args:[]string{"-c", "echo 'hello world' >> /mnt/test-1/data && while true; do sleep 100; done"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"test-volume-1", ReadOnly:false, MountPath:"/mnt/test-1", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-lbfqc", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc000f27d58), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0003b0cb0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", 
Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc000f27d90)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc000f27db0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc000f27db8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc000f27dbc), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc001f00fb0), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil), OS:(*v1.PodOS)(nil), HostUsers:(*bool)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition(nil), Message:"", Reason:"", NominatedNodeName:"", HostIP:"", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:<nil>, InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus(nil), QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}.
I0907 00:05:20.004618       1 deployment_controller.go:583] "Started syncing deployment" deployment="azurefile-1563/azurefile-volume-tester-lrdjz" startTime="2022-09-07 00:05:20.00451139 +0000 UTC m=+467.659747727"
I0907 00:05:20.004532       1 controller_utils.go:240] Lowered expectations &controller.ControlleeExpectations{add:0, del:0, key:"azurefile-1563/azurefile-volume-tester-lrdjz-76d7bf9df4", timestamp:time.Time{wall:0xc0be15effb1e66dc, ext:467647084393, loc:(*time.Location)(0x6f10040)}}
I0907 00:05:20.004041       1 disruption.go:479] addPod called on pod "azurefile-volume-tester-lrdjz-76d7bf9df4-mhkb5"
I0907 00:05:20.005137       1 disruption.go:570] No PodDisruptionBudgets found for pod azurefile-volume-tester-lrdjz-76d7bf9df4-mhkb5, PodDisruptionBudget controller will avoid syncing.
I0907 00:05:20.004077       1 taint_manager.go:431] "Noticed pod update" pod="azurefile-1563/azurefile-volume-tester-lrdjz-76d7bf9df4-mhkb5"
... skipping 1255 lines ...
I0907 00:08:12.136750       1 namespace_controller.go:180] Finished syncing namespace "azurefile-6611" (163.364383ms)
2022/09/07 00:08:12 ===================================================

JUnit report was created: /logs/artifacts/junit_01.xml

Ran 6 of 34 Specs in 340.358 seconds
SUCCESS! -- 6 Passed | 0 Failed | 0 Pending | 28 Skipped
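The pass/fail/skip counts in the Ginkgo summary above are also recorded in the JUnit report the run mentions (`/logs/artifacts/junit_01.xml`). A minimal sketch for extracting those counts, assuming the standard JUnit XML layout Ginkgo writes (the sample string below stands in for the real artifact file):

```python
import xml.etree.ElementTree as ET

# Sample JUnit XML mirroring the shape Ginkgo writes; in a real run you
# would read /logs/artifacts/junit_01.xml instead of this string.
sample = """<testsuite name="e2e" tests="34" failures="0" errors="0">
  <testcase name="spec-1" time="1.0"/>
  <testcase name="spec-2" time="2.0"><skipped/></testcase>
</testsuite>"""

def summarize(xml_text):
    """Return total/failure/skipped counts from a JUnit testsuite element."""
    suite = ET.fromstring(xml_text)
    total = int(suite.get("tests", 0))
    failures = int(suite.get("failures", 0))
    # Skipped specs appear as <skipped/> children of <testcase> elements.
    skipped = sum(1 for tc in suite.iter("testcase")
                  if tc.find("skipped") is not None)
    return {"total": total, "failures": failures, "skipped": skipped}

print(summarize(sample))  # -> {'total': 34, 'failures': 0, 'skipped': 1}
```

This kind of one-off parse is handy when triaging many Prow runs without opening each job page.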

You're using deprecated Ginkgo functionality:
=============================================
Ginkgo 2.0 is under active development and will introduce several new features, improvements, and a small handful of breaking changes.
A release candidate for 2.0 is now available and 2.0 should GA in Fall 2021.  Please give the RC a try and send us feedback!
  - To learn more, view the migration guide at https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md
... skipping 41 lines ...
INFO: Creating log watcher for controller capz-system/capz-controller-manager, pod capz-controller-manager-858df9cd95-m4ctt, container manager
STEP: Dumping workload cluster default/capz-qby011 logs
Sep  7 00:10:05.120: INFO: Collecting logs for Linux node capz-qby011-control-plane-4cfgz in cluster capz-qby011 in namespace default

Sep  7 00:11:05.122: INFO: Collecting boot logs for AzureMachine capz-qby011-control-plane-4cfgz

Failed to get logs for machine capz-qby011-control-plane-x8knc, cluster default/capz-qby011: open /etc/azure-ssh/azure-ssh: no such file or directory
Sep  7 00:11:06.149: INFO: Collecting logs for Linux node capz-qby011-mp-0000000 in cluster capz-qby011 in namespace default

Sep  7 00:12:06.151: INFO: Collecting boot logs for VMSS instance 0 of scale set capz-qby011-mp-0

Sep  7 00:12:06.500: INFO: Collecting logs for Linux node capz-qby011-mp-0000001 in cluster capz-qby011 in namespace default

Sep  7 00:13:06.502: INFO: Collecting boot logs for VMSS instance 1 of scale set capz-qby011-mp-0

Failed to get logs for machine pool capz-qby011-mp-0, cluster default/capz-qby011: open /etc/azure-ssh/azure-ssh: no such file or directory
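Both "Failed to get logs" lines in this section trace back to the same missing key file, `/etc/azure-ssh/azure-ssh`, and the failure repeats once per machine and machine pool. A minimal sketch of an up-front guard that would surface the problem once, before per-node collection starts (`check_ssh_key` is a hypothetical helper, not part of the CAPZ codebase):

```python
import os

def check_ssh_key(path):
    """Hypothetical pre-flight check for the node-log-collection SSH key.

    Raises early if the key file is missing or empty, so a run like the
    one above fails fast with one clear message instead of logging
    'open ...: no such file or directory' for every machine.
    """
    if not os.path.isfile(path):
        raise FileNotFoundError(f"ssh key {path!r} not found")
    if os.path.getsize(path) == 0:
        raise ValueError(f"ssh key {path!r} is empty")
    return path
```

In this run the check would have failed at startup, turning two late per-machine errors into a single early one.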
STEP: Dumping workload cluster default/capz-qby011 kube-system pod logs
STEP: Creating log watcher for controller kube-system/coredns-84994b8c4-hmqwc, container coredns
STEP: Creating log watcher for controller kube-system/csi-azurefile-node-pv4gx, container azurefile
STEP: Fetching kube-system pod logs took 590.592548ms
STEP: Dumping workload cluster default/capz-qby011 Azure activity log
STEP: Collecting events for Pod kube-system/kube-scheduler-capz-qby011-control-plane-4cfgz
STEP: Collecting events for Pod kube-system/coredns-84994b8c4-hmqwc
STEP: Creating log watcher for controller kube-system/csi-azurefile-controller-7847f46f86-7w5rv, container csi-provisioner
STEP: failed to find events of Pod "kube-scheduler-capz-qby011-control-plane-4cfgz"
STEP: Creating log watcher for controller kube-system/csi-azurefile-controller-7847f46f86-7w5rv, container csi-attacher
STEP: Creating log watcher for controller kube-system/csi-azurefile-controller-7847f46f86-7w5rv, container csi-snapshotter
STEP: Creating log watcher for controller kube-system/csi-azurefile-controller-7847f46f86-7w5rv, container csi-resizer
STEP: Creating log watcher for controller kube-system/csi-azurefile-controller-7847f46f86-7w5rv, container liveness-probe
STEP: Creating log watcher for controller kube-system/csi-azurefile-controller-7847f46f86-7w5rv, container azurefile
STEP: Collecting events for Pod kube-system/csi-azurefile-controller-7847f46f86-7w5rv
... skipping 6 lines ...
STEP: Creating log watcher for controller kube-system/csi-azurefile-controller-7847f46f86-tqnsz, container csi-resizer
STEP: Creating log watcher for controller kube-system/csi-azurefile-controller-7847f46f86-tqnsz, container liveness-probe
STEP: Creating log watcher for controller kube-system/calico-node-dn5f8, container calico-node
STEP: Creating log watcher for controller kube-system/etcd-capz-qby011-control-plane-4cfgz, container etcd
STEP: Collecting events for Pod kube-system/etcd-capz-qby011-control-plane-4cfgz
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-qby011-control-plane-4cfgz, container kube-apiserver
STEP: failed to find events of Pod "etcd-capz-qby011-control-plane-4cfgz"
STEP: Collecting events for Pod kube-system/kube-apiserver-capz-qby011-control-plane-4cfgz
STEP: failed to find events of Pod "kube-apiserver-capz-qby011-control-plane-4cfgz"
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-qby011-control-plane-4cfgz, container kube-controller-manager
STEP: Collecting events for Pod kube-system/kube-controller-manager-capz-qby011-control-plane-4cfgz
STEP: failed to find events of Pod "kube-controller-manager-capz-qby011-control-plane-4cfgz"
STEP: Collecting events for Pod kube-system/calico-node-dn5f8
STEP: Creating log watcher for controller kube-system/kube-proxy-shg9c, container kube-proxy
STEP: Collecting events for Pod kube-system/csi-azurefile-node-pv4gx
STEP: Collecting events for Pod kube-system/kube-proxy-skn5r
STEP: Creating log watcher for controller kube-system/kube-proxy-tcq4x, container kube-proxy
STEP: Collecting events for Pod kube-system/kube-proxy-shg9c
... skipping 38 lines ...