PR: yingchunliu-zte: unmountVolumes check shouldPodRuntimeBeRemoved
Result: FAILURE
Tests: 1 failed / 5 succeeded
Started: 2022-06-22 00:31
Elapsed: 43m2s
Revision: c1f77b354e7291936d44bc81a71a05c46b5cc08c
Refs: 110682

Test Failures


AzureFile CSI Driver End-to-End Tests Dynamic Provisioning should create a volume on demand and resize it [kubernetes.io/azure-file] [file.csi.azure.com] [Windows] 45s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=AzureFile\sCSI\sDriver\sEnd\-to\-End\sTests\sDynamic\sProvisioning\sshould\screate\sa\svolume\son\sdemand\sand\sresize\sit\s\[kubernetes\.io\/azure\-file\]\s\[file\.csi\.azure\.com\]\s\[Windows\]$'
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/dynamic_provisioning_test.go:356
Jun 22 01:11:00.320: newPVCSize(11Gi) is not equal to newPVSize(10GiGi)
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/dynamic_provisioning_test.go:380
stdout/stderr from junit_01.xml
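The assertion compares the PVC's requested size with the PV's reported size as strings, and the doubled unit in `newPVSize(10GiGi)` suggests a `Gi` suffix being appended to a value that already carries one. A minimal Go sketch of a suffix-safe comparison (`normalizeGi` is a hypothetical helper, not part of the test suite):

```go
package main

import (
	"fmt"
	"strings"
)

// normalizeGi trims any trailing "Gi" suffixes before re-appending exactly
// one, so a value that already carries its unit ("10Gi") cannot become
// "10GiGi" when a suffix is blindly concatenated during formatting.
func normalizeGi(size string) string {
	for strings.HasSuffix(size, "Gi") {
		size = strings.TrimSuffix(size, "Gi")
	}
	return size + "Gi"
}

func main() {
	newPVCSize := "11Gi"
	newPVSize := "10Gi" + "Gi" // reproduces the doubled suffix seen in the failure message
	// Compare normalized values instead of raw strings.
	fmt.Println(normalizeGi(newPVCSize) == normalizeGi(newPVSize))
}
```

Even with the suffix normalized, the sizes here genuinely differ (11Gi vs 10Gi), which is consistent with the resize having failed upstream rather than the message alone being garbled.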



5 Passed Tests

28 Skipped Tests

Error lines from build-log.txt

... skipping 81 lines ...
/home/prow/go/src/k8s.io/kubernetes /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed

  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100   154  100   154    0     0   4666      0 --:--:-- --:--:-- --:--:--  4666

100    33  100    33    0     0    340      0 --:--:-- --:--:-- --:--:--   340
Error response from daemon: manifest for capzci.azurecr.io/kube-apiserver:v1.25.0-alpha.1.67_9e320e27222c5b not found: manifest unknown: manifest tagged by "v1.25.0-alpha.1.67_9e320e27222c5b" is not found
Building Kubernetes
make: Entering directory '/home/prow/go/src/k8s.io/kubernetes'
+++ [0622 00:31:51] Verifying Prerequisites....
+++ [0622 00:31:52] Building Docker image kube-build:build-d5e5633090-5-v1.25.0-go1.18.3-bullseye.0
+++ [0622 00:34:13] Creating data container kube-build-data-d5e5633090-5-v1.25.0-go1.18.3-bullseye.0
+++ [0622 00:34:25] Syncing sources to container
... skipping 747 lines ...
certificate.cert-manager.io "selfsigned-cert" deleted
# Create secret for AzureClusterIdentity
./hack/create-identity-secret.sh
make[2]: Entering directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make[2]: Nothing to be done for 'kubectl'.
make[2]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
Error from server (NotFound): secrets "cluster-identity-secret" not found
secret/cluster-identity-secret created
secret/cluster-identity-secret labeled
# Deploy CAPI
curl --retry 3 -sSL https://github.com/kubernetes-sigs/cluster-api/releases/download/v1.1.4/cluster-api-components.yaml | /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/envsubst-v2.0.0-20210730161058-179042472c46 | kubectl apply -f -
namespace/capi-system created
customresourcedefinition.apiextensions.k8s.io/clusterclasses.cluster.x-k8s.io created
... skipping 125 lines ...
# Wait for the kubeconfig to become available.
timeout --foreground 300 bash -c "while ! kubectl get secrets | grep capz-7hzix5-kubeconfig; do sleep 1; done"
capz-7hzix5-kubeconfig                 cluster.x-k8s.io/secret               1      1s
# Get kubeconfig and store it locally.
kubectl get secrets capz-7hzix5-kubeconfig -o json | jq -r .data.value | base64 --decode > ./kubeconfig
timeout --foreground 600 bash -c "while ! kubectl --kubeconfig=./kubeconfig get nodes | grep control-plane; do sleep 1; done"
error: the server doesn't have a resource type "nodes"
capz-7hzix5-control-plane-89kp4   NotReady   control-plane   11s   v1.25.0-alpha.1.67+9e320e27222c5b
run "kubectl --kubeconfig=./kubeconfig ..." to work with the new target cluster
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
Waiting for 1 control plane machine(s), 2 worker machine(s), and  windows machine(s) to become Ready
The connection to the server capz-7hzix5-3d211b9.eastus.cloudapp.azure.com:6443 was refused - did you specify the right host or port?
The connection to the server capz-7hzix5-3d211b9.eastus.cloudapp.azure.com:6443 was refused - did you specify the right host or port?
... skipping 67 lines ...
Pre-Provisioned 
  should use a pre-provisioned volume and mount it as readOnly in a pod [file.csi.azure.com] [Windows]
  /home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/pre_provisioning_test.go:77
STEP: Creating a kubernetes client
Jun 22 01:06:04.120: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/kubeconfig
STEP: Building a namespace api object, basename azurefile
Jun 22 01:06:04.379: INFO: Error listing PodSecurityPolicies; assuming PodSecurityPolicy is disabled: the server could not find the requested resource
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
2022/06/22 01:06:04 Check driver pods if restarts ...
check the driver pods if restarts ...
======================================================================================
2022/06/22 01:06:04 Check successfully
... skipping 180 lines ...
Jun 22 01:06:30.184: INFO: PersistentVolumeClaim pvc-jms6l found but phase is Pending instead of Bound.
Jun 22 01:06:32.217: INFO: PersistentVolumeClaim pvc-jms6l found and phase=Bound (24.427184609s)
STEP: checking the PVC
STEP: validating provisioned PV
STEP: checking the PV
STEP: deploying the pod
STEP: checking that the pods command exits with no error
Jun 22 01:06:32.313: INFO: Waiting up to 15m0s for pod "azurefile-volume-tester-nbtqc" in namespace "azurefile-5194" to be "Succeeded or Failed"
Jun 22 01:06:32.344: INFO: Pod "azurefile-volume-tester-nbtqc": Phase="Pending", Reason="", readiness=false. Elapsed: 30.350919ms
Jun 22 01:06:34.377: INFO: Pod "azurefile-volume-tester-nbtqc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063432s
Jun 22 01:06:36.412: INFO: Pod "azurefile-volume-tester-nbtqc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.098485169s
Jun 22 01:06:38.446: INFO: Pod "azurefile-volume-tester-nbtqc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.132840864s
STEP: Saw pod success
Jun 22 01:06:38.446: INFO: Pod "azurefile-volume-tester-nbtqc" satisfied condition "Succeeded or Failed"
Jun 22 01:06:38.446: INFO: deleting Pod "azurefile-5194"/"azurefile-volume-tester-nbtqc"
Jun 22 01:06:38.496: INFO: Pod azurefile-volume-tester-nbtqc has the following logs: hello world

STEP: Deleting pod azurefile-volume-tester-nbtqc in namespace azurefile-5194
Jun 22 01:06:38.538: INFO: deleting PVC "azurefile-5194"/"pvc-jms6l"
Jun 22 01:06:38.538: INFO: Deleting PersistentVolumeClaim "pvc-jms6l"
... skipping 156 lines ...
Jun 22 01:08:27.195: INFO: PersistentVolumeClaim pvc-vbngx found but phase is Pending instead of Bound.
Jun 22 01:08:29.228: INFO: PersistentVolumeClaim pvc-vbngx found and phase=Bound (22.39724913s)
STEP: checking the PVC
STEP: validating provisioned PV
STEP: checking the PV
STEP: deploying the pod
STEP: checking that the pods command exits with an error
Jun 22 01:08:29.326: INFO: Waiting up to 15m0s for pod "azurefile-volume-tester-2vll9" in namespace "azurefile-156" to be "Error status code"
Jun 22 01:08:29.363: INFO: Pod "azurefile-volume-tester-2vll9": Phase="Pending", Reason="", readiness=false. Elapsed: 36.680811ms
Jun 22 01:08:31.397: INFO: Pod "azurefile-volume-tester-2vll9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.070140053s
Jun 22 01:08:33.431: INFO: Pod "azurefile-volume-tester-2vll9": Phase="Failed", Reason="", readiness=false. Elapsed: 4.104593369s
STEP: Saw pod failure
Jun 22 01:08:33.431: INFO: Pod "azurefile-volume-tester-2vll9" satisfied condition "Error status code"
STEP: checking that pod logs contain expected message
Jun 22 01:08:33.474: INFO: deleting Pod "azurefile-156"/"azurefile-volume-tester-2vll9"
Jun 22 01:08:33.509: INFO: Pod azurefile-volume-tester-2vll9 has the following logs: touch: /mnt/test-1/data: Read-only file system

STEP: Deleting pod azurefile-volume-tester-2vll9 in namespace azurefile-156
Jun 22 01:08:33.552: INFO: deleting PVC "azurefile-156"/"pvc-vbngx"
... skipping 179 lines ...
Jun 22 01:10:21.915: INFO: PersistentVolumeClaim pvc-92mfg found but phase is Pending instead of Bound.
Jun 22 01:10:23.947: INFO: PersistentVolumeClaim pvc-92mfg found and phase=Bound (2.062047284s)
STEP: checking the PVC
STEP: validating provisioned PV
STEP: checking the PV
STEP: deploying the pod
STEP: checking that the pods command exits with no error
Jun 22 01:10:24.047: INFO: Waiting up to 15m0s for pod "azurefile-volume-tester-bbfhq" in namespace "azurefile-2546" to be "Succeeded or Failed"
Jun 22 01:10:24.080: INFO: Pod "azurefile-volume-tester-bbfhq": Phase="Pending", Reason="", readiness=false. Elapsed: 32.968291ms
Jun 22 01:10:26.114: INFO: Pod "azurefile-volume-tester-bbfhq": Phase="Running", Reason="", readiness=true. Elapsed: 2.066311688s
Jun 22 01:10:28.155: INFO: Pod "azurefile-volume-tester-bbfhq": Phase="Running", Reason="", readiness=false. Elapsed: 4.107476827s
Jun 22 01:10:30.189: INFO: Pod "azurefile-volume-tester-bbfhq": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.141216709s
STEP: Saw pod success
Jun 22 01:10:30.189: INFO: Pod "azurefile-volume-tester-bbfhq" satisfied condition "Succeeded or Failed"
STEP: resizing the pvc
STEP: sleep 30s waiting for resize complete
STEP: checking the resizing result
STEP: checking the resizing PV result
Jun 22 01:11:00.320: FAIL: newPVCSize(11Gi) is not equal to newPVSize(10GiGi)

Full Stack Trace
sigs.k8s.io/azurefile-csi-driver/test/e2e.glob..func1.10()
	/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/dynamic_provisioning_test.go:380 +0x25c
sigs.k8s.io/azurefile-csi-driver/test/e2e.TestE2E(0x0?)
	/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/suite_test.go:239 +0x11f
... skipping 22 lines ...
Jun 22 01:11:05.634: INFO: At 2022-06-22 01:10:24 +0000 UTC - event for azurefile-volume-tester-bbfhq: {default-scheduler } Scheduled: Successfully assigned azurefile-2546/azurefile-volume-tester-bbfhq to capz-7hzix5-mp-0000000
Jun 22 01:11:05.634: INFO: At 2022-06-22 01:10:25 +0000 UTC - event for azurefile-volume-tester-bbfhq: {kubelet capz-7hzix5-mp-0000000} Pulled: Container image "k8s.gcr.io/e2e-test-images/busybox:1.29-2" already present on machine
Jun 22 01:11:05.634: INFO: At 2022-06-22 01:10:25 +0000 UTC - event for azurefile-volume-tester-bbfhq: {kubelet capz-7hzix5-mp-0000000} Created: Created container volume-tester
Jun 22 01:11:05.634: INFO: At 2022-06-22 01:10:25 +0000 UTC - event for azurefile-volume-tester-bbfhq: {kubelet capz-7hzix5-mp-0000000} Started: Started container volume-tester
Jun 22 01:11:05.634: INFO: At 2022-06-22 01:10:30 +0000 UTC - event for pvc-92mfg: {volume_expand } ExternalExpanding: CSI migration enabled for kubernetes.io/azure-file; waiting for external resizer to expand the pvc
Jun 22 01:11:05.634: INFO: At 2022-06-22 01:10:30 +0000 UTC - event for pvc-92mfg: {external-resizer file.csi.azure.com } Resizing: External resizer is resizing volume pvc-83935a58-614d-47ca-a727-c2b1ea154658
Jun 22 01:11:05.634: INFO: At 2022-06-22 01:10:30 +0000 UTC - event for pvc-92mfg: {external-resizer file.csi.azure.com } VolumeResizeFailed: resize volume "pvc-83935a58-614d-47ca-a727-c2b1ea154658" by resizer "file.csi.azure.com" failed: rpc error: code = Unimplemented desc = vhd disk volume(capz-7hzix5#f06963a1a31844d1eaf17dc#pvc-83935a58-614d-47ca-a727-c2b1ea154658#pvc-83935a58-614d-47ca-a727-c2b1ea154658#azurefile-2546) is not supported on ControllerExpandVolume
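The VolumeResizeFailed event above shows the external resizer being rejected with Unimplemented because the volume is vhd-backed. A minimal Go sketch of that kind of guard, assuming the driver refuses expansion whenever the volume handle carries a disk component (`expandGuard` and its parameters are hypothetical names, not the driver's actual API):

```go
package main

import "fmt"

// expandGuard sketches the check implied by the event log: a volume whose
// handle includes a vhd disk name is backed by a disk file on the share,
// so resizing the share would not resize the disk, and the driver reports
// the operation as unsupported instead of attempting it.
func expandGuard(volumeID, diskName string) error {
	if diskName != "" {
		return fmt.Errorf("vhd disk volume(%s) is not supported on ControllerExpandVolume", volumeID)
	}
	return nil // plain file-share-backed volumes can be expanded
}

func main() {
	// Hypothetical volume handles for illustration only.
	fmt.Println(expandGuard("rg#account#share#disk.vhd#ns", "disk.vhd"))
	fmt.Println(expandGuard("rg#account#share##ns", ""))
}
```

In the real driver this surfaces as a gRPC status with code Unimplemented, which is exactly what the external-resizer records in the event.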
Jun 22 01:11:05.665: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
Jun 22 01:11:05.665: INFO: 
Jun 22 01:11:05.714: INFO: 
Logging node info for node capz-7hzix5-control-plane-89kp4
Jun 22 01:11:05.750: INFO: Node Info: &Node{ObjectMeta:{capz-7hzix5-control-plane-89kp4    82f612b1-e94b-4f50-bbcd-a08235bf9b04 2232 0 2022-06-22 01:00:40 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_D2s_v3 beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:eastus failure-domain.beta.kubernetes.io/zone:eastus-1 kubernetes.io/arch:amd64 kubernetes.io/hostname:capz-7hzix5-control-plane-89kp4 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node.kubernetes.io/exclude-from-external-load-balancers: node.kubernetes.io/instance-type:Standard_D2s_v3 topology.kubernetes.io/region:eastus topology.kubernetes.io/zone:eastus-1] map[cluster.x-k8s.io/cluster-name:capz-7hzix5 cluster.x-k8s.io/cluster-namespace:default cluster.x-k8s.io/machine:capz-7hzix5-control-plane-qlscx cluster.x-k8s.io/owner-kind:KubeadmControlPlane cluster.x-k8s.io/owner-name:capz-7hzix5-control-plane csi.volume.kubernetes.io/nodeid:{"file.csi.azure.com":"capz-7hzix5-control-plane-89kp4"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///var/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.0.0.4/16 projectcalico.org/IPv4VXLANTunnelAddr:192.168.196.0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubelet Update v1 2022-06-22 01:00:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2022-06-22 01:00:43 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {manager Update v1 2022-06-22 01:02:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cluster.x-k8s.io/cluster-name":{},"f:cluster.x-k8s.io/cluster-namespace":{},"f:cluster.x-k8s.io/machine":{},"f:cluster.x-k8s.io/owner-kind":{},"f:cluster.x-k8s.io/owner-name":{}}}} } {kube-controller-manager Update v1 2022-06-22 01:02:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:taints":{}}} } {Go-http-client Update v1 2022-06-22 01:03:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:projectcalico.org/IPv4Address":{},"f:projectcalico.org/IPv4VXLANTunnelAddr":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-06-22 01:09:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:azure:///subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7hzix5/providers/Microsoft.Compute/virtualMachines/capz-7hzix5-control-plane-89kp4,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/control-plane,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: 
{{4 0} {<nil>} 4 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{133018140672 0} {<nil>}  BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{8344723456 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{4 0} {<nil>} 4 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{119716326407 0} {<nil>} 119716326407 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{8239865856 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-22 01:03:30 +0000 UTC,LastTransitionTime:2022-06-22 01:03:30 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-22 01:09:53 +0000 UTC,LastTransitionTime:2022-06-22 01:00:19 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-22 01:09:53 +0000 UTC,LastTransitionTime:2022-06-22 01:00:19 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-22 01:09:53 +0000 UTC,LastTransitionTime:2022-06-22 01:00:19 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-22 01:09:53 +0000 UTC,LastTransitionTime:2022-06-22 01:02:23 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:capz-7hzix5-control-plane-89kp4,},NodeAddress{Type:InternalIP,Address:10.0.0.4,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:795c5732df1249b48fafff3903a93481,SystemUUID:d9a5593f-8900-534c-9e7a-3f0ef8759688,BootID:10fe235c-2aac-4643-84bb-fc05e65c8645,KernelVersion:5.4.0-1085-azure,OSImage:Ubuntu 18.04.6 LTS,ContainerRuntimeVersion:containerd://1.6.2,KubeletVersion:v1.25.0-alpha.1.67+9e320e27222c5b,KubeProxyVersion:v1.25.0-alpha.1.67+9e320e27222c5b,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/calico/cni@sha256:914823d144204288f881e49b93b6852febfe669074cd4e2a782860981615f521 docker.io/calico/cni:v3.23.0],SizeBytes:110494683,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:6f72b851544986cb0921b53ea655ec04c36131248f16d4ad110cb3ca0c369dc1 registry.k8s.io/etcd:3.5.4-0],SizeBytes:102157811,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:13f53ed1d91e2e11aac476ee9a0269fdda6cc4874eba903efd40daf50c55eee5 k8s.gcr.io/etcd:3.5.3-0],SizeBytes:102143581,},ContainerImage{Names:[mcr.microsoft.com/k8s/csi/azurefile-csi@sha256:d0e18e2b41040f7a0a68324bed4b1cdc94e0d5009ed816f9c00f7ad45f640c67 mcr.microsoft.com/k8s/csi/azurefile-csi:latest],SizeBytes:75743702,},ContainerImage{Names:[docker.io/calico/node@sha256:4763820ecb4d8e82483a2ffabfec7fcded9603318692df210a778d223a4d7474 docker.io/calico/node:v3.23.0],SizeBytes:71573794,},ContainerImage{Names:[docker.io/calico/kube-controllers@sha256:78bc199299f966b0694dc4044501aee2d7ebd6862b2b0a00bca3ee8d3813c82f docker.io/calico/kube-controllers:v3.23.0],SizeBytes:56343954,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:7e75c20c0fb0a334fa364546ece4c11a61a7595ce2e27de265cacb4e7ccc7f9f k8s.gcr.io/kube-proxy:v1.24.2],SizeBytes:39515830,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-proxy:v1.25.0-alpha.1.63_4720f0725c3dad 
k8s.gcr.io/kube-proxy:v1.25.0-alpha.1.63_4720f0725c3dad],SizeBytes:39501122,},ContainerImage{Names:[capzci.azurecr.io/kube-proxy@sha256:e09b43e2783b4187389c42b7a16ede578a3473b61ea4e289e7c331ef04894e4a capzci.azurecr.io/kube-proxy:v1.25.0-alpha.1.67_9e320e27222c5b],SizeBytes:39499245,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:433696d8a90870c405fc2d42020aff0966fb3f1c59bdd1f5077f41335b327c9a k8s.gcr.io/kube-apiserver:v1.24.2],SizeBytes:33795763,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-apiserver:v1.25.0-alpha.1.63_4720f0725c3dad k8s.gcr.io/kube-apiserver:v1.25.0-alpha.1.63_4720f0725c3dad],SizeBytes:33779242,},ContainerImage{Names:[capzci.azurecr.io/kube-apiserver@sha256:a9901512756a5e342dbf1c2430257ca5c55782644430d8430537167358688928 capzci.azurecr.io/kube-apiserver:v1.25.0-alpha.1.67_9e320e27222c5b],SizeBytes:33777548,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:d255427f14c9236088c22cd94eb434d7c6a05f615636eac0b9681566cd142753 k8s.gcr.io/kube-controller-manager:v1.24.2],SizeBytes:31035052,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-controller-manager:v1.25.0-alpha.1.63_4720f0725c3dad k8s.gcr.io/kube-controller-manager:v1.25.0-alpha.1.63_4720f0725c3dad],SizeBytes:31010102,},ContainerImage{Names:[capzci.azurecr.io/kube-controller-manager@sha256:1c570ad57702bb95cbbd40f0c6fd6cb85e274de8b1b5ed50e216d273681f1ad4 capzci.azurecr.io/kube-controller-manager:v1.25.0-alpha.1.67_9e320e27222c5b],SizeBytes:31009186,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-scheduler:v1.25.0-alpha.1.63_4720f0725c3dad k8s.gcr.io/kube-scheduler:v1.25.0-alpha.1.63_4720f0725c3dad],SizeBytes:15533653,},ContainerImage{Names:[capzci.azurecr.io/kube-scheduler@sha256:d63464391d58c58aa2d55cbce0ced8155129d6d1be497f0e424d0913fdcb40eb 
capzci.azurecr.io/kube-scheduler:v1.25.0-alpha.1.67_9e320e27222c5b],SizeBytes:15531817,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:b5bc69ac1e173a58a2b3af11ba65057ff2b71de25d0f93ab947e16714a896a1f k8s.gcr.io/kube-scheduler:v1.24.2],SizeBytes:15488980,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e k8s.gcr.io/coredns/coredns:v1.8.6],SizeBytes:13585107,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes-csi/csi-node-driver-registrar@sha256:2fbd1e1a0538a06f2061afd45975df70c942654aa7f86e920720169ee439c2d6 mcr.microsoft.com/oss/kubernetes-csi/csi-node-driver-registrar:v2.5.1],SizeBytes:9578961,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes-csi/livenessprobe@sha256:31547791294872570393470991481c2477a311031d3a03e0ae54eb164347dc34 mcr.microsoft.com/oss/kubernetes-csi/livenessprobe:v2.7.0],SizeBytes:8689744,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c registry.k8s.io/pause@sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c k8s.gcr.io/pause:3.7 registry.k8s.io/pause:3.7],SizeBytes:311278,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Jun 22 01:11:05.750: INFO: 
... skipping 804 lines ...
I0622 01:01:09.961105       1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca-bundle::/etc/kubernetes/pki/ca.crt,request-header::/etc/kubernetes/pki/front-proxy-ca.crt" certDetail="\"kubernetes\" [] issuer=\"<self>\" (2022-06-22 00:53:29 +0000 UTC to 2032-06-19 00:58:29 +0000 UTC (now=2022-06-22 01:01:09.96108091 +0000 UTC))"
I0622 01:01:09.961288       1 tlsconfig.go:200] "Loaded serving cert" certName="Generated self signed cert" certDetail="\"localhost@1655859668\" [serving] validServingFor=[127.0.0.1,127.0.0.1,localhost] issuer=\"localhost-ca@1655859668\" (2022-06-22 00:01:08 +0000 UTC to 2023-06-22 00:01:08 +0000 UTC (now=2022-06-22 01:01:09.961259527 +0000 UTC))"
I0622 01:01:09.961464       1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1655859669\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1655859669\" (2022-06-22 00:01:08 +0000 UTC to 2023-06-22 00:01:08 +0000 UTC (now=2022-06-22 01:01:09.961433344 +0000 UTC))"
I0622 01:01:09.961494       1 secure_serving.go:210] Serving securely on 127.0.0.1:10257
I0622 01:01:09.961743       1 leaderelection.go:248] attempting to acquire leader lease kube-system/kube-controller-manager...
I0622 01:01:09.962153       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
E0622 01:01:09.962411       1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: Get "https://10.0.0.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=5s": dial tcp 10.0.0.4:6443: connect: connection refused
I0622 01:01:09.962438       1 leaderelection.go:253] failed to acquire lease kube-system/kube-controller-manager
I0622 01:01:09.962480       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
E0622 01:01:13.266141       1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: Get "https://10.0.0.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=5s": dial tcp 10.0.0.4:6443: connect: connection refused
I0622 01:01:13.266175       1 leaderelection.go:253] failed to acquire lease kube-system/kube-controller-manager
E0622 01:01:17.601913       1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: Get "https://10.0.0.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=5s": dial tcp 10.0.0.4:6443: connect: connection refused
I0622 01:01:17.601951       1 leaderelection.go:253] failed to acquire lease kube-system/kube-controller-manager
I0622 01:01:19.462021       1 httplog.go:131] "HTTP" verb="GET" URI="/healthz" latency="135.61µs" userAgent="kube-probe/1.25+" audit-ID="" srcIP="127.0.0.1:48342" resp=200
E0622 01:01:20.948733       1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: Get "https://10.0.0.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=5s": dial tcp 10.0.0.4:6443: connect: connection refused
I0622 01:01:20.948780       1 leaderelection.go:253] failed to acquire lease kube-system/kube-controller-manager
E0622 01:01:23.394270       1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: Get "https://10.0.0.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=5s": dial tcp 10.0.0.4:6443: connect: connection refused
I0622 01:01:23.394321       1 leaderelection.go:253] failed to acquire lease kube-system/kube-controller-manager
E0622 01:01:27.619034       1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: Get "https://10.0.0.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=5s": dial tcp 10.0.0.4:6443: connect: connection refused
I0622 01:01:27.619084       1 leaderelection.go:253] failed to acquire lease kube-system/kube-controller-manager
I0622 01:01:29.461349       1 httplog.go:131] "HTTP" verb="GET" URI="/healthz" latency="143.61µs" userAgent="kube-probe/1.25+" audit-ID="" srcIP="127.0.0.1:48468" resp=200
E0622 01:01:30.756118       1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: Get "https://10.0.0.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=5s": dial tcp 10.0.0.4:6443: connect: connection refused
I0622 01:01:30.756156       1 leaderelection.go:253] failed to acquire lease kube-system/kube-controller-manager
E0622 01:01:33.879904       1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: Get "https://10.0.0.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=5s": dial tcp 10.0.0.4:6443: connect: connection refused
I0622 01:01:33.879968       1 leaderelection.go:253] failed to acquire lease kube-system/kube-controller-manager
E0622 01:01:36.915349       1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: Get "https://10.0.0.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=5s": dial tcp 10.0.0.4:6443: connect: connection refused
I0622 01:01:36.915412       1 leaderelection.go:253] failed to acquire lease kube-system/kube-controller-manager
I0622 01:01:39.460534       1 httplog.go:131] "HTTP" verb="GET" URI="/healthz" latency="87.806µs" userAgent="kube-probe/1.25+" audit-ID="" srcIP="127.0.0.1:48526" resp=200
E0622 01:01:39.877393       1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: Get "https://10.0.0.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=5s": dial tcp 10.0.0.4:6443: connect: connection refused
I0622 01:01:39.877451       1 leaderelection.go:253] failed to acquire lease kube-system/kube-controller-manager
E0622 01:01:42.213968       1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: Get "https://10.0.0.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=5s": dial tcp 10.0.0.4:6443: connect: connection refused
I0622 01:01:42.214386       1 leaderelection.go:253] failed to acquire lease kube-system/kube-controller-manager
E0622 01:01:45.179983       1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: Get "https://10.0.0.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=5s": dial tcp 10.0.0.4:6443: connect: connection refused
I0622 01:01:45.180031       1 leaderelection.go:253] failed to acquire lease kube-system/kube-controller-manager
E0622 01:01:48.574180       1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: Get "https://10.0.0.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=5s": dial tcp 10.0.0.4:6443: connect: connection refused
I0622 01:01:48.574227       1 leaderelection.go:253] failed to acquire lease kube-system/kube-controller-manager
I0622 01:01:49.461662       1 httplog.go:131] "HTTP" verb="GET" URI="/healthz" latency="103.907µs" userAgent="kube-probe/1.25+" audit-ID="" srcIP="127.0.0.1:48594" resp=200
E0622 01:01:52.354030       1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: Get "https://10.0.0.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=5s": dial tcp 10.0.0.4:6443: connect: connection refused
I0622 01:01:52.354082       1 leaderelection.go:253] failed to acquire lease kube-system/kube-controller-manager
E0622 01:01:56.105218       1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: Get "https://10.0.0.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=5s": dial tcp 10.0.0.4:6443: connect: connection refused
I0622 01:01:56.105322       1 leaderelection.go:253] failed to acquire lease kube-system/kube-controller-manager
E0622 01:01:58.902878       1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: Get "https://10.0.0.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=5s": dial tcp 10.0.0.4:6443: connect: connection refused
I0622 01:01:58.902920       1 leaderelection.go:253] failed to acquire lease kube-system/kube-controller-manager
I0622 01:01:59.461777       1 httplog.go:131] "HTTP" verb="GET" URI="/healthz" latency="87.905µs" userAgent="kube-probe/1.25+" audit-ID="" srcIP="127.0.0.1:48646" resp=200
E0622 01:02:00.931957       1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: Get "https://10.0.0.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=5s": dial tcp 10.0.0.4:6443: connect: connection refused
I0622 01:02:00.931993       1 leaderelection.go:253] failed to acquire lease kube-system/kube-controller-manager
E0622 01:02:05.273676       1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: Get "https://10.0.0.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=5s": dial tcp 10.0.0.4:6443: connect: connection refused
I0622 01:02:05.273735       1 leaderelection.go:253] failed to acquire lease kube-system/kube-controller-manager
I0622 01:02:09.463302       1 httplog.go:131] "HTTP" verb="GET" URI="/healthz" latency="102.307µs" userAgent="kube-probe/1.25+" audit-ID="" srcIP="127.0.0.1:48690" resp=200
E0622 01:02:09.653919       1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: Get "https://10.0.0.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=5s": dial tcp 10.0.0.4:6443: connect: connection refused
I0622 01:02:09.653954       1 leaderelection.go:253] failed to acquire lease kube-system/kube-controller-manager
I0622 01:02:13.582587       1 leaderelection.go:352] lock is held by capz-7hzix5-control-plane-89kp4_81768957-52c3-4abe-976d-61c04dfa7e38 and has not yet expired
I0622 01:02:13.582990       1 leaderelection.go:253] failed to acquire lease kube-system/kube-controller-manager
I0622 01:02:15.889483       1 leaderelection.go:352] lock is held by capz-7hzix5-control-plane-89kp4_81768957-52c3-4abe-976d-61c04dfa7e38 and has not yet expired
I0622 01:02:15.889503       1 leaderelection.go:253] failed to acquire lease kube-system/kube-controller-manager
I0622 01:02:18.781109       1 leaderelection.go:352] lock is held by capz-7hzix5-control-plane-89kp4_81768957-52c3-4abe-976d-61c04dfa7e38 and has not yet expired
I0622 01:02:18.781178       1 leaderelection.go:253] failed to acquire lease kube-system/kube-controller-manager
I0622 01:02:19.463322       1 httplog.go:131] "HTTP" verb="GET" URI="/healthz" latency="84.105µs" userAgent="kube-probe/1.25+" audit-ID="" srcIP="127.0.0.1:48906" resp=200
I0622 01:02:21.884095       1 leaderelection.go:352] lock is held by capz-7hzix5-control-plane-89kp4_81768957-52c3-4abe-976d-61c04dfa7e38 and has not yet expired
I0622 01:02:21.884208       1 leaderelection.go:253] failed to acquire lease kube-system/kube-controller-manager
I0622 01:02:24.147484       1 leaderelection.go:352] lock is held by capz-7hzix5-control-plane-89kp4_81768957-52c3-4abe-976d-61c04dfa7e38 and has not yet expired
I0622 01:02:24.147512       1 leaderelection.go:253] failed to acquire lease kube-system/kube-controller-manager
I0622 01:02:26.769893       1 leaderelection.go:352] lock is held by capz-7hzix5-control-plane-89kp4_81768957-52c3-4abe-976d-61c04dfa7e38 and has not yet expired
I0622 01:02:26.769921       1 leaderelection.go:253] failed to acquire lease kube-system/kube-controller-manager
I0622 01:02:29.289444       1 leaderelection.go:258] successfully acquired lease kube-system/kube-controller-manager
I0622 01:02:29.289964       1 event.go:294] "Event occurred" object="kube-system/kube-controller-manager" fieldPath="" kind="Lease" apiVersion="coordination.k8s.io/v1" type="Normal" reason="LeaderElection" message="capz-7hzix5-control-plane-89kp4_75be5bcb-0b75-4e56-aaac-b15bba021750 became leader"
W0622 01:02:29.333155       1 plugins.go:132] WARNING: azure built-in cloud provider is now deprecated. The Azure provider is deprecated and will be removed in a future release. Please use https://github.com/kubernetes-sigs/cloud-provider-azure
I0622 01:02:29.333908       1 azure_auth.go:232] Using AzurePublicCloud environment
I0622 01:02:29.334068       1 azure_auth.go:117] azure: using client_id+client_secret to retrieve access token
I0622 01:02:29.334252       1 azure_interfaceclient.go:63] Azure InterfacesClient (read ops) using rate limit config: QPS=1, bucket=5
... skipping 29 lines ...
I0622 01:02:29.336888       1 reflector.go:255] Listing and watching *v1.ServiceAccount from vendor/k8s.io/client-go/informers/factory.go:134
I0622 01:02:29.337131       1 reflector.go:219] Starting reflector *v1.Secret (22h11m42.415259455s) from vendor/k8s.io/client-go/informers/factory.go:134
I0622 01:02:29.337486       1 reflector.go:255] Listing and watching *v1.Secret from vendor/k8s.io/client-go/informers/factory.go:134
I0622 01:02:29.337360       1 shared_informer.go:255] Waiting for caches to sync for tokens
I0622 01:02:29.337432       1 reflector.go:219] Starting reflector *v1.Node (22h11m42.415259455s) from vendor/k8s.io/client-go/informers/factory.go:134
I0622 01:02:29.337674       1 reflector.go:255] Listing and watching *v1.Node from vendor/k8s.io/client-go/informers/factory.go:134
W0622 01:02:29.380225       1 azure_config.go:53] Failed to get cloud-config from secret: failed to get secret azure-cloud-provider: secrets "azure-cloud-provider" is forbidden: User "system:serviceaccount:kube-system:azure-cloud-provider" cannot get resource "secrets" in API group "" in the namespace "kube-system", skip initializing from secret
I0622 01:02:29.380650       1 controllermanager.go:568] Starting "podgc"
I0622 01:02:29.383665       1 controllermanager.go:597] Started "podgc"
I0622 01:02:29.383689       1 controllermanager.go:568] Starting "csrcleaner"
I0622 01:02:29.383890       1 gc_controller.go:92] Starting GC controller
I0622 01:02:29.383973       1 shared_informer.go:255] Waiting for caches to sync for GC
I0622 01:02:29.386258       1 controllermanager.go:597] Started "csrcleaner"
... skipping 185 lines ...
I0622 01:02:29.553734       1 plugins.go:637] "Loaded volume plugin" pluginName="kubernetes.io/aws-ebs"
I0622 01:02:29.553757       1 plugins.go:637] "Loaded volume plugin" pluginName="kubernetes.io/gce-pd"
I0622 01:02:29.553775       1 plugins.go:637] "Loaded volume plugin" pluginName="kubernetes.io/cinder"
I0622 01:02:29.553792       1 plugins.go:637] "Loaded volume plugin" pluginName="kubernetes.io/storageos"
I0622 01:02:29.553812       1 plugins.go:637] "Loaded volume plugin" pluginName="kubernetes.io/fc"
I0622 01:02:29.553830       1 plugins.go:637] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
I0622 01:02:29.553884       1 csi_plugin.go:262] Cast from VolumeHost to KubeletVolumeHost failed. Skipping CSINode initialization, not running on kubelet
I0622 01:02:29.553900       1 plugins.go:637] "Loaded volume plugin" pluginName="kubernetes.io/csi"
I0622 01:02:29.554044       1 controllermanager.go:597] Started "attachdetach"
I0622 01:02:29.554059       1 controllermanager.go:568] Starting "ttl-after-finished"
I0622 01:02:29.554493       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-7hzix5-control-plane-89kp4"
W0622 01:02:29.554793       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="capz-7hzix5-control-plane-89kp4" does not exist
I0622 01:02:29.554718       1 attach_detach_controller.go:328] Starting attach detach controller
I0622 01:02:29.555152       1 shared_informer.go:255] Waiting for caches to sync for attach detach
I0622 01:02:29.557942       1 controllermanager.go:597] Started "ttl-after-finished"
I0622 01:02:29.558064       1 controllermanager.go:568] Starting "replicationcontroller"
I0622 01:02:29.558387       1 ttlafterfinished_controller.go:109] Starting TTL after finished controller
I0622 01:02:29.558456       1 shared_informer.go:255] Waiting for caches to sync for TTL after finished
... skipping 20 lines ...
I0622 01:02:29.646787       1 plugins.go:637] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
I0622 01:02:29.646803       1 plugins.go:637] "Loaded volume plugin" pluginName="kubernetes.io/rbd"
I0622 01:02:29.646820       1 plugins.go:637] "Loaded volume plugin" pluginName="kubernetes.io/azure-file"
I0622 01:02:29.646836       1 plugins.go:637] "Loaded volume plugin" pluginName="kubernetes.io/flocker"
I0622 01:02:29.646859       1 plugins.go:637] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
I0622 01:02:29.646880       1 plugins.go:637] "Loaded volume plugin" pluginName="kubernetes.io/storageos"
I0622 01:02:29.646909       1 csi_plugin.go:262] Cast from VolumeHost to KubeletVolumeHost failed. Skipping CSINode initialization, not running on kubelet
I0622 01:02:29.646924       1 plugins.go:637] "Loaded volume plugin" pluginName="kubernetes.io/csi"
I0622 01:02:29.647010       1 controllermanager.go:597] Started "persistentvolume-binder"
I0622 01:02:29.647061       1 controllermanager.go:568] Starting "horizontalpodautoscaling"
I0622 01:02:29.647173       1 pv_controller_base.go:311] Starting persistent volume controller
I0622 01:02:29.647219       1 shared_informer.go:255] Waiting for caches to sync for persistent volume
I0622 01:02:29.875400       1 controllermanager.go:597] Started "horizontalpodautoscaling"
... skipping 929 lines ...
I0622 01:03:41.109318       1 deployment_controller.go:288] "ReplicaSet updated" replicaSet="kube-system/coredns-8c797478b"
I0622 01:03:41.109537       1 deployment_controller.go:583] "Started syncing deployment" deployment="kube-system/coredns" startTime="2022-06-22 01:03:41.10951401 +0000 UTC m=+153.037777186"
I0622 01:03:41.109340       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/coredns-8c797478b", timestamp:time.Time{wall:0xc0a4b869a10de6af, ext:82482822627, loc:(*time.Location)(0x6f121e0)}}
I0622 01:03:41.110210       1 replica_set.go:667] Finished syncing ReplicaSet "kube-system/coredns-8c797478b" (875.659µs)
I0622 01:03:41.120625       1 endpointslice_controller.go:319] Finished syncing service "kube-system/kube-dns" endpoint slices. (69.542303ms)
I0622 01:03:41.120911       1 endpointslice_controller.go:319] Finished syncing service "kube-system/kube-dns" endpoint slices. (25.302µs)
W0622 01:03:41.121117       1 endpointslice_controller.go:306] Error syncing endpoint slices for service "kube-system/kube-dns", retrying. Error: EndpointSlice informer cache is out of date
I0622 01:03:41.121441       1 deployment_controller.go:183] "Updating deployment" deployment="kube-system/coredns"
I0622 01:03:41.121767       1 deployment_controller.go:585] "Finished syncing deployment" deployment="kube-system/coredns" duration="12.236527ms"
I0622 01:03:41.121942       1 deployment_controller.go:583] "Started syncing deployment" deployment="kube-system/coredns" startTime="2022-06-22 01:03:41.121804941 +0000 UTC m=+153.050068217"
I0622 01:03:41.122996       1 deployment_util.go:774] Deployment "coredns" timed out (false) [last progress check: 2022-06-22 01:03:41 +0000 UTC - now: 2022-06-22 01:03:41.122982921 +0000 UTC m=+153.051246397]
I0622 01:03:41.123045       1 progress.go:195] Queueing up deployment "coredns" for a progress check after 599s
I0622 01:03:41.123167       1 deployment_controller.go:585] "Finished syncing deployment" deployment="kube-system/coredns" duration="1.317889ms"
... skipping 47 lines ...
I0622 01:03:45.467456       1 controller.go:792] Running updateLoadBalancerHosts(len(services)==0, workers==1)
I0622 01:03:45.467467       1 controller.go:808] Finished updateLoadBalancerHosts
I0622 01:03:45.467477       1 controller.go:735] It took 3.9703e-05 seconds to finish nodeSyncInternal
I0622 01:03:45.467574       1 taint_manager.go:446] "Noticed node update" node={nodeName:capz-7hzix5-mp-0000000}
I0622 01:03:45.467590       1 taint_manager.go:451] "Updating known taints on node" node="capz-7hzix5-mp-0000000" taints=[]
I0622 01:03:45.468156       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-7hzix5-mp-0000000"
W0622 01:03:45.468176       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="capz-7hzix5-mp-0000000" does not exist
I0622 01:03:45.468199       1 topologycache.go:179] Ignoring node capz-7hzix5-control-plane-89kp4 because it has an excluded label
I0622 01:03:45.468334       1 topologycache.go:183] Ignoring node capz-7hzix5-mp-0000000 because it is not ready: [{MemoryPressure False 2022-06-22 01:03:45 +0000 UTC 2022-06-22 01:03:45 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2022-06-22 01:03:45 +0000 UTC 2022-06-22 01:03:45 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2022-06-22 01:03:45 +0000 UTC 2022-06-22 01:03:45 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready False 2022-06-22 01:03:45 +0000 UTC 2022-06-22 01:03:45 +0000 UTC KubeletNotReady [container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized, failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "capz-7hzix5-mp-0000000" not found]}]
I0622 01:03:45.468454       1 topologycache.go:215] Insufficient node info for topology hints (0 zones, %!s(int64=0) CPU, true)
I0622 01:03:45.468958       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc0a4b879b96a23ce, ext:146891520470, loc:(*time.Location)(0x6f121e0)}}
I0622 01:03:45.469035       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:1, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc0a4b87c5bf4d2db, ext:157397292871, loc:(*time.Location)(0x6f121e0)}}
I0622 01:03:45.469047       1 daemon_controller.go:974] Nodes needing daemon pods for daemon set calico-node: [capz-7hzix5-mp-0000000], creating 1
I0622 01:03:45.491610       1 reflector.go:382] vendor/k8s.io/client-go/informers/factory.go:134: forcing resync
I0622 01:03:45.501818       1 pvc_protection_controller.go:380] "Enqueuing PVCs for Pod" pod="kube-system/kube-proxy-gb77z" podUID=86125d51-92e8-4142-9fa3-1087017011e6
... skipping 89 lines ...
I0622 01:03:46.641057       1 controller.go:808] Finished updateLoadBalancerHosts
I0622 01:03:46.641068       1 controller.go:735] It took 2.3501e-05 seconds to finish nodeSyncInternal
I0622 01:03:46.641144       1 taint_manager.go:446] "Noticed node update" node={nodeName:capz-7hzix5-mp-0000001}
I0622 01:03:46.642163       1 taint_manager.go:451] "Updating known taints on node" node="capz-7hzix5-mp-0000001" taints=[]
I0622 01:03:46.643392       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc0a4b87c6262a69c, ext:157505153800, loc:(*time.Location)(0x6f121e0)}}
I0622 01:03:46.644835       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-7hzix5-mp-0000001"
W0622 01:03:46.644883       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="capz-7hzix5-mp-0000001" does not exist
I0622 01:03:46.650959       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:1, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc0a4b87ca6cc7059, ext:158579195989, loc:(*time.Location)(0x6f121e0)}}
I0622 01:03:46.650999       1 daemon_controller.go:974] Nodes needing daemon pods for daemon set calico-node: [capz-7hzix5-mp-0000001], creating 1
I0622 01:03:46.651588       1 topologycache.go:179] Ignoring node capz-7hzix5-control-plane-89kp4 because it has an excluded label
I0622 01:03:46.651617       1 topologycache.go:183] Ignoring node capz-7hzix5-mp-0000000 because it is not ready: [{MemoryPressure False 2022-06-22 01:03:45 +0000 UTC 2022-06-22 01:03:45 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2022-06-22 01:03:45 +0000 UTC 2022-06-22 01:03:45 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2022-06-22 01:03:45 +0000 UTC 2022-06-22 01:03:45 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready False 2022-06-22 01:03:45 +0000 UTC 2022-06-22 01:03:45 +0000 UTC KubeletNotReady [container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized, failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "capz-7hzix5-mp-0000000" not found]}]
I0622 01:03:46.651670       1 topologycache.go:183] Ignoring node capz-7hzix5-mp-0000001 because it is not ready: [{MemoryPressure False 2022-06-22 01:03:46 +0000 UTC 2022-06-22 01:03:46 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2022-06-22 01:03:46 +0000 UTC 2022-06-22 01:03:46 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2022-06-22 01:03:46 +0000 UTC 2022-06-22 01:03:46 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready False 2022-06-22 01:03:46 +0000 UTC 2022-06-22 01:03:46 +0000 UTC KubeletNotReady [container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized, failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "capz-7hzix5-mp-0000001" not found]}]
I0622 01:03:46.651859       1 topologycache.go:215] Insufficient node info for topology hints (0 zones, %!s(int64=0) CPU, true)
I0622 01:03:46.652785       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0a4b87c61f6e657, ext:157498092427, loc:(*time.Location)(0x6f121e0)}}
I0622 01:03:46.652887       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:1, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0a4b87ca6ea25ef, ext:158581143319, loc:(*time.Location)(0x6f121e0)}}
I0622 01:03:46.653157       1 daemon_controller.go:974] Nodes needing daemon pods for daemon set kube-proxy: [capz-7hzix5-mp-0000001], creating 1
I0622 01:03:46.679886       1 controller_utils.go:581] Controller calico-node created pod calico-node-9ssh8
I0622 01:03:46.679941       1 daemon_controller.go:1036] Pods to delete for daemon set calico-node: [], deleting 0
... skipping 386 lines ...
I0622 01:04:16.164853       1 controller.go:697] Ignoring node capz-7hzix5-mp-0000001 with Ready condition status False
I0622 01:04:16.165574       1 controller.go:265] Node changes detected, triggering a full node sync on all loadbalancer services
I0622 01:04:16.165848       1 controller.go:272] Triggering nodeSync
I0622 01:04:16.165348       1 controller_utils.go:205] "Added taint to node" taint=[] node="capz-7hzix5-mp-0000000"
I0622 01:04:16.165455       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-7hzix5-mp-0000000"
I0622 01:04:16.165537       1 topologycache.go:179] Ignoring node capz-7hzix5-control-plane-89kp4 because it has an excluded label
I0622 01:04:16.167699       1 topologycache.go:183] Ignoring node capz-7hzix5-mp-0000001 because it is not ready: [{MemoryPressure False 2022-06-22 01:04:07 +0000 UTC 2022-06-22 01:03:46 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2022-06-22 01:04:07 +0000 UTC 2022-06-22 01:03:46 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2022-06-22 01:04:07 +0000 UTC 2022-06-22 01:03:46 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready False 2022-06-22 01:04:07 +0000 UTC 2022-06-22 01:03:46 +0000 UTC KubeletNotReady container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized}]
I0622 01:04:16.167793       1 topologycache.go:215] Insufficient node info for topology hints (1 zones, %!s(int64=2000) CPU, true)
I0622 01:04:16.166036       1 controller.go:291] nodeSync has been triggered
I0622 01:04:16.167818       1 controller.go:757] Syncing backends for all LB services.
I0622 01:04:16.167893       1 controller.go:792] Running updateLoadBalancerHosts(len(services)==0, workers==1)
I0622 01:04:16.167906       1 controller.go:808] Finished updateLoadBalancerHosts
I0622 01:04:16.167962       1 controller.go:764] Successfully updated 0 out of 0 load balancers to direct traffic to the updated set of nodes
... skipping 36 lines ...
I0622 01:04:17.379044       1 controller.go:735] It took 4.2403e-05 seconds to finish nodeSyncInternal
I0622 01:04:17.379129       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-7hzix5-mp-0000001"
I0622 01:04:17.379195       1 topologycache.go:179] Ignoring node capz-7hzix5-control-plane-89kp4 because it has an excluded label
I0622 01:04:17.393501       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-7hzix5-mp-0000001"
I0622 01:04:17.394927       1 controller_utils.go:217] "Made sure that node has no taint" node="capz-7hzix5-mp-0000001" taint=[&Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoSchedule,TimeAdded:<nil>,}]
I0622 01:04:19.462569       1 httplog.go:131] "HTTP" verb="GET" URI="/healthz" latency="139.109µs" userAgent="kube-probe/1.25+" audit-ID="" srcIP="127.0.0.1:49880" resp=200
I0622 01:04:20.531535       1 node_lifecycle_controller.go:1044] ReadyCondition for Node capz-7hzix5-mp-0000000 transitioned from &NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:2022-06-22 01:04:06 +0000 UTC,LastTransitionTime:2022-06-22 01:03:45 +0000 UTC,Reason:KubeletNotReady,Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized,} to &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-22 01:04:16 +0000 UTC,LastTransitionTime:2022-06-22 01:04:16 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,}
I0622 01:04:20.532229       1 node_lifecycle_controller.go:1052] Node capz-7hzix5-mp-0000000 ReadyCondition updated. Updating timestamp.
I0622 01:04:20.553964       1 node_lifecycle_controller.go:898] Node capz-7hzix5-mp-0000000 is healthy again, removing all taints
I0622 01:04:20.556666       1 node_lifecycle_controller.go:1044] ReadyCondition for Node capz-7hzix5-mp-0000001 transitioned from &NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:2022-06-22 01:04:07 +0000 UTC,LastTransitionTime:2022-06-22 01:03:46 +0000 UTC,Reason:KubeletNotReady,Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized,} to &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-22 01:04:17 +0000 UTC,LastTransitionTime:2022-06-22 01:04:17 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,}
I0622 01:04:20.556790       1 node_lifecycle_controller.go:1052] Node capz-7hzix5-mp-0000001 ReadyCondition updated. Updating timestamp.
I0622 01:04:20.557397       1 taint_manager.go:446] "Noticed node update" node={nodeName:capz-7hzix5-mp-0000000}
I0622 01:04:20.557431       1 taint_manager.go:451] "Updating known taints on node" node="capz-7hzix5-mp-0000000" taints=[]
I0622 01:04:20.557478       1 taint_manager.go:472] "All taints were removed from the node. Cancelling all evictions..." node="capz-7hzix5-mp-0000000"
I0622 01:04:20.558153       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-7hzix5-mp-0000000"
I0622 01:04:20.569916       1 taint_manager.go:446] "Noticed node update" node={nodeName:capz-7hzix5-mp-0000001}
... skipping 16 lines ...
I0622 01:04:21.184815       1 replica_set.go:577] "Too few replicas" replicaSet="kube-system/csi-azurefile-controller-8565959cf4" need=2 creating=2
I0622 01:04:21.186031       1 event.go:294] "Event occurred" object="kube-system/csi-azurefile-controller" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set csi-azurefile-controller-8565959cf4 to 2"
I0622 01:04:21.204602       1 pvc_protection_controller.go:380] "Enqueuing PVCs for Pod" pod="kube-system/csi-azurefile-controller-8565959cf4-9wbmn" podUID=7b8f1bf8-72a6-4fab-a709-e14531b1d822
I0622 01:04:21.204906       1 disruption.go:426] addPod called on pod "csi-azurefile-controller-8565959cf4-9wbmn"
I0622 01:04:21.206041       1 disruption.go:501] No PodDisruptionBudgets found for pod csi-azurefile-controller-8565959cf4-9wbmn, PodDisruptionBudget controller will avoid syncing.
I0622 01:04:21.204961       1 taint_manager.go:411] "Noticed pod update" pod="kube-system/csi-azurefile-controller-8565959cf4-9wbmn"
I0622 01:04:21.205022       1 replica_set.go:394] Pod csi-azurefile-controller-8565959cf4-9wbmn created: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"csi-azurefile-controller-8565959cf4-9wbmn", GenerateName:"csi-azurefile-controller-8565959cf4-", Namespace:"kube-system", SelfLink:"", UID:"7b8f1bf8-72a6-4fab-a709-e14531b1d822", ResourceVersion:"893", Generation:0, CreationTimestamp:time.Date(2022, time.June, 22, 1, 4, 21, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"csi-azurefile-controller", "pod-template-hash":"8565959cf4"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"apps/v1", Kind:"ReplicaSet", Name:"csi-azurefile-controller-8565959cf4", UID:"11717453-0f4c-4da4-8b55-b2b307c074d0", Controller:(*bool)(0xc00272d147), BlockOwnerDeletion:(*bool)(0xc00272d148)}}, Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.June, 22, 1, 4, 21, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00268a408), Subresource:""}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"socket-dir", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(0xc00268a420), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), 
DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"azure-cred", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc00268a438), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, 
v1.Volume{Name:"kube-api-access-7d4dd", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(0xc0020b5c80), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"csi-provisioner", Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-provisioner:v3.1.1", Command:[]string(nil), Args:[]string{"-v=2", "--csi-address=$(ADDRESS)", "--leader-election", "--leader-election-namespace=kube-system", "--timeout=300s", "--extra-create-metadata=true"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, 
Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:524288000, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-7d4dd", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"csi-attacher", Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-attacher:v3.5.0", Command:[]string(nil), Args:[]string{"-v=2", "-csi-address=$(ADDRESS)", "-timeout=120s", "--leader-election", "--leader-election-namespace=kube-system"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:524288000, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500Mi", Format:"BinarySI"}}, 
Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-7d4dd", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"csi-snapshotter", Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-snapshotter:v5.0.1", Command:[]string(nil), Args:[]string{"-v=2", "-csi-address=$(ADDRESS)", "--leader-election", "--leader-election-namespace=kube-system"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:104857600, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", 
Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-7d4dd", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"csi-resizer", Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-resizer:v1.5.0", Command:[]string(nil), Args:[]string{"-csi-address=$(ADDRESS)", "-v=2", "--leader-election", "--leader-election-namespace=kube-system", "-handle-volume-inuse-error=false", "-feature-gates=RecoverVolumeExpansionFailure=true", "-timeout=120s"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:524288000, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, 
v1.VolumeMount{Name:"kube-api-access-7d4dd", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"liveness-probe", Image:"mcr.microsoft.com/oss/kubernetes-csi/livenessprobe:v2.7.0", Command:[]string(nil), Args:[]string{"--csi-address=/csi/csi.sock", "--probe-timeout=3s", "--health-port=29612", "--v=2"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:104857600, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-7d4dd", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), 
TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"azurefile", Image:"mcr.microsoft.com/k8s/csi/azurefile-csi:latest", Command:[]string(nil), Args:[]string{"--v=5", "--endpoint=$(CSI_ENDPOINT)", "--metrics-address=0.0.0.0:29614", "--user-agent-suffix=OSS-kubectl"}, WorkingDir:"", Ports:[]v1.ContainerPort{v1.ContainerPort{Name:"healthz", HostPort:29612, ContainerPort:29612, Protocol:"TCP", HostIP:""}, v1.ContainerPort{Name:"metrics", HostPort:29614, ContainerPort:29614, Protocol:"TCP", HostIP:""}}, EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"AZURE_CREDENTIAL_FILE", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc0020b5da0)}, v1.EnvVar{Name:"CSI_ENDPOINT", Value:"unix:///csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:209715200, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"azure-cred", ReadOnly:false, MountPath:"/etc/kubernetes/", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-7d4dd", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, 
VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(0xc002a442c0), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00272d4f0), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"csi-azurefile-controller-sa", DeprecatedServiceAccount:"csi-azurefile-controller-sa", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0003d9500), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node-role.kubernetes.io/master", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node-role.kubernetes.io/controlplane", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00272d560)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00272d580)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-cluster-critical", Priority:(*int32)(0xc00272d588), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00272d58c), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc0029bb6a0), 
Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil), OS:(*v1.PodOS)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition(nil), Message:"", Reason:"", NominatedNodeName:"", HostIP:"", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:<nil>, InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus(nil), QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}.
I0622 01:04:21.205489       1 deployment_controller.go:183] "Updating deployment" deployment="kube-system/csi-azurefile-controller"
I0622 01:04:21.205793       1 controller_utils.go:581] Controller csi-azurefile-controller-8565959cf4 created pod csi-azurefile-controller-8565959cf4-9wbmn
I0622 01:04:21.208812       1 disruption.go:429] No matching pdb for pod "csi-azurefile-controller-8565959cf4-9wbmn"
I0622 01:04:21.209141       1 controller_utils.go:240] Lowered expectations &controller.ControlleeExpectations{add:1, del:0, key:"kube-system/csi-azurefile-controller-8565959cf4", timestamp:time.Time{wall:0xc0a4b8854afa64eb, ext:193112445671, loc:(*time.Location)(0x6f121e0)}}
I0622 01:04:21.209422       1 event.go:294] "Event occurred" object="kube-system/csi-azurefile-controller-8565959cf4" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: csi-azurefile-controller-8565959cf4-9wbmn"
I0622 01:04:21.211625       1 deployment_util.go:774] Deployment "csi-azurefile-controller" timed out (false) [last progress check: 2022-06-22 01:04:21.18571076 +0000 UTC m=+193.113974136 - now: 2022-06-22 01:04:21.211612657 +0000 UTC m=+193.139875933]
... skipping 7 lines ...
I0622 01:04:21.232008       1 event.go:294] "Event occurred" object="kube-system/csi-azurefile-controller-8565959cf4" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: csi-azurefile-controller-8565959cf4-n4rfr"
I0622 01:04:21.238682       1 pvc_protection_controller.go:380] "Enqueuing PVCs for Pod" pod="kube-system/csi-azurefile-controller-8565959cf4-n4rfr" podUID=7c408c9a-4316-4bea-af91-dee224b72161
I0622 01:04:21.239861       1 disruption.go:426] addPod called on pod "csi-azurefile-controller-8565959cf4-n4rfr"
I0622 01:04:21.240180       1 disruption.go:501] No PodDisruptionBudgets found for pod csi-azurefile-controller-8565959cf4-n4rfr, PodDisruptionBudget controller will avoid syncing.
I0622 01:04:21.240409       1 disruption.go:429] No matching pdb for pod "csi-azurefile-controller-8565959cf4-n4rfr"
I0622 01:04:21.240778       1 taint_manager.go:411] "Noticed pod update" pod="kube-system/csi-azurefile-controller-8565959cf4-n4rfr"
I0622 01:04:21.241134       1 replica_set.go:394] Pod csi-azurefile-controller-8565959cf4-n4rfr created: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"csi-azurefile-controller-8565959cf4-n4rfr", GenerateName:"csi-azurefile-controller-8565959cf4-", Namespace:"kube-system", SelfLink:"", UID:"7c408c9a-4316-4bea-af91-dee224b72161", ResourceVersion:"898", Generation:0, CreationTimestamp:time.Date(2022, time.June, 22, 1, 4, 21, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"csi-azurefile-controller", "pod-template-hash":"8565959cf4"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"apps/v1", Kind:"ReplicaSet", Name:"csi-azurefile-controller-8565959cf4", UID:"11717453-0f4c-4da4-8b55-b2b307c074d0", Controller:(*bool)(0xc0028d1817), BlockOwnerDeletion:(*bool)(0xc0028d1818)}}, Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.June, 22, 1, 4, 21, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00268b4a0), Subresource:""}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"socket-dir", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(0xc00268b4b8), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), 
DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"azure-cred", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc00268b4d0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, 
v1.Volume{Name:"kube-api-access-lhxph", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(0xc002078640), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"csi-provisioner", Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-provisioner:v3.1.1", Command:[]string(nil), Args:[]string{"-v=2", "--csi-address=$(ADDRESS)", "--leader-election", "--leader-election-namespace=kube-system", "--timeout=300s", "--extra-create-metadata=true"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, 
Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:524288000, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-lhxph", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"csi-attacher", Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-attacher:v3.5.0", Command:[]string(nil), Args:[]string{"-v=2", "-csi-address=$(ADDRESS)", "-timeout=120s", "--leader-election", "--leader-election-namespace=kube-system"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:524288000, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500Mi", Format:"BinarySI"}}, 
Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-lhxph", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"csi-snapshotter", Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-snapshotter:v5.0.1", Command:[]string(nil), Args:[]string{"-v=2", "-csi-address=$(ADDRESS)", "--leader-election", "--leader-election-namespace=kube-system"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:104857600, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", 
Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-lhxph", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"csi-resizer", Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-resizer:v1.5.0", Command:[]string(nil), Args:[]string{"-csi-address=$(ADDRESS)", "-v=2", "--leader-election", "--leader-election-namespace=kube-system", "-handle-volume-inuse-error=false", "-feature-gates=RecoverVolumeExpansionFailure=true", "-timeout=120s"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:524288000, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, 
v1.VolumeMount{Name:"kube-api-access-lhxph", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"liveness-probe", Image:"mcr.microsoft.com/oss/kubernetes-csi/livenessprobe:v2.7.0", Command:[]string(nil), Args:[]string{"--csi-address=/csi/csi.sock", "--probe-timeout=3s", "--health-port=29612", "--v=2"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:104857600, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-lhxph", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), 
TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"azurefile", Image:"mcr.microsoft.com/k8s/csi/azurefile-csi:latest", Command:[]string(nil), Args:[]string{"--v=5", "--endpoint=$(CSI_ENDPOINT)", "--metrics-address=0.0.0.0:29614", "--user-agent-suffix=OSS-kubectl"}, WorkingDir:"", Ports:[]v1.ContainerPort{v1.ContainerPort{Name:"healthz", HostPort:29612, ContainerPort:29612, Protocol:"TCP", HostIP:""}, v1.ContainerPort{Name:"metrics", HostPort:29614, ContainerPort:29614, Protocol:"TCP", HostIP:""}}, EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"AZURE_CREDENTIAL_FILE", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc002078760)}, v1.EnvVar{Name:"CSI_ENDPOINT", Value:"unix:///csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:209715200, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"azure-cred", ReadOnly:false, MountPath:"/etc/kubernetes/", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-lhxph", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, 
VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(0xc002a45e80), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0028d1bc0), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"csi-azurefile-controller-sa", DeprecatedServiceAccount:"csi-azurefile-controller-sa", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0000b4af0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node-role.kubernetes.io/master", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node-role.kubernetes.io/controlplane", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0028d1c20)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0028d1c40)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-cluster-critical", Priority:(*int32)(0xc0028d1c48), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0028d1c4c), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc002a5cae0), 
Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil), OS:(*v1.PodOS)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition(nil), Message:"", Reason:"", NominatedNodeName:"", HostIP:"", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:<nil>, InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus(nil), QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}.
I0622 01:04:21.246197       1 controller_utils.go:240] Lowered expectations &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/csi-azurefile-controller-8565959cf4", timestamp:time.Time{wall:0xc0a4b8854afa64eb, ext:193112445671, loc:(*time.Location)(0x6f121e0)}}
I0622 01:04:21.250357       1 deployment_controller.go:585] "Finished syncing deployment" deployment="kube-system/csi-azurefile-controller" duration="73.247204ms"
I0622 01:04:21.250699       1 deployment_controller.go:497] "Error syncing deployment" deployment="kube-system/csi-azurefile-controller" err="Operation cannot be fulfilled on deployments.apps \"csi-azurefile-controller\": the object has been modified; please apply your changes to the latest version and try again"
I0622 01:04:21.251063       1 deployment_controller.go:583] "Started syncing deployment" deployment="kube-system/csi-azurefile-controller" startTime="2022-06-22 01:04:21.251038328 +0000 UTC m=+193.179301504"
I0622 01:04:21.253189       1 deployment_util.go:774] Deployment "csi-azurefile-controller" timed out (false) [last progress check: 2022-06-22 01:04:21 +0000 UTC - now: 2022-06-22 01:04:21.253177519 +0000 UTC m=+193.181440895]
I0622 01:04:21.274828       1 disruption.go:438] updatePod called on pod "csi-azurefile-controller-8565959cf4-n4rfr"
I0622 01:04:21.275179       1 disruption.go:501] No PodDisruptionBudgets found for pod csi-azurefile-controller-8565959cf4-n4rfr, PodDisruptionBudget controller will avoid syncing.
I0622 01:04:21.275454       1 disruption.go:441] No matching pdb for pod "csi-azurefile-controller-8565959cf4-n4rfr"
I0622 01:04:21.275789       1 taint_manager.go:411] "Noticed pod update" pod="kube-system/csi-azurefile-controller-8565959cf4-n4rfr"
... skipping 214 lines ...
I0622 01:04:25.507775       1 taint_manager.go:411] "Noticed pod update" pod="kube-system/csi-snapshot-controller-789545b454-wqswk"
I0622 01:04:25.509951       1 controller_utils.go:581] Controller csi-snapshot-controller-789545b454 created pod csi-snapshot-controller-789545b454-wqswk
I0622 01:04:25.510435       1 event.go:294] "Event occurred" object="kube-system/csi-snapshot-controller-789545b454" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: csi-snapshot-controller-789545b454-wqswk"
I0622 01:04:25.507855       1 replica_set.go:394] Pod csi-snapshot-controller-789545b454-wqswk created: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"csi-snapshot-controller-789545b454-wqswk", GenerateName:"csi-snapshot-controller-789545b454-", Namespace:"kube-system", SelfLink:"", UID:"91baf54a-2f23-4406-9305-dd7248508e94", ResourceVersion:"980", Generation:0, CreationTimestamp:time.Date(2022, time.June, 22, 1, 4, 25, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"csi-snapshot-controller", "pod-template-hash":"789545b454"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"apps/v1", Kind:"ReplicaSet", Name:"csi-snapshot-controller-789545b454", UID:"e5fd4e87-2233-4d56-95e7-b40827d2ee5d", Controller:(*bool)(0xc001ac1427), BlockOwnerDeletion:(*bool)(0xc001ac1428)}}, Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.June, 22, 1, 4, 25, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc000e774a0), Subresource:""}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-api-access-dmlqs", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), 
DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(0xc000ef4ae0), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"csi-snapshot-controller", Image:"mcr.microsoft.com/oss/kubernetes-csi/snapshot-controller:v5.0.1", Command:[]string(nil), Args:[]string{"--v=2", "--leader-election=true", "--leader-election-namespace=kube-system"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:104857600, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-dmlqs", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), 
TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001ac14c8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"csi-snapshot-controller-sa", DeprecatedServiceAccount:"csi-snapshot-controller-sa", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0000b9f80), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node-role.kubernetes.io/master", Operator:"Equal", Value:"true", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node-role.kubernetes.io/controlplane", Operator:"Equal", Value:"true", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001ac1530)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001ac1550)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-cluster-critical", Priority:(*int32)(0xc001ac1558), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc001ac155c), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc001d73db0), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil), OS:(*v1.PodOS)(nil)}, Status:v1.PodStatus{Phase:"Pending", 
Conditions:[]v1.PodCondition(nil), Message:"", Reason:"", NominatedNodeName:"", HostIP:"", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:<nil>, InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus(nil), QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}.
I0622 01:04:25.510705       1 controller_utils.go:240] Lowered expectations &controller.ControlleeExpectations{add:1, del:0, key:"kube-system/csi-snapshot-controller-789545b454", timestamp:time.Time{wall:0xc0a4b8865d014de8, ext:197414888621, loc:(*time.Location)(0x6f121e0)}}
I0622 01:04:25.518883       1 deployment_controller.go:585] "Finished syncing deployment" deployment="kube-system/csi-snapshot-controller" duration="49.634895ms"
I0622 01:04:25.518920       1 deployment_controller.go:497] "Error syncing deployment" deployment="kube-system/csi-snapshot-controller" err="Operation cannot be fulfilled on deployments.apps \"csi-snapshot-controller\": the object has been modified; please apply your changes to the latest version and try again"
I0622 01:04:25.518967       1 deployment_controller.go:583] "Started syncing deployment" deployment="kube-system/csi-snapshot-controller" startTime="2022-06-22 01:04:25.518949125 +0000 UTC m=+197.447212401"
I0622 01:04:25.519390       1 deployment_util.go:774] Deployment "csi-snapshot-controller" timed out (false) [last progress check: 2022-06-22 01:04:25 +0000 UTC - now: 2022-06-22 01:04:25.519380053 +0000 UTC m=+197.447643329]
I0622 01:04:25.526681       1 pvc_protection_controller.go:380] "Enqueuing PVCs for Pod" pod="kube-system/csi-snapshot-controller-789545b454-zfnnr" podUID=a3229c2b-9f01-42d8-a7b3-0838923df1a9
I0622 01:04:25.526726       1 disruption.go:426] addPod called on pod "csi-snapshot-controller-789545b454-zfnnr"
I0622 01:04:25.526761       1 disruption.go:501] No PodDisruptionBudgets found for pod csi-snapshot-controller-789545b454-zfnnr, PodDisruptionBudget controller will avoid syncing.
I0622 01:04:25.526771       1 disruption.go:429] No matching pdb for pod "csi-snapshot-controller-789545b454-zfnnr"
... skipping 1582 lines ...
I0622 01:08:41.510327       1 disruption.go:438] updatePod called on pod "azurefile-volume-tester-vnltm-85f8b7cbcf-bqz2g"
I0622 01:08:41.511057       1 disruption.go:501] No PodDisruptionBudgets found for pod azurefile-volume-tester-vnltm-85f8b7cbcf-bqz2g, PodDisruptionBudget controller will avoid syncing.
I0622 01:08:41.511308       1 disruption.go:441] No matching pdb for pod "azurefile-volume-tester-vnltm-85f8b7cbcf-bqz2g"
I0622 01:08:41.510679       1 taint_manager.go:411] "Noticed pod update" pod="azurefile-1563/azurefile-volume-tester-vnltm-85f8b7cbcf-bqz2g"
I0622 01:08:41.510904       1 replica_set.go:457] Pod azurefile-volume-tester-vnltm-85f8b7cbcf-bqz2g updated, objectMeta {Name:azurefile-volume-tester-vnltm-85f8b7cbcf-bqz2g GenerateName:azurefile-volume-tester-vnltm-85f8b7cbcf- Namespace:azurefile-1563 SelfLink: UID:47f3d993-44d4-45b2-92c7-3e4a5fefab4c ResourceVersion:1982 Generation:0 CreationTimestamp:2022-06-22 01:08:41 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[app:azurefile-volume-tester-5199948958991797301 pod-template-hash:85f8b7cbcf] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:azurefile-volume-tester-vnltm-85f8b7cbcf UID:25466f71-4464-4cae-b7ce-ebd898e0324c Controller:0xc0028d0047 BlockOwnerDeletion:0xc0028d0048}] Finalizers:[] ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-06-22 01:08:41 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"25466f71-4464-4cae-b7ce-ebd898e0324c\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"volume-tester\"}":{".":{},"f:args":{},"f:command":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/mnt/test-1\"}":{".":{},"f:mountPath":{},"f:name":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:volumes":{".":{},"k:{\"name\":\"test-volume-1\"}":{".":{},"f:name":{},"f:persistentVolumeClaim":{".":{},"f:claimName":{}}}}}} Subresource:}]} -> {Name:azurefile-volume-tester-vnltm-85f8b7cbcf-bqz2g GenerateName:azurefile-volume-tester-vnltm-85f8b7cbcf- Namespace:azurefile-1563 SelfLink: UID:47f3d993-44d4-45b2-92c7-3e4a5fefab4c ResourceVersion:1985 Generation:0 CreationTimestamp:2022-06-22 01:08:41 +0000 UTC DeletionTimestamp:<nil> 
DeletionGracePeriodSeconds:<nil> Labels:map[app:azurefile-volume-tester-5199948958991797301 pod-template-hash:85f8b7cbcf] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:azurefile-volume-tester-vnltm-85f8b7cbcf UID:25466f71-4464-4cae-b7ce-ebd898e0324c Controller:0xc00272d5de BlockOwnerDeletion:0xc00272d5df}] Finalizers:[] ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-06-22 01:08:41 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"25466f71-4464-4cae-b7ce-ebd898e0324c\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"volume-tester\"}":{".":{},"f:args":{},"f:command":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/mnt/test-1\"}":{".":{},"f:mountPath":{},"f:name":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:volumes":{".":{},"k:{\"name\":\"test-volume-1\"}":{".":{},"f:name":{},"f:persistentVolumeClaim":{".":{},"f:claimName":{}}}}}} Subresource:}]}.
I0622 01:08:41.518640       1 deployment_controller.go:585] "Finished syncing deployment" deployment="azurefile-1563/azurefile-volume-tester-vnltm" duration="45.623129ms"
I0622 01:08:41.518889       1 deployment_controller.go:497] "Error syncing deployment" deployment="azurefile-1563/azurefile-volume-tester-vnltm" err="Operation cannot be fulfilled on deployments.apps \"azurefile-volume-tester-vnltm\": the object has been modified; please apply your changes to the latest version and try again"
I0622 01:08:41.519088       1 deployment_controller.go:583] "Started syncing deployment" deployment="azurefile-1563/azurefile-volume-tester-vnltm" startTime="2022-06-22 01:08:41.519068158 +0000 UTC m=+453.447331334"
I0622 01:08:41.519667       1 deployment_util.go:774] Deployment "azurefile-volume-tester-vnltm" timed out (false) [last progress check: 2022-06-22 01:08:41 +0000 UTC - now: 2022-06-22 01:08:41.519655296 +0000 UTC m=+453.447918772]
I0622 01:08:41.526420       1 deployment_controller.go:183] "Updating deployment" deployment="azurefile-1563/azurefile-volume-tester-vnltm"
I0622 01:08:41.526721       1 replica_set_utils.go:59] Updating status for : azurefile-1563/azurefile-volume-tester-vnltm-85f8b7cbcf, replicas 0->1 (need 1), fullyLabeledReplicas 0->1, readyReplicas 0->0, availableReplicas 0->0, sequence No: 1->1
I0622 01:08:41.527162       1 deployment_controller.go:585] "Finished syncing deployment" deployment="azurefile-1563/azurefile-volume-tester-vnltm" duration="8.078718ms"
I0622 01:08:41.527392       1 deployment_controller.go:583] "Started syncing deployment" deployment="azurefile-1563/azurefile-volume-tester-vnltm" startTime="2022-06-22 01:08:41.527371791 +0000 UTC m=+453.455635067"
... skipping 1249 lines ...

JUnit report was created: /logs/artifacts/junit_01.xml


Summarizing 1 Failure:

[Fail] Dynamic Provisioning [It] should create a volume on demand and resize it [kubernetes.io/azure-file] [file.csi.azure.com] [Windows] 
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/dynamic_provisioning_test.go:380

Ran 6 of 34 Specs in 313.850 seconds
FAIL! -- 5 Passed | 1 Failed | 0 Pending | 28 Skipped

You're using deprecated Ginkgo functionality:
=============================================
Ginkgo 2.0 is under active development and will introduce several new features, improvements, and a small handful of breaking changes.
A release candidate for 2.0 is now available and 2.0 should GA in Fall 2021.  Please give the RC a try and send us feedback!
  - To learn more, view the migration guide at https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md
... skipping 5 lines ...
  If this change will be impactful to you please leave a comment on https://github.com/onsi/ginkgo/issues/711
  Learn more at: https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md#removed-custom-reporters

To silence deprecations that can be silenced set the following environment variable:
  ACK_GINKGO_DEPRECATIONS=1.16.5

--- FAIL: TestE2E (313.86s)
FAIL
FAIL	sigs.k8s.io/azurefile-csi-driver/test/e2e	313.909s
FAIL
make: *** [Makefile:85: e2e-test] Error 1
NAME                              STATUS   ROLES           AGE     VERSION                             INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
capz-7hzix5-control-plane-89kp4   Ready    control-plane   10m     v1.25.0-alpha.1.67+9e320e27222c5b   10.0.0.4      <none>        Ubuntu 18.04.6 LTS   5.4.0-1085-azure   containerd://1.6.2
capz-7hzix5-mp-0000000            Ready    <none>          7m33s   v1.25.0-alpha.1.67+9e320e27222c5b   10.1.0.4      <none>        Ubuntu 18.04.6 LTS   5.4.0-1085-azure   containerd://1.6.2
capz-7hzix5-mp-0000001            Ready    <none>          7m32s   v1.25.0-alpha.1.67+9e320e27222c5b   10.1.0.5      <none>        Ubuntu 18.04.6 LTS   5.4.0-1085-azure   containerd://1.6.2
NAMESPACE     NAME                                                      READY   STATUS    RESTARTS        AGE     IP                NODE                              NOMINATED NODE   READINESS GATES
kube-system   calico-kube-controllers-57cb778775-vkzfn                  1/1     Running   0               8m47s   192.168.196.1     capz-7hzix5-control-plane-89kp4   <none>           <none>
... skipping 114 lines ...