PR andyzhangx: add migration flag in Azure volume CSI migration logic
Result: FAILURE
Tests: 1 failed / 5 succeeded
Started: 2022-06-22 05:34
Elapsed: 27m42s
Revision: 0c87093592273929413bf028971f92dc9c920f69
Refs: 108317

Test Failures


AzureFile CSI Driver End-to-End Tests Dynamic Provisioning should create a volume on demand and resize it [kubernetes.io/azure-file] [file.csi.azure.com] [Windows] 47s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=AzureFile\sCSI\sDriver\sEnd\-to\-End\sTests\sDynamic\sProvisioning\sshould\screate\sa\svolume\son\sdemand\sand\sresize\sit\s\[kubernetes\.io\/azure\-file\]\s\[file\.csi\.azure\.com\]\s\[Windows\]$'
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/dynamic_provisioning_test.go:356
Jun 22 05:57:47.161: newPVCSize(11Gi) is not equal to newPVSize(10GiGi)
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/dynamic_provisioning_test.go:380
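The mismatch above reads oddly: the reported PV size carries a doubled unit suffix, "10GiGi". The resize itself genuinely failed (the PV stayed at 10Gi; see the VolumeResizeFailed event later in the log), and the doubled suffix suggests the assertion message appends "Gi" to a quantity string that is already suffixed. A minimal sketch of that formatting (buildFailureMsg is a hypothetical name, not the test's actual helper):

```go
package main

import "fmt"

// buildFailureMsg mirrors the shape of the assertion message above. If the
// PV size arrives as an already-suffixed quantity string ("10Gi") but the
// format string adds another "Gi", the doubled "10GiGi" appears.
func buildFailureMsg(newPVCSize, newPVSize string) string {
	return fmt.Sprintf("newPVCSize(%s) is not equal to newPVSize(%sGi)", newPVCSize, newPVSize)
}

func main() {
	fmt.Println(buildFailureMsg("11Gi", "10Gi"))
	// Prints: newPVCSize(11Gi) is not equal to newPVSize(10GiGi)
}
```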
				



Passed: 5 | Skipped: 28

Error lines from build-log.txt

... skipping 683 lines ...
certificate.cert-manager.io "selfsigned-cert" deleted
# Create secret for AzureClusterIdentity
./hack/create-identity-secret.sh
make[2]: Entering directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make[2]: Nothing to be done for 'kubectl'.
make[2]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
Error from server (NotFound): secrets "cluster-identity-secret" not found
secret/cluster-identity-secret created
secret/cluster-identity-secret labeled
# Deploy CAPI
curl --retry 3 -sSL https://github.com/kubernetes-sigs/cluster-api/releases/download/v1.1.4/cluster-api-components.yaml | /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/envsubst-v2.0.0-20210730161058-179042472c46 | kubectl apply -f -
namespace/capi-system created
customresourcedefinition.apiextensions.k8s.io/clusterclasses.cluster.x-k8s.io created
... skipping 223 lines ...
Dynamic Provisioning 
  should create a storage account with tags [file.csi.azure.com] [Windows]
  /home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/dynamic_provisioning_test.go:73
STEP: Creating a kubernetes client
Jun 22 05:52:35.263: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/kubeconfig
STEP: Building a namespace api object, basename azurefile
Jun 22 05:52:35.981: INFO: Error listing PodSecurityPolicies; assuming PodSecurityPolicy is disabled: the server could not find the requested resource
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
2022/06/22 05:52:36 Check driver pods if restarts ...
check the driver pods if restarts ...
======================================================================================
2022/06/22 05:52:36 Check successfully
... skipping 43 lines ...
Jun 22 05:52:57.653: INFO: PersistentVolumeClaim pvc-c9phv found but phase is Pending instead of Bound.
Jun 22 05:52:59.762: INFO: PersistentVolumeClaim pvc-c9phv found and phase=Bound (21.187592467s)
STEP: checking the PVC
STEP: validating provisioned PV
STEP: checking the PV
STEP: deploying the pod
STEP: checking that the pods command exits with no error
Jun 22 05:53:00.085: INFO: Waiting up to 15m0s for pod "azurefile-volume-tester-6vjrb" in namespace "azurefile-2540" to be "Succeeded or Failed"
Jun 22 05:53:00.192: INFO: Pod "azurefile-volume-tester-6vjrb": Phase="Pending", Reason="", readiness=false. Elapsed: 106.679702ms
Jun 22 05:53:02.307: INFO: Pod "azurefile-volume-tester-6vjrb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.221387287s
Jun 22 05:53:04.422: INFO: Pod "azurefile-volume-tester-6vjrb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.337101993s
Jun 22 05:53:06.538: INFO: Pod "azurefile-volume-tester-6vjrb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.452810682s
STEP: Saw pod success
Jun 22 05:53:06.538: INFO: Pod "azurefile-volume-tester-6vjrb" satisfied condition "Succeeded or Failed"
Jun 22 05:53:06.538: INFO: deleting Pod "azurefile-2540"/"azurefile-volume-tester-6vjrb"
Jun 22 05:53:06.665: INFO: Pod azurefile-volume-tester-6vjrb has the following logs: hello world

STEP: Deleting pod azurefile-volume-tester-6vjrb in namespace azurefile-2540
Jun 22 05:53:06.784: INFO: deleting PVC "azurefile-2540"/"pvc-c9phv"
Jun 22 05:53:06.784: INFO: Deleting PersistentVolumeClaim "pvc-c9phv"
... skipping 155 lines ...
Jun 22 05:55:02.902: INFO: PersistentVolumeClaim pvc-ghps7 found but phase is Pending instead of Bound.
Jun 22 05:55:05.009: INFO: PersistentVolumeClaim pvc-ghps7 found and phase=Bound (21.187949163s)
STEP: checking the PVC
STEP: validating provisioned PV
STEP: checking the PV
STEP: deploying the pod
STEP: checking that the pods command exits with an error
Jun 22 05:55:05.333: INFO: Waiting up to 15m0s for pod "azurefile-volume-tester-9hd22" in namespace "azurefile-2790" to be "Error status code"
Jun 22 05:55:05.440: INFO: Pod "azurefile-volume-tester-9hd22": Phase="Pending", Reason="", readiness=false. Elapsed: 106.860128ms
Jun 22 05:55:07.554: INFO: Pod "azurefile-volume-tester-9hd22": Phase="Pending", Reason="", readiness=false. Elapsed: 2.220405385s
Jun 22 05:55:09.667: INFO: Pod "azurefile-volume-tester-9hd22": Phase="Failed", Reason="", readiness=false. Elapsed: 4.333809195s
STEP: Saw pod failure
Jun 22 05:55:09.667: INFO: Pod "azurefile-volume-tester-9hd22" satisfied condition "Error status code"
STEP: checking that pod logs contain expected message
Jun 22 05:55:09.787: INFO: deleting Pod "azurefile-2790"/"azurefile-volume-tester-9hd22"
Jun 22 05:55:09.897: INFO: Pod azurefile-volume-tester-9hd22 has the following logs: touch: /mnt/test-1/data: Read-only file system

STEP: Deleting pod azurefile-volume-tester-9hd22 in namespace azurefile-2790
Jun 22 05:55:10.018: INFO: deleting PVC "azurefile-2790"/"pvc-ghps7"
... skipping 180 lines ...
Jun 22 05:57:09.953: INFO: PersistentVolumeClaim pvc-5zsjv found but phase is Pending instead of Bound.
Jun 22 05:57:12.061: INFO: PersistentVolumeClaim pvc-5zsjv found and phase=Bound (2.215680592s)
STEP: checking the PVC
STEP: validating provisioned PV
STEP: checking the PV
STEP: deploying the pod
STEP: checking that the pods command exits with no error
Jun 22 05:57:12.391: INFO: Waiting up to 15m0s for pod "azurefile-volume-tester-l8sxq" in namespace "azurefile-4538" to be "Succeeded or Failed"
Jun 22 05:57:12.500: INFO: Pod "azurefile-volume-tester-l8sxq": Phase="Pending", Reason="", readiness=false. Elapsed: 109.360473ms
Jun 22 05:57:14.615: INFO: Pod "azurefile-volume-tester-l8sxq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.224042419s
Jun 22 05:57:16.727: INFO: Pod "azurefile-volume-tester-l8sxq": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.336666119s
STEP: Saw pod success
Jun 22 05:57:16.727: INFO: Pod "azurefile-volume-tester-l8sxq" satisfied condition "Succeeded or Failed"
STEP: resizing the pvc
STEP: sleep 30s waiting for resize complete
STEP: checking the resizing result
STEP: checking the resizing PV result
Jun 22 05:57:47.161: FAIL: newPVCSize(11Gi) is not equal to newPVSize(10GiGi)

Full Stack Trace
sigs.k8s.io/azurefile-csi-driver/test/e2e.glob..func1.10()
	/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/dynamic_provisioning_test.go:380 +0x25c
sigs.k8s.io/azurefile-csi-driver/test/e2e.TestE2E(0x0?)
	/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/suite_test.go:239 +0x11f
... skipping 22 lines ...
Jun 22 05:57:53.062: INFO: At 2022-06-22 05:57:12 +0000 UTC - event for azurefile-volume-tester-l8sxq: {default-scheduler } Scheduled: Successfully assigned azurefile-4538/azurefile-volume-tester-l8sxq to capz-hly3cw-md-0-w2swq
Jun 22 05:57:53.062: INFO: At 2022-06-22 05:57:13 +0000 UTC - event for azurefile-volume-tester-l8sxq: {kubelet capz-hly3cw-md-0-w2swq} Pulled: Container image "k8s.gcr.io/e2e-test-images/busybox:1.29-2" already present on machine
Jun 22 05:57:53.063: INFO: At 2022-06-22 05:57:13 +0000 UTC - event for azurefile-volume-tester-l8sxq: {kubelet capz-hly3cw-md-0-w2swq} Created: Created container volume-tester
Jun 22 05:57:53.063: INFO: At 2022-06-22 05:57:13 +0000 UTC - event for azurefile-volume-tester-l8sxq: {kubelet capz-hly3cw-md-0-w2swq} Started: Started container volume-tester
Jun 22 05:57:53.063: INFO: At 2022-06-22 05:57:16 +0000 UTC - event for pvc-5zsjv: {volume_expand } ExternalExpanding: CSI migration enabled for kubernetes.io/azure-file; waiting for external resizer to expand the pvc
Jun 22 05:57:53.063: INFO: At 2022-06-22 05:57:16 +0000 UTC - event for pvc-5zsjv: {external-resizer file.csi.azure.com } Resizing: External resizer is resizing volume pvc-715317c6-9723-4c1e-afab-13857a91054a
Jun 22 05:57:53.063: INFO: At 2022-06-22 05:57:16 +0000 UTC - event for pvc-5zsjv: {external-resizer file.csi.azure.com } VolumeResizeFailed: resize volume "pvc-715317c6-9723-4c1e-afab-13857a91054a" by resizer "file.csi.azure.com" failed: rpc error: code = Unimplemented desc = vhd disk volume(capz-hly3cw#ff988a188fa424b62bc7a18#pvc-715317c6-9723-4c1e-afab-13857a91054a#pvc-715317c6-9723-4c1e-afab-13857a91054a#azurefile-4538) is not supported on ControllerExpandVolume
Jun 22 05:57:53.170: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
Jun 22 05:57:53.170: INFO: 
Jun 22 05:57:53.322: INFO: 
Logging node info for node capz-hly3cw-control-plane-hwctj
Jun 22 05:57:53.445: INFO: Node Info: &Node{ObjectMeta:{capz-hly3cw-control-plane-hwctj    055007a2-4f0e-458f-b826-275292fdb2c6 2391 0 2022-06-22 05:45:42 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_D2s_v3 beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:westeurope failure-domain.beta.kubernetes.io/zone:westeurope-1 kubernetes.io/arch:amd64 kubernetes.io/hostname:capz-hly3cw-control-plane-hwctj kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node.kubernetes.io/exclude-from-external-load-balancers: node.kubernetes.io/instance-type:Standard_D2s_v3 topology.kubernetes.io/region:westeurope topology.kubernetes.io/zone:westeurope-1] map[cluster.x-k8s.io/cluster-name:capz-hly3cw cluster.x-k8s.io/cluster-namespace:default cluster.x-k8s.io/machine:capz-hly3cw-control-plane-5zf9c cluster.x-k8s.io/owner-kind:KubeadmControlPlane cluster.x-k8s.io/owner-name:capz-hly3cw-control-plane csi.volume.kubernetes.io/nodeid:{"file.csi.azure.com":"capz-hly3cw-control-plane-hwctj"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///var/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.0.0.4/16 projectcalico.org/IPv4VXLANTunnelAddr:192.168.151.128 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubelet Update v1 2022-06-22 05:45:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2022-06-22 05:45:48 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {manager Update v1 2022-06-22 05:47:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cluster.x-k8s.io/cluster-name":{},"f:cluster.x-k8s.io/cluster-namespace":{},"f:cluster.x-k8s.io/machine":{},"f:cluster.x-k8s.io/owner-kind":{},"f:cluster.x-k8s.io/owner-name":{}}}} } {kube-controller-manager Update v1 2022-06-22 05:47:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:taints":{}}} } {Go-http-client Update v1 2022-06-22 05:48:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:projectcalico.org/IPv4Address":{},"f:projectcalico.org/IPv4VXLANTunnelAddr":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-06-22 05:56:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:azure:///subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hly3cw/providers/Microsoft.Compute/virtualMachines/capz-hly3cw-control-plane-hwctj,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/control-plane,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: 
{{4 0} {<nil>} 4 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{133018140672 0} {<nil>}  BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{8344723456 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{4 0} {<nil>} 4 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{119716326407 0} {<nil>} 119716326407 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{8239865856 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-22 05:48:06 +0000 UTC,LastTransitionTime:2022-06-22 05:48:06 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-22 05:56:00 +0000 UTC,LastTransitionTime:2022-06-22 05:45:27 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-22 05:56:00 +0000 UTC,LastTransitionTime:2022-06-22 05:45:27 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-22 05:56:00 +0000 UTC,LastTransitionTime:2022-06-22 05:45:27 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-22 05:56:00 +0000 UTC,LastTransitionTime:2022-06-22 05:47:30 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:capz-hly3cw-control-plane-hwctj,},NodeAddress{Type:InternalIP,Address:10.0.0.4,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:090b8c26e1214e96992902790a405371,SystemUUID:6bb677e5-685c-0843-bb43-92a409ff9f09,BootID:694fdd79-203f-403d-aa75-eb459bab17a5,KernelVersion:5.4.0-1085-azure,OSImage:Ubuntu 18.04.6 LTS,ContainerRuntimeVersion:containerd://1.6.2,KubeletVersion:v1.25.0-alpha.1.67+a3dc67c38b3609,KubeProxyVersion:v1.25.0-alpha.1.67+a3dc67c38b3609,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/calico/cni@sha256:914823d144204288f881e49b93b6852febfe669074cd4e2a782860981615f521 docker.io/calico/cni:v3.23.0],SizeBytes:110494683,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:6f72b851544986cb0921b53ea655ec04c36131248f16d4ad110cb3ca0c369dc1 registry.k8s.io/etcd:3.5.4-0],SizeBytes:102157811,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:13f53ed1d91e2e11aac476ee9a0269fdda6cc4874eba903efd40daf50c55eee5 k8s.gcr.io/etcd:3.5.3-0],SizeBytes:102143581,},ContainerImage{Names:[mcr.microsoft.com/k8s/csi/azurefile-csi@sha256:d0e18e2b41040f7a0a68324bed4b1cdc94e0d5009ed816f9c00f7ad45f640c67 mcr.microsoft.com/k8s/csi/azurefile-csi:latest],SizeBytes:75743702,},ContainerImage{Names:[docker.io/calico/node@sha256:4763820ecb4d8e82483a2ffabfec7fcded9603318692df210a778d223a4d7474 docker.io/calico/node:v3.23.0],SizeBytes:71573794,},ContainerImage{Names:[docker.io/calico/kube-controllers@sha256:78bc199299f966b0694dc4044501aee2d7ebd6862b2b0a00bca3ee8d3813c82f docker.io/calico/kube-controllers:v3.23.0],SizeBytes:56343954,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:7e75c20c0fb0a334fa364546ece4c11a61a7595ce2e27de265cacb4e7ccc7f9f k8s.gcr.io/kube-proxy:v1.24.2],SizeBytes:39515830,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-proxy:v1.25.0-alpha.1.65_3beb8dc5967801 
k8s.gcr.io/kube-proxy:v1.25.0-alpha.1.65_3beb8dc5967801],SizeBytes:39501134,},ContainerImage{Names:[capzci.azurecr.io/kube-proxy@sha256:1fd411e34636f0d08820f0c39c8a0c0aa7b04e4e989f0942f1390805e66fadbf capzci.azurecr.io/kube-proxy:v1.25.0-alpha.1.67_a3dc67c38b3609],SizeBytes:39499404,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:433696d8a90870c405fc2d42020aff0966fb3f1c59bdd1f5077f41335b327c9a k8s.gcr.io/kube-apiserver:v1.24.2],SizeBytes:33795763,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-apiserver:v1.25.0-alpha.1.65_3beb8dc5967801 k8s.gcr.io/kube-apiserver:v1.25.0-alpha.1.65_3beb8dc5967801],SizeBytes:33779236,},ContainerImage{Names:[capzci.azurecr.io/kube-apiserver@sha256:91c724d0a0dd77c41efb3f635010f097fdb9049f83aae1a489736b66f4d71564 capzci.azurecr.io/kube-apiserver:v1.25.0-alpha.1.67_a3dc67c38b3609],SizeBytes:33777558,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:d255427f14c9236088c22cd94eb434d7c6a05f615636eac0b9681566cd142753 k8s.gcr.io/kube-controller-manager:v1.24.2],SizeBytes:31035052,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-controller-manager:v1.25.0-alpha.1.65_3beb8dc5967801 k8s.gcr.io/kube-controller-manager:v1.25.0-alpha.1.65_3beb8dc5967801],SizeBytes:31010080,},ContainerImage{Names:[capzci.azurecr.io/kube-controller-manager@sha256:bed0e268b6702d366e6e0793138d715c34b53fb690b2c0bda0e988dc0051ac6d capzci.azurecr.io/kube-controller-manager:v1.25.0-alpha.1.67_a3dc67c38b3609],SizeBytes:31007104,},ContainerImage{Names:[k8s.gcr.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 k8s.gcr.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-scheduler:v1.25.0-alpha.1.65_3beb8dc5967801 
k8s.gcr.io/kube-scheduler:v1.25.0-alpha.1.65_3beb8dc5967801],SizeBytes:15533645,},ContainerImage{Names:[capzci.azurecr.io/kube-scheduler@sha256:f3cbbb4e0647ca8ce85c13dcab0bbb1510302742c15467cb7457d52c0884d7d0 capzci.azurecr.io/kube-scheduler:v1.25.0-alpha.1.67_a3dc67c38b3609],SizeBytes:15532193,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:b5bc69ac1e173a58a2b3af11ba65057ff2b71de25d0f93ab947e16714a896a1f k8s.gcr.io/kube-scheduler:v1.24.2],SizeBytes:15488980,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e k8s.gcr.io/coredns/coredns:v1.8.6],SizeBytes:13585107,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes-csi/csi-node-driver-registrar@sha256:2fbd1e1a0538a06f2061afd45975df70c942654aa7f86e920720169ee439c2d6 mcr.microsoft.com/oss/kubernetes-csi/csi-node-driver-registrar:v2.5.1],SizeBytes:9578961,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes-csi/livenessprobe@sha256:31547791294872570393470991481c2477a311031d3a03e0ae54eb164347dc34 mcr.microsoft.com/oss/kubernetes-csi/livenessprobe:v2.7.0],SizeBytes:8689744,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c registry.k8s.io/pause@sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c k8s.gcr.io/pause:3.7 registry.k8s.io/pause:3.7],SizeBytes:311278,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Jun 22 05:57:53.445: INFO: 
... skipping 940 lines ...
I0622 05:46:13.590612       1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca-bundle::/etc/kubernetes/pki/ca.crt,request-header::/etc/kubernetes/pki/front-proxy-ca.crt" certDetail="\"kubernetes\" [] issuer=\"<self>\" (2022-06-22 05:38:18 +0000 UTC to 2032-06-19 05:43:18 +0000 UTC (now=2022-06-22 05:46:13.590582532 +0000 UTC))"
I0622 05:46:13.590840       1 tlsconfig.go:200] "Loaded serving cert" certName="Generated self signed cert" certDetail="\"localhost@1655876773\" [serving] validServingFor=[127.0.0.1,127.0.0.1,localhost] issuer=\"localhost-ca@1655876772\" (2022-06-22 04:46:12 +0000 UTC to 2023-06-22 04:46:12 +0000 UTC (now=2022-06-22 05:46:13.590806237 +0000 UTC))"
I0622 05:46:13.591063       1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1655876773\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1655876773\" (2022-06-22 04:46:13 +0000 UTC to 2023-06-22 04:46:13 +0000 UTC (now=2022-06-22 05:46:13.591030342 +0000 UTC))"
I0622 05:46:13.591095       1 secure_serving.go:210] Serving securely on 127.0.0.1:10257
I0622 05:46:13.591813       1 leaderelection.go:248] attempting to acquire leader lease kube-system/kube-controller-manager...
I0622 05:46:13.592255       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/etc/kubernetes/pki/front-proxy-ca.crt"
E0622 05:46:13.593576       1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: Get "https://10.0.0.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=5s": dial tcp 10.0.0.4:6443: connect: connection refused
I0622 05:46:13.593605       1 leaderelection.go:253] failed to acquire lease kube-system/kube-controller-manager
I0622 05:46:13.593757       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0622 05:46:13.593944       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
E0622 05:46:16.168259       1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: Get "https://10.0.0.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=5s": dial tcp 10.0.0.4:6443: connect: connection refused
I0622 05:46:16.168301       1 leaderelection.go:253] failed to acquire lease kube-system/kube-controller-manager
E0622 05:46:18.373863       1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: Get "https://10.0.0.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=5s": dial tcp 10.0.0.4:6443: connect: connection refused
I0622 05:46:18.373899       1 leaderelection.go:253] failed to acquire lease kube-system/kube-controller-manager
E0622 05:46:21.522189       1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: Get "https://10.0.0.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=5s": dial tcp 10.0.0.4:6443: connect: connection refused
I0622 05:46:21.522222       1 leaderelection.go:253] failed to acquire lease kube-system/kube-controller-manager
E0622 05:46:23.845462       1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: Get "https://10.0.0.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=5s": dial tcp 10.0.0.4:6443: connect: connection refused
I0622 05:46:23.845502       1 leaderelection.go:253] failed to acquire lease kube-system/kube-controller-manager
E0622 05:46:27.545368       1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: Get "https://10.0.0.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=5s": dial tcp 10.0.0.4:6443: connect: connection refused
I0622 05:46:27.545404       1 leaderelection.go:253] failed to acquire lease kube-system/kube-controller-manager
I0622 05:46:31.573974       1 httplog.go:131] "HTTP" verb="GET" URI="/healthz" latency="109.903µs" userAgent="kube-probe/1.25+" audit-ID="" srcIP="127.0.0.1:40948" resp=200
E0622 05:46:31.855814       1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: Get "https://10.0.0.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=5s": dial tcp 10.0.0.4:6443: connect: connection refused
I0622 05:46:31.855850       1 leaderelection.go:253] failed to acquire lease kube-system/kube-controller-manager
E0622 05:46:35.583269       1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: Get "https://10.0.0.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=5s": dial tcp 10.0.0.4:6443: connect: connection refused
I0622 05:46:35.583308       1 leaderelection.go:253] failed to acquire lease kube-system/kube-controller-manager
E0622 05:46:39.354724       1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: Get "https://10.0.0.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=5s": dial tcp 10.0.0.4:6443: connect: connection refused
I0622 05:46:39.354956       1 leaderelection.go:253] failed to acquire lease kube-system/kube-controller-manager
I0622 05:46:41.573009       1 httplog.go:131] "HTTP" verb="GET" URI="/healthz" latency="72.802µs" userAgent="kube-probe/1.25+" audit-ID="" srcIP="127.0.0.1:41020" resp=200
E0622 05:46:42.697480       1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: Get "https://10.0.0.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=5s": dial tcp 10.0.0.4:6443: connect: connection refused
I0622 05:46:42.697515       1 leaderelection.go:253] failed to acquire lease kube-system/kube-controller-manager
E0622 05:46:46.253027       1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: Get "https://10.0.0.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=5s": dial tcp 10.0.0.4:6443: connect: connection refused
I0622 05:46:46.253065       1 leaderelection.go:253] failed to acquire lease kube-system/kube-controller-manager
E0622 05:46:49.312184       1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: Get "https://10.0.0.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=5s": dial tcp 10.0.0.4:6443: connect: connection refused
I0622 05:46:49.312219       1 leaderelection.go:253] failed to acquire lease kube-system/kube-controller-manager
I0622 05:46:51.572673       1 httplog.go:131] "HTTP" verb="GET" URI="/healthz" latency="94.703µs" userAgent="kube-probe/1.25+" audit-ID="" srcIP="127.0.0.1:41066" resp=200
E0622 05:46:52.542475       1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: Get "https://10.0.0.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=5s": dial tcp 10.0.0.4:6443: connect: connection refused
I0622 05:46:52.542509       1 leaderelection.go:253] failed to acquire lease kube-system/kube-controller-manager
E0622 05:46:54.576114       1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: Get "https://10.0.0.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=5s": dial tcp 10.0.0.4:6443: connect: connection refused
I0622 05:46:54.576149       1 leaderelection.go:253] failed to acquire lease kube-system/kube-controller-manager
E0622 05:46:57.083773       1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: Get "https://10.0.0.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=5s": dial tcp 10.0.0.4:6443: connect: connection refused
I0622 05:46:57.083811       1 leaderelection.go:253] failed to acquire lease kube-system/kube-controller-manager
E0622 05:47:00.774120       1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: Get "https://10.0.0.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=5s": dial tcp 10.0.0.4:6443: connect: connection refused
I0622 05:47:00.774154       1 leaderelection.go:253] failed to acquire lease kube-system/kube-controller-manager
I0622 05:47:01.572934       1 httplog.go:131] "HTTP" verb="GET" URI="/healthz" latency="65.802µs" userAgent="kube-probe/1.25+" audit-ID="" srcIP="127.0.0.1:41142" resp=200
E0622 05:47:03.843246       1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: Get "https://10.0.0.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=5s": dial tcp 10.0.0.4:6443: connect: connection refused
I0622 05:47:03.843286       1 leaderelection.go:253] failed to acquire lease kube-system/kube-controller-manager
E0622 05:47:06.498368       1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: Get "https://10.0.0.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=5s": dial tcp 10.0.0.4:6443: connect: connection refused
I0622 05:47:06.498440       1 leaderelection.go:253] failed to acquire lease kube-system/kube-controller-manager
E0622 05:47:09.826993       1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: Get "https://10.0.0.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=5s": dial tcp 10.0.0.4:6443: connect: connection refused
I0622 05:47:09.827036       1 leaderelection.go:253] failed to acquire lease kube-system/kube-controller-manager
I0622 05:47:11.573774       1 httplog.go:131] "HTTP" verb="GET" URI="/healthz" latency="69.702µs" userAgent="kube-probe/1.25+" audit-ID="" srcIP="127.0.0.1:41188" resp=200
E0622 05:47:12.229831       1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: Get "https://10.0.0.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=5s": dial tcp 10.0.0.4:6443: connect: connection refused
I0622 05:47:12.229871       1 leaderelection.go:253] failed to acquire lease kube-system/kube-controller-manager
E0622 05:47:14.407501       1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: Get "https://10.0.0.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=5s": dial tcp 10.0.0.4:6443: connect: connection refused
I0622 05:47:14.407538       1 leaderelection.go:253] failed to acquire lease kube-system/kube-controller-manager
E0622 05:47:20.165383       1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: leases.coordination.k8s.io "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-system"
I0622 05:47:20.165644       1 leaderelection.go:253] failed to acquire lease kube-system/kube-controller-manager
I0622 05:47:21.571888       1 httplog.go:131] "HTTP" verb="GET" URI="/healthz" latency="74.701µs" userAgent="kube-probe/1.25+" audit-ID="" srcIP="127.0.0.1:41388" resp=200
I0622 05:47:24.356835       1 leaderelection.go:352] lock is held by capz-hly3cw-control-plane-hwctj_41b3315d-79e7-4037-be51-e5c8c04c67f7 and has not yet expired
I0622 05:47:24.356863       1 leaderelection.go:253] failed to acquire lease kube-system/kube-controller-manager
I0622 05:47:27.791484       1 leaderelection.go:352] lock is held by capz-hly3cw-control-plane-hwctj_41b3315d-79e7-4037-be51-e5c8c04c67f7 and has not yet expired
I0622 05:47:27.791512       1 leaderelection.go:253] failed to acquire lease kube-system/kube-controller-manager
I0622 05:47:31.572538       1 httplog.go:131] "HTTP" verb="GET" URI="/healthz" latency="87.403µs" userAgent="kube-probe/1.25+" audit-ID="" srcIP="127.0.0.1:41460" resp=200
I0622 05:47:31.703563       1 leaderelection.go:352] lock is held by capz-hly3cw-control-plane-hwctj_41b3315d-79e7-4037-be51-e5c8c04c67f7 and has not yet expired
I0622 05:47:31.703594       1 leaderelection.go:253] failed to acquire lease kube-system/kube-controller-manager
I0622 05:47:35.913416       1 leaderelection.go:352] lock is held by capz-hly3cw-control-plane-hwctj_41b3315d-79e7-4037-be51-e5c8c04c67f7 and has not yet expired
I0622 05:47:35.913440       1 leaderelection.go:253] failed to acquire lease kube-system/kube-controller-manager
I0622 05:47:39.637804       1 leaderelection.go:258] successfully acquired lease kube-system/kube-controller-manager
I0622 05:47:39.638592       1 event.go:294] "Event occurred" object="kube-system/kube-controller-manager" fieldPath="" kind="Lease" apiVersion="coordination.k8s.io/v1" type="Normal" reason="LeaderElection" message="capz-hly3cw-control-plane-hwctj_1699eb4a-30ad-47ae-b3ab-7b1fa5903f4f became leader"
I0622 05:47:39.770068       1 request.go:533] Waited for 83.961851ms due to client-side throttling, not priority and fairness, request: GET:https://10.0.0.4:6443/api/v1
I0622 05:47:39.820095       1 request.go:533] Waited for 134.033512ms due to client-side throttling, not priority and fairness, request: GET:https://10.0.0.4:6443/apis/apiregistration.k8s.io/v1
I0622 05:47:39.869678       1 request.go:533] Waited for 183.59106ms due to client-side throttling, not priority and fairness, request: GET:https://10.0.0.4:6443/apis/apps/v1
I0622 05:47:39.919664       1 request.go:533] Waited for 233.545217ms due to client-side throttling, not priority and fairness, request: GET:https://10.0.0.4:6443/apis/events.k8s.io/v1
... skipping 56 lines ...
I0622 05:47:40.974785       1 reflector.go:219] Starting reflector *v1.Node (12h51m2.014192758s) from vendor/k8s.io/client-go/informers/factory.go:134
I0622 05:47:40.974798       1 reflector.go:255] Listing and watching *v1.Node from vendor/k8s.io/client-go/informers/factory.go:134
I0622 05:47:40.975083       1 reflector.go:219] Starting reflector *v1.ServiceAccount (12h51m2.014192758s) from vendor/k8s.io/client-go/informers/factory.go:134
I0622 05:47:40.975093       1 reflector.go:255] Listing and watching *v1.ServiceAccount from vendor/k8s.io/client-go/informers/factory.go:134
I0622 05:47:40.975335       1 reflector.go:219] Starting reflector *v1.Secret (12h51m2.014192758s) from vendor/k8s.io/client-go/informers/factory.go:134
I0622 05:47:40.975346       1 reflector.go:255] Listing and watching *v1.Secret from vendor/k8s.io/client-go/informers/factory.go:134
W0622 05:47:40.995307       1 azure_config.go:53] Failed to get cloud-config from secret: failed to get secret azure-cloud-provider: secrets "azure-cloud-provider" is forbidden: User "system:serviceaccount:kube-system:azure-cloud-provider" cannot get resource "secrets" in API group "" in the namespace "kube-system", skip initializing from secret
I0622 05:47:40.995525       1 controllermanager.go:568] Starting "csrsigning"
I0622 05:47:40.997724       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="csr-controller::/etc/kubernetes/pki/ca.crt::/etc/kubernetes/pki/ca.key"
I0622 05:47:40.997976       1 certificate_controller.go:112] Starting certificate controller "csrsigning-kubelet-serving"
I0622 05:47:40.998136       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/etc/kubernetes/pki/ca.crt::/etc/kubernetes/pki/ca.key"
I0622 05:47:40.998141       1 shared_informer.go:255] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
I0622 05:47:40.998088       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="csr-controller::/etc/kubernetes/pki/ca.crt::/etc/kubernetes/pki/ca.key"
... skipping 65 lines ...
I0622 05:47:41.045426       1 plugins.go:634] "Loaded volume plugin" pluginName="kubernetes.io/vsphere-volume"
I0622 05:47:41.045484       1 plugins.go:634] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
I0622 05:47:41.045508       1 plugins.go:634] "Loaded volume plugin" pluginName="kubernetes.io/rbd"
I0622 05:47:41.045563       1 plugins.go:634] "Loaded volume plugin" pluginName="kubernetes.io/storageos"
I0622 05:47:41.045585       1 plugins.go:634] "Loaded volume plugin" pluginName="kubernetes.io/fc"
I0622 05:47:41.045653       1 plugins.go:634] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
I0622 05:47:41.045801       1 csi_plugin.go:262] Cast from VolumeHost to KubeletVolumeHost failed. Skipping CSINode initialization, not running on kubelet
I0622 05:47:41.045820       1 plugins.go:634] "Loaded volume plugin" pluginName="kubernetes.io/csi"
I0622 05:47:41.046237       1 controllermanager.go:597] Started "attachdetach"
I0622 05:47:41.046258       1 controllermanager.go:568] Starting "resourcequota"
I0622 05:47:41.047284       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-hly3cw-control-plane-hwctj"
W0622 05:47:41.047506       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="capz-hly3cw-control-plane-hwctj" does not exist
I0622 05:47:41.047816       1 attach_detach_controller.go:328] Starting attach detach controller
I0622 05:47:41.047833       1 shared_informer.go:255] Waiting for caches to sync for attach detach
E0622 05:47:41.076201       1 resource_quota_controller.go:162] initial discovery check failure, continuing and counting on future sync update: unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0622 05:47:41.076301       1 shared_informer.go:285] caches populated
I0622 05:47:41.076319       1 shared_informer.go:262] Caches are synced for tokens
I0622 05:47:41.076255       1 resource_quota_monitor.go:181] QuotaMonitor using a shared informer for resource "rbac.authorization.k8s.io/v1, Resource=rolebindings"
... skipping 93 lines ...
I0622 05:47:41.111499       1 graph_builder.go:289] GraphBuilder running
I0622 05:47:41.126779       1 controllermanager.go:597] Started "daemonset"
I0622 05:47:41.127000       1 controllermanager.go:568] Starting "statefulset"
I0622 05:47:41.127118       1 daemon_controller.go:291] Starting daemon sets controller
I0622 05:47:41.127131       1 shared_informer.go:255] Waiting for caches to sync for daemon sets
I0622 05:47:41.175491       1 request.go:533] Waited for 62.72622ms due to client-side throttling, not priority and fairness, request: POST:https://10.0.0.4:6443/api/v1/namespaces/kube-system/serviceaccounts/generic-garbage-collector/token
W0622 05:47:41.201648       1 garbagecollector.go:755] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
I0622 05:47:41.201998       1 garbagecollector.go:223] syncing garbage collector with updated resources from discovery (attempt 1): added: [/v1, Resource=configmaps /v1, Resource=endpoints /v1, Resource=events /v1, Resource=limitranges /v1, Resource=namespaces /v1, Resource=nodes /v1, Resource=persistentvolumeclaims /v1, Resource=persistentvolumes /v1, Resource=pods /v1, Resource=podtemplates /v1, Resource=replicationcontrollers /v1, Resource=resourcequotas /v1, Resource=secrets /v1, Resource=serviceaccounts /v1, Resource=services admissionregistration.k8s.io/v1, Resource=mutatingwebhookconfigurations admissionregistration.k8s.io/v1, Resource=validatingwebhookconfigurations apiextensions.k8s.io/v1, Resource=customresourcedefinitions apiregistration.k8s.io/v1, Resource=apiservices apps/v1, Resource=controllerrevisions apps/v1, Resource=daemonsets apps/v1, Resource=deployments apps/v1, Resource=replicasets apps/v1, Resource=statefulsets autoscaling/v2, Resource=horizontalpodautoscalers batch/v1, Resource=cronjobs batch/v1, Resource=jobs certificates.k8s.io/v1, Resource=certificatesigningrequests coordination.k8s.io/v1, Resource=leases crd.projectcalico.org/v1, Resource=bgpconfigurations crd.projectcalico.org/v1, Resource=bgppeers crd.projectcalico.org/v1, Resource=blockaffinities crd.projectcalico.org/v1, Resource=caliconodestatuses crd.projectcalico.org/v1, Resource=clusterinformations crd.projectcalico.org/v1, Resource=felixconfigurations crd.projectcalico.org/v1, Resource=globalnetworkpolicies crd.projectcalico.org/v1, Resource=globalnetworksets crd.projectcalico.org/v1, Resource=hostendpoints crd.projectcalico.org/v1, Resource=ipamblocks crd.projectcalico.org/v1, Resource=ipamconfigs crd.projectcalico.org/v1, Resource=ipamhandles crd.projectcalico.org/v1, Resource=ippools crd.projectcalico.org/v1, Resource=ipreservations crd.projectcalico.org/v1, Resource=kubecontrollersconfigurations crd.projectcalico.org/v1, Resource=networkpolicies crd.projectcalico.org/v1, 
Resource=networksets discovery.k8s.io/v1, Resource=endpointslices events.k8s.io/v1, Resource=events flowcontrol.apiserver.k8s.io/v1beta2, Resource=flowschemas flowcontrol.apiserver.k8s.io/v1beta2, Resource=prioritylevelconfigurations networking.k8s.io/v1, Resource=ingressclasses networking.k8s.io/v1, Resource=ingresses networking.k8s.io/v1, Resource=networkpolicies node.k8s.io/v1, Resource=runtimeclasses policy/v1, Resource=poddisruptionbudgets rbac.authorization.k8s.io/v1, Resource=clusterrolebindings rbac.authorization.k8s.io/v1, Resource=clusterroles rbac.authorization.k8s.io/v1, Resource=rolebindings rbac.authorization.k8s.io/v1, Resource=roles scheduling.k8s.io/v1, Resource=priorityclasses storage.k8s.io/v1, Resource=csidrivers storage.k8s.io/v1, Resource=csinodes storage.k8s.io/v1, Resource=csistoragecapacities storage.k8s.io/v1, Resource=storageclasses storage.k8s.io/v1, Resource=volumeattachments], removed: []
I0622 05:47:41.202017       1 garbagecollector.go:229] reset restmapper
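The added/removed split the garbage collector logs on each resync is a set difference between the previous and current discovery snapshots (here everything is "added" because it is the first sync after startup, and metrics.k8s.io is absent since its discovery failed). A sketch of that diff over plain strings — the real code operates on schema.GroupVersionResource values:

```go
package main

import (
	"fmt"
	"sort"
)

// diff reports which resources appeared in or disappeared from discovery
// between two snapshots, the same added/removed split the GC logs.
func diff(old, cur []string) (added, removed []string) {
	oldSet := make(map[string]bool, len(old))
	curSet := make(map[string]bool, len(cur))
	for _, r := range old {
		oldSet[r] = true
	}
	for _, r := range cur {
		curSet[r] = true
	}
	for r := range curSet {
		if !oldSet[r] {
			added = append(added, r)
		}
	}
	for r := range oldSet {
		if !curSet[r] {
			removed = append(removed, r)
		}
	}
	sort.Strings(added) // deterministic output for logging
	sort.Strings(removed)
	return added, removed
}

func main() {
	added, removed := diff(
		[]string{"v1, Resource=pods"},
		[]string{"v1, Resource=pods", "storage.k8s.io/v1, Resource=csidrivers"},
	)
	fmt.Println("added:", added, "removed:", removed)
}
```

A non-empty diff is what triggers the "reset restmapper" step on the following line, so the GC can resolve the newly discovered types.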
I0622 05:47:41.225052       1 request.go:533] Waited for 97.980644ms due to client-side throttling, not priority and fairness, request: GET:https://10.0.0.4:6443/api/v1/namespaces/kube-system/serviceaccounts/statefulset-controller
I0622 05:47:41.231348       1 controllermanager.go:597] Started "statefulset"
I0622 05:47:41.231751       1 controllermanager.go:568] Starting "tokencleaner"
I0622 05:47:41.231714       1 stateful_set.go:154] Starting stateful set controller
... skipping 17 lines ...
I0622 05:47:41.328119       1 plugins.go:634] "Loaded volume plugin" pluginName="kubernetes.io/azure-disk"
I0622 05:47:41.328132       1 plugins.go:634] "Loaded volume plugin" pluginName="kubernetes.io/vsphere-volume"
I0622 05:47:41.328143       1 plugins.go:634] "Loaded volume plugin" pluginName="kubernetes.io/azure-file"
I0622 05:47:41.328156       1 plugins.go:634] "Loaded volume plugin" pluginName="kubernetes.io/flocker"
I0622 05:47:41.328176       1 plugins.go:634] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
I0622 05:47:41.328190       1 plugins.go:634] "Loaded volume plugin" pluginName="kubernetes.io/storageos"
I0622 05:47:41.328217       1 csi_plugin.go:262] Cast from VolumeHost to KubeletVolumeHost failed. Skipping CSINode initialization, not running on kubelet
I0622 05:47:41.328227       1 plugins.go:634] "Loaded volume plugin" pluginName="kubernetes.io/csi"
I0622 05:47:41.328293       1 controllermanager.go:597] Started "persistentvolume-binder"
I0622 05:47:41.328308       1 controllermanager.go:568] Starting "ephemeral-volume"
I0622 05:47:41.328371       1 pv_controller_base.go:311] Starting persistent volume controller
I0622 05:47:41.328418       1 shared_informer.go:255] Waiting for caches to sync for persistent volume
I0622 05:47:41.377731       1 controllermanager.go:597] Started "ephemeral-volume"
... skipping 790 lines ...
I0622 05:48:11.983256       1 controller_utils.go:206] Controller kube-system/coredns-8c797478b either never recorded expectations, or the ttl expired.
I0622 05:48:11.983392       1 replica_set_utils.go:59] Updating status for : kube-system/coredns-8c797478b, replicas 2->2 (need 2), fullyLabeledReplicas 2->2, readyReplicas 0->1, availableReplicas 0->1, sequence No: 1->1
I0622 05:48:11.983858       1 disruption.go:501] No PodDisruptionBudgets found for pod coredns-8c797478b-bs8h6, PodDisruptionBudget controller will avoid syncing.
I0622 05:48:11.983878       1 disruption.go:441] No matching pdb for pod "coredns-8c797478b-bs8h6"
I0622 05:48:11.989362       1 endpointslice_controller.go:319] Finished syncing service "kube-system/kube-dns" endpoint slices. (39.252458ms)
I0622 05:48:11.989526       1 endpointslice_controller.go:319] Finished syncing service "kube-system/kube-dns" endpoint slices. (115.803µs)
W0622 05:48:11.989552       1 endpointslice_controller.go:306] Error syncing endpoint slices for service "kube-system/kube-dns", retrying. Error: EndpointSlice informer cache is out of date
I0622 05:48:11.990176       1 endpointslicemirroring_controller.go:278] syncEndpoints("kube-system/kube-dns")
I0622 05:48:11.990203       1 endpointslicemirroring_controller.go:313] kube-system/kube-dns Service now has selector, cleaning up any mirrored EndpointSlices
I0622 05:48:11.990282       1 endpointslicemirroring_controller.go:275] Finished syncing EndpointSlices for "kube-system/kube-dns" Endpoints. (108.402µs)
I0622 05:48:11.995556       1 endpoints_controller.go:369] Finished syncing service "kube-system/kube-dns" endpoints. (42.222816ms)
I0622 05:48:11.995811       1 endpoints_controller.go:528] Update endpoints for kube-system/kube-dns, ready: 3 not ready: 0
I0622 05:48:12.002854       1 endpoints_controller.go:369] Finished syncing service "kube-system/kube-dns" endpoints. (7.25454ms)
... skipping 13 lines ...
I0622 05:48:12.036710       1 deployment_controller.go:585] "Finished syncing deployment" deployment="kube-system/coredns" duration="621.312µs"
I0622 05:48:12.109235       1 reflector.go:382] vendor/k8s.io/client-go/informers/factory.go:134: forcing resync
I0622 05:48:12.139949       1 reflector.go:382] vendor/k8s.io/client-go/informers/factory.go:134: forcing resync
I0622 05:48:12.144016       1 pv_controller_base.go:605] resyncing PV controller
E0622 05:48:12.354451       1 resource_quota_controller.go:414] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0622 05:48:12.354518       1 resource_quota_controller.go:429] no resource updates from discovery, skipping resource quota sync
W0622 05:48:12.818885       1 garbagecollector.go:755] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
I0622 05:48:12.997616       1 endpointslice_controller.go:319] Finished syncing service "kube-system/kube-dns" endpoint slices. (6.957935ms)
I0622 05:48:14.691943       1 disruption.go:438] updatePod called on pod "calico-kube-controllers-57cb778775-4dhcs"
I0622 05:48:14.691992       1 disruption.go:444] updatePod "calico-kube-controllers-57cb778775-4dhcs" -> PDB "calico-kube-controllers"
I0622 05:48:14.692403       1 disruption.go:569] Finished syncing PodDisruptionBudget "kube-system/calico-kube-controllers" (278.505µs)
I0622 05:48:14.692331       1 replica_set.go:457] Pod calico-kube-controllers-57cb778775-4dhcs updated, objectMeta {Name:calico-kube-controllers-57cb778775-4dhcs GenerateName:calico-kube-controllers-57cb778775- Namespace:kube-system SelfLink: UID:c97e10b3-6d4a-4709-9afc-30746004b4d9 ResourceVersion:621 Generation:0 CreationTimestamp:2022-06-22 05:45:48 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[k8s-app:calico-kube-controllers pod-template-hash:57cb778775] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:calico-kube-controllers-57cb778775 UID:8d672ef8-70cb-4bd9-a4b7-d434b1f31746 Controller:0xc002326b17 BlockOwnerDeletion:0xc002326b18}] Finalizers:[] ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-06-22 05:45:48 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8d672ef8-70cb-4bd9-a4b7-d434b1f31746\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"calico-kube-controllers\"}":{".":{},"f:env":{".":{},"k:{\"name\":\"DATASTORE_TYPE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"ENABLED_CONTROLLERS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:exec":{".":{},"f:command":{}},"f:failureThreshold":{},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:readinessProbe":{".":{},"f:exec":{".":{},"f:command":{}},"f:failureThreshold":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}} 
Subresource:} {Manager:kube-scheduler Operation:Update APIVersion:v1 Time:2022-06-22 05:45:48 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{".":{},"k:{\"type\":\"PodScheduled\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} Subresource:status} {Manager:kubelet Operation:Update APIVersion:v1 Time:2022-06-22 05:47:48 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} Subresource:status}]} -> {Name:calico-kube-controllers-57cb778775-4dhcs GenerateName:calico-kube-controllers-57cb778775- Namespace:kube-system SelfLink: UID:c97e10b3-6d4a-4709-9afc-30746004b4d9 ResourceVersion:700 Generation:0 CreationTimestamp:2022-06-22 05:45:48 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[k8s-app:calico-kube-controllers pod-template-hash:57cb778775] Annotations:map[cni.projectcalico.org/containerID:0144994ef4da59e43b745418a1d5b9e1f344c1b03857dee4522af0cfef7c28a4 cni.projectcalico.org/podIP:192.168.151.130/32 cni.projectcalico.org/podIPs:192.168.151.130/32] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:calico-kube-controllers-57cb778775 UID:8d672ef8-70cb-4bd9-a4b7-d434b1f31746 Controller:0xc001e9622e BlockOwnerDeletion:0xc001e9622f}] Finalizers:[] ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-06-22 05:45:48 +0000 UTC FieldsType:FieldsV1 
FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8d672ef8-70cb-4bd9-a4b7-d434b1f31746\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"calico-kube-controllers\"}":{".":{},"f:env":{".":{},"k:{\"name\":\"DATASTORE_TYPE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"ENABLED_CONTROLLERS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:exec":{".":{},"f:command":{}},"f:failureThreshold":{},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:readinessProbe":{".":{},"f:exec":{".":{},"f:command":{}},"f:failureThreshold":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}} Subresource:} {Manager:kube-scheduler Operation:Update APIVersion:v1 Time:2022-06-22 05:45:48 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{".":{},"k:{\"type\":\"PodScheduled\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} Subresource:status} {Manager:kubelet Operation:Update APIVersion:v1 Time:2022-06-22 05:47:48 +0000 UTC FieldsType:FieldsV1 
FieldsV1:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} Subresource:status} {Manager:Go-http-client Operation:Update APIVersion:v1 Time:2022-06-22 05:48:14 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} Subresource:status}]}.
I0622 05:48:14.692732       1 controller_utils.go:206] Controller kube-system/calico-kube-controllers-57cb778775 either never recorded expectations, or the ttl expired.
... skipping 129 lines ...
I0622 05:48:42.121449       1 gc_controller.go:214] GC'ing orphaned
I0622 05:48:42.121480       1 gc_controller.go:277] GC'ing unscheduled pods which are terminating.
I0622 05:48:42.140041       1 reflector.go:382] vendor/k8s.io/client-go/informers/factory.go:134: forcing resync
I0622 05:48:42.145291       1 pv_controller_base.go:605] resyncing PV controller
E0622 05:48:42.372199       1 resource_quota_controller.go:414] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0622 05:48:42.372456       1 resource_quota_controller.go:429] no resource updates from discovery, skipping resource quota sync
W0622 05:48:42.843801       1 garbagecollector.go:755] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
I0622 05:48:48.207809       1 disruption.go:438] updatePod called on pod "metrics-server-74557696d7-mw5bz"
I0622 05:48:48.208141       1 disruption.go:501] No PodDisruptionBudgets found for pod metrics-server-74557696d7-mw5bz, PodDisruptionBudget controller will avoid syncing.
I0622 05:48:48.208206       1 disruption.go:441] No matching pdb for pod "metrics-server-74557696d7-mw5bz"
I0622 05:48:48.208412       1 replica_set.go:457] Pod metrics-server-74557696d7-mw5bz updated, objectMeta {Name:metrics-server-74557696d7-mw5bz GenerateName:metrics-server-74557696d7- Namespace:kube-system SelfLink: UID:50dbf004-e043-47d3-809a-8d21abf4fac3 ResourceVersion:752 Generation:0 CreationTimestamp:2022-06-22 05:45:48 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[k8s-app:metrics-server pod-template-hash:74557696d7] Annotations:map[cni.projectcalico.org/containerID:eecd425e200942d09a37cfc3c35dfeb4315c21a0c90f9262d5d90a39473d8669 cni.projectcalico.org/podIP:192.168.151.132/32 cni.projectcalico.org/podIPs:192.168.151.132/32] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:metrics-server-74557696d7 UID:5ea8100a-8a07-444b-b9b7-7bf5401a2121 Controller:0xc0023265e7 BlockOwnerDeletion:0xc0023265e8}] Finalizers:[] ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-06-22 05:45:48 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ea8100a-8a07-444b-b9b7-7bf5401a2121\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"metrics-server\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":4443,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}}},"f:readinessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{".":{},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:securityContext":{".":{},"f:readOnlyRootFilesystem":{},"f:runAsNonRoot":{},"f:runAsUser":{}},"f:t
erminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/tmp\"}":{".":{},"f:mountPath":{},"f:name":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{},"f:volumes":{".":{},"k:{\"name\":\"tmp-dir\"}":{".":{},"f:emptyDir":{},"f:name":{}}}}} Subresource:} {Manager:kube-scheduler Operation:Update APIVersion:v1 Time:2022-06-22 05:45:48 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{".":{},"k:{\"type\":\"PodScheduled\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} Subresource:status} {Manager:Go-http-client Operation:Update APIVersion:v1 Time:2022-06-22 05:48:16 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} Subresource:status} {Manager:kubelet Operation:Update APIVersion:v1 Time:2022-06-22 05:48:26 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.151.132\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} Subresource:status}]} -> {Name:metrics-server-74557696d7-mw5bz GenerateName:metrics-server-74557696d7- Namespace:kube-system SelfLink: UID:50dbf004-e043-47d3-809a-8d21abf4fac3 
ResourceVersion:786 Generation:0 CreationTimestamp:2022-06-22 05:45:48 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[k8s-app:metrics-server pod-template-hash:74557696d7] Annotations:map[cni.projectcalico.org/containerID:eecd425e200942d09a37cfc3c35dfeb4315c21a0c90f9262d5d90a39473d8669 cni.projectcalico.org/podIP:192.168.151.132/32 cni.projectcalico.org/podIPs:192.168.151.132/32] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:metrics-server-74557696d7 UID:5ea8100a-8a07-444b-b9b7-7bf5401a2121 Controller:0xc001f532c7 BlockOwnerDeletion:0xc001f532c8}] Finalizers:[] ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-06-22 05:45:48 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ea8100a-8a07-444b-b9b7-7bf5401a2121\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"metrics-server\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":4443,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}}},"f:readinessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{".":{},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:securityContext":{".":{},"f:readOnlyRootFilesystem":{},"f:runAsNonRoot":{},"f:runAsUser":{}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/tmp\"}":{".":{},"f:mountPath":{},"f:name":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy
":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{},"f:volumes":{".":{},"k:{\"name\":\"tmp-dir\"}":{".":{},"f:emptyDir":{},"f:name":{}}}}} Subresource:} {Manager:kube-scheduler Operation:Update APIVersion:v1 Time:2022-06-22 05:45:48 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{".":{},"k:{\"type\":\"PodScheduled\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} Subresource:status} {Manager:Go-http-client Operation:Update APIVersion:v1 Time:2022-06-22 05:48:16 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} Subresource:status} {Manager:kubelet Operation:Update APIVersion:v1 Time:2022-06-22 05:48:48 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.151.132\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} Subresource:status}]}.
I0622 05:48:48.209010       1 endpoints_controller.go:528] Update endpoints for kube-system/metrics-server, ready: 1 not ready: 0
I0622 05:48:48.209187       1 controller_utils.go:206] Controller kube-system/metrics-server-74557696d7 either never recorded expectations, or the ttl expired.
... skipping 80 lines ...
I0622 05:49:04.222355       1 certificate_controller.go:81] Updating certificate request csr-brh8r
I0622 05:49:04.222401       1 certificate_controller.go:167] Finished syncing certificate request "csr-brh8r" (1.9µs)
I0622 05:49:04.222242       1 certificate_controller.go:167] Finished syncing certificate request "csr-brh8r" (2.9µs)
I0622 05:49:04.223612       1 certificate_controller.go:167] Finished syncing certificate request "csr-brh8r" (8.191287ms)
I0622 05:49:04.223800       1 certificate_controller.go:167] Finished syncing certificate request "csr-brh8r" (2.8µs)
I0622 05:49:04.368314       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-hly3cw-md-0-w2swq"
W0622 05:49:04.368596       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="capz-hly3cw-md-0-w2swq" does not exist
I0622 05:49:04.368866       1 taint_manager.go:446] "Noticed node update" node={nodeName:capz-hly3cw-md-0-w2swq}
I0622 05:49:04.369056       1 taint_manager.go:451] "Updating known taints on node" node="capz-hly3cw-md-0-w2swq" taints=[]
I0622 05:49:04.369317       1 topologycache.go:179] Ignoring node capz-hly3cw-control-plane-hwctj because it has an excluded label
I0622 05:49:04.369499       1 topologycache.go:183] Ignoring node capz-hly3cw-md-0-w2swq because it is not ready: [{MemoryPressure False 2022-06-22 05:49:04 +0000 UTC 2022-06-22 05:49:04 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2022-06-22 05:49:04 +0000 UTC 2022-06-22 05:49:04 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2022-06-22 05:49:04 +0000 UTC 2022-06-22 05:49:04 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready False 2022-06-22 05:49:04 +0000 UTC 2022-06-22 05:49:04 +0000 UTC KubeletNotReady [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful, container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized, CSINode is not yet initialized]}]
I0622 05:49:04.369788       1 topologycache.go:215] Insufficient node info for topology hints (0 zones, %!s(int64=0) CPU, true)
I0622 05:49:04.372302       1 controller.go:697] Ignoring node capz-hly3cw-md-0-w2swq with Ready condition status False
I0622 05:49:04.373537       1 controller.go:272] Triggering nodeSync
I0622 05:49:04.373562       1 controller.go:291] nodeSync has been triggered
I0622 05:49:04.373573       1 controller.go:792] Running updateLoadBalancerHosts(len(services)==0, workers==1)
I0622 05:49:04.373584       1 controller.go:808] Finished updateLoadBalancerHosts
... skipping 137 lines ...
I0622 05:49:12.140917       1 reflector.go:382] vendor/k8s.io/client-go/informers/factory.go:134: forcing resync
I0622 05:49:12.147166       1 pv_controller_base.go:605] resyncing PV controller
I0622 05:49:12.389443       1 resource_quota_controller.go:429] no resource updates from discovery, skipping resource quota sync
I0622 05:49:14.267532       1 taint_manager.go:446] "Noticed node update" node={nodeName:capz-hly3cw-md-0-lwqgl}
I0622 05:49:14.274432       1 taint_manager.go:451] "Updating known taints on node" node="capz-hly3cw-md-0-lwqgl" taints=[]
I0622 05:49:14.269241       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-hly3cw-md-0-lwqgl"
W0622 05:49:14.274812       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="capz-hly3cw-md-0-lwqgl" does not exist
I0622 05:49:14.273527       1 controller.go:697] Ignoring node capz-hly3cw-md-0-lwqgl with Ready condition status False
I0622 05:49:14.275110       1 controller.go:697] Ignoring node capz-hly3cw-md-0-w2swq with Ready condition status False
I0622 05:49:14.275263       1 controller.go:272] Triggering nodeSync
I0622 05:49:14.275414       1 controller.go:291] nodeSync has been triggered
I0622 05:49:14.275564       1 controller.go:792] Running updateLoadBalancerHosts(len(services)==0, workers==1)
I0622 05:49:14.275705       1 controller.go:808] Finished updateLoadBalancerHosts
I0622 05:49:14.275857       1 controller.go:735] It took 0.000293207 seconds to finish nodeSyncInternal
I0622 05:49:14.273557       1 topologycache.go:179] Ignoring node capz-hly3cw-control-plane-hwctj because it has an excluded label
I0622 05:49:14.276192       1 topologycache.go:183] Ignoring node capz-hly3cw-md-0-w2swq because it is not ready: [{MemoryPressure False 2022-06-22 05:49:04 +0000 UTC 2022-06-22 05:49:04 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2022-06-22 05:49:04 +0000 UTC 2022-06-22 05:49:04 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2022-06-22 05:49:04 +0000 UTC 2022-06-22 05:49:04 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready False 2022-06-22 05:49:04 +0000 UTC 2022-06-22 05:49:04 +0000 UTC KubeletNotReady [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful, container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized, CSINode is not yet initialized]}]
I0622 05:49:14.276449       1 topologycache.go:183] Ignoring node capz-hly3cw-md-0-lwqgl because it is not ready: [{MemoryPressure False 2022-06-22 05:49:14 +0000 UTC 2022-06-22 05:49:14 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2022-06-22 05:49:14 +0000 UTC 2022-06-22 05:49:14 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2022-06-22 05:49:14 +0000 UTC 2022-06-22 05:49:14 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready False 2022-06-22 05:49:14 +0000 UTC 2022-06-22 05:49:14 +0000 UTC KubeletNotReady [container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized, failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "capz-hly3cw-md-0-lwqgl" not found]}]
I0622 05:49:14.276531       1 topologycache.go:215] Insufficient node info for topology hints (0 zones, %!s(int64=0) CPU, true)
I0622 05:49:14.274372       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0a4c934c2025250, ext:174359464431, loc:(*time.Location)(0x6f111e0)}}
I0622 05:49:14.276811       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:1, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0a4c936907fa893, ext:181602559738, loc:(*time.Location)(0x6f111e0)}}
I0622 05:49:14.276840       1 daemon_controller.go:974] Nodes needing daemon pods for daemon set kube-proxy: [capz-hly3cw-md-0-lwqgl], creating 1
I0622 05:49:14.282226       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc0a4c9348e1994ba, ext:173562315153, loc:(*time.Location)(0x6f111e0)}}
I0622 05:49:14.282443       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:1, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc0a4c93690d59c37, ext:181608192670, loc:(*time.Location)(0x6f111e0)}}
... skipping 267 lines ...
I0622 05:49:32.739012       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0a4c93b2c0c5798, ext:200064763915, loc:(*time.Location)(0x6f111e0)}}
I0622 05:49:32.739032       1 daemon_controller.go:974] Nodes needing daemon pods for daemon set kube-proxy: [], creating 0
I0622 05:49:32.739061       1 daemon_controller.go:1036] Pods to delete for daemon set kube-proxy: [], deleting 0
I0622 05:49:32.739087       1 daemon_controller.go:1119] Updating daemon set status
I0622 05:49:32.739266       1 daemon_controller.go:1179] Finished syncing daemon set "kube-system/kube-proxy" (1.614239ms)
I0622 05:49:34.593653       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-hly3cw-md-0-lwqgl"
I0622 05:49:35.231725       1 topologycache.go:183] Ignoring node capz-hly3cw-md-0-lwqgl because it is not ready: [{MemoryPressure False 2022-06-22 05:49:34 +0000 UTC 2022-06-22 05:49:14 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2022-06-22 05:49:34 +0000 UTC 2022-06-22 05:49:14 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2022-06-22 05:49:34 +0000 UTC 2022-06-22 05:49:14 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready False 2022-06-22 05:49:34 +0000 UTC 2022-06-22 05:49:14 +0000 UTC KubeletNotReady container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized}]
I0622 05:49:35.232155       1 topologycache.go:179] Ignoring node capz-hly3cw-control-plane-hwctj because it has an excluded label
I0622 05:49:35.232533       1 topologycache.go:215] Insufficient node info for topology hints (1 zones, %!s(int64=2000) CPU, true)
I0622 05:49:35.232842       1 controller_utils.go:205] "Added taint to node" taint=[] node="capz-hly3cw-md-0-w2swq"
I0622 05:49:35.233278       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-hly3cw-md-0-w2swq"
I0622 05:49:35.232866       1 controller.go:697] Ignoring node capz-hly3cw-md-0-lwqgl with Ready condition status False
I0622 05:49:35.233674       1 controller.go:265] Node changes detected, triggering a full node sync on all loadbalancer services
... skipping 7 lines ...
I0622 05:49:35.252581       1 controller_utils.go:217] "Made sure that node has no taint" node="capz-hly3cw-md-0-w2swq" taint=[&Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoSchedule,TimeAdded:<nil>,}]
I0622 05:49:35.253351       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-hly3cw-md-0-w2swq"
I0622 05:49:36.482786       1 azure_vmss.go:370] Can not extract scale set name from providerID (azure:///subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hly3cw/providers/Microsoft.Compute/virtualMachines/capz-hly3cw-md-0-lwqgl), assuming it is managed by availability set: not a vmss instance
I0622 05:49:36.482908       1 azure_vmss.go:370] Can not extract scale set name from providerID (azure:///subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hly3cw/providers/Microsoft.Compute/virtualMachines/capz-hly3cw-md-0-lwqgl), assuming it is managed by availability set: not a vmss instance
I0622 05:49:36.482941       1 azure_instances.go:240] InstanceShutdownByProviderID gets power status "running" for node "capz-hly3cw-md-0-lwqgl"
I0622 05:49:36.482958       1 azure_instances.go:251] InstanceShutdownByProviderID gets provisioning state "Updating" for node "capz-hly3cw-md-0-lwqgl"
I0622 05:49:37.164202       1 node_lifecycle_controller.go:1044] ReadyCondition for Node capz-hly3cw-md-0-w2swq transitioned from &NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:2022-06-22 05:49:14 +0000 UTC,LastTransitionTime:2022-06-22 05:49:04 +0000 UTC,Reason:KubeletNotReady,Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized,} to &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-22 05:49:35 +0000 UTC,LastTransitionTime:2022-06-22 05:49:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,}
I0622 05:49:37.164292       1 node_lifecycle_controller.go:1052] Node capz-hly3cw-md-0-w2swq ReadyCondition updated. Updating timestamp.
I0622 05:49:37.180490       1 taint_manager.go:446] "Noticed node update" node={nodeName:capz-hly3cw-md-0-w2swq}
I0622 05:49:37.180517       1 taint_manager.go:451] "Updating known taints on node" node="capz-hly3cw-md-0-w2swq" taints=[]
I0622 05:49:37.180709       1 taint_manager.go:472] "All taints were removed from the node. Cancelling all evictions..." node="capz-hly3cw-md-0-w2swq"
I0622 05:49:37.180902       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-hly3cw-md-0-w2swq"
I0622 05:49:37.181495       1 node_lifecycle_controller.go:898] Node capz-hly3cw-md-0-w2swq is healthy again, removing all taints
... skipping 149 lines ...
I0622 05:49:55.172582       1 controller.go:764] Successfully updated 0 out of 0 load balancers to direct traffic to the updated set of nodes
I0622 05:49:55.172602       1 controller.go:735] It took 0.000251406 seconds to finish nodeSyncInternal
I0622 05:49:55.186489       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-hly3cw-md-0-lwqgl"
I0622 05:49:55.187441       1 controller_utils.go:217] "Made sure that node has no taint" node="capz-hly3cw-md-0-lwqgl" taint=[&Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoSchedule,TimeAdded:<nil>,}]
I0622 05:49:57.113473       1 reflector.go:382] vendor/k8s.io/client-go/informers/factory.go:134: forcing resync
I0622 05:49:57.148807       1 pv_controller_base.go:605] resyncing PV controller
I0622 05:49:57.185346       1 node_lifecycle_controller.go:1044] ReadyCondition for Node capz-hly3cw-md-0-lwqgl transitioned from &NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:2022-06-22 05:49:34 +0000 UTC,LastTransitionTime:2022-06-22 05:49:14 +0000 UTC,Reason:KubeletNotReady,Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized,} to &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-22 05:49:55 +0000 UTC,LastTransitionTime:2022-06-22 05:49:55 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,}
I0622 05:49:57.185422       1 node_lifecycle_controller.go:1052] Node capz-hly3cw-md-0-lwqgl ReadyCondition updated. Updating timestamp.
I0622 05:49:57.199033       1 taint_manager.go:446] "Noticed node update" node={nodeName:capz-hly3cw-md-0-lwqgl}
I0622 05:49:57.199065       1 taint_manager.go:451] "Updating known taints on node" node="capz-hly3cw-md-0-lwqgl" taints=[]
I0622 05:49:57.199085       1 taint_manager.go:472] "All taints were removed from the node. Cancelling all evictions..." node="capz-hly3cw-md-0-lwqgl"
I0622 05:49:57.199187       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-hly3cw-md-0-lwqgl"
I0622 05:49:57.206241       1 node_lifecycle_controller.go:898] Node capz-hly3cw-md-0-lwqgl is healthy again, removing all taints
... skipping 34 lines ...
I0622 05:50:03.226893       1 replica_set.go:577] "Too few replicas" replicaSet="kube-system/csi-azurefile-controller-8565959cf4" need=2 creating=2
I0622 05:50:03.227407       1 deployment_controller.go:222] "ReplicaSet added" replicaSet="kube-system/csi-azurefile-controller-8565959cf4"
I0622 05:50:03.227841       1 event.go:294] "Event occurred" object="kube-system/csi-azurefile-controller" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set csi-azurefile-controller-8565959cf4 to 2"
I0622 05:50:03.239857       1 deployment_controller.go:183] "Updating deployment" deployment="kube-system/csi-azurefile-controller"
I0622 05:50:03.240221       1 deployment_util.go:774] Deployment "csi-azurefile-controller" timed out (false) [last progress check: 2022-06-22 05:50:03.227080625 +0000 UTC m=+230.552838280 - now: 2022-06-22 05:50:03.240212931 +0000 UTC m=+230.565970686]
I0622 05:50:03.245688       1 deployment_controller.go:585] "Finished syncing deployment" deployment="kube-system/csi-azurefile-controller" duration="28.159157ms"
I0622 05:50:03.245788       1 deployment_controller.go:497] "Error syncing deployment" deployment="kube-system/csi-azurefile-controller" err="Operation cannot be fulfilled on deployments.apps \"csi-azurefile-controller\": the object has been modified; please apply your changes to the latest version and try again"
I0622 05:50:03.245991       1 deployment_controller.go:583] "Started syncing deployment" deployment="kube-system/csi-azurefile-controller" startTime="2022-06-22 05:50:03.245974965 +0000 UTC m=+230.571732920"
I0622 05:50:03.246979       1 deployment_util.go:774] Deployment "csi-azurefile-controller" timed out (false) [last progress check: 2022-06-22 05:50:03 +0000 UTC - now: 2022-06-22 05:50:03.246971388 +0000 UTC m=+230.572728943]
I0622 05:50:03.255372       1 controller_utils.go:581] Controller csi-azurefile-controller-8565959cf4 created pod csi-azurefile-controller-8565959cf4-pb5gn
I0622 05:50:03.255614       1 event.go:294] "Event occurred" object="kube-system/csi-azurefile-controller-8565959cf4" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: csi-azurefile-controller-8565959cf4-pb5gn"
I0622 05:50:03.258339       1 taint_manager.go:411] "Noticed pod update" pod="kube-system/csi-azurefile-controller-8565959cf4-pb5gn"
I0622 05:50:03.258556       1 pvc_protection_controller.go:380] "Enqueuing PVCs for Pod" pod="kube-system/csi-azurefile-controller-8565959cf4-pb5gn" podUID=93921c92-da08-4226-ba49-1ffbdb7a027d
I0622 05:50:03.258599       1 disruption.go:426] addPod called on pod "csi-azurefile-controller-8565959cf4-pb5gn"
I0622 05:50:03.259026       1 disruption.go:501] No PodDisruptionBudgets found for pod csi-azurefile-controller-8565959cf4-pb5gn, PodDisruptionBudget controller will avoid syncing.
I0622 05:50:03.258378       1 replica_set.go:394] Pod csi-azurefile-controller-8565959cf4-pb5gn created: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"csi-azurefile-controller-8565959cf4-pb5gn", GenerateName:"csi-azurefile-controller-8565959cf4-", Namespace:"kube-system", SelfLink:"", UID:"93921c92-da08-4226-ba49-1ffbdb7a027d", ResourceVersion:"1050", Generation:0, CreationTimestamp:time.Date(2022, time.June, 22, 5, 50, 3, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"csi-azurefile-controller", "pod-template-hash":"8565959cf4"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"apps/v1", Kind:"ReplicaSet", Name:"csi-azurefile-controller-8565959cf4", UID:"e5ce3374-8584-4549-95a1-5a57639c2857", Controller:(*bool)(0xc000c47d97), BlockOwnerDeletion:(*bool)(0xc000c47d98)}}, Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.June, 22, 5, 50, 3, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0013fc930), Subresource:""}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"socket-dir", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(0xc0013fc948), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), 
DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"azure-cred", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0013fc960), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, 
v1.Volume{Name:"kube-api-access-8f6ms", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(0xc00035c9a0), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"csi-provisioner", Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-provisioner:v3.1.1", Command:[]string(nil), Args:[]string{"-v=2", "--csi-address=$(ADDRESS)", "--leader-election", "--leader-election-namespace=kube-system", "--timeout=300s", "--extra-create-metadata=true"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, 
Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:524288000, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-8f6ms", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"csi-attacher", Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-attacher:v3.5.0", Command:[]string(nil), Args:[]string{"-v=2", "-csi-address=$(ADDRESS)", "-timeout=120s", "--leader-election", "--leader-election-namespace=kube-system"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:524288000, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500Mi", Format:"BinarySI"}}, 
Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-8f6ms", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"csi-snapshotter", Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-snapshotter:v5.0.1", Command:[]string(nil), Args:[]string{"-v=2", "-csi-address=$(ADDRESS)", "--leader-election", "--leader-election-namespace=kube-system"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:104857600, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", 
Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-8f6ms", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"csi-resizer", Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-resizer:v1.5.0", Command:[]string(nil), Args:[]string{"-csi-address=$(ADDRESS)", "-v=2", "--leader-election", "--leader-election-namespace=kube-system", "-handle-volume-inuse-error=false", "-feature-gates=RecoverVolumeExpansionFailure=true", "-timeout=120s"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:524288000, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, 
v1.VolumeMount{Name:"kube-api-access-8f6ms", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"liveness-probe", Image:"mcr.microsoft.com/oss/kubernetes-csi/livenessprobe:v2.7.0", Command:[]string(nil), Args:[]string{"--csi-address=/csi/csi.sock", "--probe-timeout=3s", "--health-port=29612", "--v=2"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:104857600, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-8f6ms", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), 
TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"azurefile", Image:"mcr.microsoft.com/k8s/csi/azurefile-csi:latest", Command:[]string(nil), Args:[]string{"--v=5", "--endpoint=$(CSI_ENDPOINT)", "--metrics-address=0.0.0.0:29614", "--user-agent-suffix=OSS-kubectl"}, WorkingDir:"", Ports:[]v1.ContainerPort{v1.ContainerPort{Name:"healthz", HostPort:29612, ContainerPort:29612, Protocol:"TCP", HostIP:""}, v1.ContainerPort{Name:"metrics", HostPort:29614, ContainerPort:29614, Protocol:"TCP", HostIP:""}}, EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"AZURE_CREDENTIAL_FILE", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc00035cac0)}, v1.EnvVar{Name:"CSI_ENDPOINT", Value:"unix:///csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:209715200, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"azure-cred", ReadOnly:false, MountPath:"/etc/kubernetes/", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-8f6ms", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, 
VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(0xc001fc2d40), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001e961f0), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"csi-azurefile-controller-sa", DeprecatedServiceAccount:"csi-azurefile-controller-sa", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0001f93b0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node-role.kubernetes.io/master", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node-role.kubernetes.io/controlplane", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001e96250)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001e96270)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-cluster-critical", Priority:(*int32)(0xc001e96278), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc001e9627c), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc002108470), 
Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil), OS:(*v1.PodOS)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition(nil), Message:"", Reason:"", NominatedNodeName:"", HostIP:"", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:<nil>, InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus(nil), QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}.
I0622 05:50:03.259221       1 controller_utils.go:240] Lowered expectations &controller.ControlleeExpectations{add:1, del:0, key:"kube-system/csi-azurefile-controller-8565959cf4", timestamp:time.Time{wall:0xc0a4c942cd831bb4, ext:230552454271, loc:(*time.Location)(0x6f111e0)}}
I0622 05:50:03.259184       1 disruption.go:429] No matching pdb for pod "csi-azurefile-controller-8565959cf4-pb5gn"
I0622 05:50:03.262107       1 deployment_controller.go:183] "Updating deployment" deployment="kube-system/csi-azurefile-controller"
I0622 05:50:03.265655       1 deployment_controller.go:585] "Finished syncing deployment" deployment="kube-system/csi-azurefile-controller" duration="19.663259ms"
I0622 05:50:03.266041       1 deployment_controller.go:583] "Started syncing deployment" deployment="kube-system/csi-azurefile-controller" startTime="2022-06-22 05:50:03.266017233 +0000 UTC m=+230.591775188"
I0622 05:50:03.266974       1 disruption.go:438] updatePod called on pod "csi-azurefile-controller-8565959cf4-pb5gn"
... skipping 6 lines ...
I0622 05:50:03.271449       1 deployment_controller.go:585] "Finished syncing deployment" deployment="kube-system/csi-azurefile-controller" duration="5.416526ms"
I0622 05:50:03.278298       1 controller_utils.go:581] Controller csi-azurefile-controller-8565959cf4 created pod csi-azurefile-controller-8565959cf4-pgcl4
I0622 05:50:03.278386       1 replica_set_utils.go:59] Updating status for : kube-system/csi-azurefile-controller-8565959cf4, replicas 0->0 (need 2), fullyLabeledReplicas 0->0, readyReplicas 0->0, availableReplicas 0->0, sequence No: 0->1
I0622 05:50:03.278729       1 event.go:294] "Event occurred" object="kube-system/csi-azurefile-controller-8565959cf4" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: csi-azurefile-controller-8565959cf4-pgcl4"
I0622 05:50:03.278976       1 pvc_protection_controller.go:380] "Enqueuing PVCs for Pod" pod="kube-system/csi-azurefile-controller-8565959cf4-pgcl4" podUID=ec95c1fd-e039-425d-83c0-ee9c7df72ed9
I0622 05:50:03.279243       1 taint_manager.go:411] "Noticed pod update" pod="kube-system/csi-azurefile-controller-8565959cf4-pgcl4"
I0622 05:50:03.278949       1 replica_set.go:394] Pod csi-azurefile-controller-8565959cf4-pgcl4 created: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"csi-azurefile-controller-8565959cf4-pgcl4", GenerateName:"csi-azurefile-controller-8565959cf4-", Namespace:"kube-system", SelfLink:"", UID:"ec95c1fd-e039-425d-83c0-ee9c7df72ed9", ResourceVersion:"1054", Generation:0, CreationTimestamp:time.Date(2022, time.June, 22, 5, 50, 3, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"csi-azurefile-controller", "pod-template-hash":"8565959cf4"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"apps/v1", Kind:"ReplicaSet", Name:"csi-azurefile-controller-8565959cf4", UID:"e5ce3374-8584-4549-95a1-5a57639c2857", Controller:(*bool)(0xc001d4dcd7), BlockOwnerDeletion:(*bool)(0xc001d4dcd8)}}, Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.June, 22, 5, 50, 3, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0013fd488), Subresource:""}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"socket-dir", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(0xc0013fd4a0), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), 
DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"azure-cred", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0013fd4b8), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, 
v1.Volume{Name:"kube-api-access-j6dkw", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(0xc00035d1a0), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"csi-provisioner", Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-provisioner:v3.1.1", Command:[]string(nil), Args:[]string{"-v=2", "--csi-address=$(ADDRESS)", "--leader-election", "--leader-election-namespace=kube-system", "--timeout=300s", "--extra-create-metadata=true"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, 
Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:524288000, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-j6dkw", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"csi-attacher", Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-attacher:v3.5.0", Command:[]string(nil), Args:[]string{"-v=2", "-csi-address=$(ADDRESS)", "-timeout=120s", "--leader-election", "--leader-election-namespace=kube-system"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:524288000, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500Mi", Format:"BinarySI"}}, 
Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-j6dkw", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"csi-snapshotter", Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-snapshotter:v5.0.1", Command:[]string(nil), Args:[]string{"-v=2", "-csi-address=$(ADDRESS)", "--leader-election", "--leader-election-namespace=kube-system"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:104857600, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", 
Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-j6dkw", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"csi-resizer", Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-resizer:v1.5.0", Command:[]string(nil), Args:[]string{"-csi-address=$(ADDRESS)", "-v=2", "--leader-election", "--leader-election-namespace=kube-system", "-handle-volume-inuse-error=false", "-feature-gates=RecoverVolumeExpansionFailure=true", "-timeout=120s"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:524288000, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, 
v1.VolumeMount{Name:"kube-api-access-j6dkw", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"liveness-probe", Image:"mcr.microsoft.com/oss/kubernetes-csi/livenessprobe:v2.7.0", Command:[]string(nil), Args:[]string{"--csi-address=/csi/csi.sock", "--probe-timeout=3s", "--health-port=29612", "--v=2"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:104857600, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-j6dkw", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), 
TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"azurefile", Image:"mcr.microsoft.com/k8s/csi/azurefile-csi:latest", Command:[]string(nil), Args:[]string{"--v=5", "--endpoint=$(CSI_ENDPOINT)", "--metrics-address=0.0.0.0:29614", "--user-agent-suffix=OSS-kubectl"}, WorkingDir:"", Ports:[]v1.ContainerPort{v1.ContainerPort{Name:"healthz", HostPort:29612, ContainerPort:29612, Protocol:"TCP", HostIP:""}, v1.ContainerPort{Name:"metrics", HostPort:29614, ContainerPort:29614, Protocol:"TCP", HostIP:""}}, EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"AZURE_CREDENTIAL_FILE", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc00035d2e0)}, v1.EnvVar{Name:"CSI_ENDPOINT", Value:"unix:///csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:209715200, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"azure-cred", ReadOnly:false, MountPath:"/etc/kubernetes/", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-j6dkw", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, 
VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(0xc0020c9380), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001fe06b0), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"csi-azurefile-controller-sa", DeprecatedServiceAccount:"csi-azurefile-controller-sa", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0001f9f80), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node-role.kubernetes.io/master", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node-role.kubernetes.io/controlplane", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001fe0710)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001fe0730)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-cluster-critical", Priority:(*int32)(0xc001fe0738), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc001fe073c), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc002109d80), 
Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil), OS:(*v1.PodOS)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition(nil), Message:"", Reason:"", NominatedNodeName:"", HostIP:"", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:<nil>, InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus(nil), QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}.
I0622 05:50:03.279441       1 controller_utils.go:240] Lowered expectations &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/csi-azurefile-controller-8565959cf4", timestamp:time.Time{wall:0xc0a4c942cd831bb4, ext:230552454271, loc:(*time.Location)(0x6f111e0)}}
I0622 05:50:03.279785       1 disruption.go:426] addPod called on pod "csi-azurefile-controller-8565959cf4-pgcl4"
I0622 05:50:03.280033       1 disruption.go:501] No PodDisruptionBudgets found for pod csi-azurefile-controller-8565959cf4-pgcl4, PodDisruptionBudget controller will avoid syncing.
I0622 05:50:03.280223       1 disruption.go:429] No matching pdb for pod "csi-azurefile-controller-8565959cf4-pgcl4"
I0622 05:50:03.293880       1 replica_set.go:667] Finished syncing ReplicaSet "kube-system/csi-azurefile-controller-8565959cf4" (67.464673ms)
I0622 05:50:03.296258       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/csi-azurefile-controller-8565959cf4", timestamp:time.Time{wall:0xc0a4c942cd831bb4, ext:230552454271, loc:(*time.Location)(0x6f111e0)}}
... skipping 213 lines ...
I0622 05:50:13.557052       1 replica_set.go:394] Pod csi-snapshot-controller-789545b454-wzvms created: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"csi-snapshot-controller-789545b454-wzvms", GenerateName:"csi-snapshot-controller-789545b454-", Namespace:"kube-system", SelfLink:"", UID:"e81d6057-da6e-4281-ad1e-6e6b84deb01c", ResourceVersion:"1177", Generation:0, CreationTimestamp:time.Date(2022, time.June, 22, 5, 50, 13, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"csi-snapshot-controller", "pod-template-hash":"789545b454"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"apps/v1", Kind:"ReplicaSet", Name:"csi-snapshot-controller-789545b454", UID:"6eaedb36-623c-46eb-9978-6d381f02018a", Controller:(*bool)(0xc0006fa3b7), BlockOwnerDeletion:(*bool)(0xc0006fa3b8)}}, Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.June, 22, 5, 50, 13, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0007680f0), Subresource:""}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-api-access-5rz55", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), 
DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(0xc0002fd640), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"csi-snapshot-controller", Image:"mcr.microsoft.com/oss/kubernetes-csi/snapshot-controller:v5.0.1", Command:[]string(nil), Args:[]string{"--v=2", "--leader-election=true", "--leader-election-namespace=kube-system"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:104857600, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-5rz55", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), 
TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0006fa468), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"csi-snapshot-controller-sa", DeprecatedServiceAccount:"csi-snapshot-controller-sa", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0004e23f0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node-role.kubernetes.io/master", Operator:"Equal", Value:"true", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node-role.kubernetes.io/controlplane", Operator:"Equal", Value:"true", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0006fa4d0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0006fa4f0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-cluster-critical", Priority:(*int32)(0xc0006fa4f8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0006fa4fc), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc001c71e50), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil), OS:(*v1.PodOS)(nil)}, Status:v1.PodStatus{Phase:"Pending", 
Conditions:[]v1.PodCondition(nil), Message:"", Reason:"", NominatedNodeName:"", HostIP:"", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:<nil>, InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus(nil), QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}.
I0622 05:50:13.557282       1 controller_utils.go:240] Lowered expectations &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/csi-snapshot-controller-789545b454", timestamp:time.Time{wall:0xc0a4c9455f990474, ext:240855879799, loc:(*time.Location)(0x6f111e0)}}
I0622 05:50:13.567404       1 controller_utils.go:581] Controller csi-snapshot-controller-789545b454 created pod csi-snapshot-controller-789545b454-wzvms
I0622 05:50:13.567471       1 replica_set_utils.go:59] Updating status for : kube-system/csi-snapshot-controller-789545b454, replicas 0->0 (need 2), fullyLabeledReplicas 0->0, readyReplicas 0->0, availableReplicas 0->0, sequence No: 0->1
I0622 05:50:13.567856       1 event.go:294] "Event occurred" object="kube-system/csi-snapshot-controller-789545b454" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: csi-snapshot-controller-789545b454-wzvms"
I0622 05:50:13.578027       1 deployment_controller.go:585] "Finished syncing deployment" deployment="kube-system/csi-snapshot-controller" duration="55.724938ms"
I0622 05:50:13.578062       1 deployment_controller.go:497] "Error syncing deployment" deployment="kube-system/csi-snapshot-controller" err="Operation cannot be fulfilled on deployments.apps \"csi-snapshot-controller\": the object has been modified; please apply your changes to the latest version and try again"
I0622 05:50:13.578111       1 deployment_controller.go:583] "Started syncing deployment" deployment="kube-system/csi-snapshot-controller" startTime="2022-06-22 05:50:13.578093395 +0000 UTC m=+240.903851150"
I0622 05:50:13.578514       1 deployment_util.go:774] Deployment "csi-snapshot-controller" timed out (false) [last progress check: 2022-06-22 05:50:13 +0000 UTC - now: 2022-06-22 05:50:13.578503405 +0000 UTC m=+240.904261160]
I0622 05:50:13.579026       1 disruption.go:438] updatePod called on pod "csi-snapshot-controller-789545b454-wzvms"
I0622 05:50:13.579062       1 disruption.go:501] No PodDisruptionBudgets found for pod csi-snapshot-controller-789545b454-wzvms, PodDisruptionBudget controller will avoid syncing.
I0622 05:50:13.579072       1 disruption.go:441] No matching pdb for pod "csi-snapshot-controller-789545b454-wzvms"
I0622 05:50:13.579161       1 taint_manager.go:411] "Noticed pod update" pod="kube-system/csi-snapshot-controller-789545b454-wzvms"
... skipping 1480 lines ...
I0622 05:55:19.796307       1 disruption.go:429] No matching pdb for pod "azurefile-volume-tester-nkhcr-575cd99d79-vjrnz"
I0622 05:55:19.796378       1 replica_set_utils.go:59] Updating status for : azurefile-5356/azurefile-volume-tester-nkhcr-575cd99d79, replicas 0->0 (need 1), fullyLabeledReplicas 0->0, readyReplicas 0->0, availableReplicas 0->0, sequence No: 0->1
I0622 05:55:19.795474       1 taint_manager.go:411] "Noticed pod update" pod="azurefile-5356/azurefile-volume-tester-nkhcr-575cd99d79-vjrnz"
I0622 05:55:19.795518       1 replica_set.go:394] Pod azurefile-volume-tester-nkhcr-575cd99d79-vjrnz created: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"azurefile-volume-tester-nkhcr-575cd99d79-vjrnz", GenerateName:"azurefile-volume-tester-nkhcr-575cd99d79-", Namespace:"azurefile-5356", SelfLink:"", UID:"42975cef-9b6e-4148-a741-096d1d1eb944", ResourceVersion:"2231", Generation:0, CreationTimestamp:time.Date(2022, time.June, 22, 5, 55, 19, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"azurefile-volume-tester-5018949295715050020", "pod-template-hash":"575cd99d79"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"apps/v1", Kind:"ReplicaSet", Name:"azurefile-volume-tester-nkhcr-575cd99d79", UID:"062e414c-e306-4ef3-8f1b-02d79d72f058", Controller:(*bool)(0xc00098fb97), BlockOwnerDeletion:(*bool)(0xc00098fb98)}}, Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.June, 22, 5, 55, 19, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00167ac00), Subresource:""}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"test-volume-1", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(0xc00167ac48), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), 
CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"kube-api-access-7j7qp", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(0xc00147c680), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), 
CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"volume-tester", Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-2", Command:[]string{"/bin/sh"}, Args:[]string{"-c", "echo 'hello world' >> /mnt/test-1/data && while true; do sleep 100; done"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"test-volume-1", ReadOnly:false, MountPath:"/mnt/test-1", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-7j7qp", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00098fc88), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc00043b650), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", 
Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00098fce0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00098fd00)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc00098fd08), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00098fd0c), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc001801130), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil), OS:(*v1.PodOS)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition(nil), Message:"", Reason:"", NominatedNodeName:"", HostIP:"", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:<nil>, InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus(nil), QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}.
I0622 05:55:19.796572       1 controller_utils.go:240] Lowered expectations &controller.ControlleeExpectations{add:0, del:0, key:"azurefile-5356/azurefile-volume-tester-nkhcr-575cd99d79", timestamp:time.Time{wall:0xc0a4c991eea5f773, ext:547108386679, loc:(*time.Location)(0x6f111e0)}}
I0622 05:55:19.795954       1 deployment_controller.go:585] "Finished syncing deployment" deployment="azurefile-5356/azurefile-volume-tester-nkhcr" duration="23.974967ms"
I0622 05:55:19.796778       1 deployment_controller.go:497] "Error syncing deployment" deployment="azurefile-5356/azurefile-volume-tester-nkhcr" err="Operation cannot be fulfilled on deployments.apps \"azurefile-volume-tester-nkhcr\": the object has been modified; please apply your changes to the latest version and try again"
I0622 05:55:19.796961       1 deployment_controller.go:583] "Started syncing deployment" deployment="azurefile-5356/azurefile-volume-tester-nkhcr" startTime="2022-06-22 05:55:19.796942762 +0000 UTC m=+547.122700417"
I0622 05:55:19.797347       1 event.go:294] "Event occurred" object="azurefile-5356/azurefile-volume-tester-nkhcr-575cd99d79" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: azurefile-volume-tester-nkhcr-575cd99d79-vjrnz"
I0622 05:55:19.797415       1 deployment_util.go:774] Deployment "azurefile-volume-tester-nkhcr" timed out (false) [last progress check: 2022-06-22 05:55:19 +0000 UTC - now: 2022-06-22 05:55:19.797407873 +0000 UTC m=+547.123165428]
I0622 05:55:19.799176       1 disruption.go:438] updatePod called on pod "azurefile-volume-tester-nkhcr-575cd99d79-vjrnz"
I0622 05:55:19.800826       1 disruption.go:501] No PodDisruptionBudgets found for pod azurefile-volume-tester-nkhcr-575cd99d79-vjrnz, PodDisruptionBudget controller will avoid syncing.
I0622 05:55:19.801018       1 disruption.go:441] No matching pdb for pod "azurefile-volume-tester-nkhcr-575cd99d79-vjrnz"
... skipping 14 lines ...
I0622 05:55:19.828225       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"azurefile-5356/azurefile-volume-tester-nkhcr-575cd99d79", timestamp:time.Time{wall:0xc0a4c991eea5f773, ext:547108386679, loc:(*time.Location)(0x6f111e0)}}
I0622 05:55:19.828482       1 replica_set.go:667] Finished syncing ReplicaSet "azurefile-5356/azurefile-volume-tester-nkhcr-575cd99d79" (258.606µs)
I0622 05:55:19.827484       1 disruption.go:501] No PodDisruptionBudgets found for pod azurefile-volume-tester-nkhcr-575cd99d79-vjrnz, PodDisruptionBudget controller will avoid syncing.
I0622 05:55:19.828789       1 disruption.go:441] No matching pdb for pod "azurefile-volume-tester-nkhcr-575cd99d79-vjrnz"
I0622 05:55:19.828153       1 deployment_controller.go:183] "Updating deployment" deployment="azurefile-5356/azurefile-volume-tester-nkhcr"
I0622 05:55:19.831306       1 deployment_controller.go:585] "Finished syncing deployment" deployment="azurefile-5356/azurefile-volume-tester-nkhcr" duration="4.952917ms"
I0622 05:55:19.831478       1 deployment_controller.go:497] "Error syncing deployment" deployment="azurefile-5356/azurefile-volume-tester-nkhcr" err="Operation cannot be fulfilled on deployments.apps \"azurefile-volume-tester-nkhcr\": the object has been modified; please apply your changes to the latest version and try again"
I0622 05:55:19.831631       1 deployment_controller.go:583] "Started syncing deployment" deployment="azurefile-5356/azurefile-volume-tester-nkhcr" startTime="2022-06-22 05:55:19.831615282 +0000 UTC m=+547.157372937"
I0622 05:55:19.835444       1 deployment_controller.go:183] "Updating deployment" deployment="azurefile-5356/azurefile-volume-tester-nkhcr"
I0622 05:55:19.835717       1 deployment_controller.go:585] "Finished syncing deployment" deployment="azurefile-5356/azurefile-volume-tester-nkhcr" duration="4.087097ms"
I0622 05:55:19.835765       1 deployment_controller.go:583] "Started syncing deployment" deployment="azurefile-5356/azurefile-volume-tester-nkhcr" startTime="2022-06-22 05:55:19.83574908 +0000 UTC m=+547.161507035"
I0622 05:55:19.836059       1 deployment_util.go:774] Deployment "azurefile-volume-tester-nkhcr" timed out (false) [last progress check: 2022-06-22 05:55:19 +0000 UTC - now: 2022-06-22 05:55:19.836049887 +0000 UTC m=+547.161807742]
I0622 05:55:19.836097       1 progress.go:195] Queueing up deployment "azurefile-volume-tester-nkhcr" for a progress check after 599s
... skipping 88 lines ...
I0622 05:55:24.847201       1 deployment_controller.go:585] "Finished syncing deployment" deployment="azurefile-5356/azurefile-volume-tester-nkhcr" duration="12.768099ms"
I0622 05:55:24.847510       1 deployment_controller.go:583] "Started syncing deployment" deployment="azurefile-5356/azurefile-volume-tester-nkhcr" startTime="2022-06-22 05:55:24.847490367 +0000 UTC m=+552.173248122"
I0622 05:55:24.848209       1 deployment_controller.go:183] "Updating deployment" deployment="azurefile-5356/azurefile-volume-tester-nkhcr"
I0622 05:55:24.848401       1 controller_utils.go:938] Ignoring inactive pod azurefile-5356/azurefile-volume-tester-nkhcr-575cd99d79-vjrnz in state Running, deletion time 2022-06-22 05:55:54 +0000 UTC
I0622 05:55:24.848591       1 replica_set.go:667] Finished syncing ReplicaSet "azurefile-5356/azurefile-volume-tester-nkhcr-575cd99d79" (1.868043ms)
I0622 05:55:24.851154       1 deployment_controller.go:585] "Finished syncing deployment" deployment="azurefile-5356/azurefile-volume-tester-nkhcr" duration="3.649085ms"
I0622 05:55:24.851186       1 deployment_controller.go:497] "Error syncing deployment" deployment="azurefile-5356/azurefile-volume-tester-nkhcr" err="Operation cannot be fulfilled on deployments.apps \"azurefile-volume-tester-nkhcr\": the object has been modified; please apply your changes to the latest version and try again"
I0622 05:55:24.851230       1 deployment_controller.go:583] "Started syncing deployment" deployment="azurefile-5356/azurefile-volume-tester-nkhcr" startTime="2022-06-22 05:55:24.851215154 +0000 UTC m=+552.176973109"
I0622 05:55:24.854651       1 deployment_controller.go:183] "Updating deployment" deployment="azurefile-5356/azurefile-volume-tester-nkhcr"
I0622 05:55:24.855042       1 deployment_controller.go:585] "Finished syncing deployment" deployment="azurefile-5356/azurefile-volume-tester-nkhcr" duration="3.816489ms"
I0622 05:55:24.855219       1 deployment_controller.go:583] "Started syncing deployment" deployment="azurefile-5356/azurefile-volume-tester-nkhcr" startTime="2022-06-22 05:55:24.855072744 +0000 UTC m=+552.180942801"
I0622 05:55:24.855528       1 progress.go:195] Queueing up deployment "azurefile-volume-tester-nkhcr" for a progress check after 596s
I0622 05:55:24.855565       1 deployment_controller.go:585] "Finished syncing deployment" deployment="azurefile-5356/azurefile-volume-tester-nkhcr" duration="368.609µs"
... skipping 1363 lines ...

JUnit report was created: /logs/artifacts/junit_01.xml


Summarizing 1 Failure:

[Fail] Dynamic Provisioning [It] should create a volume on demand and resize it [kubernetes.io/azure-file] [file.csi.azure.com] [Windows] 
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/dynamic_provisioning_test.go:380

Ran 6 of 34 Specs in 360.231 seconds
FAIL! -- 5 Passed | 1 Failed | 0 Pending | 28 Skipped

You're using deprecated Ginkgo functionality:
=============================================
Ginkgo 2.0 is under active development and will introduce several new features, improvements, and a small handful of breaking changes.
A release candidate for 2.0 is now available and 2.0 should GA in Fall 2021.  Please give the RC a try and send us feedback!
  - To learn more, view the migration guide at https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md
... skipping 5 lines ...
  If this change will be impactful to you please leave a comment on https://github.com/onsi/ginkgo/issues/711
  Learn more at: https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md#removed-custom-reporters

To silence deprecations that can be silenced set the following environment variable:
  ACK_GINKGO_DEPRECATIONS=1.16.5

--- FAIL: TestE2E (360.24s)
FAIL
FAIL	sigs.k8s.io/azurefile-csi-driver/test/e2e	360.295s
FAIL
make: *** [Makefile:85: e2e-test] Error 1
NAME                              STATUS   ROLES           AGE     VERSION                             INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
capz-hly3cw-control-plane-hwctj   Ready    control-plane   12m     v1.25.0-alpha.1.67+a3dc67c38b3609   10.0.0.4      <none>        Ubuntu 18.04.6 LTS   5.4.0-1085-azure   containerd://1.6.2
capz-hly3cw-md-0-lwqgl            Ready    <none>          9m22s   v1.25.0-alpha.1.67+a3dc67c38b3609   10.1.0.5      <none>        Ubuntu 18.04.6 LTS   5.4.0-1085-azure   containerd://1.6.2
capz-hly3cw-md-0-w2swq            Ready    <none>          9m32s   v1.25.0-alpha.1.67+a3dc67c38b3609   10.1.0.4      <none>        Ubuntu 18.04.6 LTS   5.4.0-1085-azure   containerd://1.6.2
NAMESPACE     NAME                                                      READY   STATUS    RESTARTS      AGE     IP                NODE                              NOMINATED NODE   READINESS GATES
kube-system   calico-kube-controllers-57cb778775-4dhcs                  1/1     Running   0             12m     192.168.151.130   capz-hly3cw-control-plane-hwctj   <none>           <none>
... skipping 116 lines ...