PR | yingchunliu-zte: unmountVolumes check shouldPodRuntimeBeRemoved |
Result | FAILURE |
Tests | 1 failed / 5 succeeded |
Started | |
Elapsed | 48m11s |
Revision | c1f77b354e7291936d44bc81a71a05c46b5cc08c |
Refs | 110682 |
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=AzureFile\sCSI\sDriver\sEnd\-to\-End\sTests\sDynamic\sProvisioning\sshould\screate\sa\svolume\son\sdemand\sand\sresize\sit\s\[kubernetes\.io\/azure\-file\]\s\[file\.csi\.azure\.com\]\s\[Windows\]$'
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/dynamic_provisioning_test.go:356
Jun 22 01:15:36.914: newPVCSize(11Gi) is not equal to newPVSize(10GiGi)
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/dynamic_provisioning_test.go:380
from junit_01.xml
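The assertion text itself points at a likely formatting bug in the test, independent of the resize failure recorded in the events below: a size string of `10GiGi` carries the binary-SI suffix twice, which is what you get when code appends `"Gi"` to a quantity that is already rendered with its unit (e.g. the output of `resource.Quantity.String()`). A minimal self-contained sketch of that failure mode, using plain strings and hypothetical helper names rather than the actual test code:

```go
package main

import (
	"fmt"
	"strings"
)

// appendGiSuffix reproduces the suspected bug behind "10GiGi": blindly
// appending "Gi" to a quantity string that may already carry its unit.
// Hypothetical helper for illustration; not the driver's test code.
func appendGiSuffix(qty string) string {
	return qty + "Gi" // bug when qty is already "10Gi"
}

// normalizeGi appends the suffix only when it is missing, so a bare
// number ("11") and a formatted quantity ("10Gi") both come out correct.
func normalizeGi(qty string) string {
	if strings.HasSuffix(qty, "Gi") {
		return qty
	}
	return qty + "Gi"
}

func main() {
	fmt.Println(appendGiSuffix("10Gi")) // prints "10GiGi", matching the failure message
	fmt.Println(normalizeGi("10Gi"))    // prints "10Gi"
	fmt.Println(normalizeGi("11"))      // prints "11Gi"
}
```

Note the events later in the log show the resize genuinely failed (`VolumeResizeFailed` with `Unimplemented` for vhd volumes on `ControllerExpandVolume`), so the PV staying at 10Gi is expected; the doubled suffix only affects how the mismatch is reported.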
STEP: Creating a kubernetes client
Jun 22 01:14:58.188: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/kubeconfig
STEP: Building a namespace api object, basename azurefile
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
Jun 22 01:14:59.390: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/kubeconfig
STEP: setting up the StorageClass
STEP: creating a StorageClass
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: waiting for PVC to be in phase "Bound"
Jun 22 01:14:59.609: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-qwrsk] to have phase Bound
Jun 22 01:14:59.716: INFO: PersistentVolumeClaim pvc-qwrsk found but phase is Pending instead of Bound.
Jun 22 01:15:01.825: INFO: PersistentVolumeClaim pvc-qwrsk found and phase=Bound (2.21539577s)
STEP: checking the PVC
STEP: validating provisioned PV
STEP: checking the PV
STEP: deploying the pod
STEP: checking that the pods command exits with no error
Jun 22 01:15:02.148: INFO: Waiting up to 15m0s for pod "azurefile-volume-tester-5qrq8" in namespace "azurefile-2546" to be "Succeeded or Failed"
Jun 22 01:15:02.254: INFO: Pod "azurefile-volume-tester-5qrq8": Phase="Pending", Reason="", readiness=false. Elapsed: 106.584161ms
Jun 22 01:15:04.368: INFO: Pod "azurefile-volume-tester-5qrq8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.220428163s
Jun 22 01:15:06.482: INFO: Pod "azurefile-volume-tester-5qrq8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.334088937s
STEP: Saw pod success
Jun 22 01:15:06.482: INFO: Pod "azurefile-volume-tester-5qrq8" satisfied condition "Succeeded or Failed"
STEP: resizing the pvc
STEP: sleep 30s waiting for resize complete
STEP: checking the resizing result
STEP: checking the resizing PV result
Jun 22 01:15:36.914: FAIL: newPVCSize(11Gi) is not equal to newPVSize(10GiGi)

Full Stack Trace
sigs.k8s.io/azurefile-csi-driver/test/e2e.glob..func1.10()
	/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/dynamic_provisioning_test.go:380 +0x25c
sigs.k8s.io/azurefile-csi-driver/test/e2e.TestE2E(0x0?)
	/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/suite_test.go:239 +0x11f
testing.tRunner(0xc000003380, 0x21350e8)
	/usr/local/go/src/testing/testing.go:1439 +0x102
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1486 +0x35f

Jun 22 01:15:36.915: INFO: deleting Pod "azurefile-2546"/"azurefile-volume-tester-5qrq8"
Jun 22 01:15:37.037: INFO: Pod azurefile-volume-tester-5qrq8 has the following logs: hello world
STEP: Deleting pod azurefile-volume-tester-5qrq8 in namespace azurefile-2546
Jun 22 01:15:37.157: INFO: deleting PVC "azurefile-2546"/"pvc-qwrsk"
Jun 22 01:15:37.157: INFO: Deleting PersistentVolumeClaim "pvc-qwrsk"
STEP: waiting for claim's PV "pvc-1eb73b8e-d0a4-4c2f-a3a5-084b0bb2bf90" to be deleted
Jun 22 01:15:37.265: INFO: Waiting up to 10m0s for PersistentVolume pvc-1eb73b8e-d0a4-4c2f-a3a5-084b0bb2bf90 to get deleted
Jun 22 01:15:37.372: INFO: PersistentVolume pvc-1eb73b8e-d0a4-4c2f-a3a5-084b0bb2bf90 found and phase=Released (106.54042ms)
Jun 22 01:15:42.480: INFO: PersistentVolume pvc-1eb73b8e-d0a4-4c2f-a3a5-084b0bb2bf90 was removed
Jun 22 01:15:42.480: INFO: Waiting up to 5m0s for PersistentVolumeClaim azurefile-2546 to be removed
Jun 22 01:15:42.588: INFO: Claim "azurefile-2546" in namespace "pvc-qwrsk" doesn't exist in the system
Jun 22 01:15:42.588: INFO: deleting StorageClass azurefile-2546-kubernetes.io-azure-file-dynamic-sc-h8dl2
STEP: Collecting events from namespace "azurefile-2546".
STEP: Found 10 events.
Jun 22 01:15:42.822: INFO: At 2022-06-22 01:14:59 +0000 UTC - event for pvc-qwrsk: {file.csi.azure.com_capz-1o072a-md-0-p7hvd_8da51ffa-0a33-40df-b1ea-d985a3097c07 } ProvisioningSucceeded: Successfully provisioned volume pvc-1eb73b8e-d0a4-4c2f-a3a5-084b0bb2bf90
Jun 22 01:15:42.822: INFO: At 2022-06-22 01:14:59 +0000 UTC - event for pvc-qwrsk: {file.csi.azure.com_capz-1o072a-md-0-p7hvd_8da51ffa-0a33-40df-b1ea-d985a3097c07 } Provisioning: External provisioner is provisioning volume for claim "azurefile-2546/pvc-qwrsk"
Jun 22 01:15:42.822: INFO: At 2022-06-22 01:14:59 +0000 UTC - event for pvc-qwrsk: {persistentvolume-controller } ExternalProvisioning: waiting for a volume to be created, either by external provisioner "file.csi.azure.com" or manually created by system administrator
Jun 22 01:15:42.822: INFO: At 2022-06-22 01:15:02 +0000 UTC - event for azurefile-volume-tester-5qrq8: {default-scheduler } Scheduled: Successfully assigned azurefile-2546/azurefile-volume-tester-5qrq8 to capz-1o072a-md-0-p7hvd
Jun 22 01:15:42.822: INFO: At 2022-06-22 01:15:03 +0000 UTC - event for azurefile-volume-tester-5qrq8: {kubelet capz-1o072a-md-0-p7hvd} Pulled: Container image "k8s.gcr.io/e2e-test-images/busybox:1.29-2" already present on machine
Jun 22 01:15:42.822: INFO: At 2022-06-22 01:15:03 +0000 UTC - event for azurefile-volume-tester-5qrq8: {kubelet capz-1o072a-md-0-p7hvd} Created: Created container volume-tester
Jun 22 01:15:42.822: INFO: At 2022-06-22 01:15:03 +0000 UTC - event for azurefile-volume-tester-5qrq8: {kubelet capz-1o072a-md-0-p7hvd} Started: Started container volume-tester
Jun 22 01:15:42.822: INFO: At 2022-06-22 01:15:06 +0000 UTC - event for pvc-qwrsk: {volume_expand } ExternalExpanding: CSI migration enabled for kubernetes.io/azure-file; waiting for external resizer to expand the pvc
Jun 22 01:15:42.822: INFO:
At 2022-06-22 01:15:06 +0000 UTC - event for pvc-qwrsk: {external-resizer file.csi.azure.com } Resizing: External resizer is resizing volume pvc-1eb73b8e-d0a4-4c2f-a3a5-084b0bb2bf90 Jun 22 01:15:42.822: INFO: At 2022-06-22 01:15:06 +0000 UTC - event for pvc-qwrsk: {external-resizer file.csi.azure.com } VolumeResizeFailed: resize volume "pvc-1eb73b8e-d0a4-4c2f-a3a5-084b0bb2bf90" by resizer "file.csi.azure.com" failed: rpc error: code = Unimplemented desc = vhd disk volume(capz-1o072a#f30bd20a360e14b518c7eed#pvc-1eb73b8e-d0a4-4c2f-a3a5-084b0bb2bf90#pvc-1eb73b8e-d0a4-4c2f-a3a5-084b0bb2bf90#azurefile-2546) is not supported on ControllerExpandVolume Jun 22 01:15:42.929: INFO: POD NODE PHASE GRACE CONDITIONS Jun 22 01:15:42.929: INFO: Jun 22 01:15:43.081: INFO: Logging node info for node capz-1o072a-control-plane-gqmjd Jun 22 01:15:43.204: INFO: Node Info: &Node{ObjectMeta:{capz-1o072a-control-plane-gqmjd 02c29def-cea8-408c-b3b2-5562786ea331 2406 0 2022-06-22 01:03:20 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_D2s_v3 beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:westeurope failure-domain.beta.kubernetes.io/zone:westeurope-1 kubernetes.io/arch:amd64 kubernetes.io/hostname:capz-1o072a-control-plane-gqmjd kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node.kubernetes.io/exclude-from-external-load-balancers: node.kubernetes.io/instance-type:Standard_D2s_v3 topology.kubernetes.io/region:westeurope topology.kubernetes.io/zone:westeurope-1] map[cluster.x-k8s.io/cluster-name:capz-1o072a cluster.x-k8s.io/cluster-namespace:default cluster.x-k8s.io/machine:capz-1o072a-control-plane-l5k97 cluster.x-k8s.io/owner-kind:KubeadmControlPlane cluster.x-k8s.io/owner-name:capz-1o072a-control-plane csi.volume.kubernetes.io/nodeid:{"file.csi.azure.com":"capz-1o072a-control-plane-gqmjd"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///var/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 
projectcalico.org/IPv4Address:10.0.0.4/16 projectcalico.org/IPv4VXLANTunnelAddr:192.168.193.192 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-06-22 01:03:20 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2022-06-22 01:03:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {manager Update v1 2022-06-22 01:05:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cluster.x-k8s.io/cluster-name":{},"f:cluster.x-k8s.io/cluster-namespace":{},"f:cluster.x-k8s.io/machine":{},"f:cluster.x-k8s.io/owner-kind":{},"f:cluster.x-k8s.io/owner-name":{}}}} } {kube-controller-manager Update v1 2022-06-22 01:06:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:taints":{}}} } {Go-http-client Update v1 2022-06-22 01:06:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:projectcalico.org/IPv4Address":{},"f:projectcalico.org/IPv4VXLANTunnelAddr":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-06-22 01:13:38 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:azure:///subscriptions/===REDACTED===/resourceGroups/capz-1o072a/providers/Microsoft.Compute/virtualMachines/capz-1o072a-control-plane-gqmjd,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/control-plane,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{4 0} {<nil>} 4 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{133018140672 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{8344723456 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{4 0} {<nil>} 4 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{119716326407 0} {<nil>} 119716326407 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{8239865856 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-22 01:06:11 +0000 UTC,LastTransitionTime:2022-06-22 01:06:11 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-22 01:13:38 +0000 UTC,LastTransitionTime:2022-06-22 01:03:16 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory 
available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-22 01:13:38 +0000 UTC,LastTransitionTime:2022-06-22 01:03:16 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-22 01:13:38 +0000 UTC,LastTransitionTime:2022-06-22 01:03:16 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-22 01:13:38 +0000 UTC,LastTransitionTime:2022-06-22 01:06:08 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:capz-1o072a-control-plane-gqmjd,},NodeAddress{Type:InternalIP,Address:10.0.0.4,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:0d467cb4f1d94392be1ccd419ac88507,SystemUUID:11ec0eca-1b27-e845-8384-1a13a57c4872,BootID:2f826ed0-b380-44ff-b812-1978c13ca84f,KernelVersion:5.4.0-1085-azure,OSImage:Ubuntu 18.04.6 LTS,ContainerRuntimeVersion:containerd://1.6.2,KubeletVersion:v1.25.0-alpha.1.67+9e320e27222c5b,KubeProxyVersion:v1.25.0-alpha.1.67+9e320e27222c5b,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/calico/cni@sha256:914823d144204288f881e49b93b6852febfe669074cd4e2a782860981615f521 docker.io/calico/cni:v3.23.0],SizeBytes:110494683,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:6f72b851544986cb0921b53ea655ec04c36131248f16d4ad110cb3ca0c369dc1 registry.k8s.io/etcd:3.5.4-0],SizeBytes:102157811,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:13f53ed1d91e2e11aac476ee9a0269fdda6cc4874eba903efd40daf50c55eee5 k8s.gcr.io/etcd:3.5.3-0],SizeBytes:102143581,},ContainerImage{Names:[mcr.microsoft.com/k8s/csi/azurefile-csi@sha256:d0e18e2b41040f7a0a68324bed4b1cdc94e0d5009ed816f9c00f7ad45f640c67 
mcr.microsoft.com/k8s/csi/azurefile-csi:latest],SizeBytes:75743702,},ContainerImage{Names:[docker.io/calico/node@sha256:4763820ecb4d8e82483a2ffabfec7fcded9603318692df210a778d223a4d7474 docker.io/calico/node:v3.23.0],SizeBytes:71573794,},ContainerImage{Names:[docker.io/calico/kube-controllers@sha256:78bc199299f966b0694dc4044501aee2d7ebd6862b2b0a00bca3ee8d3813c82f docker.io/calico/kube-controllers:v3.23.0],SizeBytes:56343954,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:7e75c20c0fb0a334fa364546ece4c11a61a7595ce2e27de265cacb4e7ccc7f9f k8s.gcr.io/kube-proxy:v1.24.2],SizeBytes:39515830,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-proxy:v1.25.0-alpha.1.63_4720f0725c3dad k8s.gcr.io/kube-proxy:v1.25.0-alpha.1.63_4720f0725c3dad],SizeBytes:39501122,},ContainerImage{Names:[capzci.azurecr.io/kube-proxy@sha256:e09b43e2783b4187389c42b7a16ede578a3473b61ea4e289e7c331ef04894e4a capzci.azurecr.io/kube-proxy:v1.25.0-alpha.1.67_9e320e27222c5b],SizeBytes:39499245,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:433696d8a90870c405fc2d42020aff0966fb3f1c59bdd1f5077f41335b327c9a k8s.gcr.io/kube-apiserver:v1.24.2],SizeBytes:33795763,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-apiserver:v1.25.0-alpha.1.63_4720f0725c3dad k8s.gcr.io/kube-apiserver:v1.25.0-alpha.1.63_4720f0725c3dad],SizeBytes:33779242,},ContainerImage{Names:[capzci.azurecr.io/kube-apiserver@sha256:a9901512756a5e342dbf1c2430257ca5c55782644430d8430537167358688928 capzci.azurecr.io/kube-apiserver:v1.25.0-alpha.1.67_9e320e27222c5b],SizeBytes:33777548,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:d255427f14c9236088c22cd94eb434d7c6a05f615636eac0b9681566cd142753 k8s.gcr.io/kube-controller-manager:v1.24.2],SizeBytes:31035052,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-controller-manager:v1.25.0-alpha.1.63_4720f0725c3dad 
k8s.gcr.io/kube-controller-manager:v1.25.0-alpha.1.63_4720f0725c3dad],SizeBytes:31010102,},ContainerImage{Names:[capzci.azurecr.io/kube-controller-manager@sha256:1c570ad57702bb95cbbd40f0c6fd6cb85e274de8b1b5ed50e216d273681f1ad4 capzci.azurecr.io/kube-controller-manager:v1.25.0-alpha.1.67_9e320e27222c5b],SizeBytes:31009186,},ContainerImage{Names:[k8s.gcr.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 k8s.gcr.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-scheduler:v1.25.0-alpha.1.63_4720f0725c3dad k8s.gcr.io/kube-scheduler:v1.25.0-alpha.1.63_4720f0725c3dad],SizeBytes:15533653,},ContainerImage{Names:[capzci.azurecr.io/kube-scheduler@sha256:d63464391d58c58aa2d55cbce0ced8155129d6d1be497f0e424d0913fdcb40eb capzci.azurecr.io/kube-scheduler:v1.25.0-alpha.1.67_9e320e27222c5b],SizeBytes:15531817,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:b5bc69ac1e173a58a2b3af11ba65057ff2b71de25d0f93ab947e16714a896a1f k8s.gcr.io/kube-scheduler:v1.24.2],SizeBytes:15488980,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e k8s.gcr.io/coredns/coredns:v1.8.6],SizeBytes:13585107,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes-csi/csi-node-driver-registrar@sha256:2fbd1e1a0538a06f2061afd45975df70c942654aa7f86e920720169ee439c2d6 mcr.microsoft.com/oss/kubernetes-csi/csi-node-driver-registrar:v2.5.1],SizeBytes:9578961,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes-csi/livenessprobe@sha256:31547791294872570393470991481c2477a311031d3a03e0ae54eb164347dc34 
mcr.microsoft.com/oss/kubernetes-csi/livenessprobe:v2.7.0],SizeBytes:8689744,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c registry.k8s.io/pause@sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c k8s.gcr.io/pause:3.7 registry.k8s.io/pause:3.7],SizeBytes:311278,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jun 22 01:15:43.205: INFO: Logging kubelet events for node capz-1o072a-control-plane-gqmjd Jun 22 01:15:43.315: INFO: Logging pods the kubelet thinks is on node capz-1o072a-control-plane-gqmjd Jun 22 01:15:43.580: INFO: kube-apiserver-capz-1o072a-control-plane-gqmjd started at 2022-06-22 01:05:06 +0000 UTC (0+1 container statuses recorded) Jun 22 01:15:43.580: INFO: Container kube-apiserver ready: true, restart count 0 Jun 22 01:15:43.580: INFO: calico-node-kdjgd started at 2022-06-22 01:05:42 +0000 UTC (2+1 container statuses recorded) Jun 22 01:15:43.580: INFO: Init container upgrade-ipam ready: true, restart count 0 Jun 22 01:15:43.580: INFO: Init container install-cni ready: true, restart count 0 Jun 22 01:15:43.580: INFO: Container calico-node ready: true, restart count 0 Jun 22 01:15:43.580: INFO: coredns-8c797478b-cxvsd started at 2022-06-22 01:06:08 +0000 UTC (0+1 container statuses recorded) Jun 22 01:15:43.580: INFO: Container coredns ready: true, restart count 0 Jun 22 01:15:43.580: INFO: calico-kube-controllers-57cb778775-fxsp7 started at 2022-06-22 01:06:08 +0000 UTC (0+1 container statuses recorded) Jun 22 01:15:43.580: INFO: Container calico-kube-controllers ready: true, restart count 0 Jun 22 01:15:43.580: INFO: etcd-capz-1o072a-control-plane-gqmjd started at 2022-06-22 01:03:27 +0000 UTC (0+1 container statuses recorded) Jun 22 01:15:43.580: INFO: Container etcd ready: true, restart count 
0 Jun 22 01:15:43.580: INFO: kube-scheduler-capz-1o072a-control-plane-gqmjd started at 2022-06-22 01:05:06 +0000 UTC (0+1 container statuses recorded) Jun 22 01:15:43.580: INFO: Container kube-scheduler ready: true, restart count 0 Jun 22 01:15:43.580: INFO: kube-proxy-56mx5 started at 2022-06-22 01:05:42 +0000 UTC (0+1 container statuses recorded) Jun 22 01:15:43.580: INFO: Container kube-proxy ready: true, restart count 0 Jun 22 01:15:43.580: INFO: coredns-8c797478b-84fqs started at 2022-06-22 01:06:08 +0000 UTC (0+1 container statuses recorded) Jun 22 01:15:43.580: INFO: Container coredns ready: true, restart count 0 Jun 22 01:15:43.580: INFO: metrics-server-74557696d7-q4qz8 started at 2022-06-22 01:06:08 +0000 UTC (0+1 container statuses recorded) Jun 22 01:15:43.580: INFO: Container metrics-server ready: true, restart count 0 Jun 22 01:15:43.580: INFO: csi-azurefile-node-548cs started at 2022-06-22 01:07:41 +0000 UTC (0+3 container statuses recorded) Jun 22 01:15:43.580: INFO: Container azurefile ready: true, restart count 0 Jun 22 01:15:43.580: INFO: Container liveness-probe ready: true, restart count 0 Jun 22 01:15:43.580: INFO: Container node-driver-registrar ready: true, restart count 0 Jun 22 01:15:43.580: INFO: kube-controller-manager-capz-1o072a-control-plane-gqmjd started at 2022-06-22 01:03:37 +0000 UTC (0+1 container statuses recorded) Jun 22 01:15:43.580: INFO: Container kube-controller-manager ready: true, restart count 0 Jun 22 01:15:43.987: INFO: Latency metrics for node capz-1o072a-control-plane-gqmjd Jun 22 01:15:43.987: INFO: Logging node info for node capz-1o072a-md-0-nqg6m Jun 22 01:15:44.098: INFO: Node Info: &Node{ObjectMeta:{capz-1o072a-md-0-nqg6m efe389b9-faf8-4c7b-983c-4292e812a014 1924 0 2022-06-22 01:07:00 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_D2s_v3 beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:westeurope failure-domain.beta.kubernetes.io/zone:0 
kubernetes.io/arch:amd64 kubernetes.io/hostname:capz-1o072a-md-0-nqg6m kubernetes.io/os:linux node.kubernetes.io/instance-type:Standard_D2s_v3 topology.kubernetes.io/region:westeurope topology.kubernetes.io/zone:0] map[cluster.x-k8s.io/cluster-name:capz-1o072a cluster.x-k8s.io/cluster-namespace:default cluster.x-k8s.io/machine:capz-1o072a-md-0-5b4584d5bd-j5758 cluster.x-k8s.io/owner-kind:MachineSet cluster.x-k8s.io/owner-name:capz-1o072a-md-0-5b4584d5bd csi.volume.kubernetes.io/nodeid:{"file.csi.azure.com":"capz-1o072a-md-0-nqg6m"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///var/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.1.0.4/16 projectcalico.org/IPv4VXLANTunnelAddr:192.168.240.64 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-06-22 01:07:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet Update v1 2022-06-22 01:07:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {manager Update v1 2022-06-22 01:07:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cluster.x-k8s.io/cluster-name":{},"f:cluster.x-k8s.io/cluster-namespace":{},"f:cluster.x-k8s.io/machine":{},"f:cluster.x-k8s.io/owner-kind":{},"f:cluster.x-k8s.io/owner-name":{}}}} } {kube-controller-manager Update v1 2022-06-22 01:07:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}} } {Go-http-client Update 
v1 2022-06-22 01:07:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:projectcalico.org/IPv4Address":{},"f:projectcalico.org/IPv4VXLANTunnelAddr":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-06-22 01:11:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:azure:///subscriptions/===REDACTED===/resourceGroups/capz-1o072a/providers/Microsoft.Compute/virtualMachines/capz-1o072a-md-0-nqg6m,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{4 0} {<nil>} 4 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{133018140672 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{8344723456 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{4 0} {<nil>} 4 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{119716326407 0} {<nil>} 119716326407 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{8239865856 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-22 01:07:43 +0000 UTC,LastTransitionTime:2022-06-22 01:07:43 +0000 
UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-22 01:11:25 +0000 UTC,LastTransitionTime:2022-06-22 01:07:00 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-22 01:11:25 +0000 UTC,LastTransitionTime:2022-06-22 01:07:00 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-22 01:11:25 +0000 UTC,LastTransitionTime:2022-06-22 01:07:00 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-22 01:11:25 +0000 UTC,LastTransitionTime:2022-06-22 01:07:31 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:capz-1o072a-md-0-nqg6m,},NodeAddress{Type:InternalIP,Address:10.1.0.4,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:576b6e761b8440559bdfae57213e2aef,SystemUUID:607b237b-9b69-854f-8b24-3124d0000a3f,BootID:95e4c3b5-0f5d-440e-a441-cbcbc3fad279,KernelVersion:5.4.0-1085-azure,OSImage:Ubuntu 18.04.6 LTS,ContainerRuntimeVersion:containerd://1.6.2,KubeletVersion:v1.25.0-alpha.1.67+9e320e27222c5b,KubeProxyVersion:v1.25.0-alpha.1.67+9e320e27222c5b,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/calico/cni@sha256:914823d144204288f881e49b93b6852febfe669074cd4e2a782860981615f521 docker.io/calico/cni:v3.23.0],SizeBytes:110494683,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:13f53ed1d91e2e11aac476ee9a0269fdda6cc4874eba903efd40daf50c55eee5 
k8s.gcr.io/etcd:3.5.3-0],SizeBytes:102143581,},ContainerImage{Names:[mcr.microsoft.com/k8s/csi/azurefile-csi@sha256:d0e18e2b41040f7a0a68324bed4b1cdc94e0d5009ed816f9c00f7ad45f640c67 mcr.microsoft.com/k8s/csi/azurefile-csi:latest],SizeBytes:75743702,},ContainerImage{Names:[docker.io/calico/node@sha256:4763820ecb4d8e82483a2ffabfec7fcded9603318692df210a778d223a4d7474 docker.io/calico/node:v3.23.0],SizeBytes:71573794,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes-csi/csi-provisioner@sha256:429c8476e3acac27b06ff8054fd983c8c5cfd928b84346239517f29efda41874 mcr.microsoft.com/oss/kubernetes-csi/csi-provisioner:v3.1.1],SizeBytes:58142722,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes-csi/csi-resizer@sha256:544e74bd67c649fd49500e195ff4a4ee675cfd26768574262dc6fa0250373d59 mcr.microsoft.com/oss/kubernetes-csi/csi-resizer:v1.5.0],SizeBytes:57519578,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes-csi/csi-attacher@sha256:7e5af2ed16e053822e58f6576423c0bb77e59050c3698986f319d257b4551023 mcr.microsoft.com/oss/kubernetes-csi/csi-attacher:v3.5.0],SizeBytes:56936934,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:7e75c20c0fb0a334fa364546ece4c11a61a7595ce2e27de265cacb4e7ccc7f9f k8s.gcr.io/kube-proxy:v1.24.2],SizeBytes:39515830,},ContainerImage{Names:[capzci.azurecr.io/kube-proxy@sha256:e09b43e2783b4187389c42b7a16ede578a3473b61ea4e289e7c331ef04894e4a capzci.azurecr.io/kube-proxy:v1.25.0-alpha.1.67_9e320e27222c5b],SizeBytes:39499245,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:433696d8a90870c405fc2d42020aff0966fb3f1c59bdd1f5077f41335b327c9a k8s.gcr.io/kube-apiserver:v1.24.2],SizeBytes:33795763,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:d255427f14c9236088c22cd94eb434d7c6a05f615636eac0b9681566cd142753 k8s.gcr.io/kube-controller-manager:v1.24.2],SizeBytes:31035052,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes-csi/csi-snapshotter@sha256:a889e925e15f9423f7842f1b769f64cbcf6a20b6956122836fc835cf22d9073f 
mcr.microsoft.com/oss/kubernetes-csi/csi-snapshotter:v5.0.1],SizeBytes:22192414,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes-csi/snapshot-controller@sha256:8c3fc3c2667004ad6bbdf723bb64c5da66a5cb8b11d4ee59b67179b686223b13 mcr.microsoft.com/oss/kubernetes-csi/snapshot-controller:v5.0.1],SizeBytes:21074719,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:b5bc69ac1e173a58a2b3af11ba65057ff2b71de25d0f93ab947e16714a896a1f k8s.gcr.io/kube-scheduler:v1.24.2],SizeBytes:15488980,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e k8s.gcr.io/coredns/coredns:v1.8.6],SizeBytes:13585107,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes-csi/csi-node-driver-registrar@sha256:2fbd1e1a0538a06f2061afd45975df70c942654aa7f86e920720169ee439c2d6 mcr.microsoft.com/oss/kubernetes-csi/csi-node-driver-registrar:v2.5.1],SizeBytes:9578961,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes-csi/livenessprobe@sha256:31547791294872570393470991481c2477a311031d3a03e0ae54eb164347dc34 mcr.microsoft.com/oss/kubernetes-csi/livenessprobe:v2.7.0],SizeBytes:8689744,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c k8s.gcr.io/pause:3.7],SizeBytes:311278,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jun 22 01:15:44.098: INFO: Logging kubelet events for node capz-1o072a-md-0-nqg6m Jun 22 01:15:44.208: INFO: Logging pods the kubelet thinks is on node capz-1o072a-md-0-nqg6m Jun 22 01:15:44.370: INFO: kube-proxy-vtcx4 started at 2022-06-22 01:07:07 +0000 UTC (0+1 container statuses 
recorded) Jun 22 01:15:44.370: INFO: Container kube-proxy ready: true, restart count 0 Jun 22 01:15:44.370: INFO: calico-node-m9bgc started at 2022-06-22 01:07:07 +0000 UTC (2+1 container statuses recorded) Jun 22 01:15:44.370: INFO: Init container upgrade-ipam ready: true, restart count 0 Jun 22 01:15:44.370: INFO: Init container install-cni ready: true, restart count 0 Jun 22 01:15:44.370: INFO: Container calico-node ready: true, restart count 0 Jun 22 01:15:44.370: INFO: csi-azurefile-controller-8565959cf4-29dnb started at 2022-06-22 01:07:39 +0000 UTC (0+6 container statuses recorded) Jun 22 01:15:44.370: INFO: Container azurefile ready: true, restart count 0 Jun 22 01:15:44.370: INFO: Container csi-attacher ready: true, restart count 0 Jun 22 01:15:44.370: INFO: Container csi-provisioner ready: true, restart count 0 Jun 22 01:15:44.370: INFO: Container csi-resizer ready: true, restart count 0 Jun 22 01:15:44.370: INFO: Container csi-snapshotter ready: true, restart count 0 Jun 22 01:15:44.370: INFO: Container liveness-probe ready: true, restart count 0 Jun 22 01:15:44.370: INFO: csi-azurefile-node-w5l6l started at 2022-06-22 01:07:41 +0000 UTC (0+3 container statuses recorded) Jun 22 01:15:44.370: INFO: Container azurefile ready: true, restart count 0 Jun 22 01:15:44.370: INFO: Container liveness-probe ready: true, restart count 0 Jun 22 01:15:44.370: INFO: Container node-driver-registrar ready: true, restart count 0 Jun 22 01:15:44.370: INFO: csi-snapshot-controller-789545b454-ntrcx started at 2022-06-22 01:07:49 +0000 UTC (0+1 container statuses recorded) Jun 22 01:15:44.370: INFO: Container csi-snapshot-controller ready: true, restart count 0 Jun 22 01:15:44.787: INFO: Latency metrics for node capz-1o072a-md-0-nqg6m Jun 22 01:15:44.788: INFO: Logging node info for node capz-1o072a-md-0-p7hvd Jun 22 01:15:44.898: INFO: Node Info: &Node{ObjectMeta:{capz-1o072a-md-0-p7hvd fb212061-a6e9-4532-b822-35ba090c8f64 1913 0 2022-06-22 01:06:56 +0000 UTC <nil> <nil> 
map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_D2s_v3 beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:westeurope failure-domain.beta.kubernetes.io/zone:0 kubernetes.io/arch:amd64 kubernetes.io/hostname:capz-1o072a-md-0-p7hvd kubernetes.io/os:linux node.kubernetes.io/instance-type:Standard_D2s_v3 topology.kubernetes.io/region:westeurope topology.kubernetes.io/zone:0] map[cluster.x-k8s.io/cluster-name:capz-1o072a cluster.x-k8s.io/cluster-namespace:default cluster.x-k8s.io/machine:capz-1o072a-md-0-5b4584d5bd-zrmxw cluster.x-k8s.io/owner-kind:MachineSet cluster.x-k8s.io/owner-name:capz-1o072a-md-0-5b4584d5bd csi.volume.kubernetes.io/nodeid:{"file.csi.azure.com":"capz-1o072a-md-0-p7hvd"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///var/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.1.0.5/16 projectcalico.org/IPv4VXLANTunnelAddr:192.168.221.192 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-06-22 01:06:56 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet Update v1 2022-06-22 01:06:56 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {manager Update v1 2022-06-22 01:07:21 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:cluster.x-k8s.io/cluster-name":{},"f:cluster.x-k8s.io/cluster-namespace":{},"f:cluster.x-k8s.io/machine":{},"f:cluster.x-k8s.io/owner-kind":{},"f:cluster.x-k8s.io/owner-name":{}}}} } {kube-controller-manager Update v1 2022-06-22 01:07:27 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}} } {Go-http-client Update v1 2022-06-22 01:07:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:projectcalico.org/IPv4Address":{},"f:projectcalico.org/IPv4VXLANTunnelAddr":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-06-22 01:11:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:azure:///subscriptions/===REDACTED===/resourceGroups/capz-1o072a/providers/Microsoft.Compute/virtualMachines/capz-1o072a-md-0-p7hvd,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{4 0} {<nil>} 4 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{133018140672 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{8344723456 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{4 0} {<nil>} 4 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{119716326407 0} {<nil>} 
119716326407 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{8239865856 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-22 01:07:37 +0000 UTC,LastTransitionTime:2022-06-22 01:07:37 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-22 01:11:21 +0000 UTC,LastTransitionTime:2022-06-22 01:06:56 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-22 01:11:21 +0000 UTC,LastTransitionTime:2022-06-22 01:06:56 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-22 01:11:21 +0000 UTC,LastTransitionTime:2022-06-22 01:06:56 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-22 01:11:21 +0000 UTC,LastTransitionTime:2022-06-22 01:07:27 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:capz-1o072a-md-0-p7hvd,},NodeAddress{Type:InternalIP,Address:10.1.0.5,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:b9c92fb4f0974331930ac40227ef2735,SystemUUID:64cabd41-ec7d-4b48-868e-53ad7bf3a55b,BootID:b3db6291-3f23-40da-84f9-77888df609de,KernelVersion:5.4.0-1085-azure,OSImage:Ubuntu 18.04.6 LTS,ContainerRuntimeVersion:containerd://1.6.2,KubeletVersion:v1.25.0-alpha.1.67+9e320e27222c5b,KubeProxyVersion:v1.25.0-alpha.1.67+9e320e27222c5b,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/calico/cni@sha256:914823d144204288f881e49b93b6852febfe669074cd4e2a782860981615f521 docker.io/calico/cni:v3.23.0],SizeBytes:110494683,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:13f53ed1d91e2e11aac476ee9a0269fdda6cc4874eba903efd40daf50c55eee5 k8s.gcr.io/etcd:3.5.3-0],SizeBytes:102143581,},ContainerImage{Names:[mcr.microsoft.com/k8s/csi/azurefile-csi@sha256:d0e18e2b41040f7a0a68324bed4b1cdc94e0d5009ed816f9c00f7ad45f640c67 mcr.microsoft.com/k8s/csi/azurefile-csi:latest],SizeBytes:75743702,},ContainerImage{Names:[docker.io/calico/node@sha256:4763820ecb4d8e82483a2ffabfec7fcded9603318692df210a778d223a4d7474 docker.io/calico/node:v3.23.0],SizeBytes:71573794,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes-csi/csi-provisioner@sha256:429c8476e3acac27b06ff8054fd983c8c5cfd928b84346239517f29efda41874 mcr.microsoft.com/oss/kubernetes-csi/csi-provisioner:v3.1.1],SizeBytes:58142722,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes-csi/csi-resizer@sha256:544e74bd67c649fd49500e195ff4a4ee675cfd26768574262dc6fa0250373d59 mcr.microsoft.com/oss/kubernetes-csi/csi-resizer:v1.5.0],SizeBytes:57519578,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes-csi/csi-attacher@sha256:7e5af2ed16e053822e58f6576423c0bb77e59050c3698986f319d257b4551023 
mcr.microsoft.com/oss/kubernetes-csi/csi-attacher:v3.5.0],SizeBytes:56936934,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:7e75c20c0fb0a334fa364546ece4c11a61a7595ce2e27de265cacb4e7ccc7f9f k8s.gcr.io/kube-proxy:v1.24.2],SizeBytes:39515830,},ContainerImage{Names:[capzci.azurecr.io/kube-proxy@sha256:e09b43e2783b4187389c42b7a16ede578a3473b61ea4e289e7c331ef04894e4a capzci.azurecr.io/kube-proxy:v1.25.0-alpha.1.67_9e320e27222c5b],SizeBytes:39499245,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:433696d8a90870c405fc2d42020aff0966fb3f1c59bdd1f5077f41335b327c9a k8s.gcr.io/kube-apiserver:v1.24.2],SizeBytes:33795763,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:d255427f14c9236088c22cd94eb434d7c6a05f615636eac0b9681566cd142753 k8s.gcr.io/kube-controller-manager:v1.24.2],SizeBytes:31035052,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes-csi/csi-snapshotter@sha256:a889e925e15f9423f7842f1b769f64cbcf6a20b6956122836fc835cf22d9073f mcr.microsoft.com/oss/kubernetes-csi/csi-snapshotter:v5.0.1],SizeBytes:22192414,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes-csi/snapshot-controller@sha256:8c3fc3c2667004ad6bbdf723bb64c5da66a5cb8b11d4ee59b67179b686223b13 mcr.microsoft.com/oss/kubernetes-csi/snapshot-controller:v5.0.1],SizeBytes:21074719,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:b5bc69ac1e173a58a2b3af11ba65057ff2b71de25d0f93ab947e16714a896a1f k8s.gcr.io/kube-scheduler:v1.24.2],SizeBytes:15488980,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e k8s.gcr.io/coredns/coredns:v1.8.6],SizeBytes:13585107,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes-csi/csi-node-driver-registrar@sha256:2fbd1e1a0538a06f2061afd45975df70c942654aa7f86e920720169ee439c2d6 
mcr.microsoft.com/oss/kubernetes-csi/csi-node-driver-registrar:v2.5.1],SizeBytes:9578961,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes-csi/livenessprobe@sha256:31547791294872570393470991481c2477a311031d3a03e0ae54eb164347dc34 mcr.microsoft.com/oss/kubernetes-csi/livenessprobe:v2.7.0],SizeBytes:8689744,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c k8s.gcr.io/pause:3.7],SizeBytes:311278,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jun 22 01:15:44.899: INFO: Logging kubelet events for node capz-1o072a-md-0-p7hvd Jun 22 01:15:45.010: INFO: Logging pods the kubelet thinks is on node capz-1o072a-md-0-p7hvd Jun 22 01:15:45.156: INFO: calico-node-zp9b2 started at 2022-06-22 01:07:03 +0000 UTC (2+1 container statuses recorded) Jun 22 01:15:45.156: INFO: Init container upgrade-ipam ready: true, restart count 0 Jun 22 01:15:45.156: INFO: Init container install-cni ready: true, restart count 0 Jun 22 01:15:45.156: INFO: Container calico-node ready: true, restart count 0 Jun 22 01:15:45.156: INFO: csi-azurefile-controller-8565959cf4-rr8hh started at 2022-06-22 01:07:39 +0000 UTC (0+6 container statuses recorded) Jun 22 01:15:45.156: INFO: Container azurefile ready: true, restart count 0 Jun 22 01:15:45.156: INFO: Container csi-attacher ready: true, restart count 0 Jun 22 01:15:45.156: INFO: Container csi-provisioner ready: true, restart count 0 Jun 22 01:15:45.156: INFO: Container csi-resizer ready: true, restart count 0 Jun 22 01:15:45.156: INFO: Container csi-snapshotter ready: true, restart count 0 Jun 22 01:15:45.156: INFO: Container 
liveness-probe ready: true, restart count 0 Jun 22 01:15:45.156: INFO: csi-azurefile-node-8qgl2 started at 2022-06-22 01:07:41 +0000 UTC (0+3 container statuses recorded) Jun 22 01:15:45.156: INFO: Container azurefile ready: true, restart count 0 Jun 22 01:15:45.156: INFO: Container liveness-probe ready: true, restart count 0 Jun 22 01:15:45.156: INFO: Container node-driver-registrar ready: true, restart count 0 Jun 22 01:15:45.156: INFO: csi-snapshot-controller-789545b454-mgcjf started at 2022-06-22 01:07:49 +0000 UTC (0+1 container statuses recorded) Jun 22 01:15:45.156: INFO: Container csi-snapshot-controller ready: true, restart count 0 Jun 22 01:15:45.156: INFO: kube-proxy-tc9lj started at 2022-06-22 01:07:03 +0000 UTC (0+1 container statuses recorded) Jun 22 01:15:45.156: INFO: Container kube-proxy ready: true, restart count 0 Jun 22 01:15:45.561: INFO: Latency metrics for node capz-1o072a-md-0-p7hvd Jun 22 01:15:45.561: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "azurefile-2546" for this suite.
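The failing assertion, `newPVCSize(11Gi) is not equal to newPVSize(10GiGi)`, points at two separate things: the external resizer rejected the vhd disk volume with `Unimplemented` on `ControllerExpandVolume` (so the PV stayed at 10Gi), and the reported PV size carries a doubled "Gi" suffix, which typically happens when a unit suffix is appended to a value that already includes one. A minimal Go sketch of how such a doubled suffix can arise — the function names below are hypothetical illustrations, not the actual code in `dynamic_provisioning_test.go`:

```go
package main

import (
	"fmt"
	"strings"
)

// appendSuffixNaive mimics the suspected bug: the PV capacity reported
// by the API already carries its "Gi" unit, and another "Gi" is
// appended unconditionally, producing strings like "10GiGi".
func appendSuffixNaive(size string) string {
	return size + "Gi"
}

// appendSuffixSafe only adds the unit when it is not already present.
func appendSuffixSafe(size string) string {
	if strings.HasSuffix(size, "Gi") {
		return size
	}
	return size + "Gi"
}

func main() {
	reported := "10Gi" // capacity as returned by the API, unit included
	fmt.Println(appendSuffixNaive(reported)) // prints 10GiGi, matching the failure message
	fmt.Println(appendSuffixSafe(reported))  // prints 10Gi
}
```

In practice, comparing sizes via `resource.Quantity` values instead of formatted strings avoids this class of mismatch entirely.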
AzureFile CSI Driver End-to-End Tests Dynamic Provisioning should create a deployment object, write and read to it, delete the pod and write and read to it again [kubernetes.io/azure-file] [file.csi.azure.com] [Windows]
AzureFile CSI Driver End-to-End Tests Dynamic Provisioning should create a volume on demand and mount it as readOnly in a pod [kubernetes.io/azure-file] [file.csi.azure.com] [Windows]
AzureFile CSI Driver End-to-End Tests Dynamic Provisioning should create a volume on demand with mount options [kubernetes.io/azure-file] [file.csi.azure.com] [Windows]
AzureFile CSI Driver End-to-End Tests Dynamic Provisioning should create multiple PV objects, bind to PVCs and attach all to different pods on the same node [kubernetes.io/azure-file] [file.csi.azure.com] [Windows]
AzureFile CSI Driver End-to-End Tests Dynamic Provisioning should delete PV with reclaimPolicy "Delete" [kubernetes.io/azure-file] [file.csi.azure.com] [Windows]
AzureFile CSI Driver End-to-End Tests Dynamic Provisioning [env] should retain PV with reclaimPolicy "Retain" [file.csi.azure.com] [disk]
AzureFile CSI Driver End-to-End Tests Dynamic Provisioning should create a NFS volume on demand on a storage account with private endpoint [file.csi.azure.com] [nfs]
AzureFile CSI Driver End-to-End Tests Dynamic Provisioning should create a NFS volume on demand with mount options [file.csi.azure.com] [nfs]
AzureFile CSI Driver End-to-End Tests Dynamic Provisioning should create a deployment object, write and read to it, delete the pod and write and read to it again [file.csi.azure.com] [disk]
AzureFile CSI Driver End-to-End Tests Dynamic Provisioning should create a pod with multiple NFS volumes [file.csi.azure.com]
AzureFile CSI Driver End-to-End Tests Dynamic Provisioning should create a pod with multiple volumes [kubernetes.io/azure-file] [file.csi.azure.com] [Windows]
AzureFile CSI Driver End-to-End Tests Dynamic Provisioning should create a pod with volume mount subpath [file.csi.azure.com] [Windows]
AzureFile CSI Driver End-to-End Tests Dynamic Provisioning should create a pod, write and read to it, take a volume snapshot, and validate whether it is ready to use [file.csi.azure.com]
AzureFile CSI Driver End-to-End Tests Dynamic Provisioning should create a statefulset object, write and read to it, delete the pod and write and read to it again [file.csi.azure.com]
AzureFile CSI Driver End-to-End Tests Dynamic Provisioning should create a storage account with tags [file.csi.azure.com] [Windows]
AzureFile CSI Driver End-to-End Tests Dynamic Provisioning should create a vhd disk volume on demand [kubernetes.io/azure-file] [file.csi.azure.com][disk]
AzureFile CSI Driver End-to-End Tests Dynamic Provisioning should create a vhd disk volume on demand and mount it as readOnly in a pod [file.csi.azure.com][disk]
AzureFile CSI Driver End-to-End Tests Dynamic Provisioning should create a volume after driver restart [kubernetes.io/azure-file] [file.csi.azure.com]
AzureFile CSI Driver End-to-End Tests Dynamic Provisioning should create a volume on demand with mount options (Bring Your Own Key) [file.csi.azure.com] [Windows]
AzureFile CSI Driver End-to-End Tests Dynamic Provisioning should create a volume on demand with useDataPlaneAPI [file.csi.azure.com] [Windows]
AzureFile CSI Driver End-to-End Tests Dynamic Provisioning should create an CSI inline volume [file.csi.azure.com]
AzureFile CSI Driver End-to-End Tests Dynamic Provisioning should create an inline volume by in-tree driver [kubernetes.io/azure-file]
AzureFile CSI Driver End-to-End Tests Dynamic Provisioning should create multiple PV objects, bind to PVCs and attach all to different pods on the same node [file.csi.azure.com][disk]
AzureFile CSI Driver End-to-End Tests Dynamic Provisioning should delete PV with reclaimPolicy "Delete" [file.csi.azure.com] [disk]
AzureFile CSI Driver End-to-End Tests Dynamic Provisioning should mount on-prem smb server [file.csi.azure.com]
AzureFile CSI Driver End-to-End Tests Dynamic Provisioning should receive FailedMount event with invalid mount options [file.csi.azure.com] [disk]
AzureFile CSI Driver End-to-End Tests Dynamic Provisioning should retain PV with reclaimPolicy "Retain" [file.csi.azure.com] [Windows]
AzureFile CSI Driver End-to-End Tests Pre-Provisioned should use a pre-provisioned volume and mount it as readOnly in a pod [file.csi.azure.com] [Windows]
AzureFile CSI Driver End-to-End Tests Pre-Provisioned should use a pre-provisioned volume and mount it by multiple pods [file.csi.azure.com] [Windows]
AzureFile CSI Driver End-to-End Tests Pre-Provisioned should use a pre-provisioned volume and retain PV with reclaimPolicy "Retain" [file.csi.azure.com] [Windows]
AzureFile CSI Driver End-to-End Tests Pre-Provisioned should use existing credentials in k8s cluster [file.csi.azure.com] [Windows]
AzureFile CSI Driver End-to-End Tests Pre-Provisioned should use provided credentials [file.csi.azure.com] [Windows]
... skipping 81 lines ... /home/prow/go/src/k8s.io/kubernetes /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 154 100 154 0 0 5703 0 --:--:-- --:--:-- --:--:-- 5703 100 33 100 33 0 0 383 0 --:--:-- --:--:-- --:--:-- 383 Error response from daemon: manifest for capzci.azurecr.io/kube-apiserver:v1.25.0-alpha.1.67_9e320e27222c5b not found: manifest unknown: manifest tagged by "v1.25.0-alpha.1.67_9e320e27222c5b" is not found Building Kubernetes make: Entering directory '/home/prow/go/src/k8s.io/kubernetes' +++ [0622 00:31:59] Verifying Prerequisites.... +++ [0622 00:31:59] Building Docker image kube-build:build-8729218f14-5-v1.25.0-go1.18.3-bullseye.0 +++ [0622 00:34:42] Creating data container kube-build-data-8729218f14-5-v1.25.0-go1.18.3-bullseye.0 +++ [0622 00:34:56] Syncing sources to container ... skipping 744 lines ... certificate.cert-manager.io "selfsigned-cert" deleted # Create secret for AzureClusterIdentity ./hack/create-identity-secret.sh make[2]: Entering directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure' make[2]: Nothing to be done for 'kubectl'. make[2]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure' Error from server (NotFound): secrets "cluster-identity-secret" not found secret/cluster-identity-secret created secret/cluster-identity-secret labeled # Deploy CAPI curl --retry 3 -sSL https://github.com/kubernetes-sigs/cluster-api/releases/download/v1.1.4/cluster-api-components.yaml | /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/envsubst-v2.0.0-20210730161058-179042472c46 | kubectl apply -f - namespace/capi-system created customresourcedefinition.apiextensions.k8s.io/clusterclasses.cluster.x-k8s.io created ... skipping 211 lines ... 
Pre-Provisioned should use a pre-provisioned volume and mount it as readOnly in a pod [file.csi.azure.com] [Windows] /home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/pre_provisioning_test.go:77 STEP: Creating a kubernetes client Jun 22 01:10:13.564: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/kubeconfig STEP: Building a namespace api object, basename azurefile Jun 22 01:10:14.283: INFO: Error listing PodSecurityPolicies; assuming PodSecurityPolicy is disabled: the server could not find the requested resource STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 2022/06/22 01:10:14 Check driver pods if restarts ... check the driver pods if restarts ... ====================================================================================== 2022/06/22 01:10:15 Check successfully ... skipping 180 lines ... Jun 22 01:10:47.375: INFO: PersistentVolumeClaim pvc-nc55l found but phase is Pending instead of Bound. Jun 22 01:10:49.482: INFO: PersistentVolumeClaim pvc-nc55l found and phase=Bound (25.404451443s) STEP: checking the PVC STEP: validating provisioned PV STEP: checking the PV STEP: deploying the pod STEP: checking that the pods command exits with no error Jun 22 01:10:49.807: INFO: Waiting up to 15m0s for pod "azurefile-volume-tester-q5bsz" in namespace "azurefile-5194" to be "Succeeded or Failed" Jun 22 01:10:49.914: INFO: Pod "azurefile-volume-tester-q5bsz": Phase="Pending", Reason="", readiness=false. Elapsed: 106.572616ms Jun 22 01:10:52.028: INFO: Pod "azurefile-volume-tester-q5bsz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.22084123s Jun 22 01:10:54.154: INFO: Pod "azurefile-volume-tester-q5bsz": Phase="Pending", Reason="", readiness=false.
Elapsed: 4.346671273s Jun 22 01:10:56.268: INFO: Pod "azurefile-volume-tester-q5bsz": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.460579421s STEP: Saw pod success Jun 22 01:10:56.268: INFO: Pod "azurefile-volume-tester-q5bsz" satisfied condition "Succeeded or Failed" Jun 22 01:10:56.268: INFO: deleting Pod "azurefile-5194"/"azurefile-volume-tester-q5bsz" Jun 22 01:10:56.396: INFO: Pod azurefile-volume-tester-q5bsz has the following logs: hello world STEP: Deleting pod azurefile-volume-tester-q5bsz in namespace azurefile-5194 Jun 22 01:10:56.529: INFO: deleting PVC "azurefile-5194"/"pvc-nc55l" Jun 22 01:10:56.529: INFO: Deleting PersistentVolumeClaim "pvc-nc55l" ... skipping 156 lines ... Jun 22 01:12:52.736: INFO: PersistentVolumeClaim pvc-562vf found but phase is Pending instead of Bound. Jun 22 01:12:54.844: INFO: PersistentVolumeClaim pvc-562vf found and phase=Bound (23.299617469s) STEP: checking the PVC STEP: validating provisioned PV STEP: checking the PV STEP: deploying the pod STEP: checking that the pods command exits with an error Jun 22 01:12:55.175: INFO: Waiting up to 15m0s for pod "azurefile-volume-tester-85z79" in namespace "azurefile-156" to be "Error status code" Jun 22 01:12:55.282: INFO: Pod "azurefile-volume-tester-85z79": Phase="Pending", Reason="", readiness=false. Elapsed: 107.009379ms Jun 22 01:12:57.396: INFO: Pod "azurefile-volume-tester-85z79": Phase="Running", Reason="", readiness=true. Elapsed: 2.22085095s Jun 22 01:12:59.509: INFO: Pod "azurefile-volume-tester-85z79": Phase="Running", Reason="", readiness=false. Elapsed: 4.333834187s Jun 22 01:13:01.623: INFO: Pod "azurefile-volume-tester-85z79": Phase="Failed", Reason="", readiness=false.
Elapsed: 6.447743238s STEP: Saw pod failure Jun 22 01:13:01.623: INFO: Pod "azurefile-volume-tester-85z79" satisfied condition "Error status code" STEP: checking that pod logs contain expected message Jun 22 01:13:01.733: INFO: deleting Pod "azurefile-156"/"azurefile-volume-tester-85z79" Jun 22 01:13:01.843: INFO: Pod azurefile-volume-tester-85z79 has the following logs: touch: /mnt/test-1/data: Read-only file system STEP: Deleting pod azurefile-volume-tester-85z79 in namespace azurefile-156 Jun 22 01:13:01.965: INFO: deleting PVC "azurefile-156"/"pvc-562vf" ... skipping 179 lines ... Jun 22 01:14:59.716: INFO: PersistentVolumeClaim pvc-qwrsk found but phase is Pending instead of Bound. Jun 22 01:15:01.825: INFO: PersistentVolumeClaim pvc-qwrsk found and phase=Bound (2.21539577s) STEP: checking the PVC STEP: validating provisioned PV STEP: checking the PV STEP: deploying the pod STEP: checking that the pods command exits with no error Jun 22 01:15:02.148: INFO: Waiting up to 15m0s for pod "azurefile-volume-tester-5qrq8" in namespace "azurefile-2546" to be "Succeeded or Failed" Jun 22 01:15:02.254: INFO: Pod "azurefile-volume-tester-5qrq8": Phase="Pending", Reason="", readiness=false. Elapsed: 106.584161ms Jun 22 01:15:04.368: INFO: Pod "azurefile-volume-tester-5qrq8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.220428163s Jun 22 01:15:06.482: INFO: Pod "azurefile-volume-tester-5qrq8": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 4.334088937s STEP: Saw pod success Jun 22 01:15:06.482: INFO: Pod "azurefile-volume-tester-5qrq8" satisfied condition "Succeeded or Failed" STEP: resizing the pvc STEP: sleep 30s waiting for resize complete STEP: checking the resizing result STEP: checking the resizing PV result Jun 22 01:15:36.914: FAIL: newPVCSize(11Gi) is not equal to newPVSize(10GiGi) Full Stack Trace sigs.k8s.io/azurefile-csi-driver/test/e2e.glob..func1.10() /home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/dynamic_provisioning_test.go:380 +0x25c sigs.k8s.io/azurefile-csi-driver/test/e2e.TestE2E(0x0?) /home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/suite_test.go:239 +0x11f ... skipping 22 lines ... Jun 22 01:15:42.822: INFO: At 2022-06-22 01:15:02 +0000 UTC - event for azurefile-volume-tester-5qrq8: {default-scheduler } Scheduled: Successfully assigned azurefile-2546/azurefile-volume-tester-5qrq8 to capz-1o072a-md-0-p7hvd Jun 22 01:15:42.822: INFO: At 2022-06-22 01:15:03 +0000 UTC - event for azurefile-volume-tester-5qrq8: {kubelet capz-1o072a-md-0-p7hvd} Pulled: Container image "k8s.gcr.io/e2e-test-images/busybox:1.29-2" already present on machine Jun 22 01:15:42.822: INFO: At 2022-06-22 01:15:03 +0000 UTC - event for azurefile-volume-tester-5qrq8: {kubelet capz-1o072a-md-0-p7hvd} Created: Created container volume-tester Jun 22 01:15:42.822: INFO: At 2022-06-22 01:15:03 +0000 UTC - event for azurefile-volume-tester-5qrq8: {kubelet capz-1o072a-md-0-p7hvd} Started: Started container volume-tester Jun 22 01:15:42.822: INFO: At 2022-06-22 01:15:06 +0000 UTC - event for pvc-qwrsk: {volume_expand } ExternalExpanding: CSI migration enabled for kubernetes.io/azure-file; waiting for external resizer to expand the pvc Jun 22 01:15:42.822: INFO: At 2022-06-22 01:15:06 +0000 UTC - event for pvc-qwrsk: {external-resizer file.csi.azure.com } Resizing: External resizer is resizing volume pvc-1eb73b8e-d0a4-4c2f-a3a5-084b0bb2bf90 Jun 22
01:15:42.822: INFO: At 2022-06-22 01:15:06 +0000 UTC - event for pvc-qwrsk: {external-resizer file.csi.azure.com } VolumeResizeFailed: resize volume "pvc-1eb73b8e-d0a4-4c2f-a3a5-084b0bb2bf90" by resizer "file.csi.azure.com" failed: rpc error: code = Unimplemented desc = vhd disk volume(capz-1o072a#f30bd20a360e14b518c7eed#pvc-1eb73b8e-d0a4-4c2f-a3a5-084b0bb2bf90#pvc-1eb73b8e-d0a4-4c2f-a3a5-084b0bb2bf90#azurefile-2546) is not supported on ControllerExpandVolume Jun 22 01:15:42.929: INFO: POD NODE PHASE GRACE CONDITIONS Jun 22 01:15:42.929: INFO: Jun 22 01:15:43.081: INFO: Logging node info for node capz-1o072a-control-plane-gqmjd Jun 22 01:15:43.204: INFO: Node Info: &Node{ObjectMeta:{capz-1o072a-control-plane-gqmjd 02c29def-cea8-408c-b3b2-5562786ea331 2406 0 2022-06-22 01:03:20 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_D2s_v3 beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:westeurope failure-domain.beta.kubernetes.io/zone:westeurope-1 kubernetes.io/arch:amd64 kubernetes.io/hostname:capz-1o072a-control-plane-gqmjd kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node.kubernetes.io/exclude-from-external-load-balancers: node.kubernetes.io/instance-type:Standard_D2s_v3 topology.kubernetes.io/region:westeurope topology.kubernetes.io/zone:westeurope-1] map[cluster.x-k8s.io/cluster-name:capz-1o072a cluster.x-k8s.io/cluster-namespace:default cluster.x-k8s.io/machine:capz-1o072a-control-plane-l5k97 cluster.x-k8s.io/owner-kind:KubeadmControlPlane cluster.x-k8s.io/owner-name:capz-1o072a-control-plane csi.volume.kubernetes.io/nodeid:{"file.csi.azure.com":"capz-1o072a-control-plane-gqmjd"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///var/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.0.0.4/16 projectcalico.org/IPv4VXLANTunnelAddr:192.168.193.192 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-06-22 
01:03:20 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2022-06-22 01:03:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {manager Update v1 2022-06-22 01:05:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cluster.x-k8s.io/cluster-name":{},"f:cluster.x-k8s.io/cluster-namespace":{},"f:cluster.x-k8s.io/machine":{},"f:cluster.x-k8s.io/owner-kind":{},"f:cluster.x-k8s.io/owner-name":{}}}} } {kube-controller-manager Update v1 2022-06-22 01:06:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:taints":{}}} } {Go-http-client Update v1 2022-06-22 01:06:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:projectcalico.org/IPv4Address":{},"f:projectcalico.org/IPv4VXLANTunnelAddr":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-06-22 01:13:38 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:azure:///subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-1o072a/providers/Microsoft.Compute/virtualMachines/capz-1o072a-control-plane-gqmjd,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/control-plane,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{4 0} {<nil>} 4 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{133018140672 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{8344723456 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{4 0} {<nil>} 4 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{119716326407 0} {<nil>} 119716326407 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{8239865856 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-22 01:06:11 +0000 UTC,LastTransitionTime:2022-06-22 01:06:11 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-22 01:13:38 +0000 UTC,LastTransitionTime:2022-06-22 01:03:16 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has 
sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-22 01:13:38 +0000 UTC,LastTransitionTime:2022-06-22 01:03:16 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-22 01:13:38 +0000 UTC,LastTransitionTime:2022-06-22 01:03:16 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-22 01:13:38 +0000 UTC,LastTransitionTime:2022-06-22 01:06:08 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:capz-1o072a-control-plane-gqmjd,},NodeAddress{Type:InternalIP,Address:10.0.0.4,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:0d467cb4f1d94392be1ccd419ac88507,SystemUUID:11ec0eca-1b27-e845-8384-1a13a57c4872,BootID:2f826ed0-b380-44ff-b812-1978c13ca84f,KernelVersion:5.4.0-1085-azure,OSImage:Ubuntu 18.04.6 LTS,ContainerRuntimeVersion:containerd://1.6.2,KubeletVersion:v1.25.0-alpha.1.67+9e320e27222c5b,KubeProxyVersion:v1.25.0-alpha.1.67+9e320e27222c5b,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/calico/cni@sha256:914823d144204288f881e49b93b6852febfe669074cd4e2a782860981615f521 docker.io/calico/cni:v3.23.0],SizeBytes:110494683,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:6f72b851544986cb0921b53ea655ec04c36131248f16d4ad110cb3ca0c369dc1 registry.k8s.io/etcd:3.5.4-0],SizeBytes:102157811,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:13f53ed1d91e2e11aac476ee9a0269fdda6cc4874eba903efd40daf50c55eee5 k8s.gcr.io/etcd:3.5.3-0],SizeBytes:102143581,},ContainerImage{Names:[mcr.microsoft.com/k8s/csi/azurefile-csi@sha256:d0e18e2b41040f7a0a68324bed4b1cdc94e0d5009ed816f9c00f7ad45f640c67 
mcr.microsoft.com/k8s/csi/azurefile-csi:latest],SizeBytes:75743702,},ContainerImage{Names:[docker.io/calico/node@sha256:4763820ecb4d8e82483a2ffabfec7fcded9603318692df210a778d223a4d7474 docker.io/calico/node:v3.23.0],SizeBytes:71573794,},ContainerImage{Names:[docker.io/calico/kube-controllers@sha256:78bc199299f966b0694dc4044501aee2d7ebd6862b2b0a00bca3ee8d3813c82f docker.io/calico/kube-controllers:v3.23.0],SizeBytes:56343954,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:7e75c20c0fb0a334fa364546ece4c11a61a7595ce2e27de265cacb4e7ccc7f9f k8s.gcr.io/kube-proxy:v1.24.2],SizeBytes:39515830,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-proxy:v1.25.0-alpha.1.63_4720f0725c3dad k8s.gcr.io/kube-proxy:v1.25.0-alpha.1.63_4720f0725c3dad],SizeBytes:39501122,},ContainerImage{Names:[capzci.azurecr.io/kube-proxy@sha256:e09b43e2783b4187389c42b7a16ede578a3473b61ea4e289e7c331ef04894e4a capzci.azurecr.io/kube-proxy:v1.25.0-alpha.1.67_9e320e27222c5b],SizeBytes:39499245,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:433696d8a90870c405fc2d42020aff0966fb3f1c59bdd1f5077f41335b327c9a k8s.gcr.io/kube-apiserver:v1.24.2],SizeBytes:33795763,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-apiserver:v1.25.0-alpha.1.63_4720f0725c3dad k8s.gcr.io/kube-apiserver:v1.25.0-alpha.1.63_4720f0725c3dad],SizeBytes:33779242,},ContainerImage{Names:[capzci.azurecr.io/kube-apiserver@sha256:a9901512756a5e342dbf1c2430257ca5c55782644430d8430537167358688928 capzci.azurecr.io/kube-apiserver:v1.25.0-alpha.1.67_9e320e27222c5b],SizeBytes:33777548,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:d255427f14c9236088c22cd94eb434d7c6a05f615636eac0b9681566cd142753 k8s.gcr.io/kube-controller-manager:v1.24.2],SizeBytes:31035052,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-controller-manager:v1.25.0-alpha.1.63_4720f0725c3dad 
k8s.gcr.io/kube-controller-manager:v1.25.0-alpha.1.63_4720f0725c3dad],SizeBytes:31010102,},ContainerImage{Names:[capzci.azurecr.io/kube-controller-manager@sha256:1c570ad57702bb95cbbd40f0c6fd6cb85e274de8b1b5ed50e216d273681f1ad4 capzci.azurecr.io/kube-controller-manager:v1.25.0-alpha.1.67_9e320e27222c5b],SizeBytes:31009186,},ContainerImage{Names:[k8s.gcr.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 k8s.gcr.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-scheduler:v1.25.0-alpha.1.63_4720f0725c3dad k8s.gcr.io/kube-scheduler:v1.25.0-alpha.1.63_4720f0725c3dad],SizeBytes:15533653,},ContainerImage{Names:[capzci.azurecr.io/kube-scheduler@sha256:d63464391d58c58aa2d55cbce0ced8155129d6d1be497f0e424d0913fdcb40eb capzci.azurecr.io/kube-scheduler:v1.25.0-alpha.1.67_9e320e27222c5b],SizeBytes:15531817,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:b5bc69ac1e173a58a2b3af11ba65057ff2b71de25d0f93ab947e16714a896a1f k8s.gcr.io/kube-scheduler:v1.24.2],SizeBytes:15488980,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e k8s.gcr.io/coredns/coredns:v1.8.6],SizeBytes:13585107,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes-csi/csi-node-driver-registrar@sha256:2fbd1e1a0538a06f2061afd45975df70c942654aa7f86e920720169ee439c2d6 mcr.microsoft.com/oss/kubernetes-csi/csi-node-driver-registrar:v2.5.1],SizeBytes:9578961,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes-csi/livenessprobe@sha256:31547791294872570393470991481c2477a311031d3a03e0ae54eb164347dc34 
mcr.microsoft.com/oss/kubernetes-csi/livenessprobe:v2.7.0],SizeBytes:8689744,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c registry.k8s.io/pause@sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c k8s.gcr.io/pause:3.7 registry.k8s.io/pause:3.7],SizeBytes:311278,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jun 22 01:15:43.205: INFO: ... skipping 805 lines ... I0622 01:03:45.061162 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca-bundle::/etc/kubernetes/pki/ca.crt,request-header::/etc/kubernetes/pki/front-proxy-ca.crt" certDetail="\"kubernetes\" [] issuer=\"<self>\" (2022-06-22 00:55:38 +0000 UTC to 2032-06-19 01:00:38 +0000 UTC (now=2022-06-22 01:03:45.061140733 +0000 UTC))" I0622 01:03:45.061355 1 tlsconfig.go:200] "Loaded serving cert" certName="Generated self signed cert" certDetail="\"localhost@1655859824\" [serving] validServingFor=[127.0.0.1,127.0.0.1,localhost] issuer=\"localhost-ca@1655859824\" (2022-06-22 00:03:44 +0000 UTC to 2023-06-22 00:03:44 +0000 UTC (now=2022-06-22 01:03:45.061311435 +0000 UTC))" I0622 01:03:45.061521 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1655859825\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1655859824\" (2022-06-22 00:03:44 +0000 UTC to 2023-06-22 00:03:44 +0000 UTC (now=2022-06-22 01:03:45.061493837 +0000 UTC))" I0622 01:03:45.061551 1 secure_serving.go:210] Serving securely on 127.0.0.1:10257 I0622 01:03:45.061713 1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/etc/kubernetes/pki/front-proxy-ca.crt" I0622 01:03:45.061809 1 leaderelection.go:248] attempting to acquire leader lease 
kube-system/kube-controller-manager... E0622 01:03:45.062324 1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: Get "https://10.0.0.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=5s": dial tcp 10.0.0.4:6443: connect: connection refused I0622 01:03:45.062348 1 leaderelection.go:253] failed to acquire lease kube-system/kube-controller-manager I0622 01:03:45.062380 1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" I0622 01:03:45.061839 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" E0622 01:03:47.296756 1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: Get "https://10.0.0.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=5s": dial tcp 10.0.0.4:6443: connect: connection refused I0622 01:03:47.296803 1 leaderelection.go:253] failed to acquire lease kube-system/kube-controller-manager E0622 01:03:50.274415 1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: Get "https://10.0.0.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=5s": dial tcp 10.0.0.4:6443: connect: connection refused I0622 01:03:50.274457 1 leaderelection.go:253] failed to acquire lease kube-system/kube-controller-manager E0622 01:03:52.599710 1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: Get "https://10.0.0.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=5s": dial tcp 10.0.0.4:6443: connect: connection refused I0622 01:03:52.599755 1 leaderelection.go:253] failed to acquire lease kube-system/kube-controller-manager E0622 01:03:55.634610 1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: Get 
"https://10.0.0.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=5s": dial tcp 10.0.0.4:6443: connect: connection refused I0622 01:03:55.634647 1 leaderelection.go:253] failed to acquire lease kube-system/kube-controller-manager I0622 01:03:58.025915 1 httplog.go:131] "HTTP" verb="GET" URI="/healthz" latency="133.802µs" userAgent="kube-probe/1.25+" audit-ID="" srcIP="127.0.0.1:39440" resp=200 E0622 01:03:58.027205 1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: Get "https://10.0.0.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=5s": dial tcp 10.0.0.4:6443: connect: connection refused I0622 01:03:58.027241 1 leaderelection.go:253] failed to acquire lease kube-system/kube-controller-manager E0622 01:04:01.694941 1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: Get "https://10.0.0.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=5s": dial tcp 10.0.0.4:6443: connect: connection refused I0622 01:04:01.695058 1 leaderelection.go:253] failed to acquire lease kube-system/kube-controller-manager E0622 01:04:04.682469 1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: Get "https://10.0.0.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=5s": dial tcp 10.0.0.4:6443: connect: connection refused I0622 01:04:04.682512 1 leaderelection.go:253] failed to acquire lease kube-system/kube-controller-manager I0622 01:04:08.017373 1 httplog.go:131] "HTTP" verb="GET" URI="/healthz" latency="97.301µs" userAgent="kube-probe/1.25+" audit-ID="" srcIP="127.0.0.1:39540" resp=200 E0622 01:04:08.065599 1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: Get 
"https://10.0.0.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=5s": dial tcp 10.0.0.4:6443: connect: connection refused I0622 01:04:08.065642 1 leaderelection.go:253] failed to acquire lease kube-system/kube-controller-manager E0622 01:04:11.809197 1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: Get "https://10.0.0.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=5s": dial tcp 10.0.0.4:6443: connect: connection refused I0622 01:04:11.809264 1 leaderelection.go:253] failed to acquire lease kube-system/kube-controller-manager E0622 01:04:15.616639 1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: Get "https://10.0.0.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=5s": dial tcp 10.0.0.4:6443: connect: connection refused I0622 01:04:15.616683 1 leaderelection.go:253] failed to acquire lease kube-system/kube-controller-manager I0622 01:04:18.018083 1 httplog.go:131] "HTTP" verb="GET" URI="/healthz" latency="141.101µs" userAgent="kube-probe/1.25+" audit-ID="" srcIP="127.0.0.1:39580" resp=200 E0622 01:04:19.184091 1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: Get "https://10.0.0.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=5s": dial tcp 10.0.0.4:6443: connect: connection refused I0622 01:04:19.184135 1 leaderelection.go:253] failed to acquire lease kube-system/kube-controller-manager E0622 01:04:22.642830 1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: Get "https://10.0.0.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=5s": dial tcp 10.0.0.4:6443: connect: connection refused I0622 01:04:22.642875 1 leaderelection.go:253] failed to acquire lease 
kube-system/kube-controller-manager E0622 01:04:25.845074 1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: Get "https://10.0.0.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=5s": dial tcp 10.0.0.4:6443: connect: connection refused I0622 01:04:25.845107 1 leaderelection.go:253] failed to acquire lease kube-system/kube-controller-manager I0622 01:04:28.018400 1 httplog.go:131] "HTTP" verb="GET" URI="/healthz" latency="112.802µs" userAgent="kube-probe/1.25+" audit-ID="" srcIP="127.0.0.1:39634" resp=200 E0622 01:04:29.788118 1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: Get "https://10.0.0.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=5s": dial tcp 10.0.0.4:6443: connect: connection refused I0622 01:04:29.788161 1 leaderelection.go:253] failed to acquire lease kube-system/kube-controller-manager E0622 01:04:33.685289 1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: Get "https://10.0.0.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=5s": dial tcp 10.0.0.4:6443: connect: connection refused I0622 01:04:33.685359 1 leaderelection.go:253] failed to acquire lease kube-system/kube-controller-manager E0622 01:04:36.930186 1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: Get "https://10.0.0.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=5s": dial tcp 10.0.0.4:6443: connect: connection refused I0622 01:04:36.930229 1 leaderelection.go:253] failed to acquire lease kube-system/kube-controller-manager I0622 01:04:38.016387 1 httplog.go:131] "HTTP" verb="GET" URI="/healthz" latency="92.001µs" userAgent="kube-probe/1.25+" audit-ID="" srcIP="127.0.0.1:39680" resp=200 E0622 01:04:41.257149 1 leaderelection.go:330] 
error retrieving resource lock kube-system/kube-controller-manager: Get "https://10.0.0.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=5s": dial tcp 10.0.0.4:6443: connect: connection refused I0622 01:04:41.257192 1 leaderelection.go:253] failed to acquire lease kube-system/kube-controller-manager E0622 01:04:44.409865 1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: Get "https://10.0.0.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=5s": dial tcp 10.0.0.4:6443: connect: connection refused I0622 01:04:44.409910 1 leaderelection.go:253] failed to acquire lease kube-system/kube-controller-manager E0622 01:04:47.423469 1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: Get "https://10.0.0.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=5s": dial tcp 10.0.0.4:6443: connect: connection refused I0622 01:04:47.423519 1 leaderelection.go:253] failed to acquire lease kube-system/kube-controller-manager I0622 01:04:48.017103 1 httplog.go:131] "HTTP" verb="GET" URI="/healthz" latency="100.901µs" userAgent="kube-probe/1.25+" audit-ID="" srcIP="127.0.0.1:39712" resp=200 E0622 01:04:50.951377 1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: Get "https://10.0.0.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=5s": dial tcp 10.0.0.4:6443: connect: connection refused I0622 01:04:50.951419 1 leaderelection.go:253] failed to acquire lease kube-system/kube-controller-manager E0622 01:04:54.529849 1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: Get "https://10.0.0.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=5s": dial tcp 10.0.0.4:6443: connect: connection refused I0622 
01:04:54.529915 1 leaderelection.go:253] failed to acquire lease kube-system/kube-controller-manager E0622 01:04:57.841601 1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: leases.coordination.k8s.io "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-system" I0622 01:04:57.841893 1 leaderelection.go:253] failed to acquire lease kube-system/kube-controller-manager I0622 01:04:58.017002 1 httplog.go:131] "HTTP" verb="GET" URI="/healthz" latency="86.001µs" userAgent="kube-probe/1.25+" audit-ID="" srcIP="127.0.0.1:39944" resp=200 I0622 01:04:59.920894 1 leaderelection.go:352] lock is held by capz-1o072a-control-plane-gqmjd_4c098a97-c1c1-4f34-a4a6-cb227ad8c7ae and has not yet expired I0622 01:04:59.920920 1 leaderelection.go:253] failed to acquire lease kube-system/kube-controller-manager I0622 01:05:04.014982 1 leaderelection.go:352] lock is held by capz-1o072a-control-plane-gqmjd_4c098a97-c1c1-4f34-a4a6-cb227ad8c7ae and has not yet expired I0622 01:05:04.015009 1 leaderelection.go:253] failed to acquire lease kube-system/kube-controller-manager I0622 01:05:08.017593 1 httplog.go:131] "HTTP" verb="GET" URI="/healthz" latency="264.001µs" userAgent="kube-probe/1.25+" audit-ID="" srcIP="127.0.0.1:39978" resp=200 I0622 01:05:08.068612 1 leaderelection.go:352] lock is held by capz-1o072a-control-plane-gqmjd_4c098a97-c1c1-4f34-a4a6-cb227ad8c7ae and has not yet expired I0622 01:05:08.068642 1 leaderelection.go:253] failed to acquire lease kube-system/kube-controller-manager I0622 01:05:10.316575 1 leaderelection.go:352] lock is held by capz-1o072a-control-plane-gqmjd_4c098a97-c1c1-4f34-a4a6-cb227ad8c7ae and has not yet expired I0622 01:05:10.316601 1 leaderelection.go:253] failed to acquire lease kube-system/kube-controller-manager I0622 01:05:13.241636 1 leaderelection.go:352] lock is held by 
capz-1o072a-control-plane-gqmjd_4c098a97-c1c1-4f34-a4a6-cb227ad8c7ae and has not yet expired I0622 01:05:13.241663 1 leaderelection.go:253] failed to acquire lease kube-system/kube-controller-manager I0622 01:05:15.415056 1 leaderelection.go:258] successfully acquired lease kube-system/kube-controller-manager I0622 01:05:15.415882 1 event.go:294] "Event occurred" object="kube-system/kube-controller-manager" fieldPath="" kind="Lease" apiVersion="coordination.k8s.io/v1" type="Normal" reason="LeaderElection" message="capz-1o072a-control-plane-gqmjd_139b1450-533b-4a9d-a7da-9992be983a30 became leader" I0622 01:05:15.538059 1 request.go:533] Waited for 79.69365ms due to client-side throttling, not priority and fairness, request: GET:https://10.0.0.4:6443/apis/metrics.k8s.io/v1beta1 I0622 01:05:15.587877 1 request.go:533] Waited for 129.440868ms due to client-side throttling, not priority and fairness, request: GET:https://10.0.0.4:6443/apis/apps/v1 I0622 01:05:15.637850 1 request.go:533] Waited for 179.424988ms due to client-side throttling, not priority and fairness, request: GET:https://10.0.0.4:6443/apis/events.k8s.io/v1 I0622 01:05:15.687469 1 request.go:533] Waited for 229.021405ms due to client-side throttling, not priority and fairness, request: GET:https://10.0.0.4:6443/apis/authentication.k8s.io/v1 ... skipping 56 lines ... 
I0622 01:05:16.741470 1 reflector.go:255] Listing and watching *v1.ServiceAccount from vendor/k8s.io/client-go/informers/factory.go:134 I0622 01:05:16.741773 1 reflector.go:219] Starting reflector *v1.Secret (19h55m31.163080276s) from vendor/k8s.io/client-go/informers/factory.go:134 I0622 01:05:16.741785 1 reflector.go:255] Listing and watching *v1.Secret from vendor/k8s.io/client-go/informers/factory.go:134 I0622 01:05:16.741962 1 shared_informer.go:255] Waiting for caches to sync for tokens I0622 01:05:16.742451 1 reflector.go:219] Starting reflector *v1.Node (19h55m31.163080276s) from vendor/k8s.io/client-go/informers/factory.go:134 I0622 01:05:16.742470 1 reflector.go:255] Listing and watching *v1.Node from vendor/k8s.io/client-go/informers/factory.go:134 W0622 01:05:16.761356 1 azure_config.go:53] Failed to get cloud-config from secret: failed to get secret azure-cloud-provider: secrets "azure-cloud-provider" is forbidden: User "system:serviceaccount:kube-system:azure-cloud-provider" cannot get resource "secrets" in API group "" in the namespace "kube-system", skip initializing from secret I0622 01:05:16.761385 1 controllermanager.go:568] Starting "disruption" I0622 01:05:16.765031 1 controllermanager.go:597] Started "disruption" I0622 01:05:16.765056 1 controllermanager.go:568] Starting "tokencleaner" I0622 01:05:16.765356 1 disruption.go:370] Sending events to api server. I0622 01:05:16.765396 1 disruption.go:380] Starting disruption controller I0622 01:05:16.765406 1 shared_informer.go:255] Waiting for caches to sync for disruption ... skipping 104 lines ... 
I0622 01:05:16.837362 1 plugins.go:637] "Loaded volume plugin" pluginName="kubernetes.io/cinder" I0622 01:05:16.837414 1 plugins.go:637] "Loaded volume plugin" pluginName="kubernetes.io/azure-disk" I0622 01:05:16.837447 1 plugins.go:637] "Loaded volume plugin" pluginName="kubernetes.io/azure-file" I0622 01:05:16.837468 1 plugins.go:637] "Loaded volume plugin" pluginName="kubernetes.io/flocker" I0622 01:05:16.837520 1 plugins.go:637] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" I0622 01:05:16.837546 1 plugins.go:637] "Loaded volume plugin" pluginName="kubernetes.io/storageos" I0622 01:05:16.837613 1 csi_plugin.go:262] Cast from VolumeHost to KubeletVolumeHost failed. Skipping CSINode initialization, not running on kubelet I0622 01:05:16.837661 1 plugins.go:637] "Loaded volume plugin" pluginName="kubernetes.io/csi" I0622 01:05:16.837763 1 controllermanager.go:597] Started "persistentvolume-binder" I0622 01:05:16.837782 1 controllermanager.go:568] Starting "endpoint" I0622 01:05:16.837984 1 pv_controller_base.go:311] Starting persistent volume controller I0622 01:05:16.837999 1 shared_informer.go:255] Waiting for caches to sync for persistent volume I0622 01:05:16.839878 1 controllermanager.go:597] Started "endpoint" ... skipping 129 lines ... I0622 01:05:17.495351 1 plugins.go:637] "Loaded volume plugin" pluginName="kubernetes.io/vsphere-volume" I0622 01:05:17.495415 1 plugins.go:637] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" I0622 01:05:17.495454 1 plugins.go:637] "Loaded volume plugin" pluginName="kubernetes.io/rbd" I0622 01:05:17.495478 1 plugins.go:637] "Loaded volume plugin" pluginName="kubernetes.io/storageos" I0622 01:05:17.495511 1 plugins.go:637] "Loaded volume plugin" pluginName="kubernetes.io/fc" I0622 01:05:17.495530 1 plugins.go:637] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" I0622 01:05:17.495554 1 csi_plugin.go:262] Cast from VolumeHost to KubeletVolumeHost failed. 
Skipping CSINode initialization, not running on kubelet I0622 01:05:17.495586 1 plugins.go:637] "Loaded volume plugin" pluginName="kubernetes.io/csi" I0622 01:05:17.495855 1 attach_detach_controller.go:673] processVolumesInUse for node "capz-1o072a-control-plane-gqmjd" W0622 01:05:17.495896 1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="capz-1o072a-control-plane-gqmjd" does not exist I0622 01:05:17.495963 1 controllermanager.go:597] Started "attachdetach" I0622 01:05:17.496015 1 attach_detach_controller.go:328] Starting attach detach controller I0622 01:05:17.496029 1 shared_informer.go:255] Waiting for caches to sync for attach detach I0622 01:05:17.496084 1 controllermanager.go:568] Starting "replicationcontroller" I0622 01:05:17.544132 1 controllermanager.go:597] Started "replicationcontroller" I0622 01:05:17.544159 1 controllermanager.go:568] Starting "garbagecollector" ... skipping 324 lines ... I0622 01:05:17.953633 1 endpoints_controller.go:369] Finished syncing service "kube-system/kube-dns" endpoints. (57.8µs) I0622 01:05:17.953705 1 endpoints_controller.go:369] Finished syncing service "kube-system/metrics-server" endpoints. (33.4µs) I0622 01:05:17.953729 1 endpoints_controller.go:369] Finished syncing service "default/kubernetes" endpoints. (2.8µs) I0622 01:05:17.953970 1 endpointslice_controller.go:319] Finished syncing service "kube-system/kube-dns" endpoint slices. (168.101µs) I0622 01:05:17.954131 1 endpointslice_controller.go:319] Finished syncing service "kube-system/metrics-server" endpoint slices. (83.601µs) I0622 01:05:17.954179 1 endpointslice_controller.go:319] Finished syncing service "default/kubernetes" endpoint slices. 
(2.4µs) W0622 01:05:17.954654 1 garbagecollector.go:755] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request] I0622 01:05:17.955051 1 garbagecollector.go:223] syncing garbage collector with updated resources from discovery (attempt 1): added: [/v1, Resource=configmaps /v1, Resource=endpoints /v1, Resource=events /v1, Resource=limitranges /v1, Resource=namespaces /v1, Resource=nodes /v1, Resource=persistentvolumeclaims /v1, Resource=persistentvolumes /v1, Resource=pods /v1, Resource=podtemplates /v1, Resource=replicationcontrollers /v1, Resource=resourcequotas /v1, Resource=secrets /v1, Resource=serviceaccounts /v1, Resource=services admissionregistration.k8s.io/v1, Resource=mutatingwebhookconfigurations admissionregistration.k8s.io/v1, Resource=validatingwebhookconfigurations apiextensions.k8s.io/v1, Resource=customresourcedefinitions apiregistration.k8s.io/v1, Resource=apiservices apps/v1, Resource=controllerrevisions apps/v1, Resource=daemonsets apps/v1, Resource=deployments apps/v1, Resource=replicasets apps/v1, Resource=statefulsets autoscaling/v2, Resource=horizontalpodautoscalers batch/v1, Resource=cronjobs batch/v1, Resource=jobs certificates.k8s.io/v1, Resource=certificatesigningrequests coordination.k8s.io/v1, Resource=leases crd.projectcalico.org/v1, Resource=bgpconfigurations crd.projectcalico.org/v1, Resource=bgppeers crd.projectcalico.org/v1, Resource=blockaffinities crd.projectcalico.org/v1, Resource=caliconodestatuses crd.projectcalico.org/v1, Resource=clusterinformations crd.projectcalico.org/v1, Resource=felixconfigurations crd.projectcalico.org/v1, Resource=globalnetworkpolicies crd.projectcalico.org/v1, Resource=globalnetworksets crd.projectcalico.org/v1, Resource=hostendpoints crd.projectcalico.org/v1, Resource=ipamblocks crd.projectcalico.org/v1, Resource=ipamconfigs crd.projectcalico.org/v1, Resource=ipamhandles crd.projectcalico.org/v1, Resource=ippools crd.projectcalico.org/v1, 
Resource=ipreservations crd.projectcalico.org/v1, Resource=kubecontrollersconfigurations crd.projectcalico.org/v1, Resource=networkpolicies crd.projectcalico.org/v1, Resource=networksets discovery.k8s.io/v1, Resource=endpointslices events.k8s.io/v1, Resource=events flowcontrol.apiserver.k8s.io/v1beta2, Resource=flowschemas flowcontrol.apiserver.k8s.io/v1beta2, Resource=prioritylevelconfigurations networking.k8s.io/v1, Resource=ingressclasses networking.k8s.io/v1, Resource=ingresses networking.k8s.io/v1, Resource=networkpolicies node.k8s.io/v1, Resource=runtimeclasses policy/v1, Resource=poddisruptionbudgets rbac.authorization.k8s.io/v1, Resource=clusterrolebindings rbac.authorization.k8s.io/v1, Resource=clusterroles rbac.authorization.k8s.io/v1, Resource=rolebindings rbac.authorization.k8s.io/v1, Resource=roles scheduling.k8s.io/v1, Resource=priorityclasses storage.k8s.io/v1, Resource=csidrivers storage.k8s.io/v1, Resource=csinodes storage.k8s.io/v1, Resource=csistoragecapacities storage.k8s.io/v1, Resource=storageclasses storage.k8s.io/v1, Resource=volumeattachments], removed: [] I0622 01:05:17.955073 1 garbagecollector.go:229] reset restmapper I0622 01:05:17.962610 1 shared_informer.go:285] caches populated I0622 01:05:17.962769 1 shared_informer.go:262] Caches are synced for resource quota I0622 01:05:17.962980 1 resource_quota_controller.go:194] Resource quota controller queued all resource quota for full calculation of usage I0622 01:05:17.992716 1 request.go:533] Waited for 193.558501ms due to client-side throttling, not priority and fairness, request: GET:https://10.0.0.4:6443/api/v1/namespaces/kube-system/serviceaccounts/deployment-controller ... skipping 257 lines ... 
I0622 01:05:18.345649 1 deployment_util.go:774] Deployment "calico-kube-controllers" timed out (false) [last progress check: 2022-06-22 01:05:18.320407114 +0000 UTC m=+94.007101160 - now: 2022-06-22 01:05:18.345636735 +0000 UTC m=+94.032330681] I0622 01:05:18.346545 1 disruption.go:438] updatePod called on pod "kube-scheduler-capz-1o072a-control-plane-gqmjd" I0622 01:05:18.346787 1 disruption.go:501] No PodDisruptionBudgets found for pod kube-scheduler-capz-1o072a-control-plane-gqmjd, PodDisruptionBudget controller will avoid syncing. I0622 01:05:18.346965 1 disruption.go:441] No matching pdb for pod "kube-scheduler-capz-1o072a-control-plane-gqmjd" I0622 01:05:18.348215 1 deployment_controller.go:183] "Updating deployment" deployment="kube-system/calico-kube-controllers" I0622 01:05:18.394236 1 deployment_controller.go:585] "Finished syncing deployment" deployment="kube-system/calico-kube-controllers" duration="595.379821ms" I0622 01:05:18.394660 1 deployment_controller.go:497] "Error syncing deployment" deployment="kube-system/calico-kube-controllers" err="Operation cannot be fulfilled on deployments.apps \"calico-kube-controllers\": the object has been modified; please apply your changes to the latest version and try again" I0622 01:05:18.395496 1 graph_builder.go:279] garbage controller monitor not yet synced: crd.projectcalico.org/v1, Resource=felixconfigurations I0622 01:05:18.394993 1 reflector.go:436] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: watch of *v1.PartialObjectMetadata closed with: too old resource version: 561 (564) I0622 01:05:18.396194 1 deployment_controller.go:583] "Started syncing deployment" deployment="kube-system/calico-kube-controllers" startTime="2022-06-22 01:05:18.395867875 +0000 UTC m=+94.082561821" I0622 01:05:18.397378 1 deployment_util.go:774] Deployment "calico-kube-controllers" timed out (false) [last progress check: 2022-06-22 01:05:18 +0000 UTC - now: 2022-06-22 01:05:18.397368482 +0000 UTC 
m=+94.084062228] I0622 01:05:18.416006 1 graph_builder.go:279] garbage controller monitor not yet synced: crd.projectcalico.org/v1, Resource=kubecontrollersconfigurations I0622 01:05:18.425690 1 deployment_controller.go:183] "Updating deployment" deployment="kube-system/calico-kube-controllers" ... skipping 223 lines ... I0622 01:05:47.899625 1 azure_instances.go:240] InstanceShutdownByProviderID gets power status "running" for node "capz-1o072a-control-plane-gqmjd" I0622 01:05:47.899649 1 azure_instances.go:251] InstanceShutdownByProviderID gets provisioning state "Succeeded" for node "capz-1o072a-control-plane-gqmjd" I0622 01:05:47.901369 1 reflector.go:382] vendor/k8s.io/client-go/informers/factory.go:134: forcing resync E0622 01:05:47.926099 1 resource_quota_controller.go:414] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request I0622 01:05:47.926367 1 resource_quota_controller.go:429] no resource updates from discovery, skipping resource quota sync I0622 01:05:48.018079 1 httplog.go:131] "HTTP" verb="GET" URI="/healthz" latency="148.602µs" userAgent="kube-probe/1.25+" audit-ID="" srcIP="127.0.0.1:40262" resp=200 W0622 01:05:48.763667 1 garbagecollector.go:755] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request] I0622 01:05:52.803017 1 node_lifecycle_controller.go:874] Node capz-1o072a-control-plane-gqmjd is NotReady as of 2022-06-22 01:05:52.802990518 +0000 UTC m=+128.489684364. Adding it to the Taint queue. 
I0622 01:05:52.900631 1 azure_vmss.go:370] Can not extract scale set name from providerID (azure:///subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-1o072a/providers/Microsoft.Compute/virtualMachines/capz-1o072a-control-plane-gqmjd), assuming it is managed by availability set: not a vmss instance I0622 01:05:52.900714 1 azure_vmss.go:370] Can not extract scale set name from providerID (azure:///subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-1o072a/providers/Microsoft.Compute/virtualMachines/capz-1o072a-control-plane-gqmjd), assuming it is managed by availability set: not a vmss instance I0622 01:05:52.900746 1 azure_instances.go:240] InstanceShutdownByProviderID gets power status "running" for node "capz-1o072a-control-plane-gqmjd" I0622 01:05:52.900767 1 azure_instances.go:251] InstanceShutdownByProviderID gets provisioning state "Succeeded" for node "capz-1o072a-control-plane-gqmjd" I0622 01:05:55.409708 1 disruption.go:438] updatePod called on pod "calico-node-kdjgd" ... skipping 156 lines ... 
I0622 01:06:12.480057 1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc0a4b8a11c9b4c00, ext:148166633630, loc:(*time.Location)(0x6f121e0)}}
I0622 01:06:12.480112 1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc0a4b8a11c9ddab3, ext:148166800833, loc:(*time.Location)(0x6f121e0)}}
I0622 01:06:12.480127 1 daemon_controller.go:974] Nodes needing daemon pods for daemon set calico-node: [], creating 0
I0622 01:06:12.480220 1 daemon_controller.go:1036] Pods to delete for daemon set calico-node: [], deleting 0
I0622 01:06:12.480247 1 daemon_controller.go:1119] Updating daemon set status
I0622 01:06:12.480286 1 daemon_controller.go:1179] Finished syncing daemon set "kube-system/calico-node" (1.423821ms)
I0622 01:06:12.806731 1 node_lifecycle_controller.go:1044] ReadyCondition for Node capz-1o072a-control-plane-gqmjd transitioned from &NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:2022-06-22 01:05:58 +0000 UTC,LastTransitionTime:2022-06-22 01:03:16 +0000 UTC,Reason:KubeletNotReady,Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized,} to &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-22 01:06:08 +0000 UTC,LastTransitionTime:2022-06-22 01:06:08 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,}
I0622 01:06:12.806855 1 node_lifecycle_controller.go:1052] Node capz-1o072a-control-plane-gqmjd ReadyCondition updated. Updating timestamp.
I0622 01:06:12.806900 1 node_lifecycle_controller.go:898] Node capz-1o072a-control-plane-gqmjd is healthy again, removing all taints
I0622 01:06:12.806925 1 node_lifecycle_controller.go:1196] Controller detected that some Nodes are Ready. Exiting master disruption mode.
I0622 01:06:17.747873 1 reflector.go:382] vendor/k8s.io/client-go/informers/factory.go:134: forcing resync
I0622 01:06:17.781007 1 gc_controller.go:214] GC'ing orphaned
I0622 01:06:17.781043 1 gc_controller.go:277] GC'ing unscheduled pods which are terminating.
I0622 01:06:17.841692 1 pv_controller_base.go:605] resyncing PV controller
I0622 01:06:17.901887 1 reflector.go:382] vendor/k8s.io/client-go/informers/factory.go:134: forcing resync
E0622 01:06:17.946181 1 resource_quota_controller.go:414] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0622 01:06:17.946274 1 resource_quota_controller.go:429] no resource updates from discovery, skipping resource quota sync
I0622 01:06:18.017383 1 httplog.go:131] "HTTP" verb="GET" URI="/healthz" latency="132.501µs" userAgent="kube-probe/1.25+" audit-ID="" srcIP="127.0.0.1:40496" resp=200
W0622 01:06:18.791388 1 garbagecollector.go:755] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
I0622 01:06:20.985175 1 tokencleaner.go:166] Finished syncing secret "kube-system/bootstrap-token-8exq3s" (26.201µs)
I0622 01:06:22.168892 1 disruption.go:438] updatePod called on pod "metrics-server-74557696d7-q4qz8"
I0622 01:06:22.168977 1 disruption.go:501] No PodDisruptionBudgets found for pod metrics-server-74557696d7-q4qz8, PodDisruptionBudget controller will avoid syncing.
I0622 01:06:22.168989 1 disruption.go:441] No matching pdb for pod "metrics-server-74557696d7-q4qz8" I0622 01:06:22.169164 1 replica_set.go:457] Pod metrics-server-74557696d7-q4qz8 updated, objectMeta {Name:metrics-server-74557696d7-q4qz8 GenerateName:metrics-server-74557696d7- Namespace:kube-system SelfLink: UID:2ab088b4-b2aa-4a66-9f07-f686b38a370e ResourceVersion:678 Generation:0 CreationTimestamp:2022-06-22 01:03:26 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[k8s-app:metrics-server pod-template-hash:74557696d7] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:metrics-server-74557696d7 UID:91943589-8e9f-4d0b-a885-3df32df3e61e Controller:0xc000b2df0e BlockOwnerDeletion:0xc000b2df0f}] Finalizers:[] ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-06-22 01:03:26 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"91943589-8e9f-4d0b-a885-3df32df3e61e\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"metrics-server\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":4443,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}}},"f:readinessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{".":{},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:securityContext":{".":{},"f:readOnlyRootFilesystem":{},"f:runAsNonRoot":{},"f:runAsUser":{}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath
\":\"/tmp\"}":{".":{},"f:mountPath":{},"f:name":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{},"f:volumes":{".":{},"k:{\"name\":\"tmp-dir\"}":{".":{},"f:emptyDir":{},"f:name":{}}}}} Subresource:} {Manager:kube-scheduler Operation:Update APIVersion:v1 Time:2022-06-22 01:03:26 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{".":{},"k:{\"type\":\"PodScheduled\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} Subresource:status} {Manager:kubelet Operation:Update APIVersion:v1 Time:2022-06-22 01:06:08 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} Subresource:status}]} -> {Name:metrics-server-74557696d7-q4qz8 GenerateName:metrics-server-74557696d7- Namespace:kube-system SelfLink: UID:2ab088b4-b2aa-4a66-9f07-f686b38a370e ResourceVersion:724 Generation:0 CreationTimestamp:2022-06-22 01:03:26 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[k8s-app:metrics-server pod-template-hash:74557696d7] Annotations:map[cni.projectcalico.org/containerID:b8f1acd45875ec6e0ca16b89680c717877fbd34121ef9437855b2ffa36b285f4 cni.projectcalico.org/podIP:192.168.193.193/32 cni.projectcalico.org/podIPs:192.168.193.193/32] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet 
Name:metrics-server-74557696d7 UID:91943589-8e9f-4d0b-a885-3df32df3e61e Controller:0xc002134fa7 BlockOwnerDeletion:0xc002134fa8}] Finalizers:[] ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-06-22 01:03:26 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"91943589-8e9f-4d0b-a885-3df32df3e61e\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"metrics-server\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":4443,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}}},"f:readinessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{".":{},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:securityContext":{".":{},"f:readOnlyRootFilesystem":{},"f:runAsNonRoot":{},"f:runAsUser":{}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/tmp\"}":{".":{},"f:mountPath":{},"f:name":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{},"f:volumes":{".":{},"k:{\"name\":\"tmp-dir\"}":{".":{},"f:emptyDir":{},"f:name":{}}}}} Subresource:} {Manager:kube-scheduler Operation:Update APIVersion:v1 Time:2022-06-22 01:03:26 +0000 UTC FieldsType:FieldsV1 
FieldsV1:{"f:status":{"f:conditions":{".":{},"k:{\"type\":\"PodScheduled\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} Subresource:status} {Manager:kubelet Operation:Update APIVersion:v1 Time:2022-06-22 01:06:08 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} Subresource:status} {Manager:Go-http-client Operation:Update APIVersion:v1 Time:2022-06-22 01:06:22 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} Subresource:status}]}. I0622 01:06:22.169691 1 controller_utils.go:206] Controller kube-system/metrics-server-74557696d7 either never recorded expectations, or the ttl expired. ... skipping 61 lines ... I0622 01:06:25.602560 1 endpointslicemirroring_controller.go:313] kube-system/kube-dns Service now has selector, cleaning up any mirrored EndpointSlices I0622 01:06:25.602584 1 endpointslicemirroring_controller.go:275] Finished syncing EndpointSlices for "kube-system/kube-dns" Endpoints. (65.501µs) I0622 01:06:25.613703 1 endpointslice_controller.go:319] Finished syncing service "kube-system/kube-dns" endpoint slices. 
(74.429038ms)
I0622 01:06:25.613774 1 deployment_controller.go:288] "ReplicaSet updated" replicaSet="kube-system/coredns-8c797478b"
I0622 01:06:25.613817 1 deployment_controller.go:583] "Started syncing deployment" deployment="kube-system/coredns" startTime="2022-06-22 01:06:25.613796086 +0000 UTC m=+161.300489932"
I0622 01:06:25.614332 1 endpoints_controller.go:369] Finished syncing service "kube-system/kube-dns" endpoints. (13.079095ms)
I0622 01:06:25.614357 1 endpoints_controller.go:356] "Error syncing endpoints, retrying" service="kube-system/kube-dns" err="Operation cannot be fulfilled on endpoints \"kube-dns\": the object has been modified; please apply your changes to the latest version and try again"
I0622 01:06:25.614501 1 replica_set.go:667] Finished syncing ReplicaSet "kube-system/coredns-8c797478b" (32.638736ms)
I0622 01:06:25.614533 1 controller_utils.go:206] Controller kube-system/coredns-8c797478b either never recorded expectations, or the ttl expired.
I0622 01:06:25.614621 1 replica_set.go:667] Finished syncing ReplicaSet "kube-system/coredns-8c797478b" (93.801µs)
I0622 01:06:25.614769 1 event.go:294] "Event occurred" object="kube-system/kube-dns" fieldPath="" kind="Endpoints" apiVersion="v1" type="Warning" reason="FailedToUpdateEndpoint" message="Failed to update endpoint kube-system/kube-dns: Operation cannot be fulfilled on endpoints \"kube-dns\": the object has been modified; please apply your changes to the latest version and try again"
I0622 01:06:25.620010 1 endpoints_controller.go:528] Update endpoints for kube-system/kube-dns, ready: 3 not ready: 0
I0622 01:06:25.631551 1 endpointslicemirroring_controller.go:278] syncEndpoints("kube-system/kube-dns")
I0622 01:06:25.631575 1 endpointslicemirroring_controller.go:313] kube-system/kube-dns Service now has selector, cleaning up any mirrored EndpointSlices
I0622 01:06:25.631601 1 endpointslicemirroring_controller.go:275] Finished syncing EndpointSlices for "kube-system/kube-dns" Endpoints.
(69.3µs) I0622 01:06:25.631727 1 endpoints_controller.go:369] Finished syncing service "kube-system/kube-dns" endpoints. (11.883686ms) I0622 01:06:25.631797 1 endpointslice_controller.go:319] Finished syncing service "kube-system/kube-dns" endpoint slices. (18.038431ms) ... skipping 143 lines ... I0622 01:06:47.750012 1 reflector.go:382] vendor/k8s.io/client-go/informers/factory.go:134: forcing resync I0622 01:06:47.843584 1 pv_controller_base.go:605] resyncing PV controller I0622 01:06:47.902945 1 reflector.go:382] vendor/k8s.io/client-go/informers/factory.go:134: forcing resync E0622 01:06:47.958954 1 resource_quota_controller.go:414] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request I0622 01:06:47.959194 1 resource_quota_controller.go:429] no resource updates from discovery, skipping resource quota sync I0622 01:06:48.018040 1 httplog.go:131] "HTTP" verb="GET" URI="/healthz" latency="145.198µs" userAgent="kube-probe/1.25+" audit-ID="" srcIP="127.0.0.1:40824" resp=200 W0622 01:06:48.817604 1 garbagecollector.go:755] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request] I0622 01:06:48.921940 1 endpoints_controller.go:528] Update endpoints for kube-system/metrics-server, ready: 1 not ready: 0 I0622 01:06:48.922289 1 replica_set.go:457] Pod metrics-server-74557696d7-q4qz8 updated, objectMeta {Name:metrics-server-74557696d7-q4qz8 GenerateName:metrics-server-74557696d7- Namespace:kube-system SelfLink: UID:2ab088b4-b2aa-4a66-9f07-f686b38a370e ResourceVersion:772 Generation:0 CreationTimestamp:2022-06-22 01:03:26 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[k8s-app:metrics-server pod-template-hash:74557696d7] Annotations:map[cni.projectcalico.org/containerID:b8f1acd45875ec6e0ca16b89680c717877fbd34121ef9437855b2ffa36b285f4 cni.projectcalico.org/podIP:192.168.193.193/32 
cni.projectcalico.org/podIPs:192.168.193.193/32] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:metrics-server-74557696d7 UID:91943589-8e9f-4d0b-a885-3df32df3e61e Controller:0xc00214373e BlockOwnerDeletion:0xc00214373f}] Finalizers:[] ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-06-22 01:03:26 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"91943589-8e9f-4d0b-a885-3df32df3e61e\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"metrics-server\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":4443,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}}},"f:readinessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{".":{},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:securityContext":{".":{},"f:readOnlyRootFilesystem":{},"f:runAsNonRoot":{},"f:runAsUser":{}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/tmp\"}":{".":{},"f:mountPath":{},"f:name":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{},"f:volumes":{".":{},"k:{\"name\":\"tmp-dir\"}":{".":{},"f:emptyDir":{},"f:name":{}}}}} Subresource:} {Manager:kube-scheduler Operation:Update APIVersion:v1 Time:2022-06-22 01:03:26 +0000 UTC FieldsType:FieldsV1 
FieldsV1:{"f:status":{"f:conditions":{".":{},"k:{\"type\":\"PodScheduled\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} Subresource:status} {Manager:Go-http-client Operation:Update APIVersion:v1 Time:2022-06-22 01:06:22 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} Subresource:status} {Manager:kubelet Operation:Update APIVersion:v1 Time:2022-06-22 01:06:28 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.193.193\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} Subresource:status}]} -> {Name:metrics-server-74557696d7-q4qz8 GenerateName:metrics-server-74557696d7- Namespace:kube-system SelfLink: UID:2ab088b4-b2aa-4a66-9f07-f686b38a370e ResourceVersion:818 Generation:0 CreationTimestamp:2022-06-22 01:03:26 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[k8s-app:metrics-server pod-template-hash:74557696d7] Annotations:map[cni.projectcalico.org/containerID:b8f1acd45875ec6e0ca16b89680c717877fbd34121ef9437855b2ffa36b285f4 cni.projectcalico.org/podIP:192.168.193.193/32 cni.projectcalico.org/podIPs:192.168.193.193/32] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:metrics-server-74557696d7 UID:91943589-8e9f-4d0b-a885-3df32df3e61e Controller:0xc00275bf77 BlockOwnerDeletion:0xc00275bf78}] Finalizers:[] 
ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-06-22 01:03:26 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"91943589-8e9f-4d0b-a885-3df32df3e61e\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"metrics-server\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":4443,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}}},"f:readinessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{".":{},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:securityContext":{".":{},"f:readOnlyRootFilesystem":{},"f:runAsNonRoot":{},"f:runAsUser":{}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/tmp\"}":{".":{},"f:mountPath":{},"f:name":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{},"f:volumes":{".":{},"k:{\"name\":\"tmp-dir\"}":{".":{},"f:emptyDir":{},"f:name":{}}}}} Subresource:} {Manager:kube-scheduler Operation:Update APIVersion:v1 Time:2022-06-22 01:03:26 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{".":{},"k:{\"type\":\"PodScheduled\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} Subresource:status} {Manager:Go-http-client 
Operation:Update APIVersion:v1 Time:2022-06-22 01:06:22 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} Subresource:status} {Manager:kubelet Operation:Update APIVersion:v1 Time:2022-06-22 01:06:48 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.193.193\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} Subresource:status}]}. I0622 01:06:48.922846 1 controller_utils.go:206] Controller kube-system/metrics-server-74557696d7 either never recorded expectations, or the ttl expired. I0622 01:06:48.922996 1 replica_set_utils.go:59] Updating status for : kube-system/metrics-server-74557696d7, replicas 1->1 (need 1), fullyLabeledReplicas 1->1, readyReplicas 0->1, availableReplicas 0->1, sequence No: 1->1 I0622 01:06:48.922546 1 disruption.go:438] updatePod called on pod "metrics-server-74557696d7-q4qz8" I0622 01:06:48.923382 1 disruption.go:501] No PodDisruptionBudgets found for pod metrics-server-74557696d7-q4qz8, PodDisruptionBudget controller will avoid syncing. ... skipping 17 lines ... 
I0622 01:06:56.337529 1 controller.go:272] Triggering nodeSync
I0622 01:06:56.337584 1 controller.go:291] nodeSync has been triggered
I0622 01:06:56.337681 1 controller.go:792] Running updateLoadBalancerHosts(len(services)==0, workers==1)
I0622 01:06:56.337785 1 controller.go:808] Finished updateLoadBalancerHosts
I0622 01:06:56.337869 1 controller.go:735] It took 0.000189502 seconds to finish nodeSyncInternal
I0622 01:06:56.338027 1 topologycache.go:179] Ignoring node capz-1o072a-control-plane-gqmjd because it has an excluded label
I0622 01:06:56.338173 1 topologycache.go:183] Ignoring node capz-1o072a-md-0-p7hvd because it is not ready: [{MemoryPressure False 2022-06-22 01:06:56 +0000 UTC 2022-06-22 01:06:56 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2022-06-22 01:06:56 +0000 UTC 2022-06-22 01:06:56 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2022-06-22 01:06:56 +0000 UTC 2022-06-22 01:06:56 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready False 2022-06-22 01:06:56 +0000 UTC 2022-06-22 01:06:56 +0000 UTC KubeletNotReady [container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized, failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "capz-1o072a-md-0-p7hvd" not found]}]
I0622 01:06:56.338356 1 topologycache.go:215] Insufficient node info for topology hints (0 zones, %!s(int64=0) CPU, true)
I0622 01:06:56.339292 1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0a4b89ad7ec9889, ext:123088075359, loc:(*time.Location)(0x6f121e0)}}
I0622 01:06:56.339568 1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:1, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0a4b8ac143d4122, ext:192026252736, loc:(*time.Location)(0x6f121e0)}}
I0622 01:06:56.339832 1 daemon_controller.go:974] Nodes needing daemon pods for daemon set kube-proxy: [capz-1o072a-md-0-p7hvd], creating 1
I0622 01:06:56.340269 1 taint_manager.go:446] "Noticed node update" node={nodeName:capz-1o072a-md-0-p7hvd}
I0622 01:06:56.340522 1 taint_manager.go:451] "Updating known taints on node" node="capz-1o072a-md-0-p7hvd" taints=[]
I0622 01:06:56.340837 1 attach_detach_controller.go:673] processVolumesInUse for node "capz-1o072a-md-0-p7hvd"
W0622 01:06:56.340995 1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="capz-1o072a-md-0-p7hvd" does not exist
I0622 01:06:56.342614 1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc0a4b8a3b1f4b3b9, ext:158524814123, loc:(*time.Location)(0x6f121e0)}}
I0622 01:06:56.342932 1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:1, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc0a4b8ac14709b5f, ext:192029618473, loc:(*time.Location)(0x6f121e0)}}
I0622 01:06:56.343093 1 daemon_controller.go:974] Nodes needing daemon pods for daemon set calico-node: [capz-1o072a-md-0-p7hvd], creating 1
I0622 01:06:56.366084 1 attach_detach_controller.go:673] processVolumesInUse for node "capz-1o072a-md-0-p7hvd"
I0622 01:06:56.366370 1 disruption.go:426] addPod called on pod "kube-proxy-tc9lj"
I0622 01:06:56.367251 1 disruption.go:501] No PodDisruptionBudgets found for pod kube-proxy-tc9lj, PodDisruptionBudget controller will avoid syncing.
... skipping 118 lines ...
I0622 01:06:58.030788 1 azure_vmss.go:370] Can not extract scale set name from providerID (azure:///subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-1o072a/providers/Microsoft.Compute/virtualMachines/capz-1o072a-md-0-p7hvd), assuming it is managed by availability set: not a vmss instance
I0622 01:06:58.030835 1 azure_instances.go:240] InstanceShutdownByProviderID gets power status "running" for node "capz-1o072a-md-0-p7hvd"
I0622 01:06:58.031097 1 azure_instances.go:251] InstanceShutdownByProviderID gets provisioning state "Updating" for node "capz-1o072a-md-0-p7hvd"
I0622 01:06:59.914806 1 attach_detach_controller.go:673] processVolumesInUse for node "capz-1o072a-control-plane-gqmjd"
I0622 01:07:00.308162 1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0a4b8ac1ae014e2, ext:192137586772, loc:(*time.Location)(0x6f121e0)}}
I0622 01:07:00.308493 1 attach_detach_controller.go:673] processVolumesInUse for node "capz-1o072a-md-0-nqg6m"
W0622 01:07:00.308522 1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="capz-1o072a-md-0-nqg6m" does not exist
I0622 01:07:00.308615 1 topologycache.go:179] Ignoring node capz-1o072a-control-plane-gqmjd because it has an excluded label
I0622 01:07:00.308853 1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:1, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0a4b8ad12688f2c, ext:195995536730, loc:(*time.Location)(0x6f121e0)}}
I0622 01:07:00.309008 1 daemon_controller.go:974] Nodes needing daemon pods for daemon set kube-proxy: [capz-1o072a-md-0-nqg6m], creating 1
I0622 01:07:00.308706 1 topologycache.go:183] Ignoring node capz-1o072a-md-0-p7hvd because it is not ready: [{MemoryPressure False 2022-06-22 01:06:56 +0000 UTC 2022-06-22 01:06:56 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2022-06-22 01:06:56 +0000 UTC 2022-06-22 01:06:56 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2022-06-22 01:06:56 +0000 UTC 2022-06-22 01:06:56 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready False 2022-06-22 01:06:56 +0000 UTC 2022-06-22 01:06:56 +0000 UTC KubeletNotReady [container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized, failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "capz-1o072a-md-0-p7hvd" not found]}]
I0622 01:07:00.309244 1 topologycache.go:183] Ignoring node capz-1o072a-md-0-nqg6m because it is not ready: [{MemoryPressure False 2022-06-22 01:07:00 +0000 UTC 2022-06-22 01:07:00 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2022-06-22 01:07:00 +0000 UTC 2022-06-22 01:07:00 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2022-06-22 01:07:00 +0000 UTC 2022-06-22 01:07:00 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready False 2022-06-22 01:07:00 +0000 UTC 2022-06-22 01:07:00 +0000 UTC KubeletNotReady [container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized, failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "capz-1o072a-md-0-nqg6m" not found]}]
I0622 01:07:00.309509 1 topologycache.go:215] Insufficient node info for topology hints (0 zones, %!s(int64=0) CPU, true)
I0622 01:07:00.309784 1 controller.go:697] Ignoring node capz-1o072a-md-0-p7hvd with Ready condition status False
I0622 01:07:00.309805 1 controller.go:697] Ignoring node
capz-1o072a-md-0-nqg6m with Ready condition status False
I0622 01:07:00.309962 1 controller.go:272] Triggering nodeSync
I0622 01:07:00.309987 1 controller.go:291] nodeSync has been triggered
I0622 01:07:00.310134 1 controller.go:792] Running updateLoadBalancerHosts(len(services)==0, workers==1)
... skipping 326 lines ...
I0622 01:07:27.092273 1 controller.go:697] Ignoring node capz-1o072a-md-0-nqg6m with Ready condition status False
I0622 01:07:27.092307 1 controller.go:265] Node changes detected, triggering a full node sync on all loadbalancer services
I0622 01:07:27.092317 1 controller.go:272] Triggering nodeSync
I0622 01:07:27.092329 1 controller.go:291] nodeSync has been triggered
I0622 01:07:27.092339 1 controller.go:757] Syncing backends for all LB services.
I0622 01:07:27.092349 1 controller.go:792] Running updateLoadBalancerHosts(len(services)==0, workers==1)
I0622 01:07:27.092900 1 topologycache.go:183] Ignoring node capz-1o072a-md-0-nqg6m because it is not ready: [{MemoryPressure False 2022-06-22 01:07:20 +0000 UTC 2022-06-22 01:07:00 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2022-06-22 01:07:20 +0000 UTC 2022-06-22 01:07:00 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2022-06-22 01:07:20 +0000 UTC 2022-06-22 01:07:00 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready False 2022-06-22 01:07:20 +0000 UTC 2022-06-22 01:07:00 +0000 UTC KubeletNotReady container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized}]
I0622 01:07:27.092948 1 topologycache.go:179] Ignoring node capz-1o072a-control-plane-gqmjd because it has an excluded label
I0622 01:07:27.092964 1 topologycache.go:215] Insufficient node info for topology hints (1 zones, %!s(int64=2000) CPU, true)
I0622 01:07:27.093260 1 controller_utils.go:205] "Added taint to node" taint=[] node="capz-1o072a-md-0-p7hvd"
I0622 01:07:27.093555 1 attach_detach_controller.go:673] processVolumesInUse for node "capz-1o072a-md-0-p7hvd"
I0622 01:07:27.093598 1 controller.go:808] Finished updateLoadBalancerHosts
I0622 01:07:27.093609 1 controller.go:764] Successfully updated 0 out of 0 load balancers to direct traffic to the updated set of nodes
... skipping 11 lines ...
I0622 01:07:27.737019 1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc0a4b8b3ebec56de, ext:223423603268, loc:(*time.Location)(0x6f121e0)}}
I0622 01:07:27.737105 1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc0a4b8b3ebef3be5, ext:223423792671, loc:(*time.Location)(0x6f121e0)}}
I0622 01:07:27.737126 1 daemon_controller.go:974] Nodes needing daemon pods for daemon set calico-node: [], creating 0
I0622 01:07:27.737181 1 daemon_controller.go:1036] Pods to delete for daemon set calico-node: [], deleting 0
I0622 01:07:27.737213 1 daemon_controller.go:1119] Updating daemon set status
I0622 01:07:27.737288 1 daemon_controller.go:1179] Finished syncing daemon set "kube-system/calico-node" (4.413966ms)
I0622 01:07:27.819640 1 node_lifecycle_controller.go:1044] ReadyCondition for Node capz-1o072a-md-0-p7hvd transitioned from &NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:2022-06-22 01:07:16 +0000 UTC,LastTransitionTime:2022-06-22 01:06:56 +0000 UTC,Reason:KubeletNotReady,Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized,} to &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-22 01:07:27 +0000 UTC,LastTransitionTime:2022-06-22 01:07:27 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,}
I0622 01:07:27.819988 1 node_lifecycle_controller.go:1052] Node capz-1o072a-md-0-p7hvd ReadyCondition updated. Updating timestamp.
I0622 01:07:27.830301 1 node_lifecycle_controller.go:898] Node capz-1o072a-md-0-p7hvd is healthy again, removing all taints
I0622 01:07:27.830732 1 node_lifecycle_controller.go:1219] Controller detected that zone westeurope: :0 is now in state Normal.
I0622 01:07:27.832751 1 taint_manager.go:446] "Noticed node update" node={nodeName:capz-1o072a-md-0-p7hvd}
I0622 01:07:27.832936 1 taint_manager.go:451] "Updating known taints on node" node="capz-1o072a-md-0-p7hvd" taints=[]
I0622 01:07:27.833090 1 taint_manager.go:472] "All taints were removed from the node. Cancelling all evictions..." node="capz-1o072a-md-0-p7hvd"
... skipping 40 lines ...
I0622 01:07:31.041357 1 topologycache.go:215] Insufficient node info for topology hints (1 zones, %!s(int64=4000) CPU, true)
I0622 01:07:31.041106 1 controller_utils.go:205] "Added taint to node" taint=[] node="capz-1o072a-md-0-nqg6m"
I0622 01:07:31.041128 1 attach_detach_controller.go:673] processVolumesInUse for node "capz-1o072a-md-0-nqg6m"
I0622 01:07:31.060392 1 controller_utils.go:217] "Made sure that node has no taint" node="capz-1o072a-md-0-nqg6m" taint=[&Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoSchedule,TimeAdded:<nil>,}]
I0622 01:07:31.062387 1 attach_detach_controller.go:673] processVolumesInUse for node "capz-1o072a-md-0-nqg6m"
I0622 01:07:32.753640 1 reflector.go:382] vendor/k8s.io/client-go/informers/factory.go:134: forcing resync
I0622 01:07:32.831798 1 node_lifecycle_controller.go:1044] ReadyCondition for Node capz-1o072a-md-0-nqg6m transitioned from &NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:2022-06-22 01:07:20 +0000 UTC,LastTransitionTime:2022-06-22 01:07:00 +0000 UTC,Reason:KubeletNotReady,Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not
initialized,} to &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-22 01:07:31 +0000 UTC,LastTransitionTime:2022-06-22 01:07:31 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,} I0622 01:07:32.831919 1 node_lifecycle_controller.go:1052] Node capz-1o072a-md-0-nqg6m ReadyCondition updated. Updating timestamp. I0622 01:07:32.844477 1 pv_controller_base.go:605] resyncing PV controller I0622 01:07:32.887034 1 node_lifecycle_controller.go:898] Node capz-1o072a-md-0-nqg6m is healthy again, removing all taints I0622 01:07:32.888418 1 taint_manager.go:446] "Noticed node update" node={nodeName:capz-1o072a-md-0-nqg6m} I0622 01:07:32.888467 1 taint_manager.go:451] "Updating known taints on node" node="capz-1o072a-md-0-nqg6m" taints=[] I0622 01:07:32.888487 1 taint_manager.go:472] "All taints were removed from the node. Cancelling all evictions..." node="capz-1o072a-md-0-nqg6m" ... skipping 56 lines ... I0622 01:07:39.341791 1 deployment_util.go:774] Deployment "csi-azurefile-controller" timed out (false) [last progress check: 2022-06-22 01:07:39.329608713 +0000 UTC m=+235.016304659 - now: 2022-06-22 01:07:39.341779072 +0000 UTC m=+235.028473118] I0622 01:07:39.352193 1 disruption.go:426] addPod called on pod "csi-azurefile-controller-8565959cf4-29dnb" I0622 01:07:39.352283 1 disruption.go:501] No PodDisruptionBudgets found for pod csi-azurefile-controller-8565959cf4-29dnb, PodDisruptionBudget controller will avoid syncing. 
I0622 01:07:39.352296 1 disruption.go:429] No matching pdb for pod "csi-azurefile-controller-8565959cf4-29dnb"
I0622 01:07:39.352564 1 pvc_protection_controller.go:380] "Enqueuing PVCs for Pod" pod="kube-system/csi-azurefile-controller-8565959cf4-29dnb" podUID=706b3fc3-2124-4ece-b161-13af35c8861a
I0622 01:07:39.352769 1 taint_manager.go:411] "Noticed pod update" pod="kube-system/csi-azurefile-controller-8565959cf4-29dnb"
I0622 01:07:39.352401 1 replica_set.go:394] Pod csi-azurefile-controller-8565959cf4-29dnb created: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"csi-azurefile-controller-8565959cf4-29dnb", GenerateName:"csi-azurefile-controller-8565959cf4-", Namespace:"kube-system", SelfLink:"", UID:"706b3fc3-2124-4ece-b161-13af35c8861a", ResourceVersion:"1018", Generation:0, CreationTimestamp:time.Date(2022, time.June, 22, 1, 7, 39, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"csi-azurefile-controller", "pod-template-hash":"8565959cf4"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"apps/v1", Kind:"ReplicaSet", Name:"csi-azurefile-controller-8565959cf4", UID:"a1860976-5795-43c9-8b91-fbaff0270287", Controller:(*bool)(0xc00247636e), BlockOwnerDeletion:(*bool)(0xc00247636f)}}, Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.June, 22, 1, 7, 39, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001513a40), Subresource:""}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"socket-dir", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(0xc001513a58), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil),
GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"azure-cred", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001513ad0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), 
VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"kube-api-access-nq78p", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(0xc001157440), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"csi-provisioner", Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-provisioner:v3.1.1", 
Command:[]string(nil), Args:[]string{"-v=2", "--csi-address=$(ADDRESS)", "--leader-election", "--leader-election-namespace=kube-system", "--timeout=300s", "--extra-create-metadata=true"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:524288000, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-nq78p", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"csi-attacher", Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-attacher:v3.5.0", Command:[]string(nil), Args:[]string{"-v=2", "-csi-address=$(ADDRESS)", "-timeout=120s", "--leader-election", "--leader-election-namespace=kube-system"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), 
Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:524288000, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-nq78p", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"csi-snapshotter", Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-snapshotter:v5.0.1", Command:[]string(nil), Args:[]string{"-v=2", "-csi-address=$(ADDRESS)", "--leader-election", "--leader-election-namespace=kube-system"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:104857600, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100Mi", 
Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-nq78p", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"csi-resizer", Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-resizer:v1.5.0", Command:[]string(nil), Args:[]string{"-csi-address=$(ADDRESS)", "-v=2", "--leader-election", "--leader-election-namespace=kube-system", "-handle-volume-inuse-error=false", "-feature-gates=RecoverVolumeExpansionFailure=true", "-timeout=120s"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:524288000, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, 
"memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-nq78p", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"liveness-probe", Image:"mcr.microsoft.com/oss/kubernetes-csi/livenessprobe:v2.7.0", Command:[]string(nil), Args:[]string{"--csi-address=/csi/csi.sock", "--probe-timeout=3s", "--health-port=29612", "--v=2"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:104857600, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-nq78p", ReadOnly:true, 
MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"azurefile", Image:"mcr.microsoft.com/k8s/csi/azurefile-csi:latest", Command:[]string(nil), Args:[]string{"--v=5", "--endpoint=$(CSI_ENDPOINT)", "--metrics-address=0.0.0.0:29614", "--user-agent-suffix=OSS-kubectl"}, WorkingDir:"", Ports:[]v1.ContainerPort{v1.ContainerPort{Name:"healthz", HostPort:29612, ContainerPort:29612, Protocol:"TCP", HostIP:""}, v1.ContainerPort{Name:"metrics", HostPort:29614, ContainerPort:29614, Protocol:"TCP", HostIP:""}}, EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"AZURE_CREDENTIAL_FILE", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc001157560)}, v1.EnvVar{Name:"CSI_ENDPOINT", Value:"unix:///csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:209715200, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"azure-cred", ReadOnly:false, 
MountPath:"/etc/kubernetes/", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-nq78p", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(0xc001103900), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002476730), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"csi-azurefile-controller-sa", DeprecatedServiceAccount:"csi-azurefile-controller-sa", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000708d20), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node-role.kubernetes.io/master", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node-role.kubernetes.io/controlplane", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002476790)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0024767b0)}}, 
HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-cluster-critical", Priority:(*int32)(0xc0024767b8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0024767bc), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc0019fed10), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil), OS:(*v1.PodOS)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition(nil), Message:"", Reason:"", NominatedNodeName:"", HostIP:"", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:<nil>, InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus(nil), QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}.
I0622 01:07:39.353097 1 controller_utils.go:240] Lowered expectations &controller.ControlleeExpectations{add:1, del:0, key:"kube-system/csi-azurefile-controller-8565959cf4", timestamp:time.Time{wall:0xc0a4b8b6d3d9917a, ext:235019719504, loc:(*time.Location)(0x6f121e0)}}
I0622 01:07:39.355797 1 controller_utils.go:581] Controller csi-azurefile-controller-8565959cf4 created pod csi-azurefile-controller-8565959cf4-29dnb
I0622 01:07:39.356540 1 event.go:294] "Event occurred" object="kube-system/csi-azurefile-controller-8565959cf4" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: csi-azurefile-controller-8565959cf4-29dnb"
I0622 01:07:39.356862 1 deployment_controller.go:585] "Finished syncing deployment" deployment="kube-system/csi-azurefile-controller" duration="33.743639ms"
I0622 01:07:39.357020 1 deployment_controller.go:497] "Error syncing deployment" deployment="kube-system/csi-azurefile-controller" err="Operation cannot be fulfilled on deployments.apps \"csi-azurefile-controller\": the object has been modified; please apply your changes to the latest version and try again"
I0622 01:07:39.357279 1 deployment_controller.go:583] "Started syncing deployment" deployment="kube-system/csi-azurefile-controller" startTime="2022-06-22 01:07:39.357197873 +0000 UTC m=+235.043891619"
I0622 01:07:39.358623 1 deployment_util.go:774] Deployment "csi-azurefile-controller" timed out (false) [last progress check: 2022-06-22 01:07:39 +0000 UTC - now: 2022-06-22 01:07:39.358612791 +0000 UTC m=+235.045306537]
I0622 01:07:39.374010 1 disruption.go:438] updatePod called on pod "csi-azurefile-controller-8565959cf4-29dnb"
I0622 01:07:39.375288 1 disruption.go:501] No PodDisruptionBudgets found for pod csi-azurefile-controller-8565959cf4-29dnb, PodDisruptionBudget controller will avoid syncing.
I0622 01:07:39.375515 1 disruption.go:441] No matching pdb for pod "csi-azurefile-controller-8565959cf4-29dnb"
I0622 01:07:39.374800 1 replica_set.go:457] Pod csi-azurefile-controller-8565959cf4-29dnb updated, objectMeta {Name:csi-azurefile-controller-8565959cf4-29dnb GenerateName:csi-azurefile-controller-8565959cf4- Namespace:kube-system SelfLink: UID:706b3fc3-2124-4ece-b161-13af35c8861a ResourceVersion:1018 Generation:0 CreationTimestamp:2022-06-22 01:07:39 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[app:csi-azurefile-controller pod-template-hash:8565959cf4] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:csi-azurefile-controller-8565959cf4 UID:a1860976-5795-43c9-8b91-fbaff0270287 Controller:0xc00247636e BlockOwnerDeletion:0xc00247636f}] Finalizers:[] ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-06-22 01:07:39 +0000 UTC FieldsType:FieldsV1
FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a1860976-5795-43c9-8b91-fbaff0270287\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"azurefile\"}":{".":{},"f:args":{},"f:env":{".":{},"k:{\"name\":\"AZURE_CREDENTIAL_FILE\"}":{".":{},"f:name":{},"f:valueFrom":{".":{},"f:configMapKeyRef":{}}},"k:{\"name\":\"CSI_ENDPOINT\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":29612,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:hostPort":{},"f:name":{},"f:protocol":{}},"k:{\"containerPort\":29614,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:hostPort":{},"f:name":{},"f:protocol":{}}},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}},"k:{\"mountPath\":\"/etc/kubernetes/\"}":{".":{},"f:mountPath":{},"f:name":{}}}},"k:{\"name\":\"csi-attacher\"}":{".":{},"f:args":{},"f:env":{".":{},"k:{\"name\":\"ADDRESS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}}}},"k:{\"name\":\"csi-provisioner\"}":{".":{},"f:args":{},"f:env":{".":{},"k:{\"name\":\"ADDRESS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests
":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}}}},"k:{\"name\":\"csi-resizer\"}":{".":{},"f:args":{},"f:env":{".":{},"k:{\"name\":\"ADDRESS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}}}},"k:{\"name\":\"csi-snapshotter\"}":{".":{},"f:args":{},"f:env":{".":{},"k:{\"name\":\"ADDRESS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}}}},"k:{\"name\":\"liveness-probe\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:hostNetwork":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{},"f:volumes":{".":{},"k:{\"name\":\"azure-cred\"}":{".":{},"f:hostPath":{".":{},"f:path":{},"f:type":{}},"f:name":{}},"k:{\"name\":\"socket-dir\"}":{".":{},"f:emptyDir":{},"f:name":{}}}}} Subresource:}]} -> {Name:csi-azurefile-controller-8565959cf4-29dnb 
GenerateName:csi-azurefile-controller-8565959cf4- Namespace:kube-system SelfLink: UID:706b3fc3-2124-4ece-b161-13af35c8861a ResourceVersion:1019 Generation:0 CreationTimestamp:2022-06-22 01:07:39 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[app:csi-azurefile-controller pod-template-hash:8565959cf4] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:csi-azurefile-controller-8565959cf4 UID:a1860976-5795-43c9-8b91-fbaff0270287 Controller:0xc002477e77 BlockOwnerDeletion:0xc002477e78}] Finalizers:[] ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-06-22 01:07:39 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a1860976-5795-43c9-8b91-fbaff0270287\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"azurefile\"}":{".":{},"f:args":{},"f:env":{".":{},"k:{\"name\":\"AZURE_CREDENTIAL_FILE\"}":{".":{},"f:name":{},"f:valueFrom":{".":{},"f:configMapKeyRef":{}}},"k:{\"name\":\"CSI_ENDPOINT\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":29612,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:hostPort":{},"f:name":{},"f:protocol":{}},"k:{\"containerPort\":29614,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:hostPort":{},"f:name":{},"f:protocol":{}}},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}},"k:{\"mountPath\":\"/etc/kubernetes/\"}":{".":{},"f:mountPath":{},"f:name":{
}}}},"k:{\"name\":\"csi-attacher\"}":{".":{},"f:args":{},"f:env":{".":{},"k:{\"name\":\"ADDRESS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}}}},"k:{\"name\":\"csi-provisioner\"}":{".":{},"f:args":{},"f:env":{".":{},"k:{\"name\":\"ADDRESS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}}}},"k:{\"name\":\"csi-resizer\"}":{".":{},"f:args":{},"f:env":{".":{},"k:{\"name\":\"ADDRESS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}}}},"k:{\"name\":\"csi-snapshotter\"}":{".":{},"f:args":{},"f:env":{".":{},"k:{\"name\":\"ADDRESS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}}}},"k:{\"name\":\"liveness-probe\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f
:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:hostNetwork":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{},"f:volumes":{".":{},"k:{\"name\":\"azure-cred\"}":{".":{},"f:hostPath":{".":{},"f:path":{},"f:type":{}},"f:name":{}},"k:{\"name\":\"socket-dir\"}":{".":{},"f:emptyDir":{},"f:name":{}}}}} Subresource:}]}.
I0622 01:07:39.375017 1 taint_manager.go:411] "Noticed pod update" pod="kube-system/csi-azurefile-controller-8565959cf4-29dnb"
I0622 01:07:39.392827 1 controller_utils.go:581] Controller csi-azurefile-controller-8565959cf4 created pod csi-azurefile-controller-8565959cf4-rr8hh
I0622 01:07:39.393299 1 replica_set_utils.go:59] Updating status for : kube-system/csi-azurefile-controller-8565959cf4, replicas 0->0 (need 2), fullyLabeledReplicas 0->0, readyReplicas 0->0, availableReplicas 0->0, sequence No: 0->1
I0622 01:07:39.394271 1 event.go:294] "Event occurred" object="kube-system/csi-azurefile-controller-8565959cf4" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: csi-azurefile-controller-8565959cf4-rr8hh"
I0622 01:07:39.394678 1 disruption.go:426] addPod called on pod "csi-azurefile-controller-8565959cf4-rr8hh"
I0622 01:07:39.397809 1 disruption.go:501] No PodDisruptionBudgets found for pod csi-azurefile-controller-8565959cf4-rr8hh, PodDisruptionBudget controller will avoid syncing.
I0622 01:07:39.394766 1 replica_set.go:394] Pod csi-azurefile-controller-8565959cf4-rr8hh created: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"csi-azurefile-controller-8565959cf4-rr8hh", GenerateName:"csi-azurefile-controller-8565959cf4-", Namespace:"kube-system", SelfLink:"", UID:"e4167da9-422b-48c3-9570-f23396a90797", ResourceVersion:"1020", Generation:0, CreationTimestamp:time.Date(2022, time.June, 22, 1, 7, 39, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"csi-azurefile-controller", "pod-template-hash":"8565959cf4"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"apps/v1", Kind:"ReplicaSet", Name:"csi-azurefile-controller-8565959cf4", UID:"a1860976-5795-43c9-8b91-fbaff0270287", Controller:(*bool)(0xc000def7ee), BlockOwnerDeletion:(*bool)(0xc000def7ef)}}, Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.June, 22, 1, 7, 39, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc000d35a28), Subresource:""}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"socket-dir", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(0xc000d35a40), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), 
DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"azure-cred", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc000d35a58), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, 
v1.Volume{Name:"kube-api-access-4xjqt", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(0xc00066ed80), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"csi-provisioner", Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-provisioner:v3.1.1", Command:[]string(nil), Args:[]string{"-v=2", "--csi-address=$(ADDRESS)", "--leader-election", "--leader-election-namespace=kube-system", "--timeout=300s", "--extra-create-metadata=true"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, 
Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:524288000, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-4xjqt", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"csi-attacher", Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-attacher:v3.5.0", Command:[]string(nil), Args:[]string{"-v=2", "-csi-address=$(ADDRESS)", "-timeout=120s", "--leader-election", "--leader-election-namespace=kube-system"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:524288000, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500Mi", Format:"BinarySI"}}, 
Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-4xjqt", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"csi-snapshotter", Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-snapshotter:v5.0.1", Command:[]string(nil), Args:[]string{"-v=2", "-csi-address=$(ADDRESS)", "--leader-election", "--leader-election-namespace=kube-system"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:104857600, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", 
Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-4xjqt", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"csi-resizer", Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-resizer:v1.5.0", Command:[]string(nil), Args:[]string{"-csi-address=$(ADDRESS)", "-v=2", "--leader-election", "--leader-election-namespace=kube-system", "-handle-volume-inuse-error=false", "-feature-gates=RecoverVolumeExpansionFailure=true", "-timeout=120s"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:524288000, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, 
v1.VolumeMount{Name:"kube-api-access-4xjqt", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"liveness-probe", Image:"mcr.microsoft.com/oss/kubernetes-csi/livenessprobe:v2.7.0", Command:[]string(nil), Args:[]string{"--csi-address=/csi/csi.sock", "--probe-timeout=3s", "--health-port=29612", "--v=2"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:104857600, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-4xjqt", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), 
TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"azurefile", Image:"mcr.microsoft.com/k8s/csi/azurefile-csi:latest", Command:[]string(nil), Args:[]string{"--v=5", "--endpoint=$(CSI_ENDPOINT)", "--metrics-address=0.0.0.0:29614", "--user-agent-suffix=OSS-kubectl"}, WorkingDir:"", Ports:[]v1.ContainerPort{v1.ContainerPort{Name:"healthz", HostPort:29612, ContainerPort:29612, Protocol:"TCP", HostIP:""}, v1.ContainerPort{Name:"metrics", HostPort:29614, ContainerPort:29614, Protocol:"TCP", HostIP:""}}, EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"AZURE_CREDENTIAL_FILE", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc00066f9e0)}, v1.EnvVar{Name:"CSI_ENDPOINT", Value:"unix:///csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:209715200, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"azure-cred", ReadOnly:false, MountPath:"/etc/kubernetes/", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-4xjqt", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, 
VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(0xc001b3efc0), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0004ab310), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"csi-azurefile-controller-sa", DeprecatedServiceAccount:"csi-azurefile-controller-sa", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0003e1030), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node-role.kubernetes.io/master", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node-role.kubernetes.io/controlplane", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0004ab3c0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0004ab490)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-cluster-critical", Priority:(*int32)(0xc0004ab498), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0004ab49c), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc001b1ad50), 
Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil), OS:(*v1.PodOS)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition(nil), Message:"", Reason:"", NominatedNodeName:"", HostIP:"", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:<nil>, InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus(nil), QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}.
I0622 01:07:39.397386 1 pvc_protection_controller.go:380] "Enqueuing PVCs for Pod" pod="kube-system/csi-azurefile-controller-8565959cf4-rr8hh" podUID=e4167da9-422b-48c3-9570-f23396a90797
I0622 01:07:39.397478 1 taint_manager.go:411] "Noticed pod update" pod="kube-system/csi-azurefile-controller-8565959cf4-rr8hh"
I0622 01:07:39.398375 1 disruption.go:429] No matching pdb for pod "csi-azurefile-controller-8565959cf4-rr8hh"
I0622 01:07:39.398491 1 controller_utils.go:240] Lowered expectations &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/csi-azurefile-controller-8565959cf4", timestamp:time.Time{wall:0xc0a4b8b6d3d9917a, ext:235019719504, loc:(*time.Location)(0x6f121e0)}}
I0622 01:07:39.406252 1 deployment_controller.go:183] "Updating deployment" deployment="kube-system/csi-azurefile-controller"
I0622 01:07:39.406747 1 deployment_controller.go:585] "Finished syncing deployment" deployment="kube-system/csi-azurefile-controller" duration="49.533745ms"
... skipping 310 lines ...
I0622 01:07:49.358239 1 disruption.go:438] updatePod called on pod "csi-snapshot-controller-789545b454-mgcjf"
I0622 01:07:49.358290 1 disruption.go:501] No PodDisruptionBudgets found for pod csi-snapshot-controller-789545b454-mgcjf, PodDisruptionBudget controller will avoid syncing.
I0622 01:07:49.358301 1 disruption.go:441] No matching pdb for pod "csi-snapshot-controller-789545b454-mgcjf"
I0622 01:07:49.358362 1 replica_set.go:457] Pod csi-snapshot-controller-789545b454-mgcjf updated, objectMeta {Name:csi-snapshot-controller-789545b454-mgcjf GenerateName:csi-snapshot-controller-789545b454- Namespace:kube-system SelfLink: UID:118ef398-4222-4350-aa25-864a9b91ebee ResourceVersion:1145 Generation:0 CreationTimestamp:2022-06-22 01:07:49 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[app:csi-snapshot-controller pod-template-hash:789545b454] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:csi-snapshot-controller-789545b454 UID:f82ba6a2-366c-48cf-89b9-c2927c7fae76 Controller:0xc002a41b07 BlockOwnerDeletion:0xc002a41b08}] Finalizers:[] ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-06-22 01:07:49 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f82ba6a2-366c-48cf-89b9-c2927c7fae76\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"csi-snapshot-controller\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}} Subresource:}]} -> {Name:csi-snapshot-controller-789545b454-mgcjf GenerateName:csi-snapshot-controller-789545b454- Namespace:kube-system SelfLink: UID:118ef398-4222-4350-aa25-864a9b91ebee ResourceVersion:1149 Generation:0 CreationTimestamp:2022-06-22 01:07:49 +0000 UTC DeletionTimestamp:<nil> 
DeletionGracePeriodSeconds:<nil> Labels:map[app:csi-snapshot-controller pod-template-hash:789545b454] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:csi-snapshot-controller-789545b454 UID:f82ba6a2-366c-48cf-89b9-c2927c7fae76 Controller:0xc002939f9e BlockOwnerDeletion:0xc002939f9f}] Finalizers:[] ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-06-22 01:07:49 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f82ba6a2-366c-48cf-89b9-c2927c7fae76\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"csi-snapshot-controller\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}} Subresource:}]}. 
I0622 01:07:49.358582 1 taint_manager.go:411] "Noticed pod update" pod="kube-system/csi-snapshot-controller-789545b454-mgcjf"
I0622 01:07:49.361642 1 deployment_controller.go:585] "Finished syncing deployment" deployment="kube-system/csi-snapshot-controller" duration="58.424875ms"
I0622 01:07:49.361667 1 deployment_controller.go:497] "Error syncing deployment" deployment="kube-system/csi-snapshot-controller" err="Operation cannot be fulfilled on deployments.apps \"csi-snapshot-controller\": the object has been modified; please apply your changes to the latest version and try again"
I0622 01:07:49.361712 1 deployment_controller.go:583] "Started syncing deployment" deployment="kube-system/csi-snapshot-controller" startTime="2022-06-22 01:07:49.361700822 +0000 UTC m=+245.048394468"
I0622 01:07:49.362051 1 deployment_util.go:774] Deployment "csi-snapshot-controller" timed out (false) [last progress check: 2022-06-22 01:07:49 +0000 UTC - now: 2022-06-22 01:07:49.362045227 +0000 UTC m=+245.048738773]
I0622 01:07:49.362346 1 replica_set.go:667] Finished syncing ReplicaSet "kube-system/csi-snapshot-controller-789545b454" (52.176892ms)
I0622 01:07:49.362380 1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/csi-snapshot-controller-789545b454", timestamp:time.Time{wall:0xc0a4b8b9527e5ca4, ext:244996965486, loc:(*time.Location)(0x6f121e0)}}
I0622 01:07:49.362435 1 replica_set_utils.go:59] Updating status for : kube-system/csi-snapshot-controller-789545b454, replicas 0->2 (need 2), fullyLabeledReplicas 0->2, readyReplicas 0->0, availableReplicas 0->0, sequence No: 0->1
I0622 01:07:49.362804 1 deployment_controller.go:288] "ReplicaSet updated" replicaSet="kube-system/csi-snapshot-controller-789545b454"
... skipping 1558 lines ...
I0622 01:13:11.716582 1 pvc_protection_controller.go:149] "Processing PVC" PVC="azurefile-1563/pvc-tgwkd"
I0622 01:13:11.716706 1 pvc_protection_controller.go:152] "Finished processing PVC" PVC="azurefile-1563/pvc-tgwkd" duration="6.901µs"
I0622 01:13:11.716722 1 taint_manager.go:411] "Noticed pod update" pod="azurefile-1563/azurefile-volume-tester-gw95w-f9d659bdd-gzrfg"
I0622 01:13:11.716147 1 replica_set.go:394] Pod azurefile-volume-tester-gw95w-f9d659bdd-gzrfg created: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"azurefile-volume-tester-gw95w-f9d659bdd-gzrfg", GenerateName:"azurefile-volume-tester-gw95w-f9d659bdd-", Namespace:"azurefile-1563", SelfLink:"", UID:"d1712f6e-00c1-4436-b2ae-fc9a6a5e5221", ResourceVersion:"2293", Generation:0, CreationTimestamp:time.Date(2022, time.June, 22, 1, 13, 11, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"azurefile-volume-tester-5199948958991797301", "pod-template-hash":"f9d659bdd"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"apps/v1", Kind:"ReplicaSet", Name:"azurefile-volume-tester-gw95w-f9d659bdd", UID:"87102c85-3d67-4583-aeab-290428c77808", Controller:(*bool)(0xc00235a147), BlockOwnerDeletion:(*bool)(0xc00235a148)}}, Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.June, 22, 1, 13, 11, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002d12fd8), Subresource:""}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"test-volume-1", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), 
Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(0xc002d12ff0), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"kube-api-access-5l8qm", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), 
Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(0xc003127e00), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"volume-tester", Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-2", Command:[]string{"/bin/sh"}, Args:[]string{"-c", "echo 'hello world' >> /mnt/test-1/data && while true; do sleep 100; done"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"test-volume-1", ReadOnly:false, MountPath:"/mnt/test-1", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-5l8qm", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00235a218), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", 
HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc00076a3f0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00235a250)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00235a270)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc00235a278), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00235a27c), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc002255d10), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil), OS:(*v1.PodOS)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition(nil), Message:"", Reason:"", NominatedNodeName:"", HostIP:"", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:<nil>, InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus(nil), QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}. 
I0622 01:13:11.717178 1 controller_utils.go:240] Lowered expectations &controller.ControlleeExpectations{add:0, del:0, key:"azurefile-1563/azurefile-volume-tester-gw95w-f9d659bdd", timestamp:time.Time{wall:0xc0a4b909e924f72d, ext:567376982475, loc:(*time.Location)(0x6f121e0)}}
I0622 01:13:11.718904 1 deployment_controller.go:585] "Finished syncing deployment" deployment="azurefile-1563/azurefile-volume-tester-gw95w" duration="34.326036ms"
I0622 01:13:11.719144 1 deployment_controller.go:497] "Error syncing deployment" deployment="azurefile-1563/azurefile-volume-tester-gw95w" err="Operation cannot be fulfilled on deployments.apps \"azurefile-volume-tester-gw95w\": the object has been modified; please apply your changes to the latest version and try again"
I0622 01:13:11.719433 1 deployment_controller.go:583] "Started syncing deployment" deployment="azurefile-1563/azurefile-volume-tester-gw95w" startTime="2022-06-22 01:13:11.7194036 +0000 UTC m=+567.406097346"
I0622 01:13:11.719921 1 deployment_util.go:774] Deployment "azurefile-volume-tester-gw95w" timed out (false) [last progress check: 2022-06-22 01:13:11 +0000 UTC - now: 2022-06-22 01:13:11.719909306 +0000 UTC m=+567.406603152]
I0622 01:13:11.723675 1 disruption.go:438] updatePod called on pod "azurefile-volume-tester-gw95w-f9d659bdd-gzrfg"
I0622 01:13:11.723868 1 disruption.go:501] No PodDisruptionBudgets found for pod azurefile-volume-tester-gw95w-f9d659bdd-gzrfg, PodDisruptionBudget controller will avoid syncing.
I0622 01:13:11.723998 1 disruption.go:441] No matching pdb for pod "azurefile-volume-tester-gw95w-f9d659bdd-gzrfg"
I0622 01:13:11.724206 1 replica_set.go:457] Pod azurefile-volume-tester-gw95w-f9d659bdd-gzrfg updated, objectMeta {Name:azurefile-volume-tester-gw95w-f9d659bdd-gzrfg GenerateName:azurefile-volume-tester-gw95w-f9d659bdd- Namespace:azurefile-1563 SelfLink: UID:d1712f6e-00c1-4436-b2ae-fc9a6a5e5221 ResourceVersion:2293 Generation:0 CreationTimestamp:2022-06-22 01:13:11 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[app:azurefile-volume-tester-5199948958991797301 pod-template-hash:f9d659bdd] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:azurefile-volume-tester-gw95w-f9d659bdd UID:87102c85-3d67-4583-aeab-290428c77808 Controller:0xc00235a147 BlockOwnerDeletion:0xc00235a148}] Finalizers:[] ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-06-22 01:13:11 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"87102c85-3d67-4583-aeab-290428c77808\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"volume-tester\"}":{".":{},"f:args":{},"f:command":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/mnt/test-1\"}":{".":{},"f:mountPath":{},"f:name":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:volumes":{".":{},"k:{\"name\":\"test-volume-1\"}":{".":{},"f:name":{},"f:persistentVolumeClaim":{".":{},"f:claimName":{}}}}}} Subresource:}]} -> {Name:azurefile-volume-tester-gw95w-f9d659bdd-gzrfg GenerateName:azurefile-volume-tester-gw95w-f9d659bdd- Namespace:azurefile-1563 SelfLink: UID:d1712f6e-00c1-4436-b2ae-fc9a6a5e5221 ResourceVersion:2294 Generation:0 CreationTimestamp:2022-06-22 01:13:11 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[app:azurefile-volume-tester-5199948958991797301 pod-template-hash:f9d659bdd] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:azurefile-volume-tester-gw95w-f9d659bdd UID:87102c85-3d67-4583-aeab-290428c77808 Controller:0xc00235adc0 BlockOwnerDeletion:0xc00235adc1}] Finalizers:[] ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-06-22 01:13:11 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"87102c85-3d67-4583-aeab-290428c77808\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"volume-tester\"}":{".":{},"f:args":{},"f:command":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/mnt/test-1\"}":{".":{},"f:mountPath":{},"f:name":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:volumes":{".":{},"k:{\"name\":\"test-volume-1\"}":{".":{},"f:name":{},"f:persistentVolumeClaim":{".":{},"f:claimName":{}}}}}} Subresource:}]}.
... skipping 1371 lines ...
JUnit report was created: /logs/artifacts/junit_01.xml

Summarizing 1 Failure:

[Fail] Dynamic Provisioning [It] should create a volume on demand and resize it [kubernetes.io/azure-file] [file.csi.azure.com] [Windows]
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/dynamic_provisioning_test.go:380

Ran 6 of 34 Specs in 365.001 seconds
FAIL! -- 5 Passed | 1 Failed | 0 Pending | 28 Skipped

You're using deprecated Ginkgo functionality:
=============================================
Ginkgo 2.0 is under active development and will introduce several new features, improvements, and a small handful of breaking changes. A release candidate for 2.0 is now available and 2.0 should GA in Fall 2021. Please give the RC a try and send us feedback!
  - To learn more, view the migration guide at https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md
... skipping 5 lines ...
If this change will be impactful to you please leave a comment on https://github.com/onsi/ginkgo/issues/711
Learn more at: https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md#removed-custom-reporters
To silence deprecations that can be silenced set the following environment variable:
ACK_GINKGO_DEPRECATIONS=1.16.5

--- FAIL: TestE2E (365.01s)
FAIL
FAIL	sigs.k8s.io/azurefile-csi-driver/test/e2e	365.066s
FAIL
make: *** [Makefile:85: e2e-test] Error 1

NAME                              STATUS   ROLES           AGE     VERSION                             INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
capz-1o072a-control-plane-gqmjd   Ready    control-plane   12m     v1.25.0-alpha.1.67+9e320e27222c5b   10.0.0.4      <none>        Ubuntu 18.04.6 LTS   5.4.0-1085-azure   containerd://1.6.2
capz-1o072a-md-0-nqg6m            Ready    <none>          9m19s   v1.25.0-alpha.1.67+9e320e27222c5b   10.1.0.4      <none>        Ubuntu 18.04.6 LTS   5.4.0-1085-azure   containerd://1.6.2
capz-1o072a-md-0-p7hvd            Ready    <none>          9m23s   v1.25.0-alpha.1.67+9e320e27222c5b   10.1.0.5      <none>        Ubuntu 18.04.6 LTS   5.4.0-1085-azure   containerd://1.6.2

NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE   IP                NODE                              NOMINATED NODE   READINESS GATES
kube-system   calico-kube-controllers-57cb778775-fxsp7   1/1     Running   0          11m   192.168.193.194   capz-1o072a-control-plane-gqmjd   <none>           <none>
... skipping 117 lines ...
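Editor's note on the failed spec: the assertion message for this run reads `newPVCSize(11Gi) is not equal to newPVSize(10GiGi)`. The doubled `GiGi` suffix suggests a size string that already carried its unit had `Gi` appended again somewhere in the comparison path. The sketch below is purely illustrative — `formatSize` and `formatSizeFixed` are hypothetical helpers, not code from azurefile-csi-driver or its test suite — showing how such a doubled suffix arises and one defensive fix:

```go
package main

import (
	"fmt"
	"strings"
)

// formatSize reproduces the suspected bug: the input already carries
// its unit ("10Gi"), but the formatter unconditionally appends "Gi".
// (Hypothetical helper for illustration only.)
func formatSize(capacity string) string {
	return capacity + "Gi" // "10Gi" becomes "10GiGi"
}

// formatSizeFixed appends the unit only when it is missing.
func formatSizeFixed(capacity string) string {
	if strings.HasSuffix(capacity, "Gi") {
		return capacity
	}
	return capacity + "Gi"
}

func main() {
	fmt.Println(formatSize("10Gi"))      // prints "10GiGi", matching the failure message
	fmt.Println(formatSizeFixed("10Gi")) // prints "10Gi"
	fmt.Println(formatSizeFixed("11"))   // prints "11Gi"
}
```

In the real driver, sizes flow through `resource.Quantity`, whose `String()` already emits the unit suffix; comparing quantities directly (or normalizing both sides before comparing strings) avoids this class of mismatch.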