Result: success
Tests: 0 failed / 6 succeeded
Started: 2022-09-07 05:40
Elapsed: 33m41s
Revision:
Uploader: crier

No Test Failures!


Passed tests: 6

Skipped tests: 28

Error lines from build-log.txt

... skipping 627 lines ...
certificate.cert-manager.io "selfsigned-cert" deleted
# Create secret for AzureClusterIdentity
./hack/create-identity-secret.sh
make[2]: Entering directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make[2]: Nothing to be done for 'kubectl'.
make[2]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
Error from server (NotFound): secrets "cluster-identity-secret" not found
secret/cluster-identity-secret created
secret/cluster-identity-secret labeled
# Create customized cloud provider configs
./hack/create-custom-cloud-provider-config.sh
make[2]: Entering directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make[2]: Nothing to be done for 'kubectl'.
... skipping 409 lines ...
Sep  7 05:58:17.041: INFO: PersistentVolumeClaim pvc-v4846 found but phase is Pending instead of Bound.
Sep  7 05:58:19.075: INFO: PersistentVolumeClaim pvc-v4846 found and phase=Bound (28.496209294s)
STEP: checking the PVC
STEP: validating provisioned PV
STEP: checking the PV
STEP: deploying the pod
STEP: checking that the pods command exits with no error
Sep  7 05:58:19.185: INFO: Waiting up to 15m0s for pod "azurefile-volume-tester-rwclx" in namespace "azurefile-5194" to be "Succeeded or Failed"
Sep  7 05:58:19.219: INFO: Pod "azurefile-volume-tester-rwclx": Phase="Pending", Reason="", readiness=false. Elapsed: 33.165143ms
Sep  7 05:58:21.252: INFO: Pod "azurefile-volume-tester-rwclx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066621023s
Sep  7 05:58:23.285: INFO: Pod "azurefile-volume-tester-rwclx": Phase="Running", Reason="", readiness=false. Elapsed: 4.100015072s
Sep  7 05:58:25.319: INFO: Pod "azurefile-volume-tester-rwclx": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.134030633s
STEP: Saw pod success
Sep  7 05:58:25.320: INFO: Pod "azurefile-volume-tester-rwclx" satisfied condition "Succeeded or Failed"
Sep  7 05:58:25.320: INFO: deleting Pod "azurefile-5194"/"azurefile-volume-tester-rwclx"
Sep  7 05:58:25.372: INFO: Pod azurefile-volume-tester-rwclx has the following logs: hello world

STEP: Deleting pod azurefile-volume-tester-rwclx in namespace azurefile-5194
Sep  7 05:58:25.411: INFO: deleting PVC "azurefile-5194"/"pvc-v4846"
Sep  7 05:58:25.411: INFO: Deleting PersistentVolumeClaim "pvc-v4846"
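The wait pattern above ("Waiting up to 15m0s for pod ... to be 'Succeeded or Failed'", then repeated phase checks every ~2s) can be sketched as a simple polling loop. This is an illustrative reconstruction, not the suite's actual Go code; `get_phase` stands in for a real API client call.

```python
import time

def wait_for_pod_terminal(get_phase, timeout=900.0, interval=2.0, sleep=time.sleep):
    """Poll a pod's phase until it reaches a terminal state or the timeout expires."""
    waited = 0.0
    while waited <= timeout:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):  # terminal pod phases
            return phase
        sleep(interval)
        waited += interval
    raise TimeoutError("pod did not reach a terminal phase in time")

# Simulated phase sequence mirroring the log: Pending -> Pending -> Running -> Succeeded.
phases = iter(["Pending", "Pending", "Running", "Succeeded"])

def next_phase():
    return next(phases)

result = wait_for_pod_terminal(next_phase, sleep=lambda _: None)
print(result)  # Succeeded
```

The 15m budget and 2s cadence match the timestamps in the log; the real suite additionally records the elapsed time on every poll.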
... skipping 159 lines ...
Sep  7 06:00:20.511: INFO: PersistentVolumeClaim pvc-b4qw7 found but phase is Pending instead of Bound.
Sep  7 06:00:22.543: INFO: PersistentVolumeClaim pvc-b4qw7 found and phase=Bound (28.498139249s)
STEP: checking the PVC
STEP: validating provisioned PV
STEP: checking the PV
STEP: deploying the pod
STEP: checking that the pods command exits with an error
Sep  7 06:00:22.646: INFO: Waiting up to 15m0s for pod "azurefile-volume-tester-fw99v" in namespace "azurefile-156" to be "Error status code"
Sep  7 06:00:22.678: INFO: Pod "azurefile-volume-tester-fw99v": Phase="Pending", Reason="", readiness=false. Elapsed: 31.989536ms
Sep  7 06:00:24.712: INFO: Pod "azurefile-volume-tester-fw99v": Phase="Running", Reason="", readiness=false. Elapsed: 2.065384612s
Sep  7 06:00:26.747: INFO: Pod "azurefile-volume-tester-fw99v": Phase="Failed", Reason="", readiness=false. Elapsed: 4.101233479s
STEP: Saw pod failure
Sep  7 06:00:26.748: INFO: Pod "azurefile-volume-tester-fw99v" satisfied condition "Error status code"
STEP: checking that pod logs contain expected message
Sep  7 06:00:26.782: INFO: deleting Pod "azurefile-156"/"azurefile-volume-tester-fw99v"
Sep  7 06:00:26.816: INFO: Pod azurefile-volume-tester-fw99v has the following logs: touch: /mnt/test-1/data: Read-only file system

STEP: Deleting pod azurefile-volume-tester-fw99v in namespace azurefile-156
Sep  7 06:00:26.857: INFO: deleting PVC "azurefile-156"/"pvc-b4qw7"
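This second case is a negative test: the pod mounts a read-only Azure File volume, its `touch` command fails, and the suite then verifies both the `Failed` phase and the expected message in the pod logs. A hedged sketch of that assertion (the helper name is illustrative, not the suite's API):

```python
def check_expected_failure(final_phase, logs, expected_message):
    """Return True when the pod failed AND its logs contain the expected error text."""
    return final_phase == "Failed" and expected_message in logs

# Values taken from the log lines above.
ok = check_expected_failure(
    "Failed",
    "touch: /mnt/test-1/data: Read-only file system",
    "Read-only file system",
)
print(ok)  # True
```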
... skipping 184 lines ...
Sep  7 06:02:25.998: INFO: PersistentVolumeClaim pvc-cnzvv found but phase is Pending instead of Bound.
Sep  7 06:02:28.029: INFO: PersistentVolumeClaim pvc-cnzvv found and phase=Bound (2.062505883s)
STEP: checking the PVC
STEP: validating provisioned PV
STEP: checking the PV
STEP: deploying the pod
STEP: checking that the pods command exits with no error
Sep  7 06:02:28.128: INFO: Waiting up to 15m0s for pod "azurefile-volume-tester-4dkdd" in namespace "azurefile-2546" to be "Succeeded or Failed"
Sep  7 06:02:28.162: INFO: Pod "azurefile-volume-tester-4dkdd": Phase="Pending", Reason="", readiness=false. Elapsed: 33.833799ms
Sep  7 06:02:30.197: INFO: Pod "azurefile-volume-tester-4dkdd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068806592s
Sep  7 06:02:32.230: INFO: Pod "azurefile-volume-tester-4dkdd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.101744866s
STEP: Saw pod success
Sep  7 06:02:32.230: INFO: Pod "azurefile-volume-tester-4dkdd" satisfied condition "Succeeded or Failed"
STEP: resizing the pvc
STEP: sleep 30s waiting for resize complete
STEP: checking the resizing result
STEP: checking the resizing PV result
STEP: checking the resizing azurefile result
Sep  7 06:03:02.984: INFO: deleting Pod "azurefile-2546"/"azurefile-volume-tester-4dkdd"
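The resize steps above ("checking the resizing PVC/PV/azurefile result") amount to parsing Kubernetes resource quantities and confirming the reported capacity meets the new request. A minimal sketch under that assumption; the parser below handles only binary suffixes and is not the real `resource.Quantity` implementation:

```python
_BINARY = {"Ki": 2**10, "Mi": 2**20, "Gi": 2**30, "Ti": 2**40}

def parse_quantity(q):
    """Convert strings like '10Gi' to an integer number of bytes."""
    for suffix, factor in _BINARY.items():
        if q.endswith(suffix):
            return int(q[: -len(suffix)]) * factor
    return int(q)  # plain byte count, no suffix

def resize_complete(requested, pvc_status_capacity, pv_capacity):
    """Resize is done once both the PVC status and the PV report >= the request."""
    need = parse_quantity(requested)
    return (parse_quantity(pvc_status_capacity) >= need
            and parse_quantity(pv_capacity) >= need)

print(resize_complete("15Gi", "15Gi", "15Gi"))  # True
print(resize_complete("15Gi", "10Gi", "15Gi"))  # False
```

The 30s sleep in the log exists because file-share expansion is asynchronous on the Azure side; the capacity check only passes after the backend reports the new size.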
... skipping 732 lines ...
I0907 05:52:19.434327       1 tlsconfig.go:200] "Loaded serving cert" certName="Generated self signed cert" certDetail="\"localhost@1662529938\" [serving] validServingFor=[127.0.0.1,127.0.0.1,localhost] issuer=\"localhost-ca@1662529938\" (2022-09-07 04:52:17 +0000 UTC to 2023-09-07 04:52:17 +0000 UTC (now=2022-09-07 05:52:19.434298974 +0000 UTC))"
I0907 05:52:19.434586       1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1662529939\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1662529939\" (2022-09-07 04:52:18 +0000 UTC to 2023-09-07 04:52:18 +0000 UTC (now=2022-09-07 05:52:19.434558975 +0000 UTC))"
I0907 05:52:19.434632       1 secure_serving.go:210] Serving securely on 127.0.0.1:10257
I0907 05:52:19.434966       1 leaderelection.go:248] attempting to acquire leader lease kube-system/kube-controller-manager...
I0907 05:52:19.435548       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
I0907 05:52:19.435896       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
E0907 05:52:22.527858       1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: leases.coordination.k8s.io "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-system"
I0907 05:52:22.528081       1 leaderelection.go:253] failed to acquire lease kube-system/kube-controller-manager
E0907 05:52:25.364152       1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: leases.coordination.k8s.io "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-system"
I0907 05:52:25.364328       1 leaderelection.go:253] failed to acquire lease kube-system/kube-controller-manager
I0907 05:52:29.589745       1 leaderelection.go:258] successfully acquired lease kube-system/kube-controller-manager
I0907 05:52:29.590918       1 event.go:294] "Event occurred" object="kube-system/kube-controller-manager" fieldPath="" kind="Lease" apiVersion="coordination.k8s.io/v1" type="Normal" reason="LeaderElection" message="capz-tw80t5-control-plane-rfj2h_a1e0ae6b-7ddf-45d6-8d2c-5dbb0b560d12 became leader"
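The sequence above shows the standard leader-election pattern: acquisition fails while RBAC is still settling (the "forbidden" errors), the client retries on an interval, and eventually acquires the Lease and emits a LeaderElection event. A hedged sketch of that retry loop; `try_acquire` stands in for the real coordination.k8s.io Lease update and the cadence is illustrative:

```python
def acquire_lease(try_acquire, retry_period=2.0, max_attempts=10, sleep=lambda s: None):
    """Retry lease acquisition until it succeeds or attempts are exhausted."""
    for attempt in range(1, max_attempts + 1):
        if try_acquire():
            return attempt          # became leader on this attempt
        sleep(retry_period)         # failed to acquire; wait and retry
    raise RuntimeError("could not acquire leader lease")

# Simulate the log: two failed attempts (RBAC not ready), then success.
outcomes = iter([False, False, True])
print(acquire_lease(lambda: next(outcomes)))  # 3
```

In client-go the equivalent knobs are the lease duration, renew deadline, and retry period on `leaderelection.LeaderElectionConfig`.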
I0907 05:52:29.839298       1 request.go:533] Waited for 93.382494ms due to client-side throttling, not priority and fairness, request: GET:https://10.0.0.4:6443/apis/flowcontrol.apiserver.k8s.io/v1beta2
W0907 05:52:29.841461       1 plugins.go:132] WARNING: azure built-in cloud provider is now deprecated. The Azure provider is deprecated and will be removed in a future release. Please use https://github.com/kubernetes-sigs/cloud-provider-azure
I0907 05:52:29.842277       1 azure_auth.go:232] Using AzurePublicCloud environment
I0907 05:52:29.842377       1 azure_auth.go:117] azure: using client_id+client_secret to retrieve access token
... skipping 30 lines ...
I0907 05:52:29.845332       1 reflector.go:255] Listing and watching *v1.Node from vendor/k8s.io/client-go/informers/factory.go:134
I0907 05:52:29.845544       1 reflector.go:219] Starting reflector *v1.ServiceAccount (22h14m2.170943067s) from vendor/k8s.io/client-go/informers/factory.go:134
I0907 05:52:29.845637       1 reflector.go:255] Listing and watching *v1.ServiceAccount from vendor/k8s.io/client-go/informers/factory.go:134
I0907 05:52:29.846016       1 reflector.go:219] Starting reflector *v1.Secret (22h14m2.170943067s) from vendor/k8s.io/client-go/informers/factory.go:134
I0907 05:52:29.846042       1 reflector.go:255] Listing and watching *v1.Secret from vendor/k8s.io/client-go/informers/factory.go:134
I0907 05:52:29.846291       1 shared_informer.go:255] Waiting for caches to sync for tokens
W0907 05:52:29.870903       1 azure_config.go:53] Failed to get cloud-config from secret: failed to get secret azure-cloud-provider: secrets "azure-cloud-provider" is forbidden: User "system:serviceaccount:kube-system:azure-cloud-provider" cannot get resource "secrets" in API group "" in the namespace "kube-system", skip initializing from secret
I0907 05:52:29.871118       1 controllermanager.go:564] Starting "attachdetach"
I0907 05:52:29.878173       1 plugins.go:634] "Loaded volume plugin" pluginName="kubernetes.io/rbd"
I0907 05:52:29.878203       1 plugins.go:634] "Loaded volume plugin" pluginName="kubernetes.io/aws-ebs"
I0907 05:52:29.878219       1 plugins.go:634] "Loaded volume plugin" pluginName="kubernetes.io/gce-pd"
I0907 05:52:29.878232       1 plugins.go:634] "Loaded volume plugin" pluginName="kubernetes.io/cinder"
I0907 05:52:29.878247       1 plugins.go:634] "Loaded volume plugin" pluginName="kubernetes.io/azure-disk"
I0907 05:52:29.878262       1 plugins.go:634] "Loaded volume plugin" pluginName="kubernetes.io/vsphere-volume"
I0907 05:52:29.878290       1 plugins.go:634] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
I0907 05:52:29.878446       1 plugins.go:634] "Loaded volume plugin" pluginName="kubernetes.io/storageos"
I0907 05:52:29.878481       1 plugins.go:634] "Loaded volume plugin" pluginName="kubernetes.io/fc"
I0907 05:52:29.878497       1 plugins.go:634] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
I0907 05:52:29.878567       1 csi_plugin.go:262] Cast from VolumeHost to KubeletVolumeHost failed. Skipping CSINode initialization, not running on kubelet
I0907 05:52:29.878613       1 plugins.go:634] "Loaded volume plugin" pluginName="kubernetes.io/csi"
I0907 05:52:29.879319       1 controllermanager.go:593] Started "attachdetach"
I0907 05:52:29.879346       1 controllermanager.go:564] Starting "podgc"
I0907 05:52:29.879524       1 attach_detach_controller.go:328] Starting attach detach controller
I0907 05:52:29.879777       1 shared_informer.go:255] Waiting for caches to sync for attach detach
I0907 05:52:29.887768       1 controllermanager.go:593] Started "podgc"
... skipping 185 lines ...
I0907 05:52:32.499831       1 plugins.go:634] "Loaded volume plugin" pluginName="kubernetes.io/azure-disk"
I0907 05:52:32.499849       1 plugins.go:634] "Loaded volume plugin" pluginName="kubernetes.io/vsphere-volume"
I0907 05:52:32.499863       1 plugins.go:634] "Loaded volume plugin" pluginName="kubernetes.io/azure-file"
I0907 05:52:32.499880       1 plugins.go:634] "Loaded volume plugin" pluginName="kubernetes.io/flocker"
I0907 05:52:32.499909       1 plugins.go:634] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
I0907 05:52:32.499932       1 plugins.go:634] "Loaded volume plugin" pluginName="kubernetes.io/storageos"
I0907 05:52:32.499957       1 csi_plugin.go:262] Cast from VolumeHost to KubeletVolumeHost failed. Skipping CSINode initialization, not running on kubelet
I0907 05:52:32.500288       1 plugins.go:634] "Loaded volume plugin" pluginName="kubernetes.io/csi"
I0907 05:52:32.500435       1 controllermanager.go:593] Started "persistentvolume-binder"
I0907 05:52:32.500511       1 controllermanager.go:564] Starting "clusterrole-aggregation"
I0907 05:52:32.500609       1 pv_controller_base.go:311] Starting persistent volume controller
I0907 05:52:32.500623       1 shared_informer.go:255] Waiting for caches to sync for persistent volume
I0907 05:52:32.649774       1 controllermanager.go:593] Started "clusterrole-aggregation"
... skipping 22 lines ...
I0907 05:52:33.249628       1 shared_informer.go:255] Waiting for caches to sync for crt configmap
I0907 05:52:33.401055       1 controllermanager.go:593] Started "endpoint"
I0907 05:52:33.401091       1 controllermanager.go:564] Starting "horizontalpodautoscaling"
I0907 05:52:33.401390       1 endpoints_controller.go:178] Starting endpoint controller
I0907 05:52:33.401417       1 shared_informer.go:255] Waiting for caches to sync for endpoint
I0907 05:52:33.683762       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-tw80t5-control-plane-rfj2h"
W0907 05:52:33.684452       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="capz-tw80t5-control-plane-rfj2h" does not exist
I0907 05:52:33.684333       1 topologycache.go:183] Ignoring node capz-tw80t5-control-plane-rfj2h because it is not ready: [{MemoryPressure False 2022-09-07 05:52:13 +0000 UTC 2022-09-07 05:52:13 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2022-09-07 05:52:13 +0000 UTC 2022-09-07 05:52:13 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2022-09-07 05:52:13 +0000 UTC 2022-09-07 05:52:13 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready False 2022-09-07 05:52:13 +0000 UTC 2022-09-07 05:52:13 +0000 UTC KubeletNotReady [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful, container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized, CSINode is not yet initialized, missing node capacity for resources: ephemeral-storage]}]
I0907 05:52:33.684842       1 topologycache.go:215] Insufficient node info for topology hints (0 zones, %!s(int64=0) CPU, true)
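The "Ignoring node ... because it is not ready" decision above reduces to inspecting the node's conditions list: a node counts as ready only when a `Ready` condition exists with status `"True"`. A sketch of that check (the dict shape mirrors `v1.NodeCondition`; the helper itself is illustrative):

```python
def node_is_ready(conditions):
    """Return True iff the Ready condition exists and its status is 'True'."""
    for cond in conditions:
        if cond["type"] == "Ready":
            return cond["status"] == "True"
    return False  # no Ready condition reported at all

# Mirrors the log: pressure conditions are fine, but Ready is False (KubeletNotReady).
conds = [
    {"type": "MemoryPressure", "status": "False"},
    {"type": "DiskPressure", "status": "False"},
    {"type": "Ready", "status": "False"},
]
print(node_is_ready(conds))  # False
```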
I0907 05:52:33.708044       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-tw80t5-control-plane-rfj2h"
I0907 05:52:33.716860       1 controllermanager.go:593] Started "horizontalpodautoscaling"
I0907 05:52:33.717271       1 controllermanager.go:564] Starting "csrcleaner"
I0907 05:52:33.718930       1 horizontal.go:168] Starting HPA controller
I0907 05:52:33.719229       1 shared_informer.go:255] Waiting for caches to sync for HPA
... skipping 347 lines ...
I0907 05:52:35.318275       1 deployment_controller.go:176] "Updating deployment" deployment="kube-system/coredns"
I0907 05:52:35.318465       1 endpoints_controller.go:365] Finished syncing service "kube-system/kube-dns" endpoints. (51.889604ms)
I0907 05:52:35.318553       1 deployment_util.go:774] Deployment "coredns" timed out (false) [last progress check: 2022-09-07 05:52:35.298789518 +0000 UTC m=+17.871717226 - now: 2022-09-07 05:52:35.31854501 +0000 UTC m=+17.891472818]
I0907 05:52:35.330638       1 azure_backoff.go:110] VirtualMachinesClient.List(capz-tw80t5) success
I0907 05:52:35.332613       1 daemon_controller.go:226] Adding daemon set kube-proxy
I0907 05:52:35.343236       1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/coredns" duration="122.096486ms"
I0907 05:52:35.343308       1 deployment_controller.go:490] "Error syncing deployment" deployment="kube-system/coredns" err="Operation cannot be fulfilled on deployments.apps \"coredns\": the object has been modified; please apply your changes to the latest version and try again"
I0907 05:52:35.343352       1 deployment_controller.go:576] "Started syncing deployment" deployment="kube-system/coredns" startTime="2022-09-07 05:52:35.34333315 +0000 UTC m=+17.916260858"
I0907 05:52:35.344261       1 deployment_util.go:774] Deployment "coredns" timed out (false) [last progress check: 2022-09-07 05:52:35 +0000 UTC - now: 2022-09-07 05:52:35.344253259 +0000 UTC m=+17.917180967]
I0907 05:52:35.387915       1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/coredns" duration="44.558333ms"
I0907 05:52:35.387973       1 deployment_controller.go:576] "Started syncing deployment" deployment="kube-system/coredns" startTime="2022-09-07 05:52:35.387952984 +0000 UTC m=+17.960880792"
I0907 05:52:35.388582       1 disruption.go:415] addPod called on pod "coredns-6d4b75cb6d-7npfl"
I0907 05:52:35.388665       1 disruption.go:490] No PodDisruptionBudgets found for pod coredns-6d4b75cb6d-7npfl, PodDisruptionBudget controller will avoid syncing.
... skipping 9 lines ...
I0907 05:52:35.392392       1 controller_utils.go:240] Lowered expectations &controller.ControlleeExpectations{add:1, del:0, key:"kube-system/coredns-6d4b75cb6d", timestamp:time.Time{wall:0xc0be2a48d1a2f1bd, ext:17868819097, loc:(*time.Location)(0x7249da0)}}
I0907 05:52:35.407923       1 azure_instances.go:240] InstanceShutdownByProviderID gets power status "running" for node "capz-tw80t5-control-plane-rfj2h"
I0907 05:52:35.407964       1 azure_instances.go:251] InstanceShutdownByProviderID gets provisioning state "Updating" for node "capz-tw80t5-control-plane-rfj2h"
I0907 05:52:35.408198       1 endpointslice_controller.go:315] Finished syncing service "kube-system/kube-dns" endpoint slices. (142.013579ms)
I0907 05:52:35.408324       1 endpointslice_controller.go:315] Finished syncing service "kube-system/kube-dns" endpoint slices. (99.801µs)
I0907 05:52:35.408416       1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/coredns" duration="20.451798ms"
I0907 05:52:35.408438       1 deployment_controller.go:490] "Error syncing deployment" deployment="kube-system/coredns" err="Operation cannot be fulfilled on deployments.apps \"coredns\": the object has been modified; please apply your changes to the latest version and try again"
I0907 05:52:35.408471       1 deployment_controller.go:576] "Started syncing deployment" deployment="kube-system/coredns" startTime="2022-09-07 05:52:35.408455983 +0000 UTC m=+17.981383691"
I0907 05:52:35.410181       1 deployment_util.go:774] Deployment "coredns" timed out (false) [last progress check: 2022-09-07 05:52:35 +0000 UTC - now: 2022-09-07 05:52:35.4101699 +0000 UTC m=+17.983097608]
I0907 05:52:35.410281       1 progress.go:195] Queueing up deployment "coredns" for a progress check after 599s
I0907 05:52:35.410359       1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/coredns" duration="1.893918ms"
I0907 05:52:35.413760       1 deployment_controller.go:576] "Started syncing deployment" deployment="kube-system/coredns" startTime="2022-09-07 05:52:35.413717734 +0000 UTC m=+17.986645542"
I0907 05:52:35.415197       1 deployment_util.go:774] Deployment "coredns" timed out (false) [last progress check: 2022-09-07 05:52:35 +0000 UTC - now: 2022-09-07 05:52:35.415167948 +0000 UTC m=+17.988095656]
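The repeated "Operation cannot be fulfilled ... the object has been modified" errors above are Kubernetes optimistic concurrency working as intended: an update carrying a stale `resourceVersion` is rejected, and the controller re-reads the object and retries. A minimal sketch of that read-modify-write loop against a toy in-memory "server" (the `Conflict` exception and store are illustrative, not client library types):

```python
class Conflict(Exception):
    pass

def update_with_retry(read, write, mutate, max_retries=5):
    """Read-modify-write loop that re-reads the object after each version conflict."""
    for _ in range(max_retries):
        obj = read()                    # fetch the latest resourceVersion
        try:
            return write(mutate(obj))   # server rejects stale versions
        except Conflict:
            continue                    # someone else updated it; re-read and retry
    raise RuntimeError("giving up after repeated conflicts")

# Tiny in-memory server enforcing resourceVersion checks.
server = {"rv": 1, "replicas": 1}

def read():
    return dict(server)

def write(obj):
    if obj["rv"] != server["rv"]:
        raise Conflict()
    server.update(rv=server["rv"] + 1, replicas=obj["replicas"])
    return dict(server)

print(update_with_retry(read, write, lambda o: {**o, "replicas": 3})["replicas"])  # 3
```

client-go packages this exact pattern as `retry.RetryOnConflict`, which is why these log lines are noise rather than failures: the subsequent sync succeeds.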
... skipping 250 lines ...
I0907 05:52:41.042083       1 disruption.go:427] updatePod called on pod "metrics-server-7d674f87b8-jd8w9"
I0907 05:52:41.042226       1 disruption.go:490] No PodDisruptionBudgets found for pod metrics-server-7d674f87b8-jd8w9, PodDisruptionBudget controller will avoid syncing.
I0907 05:52:41.042625       1 disruption.go:430] No matching pdb for pod "metrics-server-7d674f87b8-jd8w9"
I0907 05:52:41.042428       1 pvc_protection_controller.go:380] "Enqueuing PVCs for Pod" pod="kube-system/metrics-server-7d674f87b8-jd8w9" podUID=69e6db3d-e7e4-4baf-a81c-e4aa887f0922
I0907 05:52:41.042453       1 replica_set.go:457] Pod metrics-server-7d674f87b8-jd8w9 updated, objectMeta {Name:metrics-server-7d674f87b8-jd8w9 GenerateName:metrics-server-7d674f87b8- Namespace:kube-system SelfLink: UID:69e6db3d-e7e4-4baf-a81c-e4aa887f0922 ResourceVersion:401 Generation:0 CreationTimestamp:2022-09-07 05:52:40 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[k8s-app:metrics-server pod-template-hash:7d674f87b8] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:metrics-server-7d674f87b8 UID:a9d1d91d-8d0b-4cd7-a05c-0c1b3ac6c797 Controller:0xc001f64dc7 BlockOwnerDeletion:0xc001f64dc8}] Finalizers:[] ZZZ_DeprecatedClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-09-07 05:52:40 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a9d1d91d-8d0b-4cd7-a05c-0c1b3ac6c797\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"metrics-server\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":4443,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}}},"f:readinessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{".":{},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:securityContext":{".":{},"f:readOnlyRootFilesystem":{},"f:runAsNonRoot":{},"f:runAsUser":{}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/tmp\"}":{".":{},"f:mountPath":{},"f:name":{}}}}},"f:dnsPolicy"
:{},"f:enableServiceLinks":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{},"f:volumes":{".":{},"k:{\"name\":\"tmp-dir\"}":{".":{},"f:emptyDir":{},"f:name":{}}}}} Subresource:}]} -> {Name:metrics-server-7d674f87b8-jd8w9 GenerateName:metrics-server-7d674f87b8- Namespace:kube-system SelfLink: UID:69e6db3d-e7e4-4baf-a81c-e4aa887f0922 ResourceVersion:408 Generation:0 CreationTimestamp:2022-09-07 05:52:40 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[k8s-app:metrics-server pod-template-hash:7d674f87b8] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:metrics-server-7d674f87b8 UID:a9d1d91d-8d0b-4cd7-a05c-0c1b3ac6c797 Controller:0xc001ed81be BlockOwnerDeletion:0xc001ed81bf}] Finalizers:[] ZZZ_DeprecatedClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-09-07 05:52:40 +0000 UTC FieldsType:FieldsV1 
FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a9d1d91d-8d0b-4cd7-a05c-0c1b3ac6c797\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"metrics-server\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":4443,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}}},"f:readinessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{".":{},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:securityContext":{".":{},"f:readOnlyRootFilesystem":{},"f:runAsNonRoot":{},"f:runAsUser":{}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/tmp\"}":{".":{},"f:mountPath":{},"f:name":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{},"f:volumes":{".":{},"k:{\"name\":\"tmp-dir\"}":{".":{},"f:emptyDir":{},"f:name":{}}}}} Subresource:} {Manager:kube-scheduler Operation:Update APIVersion:v1 Time:2022-09-07 05:52:41 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{".":{},"k:{\"type\":\"PodScheduled\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} Subresource:status}]}.
I0907 05:52:41.057063       1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/metrics-server" duration="167.134517ms"
I0907 05:52:41.057571       1 deployment_controller.go:490] "Error syncing deployment" deployment="kube-system/metrics-server" err="Operation cannot be fulfilled on deployments.apps \"metrics-server\": the object has been modified; please apply your changes to the latest version and try again"
I0907 05:52:41.057769       1 deployment_controller.go:576] "Started syncing deployment" deployment="kube-system/metrics-server" startTime="2022-09-07 05:52:41.057749954 +0000 UTC m=+23.630677662"
I0907 05:52:41.057866       1 deployment_controller.go:281] "ReplicaSet updated" replicaSet="kube-system/metrics-server-7d674f87b8"
I0907 05:52:41.065312       1 replica_set.go:667] Finished syncing ReplicaSet "kube-system/metrics-server-7d674f87b8" (23.829591ms)
I0907 05:52:41.065522       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/metrics-server-7d674f87b8", timestamp:time.Time{wall:0xc0be2a4a36c4f3c1, ext:23491804929, loc:(*time.Location)(0x7249da0)}}
I0907 05:52:41.065923       1 replica_set.go:667] Finished syncing ReplicaSet "kube-system/metrics-server-7d674f87b8" (398.199µs)
I0907 05:52:41.130974       1 deployment_controller.go:176] "Updating deployment" deployment="kube-system/metrics-server"
... skipping 34 lines ...
I0907 05:52:42.493277       1 pvc_protection_controller.go:380] "Enqueuing PVCs for Pod" pod="kube-system/calico-kube-controllers-7867496574-9r98j" podUID=14098246-a638-4434-8a56-712972856071
I0907 05:52:42.493293       1 replica_set.go:394] Pod calico-kube-controllers-7867496574-9r98j created: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"calico-kube-controllers-7867496574-9r98j", GenerateName:"calico-kube-controllers-7867496574-", Namespace:"kube-system", SelfLink:"", UID:"14098246-a638-4434-8a56-712972856071", ResourceVersion:"469", Generation:0, CreationTimestamp:time.Date(2022, time.September, 7, 5, 52, 42, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"calico-kube-controllers", "pod-template-hash":"7867496574"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"apps/v1", Kind:"ReplicaSet", Name:"calico-kube-controllers-7867496574", UID:"5d1386bb-696a-4da4-822a-410697d61d0f", Controller:(*bool)(0xc0020b662e), BlockOwnerDeletion:(*bool)(0xc0020b662f)}}, Finalizers:[]string(nil), ZZZ_DeprecatedClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.September, 7, 5, 52, 42, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0020b2b58), Subresource:""}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-api-access-mldtg", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), 
Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(0xc000e510a0), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"calico-kube-controllers", Image:"docker.io/calico/kube-controllers:v3.23.0", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ENABLED_CONTROLLERS", Value:"node", ValueFrom:(*v1.EnvVarSource)(nil)}, v1.EnvVar{Name:"DATASTORE_TYPE", Value:"kubernetes", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-mldtg", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(0xc0020a8980), ReadinessProbe:(*v1.Probe)(0xc0020a89c0), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0020b66e0), 
ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"calico-kube-controllers", DeprecatedServiceAccount:"calico-kube-controllers", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000b41650), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node-role.kubernetes.io/master", Operator:"", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node-role.kubernetes.io/control-plane", Operator:"", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0020b6740)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0020b6760)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-cluster-critical", Priority:(*int32)(0xc0020b6768), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0020b676c), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc00220d2e0), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil), OS:(*v1.PodOS)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition(nil), Message:"", Reason:"", NominatedNodeName:"", HostIP:"", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:<nil>, InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus(nil), 
QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}.
I0907 05:52:42.495261       1 controller_utils.go:240] Lowered expectations &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-kube-controllers-7867496574", timestamp:time.Time{wall:0xc0be2a4a9c39900b, ext:25046462283, loc:(*time.Location)(0x7249da0)}}
I0907 05:52:42.493149       1 taint_manager.go:401] "Noticed pod update" pod="kube-system/calico-kube-controllers-7867496574-9r98j"
I0907 05:52:42.494285       1 event.go:294] "Event occurred" object="kube-system/calico-kube-controllers-7867496574" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: calico-kube-controllers-7867496574-9r98j"
I0907 05:52:42.505252       1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/calico-kube-controllers" duration="46.180989ms"
I0907 05:52:42.506255       1 deployment_controller.go:490] "Error syncing deployment" deployment="kube-system/calico-kube-controllers" err="Operation cannot be fulfilled on deployments.apps \"calico-kube-controllers\": the object has been modified; please apply your changes to the latest version and try again"
I0907 05:52:42.506572       1 deployment_controller.go:576] "Started syncing deployment" deployment="kube-system/calico-kube-controllers" startTime="2022-09-07 05:52:42.506551067 +0000 UTC m=+25.079478775"
I0907 05:52:42.507342       1 deployment_util.go:774] Deployment "calico-kube-controllers" timed out (false) [last progress check: 2022-09-07 05:52:42 +0000 UTC - now: 2022-09-07 05:52:42.507316767 +0000 UTC m=+25.080244475]
I0907 05:52:42.511795       1 disruption.go:384] add DB "calico-kube-controllers"
I0907 05:52:42.545595       1 replica_set.go:667] Finished syncing ReplicaSet "kube-system/calico-kube-controllers-7867496574" (72.442283ms)
I0907 05:52:42.545889       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-kube-controllers-7867496574", timestamp:time.Time{wall:0xc0be2a4a9c39900b, ext:25046462283, loc:(*time.Location)(0x7249da0)}}
I0907 05:52:42.546162       1 replica_set_utils.go:59] Updating status for : kube-system/calico-kube-controllers-7867496574, replicas 0->1 (need 1), fullyLabeledReplicas 0->1, readyReplicas 0->0, availableReplicas 0->0, sequence No: 0->1
... skipping 29 lines ...
I0907 05:52:42.694297       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc0be2a4aa9620987, ext:25267218531, loc:(*time.Location)(0x7249da0)}}
I0907 05:52:42.694332       1 daemon_controller.go:967] Nodes needing daemon pods for daemon set calico-node: [], creating 0
I0907 05:52:42.694485       1 daemon_controller.go:1029] Pods to delete for daemon set calico-node: [], deleting 0
I0907 05:52:42.694561       1 daemon_controller.go:1112] Updating daemon set status
I0907 05:52:42.695416       1 event.go:294] "Event occurred" object="kube-system/calico-node" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: calico-node-vj7tp"
I0907 05:52:42.737274       1 disruption.go:558] Finished syncing PodDisruptionBudget "kube-system/calico-kube-controllers" (110.912774ms)
E0907 05:52:42.737341       1 disruption.go:534] Error syncing PodDisruptionBudget kube-system/calico-kube-controllers, requeuing: Operation cannot be fulfilled on poddisruptionbudgets.policy "calico-kube-controllers": the object has been modified; please apply your changes to the latest version and try again
I0907 05:52:42.737603       1 disruption.go:558] Finished syncing PodDisruptionBudget "kube-system/calico-kube-controllers" (216.3µs)
I0907 05:52:42.745102       1 disruption.go:558] Finished syncing PodDisruptionBudget "kube-system/calico-kube-controllers" (61.8µs)
I0907 05:52:42.760179       1 deployment_controller.go:281] "ReplicaSet updated" replicaSet="kube-system/calico-kube-controllers-7867496574"
I0907 05:52:42.760406       1 deployment_controller.go:576] "Started syncing deployment" deployment="kube-system/calico-kube-controllers" startTime="2022-09-07 05:52:42.760300808 +0000 UTC m=+25.333228516"
I0907 05:52:42.760902       1 daemon_controller.go:247] Updating daemon set calico-node
I0907 05:52:42.761368       1 replica_set.go:667] Finished syncing ReplicaSet "kube-system/calico-kube-controllers-7867496574" (215.481849ms)
... skipping 225 lines ...
I0907 05:53:04.577691       1 resource_quota_monitor.go:298] quota monitor not synced: crd.projectcalico.org/v1, Resource=networksets
I0907 05:53:04.685486       1 resource_quota_monitor.go:298] quota monitor not synced: crd.projectcalico.org/v1, Resource=networkpolicies
I0907 05:53:04.754721       1 tokencleaner.go:166] Finished syncing secret "kube-system/bootstrap-token-08xdz1" (18.9µs)
I0907 05:53:04.778527       1 shared_informer.go:285] caches populated
I0907 05:53:04.778595       1 shared_informer.go:262] Caches are synced for resource quota
I0907 05:53:04.778609       1 resource_quota_controller.go:458] synced quota controller
W0907 05:53:05.034861       1 garbagecollector.go:747] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
I0907 05:53:05.035180       1 garbagecollector.go:215] syncing garbage collector with updated resources from discovery (attempt 1): added: [crd.projectcalico.org/v1, Resource=bgpconfigurations crd.projectcalico.org/v1, Resource=bgppeers crd.projectcalico.org/v1, Resource=blockaffinities crd.projectcalico.org/v1, Resource=caliconodestatuses crd.projectcalico.org/v1, Resource=clusterinformations crd.projectcalico.org/v1, Resource=felixconfigurations crd.projectcalico.org/v1, Resource=globalnetworkpolicies crd.projectcalico.org/v1, Resource=globalnetworksets crd.projectcalico.org/v1, Resource=hostendpoints crd.projectcalico.org/v1, Resource=ipamblocks crd.projectcalico.org/v1, Resource=ipamconfigs crd.projectcalico.org/v1, Resource=ipamhandles crd.projectcalico.org/v1, Resource=ippools crd.projectcalico.org/v1, Resource=ipreservations crd.projectcalico.org/v1, Resource=kubecontrollersconfigurations crd.projectcalico.org/v1, Resource=networkpolicies crd.projectcalico.org/v1, Resource=networksets], removed: []
I0907 05:53:05.035198       1 garbagecollector.go:221] reset restmapper
E0907 05:53:05.063315       1 memcache.go:206] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0907 05:53:05.068183       1 memcache.go:104] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0907 05:53:05.069739       1 graph_builder.go:174] using a shared informer for resource "crd.projectcalico.org/v1, Resource=kubecontrollersconfigurations", kind "crd.projectcalico.org/v1, Kind=KubeControllersConfiguration"
I0907 05:53:05.069829       1 graph_builder.go:174] using a shared informer for resource "crd.projectcalico.org/v1, Resource=caliconodestatuses", kind "crd.projectcalico.org/v1, Kind=CalicoNodeStatus"
... skipping 128 lines ...
I0907 05:53:06.926668       1 disruption.go:490] No PodDisruptionBudgets found for pod coredns-6d4b75cb6d-7npfl, PodDisruptionBudget controller will avoid syncing.
I0907 05:53:06.926695       1 disruption.go:430] No matching pdb for pod "coredns-6d4b75cb6d-7npfl"
I0907 05:53:06.926814       1 replica_set.go:457] Pod coredns-6d4b75cb6d-7npfl updated, objectMeta {Name:coredns-6d4b75cb6d-7npfl GenerateName:coredns-6d4b75cb6d- Namespace:kube-system SelfLink: UID:7b4c3c83-927e-418d-9a87-525cc1827b18 ResourceVersion:558 Generation:0 CreationTimestamp:2022-09-07 05:52:35 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[k8s-app:kube-dns pod-template-hash:6d4b75cb6d] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:coredns-6d4b75cb6d UID:508105eb-4d53-45f6-acbc-49043a0dc97b Controller:0xc00215e587 BlockOwnerDeletion:0xc00215e588}] Finalizers:[] ZZZ_DeprecatedClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-09-07 05:52:35 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"508105eb-4d53-45f6-acbc-49043a0dc97b\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":53,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}},"k:{\"containerPort\":53,\"protocol\":\"UDP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}},"k:{\"containerPort\":9153,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}}},"f:readinessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:securityContext":{".":{},"f:allowPrivilegeEscalation":{},"
f:capabilities":{".":{},"f:add":{},"f:drop":{}},"f:readOnlyRootFilesystem":{}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/etc/coredns\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{},"f:volumes":{".":{},"k:{\"name\":\"config-volume\"}":{".":{},"f:configMap":{".":{},"f:defaultMode":{},"f:items":{},"f:name":{}},"f:name":{}}}}} Subresource:} {Manager:kube-scheduler Operation:Update APIVersion:v1 Time:2022-09-07 05:52:35 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{".":{},"k:{\"type\":\"PodScheduled\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} Subresource:status}]} -> {Name:coredns-6d4b75cb6d-7npfl GenerateName:coredns-6d4b75cb6d- Namespace:kube-system SelfLink: UID:7b4c3c83-927e-418d-9a87-525cc1827b18 ResourceVersion:567 Generation:0 CreationTimestamp:2022-09-07 05:52:35 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[k8s-app:kube-dns pod-template-hash:6d4b75cb6d] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:coredns-6d4b75cb6d UID:508105eb-4d53-45f6-acbc-49043a0dc97b Controller:0xc000fbbe37 BlockOwnerDeletion:0xc000fbbe38}] Finalizers:[] ZZZ_DeprecatedClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-09-07 05:52:35 +0000 UTC FieldsType:FieldsV1 
FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"508105eb-4d53-45f6-acbc-49043a0dc97b\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":53,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}},"k:{\"containerPort\":53,\"protocol\":\"UDP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}},"k:{\"containerPort\":9153,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}}},"f:readinessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:securityContext":{".":{},"f:allowPrivilegeEscalation":{},"f:capabilities":{".":{},"f:add":{},"f:drop":{}},"f:readOnlyRootFilesystem":{}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/etc/coredns\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{},"f:volumes":{".":{},"k:{\"name\":\"config-volume\"}":{".":{},"f:configMap":{".":{},"f:defaultMode":{},"f:items":{},"f:name":{}},"f:name":{}}}}} Subresource:} {Manager:kube-scheduler Operation:Update APIVersion:v1 Time:2022-09-07 05:52:35 +0000 UTC FieldsType:FieldsV1 
FieldsV1:{"f:status":{"f:conditions":{".":{},"k:{\"type\":\"PodScheduled\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} Subresource:status} {Manager:kubelet Operation:Update APIVersion:v1 Time:2022-09-07 05:53:06 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} Subresource:status}]}.
I0907 05:53:06.926985       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/coredns-6d4b75cb6d", timestamp:time.Time{wall:0xc0be2a48d1a2f1bd, ext:17868819097, loc:(*time.Location)(0x7249da0)}}
I0907 05:53:06.927135       1 replica_set.go:667] Finished syncing ReplicaSet "kube-system/coredns-6d4b75cb6d" (154.801µs)
I0907 05:53:07.520389       1 httplog.go:131] "HTTP" verb="GET" URI="/healthz" latency="368.702µs" userAgent="kube-probe/1.24+" audit-ID="" srcIP="127.0.0.1:53774" resp=200
I0907 05:53:09.470780       1 node_lifecycle_controller.go:1040] ReadyCondition for Node capz-tw80t5-control-plane-rfj2h transitioned from &NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:2022-09-07 05:52:46 +0000 UTC,LastTransitionTime:2022-09-07 05:52:13 +0000 UTC,Reason:KubeletNotReady,Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized,} to &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-09-07 05:53:06 +0000 UTC,LastTransitionTime:2022-09-07 05:53:06 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,}
I0907 05:53:09.470872       1 node_lifecycle_controller.go:1048] Node capz-tw80t5-control-plane-rfj2h ReadyCondition updated. Updating timestamp.
I0907 05:53:09.470903       1 node_lifecycle_controller.go:894] Node capz-tw80t5-control-plane-rfj2h is healthy again, removing all taints
I0907 05:53:09.470924       1 node_lifecycle_controller.go:1192] Controller detected that some Nodes are Ready. Exiting master disruption mode.
I0907 05:53:10.105208       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-tw80t5-control-plane-rfj2h"
I0907 05:53:10.143757       1 disruption.go:427] updatePod called on pod "calico-node-vj7tp"
I0907 05:53:10.143846       1 disruption.go:490] No PodDisruptionBudgets found for pod calico-node-vj7tp, PodDisruptionBudget controller will avoid syncing.
... skipping 175 lines ...
I0907 05:53:34.354083       1 reflector.go:382] vendor/k8s.io/client-go/informers/factory.go:134: forcing resync
I0907 05:53:34.391263       1 gc_controller.go:214] GC'ing orphaned
I0907 05:53:34.391299       1 gc_controller.go:277] GC'ing unscheduled pods which are terminating.
I0907 05:53:34.407021       1 pv_controller_base.go:605] resyncing PV controller
E0907 05:53:34.797846       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0907 05:53:34.797944       1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
W0907 05:53:36.136384       1 garbagecollector.go:747] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
I0907 05:53:37.383231       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-tw80t5-control-plane-rfj2h"
I0907 05:53:37.517615       1 httplog.go:131] "HTTP" verb="GET" URI="/healthz" latency="149.601µs" userAgent="kube-probe/1.24+" audit-ID="" srcIP="127.0.0.1:40134" resp=200
I0907 05:53:39.474625       1 node_lifecycle_controller.go:1048] Node capz-tw80t5-control-plane-rfj2h ReadyCondition updated. Updating timestamp.
I0907 05:53:47.344481       1 disruption.go:427] updatePod called on pod "metrics-server-7d674f87b8-jd8w9"
I0907 05:53:47.344565       1 disruption.go:490] No PodDisruptionBudgets found for pod metrics-server-7d674f87b8-jd8w9, PodDisruptionBudget controller will avoid syncing.
I0907 05:53:47.344577       1 disruption.go:430] No matching pdb for pod "metrics-server-7d674f87b8-jd8w9"
... skipping 86 lines ...
I0907 05:54:04.334048       1 reflector.go:382] vendor/k8s.io/client-go/informers/factory.go:134: forcing resync
I0907 05:54:04.355753       1 reflector.go:382] vendor/k8s.io/client-go/informers/factory.go:134: forcing resync
I0907 05:54:04.409254       1 pv_controller_base.go:605] resyncing PV controller
I0907 05:54:04.819366       1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0907 05:54:07.516769       1 httplog.go:131] "HTTP" verb="GET" URI="/healthz" latency="101.3µs" userAgent="kube-probe/1.24+" audit-ID="" srcIP="127.0.0.1:44410" resp=200
I0907 05:54:10.562847       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-tw80t5-md-0-qdmlk"
W0907 05:54:10.563027       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="capz-tw80t5-md-0-qdmlk" does not exist
I0907 05:54:10.563183       1 topologycache.go:179] Ignoring node capz-tw80t5-control-plane-rfj2h because it has an excluded label
I0907 05:54:10.563194       1 taint_manager.go:436] "Noticed node update" node={nodeName:capz-tw80t5-md-0-qdmlk}
I0907 05:54:10.563417       1 taint_manager.go:441] "Updating known taints on node" node="capz-tw80t5-md-0-qdmlk" taints=[]
I0907 05:54:10.563557       1 topologycache.go:183] Ignoring node capz-tw80t5-md-0-qdmlk because it is not ready: [{MemoryPressure False 2022-09-07 05:54:10 +0000 UTC 2022-09-07 05:54:10 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2022-09-07 05:54:10 +0000 UTC 2022-09-07 05:54:10 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2022-09-07 05:54:10 +0000 UTC 2022-09-07 05:54:10 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready False 2022-09-07 05:54:10 +0000 UTC 2022-09-07 05:54:10 +0000 UTC KubeletNotReady [container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized, failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "capz-tw80t5-md-0-qdmlk" not found]}]
I0907 05:54:10.563733       1 topologycache.go:215] Insufficient node info for topology hints (0 zones, %!s(int64=0) CPU, true)
I0907 05:54:10.566949       1 controller.go:697] Ignoring node capz-tw80t5-md-0-qdmlk with Ready condition status False
I0907 05:54:10.567049       1 controller.go:272] Triggering nodeSync
I0907 05:54:10.567136       1 controller.go:291] nodeSync has been triggered
I0907 05:54:10.567246       1 controller.go:792] Running updateLoadBalancerHosts(len(services)==0, workers==1)
I0907 05:54:10.567359       1 controller.go:808] Finished updateLoadBalancerHosts
... skipping 89 lines ...
I0907 05:54:10.703385       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0be2a60a9ec1f8e, ext:113276268238, loc:(*time.Location)(0x7249da0)}}
I0907 05:54:10.703421       1 daemon_controller.go:967] Nodes needing daemon pods for daemon set kube-proxy: [], creating 0
I0907 05:54:10.703570       1 daemon_controller.go:1029] Pods to delete for daemon set kube-proxy: [], deleting 0
I0907 05:54:10.703675       1 daemon_controller.go:1112] Updating daemon set status
I0907 05:54:10.703899       1 daemon_controller.go:1172] Finished syncing daemon set "kube-system/kube-proxy" (2.485609ms)
I0907 05:54:12.210819       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-tw80t5-md-0-pxbfd"
W0907 05:54:12.211213       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="capz-tw80t5-md-0-pxbfd" does not exist
I0907 05:54:12.211483       1 taint_manager.go:436] "Noticed node update" node={nodeName:capz-tw80t5-md-0-pxbfd}
I0907 05:54:12.211742       1 taint_manager.go:441] "Updating known taints on node" node="capz-tw80t5-md-0-pxbfd" taints=[]
I0907 05:54:12.213622       1 topologycache.go:179] Ignoring node capz-tw80t5-control-plane-rfj2h because it has an excluded label
I0907 05:54:12.213655       1 topologycache.go:183] Ignoring node capz-tw80t5-md-0-qdmlk because it is not ready: [{MemoryPressure False 2022-09-07 05:54:10 +0000 UTC 2022-09-07 05:54:10 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2022-09-07 05:54:10 +0000 UTC 2022-09-07 05:54:10 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2022-09-07 05:54:10 +0000 UTC 2022-09-07 05:54:10 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready False 2022-09-07 05:54:10 +0000 UTC 2022-09-07 05:54:10 +0000 UTC KubeletNotReady [container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized, failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "capz-tw80t5-md-0-qdmlk" not found]}]
I0907 05:54:12.214465       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc0be2a60a91fd35b, ext:113262879387, loc:(*time.Location)(0x7249da0)}}
I0907 05:54:12.215058       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:1, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc0be2a610cd16646, ext:114787977606, loc:(*time.Location)(0x7249da0)}}
I0907 05:54:12.215279       1 daemon_controller.go:967] Nodes needing daemon pods for daemon set calico-node: [capz-tw80t5-md-0-pxbfd], creating 1
I0907 05:54:12.215303       1 topologycache.go:183] Ignoring node capz-tw80t5-md-0-pxbfd because it is not ready: [{MemoryPressure False 2022-09-07 05:54:12 +0000 UTC 2022-09-07 05:54:12 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2022-09-07 05:54:12 +0000 UTC 2022-09-07 05:54:12 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2022-09-07 05:54:12 +0000 UTC 2022-09-07 05:54:12 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready False 2022-09-07 05:54:12 +0000 UTC 2022-09-07 05:54:12 +0000 UTC KubeletNotReady [container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized, failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "capz-tw80t5-md-0-pxbfd" not found]}]
I0907 05:54:12.216477       1 topologycache.go:215] Insufficient node info for topology hints (0 zones, %!s(int64=0) CPU, true)
I0907 05:54:12.215351       1 controller.go:697] Ignoring node capz-tw80t5-md-0-qdmlk with Ready condition status False
I0907 05:54:12.216891       1 controller.go:697] Ignoring node capz-tw80t5-md-0-pxbfd with Ready condition status False
I0907 05:54:12.217192       1 controller.go:272] Triggering nodeSync
I0907 05:54:12.216920       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0be2a60a9ec1f8e, ext:113276268238, loc:(*time.Location)(0x7249da0)}}
I0907 05:54:12.217676       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:1, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0be2a610cf95fdb, ext:114790597403, loc:(*time.Location)(0x7249da0)}}
... skipping 438 lines ...
I0907 05:54:40.638326       1 azure_vmss.go:370] Can not extract scale set name from providerID (azure:///subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tw80t5/providers/Microsoft.Compute/virtualMachines/capz-tw80t5-md-0-pxbfd), assuming it is managed by availability set: not a vmss instance
I0907 05:54:40.638360       1 azure_instances.go:240] InstanceShutdownByProviderID gets power status "running" for node "capz-tw80t5-md-0-pxbfd"
I0907 05:54:40.638373       1 azure_instances.go:251] InstanceShutdownByProviderID gets provisioning state "Updating" for node "capz-tw80t5-md-0-pxbfd"
I0907 05:54:40.974547       1 controller_utils.go:205] "Added taint to node" taint=[] node="capz-tw80t5-md-0-qdmlk"
I0907 05:54:40.975367       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-tw80t5-md-0-qdmlk"
I0907 05:54:40.975540       1 topologycache.go:179] Ignoring node capz-tw80t5-control-plane-rfj2h because it has an excluded label
I0907 05:54:40.975922       1 topologycache.go:183] Ignoring node capz-tw80t5-md-0-pxbfd because it is not ready: [{NetworkUnavailable False 2022-09-07 05:54:37 +0000 UTC 2022-09-07 05:54:37 +0000 UTC CalicoIsUp Calico is running on this node} {MemoryPressure False 2022-09-07 05:54:32 +0000 UTC 2022-09-07 05:54:12 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2022-09-07 05:54:32 +0000 UTC 2022-09-07 05:54:12 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2022-09-07 05:54:32 +0000 UTC 2022-09-07 05:54:12 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready False 2022-09-07 05:54:32 +0000 UTC 2022-09-07 05:54:12 +0000 UTC KubeletNotReady container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized}]
I0907 05:54:40.976197       1 topologycache.go:215] Insufficient node info for topology hints (1 zones, %!s(int64=2000) CPU, true)
I0907 05:54:40.975595       1 controller.go:697] Ignoring node capz-tw80t5-md-0-pxbfd with Ready condition status False
I0907 05:54:40.976584       1 controller.go:265] Node changes detected, triggering a full node sync on all loadbalancer services
I0907 05:54:40.976844       1 controller.go:272] Triggering nodeSync
I0907 05:54:40.977038       1 controller.go:291] nodeSync has been triggered
I0907 05:54:40.977308       1 controller.go:757] Syncing backends for all LB services.
... skipping 14 lines ...
I0907 05:54:42.722728       1 controller.go:792] Running updateLoadBalancerHosts(len(services)==0, workers==1)
I0907 05:54:42.722798       1 controller.go:808] Finished updateLoadBalancerHosts
I0907 05:54:42.722833       1 controller.go:764] Successfully updated 0 out of 0 load balancers to direct traffic to the updated set of nodes
I0907 05:54:42.722842       1 controller.go:735] It took 0.0001276 seconds to finish nodeSyncInternal
I0907 05:54:42.749350       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-tw80t5-md-0-pxbfd"
I0907 05:54:42.754028       1 controller_utils.go:217] "Made sure that node has no taint" node="capz-tw80t5-md-0-pxbfd" taint=[&Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoSchedule,TimeAdded:<nil>,}]
I0907 05:54:44.486492       1 node_lifecycle_controller.go:1040] ReadyCondition for Node capz-tw80t5-md-0-qdmlk transitioned from &NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:2022-09-07 05:54:30 +0000 UTC,LastTransitionTime:2022-09-07 05:54:10 +0000 UTC,Reason:KubeletNotReady,Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized,} to &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-09-07 05:54:40 +0000 UTC,LastTransitionTime:2022-09-07 05:54:40 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,}
I0907 05:54:44.486639       1 node_lifecycle_controller.go:1048] Node capz-tw80t5-md-0-qdmlk ReadyCondition updated. Updating timestamp.
I0907 05:54:44.503743       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-tw80t5-md-0-qdmlk"
I0907 05:54:44.503919       1 taint_manager.go:436] "Noticed node update" node={nodeName:capz-tw80t5-md-0-qdmlk}
I0907 05:54:44.503973       1 taint_manager.go:441] "Updating known taints on node" node="capz-tw80t5-md-0-qdmlk" taints=[]
I0907 05:54:44.504002       1 taint_manager.go:462] "All taints were removed from the node. Cancelling all evictions..." node="capz-tw80t5-md-0-qdmlk"
I0907 05:54:44.504774       1 node_lifecycle_controller.go:894] Node capz-tw80t5-md-0-qdmlk is healthy again, removing all taints
I0907 05:54:44.504877       1 node_lifecycle_controller.go:1040] ReadyCondition for Node capz-tw80t5-md-0-pxbfd transitioned from &NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:2022-09-07 05:54:32 +0000 UTC,LastTransitionTime:2022-09-07 05:54:12 +0000 UTC,Reason:KubeletNotReady,Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized,} to &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-09-07 05:54:42 +0000 UTC,LastTransitionTime:2022-09-07 05:54:42 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,}
I0907 05:54:44.504958       1 node_lifecycle_controller.go:1048] Node capz-tw80t5-md-0-pxbfd ReadyCondition updated. Updating timestamp.
I0907 05:54:44.521942       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-tw80t5-md-0-pxbfd"
I0907 05:54:44.522072       1 taint_manager.go:436] "Noticed node update" node={nodeName:capz-tw80t5-md-0-pxbfd}
I0907 05:54:44.522158       1 taint_manager.go:441] "Updating known taints on node" node="capz-tw80t5-md-0-pxbfd" taints=[]
I0907 05:54:44.522356       1 taint_manager.go:462] "All taints were removed from the node. Cancelling all evictions..." node="capz-tw80t5-md-0-pxbfd"
I0907 05:54:44.523466       1 node_lifecycle_controller.go:894] Node capz-tw80t5-md-0-pxbfd is healthy again, removing all taints
... skipping 35 lines ...
I0907 05:54:46.995004       1 deployment_controller.go:176] "Updating deployment" deployment="kube-system/csi-azurefile-controller"
I0907 05:54:46.995614       1 deployment_util.go:774] Deployment "csi-azurefile-controller" timed out (false) [last progress check: 2022-09-07 05:54:46.982793554 +0000 UTC m=+149.555721262 - now: 2022-09-07 05:54:46.995607805 +0000 UTC m=+149.568535513]
I0907 05:54:46.999987       1 disruption.go:415] addPod called on pod "csi-azurefile-controller-78f78cfdd5-f4xcp"
I0907 05:54:47.001129       1 disruption.go:490] No PodDisruptionBudgets found for pod csi-azurefile-controller-78f78cfdd5-f4xcp, PodDisruptionBudget controller will avoid syncing.
I0907 05:54:47.001306       1 disruption.go:418] No matching pdb for pod "csi-azurefile-controller-78f78cfdd5-f4xcp"
I0907 05:54:47.002575       1 taint_manager.go:401] "Noticed pod update" pod="kube-system/csi-azurefile-controller-78f78cfdd5-f4xcp"
I0907 05:54:47.001446       1 replica_set.go:394] Pod csi-azurefile-controller-78f78cfdd5-f4xcp created: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"csi-azurefile-controller-78f78cfdd5-f4xcp", GenerateName:"csi-azurefile-controller-78f78cfdd5-", Namespace:"kube-system", SelfLink:"", UID:"76e807fd-adb7-489c-827d-ecd398572d9d", ResourceVersion:"920", Generation:0, CreationTimestamp:time.Date(2022, time.September, 7, 5, 54, 46, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"csi-azurefile-controller", "pod-template-hash":"78f78cfdd5"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"apps/v1", Kind:"ReplicaSet", Name:"csi-azurefile-controller-78f78cfdd5", UID:"1253a0da-5905-40dc-8204-a23e63746165", Controller:(*bool)(0xc002879887), BlockOwnerDeletion:(*bool)(0xc002879888)}}, Finalizers:[]string(nil), ZZZ_DeprecatedClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.September, 7, 5, 54, 46, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0026e9ea8), Subresource:""}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"socket-dir", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(0xc0026e9ec0), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), 
Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"azure-cred", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0026e9ed8), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), 
Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"kube-api-access-ttbcr", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(0xc0020c9fa0), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"csi-provisioner", Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-provisioner:v3.2.0", Command:[]string(nil), Args:[]string{"-v=2", "--csi-address=$(ADDRESS)", "--leader-election", "--leader-election-namespace=kube-system", "--timeout=300s", "--extra-create-metadata=true"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, 
Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:524288000, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-ttbcr", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"csi-attacher", Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-attacher:v3.5.0", Command:[]string(nil), Args:[]string{"-v=2", "-csi-address=$(ADDRESS)", "-timeout=120s", "--leader-election", "--leader-election-namespace=kube-system"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:524288000, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500Mi", Format:"BinarySI"}}, 
Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-ttbcr", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"csi-snapshotter", Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-snapshotter:v5.0.1", Command:[]string(nil), Args:[]string{"-v=2", "-csi-address=$(ADDRESS)", "--leader-election", "--leader-election-namespace=kube-system"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:104857600, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", 
Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-ttbcr", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"csi-resizer", Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-resizer:v1.5.0", Command:[]string(nil), Args:[]string{"-csi-address=$(ADDRESS)", "-v=2", "--leader-election", "--leader-election-namespace=kube-system", "-handle-volume-inuse-error=false", "-feature-gates=RecoverVolumeExpansionFailure=true", "-timeout=120s"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:524288000, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, 
v1.VolumeMount{Name:"kube-api-access-ttbcr", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"liveness-probe", Image:"mcr.microsoft.com/oss/kubernetes-csi/livenessprobe:v2.7.0", Command:[]string(nil), Args:[]string{"--csi-address=/csi/csi.sock", "--probe-timeout=3s", "--health-port=29612", "--v=2"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:104857600, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-ttbcr", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), 
TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"azurefile", Image:"mcr.microsoft.com/k8s/csi/azurefile-csi:latest", Command:[]string(nil), Args:[]string{"--v=5", "--endpoint=$(CSI_ENDPOINT)", "--metrics-address=0.0.0.0:29614", "--user-agent-suffix=OSS-kubectl"}, WorkingDir:"", Ports:[]v1.ContainerPort{v1.ContainerPort{Name:"healthz", HostPort:29612, ContainerPort:29612, Protocol:"TCP", HostIP:""}, v1.ContainerPort{Name:"metrics", HostPort:29614, ContainerPort:29614, Protocol:"TCP", HostIP:""}}, EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"AZURE_CREDENTIAL_FILE", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc0029ac100)}, v1.EnvVar{Name:"CSI_ENDPOINT", Value:"unix:///csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:209715200, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"azure-cred", ReadOnly:false, MountPath:"/etc/kubernetes/", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-ttbcr", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, 
VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(0xc002d56580), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002879f10), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"csi-azurefile-controller-sa", DeprecatedServiceAccount:"csi-azurefile-controller-sa", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000b44850), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node-role.kubernetes.io/master", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node-role.kubernetes.io/controlplane", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node-role.kubernetes.io/control-plane", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002879f80)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002879fa0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-cluster-critical", Priority:(*int32)(0xc002879fa8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), 
RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc002879fac), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc002c66c50), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil), OS:(*v1.PodOS)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition(nil), Message:"", Reason:"", NominatedNodeName:"", HostIP:"", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:<nil>, InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus(nil), QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}.
I0907 05:54:47.001263       1 pvc_protection_controller.go:380] "Enqueuing PVCs for Pod" pod="kube-system/csi-azurefile-controller-78f78cfdd5-f4xcp" podUID=76e807fd-adb7-489c-827d-ecd398572d9d
I0907 05:54:47.003798       1 controller_utils.go:240] Lowered expectations &controller.ControlleeExpectations{add:1, del:0, key:"kube-system/csi-azurefile-controller-78f78cfdd5", timestamp:time.Time{wall:0xc0be2a69ba85b116, ext:149554767958, loc:(*time.Location)(0x7249da0)}}
I0907 05:54:47.006550       1 controller_utils.go:581] Controller csi-azurefile-controller-78f78cfdd5 created pod csi-azurefile-controller-78f78cfdd5-f4xcp
I0907 05:54:47.007253       1 event.go:294] "Event occurred" object="kube-system/csi-azurefile-controller-78f78cfdd5" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: csi-azurefile-controller-78f78cfdd5-f4xcp"
I0907 05:54:47.019621       1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/csi-azurefile-controller" duration="64.788857ms"
I0907 05:54:47.019863       1 deployment_controller.go:490] "Error syncing deployment" deployment="kube-system/csi-azurefile-controller" err="Operation cannot be fulfilled on deployments.apps \"csi-azurefile-controller\": the object has been modified; please apply your changes to the latest version and try again"
I0907 05:54:47.020138       1 deployment_controller.go:576] "Started syncing deployment" deployment="kube-system/csi-azurefile-controller" startTime="2022-09-07 05:54:47.020049602 +0000 UTC m=+149.592977410"
I0907 05:54:47.024106       1 deployment_util.go:774] Deployment "csi-azurefile-controller" timed out (false) [last progress check: 2022-09-07 05:54:46 +0000 UTC - now: 2022-09-07 05:54:47.024088118 +0000 UTC m=+149.597015926]
I0907 05:54:47.030977       1 disruption.go:415] addPod called on pod "csi-azurefile-controller-78f78cfdd5-67k9b"
I0907 05:54:47.032480       1 disruption.go:490] No PodDisruptionBudgets found for pod csi-azurefile-controller-78f78cfdd5-67k9b, PodDisruptionBudget controller will avoid syncing.
I0907 05:54:47.035265       1 disruption.go:418] No matching pdb for pod "csi-azurefile-controller-78f78cfdd5-67k9b"
I0907 05:54:47.033193       1 controller_utils.go:581] Controller csi-azurefile-controller-78f78cfdd5 created pod csi-azurefile-controller-78f78cfdd5-67k9b
I0907 05:54:47.033226       1 pvc_protection_controller.go:380] "Enqueuing PVCs for Pod" pod="kube-system/csi-azurefile-controller-78f78cfdd5-67k9b" podUID=b91d8dac-e2cf-45a4-a3e8-4be7a0615305
I0907 05:54:47.033262       1 taint_manager.go:401] "Noticed pod update" pod="kube-system/csi-azurefile-controller-78f78cfdd5-67k9b"
I0907 05:54:47.033309       1 replica_set.go:394] Pod csi-azurefile-controller-78f78cfdd5-67k9b created: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"csi-azurefile-controller-78f78cfdd5-67k9b", GenerateName:"csi-azurefile-controller-78f78cfdd5-", Namespace:"kube-system", SelfLink:"", UID:"b91d8dac-e2cf-45a4-a3e8-4be7a0615305", ResourceVersion:"921", Generation:0, CreationTimestamp:time.Date(2022, time.September, 7, 5, 54, 47, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"csi-azurefile-controller", "pod-template-hash":"78f78cfdd5"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"apps/v1", Kind:"ReplicaSet", Name:"csi-azurefile-controller-78f78cfdd5", UID:"1253a0da-5905-40dc-8204-a23e63746165", Controller:(*bool)(0xc002982007), BlockOwnerDeletion:(*bool)(0xc002982008)}}, Finalizers:[]string(nil), ZZZ_DeprecatedClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.September, 7, 5, 54, 47, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0028ddf80), Subresource:""}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"socket-dir", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(0xc0028ddf98), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), 
Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"azure-cred", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0028ddfb0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), 
Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"kube-api-access-tvrlw", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(0xc0029acae0), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"csi-provisioner", Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-provisioner:v3.2.0", Command:[]string(nil), Args:[]string{"-v=2", "--csi-address=$(ADDRESS)", "--leader-election", "--leader-election-namespace=kube-system", "--timeout=300s", "--extra-create-metadata=true"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, 
Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:524288000, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-tvrlw", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"csi-attacher", Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-attacher:v3.5.0", Command:[]string(nil), Args:[]string{"-v=2", "-csi-address=$(ADDRESS)", "-timeout=120s", "--leader-election", "--leader-election-namespace=kube-system"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:524288000, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500Mi", Format:"BinarySI"}}, 
Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-tvrlw", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"csi-snapshotter", Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-snapshotter:v5.0.1", Command:[]string(nil), Args:[]string{"-v=2", "-csi-address=$(ADDRESS)", "--leader-election", "--leader-election-namespace=kube-system"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:104857600, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", 
Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-tvrlw", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"csi-resizer", Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-resizer:v1.5.0", Command:[]string(nil), Args:[]string{"-csi-address=$(ADDRESS)", "-v=2", "--leader-election", "--leader-election-namespace=kube-system", "-handle-volume-inuse-error=false", "-feature-gates=RecoverVolumeExpansionFailure=true", "-timeout=120s"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:524288000, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, 
v1.VolumeMount{Name:"kube-api-access-tvrlw", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"liveness-probe", Image:"mcr.microsoft.com/oss/kubernetes-csi/livenessprobe:v2.7.0", Command:[]string(nil), Args:[]string{"--csi-address=/csi/csi.sock", "--probe-timeout=3s", "--health-port=29612", "--v=2"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:104857600, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-tvrlw", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), 
TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"azurefile", Image:"mcr.microsoft.com/k8s/csi/azurefile-csi:latest", Command:[]string(nil), Args:[]string{"--v=5", "--endpoint=$(CSI_ENDPOINT)", "--metrics-address=0.0.0.0:29614", "--user-agent-suffix=OSS-kubectl"}, WorkingDir:"", Ports:[]v1.ContainerPort{v1.ContainerPort{Name:"healthz", HostPort:29612, ContainerPort:29612, Protocol:"TCP", HostIP:""}, v1.ContainerPort{Name:"metrics", HostPort:29614, ContainerPort:29614, Protocol:"TCP", HostIP:""}}, EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"AZURE_CREDENTIAL_FILE", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc0029acc00)}, v1.EnvVar{Name:"CSI_ENDPOINT", Value:"unix:///csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:209715200, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"azure-cred", ReadOnly:false, MountPath:"/etc/kubernetes/", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-tvrlw", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, 
VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(0xc002df7f80), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0029823b0), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"csi-azurefile-controller-sa", DeprecatedServiceAccount:"csi-azurefile-controller-sa", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000b40b60), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node-role.kubernetes.io/master", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node-role.kubernetes.io/controlplane", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node-role.kubernetes.io/control-plane", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002982420)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002982440)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-cluster-critical", Priority:(*int32)(0xc002982448), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), 
RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00298244c), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc002e81420), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil), OS:(*v1.PodOS)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition(nil), Message:"", Reason:"", NominatedNodeName:"", HostIP:"", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:<nil>, InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus(nil), QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}.
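The pod-spec dump above shows resource quantities in their internal form, e.g. `resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, s:"10m"}`: Kubernetes stores a quantity as a scaled integer, value × 10^scale, so `10m` CPU is 10 × 10⁻³ = 0.01 cores and `20Mi` is 20971520 bytes at scale 0. A minimal pure-Go sketch of that representation (a hypothetical stand-in, not the real `k8s.io/apimachinery` `resource.Quantity` type):

```go
package main

import (
	"fmt"
	"math"
)

// scaledQuantity mimics the int64Amount representation seen in the log
// dump: the numeric value is stored as value * 10^scale.
// Hypothetical stand-in for apimachinery's resource.Quantity.
type scaledQuantity struct {
	value int64
	scale int32
}

// float returns the quantity as a float64, purely for display.
func (q scaledQuantity) float() float64 {
	return float64(q.value) * math.Pow10(int(q.scale))
}

func main() {
	cpu := scaledQuantity{value: 10, scale: -3}      // "10m" = 0.01 cores
	mem := scaledQuantity{value: 20971520, scale: 0} // "20Mi" = 20 * 1024 * 1024 bytes

	fmt.Printf("cpu: %g cores\n", cpu.float())
	fmt.Printf("mem: %d bytes\n", int64(mem.float()))
}
```

The real type keeps the canonical string (`s:"10m"`) cached alongside the scaled integer so it can round-trip serialization without reformatting.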
I0907 05:54:47.036399       1 controller_utils.go:240] Lowered expectations &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/csi-azurefile-controller-78f78cfdd5", timestamp:time.Time{wall:0xc0be2a69ba85b116, ext:149554767958, loc:(*time.Location)(0x7249da0)}}
I0907 05:54:47.036011       1 replica_set_utils.go:59] Updating status for : kube-system/csi-azurefile-controller-78f78cfdd5, replicas 0->0 (need 2), fullyLabeledReplicas 0->0, readyReplicas 0->0, availableReplicas 0->0, sequence No: 0->1
I0907 05:54:47.036037       1 event.go:294] "Event occurred" object="kube-system/csi-azurefile-controller-78f78cfdd5" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: csi-azurefile-controller-78f78cfdd5-67k9b"
I0907 05:54:47.049535       1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/csi-azurefile-controller" duration="29.470116ms"
I0907 05:54:47.050973       1 deployment_controller.go:576] "Started syncing deployment" deployment="kube-system/csi-azurefile-controller" startTime="2022-09-07 05:54:47.050954024 +0000 UTC m=+149.623881732"
I0907 05:54:47.053032       1 deployment_util.go:774] Deployment "csi-azurefile-controller" timed out (false) [last progress check: 2022-09-07 05:54:46 +0000 UTC - now: 2022-09-07 05:54:47.052965432 +0000 UTC m=+149.625893140]
... skipping 222 lines ...
I0907 05:54:50.499096       1 disruption.go:427] updatePod called on pod "csi-snapshot-controller-8545756757-2kzw2"
I0907 05:54:50.499156       1 disruption.go:490] No PodDisruptionBudgets found for pod csi-snapshot-controller-8545756757-2kzw2, PodDisruptionBudget controller will avoid syncing.
I0907 05:54:50.499169       1 disruption.go:430] No matching pdb for pod "csi-snapshot-controller-8545756757-2kzw2"
I0907 05:54:50.499257       1 taint_manager.go:401] "Noticed pod update" pod="kube-system/csi-snapshot-controller-8545756757-2kzw2"
I0907 05:54:50.499340       1 replica_set.go:457] Pod csi-snapshot-controller-8545756757-2kzw2 updated, objectMeta {Name:csi-snapshot-controller-8545756757-2kzw2 GenerateName:csi-snapshot-controller-8545756757- Namespace:kube-system SelfLink: UID:e896c458-713d-4db2-983b-8b595426a4b0 ResourceVersion:994 Generation:0 CreationTimestamp:2022-09-07 05:54:50 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[app:csi-snapshot-controller pod-template-hash:8545756757] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:csi-snapshot-controller-8545756757 UID:0fbbb1af-522b-4d5e-bbb2-8f55c7f1e6e6 Controller:0xc000f07737 BlockOwnerDeletion:0xc000f07738}] Finalizers:[] ZZZ_DeprecatedClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-09-07 05:54:50 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0fbbb1af-522b-4d5e-bbb2-8f55c7f1e6e6\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"csi-snapshot-controller\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}} Subresource:}]} -> {Name:csi-snapshot-controller-8545756757-2kzw2 GenerateName:csi-snapshot-controller-8545756757- Namespace:kube-system SelfLink: UID:e896c458-713d-4db2-983b-8b595426a4b0 ResourceVersion:997 Generation:0 CreationTimestamp:2022-09-07 05:54:50 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[app:csi-snapshot-controller 
pod-template-hash:8545756757] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:csi-snapshot-controller-8545756757 UID:0fbbb1af-522b-4d5e-bbb2-8f55c7f1e6e6 Controller:0xc00082e117 BlockOwnerDeletion:0xc00082e118}] Finalizers:[] ZZZ_DeprecatedClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-09-07 05:54:50 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0fbbb1af-522b-4d5e-bbb2-8f55c7f1e6e6\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"csi-snapshot-controller\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}} Subresource:}]}.
I0907 05:54:50.514540       1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/csi-snapshot-controller" duration="176.839902ms"
I0907 05:54:50.514599       1 deployment_controller.go:490] "Error syncing deployment" deployment="kube-system/csi-snapshot-controller" err="Operation cannot be fulfilled on deployments.apps \"csi-snapshot-controller\": the object has been modified; please apply your changes to the latest version and try again"
I0907 05:54:50.514640       1 deployment_controller.go:576] "Started syncing deployment" deployment="kube-system/csi-snapshot-controller" startTime="2022-09-07 05:54:50.514621863 +0000 UTC m=+153.087549571"
I0907 05:54:50.515145       1 deployment_util.go:774] Deployment "csi-snapshot-controller" timed out (false) [last progress check: 2022-09-07 05:54:50 +0000 UTC - now: 2022-09-07 05:54:50.515137365 +0000 UTC m=+153.088065073]
I0907 05:54:50.515758       1 controller_utils.go:581] Controller csi-snapshot-controller-8545756757 created pod csi-snapshot-controller-8545756757-7h6cf
I0907 05:54:50.515830       1 replica_set_utils.go:59] Updating status for : kube-system/csi-snapshot-controller-8545756757, replicas 0->0 (need 2), fullyLabeledReplicas 0->0, readyReplicas 0->0, availableReplicas 0->0, sequence No: 0->1
I0907 05:54:50.516297       1 event.go:294] "Event occurred" object="kube-system/csi-snapshot-controller-8545756757" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: csi-snapshot-controller-8545756757-7h6cf"
I0907 05:54:50.523020       1 disruption.go:415] addPod called on pod "csi-snapshot-controller-8545756757-7h6cf"
... skipping 1654 lines ...
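The `Controller expectations fulfilled` / `Lowered expectations &controller.ControlleeExpectations{add:0, del:0, ...}` lines come from the ReplicaSet controller's expectation mechanism: before creating or deleting pods it records how many create/delete watch events it expects to observe, decrements the counters as events arrive, and only runs another sync once both reach zero. A simplified sketch of that bookkeeping (hypothetical, not the actual kube-controller-manager code):

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// expectations tracks how many pod creations (add) and deletions (del)
// a controller still expects to observe before it may sync again.
// Simplified stand-in for kube-controller-manager's ControlleeExpectations.
type expectations struct {
	add, del int64
}

// expectCreations records that n pod-create events are expected.
func (e *expectations) expectCreations(n int64) { atomic.AddInt64(&e.add, n) }

// creationObserved lowers the expectation when a create event arrives.
func (e *expectations) creationObserved() { atomic.AddInt64(&e.add, -1) }

// fulfilled reports whether the controller may run another sync.
func (e *expectations) fulfilled() bool {
	return atomic.LoadInt64(&e.add) <= 0 && atomic.LoadInt64(&e.del) <= 0
}

func main() {
	var e expectations
	e.expectCreations(1)       // controller decides to create one pod
	fmt.Println(e.fulfilled()) // false: still waiting for the watch event
	e.creationObserved()       // pod-created event observed by the informer
	fmt.Println(e.fulfilled()) // true: safe to sync again
}
```

This is why the log interleaves `Created pod: ...` events with `Lowered expectations` entries: the create call raises the counter, and the informer's add-pod callback lowers it.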
I0907 06:00:34.902196       1 controller_utils.go:240] Lowered expectations &controller.ControlleeExpectations{add:0, del:0, key:"azurefile-1563/azurefile-volume-tester-58trw-794dc7c664", timestamp:time.Time{wall:0xc0be2ac0b4cadb0b, ext:497458637387, loc:(*time.Location)(0x7249da0)}}
I0907 06:00:34.900462       1 disruption.go:415] addPod called on pod "azurefile-volume-tester-58trw-794dc7c664-hx7g6"
I0907 06:00:34.902791       1 disruption.go:490] No PodDisruptionBudgets found for pod azurefile-volume-tester-58trw-794dc7c664-hx7g6, PodDisruptionBudget controller will avoid syncing.
I0907 06:00:34.903082       1 disruption.go:418] No matching pdb for pod "azurefile-volume-tester-58trw-794dc7c664-hx7g6"
I0907 06:00:34.902969       1 event.go:294] "Event occurred" object="azurefile-1563/azurefile-volume-tester-58trw-794dc7c664" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: azurefile-volume-tester-58trw-794dc7c664-hx7g6"
I0907 06:00:34.903633       1 deployment_controller.go:578] "Finished syncing deployment" deployment="azurefile-1563/azurefile-volume-tester-58trw" duration="27.557573ms"
I0907 06:00:34.905475       1 deployment_controller.go:490] "Error syncing deployment" deployment="azurefile-1563/azurefile-volume-tester-58trw" err="Operation cannot be fulfilled on deployments.apps \"azurefile-volume-tester-58trw\": the object has been modified; please apply your changes to the latest version and try again"
I0907 06:00:34.905686       1 deployment_controller.go:576] "Started syncing deployment" deployment="azurefile-1563/azurefile-volume-tester-58trw" startTime="2022-09-07 06:00:34.905665132 +0000 UTC m=+497.478592840"
I0907 06:00:34.906606       1 deployment_util.go:774] Deployment "azurefile-volume-tester-58trw" timed out (false) [last progress check: 2022-09-07 06:00:34 +0000 UTC - now: 2022-09-07 06:00:34.906597135 +0000 UTC m=+497.479524843]
I0907 06:00:34.921408       1 deployment_controller.go:281] "ReplicaSet updated" replicaSet="azurefile-1563/azurefile-volume-tester-58trw-794dc7c664"
I0907 06:00:34.926429       1 replica_set.go:667] Finished syncing ReplicaSet "azurefile-1563/azurefile-volume-tester-58trw-794dc7c664" (41.032109ms)
I0907 06:00:34.926850       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"azurefile-1563/azurefile-volume-tester-58trw-794dc7c664", timestamp:time.Time{wall:0xc0be2ac0b4cadb0b, ext:497458637387, loc:(*time.Location)(0x7249da0)}}
I0907 06:00:34.927127       1 replica_set_utils.go:59] Updating status for : azurefile-1563/azurefile-volume-tester-58trw-794dc7c664, replicas 0->1 (need 1), fullyLabeledReplicas 0->1, readyReplicas 0->0, availableReplicas 0->0, sequence No: 1->1
... skipping 10 lines ...
I0907 06:00:34.943939       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"azurefile-1563/azurefile-volume-tester-58trw-794dc7c664", timestamp:time.Time{wall:0xc0be2ac0b4cadb0b, ext:497458637387, loc:(*time.Location)(0x7249da0)}}
I0907 06:00:34.944152       1 replica_set.go:667] Finished syncing ReplicaSet "azurefile-1563/azurefile-volume-tester-58trw-794dc7c664" (258.7µs)
I0907 06:00:34.944277       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"azurefile-1563/azurefile-volume-tester-58trw-794dc7c664", timestamp:time.Time{wall:0xc0be2ac0b4cadb0b, ext:497458637387, loc:(*time.Location)(0x7249da0)}}
I0907 06:00:34.944477       1 deployment_controller.go:281] "ReplicaSet updated" replicaSet="azurefile-1563/azurefile-volume-tester-58trw-794dc7c664"
I0907 06:00:34.944619       1 replica_set.go:667] Finished syncing ReplicaSet "azurefile-1563/azurefile-volume-tester-58trw-794dc7c664" (345.101µs)
I0907 06:00:34.947937       1 deployment_controller.go:578] "Finished syncing deployment" deployment="azurefile-1563/azurefile-volume-tester-58trw" duration="17.916647ms"
I0907 06:00:34.948240       1 deployment_controller.go:490] "Error syncing deployment" deployment="azurefile-1563/azurefile-volume-tester-58trw" err="Operation cannot be fulfilled on deployments.apps \"azurefile-volume-tester-58trw\": the object has been modified; please apply your changes to the latest version and try again"
I0907 06:00:34.948500       1 deployment_controller.go:576] "Started syncing deployment" deployment="azurefile-1563/azurefile-volume-tester-58trw" startTime="2022-09-07 06:00:34.948420246 +0000 UTC m=+497.521347954"
I0907 06:00:34.956179       1 deployment_controller.go:578] "Finished syncing deployment" deployment="azurefile-1563/azurefile-volume-tester-58trw" duration="7.74482ms"
I0907 06:00:34.956234       1 deployment_controller.go:576] "Started syncing deployment" deployment="azurefile-1563/azurefile-volume-tester-58trw" startTime="2022-09-07 06:00:34.956215066 +0000 UTC m=+497.529142774"
I0907 06:00:34.957155       1 deployment_controller.go:176] "Updating deployment" deployment="azurefile-1563/azurefile-volume-tester-58trw"
I0907 06:00:34.963676       1 deployment_controller.go:578] "Finished syncing deployment" deployment="azurefile-1563/azurefile-volume-tester-58trw" duration="7.44372ms"
I0907 06:00:34.963965       1 deployment_controller.go:490] "Error syncing deployment" deployment="azurefile-1563/azurefile-volume-tester-58trw" err="Operation cannot be fulfilled on deployments.apps \"azurefile-volume-tester-58trw\": the object has been modified; please apply your changes to the latest version and try again"
I0907 06:00:34.964367       1 deployment_controller.go:576] "Started syncing deployment" deployment="azurefile-1563/azurefile-volume-tester-58trw" startTime="2022-09-07 06:00:34.964343688 +0000 UTC m=+497.537271396"
I0907 06:00:34.965545       1 deployment_util.go:774] Deployment "azurefile-volume-tester-58trw" timed out (false) [last progress check: 2022-09-07 06:00:34 +0000 UTC - now: 2022-09-07 06:00:34.965501891 +0000 UTC m=+497.538429699]
I0907 06:00:34.965952       1 progress.go:195] Queueing up deployment "azurefile-volume-tester-58trw" for a progress check after 599s
I0907 06:00:34.966181       1 deployment_controller.go:578] "Finished syncing deployment" deployment="azurefile-1563/azurefile-volume-tester-58trw" duration="1.808405ms"
I0907 06:00:34.969607       1 deployment_controller.go:576] "Started syncing deployment" deployment="azurefile-1563/azurefile-volume-tester-58trw" startTime="2022-09-07 06:00:34.969587302 +0000 UTC m=+497.542515010"
I0907 06:00:34.970566       1 deployment_util.go:774] Deployment "azurefile-volume-tester-58trw" timed out (false) [last progress check: 2022-09-07 06:00:34 +0000 UTC - now: 2022-09-07 06:00:34.970558105 +0000 UTC m=+497.543485813]
... skipping 1177 lines ...
I0907 06:03:20.350938       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azurefile-1359
2022/09/07 06:03:20 ===================================================

JUnit report was created: /logs/artifacts/junit_01.xml

Ran 6 of 34 Specs in 334.549 seconds
SUCCESS! -- 6 Passed | 0 Failed | 0 Pending | 28 Skipped

You're using deprecated Ginkgo functionality:
=============================================
Ginkgo 2.0 is under active development and will introduce several new features, improvements, and a small handful of breaking changes.
A release candidate for 2.0 is now available and 2.0 should GA in Fall 2021.  Please give the RC a try and send us feedback!
  - To learn more, view the migration guide at https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md
... skipping 44 lines ...
INFO: Creating log watcher for controller capz-system/capz-controller-manager, pod capz-controller-manager-858df9cd95-wgn99, container manager
STEP: Dumping workload cluster default/capz-tw80t5 logs
Sep  7 06:04:49.177: INFO: Collecting logs for Linux node capz-tw80t5-control-plane-rfj2h in cluster capz-tw80t5 in namespace default

Sep  7 06:05:49.178: INFO: Collecting boot logs for AzureMachine capz-tw80t5-control-plane-rfj2h

Failed to get logs for machine capz-tw80t5-control-plane-5h9zv, cluster default/capz-tw80t5: open /etc/azure-ssh/azure-ssh: no such file or directory
Sep  7 06:05:50.364: INFO: Collecting logs for Linux node capz-tw80t5-md-0-pxbfd in cluster capz-tw80t5 in namespace default

Sep  7 06:06:50.366: INFO: Collecting boot logs for AzureMachine capz-tw80t5-md-0-pxbfd

Failed to get logs for machine capz-tw80t5-md-0-68d7fddfb-7fssm, cluster default/capz-tw80t5: open /etc/azure-ssh/azure-ssh: no such file or directory
Sep  7 06:06:50.961: INFO: Collecting logs for Linux node capz-tw80t5-md-0-qdmlk in cluster capz-tw80t5 in namespace default

Sep  7 06:07:50.962: INFO: Collecting boot logs for AzureMachine capz-tw80t5-md-0-qdmlk

Failed to get logs for machine capz-tw80t5-md-0-68d7fddfb-bc2fg, cluster default/capz-tw80t5: open /etc/azure-ssh/azure-ssh: no such file or directory
STEP: Dumping workload cluster default/capz-tw80t5 kube-system pod logs
STEP: Creating log watcher for controller kube-system/calico-node-pppdp, container calico-node
STEP: Fetching kube-system pod logs took 394.851562ms
STEP: Dumping workload cluster default/capz-tw80t5 Azure activity log
STEP: Creating log watcher for controller kube-system/csi-azurefile-node-9w87g, container node-driver-registrar
STEP: Creating log watcher for controller kube-system/calico-node-kp7np, container calico-node
STEP: Collecting events for Pod kube-system/calico-kube-controllers-7867496574-9r98j
STEP: Collecting events for Pod kube-system/coredns-6d4b75cb6d-7npfl
STEP: Creating log watcher for controller kube-system/coredns-6d4b75cb6d-vz8fl, container coredns
STEP: Creating log watcher for controller kube-system/csi-azurefile-controller-78f78cfdd5-f4xcp, container csi-snapshotter
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-7867496574-9r98j, container calico-kube-controllers
STEP: Collecting events for Pod kube-system/etcd-capz-tw80t5-control-plane-rfj2h
STEP: failed to find events of Pod "etcd-capz-tw80t5-control-plane-rfj2h"
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-tw80t5-control-plane-rfj2h, container kube-apiserver
STEP: Creating log watcher for controller kube-system/csi-azurefile-controller-78f78cfdd5-f4xcp, container csi-resizer
STEP: Collecting events for Pod kube-system/calico-node-kp7np
STEP: Collecting events for Pod kube-system/csi-azurefile-controller-78f78cfdd5-67k9b
STEP: Creating log watcher for controller kube-system/csi-azurefile-controller-78f78cfdd5-67k9b, container csi-provisioner
STEP: Creating log watcher for controller kube-system/csi-azurefile-controller-78f78cfdd5-67k9b, container liveness-probe
... skipping 35 lines ...
STEP: Creating log watcher for controller kube-system/kube-proxy-ggbg7, container kube-proxy
STEP: Collecting events for Pod kube-system/kube-proxy-ggbg7
STEP: Creating log watcher for controller kube-system/kube-proxy-ndcq5, container kube-proxy
STEP: Collecting events for Pod kube-system/metrics-server-7d674f87b8-jd8w9
STEP: Collecting events for Pod kube-system/coredns-6d4b75cb6d-vz8fl
STEP: Collecting events for Pod kube-system/csi-azurefile-controller-78f78cfdd5-f4xcp
STEP: failed to find events of Pod "kube-scheduler-capz-tw80t5-control-plane-rfj2h"
STEP: Creating log watcher for controller kube-system/csi-azurefile-controller-78f78cfdd5-f4xcp, container azurefile
STEP: Creating log watcher for controller kube-system/csi-azurefile-node-9w87g, container liveness-probe
STEP: failed to find events of Pod "kube-apiserver-capz-tw80t5-control-plane-rfj2h"
STEP: failed to find events of Pod "kube-controller-manager-capz-tw80t5-control-plane-rfj2h"
STEP: Fetching activity logs took 2.460484013s
================ REDACTING LOGS ================
All sensitive variables are redacted
cluster.cluster.x-k8s.io "capz-tw80t5" deleted
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/kind-v0.14.0 delete cluster --name=capz || true
Deleting cluster "capz" ...
... skipping 12 lines ...