Result: FAILURE
Tests: 0 failed / 0 succeeded
Started: 2023-03-10 20:59
Elapsed: 59m6s
Revision: release-1.8

No Test Failures!


Error lines from build-log.txt

... skipping 787 lines ...
certificate.cert-manager.io "selfsigned-cert" deleted
# Create secret for AzureClusterIdentity
./hack/create-identity-secret.sh
make[2]: Entering directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make[2]: Nothing to be done for 'kubectl'.
make[2]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
Error from server (NotFound): secrets "cluster-identity-secret" not found
secret/cluster-identity-secret created
secret/cluster-identity-secret labeled
# Create customized cloud provider configs
./hack/create-custom-cloud-provider-config.sh
make[2]: Entering directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make[2]: Nothing to be done for 'kubectl'.
... skipping 143 lines ...
timeout --foreground 600 bash -c "while ! /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/kubectl-v1.25.6 --kubeconfig=./kubeconfig get nodes | grep control-plane; do sleep 1; done"
Unable to connect to the server: dial tcp 20.103.155.128:6443: i/o timeout
capz-k9b0el-control-plane-gfrn9   NotReady   <none>   1s    v1.23.18-rc.0.1+500bcf6c2b6f54
run "/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/kubectl-v1.25.6 --kubeconfig=./kubeconfig ..." to work with the new target cluster
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
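
The shell loop above retries "kubectl get nodes" once per second, for up to 600 seconds, until the first control-plane node registers; the initial "i/o timeout" is the expected transient state while the API server's load balancer comes up. A rough client-go analogue of the same wait, as a sketch (function name and label selector are assumptions, not taken from the job's scripts):

package main

import (
    "context"
    "time"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/util/wait"
    "k8s.io/client-go/kubernetes"
)

// waitForControlPlaneNode polls until at least one control-plane node is
// registered, treating transient connection errors (e.g. "dial tcp
// ...:6443: i/o timeout") as "keep waiting" rather than as failures.
func waitForControlPlaneNode(ctx context.Context, cs kubernetes.Interface) error {
    return wait.PollImmediate(time.Second, 10*time.Minute, func() (bool, error) {
        nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{
            LabelSelector: "node-role.kubernetes.io/control-plane", // assumed selector
        })
        if err != nil {
            return false, nil // API server not reachable yet: keep polling
        }
        return len(nodes.Items) > 0, nil
    })
}
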
namespace/calico-system created
Error from server (NotFound): configmaps "kubeadm-config" not found
configmap/kubeadm-config created
Installing Calico CNI via helm
Cluster CIDR is IPv4
WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/kubeconfig
WARNING: Kubernetes configuration file is world-readable. This is insecure. Location: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/kubeconfig
Release "calico" does not exist. Installing it now.
NAME: calico
LAST DEPLOYED: Fri Mar 10 21:11:46 2023
NAMESPACE: calico-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
error: the server doesn't have a resource type "kubeadmcontrolplane"
CCM cluster CIDR: 192.168.0.0/16
Installing cloud-provider-azure components via helm
WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/kubeconfig
WARNING: Kubernetes configuration file is world-readable. This is insecure. Location: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/kubeconfig
Release "cloud-provider-azure" does not exist. Installing it now.
NAME: cloud-provider-azure
... skipping 147 lines ...
  << End Captured GinkgoWriter Output

  test case is only available for CSI drivers
  In [It] at: /home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/suite_test.go:289

  There were additional failures detected after the initial failure:
    [FAILED]
    create volume "" error: rpc error: code = InvalidArgument desc = Volume ID missing in request
    In [AfterEach] at: /home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/pre_provisioning_test.go:74
------------------------------
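
Each Pre-Provisioned case below fails the same way: the suite reports "test case is only available for CSI drivers", so no volume is ever provisioned, and the [AfterEach] teardown then issues a volume call with an empty volume ID, which the driver rejects per the CSI spec. A minimal sketch of that validation, e.g. in a DeleteVolume handler (illustrative, not the driver's exact source):

package main

import (
    "context"

    csi "github.com/container-storage-interface/spec/lib/go/csi"
    "google.golang.org/grpc/codes"
    "google.golang.org/grpc/status"
)

type driver struct{}

// DeleteVolume rejects requests that carry no volume ID, producing the
// "rpc error: code = InvalidArgument desc = Volume ID missing in request"
// seen in the failures above.
func (d *driver) DeleteVolume(ctx context.Context, req *csi.DeleteVolumeRequest) (*csi.DeleteVolumeResponse, error) {
    if len(req.GetVolumeId()) == 0 {
        return nil, status.Error(codes.InvalidArgument, "Volume ID missing in request")
    }
    // ... delete the backing Azure file share here ...
    return &csi.DeleteVolumeResponse{}, nil
}
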
Pre-Provisioned
  should use a pre-provisioned volume and mount it by multiple pods [file.csi.azure.com] [Windows]
  /home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/pre_provisioning_test.go:117
STEP: Creating a kubernetes client 03/10/23 21:17:52.9
... skipping 26 lines ...
  << End Captured GinkgoWriter Output

  test case is only available for CSI drivers
  In [It] at: /home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/suite_test.go:289

  There were additional failures detected after the initial failure:
    [FAILED]
    create volume "" error: rpc error: code = InvalidArgument desc = Volume ID missing in request
    In [AfterEach] at: /home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/pre_provisioning_test.go:74
------------------------------
Pre-Provisioned
  should use a pre-provisioned volume and retain PV with reclaimPolicy "Retain" [file.csi.azure.com] [Windows]
  /home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/pre_provisioning_test.go:158
STEP: Creating a kubernetes client 03/10/23 21:17:54.217
... skipping 26 lines ...
  << End Captured GinkgoWriter Output

  test case is only available for CSI drivers
  In [It] at: /home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/suite_test.go:289

  There were additional failures detected after the initial failure:
    [FAILED]
    create volume "" error: rpc error: code = InvalidArgument desc = Volume ID missing in request
    In [AfterEach] at: /home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/pre_provisioning_test.go:74
------------------------------
Pre-Provisioned
  should use existing credentials in k8s cluster [file.csi.azure.com] [Windows]
  /home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/pre_provisioning_test.go:186
STEP: Creating a kubernetes client 03/10/23 21:17:55.533
... skipping 26 lines ...
  << End Captured GinkgoWriter Output

  test case is only available for CSI drivers
  In [It] at: /home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/suite_test.go:289

  There were additional failures detected after the initial failure:
    [FAILED]
    create volume "" error: rpc error: code = InvalidArgument desc = Volume ID missing in request
    In [AfterEach] at: /home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/pre_provisioning_test.go:74
------------------------------
Pre-Provisioned
  should use provided credentials [file.csi.azure.com] [Windows]
  /home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/pre_provisioning_test.go:230
STEP: Creating a kubernetes client 03/10/23 21:17:56.873
... skipping 26 lines ...
  << End Captured GinkgoWriter Output

  test case is only available for CSI drivers
  In [It] at: /home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/suite_test.go:289

  There were additional failures detected after the initial failure:
    [FAILED]
    create volume "" error: rpc error: code = InvalidArgument desc = Volume ID missing in request
    In [AfterEach] at: /home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/pre_provisioning_test.go:74
------------------------------
Dynamic Provisioning
  should create a storage account with tags [file.csi.azure.com] [Windows]
  /home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/dynamic_provisioning_test.go:75
STEP: Creating a kubernetes client 03/10/23 21:17:58.164
... skipping 187 lines ...
Mar 10 21:22:50.427: INFO: PersistentVolumeClaim pvc-mhh49 found but phase is Pending instead of Bound.
Mar 10 21:22:52.535: INFO: PersistentVolumeClaim pvc-mhh49 found but phase is Pending instead of Bound.
Mar 10 21:22:54.645: INFO: PersistentVolumeClaim pvc-mhh49 found but phase is Pending instead of Bound.
Mar 10 21:22:56.753: INFO: PersistentVolumeClaim pvc-mhh49 found but phase is Pending instead of Bound.
Mar 10 21:22:58.863: INFO: PersistentVolumeClaim pvc-mhh49 found but phase is Pending instead of Bound.
Mar 10 21:23:00.972: INFO: PersistentVolumeClaim pvc-mhh49 found but phase is Pending instead of Bound.
Mar 10 21:23:02.973: INFO: Unexpected error: 
    <*errors.errorString | 0xc000543400>: {
        s: "PersistentVolumeClaims [pvc-mhh49] not all in phase Bound within 5m0s",
    }
Mar 10 21:23:02.973: FAIL: PersistentVolumeClaims [pvc-mhh49] not all in phase Bound within 5m0s

Full Stack Trace
sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites.(*TestPersistentVolumeClaim).WaitForBound(_)
	/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites/testsuites.go:221 +0x19d
sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites.(*VolumeDetails).SetupDynamicPersistentVolumeClaim(0xc000885c70, {0x2896668?, 0xc000103520}, 0xc000c42f20, {0x7f12c4418a50, 0xc00013ae70}, 0x0?)
	/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites/specs.go:242 +0x6dc
... skipping 3 lines ...
	/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites/dynamically_provisioned_cmd_volume_tester.go:41 +0xed
sigs.k8s.io/azurefile-csi-driver/test/e2e.glob..func1.3()
	/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/dynamic_provisioning_test.go:149 +0x5f5
STEP: dump namespace information after failure 03/10/23 21:23:02.974
STEP: Destroying namespace "azurefile-8317" for this suite. 03/10/23 21:23:02.974
------------------------------
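
Every Dynamic Provisioning failure in this run follows the same pattern: the test polls the PVC roughly every two seconds and gives up when it is still Pending after 5m0s, i.e. the azurefile provisioner never bound the claim. A minimal sketch of that wait loop, assuming client-go (the real helper is WaitForBound in test/e2e/testsuites/testsuites.go; this is not its exact source):

package main

import (
    "context"
    "fmt"
    "time"

    v1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/util/wait"
    "k8s.io/client-go/kubernetes"
)

// waitForPVCBound polls the claim until it is Bound or the timeout elapses,
// logging the intermediate phase the way the lines above do.
func waitForPVCBound(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
    return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
        pvc, err := cs.CoreV1().PersistentVolumeClaims(ns).Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        if pvc.Status.Phase != v1.ClaimBound {
            fmt.Printf("PersistentVolumeClaim %s found but phase is %s instead of Bound.\n", name, pvc.Status.Phase)
            return false, nil
        }
        return true, nil
    })
}
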
• [FAILED] [303.620 seconds]
Dynamic Provisioning
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/dynamic_provisioning_test.go:43
  [It] should create a volume on demand with mount options [kubernetes.io/azure-file] [file.csi.azure.com] [Windows]
  /home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/dynamic_provisioning_test.go:106

  Begin Captured GinkgoWriter Output >>
... skipping 149 lines ...
    Mar 10 21:22:50.427: INFO: PersistentVolumeClaim pvc-mhh49 found but phase is Pending instead of Bound.
    Mar 10 21:22:52.535: INFO: PersistentVolumeClaim pvc-mhh49 found but phase is Pending instead of Bound.
    Mar 10 21:22:54.645: INFO: PersistentVolumeClaim pvc-mhh49 found but phase is Pending instead of Bound.
    Mar 10 21:22:56.753: INFO: PersistentVolumeClaim pvc-mhh49 found but phase is Pending instead of Bound.
    Mar 10 21:22:58.863: INFO: PersistentVolumeClaim pvc-mhh49 found but phase is Pending instead of Bound.
    Mar 10 21:23:00.972: INFO: PersistentVolumeClaim pvc-mhh49 found but phase is Pending instead of Bound.
    Mar 10 21:23:02.973: INFO: Unexpected error: 
        <*errors.errorString | 0xc000543400>: {
            s: "PersistentVolumeClaims [pvc-mhh49] not all in phase Bound within 5m0s",
        }
    Mar 10 21:23:02.973: FAIL: PersistentVolumeClaims [pvc-mhh49] not all in phase Bound within 5m0s

    Full Stack Trace
    sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites.(*TestPersistentVolumeClaim).WaitForBound(_)
    	/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites/testsuites.go:221 +0x19d
    sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites.(*VolumeDetails).SetupDynamicPersistentVolumeClaim(0xc000885c70, {0x2896668?, 0xc000103520}, 0xc000c42f20, {0x7f12c4418a50, 0xc00013ae70}, 0x0?)
    	/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites/specs.go:242 +0x6dc
... skipping 12 lines ...

  There were additional failures detected after the initial failure:
    [PANICKED]
    Test Panicked
    In [DeferCleanup (Each)] at: /usr/local/go/src/runtime/panic.go:260

    runtime error: invalid memory address or nil pointer dereference

    Full Stack Trace
      k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo.func1()
      	/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:274 +0x5c
      k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo(0xc0004e62d0)
      	/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:271 +0x139
... skipping 231 lines ...
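
The secondary [PANICKED] failure attached to each of these tests is not a second driver bug: after the PVC timeout, the framework's DeferCleanup tries to dump namespace information and hits a nil pointer in dumpNamespaceInfo, presumably because the failed setup left part of the framework state uninitialized. A hypothetical guard of the assumed shape (names are illustrative, not the framework's actual API):

package main

import (
    "log"

    "k8s.io/client-go/kubernetes"
)

// safeDumpNamespaceInfo skips the dump when the client or dump function was
// never initialized, instead of panicking during cleanup.
func safeDumpNamespaceInfo(cs kubernetes.Interface, ns string, dump func(kubernetes.Interface, string)) {
    if cs == nil || dump == nil {
        log.Printf("skipping namespace dump for %q: client or dump func is nil", ns)
        return
    }
    dump(cs, ns)
}
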
Mar 10 21:27:56.297: INFO: PersistentVolumeClaim pvc-mlnmg found but phase is Pending instead of Bound.
Mar 10 21:27:58.406: INFO: PersistentVolumeClaim pvc-mlnmg found but phase is Pending instead of Bound.
Mar 10 21:28:00.515: INFO: PersistentVolumeClaim pvc-mlnmg found but phase is Pending instead of Bound.
Mar 10 21:28:02.625: INFO: PersistentVolumeClaim pvc-mlnmg found but phase is Pending instead of Bound.
Mar 10 21:28:04.735: INFO: PersistentVolumeClaim pvc-mlnmg found but phase is Pending instead of Bound.
Mar 10 21:28:06.844: INFO: PersistentVolumeClaim pvc-mlnmg found but phase is Pending instead of Bound.
Mar 10 21:28:08.845: INFO: Unexpected error: 
    <*errors.errorString | 0xc0000f4d20>: {
        s: "PersistentVolumeClaims [pvc-mlnmg] not all in phase Bound within 5m0s",
    }
Mar 10 21:28:08.845: FAIL: PersistentVolumeClaims [pvc-mlnmg] not all in phase Bound within 5m0s

Full Stack Trace
sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites.(*TestPersistentVolumeClaim).WaitForBound(_)
	/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites/testsuites.go:221 +0x19d
sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites.(*VolumeDetails).SetupDynamicPersistentVolumeClaim(0xc0008dbc90, {0x2896668?, 0xc0000ff040}, 0xc0006cb760, {0x7f12c4418a50, 0xc00013ae70}, 0x0?)
	/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites/specs.go:242 +0x6dc
... skipping 3 lines ...
	/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites/dynamically_provisioned_collocated_pod_tester.go:40 +0x153
sigs.k8s.io/azurefile-csi-driver/test/e2e.glob..func1.6()
	/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/dynamic_provisioning_test.go:273 +0x5ed
STEP: dump namespace information after failure 03/10/23 21:28:08.846
STEP: Destroying namespace "azurefile-1279" for this suite. 03/10/23 21:28:08.846
------------------------------
• [FAILED] [303.229 seconds]
Dynamic Provisioning
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/dynamic_provisioning_test.go:43
  [It] should create multiple PV objects, bind to PVCs and attach all to different pods on the same node [kubernetes.io/azure-file] [file.csi.azure.com] [Windows]
  /home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/dynamic_provisioning_test.go:224

  Begin Captured GinkgoWriter Output >>
... skipping 149 lines ...
    Mar 10 21:27:56.297: INFO: PersistentVolumeClaim pvc-mlnmg found but phase is Pending instead of Bound.
    Mar 10 21:27:58.406: INFO: PersistentVolumeClaim pvc-mlnmg found but phase is Pending instead of Bound.
    Mar 10 21:28:00.515: INFO: PersistentVolumeClaim pvc-mlnmg found but phase is Pending instead of Bound.
    Mar 10 21:28:02.625: INFO: PersistentVolumeClaim pvc-mlnmg found but phase is Pending instead of Bound.
    Mar 10 21:28:04.735: INFO: PersistentVolumeClaim pvc-mlnmg found but phase is Pending instead of Bound.
    Mar 10 21:28:06.844: INFO: PersistentVolumeClaim pvc-mlnmg found but phase is Pending instead of Bound.
    Mar 10 21:28:08.845: INFO: Unexpected error: 
        <*errors.errorString | 0xc0000f4d20>: {
            s: "PersistentVolumeClaims [pvc-mlnmg] not all in phase Bound within 5m0s",
        }
    Mar 10 21:28:08.845: FAIL: PersistentVolumeClaims [pvc-mlnmg] not all in phase Bound within 5m0s

    Full Stack Trace
    sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites.(*TestPersistentVolumeClaim).WaitForBound(_)
    	/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites/testsuites.go:221 +0x19d
    sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites.(*VolumeDetails).SetupDynamicPersistentVolumeClaim(0xc0008dbc90, {0x2896668?, 0xc0000ff040}, 0xc0006cb760, {0x7f12c4418a50, 0xc00013ae70}, 0x0?)
    	/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites/specs.go:242 +0x6dc
... skipping 12 lines ...

  There were additional failures detected after the initial failure:
    [PANICKED]
    Test Panicked
    In [DeferCleanup (Each)] at: /usr/local/go/src/runtime/panic.go:260

    runtime error: invalid memory address or nil pointer dereference

    Full Stack Trace
      k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo.func1()
      	/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:274 +0x5c
      k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo(0xc0004e62d0)
      	/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:271 +0x139
... skipping 161 lines ...
Mar 10 21:32:59.752: INFO: PersistentVolumeClaim pvc-ttndg found but phase is Pending instead of Bound.
Mar 10 21:33:01.862: INFO: PersistentVolumeClaim pvc-ttndg found but phase is Pending instead of Bound.
Mar 10 21:33:03.971: INFO: PersistentVolumeClaim pvc-ttndg found but phase is Pending instead of Bound.
Mar 10 21:33:06.081: INFO: PersistentVolumeClaim pvc-ttndg found but phase is Pending instead of Bound.
Mar 10 21:33:08.192: INFO: PersistentVolumeClaim pvc-ttndg found but phase is Pending instead of Bound.
Mar 10 21:33:10.303: INFO: PersistentVolumeClaim pvc-ttndg found but phase is Pending instead of Bound.
Mar 10 21:33:12.304: INFO: Unexpected error: 
    <*errors.errorString | 0xc0005347c0>: {
        s: "PersistentVolumeClaims [pvc-ttndg] not all in phase Bound within 5m0s",
    }
Mar 10 21:33:12.305: FAIL: PersistentVolumeClaims [pvc-ttndg] not all in phase Bound within 5m0s

Full Stack Trace
sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites.(*TestPersistentVolumeClaim).WaitForBound(_)
	/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites/testsuites.go:221 +0x19d
sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites.(*VolumeDetails).SetupDynamicPersistentVolumeClaim(0xc000a17be0, {0x2896668?, 0xc000683040}, 0xc0006db8c0, {0x7f12c4418a50, 0xc00013ae70}, 0x0?)
	/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites/specs.go:242 +0x6dc
... skipping 3 lines ...
	/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites/dynamically_provisioned_read_only_volume_tester.go:48 +0x13c
sigs.k8s.io/azurefile-csi-driver/test/e2e.glob..func1.7()
	/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/dynamic_provisioning_test.go:308 +0x365
STEP: dump namespace information after failure 03/10/23 21:33:12.306
STEP: Destroying namespace "azurefile-8754" for this suite. 03/10/23 21:33:12.306
------------------------------
• [FAILED] [303.460 seconds]
Dynamic Provisioning
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/dynamic_provisioning_test.go:43
  [It] should create a volume on demand and mount it as readOnly in a pod [kubernetes.io/azure-file] [file.csi.azure.com] [Windows]
  /home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/dynamic_provisioning_test.go:277

  Begin Captured GinkgoWriter Output >>
... skipping 149 lines ...
    Mar 10 21:32:59.752: INFO: PersistentVolumeClaim pvc-ttndg found but phase is Pending instead of Bound.
    Mar 10 21:33:01.862: INFO: PersistentVolumeClaim pvc-ttndg found but phase is Pending instead of Bound.
    Mar 10 21:33:03.971: INFO: PersistentVolumeClaim pvc-ttndg found but phase is Pending instead of Bound.
    Mar 10 21:33:06.081: INFO: PersistentVolumeClaim pvc-ttndg found but phase is Pending instead of Bound.
    Mar 10 21:33:08.192: INFO: PersistentVolumeClaim pvc-ttndg found but phase is Pending instead of Bound.
    Mar 10 21:33:10.303: INFO: PersistentVolumeClaim pvc-ttndg found but phase is Pending instead of Bound.
    Mar 10 21:33:12.304: INFO: Unexpected error: 
        <*errors.errorString | 0xc0005347c0>: {
            s: "PersistentVolumeClaims [pvc-ttndg] not all in phase Bound within 5m0s",
        }
    Mar 10 21:33:12.305: FAIL: PersistentVolumeClaims [pvc-ttndg] not all in phase Bound within 5m0s

    Full Stack Trace
    sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites.(*TestPersistentVolumeClaim).WaitForBound(_)
    	/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites/testsuites.go:221 +0x19d
    sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites.(*VolumeDetails).SetupDynamicPersistentVolumeClaim(0xc000a17be0, {0x2896668?, 0xc000683040}, 0xc0006db8c0, {0x7f12c4418a50, 0xc00013ae70}, 0x0?)
    	/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites/specs.go:242 +0x6dc
... skipping 12 lines ...

  There were additional failures detected after the initial failure:
    [PANICKED]
    Test Panicked
    In [DeferCleanup (Each)] at: /usr/local/go/src/runtime/panic.go:260

    runtime error: invalid memory address or nil pointer dereference

    Full Stack Trace
      k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo.func1()
      	/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:274 +0x5c
      k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo(0xc0004e62d0)
      	/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:271 +0x139
... skipping 161 lines ...
Mar 10 21:38:03.187: INFO: PersistentVolumeClaim pvc-pqz9m found but phase is Pending instead of Bound.
Mar 10 21:38:05.297: INFO: PersistentVolumeClaim pvc-pqz9m found but phase is Pending instead of Bound.
Mar 10 21:38:07.407: INFO: PersistentVolumeClaim pvc-pqz9m found but phase is Pending instead of Bound.
Mar 10 21:38:09.517: INFO: PersistentVolumeClaim pvc-pqz9m found but phase is Pending instead of Bound.
Mar 10 21:38:11.628: INFO: PersistentVolumeClaim pvc-pqz9m found but phase is Pending instead of Bound.
Mar 10 21:38:13.738: INFO: PersistentVolumeClaim pvc-pqz9m found but phase is Pending instead of Bound.
Mar 10 21:38:15.738: INFO: Unexpected error: 
    <*errors.errorString | 0xc0004b3460>: {
        s: "PersistentVolumeClaims [pvc-pqz9m] not all in phase Bound within 5m0s",
    }
Mar 10 21:38:15.739: FAIL: PersistentVolumeClaims [pvc-pqz9m] not all in phase Bound within 5m0s

Full Stack Trace
sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites.(*TestPersistentVolumeClaim).WaitForBound(_)
	/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites/testsuites.go:221 +0x19d
sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites.(*PodDetails).SetupDeployment(0xc000a13ea8, {0x2896668?, 0xc0000ff040}, 0xc000d36000, {0x7f12c4418a50, 0xc00013ae70}, 0x7f12ee3bb5b8?)
	/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites/specs.go:185 +0x495
sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites.(*DynamicallyProvisionedDeletePodTest).Run(0xc000a13e98, {0x2896668?, 0xc0000ff040?}, 0x10?)
	/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites/dynamically_provisioned_delete_pod_tester.go:45 +0x55
sigs.k8s.io/azurefile-csi-driver/test/e2e.glob..func1.8()
	/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/dynamic_provisioning_test.go:345 +0x434
STEP: dump namespace information after failure 03/10/23 21:38:15.739
STEP: Destroying namespace "azurefile-3281" for this suite. 03/10/23 21:38:15.74
------------------------------
• [FAILED] [303.431 seconds]
Dynamic Provisioning
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/dynamic_provisioning_test.go:43
  [It] should create a deployment object, write and read to it, delete the pod and write and read to it again [kubernetes.io/azure-file] [file.csi.azure.com] [Windows]
  /home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/dynamic_provisioning_test.go:311

  Begin Captured GinkgoWriter Output >>
... skipping 149 lines ...
    Mar 10 21:38:03.187: INFO: PersistentVolumeClaim pvc-pqz9m found but phase is Pending instead of Bound.
    Mar 10 21:38:05.297: INFO: PersistentVolumeClaim pvc-pqz9m found but phase is Pending instead of Bound.
    Mar 10 21:38:07.407: INFO: PersistentVolumeClaim pvc-pqz9m found but phase is Pending instead of Bound.
    Mar 10 21:38:09.517: INFO: PersistentVolumeClaim pvc-pqz9m found but phase is Pending instead of Bound.
    Mar 10 21:38:11.628: INFO: PersistentVolumeClaim pvc-pqz9m found but phase is Pending instead of Bound.
    Mar 10 21:38:13.738: INFO: PersistentVolumeClaim pvc-pqz9m found but phase is Pending instead of Bound.
    Mar 10 21:38:15.738: INFO: Unexpected error: 
        <*errors.errorString | 0xc0004b3460>: {
            s: "PersistentVolumeClaims [pvc-pqz9m] not all in phase Bound within 5m0s",
        }
    Mar 10 21:38:15.739: FAIL: PersistentVolumeClaims [pvc-pqz9m] not all in phase Bound within 5m0s

    Full Stack Trace
    sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites.(*TestPersistentVolumeClaim).WaitForBound(_)
    	/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites/testsuites.go:221 +0x19d
    sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites.(*PodDetails).SetupDeployment(0xc000a13ea8, {0x2896668?, 0xc0000ff040}, 0xc000d36000, {0x7f12c4418a50, 0xc00013ae70}, 0x7f12ee3bb5b8?)
    	/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites/specs.go:185 +0x495
... skipping 10 lines ...

  There were additional failures detected after the initial failure:
    [PANICKED]
    Test Panicked
    In [DeferCleanup (Each)] at: /usr/local/go/src/runtime/panic.go:260

    runtime error: invalid memory address or nil pointer dereference

    Full Stack Trace
      k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo.func1()
      	/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:274 +0x5c
      k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo(0xc0004e62d0)
      	/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:271 +0x139
... skipping 161 lines ...
Mar 10 21:43:06.540: INFO: PersistentVolumeClaim pvc-7h4gc found but phase is Pending instead of Bound.
Mar 10 21:43:08.649: INFO: PersistentVolumeClaim pvc-7h4gc found but phase is Pending instead of Bound.
Mar 10 21:43:10.759: INFO: PersistentVolumeClaim pvc-7h4gc found but phase is Pending instead of Bound.
Mar 10 21:43:12.868: INFO: PersistentVolumeClaim pvc-7h4gc found but phase is Pending instead of Bound.
Mar 10 21:43:14.978: INFO: PersistentVolumeClaim pvc-7h4gc found but phase is Pending instead of Bound.
Mar 10 21:43:17.088: INFO: PersistentVolumeClaim pvc-7h4gc found but phase is Pending instead of Bound.
Mar 10 21:43:19.089: INFO: Unexpected error: 
    <*errors.errorString | 0xc0004bf940>: {
        s: "PersistentVolumeClaims [pvc-7h4gc] not all in phase Bound within 5m0s",
    }
Mar 10 21:43:19.090: FAIL: PersistentVolumeClaims [pvc-7h4gc] not all in phase Bound within 5m0s

Full Stack Trace
sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites.(*TestPersistentVolumeClaim).WaitForBound(_)
	/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites/testsuites.go:221 +0x19d
sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites.(*VolumeDetails).SetupDynamicPersistentVolumeClaim(0xc0009f5d90, {0x2896668?, 0xc0000ff6c0}, 0xc000b6de40, {0x7f12c4418a50, 0xc00013ae70}, 0xc000c06200?)
	/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites/specs.go:242 +0x6dc
sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites.(*DynamicallyProvisionedReclaimPolicyTest).Run(0xc0009f5ef8, {0x2896668, 0xc0000ff6c0}, 0x7?)
	/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites/dynamically_provisioned_reclaim_policy_tester.go:38 +0xd9
sigs.k8s.io/azurefile-csi-driver/test/e2e.glob..func1.9()
	/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/dynamic_provisioning_test.go:369 +0x285
STEP: dump namespace information after failure 03/10/23 21:43:19.09
STEP: Destroying namespace "azurefile-1826" for this suite. 03/10/23 21:43:19.09
------------------------------
• [FAILED] [303.352 seconds]
Dynamic Provisioning
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/dynamic_provisioning_test.go:43
  [It] should delete PV with reclaimPolicy "Delete" [kubernetes.io/azure-file] [file.csi.azure.com] [Windows]
  /home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/dynamic_provisioning_test.go:348

  Begin Captured GinkgoWriter Output >>
... skipping 149 lines ...
    Mar 10 21:43:06.540: INFO: PersistentVolumeClaim pvc-7h4gc found but phase is Pending instead of Bound.
    Mar 10 21:43:08.649: INFO: PersistentVolumeClaim pvc-7h4gc found but phase is Pending instead of Bound.
    Mar 10 21:43:10.759: INFO: PersistentVolumeClaim pvc-7h4gc found but phase is Pending instead of Bound.
    Mar 10 21:43:12.868: INFO: PersistentVolumeClaim pvc-7h4gc found but phase is Pending instead of Bound.
    Mar 10 21:43:14.978: INFO: PersistentVolumeClaim pvc-7h4gc found but phase is Pending instead of Bound.
    Mar 10 21:43:17.088: INFO: PersistentVolumeClaim pvc-7h4gc found but phase is Pending instead of Bound.
    Mar 10 21:43:19.089: INFO: Unexpected error: 
        <*errors.errorString | 0xc0004bf940>: {
            s: "PersistentVolumeClaims [pvc-7h4gc] not all in phase Bound within 5m0s",
        }
    Mar 10 21:43:19.090: FAIL: PersistentVolumeClaims [pvc-7h4gc] not all in phase Bound within 5m0s

    Full Stack Trace
    sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites.(*TestPersistentVolumeClaim).WaitForBound(_)
    	/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites/testsuites.go:221 +0x19d
    sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites.(*VolumeDetails).SetupDynamicPersistentVolumeClaim(0xc0009f5d90, {0x2896668?, 0xc0000ff6c0}, 0xc000b6de40, {0x7f12c4418a50, 0xc00013ae70}, 0xc000c06200?)
    	/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites/specs.go:242 +0x6dc
... skipping 10 lines ...

  There were additional failures detected after the initial failure:
    [PANICKED]
    Test Panicked
    In [DeferCleanup (Each)] at: /usr/local/go/src/runtime/panic.go:260

    runtime error: invalid memory address or nil pointer dereference

    Full Stack Trace
      k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo.func1()
      	/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:274 +0x5c
      k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo(0xc0004e62d0)
      	/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:271 +0x139
... skipping 198 lines ...
Mar 10 21:48:13.316: INFO: PersistentVolumeClaim pvc-5vhgt found but phase is Pending instead of Bound.
Mar 10 21:48:15.426: INFO: PersistentVolumeClaim pvc-5vhgt found but phase is Pending instead of Bound.
Mar 10 21:48:17.536: INFO: PersistentVolumeClaim pvc-5vhgt found but phase is Pending instead of Bound.
Mar 10 21:48:19.645: INFO: PersistentVolumeClaim pvc-5vhgt found but phase is Pending instead of Bound.
Mar 10 21:48:21.756: INFO: PersistentVolumeClaim pvc-5vhgt found but phase is Pending instead of Bound.
Mar 10 21:48:23.869: INFO: PersistentVolumeClaim pvc-5vhgt found but phase is Pending instead of Bound.
Mar 10 21:48:25.870: INFO: Unexpected error: 
    <*errors.errorString | 0xc0005b41e0>: {
        s: "PersistentVolumeClaims [pvc-5vhgt] not all in phase Bound within 5m0s",
    }
Mar 10 21:48:25.871: FAIL: PersistentVolumeClaims [pvc-5vhgt] not all in phase Bound within 5m0s

Full Stack Trace
sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites.(*TestPersistentVolumeClaim).WaitForBound(_)
	/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites/testsuites.go:221 +0x19d
sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites.(*VolumeDetails).SetupDynamicPersistentVolumeClaim(0xc0006d3788, {0x2896668?, 0xc0000ff860}, 0xc0005e5e40, {0x7f12c4418a50, 0xc00013ae70}, 0x0?)
	/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites/specs.go:242 +0x6dc
... skipping 3 lines ...
	/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites/dynamically_provisioned_resize_volume_tester.go:64 +0x10c
sigs.k8s.io/azurefile-csi-driver/test/e2e.glob..func1.11()
	/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/dynamic_provisioning_test.go:426 +0x2f5
STEP: dump namespace information after failure 03/10/23 21:48:25.871
STEP: Destroying namespace "azurefile-6378" for this suite. 03/10/23 21:48:25.872
------------------------------
• [FAILED] [303.339 seconds]
Dynamic Provisioning
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/dynamic_provisioning_test.go:43
  [It] should create a volume on demand and resize it [kubernetes.io/azure-file] [file.csi.azure.com] [Windows]
  /home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/dynamic_provisioning_test.go:397

  Begin Captured GinkgoWriter Output >>
... skipping 149 lines ...
    Mar 10 21:48:13.316: INFO: PersistentVolumeClaim pvc-5vhgt found but phase is Pending instead of Bound.
    Mar 10 21:48:15.426: INFO: PersistentVolumeClaim pvc-5vhgt found but phase is Pending instead of Bound.
    Mar 10 21:48:17.536: INFO: PersistentVolumeClaim pvc-5vhgt found but phase is Pending instead of Bound.
    Mar 10 21:48:19.645: INFO: PersistentVolumeClaim pvc-5vhgt found but phase is Pending instead of Bound.
    Mar 10 21:48:21.756: INFO: PersistentVolumeClaim pvc-5vhgt found but phase is Pending instead of Bound.
    Mar 10 21:48:23.869: INFO: PersistentVolumeClaim pvc-5vhgt found but phase is Pending instead of Bound.
    Mar 10 21:48:25.870: INFO: Unexpected error: 
        <*errors.errorString | 0xc0005b41e0>: {
            s: "PersistentVolumeClaims [pvc-5vhgt] not all in phase Bound within 5m0s",
        }
    Mar 10 21:48:25.871: FAIL: PersistentVolumeClaims [pvc-5vhgt] not all in phase Bound within 5m0s

    Full Stack Trace
    sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites.(*TestPersistentVolumeClaim).WaitForBound(_)
    	/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites/testsuites.go:221 +0x19d
    sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites.(*VolumeDetails).SetupDynamicPersistentVolumeClaim(0xc0006d3788, {0x2896668?, 0xc0000ff860}, 0xc0005e5e40, {0x7f12c4418a50, 0xc00013ae70}, 0x0?)
    	/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites/specs.go:242 +0x6dc
... skipping 12 lines ...

  There were additional failures detected after the initial failure:
    [PANICKED]
    Test Panicked
    In [DeferCleanup (Each)] at: /usr/local/go/src/runtime/panic.go:260

    runtime error: invalid memory address or nil pointer dereference

    Full Stack Trace
      k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo.func1()
      	/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:274 +0x5c
      k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo(0xc0004e62d0)
      	/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:271 +0x139
... skipping 2622 lines ...
I0310 21:11:27.881008       1 tlsconfig.go:200] "Loaded serving cert" certName="Generated self signed cert" certDetail="\"localhost@1678482687\" [serving] validServingFor=[127.0.0.1,127.0.0.1,localhost] issuer=\"localhost-ca@1678482687\" (2023-03-10 20:11:27 +0000 UTC to 2024-03-09 20:11:27 +0000 UTC (now=2023-03-10 21:11:27.880913507 +0000 UTC))"
I0310 21:11:27.881155       1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1678482687\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1678482687\" (2023-03-10 20:11:27 +0000 UTC to 2024-03-09 20:11:27 +0000 UTC (now=2023-03-10 21:11:27.881135015 +0000 UTC))"
I0310 21:11:27.881182       1 secure_serving.go:200] Serving securely on 127.0.0.1:10257
I0310 21:11:27.881353       1 leaderelection.go:248] attempting to acquire leader lease kube-system/kube-controller-manager...
I0310 21:11:27.881921       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0310 21:11:27.882014       1 dynamic_cafile_content.go:156] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
E0310 21:11:31.797425       1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: leases.coordination.k8s.io "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-system"
I0310 21:11:31.797531       1 leaderelection.go:253] failed to acquire lease kube-system/kube-controller-manager
I0310 21:11:34.591350       1 leaderelection.go:258] successfully acquired lease kube-system/kube-controller-manager
I0310 21:11:34.591578       1 event.go:294] "Event occurred" object="kube-system/kube-controller-manager" kind="Lease" apiVersion="coordination.k8s.io/v1" type="Normal" reason="LeaderElection" message="capz-k9b0el-control-plane-gfrn9_1da46176-9396-4f44-8954-11c811421b6b became leader"
I0310 21:11:34.696665       1 request.go:617] Waited for 96.759752ms due to client-side throttling, not priority and fairness, request: GET:https://10.0.0.4:6443/apis/flowcontrol.apiserver.k8s.io/v1beta2
I0310 21:11:34.698935       1 shared_informer.go:240] Waiting for caches to sync for tokens
I0310 21:11:34.698936       1 controllermanager.go:576] Starting "podgc"
I0310 21:11:34.699174       1 reflector.go:219] Starting reflector *v1.Secret (19h25m26.549207731s) from k8s.io/client-go/informers/factory.go:134
... skipping 36 lines ...
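
The "forbidden" lease error above, followed a few seconds later by "successfully acquired lease", is the normal startup race: leader election simply retries until the RBAC bindings for the lease exist. A compact sketch of the client-go loop involved (the timing values are assumed defaults, not read from this job):

package main

import (
    "context"
    "time"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/leaderelection"
    "k8s.io/client-go/tools/leaderelection/resourcelock"
)

// runWithLease keeps retrying acquisition of the kube-controller-manager
// lease; transient errors (like the forbidden above) are logged and retried.
func runWithLease(ctx context.Context, cs kubernetes.Interface, id string, run func(context.Context)) {
    lock := &resourcelock.LeaseLock{
        LeaseMeta:  metav1.ObjectMeta{Namespace: "kube-system", Name: "kube-controller-manager"},
        Client:     cs.CoordinationV1(),
        LockConfig: resourcelock.ResourceLockConfig{Identity: id},
    }
    leaderelection.RunOrDie(ctx, leaderelection.LeaderElectionConfig{
        Lock:          lock,
        LeaseDuration: 15 * time.Second, // assumed defaults
        RenewDeadline: 10 * time.Second,
        RetryPeriod:   2 * time.Second,
        Callbacks: leaderelection.LeaderCallbacks{
            OnStartedLeading: run,
            OnStoppedLeading: func() {},
        },
    })
}
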
I0310 21:11:34.788025       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/vsphere-volume"
I0310 21:11:34.788040       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
I0310 21:11:34.788054       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/azure-file"
I0310 21:11:34.788063       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/flocker"
I0310 21:11:34.788079       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
I0310 21:11:34.788087       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/storageos"
I0310 21:11:34.788122       1 csi_plugin.go:262] Cast from VolumeHost to KubeletVolumeHost failed. Skipping CSINode initialization, not running on kubelet
I0310 21:11:34.788131       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/csi"
I0310 21:11:34.788182       1 controllermanager.go:605] Started "persistentvolume-binder"
I0310 21:11:34.788195       1 controllermanager.go:576] Starting "attachdetach"
I0310 21:11:34.788309       1 pv_controller_base.go:310] Starting persistent volume controller
I0310 21:11:34.788321       1 shared_informer.go:240] Waiting for caches to sync for persistent volume
I0310 21:11:34.799760       1 shared_informer.go:270] caches populated
... skipping 5 lines ...
I0310 21:11:34.810194       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/rbd"
I0310 21:11:34.810296       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/aws-ebs"
I0310 21:11:34.811015       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/gce-pd"
I0310 21:11:34.811170       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/storageos"
I0310 21:11:34.811322       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/fc"
I0310 21:11:34.811426       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
I0310 21:11:34.811544       1 csi_plugin.go:262] Cast from VolumeHost to KubeletVolumeHost failed. Skipping CSINode initialization, not running on kubelet
I0310 21:11:34.811642       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/csi"
I0310 21:11:34.811864       1 controllermanager.go:605] Started "attachdetach"
I0310 21:11:34.812695       1 controllermanager.go:576] Starting "replicationcontroller"
I0310 21:11:34.813013       1 attach_detach_controller.go:328] Starting attach detach controller
I0310 21:11:34.813727       1 shared_informer.go:240] Waiting for caches to sync for attach detach
I0310 21:11:34.849517       1 controllermanager.go:605] Started "replicationcontroller"
... skipping 271 lines ...
I0310 21:11:38.644930       1 shared_informer.go:247] Caches are synced for service account 
I0310 21:11:38.649816       1 shared_informer.go:270] caches populated
I0310 21:11:38.651487       1 shared_informer.go:247] Caches are synced for ReplicationController 
I0310 21:11:38.650515       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-k9b0el-control-plane-gfrn9"
I0310 21:11:38.650523       1 shared_informer.go:270] caches populated
I0310 21:11:38.651451       1 shared_informer.go:270] caches populated
W0310 21:11:38.652289       1 actual_state_of_world.go:539] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="capz-k9b0el-control-plane-gfrn9" does not exist
I0310 21:11:38.652478       1 shared_informer.go:247] Caches are synced for certificate-csrapproving 
I0310 21:11:38.653136       1 shared_informer.go:247] Caches are synced for TTL 
I0310 21:11:38.656157       1 shared_informer.go:270] caches populated
I0310 21:11:38.656422       1 shared_informer.go:247] Caches are synced for PV protection 
I0310 21:11:38.657216       1 shared_informer.go:270] caches populated
I0310 21:11:38.657358       1 shared_informer.go:247] Caches are synced for job 
... skipping 302 lines ...
I0310 21:11:39.743340       1 controller_utils.go:206] Controller kube-system/coredns-bd6b6df9f either never recorded expectations, or the ttl expired.
I0310 21:11:39.743469       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:2, del:0, key:"kube-system/coredns-bd6b6df9f", timestamp:time.Time{wall:0xc0fb0522ec50640a, ext:12807250186, loc:(*time.Location)(0x72c0b80)}}
I0310 21:11:39.743595       1 replica_set.go:563] "Too few replicas" replicaSet="kube-system/coredns-bd6b6df9f" need=2 creating=2
I0310 21:11:39.760048       1 deployment_controller.go:176] "Updating deployment" deployment="kube-system/coredns"
I0310 21:11:39.765133       1 deployment_util.go:775] Deployment "coredns" timed out (false) [last progress check: 2023-03-10 21:11:39.742143657 +0000 UTC m=+12.805927849 - now: 2023-03-10 21:11:39.765125292 +0000 UTC m=+12.828909384]
I0310 21:11:39.778258       1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/coredns" duration="1.018302801s"
I0310 21:11:39.778438       1 deployment_controller.go:490] "Error syncing deployment" deployment="kube-system/coredns" err="Operation cannot be fulfilled on deployments.apps \"coredns\": the object has been modified; please apply your changes to the latest version and try again"
I0310 21:11:39.778578       1 deployment_controller.go:576] "Started syncing deployment" deployment="kube-system/coredns" startTime="2023-03-10 21:11:39.778564263 +0000 UTC m=+12.842348355"
I0310 21:11:39.779241       1 deployment_util.go:775] Deployment "coredns" timed out (false) [last progress check: 2023-03-10 21:11:39 +0000 UTC - now: 2023-03-10 21:11:39.779236082 +0000 UTC m=+12.843020274]
I0310 21:11:39.798249       1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/coredns" duration="19.656143ms"
I0310 21:11:39.798470       1 deployment_controller.go:576] "Started syncing deployment" deployment="kube-system/coredns" startTime="2023-03-10 21:11:39.798453413 +0000 UTC m=+12.862237505"
I0310 21:11:39.799314       1 deployment_util.go:775] Deployment "coredns" timed out (false) [last progress check: 2023-03-10 21:11:39 +0000 UTC - now: 2023-03-10 21:11:39.799216234 +0000 UTC m=+12.863000326]
I0310 21:11:39.799761       1 deployment_controller.go:176] "Updating deployment" deployment="kube-system/coredns"
I0310 21:11:39.810968       1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/coredns" duration="12.486945ms"
I0310 21:11:39.811170       1 deployment_controller.go:490] "Error syncing deployment" deployment="kube-system/coredns" err="Operation cannot be fulfilled on deployments.apps \"coredns\": the object has been modified; please apply your changes to the latest version and try again"
I0310 21:11:39.811320       1 deployment_controller.go:576] "Started syncing deployment" deployment="kube-system/coredns" startTime="2023-03-10 21:11:39.811304568 +0000 UTC m=+12.875088660"
I0310 21:11:39.811978       1 deployment_util.go:775] Deployment "coredns" timed out (false) [last progress check: 2023-03-10 21:11:39 +0000 UTC - now: 2023-03-10 21:11:39.811971886 +0000 UTC m=+12.875756078]
I0310 21:11:39.812165       1 progress.go:195] Queueing up deployment "coredns" for a progress check after 599s
I0310 21:11:39.812376       1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/coredns" duration="1.060529ms"
I0310 21:11:39.816704       1 deployment_controller.go:576] "Started syncing deployment" deployment="kube-system/coredns" startTime="2023-03-10 21:11:39.816645115 +0000 UTC m=+12.880429207"
I0310 21:11:39.817367       1 deployment_util.go:775] Deployment "coredns" timed out (false) [last progress check: 2023-03-10 21:11:39 +0000 UTC - now: 2023-03-10 21:11:39.817361535 +0000 UTC m=+12.881145727]
... skipping 164 lines ...
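
The repeated "Operation cannot be fulfilled on deployments.apps ... the object has been modified" messages above are ordinary optimistic-concurrency conflicts, not failures: the controller re-reads the object and syncs again, which is why each error is followed by another successful "Finished syncing deployment". The standard client-go pattern for such writes, as a sketch (the deployment-scaling example is illustrative):

package main

import (
    "context"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/util/retry"
)

// scaleDeployment re-reads and re-applies the update whenever the server
// reports a resourceVersion conflict.
func scaleDeployment(ctx context.Context, cs kubernetes.Interface, ns, name string, replicas int32) error {
    return retry.RetryOnConflict(retry.DefaultRetry, func() error {
        d, err := cs.AppsV1().Deployments(ns).Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return err
        }
        d.Spec.Replicas = &replicas
        _, err = cs.AppsV1().Deployments(ns).Update(ctx, d, metav1.UpdateOptions{})
        return err // a Conflict here triggers another Get+Update attempt
    })
}
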
I0310 21:11:50.417838       1 event.go:294] "Event occurred" object="calico-system/tigera-operator" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set tigera-operator-6bbf97c9cf to 1"
I0310 21:11:50.418026       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:1, del:0, key:"calico-system/tigera-operator-6bbf97c9cf", timestamp:time.Time{wall:0xc0fb052598ea849d, ext:23481806749, loc:(*time.Location)(0x72c0b80)}}
I0310 21:11:50.418267       1 replica_set.go:563] "Too few replicas" replicaSet="calico-system/tigera-operator-6bbf97c9cf" need=1 creating=1
I0310 21:11:50.425626       1 deployment_controller.go:176] "Updating deployment" deployment="calico-system/tigera-operator"
I0310 21:11:50.426762       1 deployment_util.go:775] Deployment "tigera-operator" timed out (false) [last progress check: 2023-03-10 21:11:50.417358146 +0000 UTC m=+23.481142238 - now: 2023-03-10 21:11:50.426737408 +0000 UTC m=+23.490521600]
I0310 21:11:50.434676       1 deployment_controller.go:578] "Finished syncing deployment" deployment="calico-system/tigera-operator" duration="28.618594ms"
I0310 21:11:50.434816       1 deployment_controller.go:490] "Error syncing deployment" deployment="calico-system/tigera-operator" err="Operation cannot be fulfilled on deployments.apps \"tigera-operator\": the object has been modified; please apply your changes to the latest version and try again"
I0310 21:11:50.434927       1 deployment_controller.go:576] "Started syncing deployment" deployment="calico-system/tigera-operator" startTime="2023-03-10 21:11:50.434912649 +0000 UTC m=+23.498696741"
I0310 21:11:50.435296       1 deployment_util.go:775] Deployment "tigera-operator" timed out (false) [last progress check: 2023-03-10 21:11:50 +0000 UTC - now: 2023-03-10 21:11:50.435290756 +0000 UTC m=+23.499074848]
I0310 21:11:50.435602       1 disruption.go:415] addPod called on pod "tigera-operator-6bbf97c9cf-5d44w"
I0310 21:11:50.435630       1 disruption.go:490] No PodDisruptionBudgets found for pod tigera-operator-6bbf97c9cf-5d44w, PodDisruptionBudget controller will avoid syncing.
I0310 21:11:50.435636       1 disruption.go:418] No matching pdb for pod "tigera-operator-6bbf97c9cf-5d44w"
I0310 21:11:50.435669       1 replica_set.go:380] Pod tigera-operator-6bbf97c9cf-5d44w created: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"tigera-operator-6bbf97c9cf-5d44w", GenerateName:"tigera-operator-6bbf97c9cf-", Namespace:"calico-system", SelfLink:"", UID:"300344dc-38dd-44c4-b04d-f9ba9de91705", ResourceVersion:"529", Generation:0, CreationTimestamp:time.Date(2023, time.March, 10, 21, 11, 50, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"tigera-operator", "name":"tigera-operator", "pod-template-hash":"6bbf97c9cf"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"apps/v1", Kind:"ReplicaSet", Name:"tigera-operator-6bbf97c9cf", UID:"be8346a1-2424-46d6-aa5b-b805a8f9549d", Controller:(*bool)(0xc0011223b7), BlockOwnerDeletion:(*bool)(0xc0011223b8)}}, Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:time.Date(2023, time.March, 10, 21, 11, 50, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001ab74b8), Subresource:""}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"var-lib-calico", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001ab74d0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"kube-api-access-zqg42", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), 
VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(0xc001a861e0), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"tigera-operator", Image:"quay.io/tigera/operator:v1.29.0", Command:[]string{"operator"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource{v1.EnvFromSource{Prefix:"", ConfigMapRef:(*v1.ConfigMapEnvSource)(0xc001ab7500), SecretRef:(*v1.SecretEnvSource)(nil)}}, Env:[]v1.EnvVar{v1.EnvVar{Name:"WATCH_NAMESPACE", Value:"", ValueFrom:(*v1.EnvVarSource)(nil)}, v1.EnvVar{Name:"POD_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc001a86260)}, v1.EnvVar{Name:"OPERATOR_NAME", Value:"tigera-operator", ValueFrom:(*v1.EnvVarSource)(nil)}, v1.EnvVar{Name:"TIGERA_OPERATOR_INIT_IMAGE_VERSION", Value:"v1.29.0", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"var-lib-calico", ReadOnly:true, MountPath:"/var/lib/calico", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-zqg42", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0011224d8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirstWithHostNet", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"tigera-operator", DeprecatedServiceAccount:"tigera-operator", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000246e00), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc00112253c), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc001122540), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc00143acc0), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil), OS:(*v1.PodOS)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition(nil), Message:"", Reason:"", 
NominatedNodeName:"", HostIP:"", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:<nil>, InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus(nil), QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}.
... skipping 11 lines ...
I0310 21:11:50.444527       1 controller_utils.go:122] "Update ready status of pods on node" node="capz-k9b0el-control-plane-gfrn9"
I0310 21:11:50.454109       1 deployment_controller.go:578] "Finished syncing deployment" deployment="calico-system/tigera-operator" duration="19.181631ms"
I0310 21:11:50.454261       1 deployment_controller.go:576] "Started syncing deployment" deployment="calico-system/tigera-operator" startTime="2023-03-10 21:11:50.454245383 +0000 UTC m=+23.518029575"
I0310 21:11:50.454653       1 deployment_util.go:775] Deployment "tigera-operator" timed out (false) [last progress check: 2023-03-10 21:11:50 +0000 UTC - now: 2023-03-10 21:11:50.45464759 +0000 UTC m=+23.518431782]
I0310 21:11:50.455014       1 deployment_controller.go:176] "Updating deployment" deployment="calico-system/tigera-operator"
I0310 21:11:50.459637       1 deployment_controller.go:578] "Finished syncing deployment" deployment="calico-system/tigera-operator" duration="5.380093ms"
I0310 21:11:50.459662       1 deployment_controller.go:490] "Error syncing deployment" deployment="calico-system/tigera-operator" err="Operation cannot be fulfilled on deployments.apps \"tigera-operator\": the object has been modified; please apply your changes to the latest version and try again"
I0310 21:11:50.459777       1 deployment_controller.go:576] "Started syncing deployment" deployment="calico-system/tigera-operator" startTime="2023-03-10 21:11:50.459719878 +0000 UTC m=+23.523504070"
I0310 21:11:50.460201       1 deployment_controller.go:281] "ReplicaSet updated" replicaSet="calico-system/tigera-operator-6bbf97c9cf"
I0310 21:11:50.460219       1 deployment_util.go:775] Deployment "tigera-operator" timed out (false) [last progress check: 2023-03-10 21:11:50 +0000 UTC - now: 2023-03-10 21:11:50.460215086 +0000 UTC m=+23.523999178]
I0310 21:11:50.460545       1 progress.go:195] Queueing up deployment "tigera-operator" for a progress check after 599s
I0310 21:11:50.460681       1 deployment_controller.go:578] "Finished syncing deployment" deployment="calico-system/tigera-operator" duration="949.716µs"
I0310 21:11:50.460876       1 deployment_controller.go:576] "Started syncing deployment" deployment="calico-system/tigera-operator" startTime="2023-03-10 21:11:50.460862597 +0000 UTC m=+23.524646789"
... skipping 148 lines ...
I0310 21:11:56.947845       1 replica_set.go:380] Pod cloud-controller-manager-84fcc5997b-nn677 created: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"cloud-controller-manager-84fcc5997b-nn677", GenerateName:"cloud-controller-manager-84fcc5997b-", Namespace:"kube-system", SelfLink:"", UID:"91635c24-6702-4d44-95b1-2ac45fdde2b2", ResourceVersion:"585", Generation:0, CreationTimestamp:time.Date(2023, time.March, 10, 21, 11, 56, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"component":"cloud-controller-manager", "pod-template-hash":"84fcc5997b", "tier":"control-plane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"apps/v1", Kind:"ReplicaSet", Name:"cloud-controller-manager-84fcc5997b", UID:"a003e852-a3fb-428c-93d8-3a613801a6c3", Controller:(*bool)(0xc001cd5897), BlockOwnerDeletion:(*bool)(0xc001cd5898)}}, Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:time.Date(2023, time.March, 10, 21, 11, 56, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc000e54060), Subresource:""}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"etc-kubernetes", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc000e54078), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"ssl-mount", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc000e540c0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"msi", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc000e540d8), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"kube-api-access-2fm4x", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(0xc000433a40), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"cloud-controller-manager", Image:"capzci.azurecr.io/azure-cloud-controller-manager:cd50a1b", Command:[]string{"cloud-controller-manager"}, Args:[]string{"--allocate-node-cidrs=true", "--cloud-config=/etc/kubernetes/azure.json", "--cloud-config-secret-name=", "--cloud-provider=azure", "--cluster-cidr=192.168.0.0/16", "--cluster-name=capz-k9b0el", "--configure-cloud-routes=true", "--controllers=*,-cloud-node", "--enable-dynamic-reloading=false", "--leader-elect=true", "--route-reconciliation-period=10s", "--secure-port=10268", "--v=4"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:4, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"4", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:2147483648, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"2Gi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:134217728, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"etc-kubernetes", ReadOnly:false, MountPath:"/etc/kubernetes", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"ssl-mount", ReadOnly:true, MountPath:"/etc/ssl", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"msi", ReadOnly:true, MountPath:"/var/lib/waagent/ManagedIdentity-Settings", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-2fm4x", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(0xc0016d6700), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001cd59c8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"node-role.kubernetes.io/control-plane":""}, ServiceAccountName:"cloud-controller-manager", DeprecatedServiceAccount:"cloud-controller-manager", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc00047ac40), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node-role.kubernetes.io/master", Operator:"", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node-role.kubernetes.io/control-plane", Operator:"", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node-role.kubernetes.io/etcd", Operator:"", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001cd5a20)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001cd5a40)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(0xc001cd5a48), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc001cd5a4c), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc0020d73a0), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint{v1.TopologySpreadConstraint{MaxSkew:1, TopologyKey:"kubernetes.io/hostname", WhenUnsatisfiable:"DoNotSchedule", LabelSelector:(*v1.LabelSelector)(0xc000433ac0)}}, SetHostnameAsFQDN:(*bool)(nil), OS:(*v1.PodOS)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition(nil), Message:"", Reason:"", NominatedNodeName:"", HostIP:"", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:<nil>, InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus(nil), QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}.
I0310 21:11:56.948115       1 controller_utils.go:240] Lowered expectations &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/cloud-controller-manager-84fcc5997b", timestamp:time.Time{wall:0xc0fb0527375cf782, ext:29992623746, loc:(*time.Location)(0x72c0b80)}}
I0310 21:11:56.948160       1 pvc_protection_controller.go:380] "Enqueuing PVCs for Pod" pod="kube-system/cloud-controller-manager-84fcc5997b-nn677" podUID=91635c24-6702-4d44-95b1-2ac45fdde2b2
I0310 21:11:56.948231       1 taint_manager.go:401] "Noticed pod update" pod="kube-system/cloud-controller-manager-84fcc5997b-nn677"
I0310 21:11:56.948248       1 deployment_controller.go:176] "Updating deployment" deployment="kube-system/cloud-controller-manager"
I0310 21:11:56.949799       1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/cloud-controller-manager" duration="33.687312ms"
I0310 21:11:56.949896       1 deployment_controller.go:490] "Error syncing deployment" deployment="kube-system/cloud-controller-manager" err="Operation cannot be fulfilled on deployments.apps \"cloud-controller-manager\": the object has been modified; please apply your changes to the latest version and try again"
I0310 21:11:56.949982       1 deployment_controller.go:576] "Started syncing deployment" deployment="kube-system/cloud-controller-manager" startTime="2023-03-10 21:11:56.949970038 +0000 UTC m=+30.013754230"
I0310 21:11:56.950642       1 deployment_util.go:775] Deployment "cloud-controller-manager" timed out (false) [last progress check: 2023-03-10 21:11:56 +0000 UTC - now: 2023-03-10 21:11:56.95063695 +0000 UTC m=+30.014421042]
I0310 21:11:56.955081       1 pvc_protection_controller.go:380] "Enqueuing PVCs for Pod" pod="kube-system/cloud-controller-manager-84fcc5997b-nn677" podUID=91635c24-6702-4d44-95b1-2ac45fdde2b2
I0310 21:11:56.955255       1 disruption.go:427] updatePod called on pod "cloud-controller-manager-84fcc5997b-nn677"
I0310 21:11:56.955182       1 replica_set.go:443] Pod cloud-controller-manager-84fcc5997b-nn677 updated, objectMeta {Name:cloud-controller-manager-84fcc5997b-nn677 GenerateName:cloud-controller-manager-84fcc5997b- Namespace:kube-system SelfLink: UID:91635c24-6702-4d44-95b1-2ac45fdde2b2 ResourceVersion:585 Generation:0 CreationTimestamp:2023-03-10 21:11:56 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[component:cloud-controller-manager pod-template-hash:84fcc5997b tier:control-plane] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:cloud-controller-manager-84fcc5997b UID:a003e852-a3fb-428c-93d8-3a613801a6c3 Controller:0xc001cd5897 BlockOwnerDeletion:0xc001cd5898}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2023-03-10 21:11:56 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:component":{},"f:pod-template-hash":{},"f:tier":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a003e852-a3fb-428c-93d8-3a613801a6c3\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"cloud-controller-manager\"}":{".":{},"f:args":{},"f:command":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:cpu":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/etc/kubernetes\"}":{".":{},"f:mountPath":{},"f:name":{}},"k:{\"mountPath\":\"/etc/ssl\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}},"k:{\"mountPath\":\"/var/lib/waagent/ManagedIdentity-Settings\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:hostNetwork":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{},"f:topologySpreadConstraints":{".":{},"k:{\"topologyKey\":\"kubernetes.io/hostname\",\"whenUnsatisfiable\":\"DoNotSchedule\"}":{".":{},"f:labelSelector":{},"f:maxSkew":{},"f:topologyKey":{},"f:whenUnsatisfiable":{}}},"f:volumes":{".":{},"k:{\"name\":\"etc-kubernetes\"}":{".":{},"f:hostPath":{".":{},"f:path":{},"f:type":{}},"f:name":{}},"k:{\"name\":\"msi\"}":{".":{},"f:hostPath":{".":{},"f:path":{},"f:type":{}},"f:name":{}},"k:{\"name\":\"ssl-mount\"}":{".":{},"f:hostPath":{".":{},"f:path":{},"f:type":{}},"f:name":{}}}}} Subresource:}]} -> {Name:cloud-controller-manager-84fcc5997b-nn677 GenerateName:cloud-controller-manager-84fcc5997b- Namespace:kube-system SelfLink: UID:91635c24-6702-4d44-95b1-2ac45fdde2b2 ResourceVersion:586 Generation:0 CreationTimestamp:2023-03-10 21:11:56 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[component:cloud-controller-manager pod-template-hash:84fcc5997b tier:control-plane] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:cloud-controller-manager-84fcc5997b UID:a003e852-a3fb-428c-93d8-3a613801a6c3 Controller:0xc001eca367 BlockOwnerDeletion:0xc001eca368}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2023-03-10 21:11:56 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:component":{},"f:pod-template-hash":{},"f:tier":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a003e852-a3fb-428c-93d8-3a613801a6c3\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"cloud-controller-manager\"}":{".":{},"f:args":{},"f:command":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:cpu":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/etc/kubernetes\"}":{".":{},"f:mountPath":{},"f:name":{}},"k:{\"mountPath\":\"/etc/ssl\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}},"k:{\"mountPath\":\"/var/lib/waagent/ManagedIdentity-Settings\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:hostNetwork":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{},"f:topologySpreadConstraints":{".":{},"k:{\"topologyKey\":\"kubernetes.io/hostname\",\"whenUnsatisfiable\":\"DoNotSchedule\"}":{".":{},"f:labelSelector":{},"f:maxSkew":{},"f:topologyKey":{},"f:whenUnsatisfiable":{}}},"f:volumes":{".":{},"k:{\"name\":\"etc-kubernetes\"}":{".":{},"f:hostPath":{".":{},"f:path":{},"f:type":{}},"f:name":{}},"k:{\"name\":\"msi\"}":{".":{},"f:hostPath":{".":{},"f:path":{},"f:type":{}},"f:name":{}},"k:{\"name\":\"ssl-mount\"}":{".":{},"f:hostPath":{".":{},"f:path":{},"f:type":{}},"f:name":{}}}}} Subresource:} {Manager:kube-scheduler Operation:Update APIVersion:v1 Time:2023-03-10 21:11:56 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{".":{},"k:{\"type\":\"PodScheduled\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} Subresource:status}]}.
I0310 21:11:56.955447       1 disruption.go:490] No PodDisruptionBudgets found for pod cloud-controller-manager-84fcc5997b-nn677, PodDisruptionBudget controller will avoid syncing.
... skipping 133 lines ...
I0310 21:11:59.107354       1 disruption.go:558] Finished syncing PodDisruptionBudget "calico-system/calico-typha" (28.601µs)
I0310 21:11:59.122714       1 garbagecollector.go:468] "Processing object" object="calico-system/calico-typha" objectUID=8141efc4-343d-4e4f-bd2d-2c54cc4cff4d kind="Deployment" virtual=false
I0310 21:11:59.122745       1 replica_set.go:653] Finished syncing ReplicaSet "calico-system/calico-typha-c89c74f79" (54.26436ms)
I0310 21:11:59.122790       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"calico-system/calico-typha-c89c74f79", timestamp:time.Time{wall:0xc0fb0527c4159145, ext:32132306401, loc:(*time.Location)(0x72c0b80)}}
I0310 21:11:59.122876       1 replica_set_utils.go:59] Updating status for : calico-system/calico-typha-c89c74f79, replicas 0->1 (need 1), fullyLabeledReplicas 0->1, readyReplicas 0->0, availableReplicas 0->0, sequence No: 0->1
I0310 21:11:59.123196       1 deployment_controller.go:578] "Finished syncing deployment" deployment="calico-system/calico-typha" duration="70.834454ms"
I0310 21:11:59.123215       1 deployment_controller.go:490] "Error syncing deployment" deployment="calico-system/calico-typha" err="Operation cannot be fulfilled on deployments.apps \"calico-typha\": the object has been modified; please apply your changes to the latest version and try again"
I0310 21:11:59.123241       1 deployment_controller.go:576] "Started syncing deployment" deployment="calico-system/calico-typha" startTime="2023-03-10 21:11:59.123228076 +0000 UTC m=+32.187012268"
I0310 21:11:59.123589       1 disruption.go:427] updatePod called on pod "calico-typha-c89c74f79-vqwk7"
I0310 21:11:59.123612       1 disruption.go:433] updatePod "calico-typha-c89c74f79-vqwk7" -> PDB "calico-typha"
I0310 21:11:59.123744       1 disruption.go:558] Finished syncing PodDisruptionBudget "calico-system/calico-typha" (20.8µs)
I0310 21:11:59.123896       1 replica_set.go:443] Pod calico-typha-c89c74f79-vqwk7 updated, objectMeta {Name:calico-typha-c89c74f79-vqwk7 GenerateName:calico-typha-c89c74f79- Namespace:calico-system SelfLink: UID:6e8929b7-f1c4-4061-bfe1-b8b1112c9dc0 ResourceVersion:626 Generation:0 CreationTimestamp:2023-03-10 21:11:59 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[app.kubernetes.io/name:calico-typha k8s-app:calico-typha pod-template-hash:c89c74f79] Annotations:map[hash.operator.tigera.io/system:bb4746872201725da2dea19756c475aa67d9c1e9 hash.operator.tigera.io/tigera-ca-private:ac4f54e7e99fefc6bf87494fdc87f3c5bcae9fd1 hash.operator.tigera.io/typha-certs:c8e92be4b7bcc1a48504b1592bcb42eec3ba5567] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:calico-typha-c89c74f79 UID:c2e74aaa-2339-43c1-83b5-38f587c50e10 Controller:0xc0013249d7 BlockOwnerDeletion:0xc0013249d8}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2023-03-10 21:11:59 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:annotations":{".":{},"f:hash.operator.tigera.io/system":{},"f:hash.operator.tigera.io/tigera-ca-private":{},"f:hash.operator.tigera.io/typha-certs":{}},"f:generateName":{},"f:labels":{".":{},"f:app.kubernetes.io/name":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c2e74aaa-2339-43c1-83b5-38f587c50e10\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"calico-typha\"}":{".":{},"f:env":{".":{},"k:{\"name\":\"KUBERNETES_SERVICE_HOST\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"KUBERNETES_SERVICE_PORT\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_CAFILE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_CLIENTCN\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_CONNECTIONREBALANCINGMODE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_DATASTORETYPE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_FIPSMODEENABLED\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_HEALTHENABLED\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_HEALTHPORT\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_K8SNAMESPACE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_LOGFILEPATH\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_LOGSEVERITYSCREEN\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_LOGSEVERITYSYS\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_SERVERCERTFILE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_SERVERKEYFILE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_SHUTDOWNTIMEOUTSECS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:host":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":5473,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:hostPort":{},"f:name":{},"f:protocol":{}}},"f:readinessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:host":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/etc/pki/tls/certs/\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}},"k:{\"mountPath\":\"/typha-certs\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:hostNetwork":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{},"f:volumes":{".":{},"k:{\"name\":\"tigera-ca-bundle\"}":{".":{},"f:configMap":{".":{},"f:defaultMode":{},"f:name":{}},"f:name":{}},"k:{\"name\":\"typha-certs\"}":{".":{},"f:name":{},"f:secret":{".":{},"f:defaultMode":{},"f:secretName":{}}}}}} Subresource:}]} -> {Name:calico-typha-c89c74f79-vqwk7 GenerateName:calico-typha-c89c74f79- Namespace:calico-system SelfLink: UID:6e8929b7-f1c4-4061-bfe1-b8b1112c9dc0 ResourceVersion:631 Generation:0 CreationTimestamp:2023-03-10 21:11:59 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[app.kubernetes.io/name:calico-typha k8s-app:calico-typha pod-template-hash:c89c74f79] Annotations:map[hash.operator.tigera.io/system:bb4746872201725da2dea19756c475aa67d9c1e9 hash.operator.tigera.io/tigera-ca-private:ac4f54e7e99fefc6bf87494fdc87f3c5bcae9fd1 hash.operator.tigera.io/typha-certs:c8e92be4b7bcc1a48504b1592bcb42eec3ba5567] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:calico-typha-c89c74f79 UID:c2e74aaa-2339-43c1-83b5-38f587c50e10 Controller:0xc001122780 BlockOwnerDeletion:0xc001122781}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2023-03-10 21:11:59 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:annotations":{".":{},"f:hash.operator.tigera.io/system":{},"f:hash.operator.tigera.io/tigera-ca-private":{},"f:hash.operator.tigera.io/typha-certs":{}},"f:generateName":{},"f:labels":{".":{},"f:app.kubernetes.io/name":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c2e74aaa-2339-43c1-83b5-38f587c50e10\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"calico-typha\"}":{".":{},"f:env":{".":{},"k:{\"name\":\"KUBERNETES_SERVICE_HOST\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"KUBERNETES_SERVICE_PORT\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_CAFILE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_CLIENTCN\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_CONNECTIONREBALANCINGMODE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_DATASTORETYPE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_FIPSMODEENABLED\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_HEALTHENABLED\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_HEALTHPORT\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_K8SNAMESPACE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_LOGFILEPATH\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_LOGSEVERITYSCREEN\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_LOGSEVERITYSYS\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_SERVERCERTFILE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_SERVERKEYFILE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_SHUTDOWNTIMEOUTSECS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:host":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":5473,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:hostPort":{},"f:name":{},"f:protocol":{}}},"f:readinessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:host":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/etc/pki/tls/certs/\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}},"k:{\"mountPath\":\"/typha-certs\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:hostNetwork":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{},"f:volumes":{".":{},"k:{\"name\":\"tigera-ca-bundle\"}":{".":{},"f:configMap":{".":{},"f:defaultMode":{},"f:name":{}},"f:name":{}},"k:{\"name\":\"typha-certs\"}":{".":{},"f:name":{},"f:secret":{".":{},"f:defaultMode":{},"f:secretName":{}}}}}} Subresource:}]}.
I0310 21:11:59.124062       1 taint_manager.go:401] "Noticed pod update" pod="calico-system/calico-typha-c89c74f79-vqwk7"
... skipping 183 lines ...
I0310 21:11:59.689919       1 garbagecollector.go:468] "Processing object" object="calico-system/calico-typha" objectUID=8141efc4-343d-4e4f-bd2d-2c54cc4cff4d kind="Deployment" virtual=false
I0310 21:11:59.695746       1 deployment_controller.go:281] "ReplicaSet updated" replicaSet="calico-system/calico-kube-controllers-fb49b9cf7"
I0310 21:11:59.696020       1 replica_set.go:653] Finished syncing ReplicaSet "calico-system/calico-kube-controllers-fb49b9cf7" (64.054233ms)
I0310 21:11:59.696054       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"calico-system/calico-kube-controllers-fb49b9cf7", timestamp:time.Time{wall:0xc0fb0527e5ad0b2b, ext:32695881771, loc:(*time.Location)(0x72c0b80)}}
I0310 21:11:59.696161       1 replica_set_utils.go:59] Updating status for : calico-system/calico-kube-controllers-fb49b9cf7, replicas 0->1 (need 1), fullyLabeledReplicas 0->1, readyReplicas 0->0, availableReplicas 0->0, sequence No: 1->1
I0310 21:11:59.696427       1 deployment_controller.go:578] "Finished syncing deployment" deployment="calico-system/calico-kube-controllers" duration="80.259919ms"
I0310 21:11:59.696595       1 deployment_controller.go:490] "Error syncing deployment" deployment="calico-system/calico-kube-controllers" err="Operation cannot be fulfilled on deployments.apps \"calico-kube-controllers\": the object has been modified; please apply your changes to the latest version and try again"
I0310 21:11:59.696768       1 deployment_controller.go:576] "Started syncing deployment" deployment="calico-system/calico-kube-controllers" startTime="2023-03-10 21:11:59.696706422 +0000 UTC m=+32.760490614"
I0310 21:11:59.697270       1 deployment_util.go:775] Deployment "calico-kube-controllers" timed out (false) [last progress check: 2023-03-10 21:11:59 +0000 UTC - now: 2023-03-10 21:11:59.697264932 +0000 UTC m=+32.761049024]
I0310 21:11:59.701183       1 request.go:617] Waited for 216.885837ms due to client-side throttling, not priority and fairness, request: GET:https://10.0.0.4:6443/api/v1/namespaces/calico-system/serviceaccounts/csi-node-driver
I0310 21:11:59.716597       1 deployment_controller.go:281] "ReplicaSet updated" replicaSet="calico-system/calico-kube-controllers-fb49b9cf7"
I0310 21:11:59.716813       1 replica_set.go:653] Finished syncing ReplicaSet "calico-system/calico-kube-controllers-fb49b9cf7" (20.760367ms)
I0310 21:11:59.716940       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"calico-system/calico-kube-controllers-fb49b9cf7", timestamp:time.Time{wall:0xc0fb0527e5ad0b2b, ext:32695881771, loc:(*time.Location)(0x72c0b80)}}
... skipping 131 lines ...
I0310 21:12:04.712588       1 daemon_controller.go:1029] Pods to delete for daemon set cloud-node-manager: [], deleting 0
I0310 21:12:04.712600       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/cloud-node-manager", timestamp:time.Time{wall:0xc0fb05292a7807ce, ext:37776293582, loc:(*time.Location)(0x72c0b80)}}
I0310 21:12:04.712658       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/cloud-node-manager", timestamp:time.Time{wall:0xc0fb05292a7a4416, ext:37776439986, loc:(*time.Location)(0x72c0b80)}}
I0310 21:12:04.712673       1 daemon_controller.go:967] Nodes needing daemon pods for daemon set cloud-node-manager: [], creating 0
I0310 21:12:04.712771       1 daemon_controller.go:1029] Pods to delete for daemon set cloud-node-manager: [], deleting 0
I0310 21:12:04.712811       1 daemon_controller.go:1112] Updating daemon set status
E0310 21:12:04.713375       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0310 21:12:04.713395       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0310 21:12:04.716791       1 plugins.go:752] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
I0310 21:12:04.737003       1 daemon_controller.go:247] Updating daemon set cloud-node-manager
I0310 21:12:04.737344       1 daemon_controller.go:1172] Finished syncing daemon set "kube-system/cloud-node-manager" (25.245083ms)
I0310 21:12:04.737786       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/cloud-node-manager", timestamp:time.Time{wall:0xc0fb05292a7a4416, ext:37776439986, loc:(*time.Location)(0x72c0b80)}}
I0310 21:12:04.737932       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/cloud-node-manager", timestamp:time.Time{wall:0xc0fb05292bfb89b5, ext:37801689269, loc:(*time.Location)(0x72c0b80)}}
I0310 21:12:04.737997       1 daemon_controller.go:967] Nodes needing daemon pods for daemon set cloud-node-manager: [], creating 0
I0310 21:12:04.738079       1 daemon_controller.go:1029] Pods to delete for daemon set cloud-node-manager: [], deleting 0
... skipping 35 lines ...
I0310 21:12:06.310894       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kube-proxy-j759f"
I0310 21:12:06.311056       1 disruption.go:427] updatePod called on pod "kube-proxy-j759f"
I0310 21:12:06.311086       1 disruption.go:490] No PodDisruptionBudgets found for pod kube-proxy-j759f, PodDisruptionBudget controller will avoid syncing.
I0310 21:12:06.311092       1 disruption.go:430] No matching pdb for pod "kube-proxy-j759f"
I0310 21:12:06.311403       1 daemon_controller.go:630] Pod kube-proxy-j759f deleted.
I0310 21:12:06.311412       1 controller_utils.go:240] Lowered expectations &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0fb052990ffc682, ext:39348982046, loc:(*time.Location)(0x72c0b80)}}
E0310 21:12:06.311703       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0310 21:12:06.311720       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0310 21:12:06.311737       1 plugins.go:752] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0310 21:12:06.311967       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0310 21:12:06.311975       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0310 21:12:06.311986       1 plugins.go:752] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0310 21:12:06.312346       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0310 21:12:06.312357       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0310 21:12:06.312377       1 plugins.go:752] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0310 21:12:06.312625       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0310 21:12:06.312632       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0310 21:12:06.312645       1 plugins.go:752] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
I0310 21:12:06.338248       1 garbagecollector.go:468] "Processing object" object="calico-system/calico-kube-controllers" objectUID=86c3f239-f546-481e-a953-8cbf7ed60284 kind="ServiceAccount" virtual=false
I0310 21:12:06.362927       1 daemon_controller.go:247] Updating daemon set kube-proxy
I0310 21:12:06.363675       1 daemon_controller.go:1172] Finished syncing daemon set "kube-system/kube-proxy" (92.268274ms)
I0310 21:12:06.364206       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0fb052990ffc682, ext:39348982046, loc:(*time.Location)(0x72c0b80)}}
I0310 21:12:06.364256       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0fb052995b61185, ext:39428037665, loc:(*time.Location)(0x72c0b80)}}
I0310 21:12:06.364268       1 daemon_controller.go:967] Nodes needing daemon pods for daemon set kube-proxy: [], creating 0
... skipping 229 lines ...
I0310 21:12:10.196973       1 pvc_protection_controller.go:380] "Enqueuing PVCs for Pod" pod="kube-system/metrics-server-85c7d488df-nmcz8" podUID=beaa4a3e-8cb8-4bdd-a0e1-d1d8a5d8b656
I0310 21:12:10.197317       1 endpoints_controller.go:381] Finished syncing service "kube-system/metrics-server" endpoints. (42.301µs)
I0310 21:12:10.197377       1 taint_manager.go:401] "Noticed pod update" pod="kube-system/metrics-server-85c7d488df-nmcz8"
I0310 21:12:10.199308       1 controller_utils.go:581] Controller metrics-server-85c7d488df created pod metrics-server-85c7d488df-nmcz8
I0310 21:12:10.199353       1 replica_set_utils.go:59] Updating status for : kube-system/metrics-server-85c7d488df, replicas 0->0 (need 1), fullyLabeledReplicas 0->0, readyReplicas 0->0, availableReplicas 0->0, sequence No: 0->1
I0310 21:12:10.199549       1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/metrics-server" duration="61.270249ms"
I0310 21:12:10.199568       1 deployment_controller.go:490] "Error syncing deployment" deployment="kube-system/metrics-server" err="Operation cannot be fulfilled on deployments.apps \"metrics-server\": the object has been modified; please apply your changes to the latest version and try again"
I0310 21:12:10.199592       1 deployment_controller.go:576] "Started syncing deployment" deployment="kube-system/metrics-server" startTime="2023-03-10 21:12:10.199580586 +0000 UTC m=+43.263364778"
I0310 21:12:10.199918       1 deployment_util.go:775] Deployment "metrics-server" timed out (false) [last progress check: 2023-03-10 21:12:10 +0000 UTC - now: 2023-03-10 21:12:10.199914092 +0000 UTC m=+43.263698184]
I0310 21:12:10.200225       1 event.go:294] "Event occurred" object="kube-system/metrics-server-85c7d488df" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-85c7d488df-nmcz8"
I0310 21:12:10.216866       1 deployment_controller.go:281] "ReplicaSet updated" replicaSet="kube-system/metrics-server-85c7d488df"
I0310 21:12:10.218138       1 replica_set.go:653] Finished syncing ReplicaSet "kube-system/metrics-server-85c7d488df" (55.355749ms)
I0310 21:12:10.218280       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/metrics-server-85c7d488df", timestamp:time.Time{wall:0xc0fb052a89b707eb, ext:43226774251, loc:(*time.Location)(0x72c0b80)}}
... skipping 12 lines ...
I0310 21:12:10.234869       1 replica_set.go:653] Finished syncing ReplicaSet "kube-system/metrics-server-85c7d488df" (16.629484ms)
I0310 21:12:10.235108       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/metrics-server-85c7d488df", timestamp:time.Time{wall:0xc0fb052a89b707eb, ext:43226774251, loc:(*time.Location)(0x72c0b80)}}
I0310 21:12:10.235142       1 controller_utils.go:938] Ignoring inactive pod kube-system/kube-proxy-j759f in state Running, deletion time 2023-03-10 21:12:36 +0000 UTC
I0310 21:12:10.235180       1 replica_set.go:653] Finished syncing ReplicaSet "kube-system/metrics-server-85c7d488df" (77.602µs)
I0310 21:12:10.234891       1 deployment_controller.go:176] "Updating deployment" deployment="kube-system/metrics-server"
I0310 21:12:10.247665       1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/metrics-server" duration="20.47235ms"
I0310 21:12:10.247691       1 deployment_controller.go:490] "Error syncing deployment" deployment="kube-system/metrics-server" err="Operation cannot be fulfilled on deployments.apps \"metrics-server\": the object has been modified; please apply your changes to the latest version and try again"
I0310 21:12:10.247716       1 deployment_controller.go:576] "Started syncing deployment" deployment="kube-system/metrics-server" startTime="2023-03-10 21:12:10.24770491 +0000 UTC m=+43.311489002"
I0310 21:12:10.268234       1 graph_builder.go:279] garbage controller monitor not yet synced: crd.projectcalico.org/v1, Resource=blockaffinities
I0310 21:12:10.270434       1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/metrics-server" duration="22.711189ms"
I0310 21:12:10.270466       1 deployment_controller.go:576] "Started syncing deployment" deployment="kube-system/metrics-server" startTime="2023-03-10 21:12:10.2704535 +0000 UTC m=+43.334237592"
I0310 21:12:10.270865       1 deployment_util.go:775] Deployment "metrics-server" timed out (false) [last progress check: 2023-03-10 21:12:10 +0000 UTC - now: 2023-03-10 21:12:10.270860107 +0000 UTC m=+43.334644199]
I0310 21:12:10.270899       1 progress.go:195] Queueing up deployment "metrics-server" for a progress check after 599s
... skipping 15 lines ...
I0310 21:12:10.676941       1 garbagecollector.go:519] object garbagecollector.objectReference{OwnerReference:v1.OwnerReference{APIVersion:"policy/v1", Kind:"PodDisruptionBudget", Name:"calico-typha", UID:"927ac73f-7ed9-463d-8656-0400495e3f1c", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}, Namespace:"calico-system"} has at least one existing owner: []v1.OwnerReference{v1.OwnerReference{APIVersion:"operator.tigera.io/v1", Kind:"Installation", Name:"default", UID:"1f3b7627-9a25-4f2d-ad94-fadfb79b470a", Controller:(*bool)(0xc0016a32e7), BlockOwnerDeletion:(*bool)(0xc0016a32e8)}}, will not garbage collect
I0310 21:12:10.720556       1 disruption.go:427] updatePod called on pod "kube-proxy-j759f"
I0310 21:12:10.720589       1 disruption.go:490] No PodDisruptionBudgets found for pod kube-proxy-j759f, PodDisruptionBudget controller will avoid syncing.
I0310 21:12:10.720595       1 disruption.go:430] No matching pdb for pod "kube-proxy-j759f"
I0310 21:12:10.720629       1 daemon_controller.go:630] Pod kube-proxy-j759f deleted.
I0310 21:12:10.720636       1 controller_utils.go:240] Lowered expectations &controller.ControlleeExpectations{add:0, del:-1, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0fb052995b7122b, ext:39428103367, loc:(*time.Location)(0x72c0b80)}}
E0310 21:12:10.720986       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0310 21:12:10.721000       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0310 21:12:10.721019       1 plugins.go:752] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0310 21:12:10.721240       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0310 21:12:10.721247       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0310 21:12:10.721259       1 plugins.go:752] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0310 21:12:10.721460       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0310 21:12:10.721466       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0310 21:12:10.721479       1 plugins.go:752] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
I0310 21:12:10.721535       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:-1, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0fb052995b7122b, ext:39428103367, loc:(*time.Location)(0x72c0b80)}}
I0310 21:12:10.721595       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0fb052aab02a0d1, ext:43785376621, loc:(*time.Location)(0x72c0b80)}}
I0310 21:12:10.721606       1 daemon_controller.go:967] Nodes needing daemon pods for daemon set kube-proxy: [], creating 0
I0310 21:12:10.721631       1 daemon_controller.go:1029] Pods to delete for daemon set kube-proxy: [], deleting 0
I0310 21:12:10.721636       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0fb052aab02a0d1, ext:43785376621, loc:(*time.Location)(0x72c0b80)}}
I0310 21:12:10.721663       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0fb052aab03adf6, ext:43785445622, loc:(*time.Location)(0x72c0b80)}}
I0310 21:12:10.721670       1 daemon_controller.go:967] Nodes needing daemon pods for daemon set kube-proxy: [], creating 0
I0310 21:12:10.721686       1 daemon_controller.go:1029] Pods to delete for daemon set kube-proxy: [], deleting 0
E0310 21:12:10.721692       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0310 21:12:10.721700       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0310 21:12:10.721712       1 plugins.go:752] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
I0310 21:12:10.721702       1 daemon_controller.go:1112] Updating daemon set status
I0310 21:12:10.721773       1 pvc_protection_controller.go:380] "Enqueuing PVCs for Pod" pod="kube-system/kube-proxy-j759f" podUID=80dd94ab-9fa5-45aa-a091-5249754eb093
I0310 21:12:10.729283       1 daemon_controller.go:247] Updating daemon set kube-proxy
I0310 21:12:10.731478       1 daemon_controller.go:1172] Finished syncing daemon set "kube-system/kube-proxy" (10.797385ms)
I0310 21:12:10.732012       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0fb052aab03adf6, ext:43785445622, loc:(*time.Location)(0x72c0b80)}}
I0310 21:12:10.732056       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0fb052aaba24068, ext:43795837800, loc:(*time.Location)(0x72c0b80)}}
... skipping 6 lines ...
I0310 21:12:10.732156       1 daemon_controller.go:1112] Updating daemon set status
I0310 21:12:10.732176       1 daemon_controller.go:1172] Finished syncing daemon set "kube-system/kube-proxy" (674.712µs)
I0310 21:12:11.016090       1 disruption.go:427] updatePod called on pod "kube-proxy-j759f"
I0310 21:12:11.016124       1 disruption.go:490] No PodDisruptionBudgets found for pod kube-proxy-j759f, PodDisruptionBudget controller will avoid syncing.
I0310 21:12:11.016130       1 disruption.go:430] No matching pdb for pod "kube-proxy-j759f"
I0310 21:12:11.016388       1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=pods, namespace kube-system, name kube-proxy-j759f, uid 80dd94ab-9fa5-45aa-a091-5249754eb093, event type update
E0310 21:12:11.016487       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0310 21:12:11.016510       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0310 21:12:11.016530       1 plugins.go:752] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
I0310 21:12:11.016683       1 pvc_protection_controller.go:380] "Enqueuing PVCs for Pod" pod="kube-system/kube-proxy-j759f" podUID=80dd94ab-9fa5-45aa-a091-5249754eb093
I0310 21:12:11.016702       1 daemon_controller.go:630] Pod kube-proxy-j759f deleted.
I0310 21:12:11.016709       1 controller_utils.go:240] Lowered expectations &controller.ControlleeExpectations{add:0, del:-1, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0fb052aaba34879, ext:43795905301, loc:(*time.Location)(0x72c0b80)}}
I0310 21:12:11.017887       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:-1, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0fb052aaba34879, ext:43795905301, loc:(*time.Location)(0x72c0b80)}}
I0310 21:12:11.018005       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0fb052ac112a62d, ext:44081783597, loc:(*time.Location)(0x72c0b80)}}
I0310 21:12:11.018089       1 daemon_controller.go:967] Nodes needing daemon pods for daemon set kube-proxy: [], creating 0
I0310 21:12:11.018121       1 daemon_controller.go:1029] Pods to delete for daemon set kube-proxy: [], deleting 0
I0310 21:12:11.018328       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0fb052ac112a62d, ext:44081783597, loc:(*time.Location)(0x72c0b80)}}
I0310 21:12:11.018373       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0fb052ac118547f, ext:44082155803, loc:(*time.Location)(0x72c0b80)}}
I0310 21:12:11.018382       1 daemon_controller.go:967] Nodes needing daemon pods for daemon set kube-proxy: [], creating 0
I0310 21:12:11.018404       1 daemon_controller.go:1029] Pods to delete for daemon set kube-proxy: [], deleting 0
I0310 21:12:11.018417       1 daemon_controller.go:1112] Updating daemon set status
I0310 21:12:11.018440       1 daemon_controller.go:1172] Finished syncing daemon set "kube-system/kube-proxy" (1.17772ms)
E0310 21:12:11.018702       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0310 21:12:11.018739       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0310 21:12:11.018769       1 plugins.go:752] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0310 21:12:11.019063       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0310 21:12:11.019071       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0310 21:12:11.019086       1 plugins.go:752] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0310 21:12:11.019377       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0310 21:12:11.019391       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0310 21:12:11.019415       1 plugins.go:752] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
I0310 21:12:11.023287       1 disruption.go:456] deletePod called on pod "kube-proxy-j759f"
I0310 21:12:11.023315       1 disruption.go:490] No PodDisruptionBudgets found for pod kube-proxy-j759f, PodDisruptionBudget controller will avoid syncing.
I0310 21:12:11.023321       1 disruption.go:459] No matching pdb for pod "kube-proxy-j759f"
I0310 21:12:11.023782       1 deployment_controller.go:357] "Pod deleted" pod="kube-system/kube-proxy-j759f"
I0310 21:12:11.023816       1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=pods, namespace kube-system, name kube-proxy-j759f, uid 80dd94ab-9fa5-45aa-a091-5249754eb093, event type delete
I0310 21:12:11.023845       1 pvc_protection_controller.go:380] "Enqueuing PVCs for Pod" pod="kube-system/kube-proxy-j759f" podUID=80dd94ab-9fa5-45aa-a091-5249754eb093
I0310 21:12:11.023870       1 daemon_controller.go:630] Pod kube-proxy-j759f deleted.
I0310 21:12:11.023877       1 controller_utils.go:240] Lowered expectations &controller.ControlleeExpectations{add:0, del:-1, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0fb052ac118547f, ext:44082155803, loc:(*time.Location)(0x72c0b80)}}
I0310 21:12:11.024450       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:-1, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0fb052ac118547f, ext:44082155803, loc:(*time.Location)(0x72c0b80)}}
I0310 21:12:11.024489       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:1, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0fb052ac175a210, ext:44088270508, loc:(*time.Location)(0x72c0b80)}}
I0310 21:12:11.024499       1 daemon_controller.go:967] Nodes needing daemon pods for daemon set kube-proxy: [capz-k9b0el-control-plane-gfrn9], creating 1
I0310 21:12:11.025073       1 taint_manager.go:386] "Noticed pod deletion" pod="kube-system/kube-proxy-j759f"
E0310 21:12:11.025277       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0310 21:12:11.025286       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0310 21:12:11.025301       1 plugins.go:752] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0310 21:12:11.025621       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0310 21:12:11.025629       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0310 21:12:11.025642       1 plugins.go:752] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0310 21:12:11.025877       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0310 21:12:11.025883       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0310 21:12:11.025894       1 plugins.go:752] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0310 21:12:11.026596       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0310 21:12:11.026607       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0310 21:12:11.026621       1 plugins.go:752] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
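(The repeating driver-call.go / plugins.go error triplet above comes from FlexVolume probing: a plugin directory nodeagent~uds exists under /usr/libexec/kubernetes/kubelet-plugins/volume/exec/, but its executable is missing, so the "init" call produces empty output that cannot be unmarshalled as JSON. A minimal Go sketch of the JSON handshake a FlexVolume driver binary is expected to print for "init"; this is an illustration of the protocol, not the missing uds driver itself:

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// driverStatus is the response shape FlexVolume expects on stdout.
type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	if len(os.Args) > 1 && os.Args[1] == "init" {
		out, _ := json.Marshal(driverStatus{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false},
		})
		fmt.Println(string(out)) // non-empty JSON avoids "unexpected end of JSON input"
		return
	}
	// Other calls (mount, unmount, ...) are omitted in this sketch.
	out, _ := json.Marshal(driverStatus{Status: "Not supported"})
	fmt.Println(string(out))
	os.Exit(1)
}
)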
I0310 21:12:11.034370       1 controller_utils.go:581] Controller kube-proxy created pod kube-proxy-kz4sb
I0310 21:12:11.034398       1 daemon_controller.go:1029] Pods to delete for daemon set kube-proxy: [], deleting 0
I0310 21:12:11.034410       1 controller_utils.go:195] Controller still waiting on expectations &controller.ControlleeExpectations{add:1, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0fb052ac175a210, ext:44088270508, loc:(*time.Location)(0x72c0b80)}}
I0310 21:12:11.034446       1 daemon_controller.go:1112] Updating daemon set status
I0310 21:12:11.034767       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-kz4sb"
I0310 21:12:11.034892       1 disruption.go:415] addPod called on pod "kube-proxy-kz4sb"
... skipping 6 lines ...
I0310 21:12:11.047320       1 disruption.go:427] updatePod called on pod "kube-proxy-kz4sb"
I0310 21:12:11.047357       1 disruption.go:490] No PodDisruptionBudgets found for pod kube-proxy-kz4sb, PodDisruptionBudget controller will avoid syncing.
I0310 21:12:11.047363       1 disruption.go:430] No matching pdb for pod "kube-proxy-kz4sb"
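(The "No matching pdb" lines above mean no PodDisruptionBudget's selector matches the kube-proxy pods, so the disruption controller skips them. In essence, a PDB guards a pod only if its matchLabels are a subset of the pod's labels; a rough Go sketch of that check, with a hypothetical PDB selector, not the controller's real selector code:

package main

import "fmt"

// selects reports whether every matchLabels entry is present on the pod.
func selects(matchLabels, podLabels map[string]string) bool {
	for k, v := range matchLabels {
		if podLabels[k] != v {
			return false
		}
	}
	return true
}

func main() {
	pdbSelector := map[string]string{"app": "frontend"} // hypothetical PDB
	kubeProxyLabels := map[string]string{"k8s-app": "kube-proxy"}
	fmt.Println(selects(pdbSelector, kubeProxyLabels)) // false: no matching pdb
}
)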
I0310 21:12:11.047655       1 daemon_controller.go:570] Pod kube-proxy-kz4sb updated.
I0310 21:12:11.047762       1 taint_manager.go:401] "Noticed pod update" pod="kube-system/kube-proxy-kz4sb"
I0310 21:12:11.047829       1 controller_utils.go:122] "Update ready status of pods on node" node="capz-k9b0el-control-plane-gfrn9"
E0310 21:12:11.048527       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0310 21:12:11.048543       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0310 21:12:11.048564       1 plugins.go:752] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0310 21:12:11.049040       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0310 21:12:11.049050       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0310 21:12:11.049074       1 plugins.go:752] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0310 21:12:11.049433       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0310 21:12:11.049460       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0310 21:12:11.050559       1 plugins.go:752] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0310 21:12:11.053288       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0310 21:12:11.053300       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0310 21:12:11.053315       1 plugins.go:752] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
I0310 21:12:11.073275       1 daemon_controller.go:247] Updating daemon set kube-proxy
I0310 21:12:11.074233       1 daemon_controller.go:1172] Finished syncing daemon set "kube-system/kube-proxy" (50.335658ms)
I0310 21:12:11.074826       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0fb052ac175a210, ext:44088270508, loc:(*time.Location)(0x72c0b80)}}
I0310 21:12:11.074888       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0fb052ac476895a, ext:44138661466, loc:(*time.Location)(0x72c0b80)}}
I0310 21:12:11.074898       1 daemon_controller.go:967] Nodes needing daemon pods for daemon set kube-proxy: [], creating 0
I0310 21:12:11.074923       1 daemon_controller.go:1029] Pods to delete for daemon set kube-proxy: [], deleting 0
... skipping 8 lines ...
I0310 21:12:11.100573       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0fb052ac5fe8d5b, ext:44164352503, loc:(*time.Location)(0x72c0b80)}}
I0310 21:12:11.100710       1 daemon_controller.go:967] Nodes needing daemon pods for daemon set kube-proxy: [], creating 0
I0310 21:12:11.100893       1 daemon_controller.go:1029] Pods to delete for daemon set kube-proxy: [], deleting 0
I0310 21:12:11.102989       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0fb052ac5fe8d5b, ext:44164352503, loc:(*time.Location)(0x72c0b80)}}
I0310 21:12:11.101921       1 disruption.go:427] updatePod called on pod "kube-proxy-kz4sb"
I0310 21:12:11.102717       1 daemon_controller.go:570] Pod kube-proxy-kz4sb updated.
E0310 21:12:11.103379       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0310 21:12:11.103532       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0310 21:12:11.103555       1 plugins.go:752] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
I0310 21:12:11.103715       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0fb052ac62af3f5, ext:44167262353, loc:(*time.Location)(0x72c0b80)}}
I0310 21:12:11.103736       1 daemon_controller.go:967] Nodes needing daemon pods for daemon set kube-proxy: [], creating 0
I0310 21:12:11.103760       1 daemon_controller.go:1029] Pods to delete for daemon set kube-proxy: [], deleting 0
I0310 21:12:11.103798       1 daemon_controller.go:1112] Updating daemon set status
I0310 21:12:11.103828       1 daemon_controller.go:1172] Finished syncing daemon set "kube-system/kube-proxy" (4.009468ms)
E0310 21:12:11.104066       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0310 21:12:11.104321       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0310 21:12:11.104344       1 plugins.go:752] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0310 21:12:11.104819       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0310 21:12:11.106163       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
I0310 21:12:11.105195       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0fb052ac62af3f5, ext:44167262353, loc:(*time.Location)(0x72c0b80)}}
I0310 21:12:11.104252       1 disruption.go:490] No PodDisruptionBudgets found for pod kube-proxy-kz4sb, PodDisruptionBudget controller will avoid syncing.
I0310 21:12:11.106413       1 disruption.go:430] No matching pdb for pod "kube-proxy-kz4sb"
E0310 21:12:11.106430       1 plugins.go:752] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
I0310 21:12:11.106619       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0fb052ac65ad276, ext:44170399606, loc:(*time.Location)(0x72c0b80)}}
I0310 21:12:11.106635       1 daemon_controller.go:967] Nodes needing daemon pods for daemon set kube-proxy: [], creating 0
I0310 21:12:11.106662       1 daemon_controller.go:1029] Pods to delete for daemon set kube-proxy: [], deleting 0
I0310 21:12:11.106668       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0fb052ac65ad276, ext:44170399606, loc:(*time.Location)(0x72c0b80)}}
I0310 21:12:11.106696       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0fb052ac65c051b, ext:44170478107, loc:(*time.Location)(0x72c0b80)}}
I0310 21:12:11.106704       1 daemon_controller.go:967] Nodes needing daemon pods for daemon set kube-proxy: [], creating 0
I0310 21:12:11.106719       1 daemon_controller.go:1029] Pods to delete for daemon set kube-proxy: [], deleting 0
I0310 21:12:11.106732       1 daemon_controller.go:1112] Updating daemon set status
I0310 21:12:11.106753       1 daemon_controller.go:1172] Finished syncing daemon set "kube-system/kube-proxy" (2.907549ms)
E0310 21:12:11.106941       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0310 21:12:11.106953       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0310 21:12:11.106973       1 plugins.go:752] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
I0310 21:12:11.141439       1 endpointslice_controller.go:319] Finished syncing service "kube-system/metrics-server" endpoint slices. (3.317657ms)
I0310 21:12:11.283301       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-k9b0el-control-plane-gfrn9"
I0310 21:12:12.731102       1 disruption.go:427] updatePod called on pod "kube-proxy-kz4sb"
I0310 21:12:12.731137       1 disruption.go:490] No PodDisruptionBudgets found for pod kube-proxy-kz4sb, PodDisruptionBudget controller will avoid syncing.
I0310 21:12:12.731143       1 disruption.go:430] No matching pdb for pod "kube-proxy-kz4sb"
I0310 21:12:12.731378       1 daemon_controller.go:570] Pod kube-proxy-kz4sb updated.
... skipping 3 lines ...
I0310 21:12:12.732022       1 daemon_controller.go:1029] Pods to delete for daemon set kube-proxy: [], deleting 0
I0310 21:12:12.732032       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0fb052b2ba0fb43, ext:45795754563, loc:(*time.Location)(0x72c0b80)}}
I0310 21:12:12.732059       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0fb052b2ba24f81, ext:45795841665, loc:(*time.Location)(0x72c0b80)}}
I0310 21:12:12.732071       1 daemon_controller.go:967] Nodes needing daemon pods for daemon set kube-proxy: [], creating 0
I0310 21:12:12.732092       1 daemon_controller.go:1029] Pods to delete for daemon set kube-proxy: [], deleting 0
I0310 21:12:12.732110       1 daemon_controller.go:1112] Updating daemon set status
E0310 21:12:12.732606       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0310 21:12:12.732621       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0310 21:12:12.732640       1 plugins.go:752] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0310 21:12:12.733395       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0310 21:12:12.733407       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0310 21:12:12.733425       1 plugins.go:752] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0310 21:12:12.733632       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0310 21:12:12.733639       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0310 21:12:12.733653       1 plugins.go:752] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0310 21:12:12.733876       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0310 21:12:12.733882       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0310 21:12:12.733895       1 plugins.go:752] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
I0310 21:12:12.744393       1 daemon_controller.go:1172] Finished syncing daemon set "kube-system/kube-proxy" (12.992905ms)
I0310 21:12:12.744432       1 daemon_controller.go:247] Updating daemon set kube-proxy
I0310 21:12:12.745036       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0fb052b2ba24f81, ext:45795841665, loc:(*time.Location)(0x72c0b80)}}
I0310 21:12:12.745101       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0fb052b2c69500f, ext:45808883471, loc:(*time.Location)(0x72c0b80)}}
I0310 21:12:12.745115       1 daemon_controller.go:967] Nodes needing daemon pods for daemon set kube-proxy: [], creating 0
I0310 21:12:12.745140       1 daemon_controller.go:1029] Pods to delete for daemon set kube-proxy: [], deleting 0
... skipping 101 lines ...
I0310 21:12:38.691137       1 pv_controller_base.go:556] resyncing PV controller
I0310 21:12:38.719764       1 gc_controller.go:161] GC'ing orphaned
I0310 21:12:38.719807       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0310 21:12:38.721981       1 node_lifecycle_controller.go:868] Node capz-k9b0el-control-plane-gfrn9 is NotReady as of 2023-03-10 21:12:38.721964294 +0000 UTC m=+71.785748486. Adding it to the Taint queue.
E0310 21:12:39.309465       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0310 21:12:39.309701       1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
W0310 21:12:40.690115       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
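(The resource-quota and garbage-collector warnings above are a startup race: metrics.k8s.io/v1beta1 is served by the metrics-server aggregated API, which is not ready yet, so discovery is partial and both controllers skip or degrade their sync until it appears. A hedged client-go sketch of tolerating exactly this kind of partial discovery; the kubeconfig path is an assumption:

package main

import (
	"fmt"

	"k8s.io/client-go/discovery"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "./kubeconfig") // assumed path
	if err != nil {
		panic(err)
	}
	dc, err := discovery.NewDiscoveryClientForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// ServerGroupsAndResources returns whatever discovery succeeded even
	// when an aggregated group (here metrics.k8s.io) is unavailable.
	_, resources, err := dc.ServerGroupsAndResources()
	if err != nil && !discovery.IsGroupDiscoveryFailedError(err) {
		panic(err) // a real failure, not just a missing aggregated group
	}
	fmt.Printf("discovered %d resource lists (partial-discovery error: %v)\n", len(resources), err)
}
)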
I0310 21:12:43.480928       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-k9b0el-control-plane-gfrn9"
I0310 21:12:43.723308       1 node_lifecycle_controller.go:1046] Node capz-k9b0el-control-plane-gfrn9 ReadyCondition updated. Updating timestamp.
I0310 21:12:43.723354       1 node_lifecycle_controller.go:868] Node capz-k9b0el-control-plane-gfrn9 is NotReady as of 2023-03-10 21:12:43.723342267 +0000 UTC m=+76.787126359. Adding it to the Taint queue.
I0310 21:12:45.083970       1 tokencleaner.go:166] Finished syncing secret "kube-system/bootstrap-token-wdcijb" (12.1µs)
I0310 21:12:45.580962       1 tokencleaner.go:166] Finished syncing secret "kube-system/bootstrap-token-9ujjh1" (9.5µs)
I0310 21:12:45.848663       1 disruption.go:427] updatePod called on pod "calico-node-lc8bp"
... skipping 240 lines ...
I0310 21:12:58.723453       1 disruption.go:427] updatePod called on pod "coredns-bd6b6df9f-sgh9d"
I0310 21:12:58.723495       1 disruption.go:490] No PodDisruptionBudgets found for pod coredns-bd6b6df9f-sgh9d, PodDisruptionBudget controller will avoid syncing.
I0310 21:12:58.723600       1 disruption.go:430] No matching pdb for pod "coredns-bd6b6df9f-sgh9d"
I0310 21:12:58.723792       1 replica_set.go:443] Pod coredns-bd6b6df9f-sgh9d updated, objectMeta {Name:coredns-bd6b6df9f-sgh9d GenerateName:coredns-bd6b6df9f- Namespace:kube-system SelfLink: UID:35dc8dc1-b4ac-47c4-a774-24988f13b950 ResourceVersion:960 Generation:0 CreationTimestamp:2023-03-10 21:11:39 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[k8s-app:kube-dns pod-template-hash:bd6b6df9f] Annotations:map[cni.projectcalico.org/podIP: cni.projectcalico.org/podIPs:] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:coredns-bd6b6df9f UID:9579f085-0d7e-4bc1-bfbe-a4eacfabd09d Controller:0xc002f80360 BlockOwnerDeletion:0xc002f80361}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2023-03-10 21:11:39 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9579f085-0d7e-4bc1-bfbe-a4eacfabd09d\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":53,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}},"k:{\"containerPort\":53,\"protocol\":\"UDP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}},"k:{\"containerPort\":9153,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}}},"f:readinessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:securityContext":{".":{},"f:allowPrivilegeEscalation":{},"f:capabilities":{".":{},"f:add":{},"f:drop":{}},"f:readOnlyRootFilesystem":{}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/etc/coredns\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{},"f:volumes":{".":{},"k:{\"name\":\"config-volume\"}":{".":{},"f:configMap":{".":{},"f:defaultMode":{},"f:items":{},"f:name":{}},"f:name":{}}}}} Subresource:} {Manager:kube-scheduler Operation:Update APIVersion:v1 Time:2023-03-10 21:11:39 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{".":{},"k:{\"type\":\"PodScheduled\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} Subresource:status} {Manager:kubelet Operation:Update APIVersion:v1 Time:2023-03-10 21:12:53 +0000 UTC FieldsType:FieldsV1 
FieldsV1:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} Subresource:status} {Manager:calico Operation:Update APIVersion:v1 Time:2023-03-10 21:12:57 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} Subresource:status}]} -> {Name:coredns-bd6b6df9f-sgh9d GenerateName:coredns-bd6b6df9f- Namespace:kube-system SelfLink: UID:35dc8dc1-b4ac-47c4-a774-24988f13b950 ResourceVersion:979 Generation:0 CreationTimestamp:2023-03-10 21:11:39 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[k8s-app:kube-dns pod-template-hash:bd6b6df9f] Annotations:map[cni.projectcalico.org/containerID:4d7bc98395e14693220f8bf0a282d1df44ed47dfac3c8a52c9f9c152ea375d4b cni.projectcalico.org/podIP:192.168.186.69/32 cni.projectcalico.org/podIPs:192.168.186.69/32] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:coredns-bd6b6df9f UID:9579f085-0d7e-4bc1-bfbe-a4eacfabd09d Controller:0xc002f33dd0 BlockOwnerDeletion:0xc002f33dd1}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2023-03-10 21:11:39 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9579f085-0d7e-4bc1-bfbe-a4eacfabd09d\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":53,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}},"k:{\"containerPort\":53,\"protocol\":\"UDP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}},"k:{\"containerPort\":9153,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}}},"f:readinessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:securityContext":{".":{},"f:allowPrivilegeEscalation":{},"f:capabilities":{".":{},"f:add":{},"f:drop":{}},"f:readOnlyRootFilesystem":{}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/etc/coredns\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{},"f:volumes":{".":{},"k:{\"name\":\"config-volume\"}":{".":{},"f:configMap":{".":{},"f:defaultMode":{},"f:items":{},"f:name":{}},"f:name":{}}}}} Subresource:} {Manager:kube-scheduler Operation:Update 
APIVersion:v1 Time:2023-03-10 21:11:39 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{".":{},"k:{\"type\":\"PodScheduled\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} Subresource:status} {Manager:kubelet Operation:Update APIVersion:v1 Time:2023-03-10 21:12:53 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} Subresource:status} {Manager:calico Operation:Update APIVersion:v1 Time:2023-03-10 21:12:58 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} Subresource:status}]}.
I0310 21:12:58.724021       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/coredns-bd6b6df9f", timestamp:time.Time{wall:0xc0fb0522ec50640a, ext:12807250186, loc:(*time.Location)(0x72c0b80)}}
I0310 21:12:58.724336       1 replica_set.go:653] Finished syncing ReplicaSet "kube-system/coredns-bd6b6df9f" (321.909µs)
I0310 21:12:58.727071       1 node_lifecycle_controller.go:1038] ReadyCondition for Node capz-k9b0el-control-plane-gfrn9 transitioned from &NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:2023-03-10 21:12:41 +0000 UTC,LastTransitionTime:2023-03-10 21:11:24 +0000 UTC,Reason:KubeletNotReady,Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized,} to &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-03-10 21:12:53 +0000 UTC,LastTransitionTime:2023-03-10 21:12:53 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,}
I0310 21:12:58.727144       1 node_lifecycle_controller.go:1046] Node capz-k9b0el-control-plane-gfrn9 ReadyCondition updated. Updating timestamp.
I0310 21:12:58.727170       1 node_lifecycle_controller.go:892] Node capz-k9b0el-control-plane-gfrn9 is healthy again, removing all taints
I0310 21:12:58.727189       1 node_lifecycle_controller.go:1190] Controller detected that some Nodes are Ready. Exiting master disruption mode.
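(The node lifecycle sequence above shows the control-plane node flipping NotReady -> Ready once Calico initializes the CNI: while Ready is False the node sits in the taint queue, and on recovery "removing all taints" clears the not-ready NoExecute taint. A spirit-only Go sketch of that decision, not the controller's code:

package main

import "fmt"

type condition struct {
	Type   string
	Status string
}

// notReadyTaintNeeded mirrors, loosely, the check behind
// "is NotReady ... Adding it to the Taint queue" versus
// "is healthy again, removing all taints".
func notReadyTaintNeeded(conds []condition) bool {
	for _, c := range conds {
		if c.Type == "Ready" {
			return c.Status != "True"
		}
	}
	return true // no Ready condition reported yet: treat as not ready
}

func main() {
	fmt.Println(notReadyTaintNeeded([]condition{{Type: "Ready", Status: "False"}})) // true: taint
	fmt.Println(notReadyTaintNeeded([]condition{{Type: "Ready", Status: "True"}}))  // false: untaint
}
)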
I0310 21:13:01.003845       1 disruption.go:427] updatePod called on pod "coredns-bd6b6df9f-hgp5b"
I0310 21:13:01.003914       1 disruption.go:490] No PodDisruptionBudgets found for pod coredns-bd6b6df9f-hgp5b, PodDisruptionBudget controller will avoid syncing.
I0310 21:13:01.003921       1 disruption.go:430] No matching pdb for pod "coredns-bd6b6df9f-hgp5b"
... skipping 111 lines ...
I0310 21:13:10.321131       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"calico-system/calico-node", timestamp:time.Time{wall:0xc0fb0539932325dd, ext:103384854649, loc:(*time.Location)(0x72c0b80)}}
I0310 21:13:10.321162       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:0, del:0, key:"calico-system/calico-node", timestamp:time.Time{wall:0xc0fb0539932484a8, ext:103384944452, loc:(*time.Location)(0x72c0b80)}}
I0310 21:13:10.321169       1 daemon_controller.go:967] Nodes needing daemon pods for daemon set calico-node: [], creating 0
I0310 21:13:10.321195       1 daemon_controller.go:1029] Pods to delete for daemon set calico-node: [], deleting 0
I0310 21:13:10.321207       1 daemon_controller.go:1112] Updating daemon set status
I0310 21:13:10.321226       1 daemon_controller.go:1172] Finished syncing daemon set "calico-system/calico-node" (1.033126ms)
W0310 21:13:10.707291       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
I0310 21:13:13.076231       1 disruption.go:427] updatePod called on pod "calico-kube-controllers-fb49b9cf7-kkh4j"
I0310 21:13:13.076276       1 disruption.go:490] No PodDisruptionBudgets found for pod calico-kube-controllers-fb49b9cf7-kkh4j, PodDisruptionBudget controller will avoid syncing.
I0310 21:13:13.076282       1 disruption.go:430] No matching pdb for pod "calico-kube-controllers-fb49b9cf7-kkh4j"
I0310 21:13:13.076354       1 replica_set.go:443] Pod calico-kube-controllers-fb49b9cf7-kkh4j updated, objectMeta {Name:calico-kube-controllers-fb49b9cf7-kkh4j GenerateName:calico-kube-controllers-fb49b9cf7- Namespace:calico-system SelfLink: UID:0a3f6d94-b27a-4f56-b1f1-aae7645bb999 ResourceVersion:972 Generation:0 CreationTimestamp:2023-03-10 21:11:59 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:fb49b9cf7] Annotations:map[cni.projectcalico.org/containerID:1dd759c41d5585ef0c7ff44e36493bf7077d2848381f0d8b6a837fea359c7525 cni.projectcalico.org/podIP:192.168.186.67/32 cni.projectcalico.org/podIPs:192.168.186.67/32] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:calico-kube-controllers-fb49b9cf7 UID:f2d526fb-81ef-4dec-9e7b-d495320b7224 Controller:0xc002f333b7 BlockOwnerDeletion:0xc002f333b8}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2023-03-10 21:11:59 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:app.kubernetes.io/name":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f2d526fb-81ef-4dec-9e7b-d495320b7224\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"calico-kube-controllers\"}":{".":{},"f:env":{".":{},"k:{\"name\":\"DATASTORE_TYPE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"ENABLED_CONTROLLERS\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"FIPS_MODE_ENABLED\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"KUBERNETES_SERVICE_HOST\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"KUBERNETES_SERVICE_PORT\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"KUBE_CONTROLLERS_CONFIG_NAME\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:exec":{".":{},"f:command":{}},"f:failureThreshold":{},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:readinessProbe":{".":{},"f:exec":{".":{},"f:command":{}},"f:failureThreshold":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}} Subresource:} {Manager:kube-scheduler Operation:Update APIVersion:v1 Time:2023-03-10 21:11:59 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{".":{},"k:{\"type\":\"PodScheduled\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} Subresource:status} {Manager:kubelet Operation:Update APIVersion:v1 Time:2023-03-10 21:12:53 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} Subresource:status} {Manager:calico 
Operation:Update APIVersion:v1 Time:2023-03-10 21:12:58 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} Subresource:status}]} -> {Name:calico-kube-controllers-fb49b9cf7-kkh4j GenerateName:calico-kube-controllers-fb49b9cf7- Namespace:calico-system SelfLink: UID:0a3f6d94-b27a-4f56-b1f1-aae7645bb999 ResourceVersion:1054 Generation:0 CreationTimestamp:2023-03-10 21:11:59 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:fb49b9cf7] Annotations:map[cni.projectcalico.org/containerID:1dd759c41d5585ef0c7ff44e36493bf7077d2848381f0d8b6a837fea359c7525 cni.projectcalico.org/podIP:192.168.186.67/32 cni.projectcalico.org/podIPs:192.168.186.67/32] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:calico-kube-controllers-fb49b9cf7 UID:f2d526fb-81ef-4dec-9e7b-d495320b7224 Controller:0xc001ecbd47 BlockOwnerDeletion:0xc001ecbd48}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2023-03-10 21:11:59 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:app.kubernetes.io/name":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f2d526fb-81ef-4dec-9e7b-d495320b7224\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"calico-kube-controllers\"}":{".":{},"f:env":{".":{},"k:{\"name\":\"DATASTORE_TYPE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"ENABLED_CONTROLLERS\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"FIPS_MODE_ENABLED\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"KUBERNETES_SERVICE_HOST\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"KUBERNETES_SERVICE_PORT\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"KUBE_CONTROLLERS_CONFIG_NAME\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:exec":{".":{},"f:command":{}},"f:failureThreshold":{},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:readinessProbe":{".":{},"f:exec":{".":{},"f:command":{}},"f:failureThreshold":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}} Subresource:} {Manager:kube-scheduler Operation:Update APIVersion:v1 Time:2023-03-10 21:11:59 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{".":{},"k:{\"type\":\"PodScheduled\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} Subresource:status} {Manager:calico Operation:Update APIVersion:v1 Time:2023-03-10 21:12:58 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} Subresource:status} {Manager:kubelet Operation:Update APIVersion:v1 Time:2023-03-10 21:13:13 +0000 UTC FieldsType:FieldsV1 
FieldsV1:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.186.67\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} Subresource:status}]}.
I0310 21:13:13.076503       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"calico-system/calico-kube-controllers-fb49b9cf7", timestamp:time.Time{wall:0xc0fb0527e5ad0b2b, ext:32695881771, loc:(*time.Location)(0x72c0b80)}}
I0310 21:13:13.076575       1 replica_set.go:653] Finished syncing ReplicaSet "calico-system/calico-kube-controllers-fb49b9cf7" (76.802µs)
... skipping 38 lines ...
I0310 21:13:18.114700       1 disruption.go:418] No matching pdb for pod "calico-apiserver-86544bbddb-kknwg"
I0310 21:13:18.114717       1 replica_set.go:380] Pod calico-apiserver-86544bbddb-kknwg created: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"calico-apiserver-86544bbddb-kknwg", GenerateName:"calico-apiserver-86544bbddb-", Namespace:"calico-apiserver", SelfLink:"", UID:"bfbef4f1-76b1-41da-943d-5d0f86d8768d", ResourceVersion:"1109", Generation:0, CreationTimestamp:time.Date(2023, time.March, 10, 21, 13, 18, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"86544bbddb"}, Annotations:map[string]string{"hash.operator.tigera.io/calico-apiserver-certs":"c2d8ea29ed05875029275f4d42419e48aed60bd5"}, OwnerReferences:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"apps/v1", Kind:"ReplicaSet", Name:"calico-apiserver-86544bbddb", UID:"8ab5f50b-0251-4ebc-8395-d609c19e041c", Controller:(*bool)(0xc00222195e), BlockOwnerDeletion:(*bool)(0xc00222195f)}}, Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:time.Date(2023, time.March, 10, 21, 13, 18, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0019fc240), Subresource:""}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"calico-apiserver-certs", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc00169d540), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"kube-api-access-4pd8k", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), 
DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(0xc001474a40), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"calico-apiserver", Image:"docker.io/calico/apiserver:v3.25.0", Command:[]string(nil), Args:[]string{"--secure-port=5443", "--tls-private-key-file=/calico-apiserver-certs/tls.key", "--tls-cert-file=/calico-apiserver-certs/tls.crt"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"DATASTORE_TYPE", Value:"kubernetes", ValueFrom:(*v1.EnvVarSource)(nil)}, v1.EnvVar{Name:"KUBERNETES_SERVICE_HOST", Value:"10.96.0.1", ValueFrom:(*v1.EnvVarSource)(nil)}, v1.EnvVar{Name:"KUBERNETES_SERVICE_PORT", Value:"443", ValueFrom:(*v1.EnvVarSource)(nil)}, v1.EnvVar{Name:"MULTI_INTERFACE_MODE", Value:"none", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"calico-apiserver-certs", ReadOnly:true, MountPath:"/calico-apiserver-certs", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-4pd8k", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(0xc00169d600), ReadinessProbe:(*v1.Probe)(0xc00169d640), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc001eddda0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002221cd8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"calico-apiserver", DeprecatedServiceAccount:"calico-apiserver", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0003bf490), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(0xc0019fc288), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node-role.kubernetes.io/master", Operator:"", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node-role.kubernetes.io/control-plane", Operator:"", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002221db0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", 
TolerationSeconds:(*int64)(0xc002221dd0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc002221dd8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc002221ddc), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc00297fb30), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil), OS:(*v1.PodOS)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition(nil), Message:"", Reason:"", NominatedNodeName:"", HostIP:"", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:<nil>, InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus(nil), QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}.
I0310 21:13:18.115037       1 controller_utils.go:240] Lowered expectations &controller.ControlleeExpectations{add:1, del:0, key:"calico-apiserver/calico-apiserver-86544bbddb", timestamp:time.Time{wall:0xc0fb053b84d5a41f, ext:111144894139, loc:(*time.Location)(0x72c0b80)}}
I0310 21:13:18.115817       1 pvc_protection_controller.go:380] "Enqueuing PVCs for Pod" pod="calico-apiserver/calico-apiserver-86544bbddb-kknwg" podUID=bfbef4f1-76b1-41da-943d-5d0f86d8768d
I0310 21:13:18.115973       1 taint_manager.go:401] "Noticed pod update" pod="calico-apiserver/calico-apiserver-86544bbddb-kknwg"
I0310 21:13:18.116176       1 deployment_controller.go:578] "Finished syncing deployment" deployment="calico-apiserver/calico-apiserver" duration="51.616116ms"
I0310 21:13:18.118522       1 deployment_controller.go:490] "Error syncing deployment" deployment="calico-apiserver/calico-apiserver" err="Operation cannot be fulfilled on deployments.apps \"calico-apiserver\": the object has been modified; please apply your changes to the latest version and try again"
I0310 21:13:18.118639       1 deployment_controller.go:576] "Started syncing deployment" deployment="calico-apiserver/calico-apiserver" startTime="2023-03-10 21:13:18.118623131 +0000 UTC m=+111.182407223"
I0310 21:13:18.119149       1 deployment_util.go:775] Deployment "calico-apiserver" timed out (false) [last progress check: 2023-03-10 21:13:18 +0000 UTC - now: 2023-03-10 21:13:18.119143343 +0000 UTC m=+111.182927535]
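(The "Error syncing deployment ... the object has been modified" line above is an ordinary optimistic-concurrency conflict: the controller wrote with a stale resourceVersion and will requeue. A minimal client-go sketch of the standard get-then-retry answer to that 409; the annotation change and kubeconfig path are hypothetical, and this is not the deployment controller's own code:

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/retry"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "./kubeconfig") // assumed path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
		// Always mutate the freshest copy; a stale resourceVersion is
		// exactly what triggers the conflict seen in the log.
		d, err := cs.AppsV1().Deployments("calico-apiserver").Get(
			context.TODO(), "calico-apiserver", metav1.GetOptions{})
		if err != nil {
			return err
		}
		if d.Annotations == nil {
			d.Annotations = map[string]string{}
		}
		d.Annotations["example/touched"] = "true" // hypothetical change
		_, err = cs.AppsV1().Deployments(d.Namespace).Update(
			context.TODO(), d, metav1.UpdateOptions{})
		return err
	})
	if err != nil {
		panic(err)
	}
}
)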
I0310 21:13:18.122470       1 controller_utils.go:581] Controller calico-apiserver-86544bbddb created pod calico-apiserver-86544bbddb-kknwg
I0310 21:13:18.123780       1 event.go:294] "Event occurred" object="calico-apiserver/calico-apiserver-86544bbddb" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: calico-apiserver-86544bbddb-kknwg"
I0310 21:13:18.124542       1 endpoints_controller.go:551] Update endpoints for calico-apiserver/calico-api, ready: 0 not ready: 0
I0310 21:13:18.142744       1 endpointslicemirroring_controller.go:274] syncEndpoints("calico-apiserver/calico-api")
... skipping 327 lines ...
I0310 21:14:23.634921       1 reflector.go:382] k8s.io/client-go/informers/factory.go:134: forcing resync
I0310 21:14:23.696003       1 pv_controller_base.go:556] resyncing PV controller
I0310 21:14:27.527588       1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="115.602µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:41082" resp=200
I0310 21:14:28.689011       1 taint_manager.go:436] "Noticed node update" node={nodeName:capz-k9b0el-md-0-sv4v2}
I0310 21:14:28.689721       1 taint_manager.go:441] "Updating known taints on node" node="capz-k9b0el-md-0-sv4v2" taints=[]
I0310 21:14:28.689833       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-k9b0el-md-0-sv4v2"
W0310 21:14:28.689874       1 actual_state_of_world.go:539] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="capz-k9b0el-md-0-sv4v2" does not exist
I0310 21:14:28.690940       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"calico-system/calico-node", timestamp:time.Time{wall:0xc0fb0539932484a8, ext:103384944452, loc:(*time.Location)(0x72c0b80)}}
I0310 21:14:28.691112       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:1, del:0, key:"calico-system/calico-node", timestamp:time.Time{wall:0xc0fb054d293179a2, ext:181754892450, loc:(*time.Location)(0x72c0b80)}}
I0310 21:14:28.691178       1 daemon_controller.go:967] Nodes needing daemon pods for daemon set calico-node: [capz-k9b0el-md-0-sv4v2], creating 1
I0310 21:14:28.692102       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/cloud-node-manager", timestamp:time.Time{wall:0xc0fb05292c00783e, ext:37802012378, loc:(*time.Location)(0x72c0b80)}}
I0310 21:14:28.697134       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:1, del:0, key:"kube-system/cloud-node-manager", timestamp:time.Time{wall:0xc0fb054d298d5181, ext:181760911389, loc:(*time.Location)(0x72c0b80)}}
I0310 21:14:28.697221       1 daemon_controller.go:967] Nodes needing daemon pods for daemon set cloud-node-manager: [capz-k9b0el-md-0-sv4v2], creating 1
... skipping 242 lines ...
I0310 21:14:37.792998       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/cloud-node-manager", timestamp:time.Time{wall:0xc0fb054f6f44278a, ext:190856779814, loc:(*time.Location)(0x72c0b80)}}
I0310 21:14:37.793047       1 daemon_controller.go:967] Nodes needing daemon pods for daemon set cloud-node-manager: [], creating 0
I0310 21:14:37.793084       1 daemon_controller.go:1029] Pods to delete for daemon set cloud-node-manager: [], deleting 0
I0310 21:14:37.793105       1 daemon_controller.go:1112] Updating daemon set status
I0310 21:14:37.793152       1 daemon_controller.go:1172] Finished syncing daemon set "kube-system/cloud-node-manager" (936.421µs)
I0310 21:14:38.555166       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-k9b0el-md-0-ffl2x"
W0310 21:14:38.560243       1 actual_state_of_world.go:539] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="capz-k9b0el-md-0-ffl2x" does not exist
I0310 21:14:38.558755       1 taint_manager.go:436] "Noticed node update" node={nodeName:capz-k9b0el-md-0-ffl2x}
I0310 21:14:38.559735       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"calico-system/calico-node", timestamp:time.Time{wall:0xc0fb054db9299479, ext:184022810489, loc:(*time.Location)(0x72c0b80)}}
I0310 21:14:38.560573       1 taint_manager.go:441] "Updating known taints on node" node="capz-k9b0el-md-0-ffl2x" taints=[]
I0310 21:14:38.560638       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:1, del:0, key:"calico-system/calico-node", timestamp:time.Time{wall:0xc0fb054fa169d7d4, ext:191624368852, loc:(*time.Location)(0x72c0b80)}}
I0310 21:14:38.560676       1 daemon_controller.go:967] Nodes needing daemon pods for daemon set calico-node: [capz-k9b0el-md-0-ffl2x], creating 1
I0310 21:14:38.560200       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0fb054e2e334f66, ext:185838898790, loc:(*time.Location)(0x72c0b80)}}
... skipping 276 lines ...
I0310 21:14:47.428890       1 disruption.go:558] Finished syncing PodDisruptionBudget "calico-system/calico-typha" (28.901µs)
I0310 21:14:47.427673       1 replica_set.go:443] Pod calico-typha-c89c74f79-6ck7t updated, objectMeta {Name:calico-typha-c89c74f79-6ck7t GenerateName:calico-typha-c89c74f79- Namespace:calico-system SelfLink: UID:30dcb061-3f62-4c54-8455-5fb17105a326 ResourceVersion:1503 Generation:0 CreationTimestamp:2023-03-10 21:14:47 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[app.kubernetes.io/name:calico-typha k8s-app:calico-typha pod-template-hash:c89c74f79] Annotations:map[hash.operator.tigera.io/system:bb4746872201725da2dea19756c475aa67d9c1e9 hash.operator.tigera.io/tigera-ca-private:ac4f54e7e99fefc6bf87494fdc87f3c5bcae9fd1 hash.operator.tigera.io/typha-certs:c8e92be4b7bcc1a48504b1592bcb42eec3ba5567] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:calico-typha-c89c74f79 UID:c2e74aaa-2339-43c1-83b5-38f587c50e10 Controller:0xc003393347 BlockOwnerDeletion:0xc003393348}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2023-03-10 21:14:47 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:annotations":{".":{},"f:hash.operator.tigera.io/system":{},"f:hash.operator.tigera.io/tigera-ca-private":{},"f:hash.operator.tigera.io/typha-certs":{}},"f:generateName":{},"f:labels":{".":{},"f:app.kubernetes.io/name":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c2e74aaa-2339-43c1-83b5-38f587c50e10\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"calico-typha\"}":{".":{},"f:env":{".":{},"k:{\"name\":\"KUBERNETES_SERVICE_HOST\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"KUBERNETES_SERVICE_PORT\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_CAFILE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_CLIENTCN\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_CONNECTIONREBALANCINGMODE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_DATASTORETYPE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_FIPSMODEENABLED\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_HEALTHENABLED\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_HEALTHPORT\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_K8SNAMESPACE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_LOGFILEPATH\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_LOGSEVERITYSCREEN\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_LOGSEVERITYSYS\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_SERVERCERTFILE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_SERVERKEYFILE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_SHUTDOWNTIMEOUTSECS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:host":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":5473,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:hostPort":{},"f:name":{},"f:protocol":{}}},"f:readinessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:host":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/etc/pki/tls/certs/\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}},"k:{\"mountPath\":\"/typha-certs\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:hostNetwork":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{},"f:volumes":{".":{},"k:{\"name\":\"tigera-ca-bundle\"}":{".":{},"f:configMap":{".":{},"f:defaultMode":{},"f:name":{}},"f:name":{}},"k:{\"name\":\"typha-certs\"}":{".":{},"f:name":{},"f:secret":{".":{},"f:defaultMode":{},"f:secretName":{}}}}}} Subresource:}]} -> {Name:calico-typha-c89c74f79-6ck7t GenerateName:calico-typha-c89c74f79- Namespace:calico-system SelfLink: UID:30dcb061-3f62-4c54-8455-5fb17105a326 ResourceVersion:1505 Generation:0 CreationTimestamp:2023-03-10 21:14:47 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[app.kubernetes.io/name:calico-typha k8s-app:calico-typha pod-template-hash:c89c74f79] Annotations:map[hash.operator.tigera.io/system:bb4746872201725da2dea19756c475aa67d9c1e9 hash.operator.tigera.io/tigera-ca-private:ac4f54e7e99fefc6bf87494fdc87f3c5bcae9fd1 hash.operator.tigera.io/typha-certs:c8e92be4b7bcc1a48504b1592bcb42eec3ba5567] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:calico-typha-c89c74f79 UID:c2e74aaa-2339-43c1-83b5-38f587c50e10 Controller:0xc002c3cbc7 BlockOwnerDeletion:0xc002c3cbc8}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2023-03-10 21:14:47 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:annotations":{".":{},"f:hash.operator.tigera.io/system":{},"f:hash.operator.tigera.io/tigera-ca-private":{},"f:hash.operator.tigera.io/typha-certs":{}},"f:generateName":{},"f:labels":{".":{},"f:app.kubernetes.io/name":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c2e74aaa-2339-43c1-83b5-38f587c50e10\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"calico-typha\"}":{".":{},"f:env":{".":{},"k:{\"name\":\"KUBERNETES_SERVICE_HOST\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"KUBERNETES_SERVICE_PORT\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_CAFILE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_CLIENTCN\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_CONNECTIONREBALANCINGMODE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_DATASTORETYPE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_FIPSMODEENABLED\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_HEALTHENABLED\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_HEALTHPORT\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_K8SNAMESPACE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_LOGFILEPATH\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_LOGSEVERITYSCREEN\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_LOGSEVERITYSYS\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_SERVERCERTFILE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_SERVERKEYFILE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_SHUTDOWNTIMEOUTSECS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:host":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":5473,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:hostPort":{},"f:name":{},"f:protocol":{}}},"f:readinessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:host":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/etc/pki/tls/certs/\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}},"k:{\"mountPath\":\"/typha-certs\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:hostNetwork":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{},"f:volumes":{".":{},"k:{\"name\":\"tigera-ca-bundle\"}":{".":{},"f:configMap":{".":{},"f:defaultMode":{},"f:name":{}},"f:name":{}},"k:{\"name\":\"typha-certs\"}":{".":{},"f:name":{},"f:secret":{".":{},"f:defaultMode":{},"f:secretName":{}}}}}} Subresource:}]}.
I0310 21:14:47.427828       1 taint_manager.go:401] "Noticed pod update" pod="calico-system/calico-typha-c89c74f79-6ck7t"
I0310 21:14:47.429289       1 taint_manager.go:362] "Current tolerations for pod tolerate forever, cancelling any scheduled deletion" pod="calico-system/calico-typha-c89c74f79-6ck7t"
I0310 21:14:47.427851       1 controller_utils.go:122] "Update ready status of pods on node" node="capz-k9b0el-md-0-ffl2x"
I0310 21:14:47.427972       1 deployment_controller.go:578] "Finished syncing deployment" deployment="calico-system/calico-typha" duration="19.077637ms"
I0310 21:14:47.429676       1 deployment_controller.go:490] "Error syncing deployment" deployment="calico-system/calico-typha" err="Operation cannot be fulfilled on deployments.apps \"calico-typha\": the object has been modified; please apply your changes to the latest version and try again"
I0310 21:14:47.429828       1 deployment_controller.go:576] "Started syncing deployment" deployment="calico-system/calico-typha" startTime="2023-03-10 21:14:47.42979707 +0000 UTC m=+200.493581262"
I0310 21:14:47.430942       1 replica_set.go:653] Finished syncing ReplicaSet "calico-system/calico-typha-c89c74f79" (35.035903ms)
I0310 21:14:47.431454       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"calico-system/calico-typha-c89c74f79", timestamp:time.Time{wall:0xc0fb0551d79a20bf, ext:200459761087, loc:(*time.Location)(0x72c0b80)}}
I0310 21:14:47.431667       1 replica_set_utils.go:59] Updating status for : calico-system/calico-typha-c89c74f79, replicas 1->2 (need 2), fullyLabeledReplicas 1->2, readyReplicas 1->1, availableReplicas 1->1, sequence No: 2->2
I0310 21:14:47.431424       1 deployment_controller.go:281] "ReplicaSet updated" replicaSet="calico-system/calico-typha-c89c74f79"
I0310 21:14:47.442905       1 deployment_controller.go:578] "Finished syncing deployment" deployment="calico-system/calico-typha" duration="13.091ms"
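Note: the "Error syncing deployment ... the object has been modified" entry above is not a test failure; it is the API server's optimistic-concurrency check rejecting a write made against a stale resourceVersion, after which the controller requeues and, as the next "Finished syncing deployment" line shows, succeeds on retry. A minimal client-go sketch of the same retry pattern (the kubeconfig path and the annotation mutation are illustrative, not taken from this job):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
        "k8s.io/client-go/util/retry"
    )

    func main() {
        // Build a client from a kubeconfig (path is illustrative).
        config, err := clientcmd.BuildConfigFromFlags("", "./kubeconfig")
        if err != nil {
            panic(err)
        }
        clientset := kubernetes.NewForConfigOrDie(config)

        // RetryOnConflict re-reads the object and reapplies the mutation
        // whenever the API server answers 409 Conflict -- the same
        // "object has been modified" condition logged above.
        err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
            deploy, err := clientset.AppsV1().Deployments("calico-system").Get(
                context.TODO(), "calico-typha", metav1.GetOptions{})
            if err != nil {
                return err
            }
            if deploy.Annotations == nil {
                deploy.Annotations = map[string]string{}
            }
            deploy.Annotations["example/touched"] = "true" // hypothetical mutation
            _, err = clientset.AppsV1().Deployments("calico-system").Update(
                context.TODO(), deploy, metav1.UpdateOptions{})
            return err
        })
        if err != nil {
            fmt.Println("update failed after retries:", err)
        }
    }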
... skipping 233 lines ...
I0310 21:15:09.996360       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:0, del:0, key:"calico-system/csi-node-driver", timestamp:time.Time{wall:0xc0fb05577b632f51, ext:223060140113, loc:(*time.Location)(0x72c0b80)}}
I0310 21:15:09.996511       1 daemon_controller.go:967] Nodes needing daemon pods for daemon set csi-node-driver: [], creating 0
I0310 21:15:09.997292       1 daemon_controller.go:1029] Pods to delete for daemon set csi-node-driver: [], deleting 0
I0310 21:15:09.997443       1 daemon_controller.go:1112] Updating daemon set status
I0310 21:15:09.997566       1 daemon_controller.go:1172] Finished syncing daemon set "calico-system/csi-node-driver" (2.749563ms)
I0310 21:15:13.808111       1 node_lifecycle_controller.go:1046] Node capz-k9b0el-md-0-ffl2x ReadyCondition updated. Updating timestamp.
I0310 21:15:13.808514       1 node_lifecycle_controller.go:1038] ReadyCondition for Node capz-k9b0el-md-0-sv4v2 transitioned from &NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:2023-03-10 21:14:59 +0000 UTC,LastTransitionTime:2023-03-10 21:14:28 +0000 UTC,Reason:KubeletNotReady,Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized,} to &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-03-10 21:15:09 +0000 UTC,LastTransitionTime:2023-03-10 21:15:09 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,}
I0310 21:15:13.808699       1 node_lifecycle_controller.go:1046] Node capz-k9b0el-md-0-sv4v2 ReadyCondition updated. Updating timestamp.
I0310 21:15:13.826368       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-k9b0el-md-0-sv4v2"
I0310 21:15:13.826955       1 taint_manager.go:436] "Noticed node update" node={nodeName:capz-k9b0el-md-0-sv4v2}
I0310 21:15:13.827005       1 taint_manager.go:441] "Updating known taints on node" node="capz-k9b0el-md-0-sv4v2" taints=[]
I0310 21:15:13.827021       1 taint_manager.go:462] "All taints were removed from the node. Cancelling all evictions..." node="capz-k9b0el-md-0-sv4v2"
I0310 21:15:13.827912       1 node_lifecycle_controller.go:892] Node capz-k9b0el-md-0-sv4v2 is healthy again, removing all taints
... skipping 121 lines ...
I0310 21:15:21.369939       1 daemon_controller.go:967] Nodes needing daemon pods for daemon set calico-node: [], creating 0
I0310 21:15:21.369962       1 daemon_controller.go:1029] Pods to delete for daemon set calico-node: [], deleting 0
I0310 21:15:21.369979       1 daemon_controller.go:1112] Updating daemon set status
I0310 21:15:21.370015       1 daemon_controller.go:1172] Finished syncing daemon set "calico-system/calico-node" (1.034123ms)
I0310 21:15:23.638189       1 reflector.go:382] k8s.io/client-go/informers/factory.go:134: forcing resync
I0310 21:15:23.698686       1 pv_controller_base.go:556] resyncing PV controller
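Note: the "forcing resync" and "resyncing PV controller" entries are periodic cache replays: shared informers re-deliver every cached object on a fixed interval so controllers re-evaluate state even when no watch event arrives. A minimal sketch of wiring an informer with a resync period (the in-cluster config, 30-second period, and log handler are illustrative):

    package main

    import (
        "time"

        v1 "k8s.io/api/core/v1"
        "k8s.io/client-go/informers"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
        "k8s.io/client-go/tools/cache"
        "k8s.io/klog/v2"
    )

    func main() {
        config, err := rest.InClusterConfig()
        if err != nil {
            klog.Fatal(err)
        }
        clientset := kubernetes.NewForConfigOrDie(config)

        // Every 30s the factory replays its cache through UpdateFunc; those
        // replays are what the "forcing resync" lines above correspond to.
        factory := informers.NewSharedInformerFactory(clientset, 30*time.Second)
        podInformer := factory.Core().V1().Pods().Informer()
        podInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
            UpdateFunc: func(oldObj, newObj interface{}) {
                pod := newObj.(*v1.Pod)
                klog.Infof("re-evaluating pod %s/%s", pod.Namespace, pod.Name)
            },
        })

        stop := make(chan struct{})
        defer close(stop)
        factory.Start(stop)
        factory.WaitForCacheSync(stop)
        select {} // run until killed
    }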
I0310 21:15:23.830062       1 node_lifecycle_controller.go:1038] ReadyCondition for Node capz-k9b0el-md-0-ffl2x transitioned from &NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:2023-03-10 21:15:09 +0000 UTC,LastTransitionTime:2023-03-10 21:14:38 +0000 UTC,Reason:KubeletNotReady,Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized,} to &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-03-10 21:15:19 +0000 UTC,LastTransitionTime:2023-03-10 21:15:19 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,}
I0310 21:15:23.830122       1 node_lifecycle_controller.go:1046] Node capz-k9b0el-md-0-ffl2x ReadyCondition updated. Updating timestamp.
I0310 21:15:23.847236       1 node_lifecycle_controller.go:892] Node capz-k9b0el-md-0-ffl2x is healthy again, removing all taints
I0310 21:15:23.847264       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-k9b0el-md-0-ffl2x"
I0310 21:15:23.847855       1 taint_manager.go:436] "Noticed node update" node={nodeName:capz-k9b0el-md-0-ffl2x}
I0310 21:15:23.847934       1 taint_manager.go:441] "Updating known taints on node" node="capz-k9b0el-md-0-ffl2x" taints=[]
I0310 21:15:23.847949       1 taint_manager.go:462] "All taints were removed from the node. Cancelling all evictions..." node="capz-k9b0el-md-0-ffl2x"
... skipping 172 lines ...
I0310 21:15:53.420313       1 deployment_util.go:775] Deployment "csi-azurefile-controller" timed out (false) [last progress check: 2023-03-10 21:15:53.407429031 +0000 UTC m=+266.471213123 - now: 2023-03-10 21:15:53.420304222 +0000 UTC m=+266.484088414]
I0310 21:15:53.420995       1 deployment_controller.go:176] "Updating deployment" deployment="kube-system/csi-azurefile-controller"
I0310 21:15:53.425021       1 disruption.go:415] addPod called on pod "csi-azurefile-controller-7b7f546c46-ghrkj"
I0310 21:15:53.425117       1 disruption.go:490] No PodDisruptionBudgets found for pod csi-azurefile-controller-7b7f546c46-ghrkj, PodDisruptionBudget controller will avoid syncing.
I0310 21:15:53.425149       1 disruption.go:418] No matching pdb for pod "csi-azurefile-controller-7b7f546c46-ghrkj"
I0310 21:15:53.425318       1 controller_utils.go:581] Controller csi-azurefile-controller-7b7f546c46 created pod csi-azurefile-controller-7b7f546c46-ghrkj
I0310 21:15:53.425192       1 replica_set.go:380] Pod csi-azurefile-controller-7b7f546c46-ghrkj created: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"csi-azurefile-controller-7b7f546c46-ghrkj", GenerateName:"csi-azurefile-controller-7b7f546c46-", Namespace:"kube-system", SelfLink:"", UID:"af49e3c6-afbd-4524-abc4-8fe2fc24833b", ResourceVersion:"1840", Generation:0, CreationTimestamp:time.Date(2023, time.March, 10, 21, 15, 53, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"csi-azurefile-controller", "pod-template-hash":"7b7f546c46"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"apps/v1", Kind:"ReplicaSet", Name:"csi-azurefile-controller-7b7f546c46", UID:"522ca699-517a-48fe-bc55-2862d75390ee", Controller:(*bool)(0xc001dbd8a7), BlockOwnerDeletion:(*bool)(0xc001dbd8a8)}}, Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:time.Date(2023, time.March, 10, 21, 15, 53, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc000d2e3f0), Subresource:""}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"socket-dir", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(0xc000d2e408), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"azure-cred", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc000d2e420), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), 
VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"kube-api-access-bgrns", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(0xc0002615e0), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"csi-provisioner", Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-provisioner:v3.3.0", Command:[]string(nil), Args:[]string{"-v=2", "--csi-address=$(ADDRESS)", "--leader-election", "--leader-election-namespace=kube-system", "--timeout=300s", "--extra-create-metadata=true", "--kube-api-qps=50", "--kube-api-burst=100"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:524288000, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-bgrns", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", 
TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"csi-attacher", Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-attacher:v4.0.0", Command:[]string(nil), Args:[]string{"-v=2", "-csi-address=$(ADDRESS)", "-timeout=120s", "--leader-election", "--leader-election-namespace=kube-system", "--kube-api-qps=50", "--kube-api-burst=100"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:524288000, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-bgrns", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"csi-snapshotter", Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-snapshotter:v5.0.1", Command:[]string(nil), Args:[]string{"-v=2", "-csi-address=$(ADDRESS)", "--leader-election", "--leader-election-namespace=kube-system"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:104857600, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-bgrns", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", 
SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"csi-resizer", Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-resizer:v1.6.0", Command:[]string(nil), Args:[]string{"-csi-address=$(ADDRESS)", "-v=2", "--leader-election", "--leader-election-namespace=kube-system", "-handle-volume-inuse-error=false", "-feature-gates=RecoverVolumeExpansionFailure=true", "-timeout=120s"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:524288000, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-bgrns", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"liveness-probe", Image:"mcr.microsoft.com/oss/kubernetes-csi/livenessprobe:v2.8.0", Command:[]string(nil), Args:[]string{"--csi-address=/csi/csi.sock", "--probe-timeout=3s", "--health-port=29612", "--v=2"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:104857600, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-bgrns", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"azurefile", 
Image:"mcr.microsoft.com/k8s/csi/azurefile-csi:latest", Command:[]string(nil), Args:[]string{"--v=5", "--endpoint=$(CSI_ENDPOINT)", "--metrics-address=0.0.0.0:29614", "--user-agent-suffix=OSS-kubectl"}, WorkingDir:"", Ports:[]v1.ContainerPort{v1.ContainerPort{Name:"healthz", HostPort:29612, ContainerPort:29612, Protocol:"TCP", HostIP:""}, v1.ContainerPort{Name:"metrics", HostPort:29614, ContainerPort:29614, Protocol:"TCP", HostIP:""}}, EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"AZURE_CREDENTIAL_FILE", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc000261fc0)}, v1.EnvVar{Name:"CSI_ENDPOINT", Value:"unix:///csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:209715200, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"azure-cred", ReadOnly:false, MountPath:"/etc/kubernetes/", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-bgrns", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(0xc001fd5340), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001dbdc50), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"csi-azurefile-controller-sa", DeprecatedServiceAccount:"csi-azurefile-controller-sa", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0007bc000), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node-role.kubernetes.io/master", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node-role.kubernetes.io/controlplane", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node-role.kubernetes.io/control-plane", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001dbdcc0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001dbdce0)}}, 
HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-cluster-critical", Priority:(*int32)(0xc001dbdce8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc001dbdcec), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc002727260), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil), OS:(*v1.PodOS)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition(nil), Message:"", Reason:"", NominatedNodeName:"", HostIP:"", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:<nil>, InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus(nil), QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}.
I0310 21:15:53.425796       1 controller_utils.go:240] Lowered expectations &controller.ControlleeExpectations{add:1, del:0, key:"kube-system/csi-azurefile-controller-7b7f546c46", timestamp:time.Time{wall:0xc0fb0562583e88b7, ext:266470535607, loc:(*time.Location)(0x72c0b80)}}
I0310 21:15:53.425873       1 pvc_protection_controller.go:380] "Enqueuing PVCs for Pod" pod="kube-system/csi-azurefile-controller-7b7f546c46-ghrkj" podUID=af49e3c6-afbd-4524-abc4-8fe2fc24833b
I0310 21:15:53.425955       1 taint_manager.go:401] "Noticed pod update" pod="kube-system/csi-azurefile-controller-7b7f546c46-ghrkj"
I0310 21:15:53.425968       1 event.go:294] "Event occurred" object="kube-system/csi-azurefile-controller-7b7f546c46" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: csi-azurefile-controller-7b7f546c46-ghrkj"
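Note: each "Event occurred" line is the controller publishing a Kubernetes Event (the ones later surfaced by kubectl describe) through its event recorder. A compact sketch of the same client-go recorder plumbing (the component name is an illustrative stand-in; the namespace and object names are taken from this log):

    package main

    import (
        "context"

        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/kubernetes/scheme"
        typedcorev1 "k8s.io/client-go/kubernetes/typed/core/v1"
        "k8s.io/client-go/rest"
        "k8s.io/client-go/tools/record"
    )

    func main() {
        config, err := rest.InClusterConfig()
        if err != nil {
            panic(err)
        }
        clientset := kubernetes.NewForConfigOrDie(config)

        // The broadcaster fans recorded events out to the API server.
        broadcaster := record.NewBroadcaster()
        broadcaster.StartRecordingToSink(&typedcorev1.EventSinkImpl{
            Interface: clientset.CoreV1().Events(""),
        })
        recorder := broadcaster.NewRecorder(scheme.Scheme,
            v1.EventSource{Component: "example-controller"}) // name is illustrative

        // Attach a Normal/SuccessfulCreate event to a ReplicaSet, mirroring
        // the record above.
        rs, err := clientset.AppsV1().ReplicaSets("kube-system").Get(
            context.TODO(), "csi-azurefile-controller-7b7f546c46", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        recorder.Eventf(rs, v1.EventTypeNormal, "SuccessfulCreate",
            "Created pod: %s", "csi-azurefile-controller-7b7f546c46-ghrkj")
    }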
I0310 21:15:53.437429       1 disruption.go:427] updatePod called on pod "csi-azurefile-controller-7b7f546c46-ghrkj"
I0310 21:15:53.437647       1 disruption.go:490] No PodDisruptionBudgets found for pod csi-azurefile-controller-7b7f546c46-ghrkj, PodDisruptionBudget controller will avoid syncing.
I0310 21:15:53.437787       1 disruption.go:430] No matching pdb for pod "csi-azurefile-controller-7b7f546c46-ghrkj"
I0310 21:15:53.438047       1 replica_set.go:443] Pod csi-azurefile-controller-7b7f546c46-ghrkj updated, objectMeta {Name:csi-azurefile-controller-7b7f546c46-ghrkj GenerateName:csi-azurefile-controller-7b7f546c46- Namespace:kube-system SelfLink: UID:af49e3c6-afbd-4524-abc4-8fe2fc24833b ResourceVersion:1840 Generation:0 CreationTimestamp:2023-03-10 21:15:53 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[app:csi-azurefile-controller pod-template-hash:7b7f546c46] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:csi-azurefile-controller-7b7f546c46 UID:522ca699-517a-48fe-bc55-2862d75390ee Controller:0xc001dbd8a7 BlockOwnerDeletion:0xc001dbd8a8}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2023-03-10 21:15:53 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"522ca699-517a-48fe-bc55-2862d75390ee\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"azurefile\"}":{".":{},"f:args":{},"f:env":{".":{},"k:{\"name\":\"AZURE_CREDENTIAL_FILE\"}":{".":{},"f:name":{},"f:valueFrom":{".":{},"f:configMapKeyRef":{}}},"k:{\"name\":\"CSI_ENDPOINT\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":29612,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:hostPort":{},"f:name":{},"f:protocol":{}},"k:{\"containerPort\":29614,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:hostPort":{},"f:name":{},"f:protocol":{}}},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}},"k:{\"mountPath\":\"/etc/kubernetes/\"}":{".":{},"f:mountPath":{},"f:name":{}}}},"k:{\"name\":\"csi-attacher\"}":{".":{},"f:args":{},"f:env":{".":{},"k:{\"name\":\"ADDRESS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}}}},"k:{\"name\":\"csi-provisioner\"}":{".":{},"f:args":{},"f:env":{".":{},"k:{\"name\":\"ADDRESS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}}}},"k:{\"name\":\"csi-resizer\"}":{".":{},"f:args":{},"f:env":{".":{},"k:{\"name\":\"ADDRESS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}}}},"k:{\"name\":\"csi-snapshotter\"}":{".":{},"f:args":{},"f:env":{".":{},"k:{\"name\":\"ADDRESS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}}}},"k:{\"name\":\"liveness-probe\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:hostNetwork":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{},"f:volumes":{".":{},"k:{\"name\":\"azure-cred\"}":{".":{},"f:hostPath":{".":{},"f:path":{},"f:type":{}},"f:name":{}},"k:{\"name\":\"socket-dir\"}":{".":{},"f:emptyDir":{},"f:name":{}}}}} Subresource:}]} -> {Name:csi-azurefile-controller-7b7f546c46-ghrkj GenerateName:csi-azurefile-controller-7b7f546c46- Namespace:kube-system SelfLink: UID:af49e3c6-afbd-4524-abc4-8fe2fc24833b ResourceVersion:1841 Generation:0 CreationTimestamp:2023-03-10 21:15:53 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[app:csi-azurefile-controller pod-template-hash:7b7f546c46] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:csi-azurefile-controller-7b7f546c46 UID:522ca699-517a-48fe-bc55-2862d75390ee Controller:0xc001f5348e BlockOwnerDeletion:0xc001f5348f}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2023-03-10 21:15:53 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"522ca699-517a-48fe-bc55-2862d75390ee\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"azurefile\"}":{".":{},"f:args":{},"f:env":{".":{},"k:{\"name\":\"AZURE_CREDENTIAL_FILE\"}":{".":{},"f:name":{},"f:valueFrom":{".":{},"f:configMapKeyRef":{}}},"k:{\"name\":\"CSI_ENDPOINT\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":29612,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:hostPort":{},"f:name":{},"f:protocol":{}},"k:{\"containerPort\":29614,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:hostPort":{},"f:name":{},"f:protocol":{}}},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}},"k:{\"mountPath\":\"/etc/kubernetes/\"}":{".":{},"f:mountPath":{},"f:name":{}}}},"k:{\"name\":\"csi-attacher\"}":{".":{},"f:args":{},"f:env":{".":{},"k:{\"name\":\"ADDRESS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}}}},"k:{\"name\":\"csi-provisioner\"}":{".":{},"f:args":{},"f:env":{".":{},"k:{\"name\":\"ADDRESS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}}}},"k:{\"name\":\"csi-resizer\"}":{".":{},"f:args":{},"f:env":{".":{},"k:{\"name\":\"ADDRESS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}}}},"k:{\"name\":\"csi-snapshotter\"}":{".":{},"f:args":{},"f:env":{".":{},"k:{\"name\":\"ADDRESS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}}}},"k:{\"name\":\"liveness-probe\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:hostNetwork":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{},"f:volumes":{".":{},"k:{\"name\":\"azure-cred\"}":{".":{},"f:hostPath":{".":{},"f:path":{},"f:type":{}},"f:name":{}},"k:{\"name\":\"socket-dir\"}":{".":{},"f:emptyDir":{},"f:name":{}}}}} Subresource:}]}.
I0310 21:15:53.438515       1 taint_manager.go:401] "Noticed pod update" pod="kube-system/csi-azurefile-controller-7b7f546c46-ghrkj"
I0310 21:15:53.442642       1 controller_utils.go:581] Controller csi-azurefile-controller-7b7f546c46 created pod csi-azurefile-controller-7b7f546c46-2n94z
I0310 21:15:53.442899       1 replica_set_utils.go:59] Updating status for : kube-system/csi-azurefile-controller-7b7f546c46, replicas 0->0 (need 2), fullyLabeledReplicas 0->0, readyReplicas 0->0, availableReplicas 0->0, sequence No: 0->1
I0310 21:15:53.443400       1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/csi-azurefile-controller" duration="49.775326ms"
I0310 21:15:53.443578       1 deployment_controller.go:490] "Error syncing deployment" deployment="kube-system/csi-azurefile-controller" err="Operation cannot be fulfilled on deployments.apps \"csi-azurefile-controller\": the object has been modified; please apply your changes to the latest version and try again"
I0310 21:15:53.443728       1 deployment_controller.go:576] "Started syncing deployment" deployment="kube-system/csi-azurefile-controller" startTime="2023-03-10 21:15:53.443713852 +0000 UTC m=+266.507497944"
I0310 21:15:53.444929       1 event.go:294] "Event occurred" object="kube-system/csi-azurefile-controller-7b7f546c46" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: csi-azurefile-controller-7b7f546c46-2n94z"
I0310 21:15:53.449110       1 disruption.go:415] addPod called on pod "csi-azurefile-controller-7b7f546c46-2n94z"
I0310 21:15:53.449303       1 disruption.go:490] No PodDisruptionBudgets found for pod csi-azurefile-controller-7b7f546c46-2n94z, PodDisruptionBudget controller will avoid syncing.
I0310 21:15:53.449451       1 disruption.go:418] No matching pdb for pod "csi-azurefile-controller-7b7f546c46-2n94z"
I0310 21:15:53.449594       1 replica_set.go:380] Pod csi-azurefile-controller-7b7f546c46-2n94z created: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"csi-azurefile-controller-7b7f546c46-2n94z", GenerateName:"csi-azurefile-controller-7b7f546c46-", Namespace:"kube-system", SelfLink:"", UID:"77a5804f-3376-4ffb-b2dd-a2f3896f3933", ResourceVersion:"1842", Generation:0, CreationTimestamp:time.Date(2023, time.March, 10, 21, 15, 53, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"csi-azurefile-controller", "pod-template-hash":"7b7f546c46"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"apps/v1", Kind:"ReplicaSet", Name:"csi-azurefile-controller-7b7f546c46", UID:"522ca699-517a-48fe-bc55-2862d75390ee", Controller:(*bool)(0xc0020ed2a7), BlockOwnerDeletion:(*bool)(0xc0020ed2a8)}}, Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:time.Date(2023, time.March, 10, 21, 15, 53, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001130cd8), Subresource:""}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"socket-dir", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(0xc001130d20), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"azure-cred", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001130d38), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), 
VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"kube-api-access-j88sn", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(0xc000c433c0), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"csi-provisioner", Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-provisioner:v3.3.0", Command:[]string(nil), Args:[]string{"-v=2", "--csi-address=$(ADDRESS)", "--leader-election", "--leader-election-namespace=kube-system", "--timeout=300s", "--extra-create-metadata=true", "--kube-api-qps=50", "--kube-api-burst=100"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:524288000, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-j88sn", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", 
TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"csi-attacher", Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-attacher:v4.0.0", Command:[]string(nil), Args:[]string{"-v=2", "-csi-address=$(ADDRESS)", "-timeout=120s", "--leader-election", "--leader-election-namespace=kube-system", "--kube-api-qps=50", "--kube-api-burst=100"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:524288000, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-j88sn", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"csi-snapshotter", Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-snapshotter:v5.0.1", Command:[]string(nil), Args:[]string{"-v=2", "-csi-address=$(ADDRESS)", "--leader-election", "--leader-election-namespace=kube-system"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:104857600, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-j88sn", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", 
SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"csi-resizer", Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-resizer:v1.6.0", Command:[]string(nil), Args:[]string{"-csi-address=$(ADDRESS)", "-v=2", "--leader-election", "--leader-election-namespace=kube-system", "-handle-volume-inuse-error=false", "-feature-gates=RecoverVolumeExpansionFailure=true", "-timeout=120s"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:524288000, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-j88sn", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"liveness-probe", Image:"mcr.microsoft.com/oss/kubernetes-csi/livenessprobe:v2.8.0", Command:[]string(nil), Args:[]string{"--csi-address=/csi/csi.sock", "--probe-timeout=3s", "--health-port=29612", "--v=2"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:104857600, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-j88sn", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"azurefile", 
Image:"mcr.microsoft.com/k8s/csi/azurefile-csi:latest", Command:[]string(nil), Args:[]string{"--v=5", "--endpoint=$(CSI_ENDPOINT)", "--metrics-address=0.0.0.0:29614", "--user-agent-suffix=OSS-kubectl"}, WorkingDir:"", Ports:[]v1.ContainerPort{v1.ContainerPort{Name:"healthz", HostPort:29612, ContainerPort:29612, Protocol:"TCP", HostIP:""}, v1.ContainerPort{Name:"metrics", HostPort:29614, ContainerPort:29614, Protocol:"TCP", HostIP:""}}, EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"AZURE_CREDENTIAL_FILE", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc000c435a0)}, v1.EnvVar{Name:"CSI_ENDPOINT", Value:"unix:///csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:209715200, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"azure-cred", ReadOnly:false, MountPath:"/etc/kubernetes/", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-j88sn", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(0xc002208580), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0020ed720), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"csi-azurefile-controller-sa", DeprecatedServiceAccount:"csi-azurefile-controller-sa", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc00040aa10), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node-role.kubernetes.io/master", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node-role.kubernetes.io/controlplane", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node-role.kubernetes.io/control-plane", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0020ed7a0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0020ed7c0)}}, 
HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-cluster-critical", Priority:(*int32)(0xc0020ed7c8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0020ed7cc), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc002706cf0), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil), OS:(*v1.PodOS)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition(nil), Message:"", Reason:"", NominatedNodeName:"", HostIP:"", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:<nil>, InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus(nil), QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}.
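[Editor's note] The block above is one kube-controller-manager log entry: the full Pod object for csi-azurefile-controller-7b7f546c46-2n94z, printed with Go's %#v verb at high verbosity. A minimal sketch (not the controller-manager's actual code; the struct names here are simplified stand-ins) showing how that notation is produced:

package main

import "fmt"

type volumeMount struct {
	Name      string
	ReadOnly  bool
	MountPath string
}

type container struct {
	Name         string
	Image        string
	Args         []string
	VolumeMounts []volumeMount
}

func main() {
	c := container{
		Name:         "csi-attacher",
		Image:        "mcr.microsoft.com/oss/kubernetes-csi/csi-attacher:v4.0.0",
		Args:         []string{"-v=2", "-csi-address=$(ADDRESS)"},
		VolumeMounts: []volumeMount{{Name: "socket-dir", MountPath: "/csi"}},
	}
	// %#v prints the Go-syntax representation, e.g.
	// main.container{Name:"csi-attacher", Image:"mcr...", Args:[]string{"-v=2", ...}, ...}
	// which is exactly the shape of the dump above.
	fmt.Printf("%#v\n", c)
}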
I0310 21:15:53.450215       1 controller_utils.go:240] Lowered expectations &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/csi-azurefile-controller-7b7f546c46", timestamp:time.Time{wall:0xc0fb0562583e88b7, ext:266470535607, loc:(*time.Location)(0x72c0b80)}}
I0310 21:15:53.450396       1 pvc_protection_controller.go:380] "Enqueuing PVCs for Pod" pod="kube-system/csi-azurefile-controller-7b7f546c46-2n94z" podUID=77a5804f-3376-4ffb-b2dd-a2f3896f3933
I0310 21:15:53.450580       1 taint_manager.go:401] "Noticed pod update" pod="kube-system/csi-azurefile-controller-7b7f546c46-2n94z"
I0310 21:15:53.450950       1 deployment_util.go:775] Deployment "csi-azurefile-controller" timed out (false) [last progress check: 2023-03-10 21:15:53 +0000 UTC - now: 2023-03-10 21:15:53.450944715 +0000 UTC m=+266.514728807]
I0310 21:15:53.473174       1 disruption.go:427] updatePod called on pod "csi-azurefile-controller-7b7f546c46-2n94z"
I0310 21:15:53.474022       1 disruption.go:490] No PodDisruptionBudgets found for pod csi-azurefile-controller-7b7f546c46-2n94z, PodDisruptionBudget controller will avoid syncing.
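[Editor's note] The disruption-controller lines above mean no PodDisruptionBudget in kube-system has a selector matching this pod's labels, so the controller skips it. A minimal sketch of that selector check, assuming a hypothetical helper name (the controller's real code differs, but the match uses these apimachinery calls):

package example

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/labels"
)

// pdbMatchesPod reports whether a PDB's spec.selector matches a pod's labels.
func pdbMatchesPod(selector *metav1.LabelSelector, podLabels map[string]string) (bool, error) {
	sel, err := metav1.LabelSelectorAsSelector(selector)
	if err != nil {
		return false, err
	}
	return sel.Matches(labels.Set(podLabels)), nil
}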
... skipping 204 lines ...
I0310 21:16:02.781975       1 pvc_protection_controller.go:380] "Enqueuing PVCs for Pod" pod="kube-system/csi-snapshot-controller-5b8fcdb667-lmz96" podUID=ba6a7d6e-6b0d-4666-a5f1-c97ef0c7735f
I0310 21:16:02.782027       1 taint_manager.go:401] "Noticed pod update" pod="kube-system/csi-snapshot-controller-5b8fcdb667-lmz96"
I0310 21:16:02.782155       1 controller_utils.go:581] Controller csi-snapshot-controller-5b8fcdb667 created pod csi-snapshot-controller-5b8fcdb667-lmz96
I0310 21:16:02.782194       1 replica_set_utils.go:59] Updating status for : kube-system/csi-snapshot-controller-5b8fcdb667, replicas 0->0 (need 2), fullyLabeledReplicas 0->0, readyReplicas 0->0, availableReplicas 0->0, sequence No: 0->1
I0310 21:16:02.782488       1 event.go:294] "Event occurred" object="kube-system/csi-snapshot-controller-5b8fcdb667" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: csi-snapshot-controller-5b8fcdb667-lmz96"
I0310 21:16:02.790202       1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/csi-snapshot-controller" duration="67.215103ms"
I0310 21:16:02.790233       1 deployment_controller.go:490] "Error syncing deployment" deployment="kube-system/csi-snapshot-controller" err="Operation cannot be fulfilled on deployments.apps \"csi-snapshot-controller\": the object has been modified; please apply your changes to the latest version and try again"
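[Editor's note] "the object has been modified; please apply your changes to the latest version and try again" is the API server's optimistic-concurrency conflict: the controller's cached Deployment carried a stale resourceVersion, so its status update was rejected and the key was requeued; the next sync (visible below) succeeds. A hedged sketch of the standard client-side handling with client-go's RetryOnConflict; "clientset" and "bumpReplicas" are illustrative names, not code from this job:

package example

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/util/retry"
)

func bumpReplicas(clientset *kubernetes.Clientset, ns, name string, replicas int32) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		// Re-read on every attempt so the update carries the current
		// resourceVersion instead of the stale one that caused the conflict.
		d, err := clientset.AppsV1().Deployments(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		d.Spec.Replicas = &replicas
		_, err = clientset.AppsV1().Deployments(ns).Update(context.TODO(), d, metav1.UpdateOptions{})
		return err
	})
}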
I0310 21:16:02.790289       1 disruption.go:427] updatePod called on pod "csi-snapshot-controller-5b8fcdb667-r9gv8"
I0310 21:16:02.790307       1 disruption.go:490] No PodDisruptionBudgets found for pod csi-snapshot-controller-5b8fcdb667-r9gv8, PodDisruptionBudget controller will avoid syncing.
I0310 21:16:02.790314       1 disruption.go:430] No matching pdb for pod "csi-snapshot-controller-5b8fcdb667-r9gv8"
I0310 21:16:02.790351       1 deployment_controller.go:576] "Started syncing deployment" deployment="kube-system/csi-snapshot-controller" startTime="2023-03-10 21:16:02.790246112 +0000 UTC m=+275.854030204"
I0310 21:16:02.790584       1 replica_set.go:443] Pod csi-snapshot-controller-5b8fcdb667-r9gv8 updated, objectMeta {Name:csi-snapshot-controller-5b8fcdb667-r9gv8 GenerateName:csi-snapshot-controller-5b8fcdb667- Namespace:kube-system SelfLink: UID:e54b4ba0-5896-4200-9b87-d2f0f3110893 ResourceVersion:1953 Generation:0 CreationTimestamp:2023-03-10 21:16:02 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[app:csi-snapshot-controller pod-template-hash:5b8fcdb667] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:csi-snapshot-controller-5b8fcdb667 UID:61054f44-4df9-4554-995f-c27a51694981 Controller:0xc0027dc2c7 BlockOwnerDeletion:0xc0027dc2c8}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2023-03-10 21:16:02 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"61054f44-4df9-4554-995f-c27a51694981\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"csi-snapshot-controller\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}} Subresource:}]} -> {Name:csi-snapshot-controller-5b8fcdb667-r9gv8 GenerateName:csi-snapshot-controller-5b8fcdb667- Namespace:kube-system SelfLink: UID:e54b4ba0-5896-4200-9b87-d2f0f3110893 ResourceVersion:1959 Generation:0 CreationTimestamp:2023-03-10 21:16:02 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[app:csi-snapshot-controller pod-template-hash:5b8fcdb667] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:csi-snapshot-controller-5b8fcdb667 UID:61054f44-4df9-4554-995f-c27a51694981 Controller:0xc00294a3f7 BlockOwnerDeletion:0xc00294a3f8}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2023-03-10 21:16:02 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"61054f44-4df9-4554-995f-c27a51694981\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"csi-snapshot-controller\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}} Subresource:}]}.
I0310 21:16:02.790825       1 taint_manager.go:401] "Noticed pod update" pod="kube-system/csi-snapshot-controller-5b8fcdb667-r9gv8"
... skipping 14 lines ...
I0310 21:16:02.825877       1 disruption.go:427] updatePod called on pod "csi-snapshot-controller-5b8fcdb667-lmz96"
I0310 21:16:02.825908       1 disruption.go:490] No PodDisruptionBudgets found for pod csi-snapshot-controller-5b8fcdb667-lmz96, PodDisruptionBudget controller will avoid syncing.
I0310 21:16:02.825914       1 disruption.go:430] No matching pdb for pod "csi-snapshot-controller-5b8fcdb667-lmz96"
I0310 21:16:02.826027       1 replica_set.go:443] Pod csi-snapshot-controller-5b8fcdb667-lmz96 updated, objectMeta {Name:csi-snapshot-controller-5b8fcdb667-lmz96 GenerateName:csi-snapshot-controller-5b8fcdb667- Namespace:kube-system SelfLink: UID:ba6a7d6e-6b0d-4666-a5f1-c97ef0c7735f ResourceVersion:1960 Generation:0 CreationTimestamp:2023-03-10 21:16:02 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[app:csi-snapshot-controller pod-template-hash:5b8fcdb667] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:csi-snapshot-controller-5b8fcdb667 UID:61054f44-4df9-4554-995f-c27a51694981 Controller:0xc00294aa77 BlockOwnerDeletion:0xc00294aa78}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2023-03-10 21:16:02 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"61054f44-4df9-4554-995f-c27a51694981\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"csi-snapshot-controller\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}} Subresource:}]} -> {Name:csi-snapshot-controller-5b8fcdb667-lmz96 GenerateName:csi-snapshot-controller-5b8fcdb667- Namespace:kube-system SelfLink: UID:ba6a7d6e-6b0d-4666-a5f1-c97ef0c7735f ResourceVersion:1965 Generation:0 CreationTimestamp:2023-03-10 21:16:02 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[app:csi-snapshot-controller pod-template-hash:5b8fcdb667] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:csi-snapshot-controller-5b8fcdb667 UID:61054f44-4df9-4554-995f-c27a51694981 Controller:0xc002c28287 BlockOwnerDeletion:0xc002c28288}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2023-03-10 21:16:02 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"61054f44-4df9-4554-995f-c27a51694981\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"csi-snapshot-controller\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}} Subresource:} {Manager:kubelet Operation:Update APIVersion:v1 Time:2023-03-10 21:16:02 +0000 UTC FieldsType:FieldsV1 
FieldsV1:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} Subresource:status}]}.
I0310 21:16:02.833858       1 replica_set_utils.go:59] Updating status for : kube-system/csi-snapshot-controller-5b8fcdb667, replicas 0->2 (need 2), fullyLabeledReplicas 0->2, readyReplicas 0->0, availableReplicas 0->0, sequence No: 1->1
I0310 21:16:02.836716       1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/csi-snapshot-controller" duration="28.853845ms"
I0310 21:16:02.836765       1 deployment_controller.go:490] "Error syncing deployment" deployment="kube-system/csi-snapshot-controller" err="Operation cannot be fulfilled on deployments.apps \"csi-snapshot-controller\": the object has been modified; please apply your changes to the latest version and try again"
I0310 21:16:02.836792       1 deployment_controller.go:576] "Started syncing deployment" deployment="kube-system/csi-snapshot-controller" startTime="2023-03-10 21:16:02.836779252 +0000 UTC m=+275.900563344"
I0310 21:16:02.837168       1 deployment_util.go:775] Deployment "csi-snapshot-controller" timed out (false) [last progress check: 2023-03-10 21:16:02 +0000 UTC - now: 2023-03-10 21:16:02.837163761 +0000 UTC m=+275.900947853]
I0310 21:16:02.837191       1 progress.go:195] Queueing up deployment "csi-snapshot-controller" for a progress check after 599s
I0310 21:16:02.837229       1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/csi-snapshot-controller" duration="424.01µs"
I0310 21:16:02.837993       1 disruption.go:427] updatePod called on pod "csi-snapshot-controller-5b8fcdb667-r9gv8"
I0310 21:16:02.838014       1 disruption.go:490] No PodDisruptionBudgets found for pod csi-snapshot-controller-5b8fcdb667-r9gv8, PodDisruptionBudget controller will avoid syncing.
... skipping 307 lines ...
I0310 21:17:59.332007       1 namespace_controller.go:180] Finished syncing namespace "azurefile-2514" (163.625642ms)
I0310 21:17:59.421161       1 serviceaccounts_controller.go:188] Finished syncing namespace "azurefile-5368" (3.1µs)
I0310 21:17:59.540557       1 serviceaccounts_controller.go:188] Finished syncing namespace "azurefile-8317" (5.39922ms)
I0310 21:17:59.544979       1 publisher.go:186] Finished syncing namespace "azurefile-8317" (9.627614ms)
I0310 21:18:00.487910       1 namespaced_resources_deleter.go:502] namespace controller - deleteAllContent - namespace: azurefile-3862
I0310 21:18:00.534827       1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=secrets, namespace azurefile-3862, name default-token-z42lb, uid e8a326a6-1d19-44f9-80a7-3b3565133e30, event type delete
E0310 21:18:00.550642       1 tokens_controller.go:262] error synchronizing serviceaccount azurefile-3862/default: secrets "default-token-54w7s" is forbidden: unable to create new content in namespace azurefile-3862 because it is being terminated
I0310 21:18:00.617616       1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=configmaps, namespace azurefile-3862, name kube-root-ca.crt, uid e02816c9-2a4a-47ac-8641-ecec1e0e015f, event type delete
I0310 21:18:00.620434       1 publisher.go:186] Finished syncing namespace "azurefile-3862" (2.771762ms)
I0310 21:18:00.642799       1 tokens_controller.go:252] syncServiceAccount(azurefile-3862/default), service account deleted, removing tokens
I0310 21:18:00.642843       1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azurefile-3862, name default, uid f3c40078-eb6e-4372-8aab-6ffc09a43c1f, event type delete
I0310 21:18:00.642896       1 serviceaccounts_controller.go:188] Finished syncing namespace "azurefile-3862" (1.4µs)
I0310 21:18:00.656027       1 namespaced_resources_deleter.go:556] namespace controller - deleteAllContent - namespace: azurefile-3862, estimate: 0, errors: <nil>
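[Editor's note] The tokens_controller error above ("secrets ... is forbidden: unable to create new content in namespace ... because it is being terminated") is a benign race during namespace teardown: the namespace deleter removes the token secret, the token controller tries to mint a replacement, and the API server rejects creates in a terminating namespace; the service account itself is deleted moments later. A minimal illustration of the gate being applied, using a hypothetical helper (the real check lives server-side in the namespace lifecycle admission plugin):

package example

import (
	corev1 "k8s.io/api/core/v1"
)

// namespaceAcceptsWrites reports whether new objects may still be created in ns:
// a namespace with a deletionTimestamp (phase Terminating) rejects creates.
func namespaceAcceptsWrites(ns *corev1.Namespace) bool {
	return ns.DeletionTimestamp == nil && ns.Status.Phase != corev1.NamespaceTerminating
}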
... skipping 12 lines ...
I0310 21:18:00.870221       1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-8317/pvc-mhh49]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0310 21:18:00.870328       1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-8317/pvc-mhh49]: no volume found
I0310 21:18:00.870418       1 pv_controller.go:1455] provisionClaim[azurefile-8317/pvc-mhh49]: started
I0310 21:18:00.870435       1 pv_controller.go:1764] scheduleOperation[provision-azurefile-8317/pvc-mhh49[24e8aac0-e317-4903-adbc-f9861c482618]]
I0310 21:18:00.870509       1 pv_controller.go:1775] operation "provision-azurefile-8317/pvc-mhh49[24e8aac0-e317-4903-adbc-f9861c482618]" is already running, skipping
I0310 21:18:00.870626       1 pvc_protection_controller.go:331] "Got event on PVC" pvc="azurefile-8317/pvc-mhh49"
I0310 21:18:00.873153       1 azure_provision.go:108] failed to get azure provider
I0310 21:18:00.873181       1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-8317/pvc-mhh49" with StorageClass "azurefile-8317-kubernetes.io-azure-file-dynamic-sc-7c2tl": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0310 21:18:00.873347       1 goroutinemap.go:150] Operation for "provision-azurefile-8317/pvc-mhh49[24e8aac0-e317-4903-adbc-f9861c482618]" failed. No retries permitted until 2023-03-10 21:18:01.373331597 +0000 UTC m=+394.437115689 (durationBeforeRetry 500ms). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0310 21:18:00.873638       1 event.go:294] "Event occurred" object="azurefile-8317/pvc-mhh49" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
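[Editor's note] This is the core failure in this section: the provisionClaimOperation lines show the claim uses the in-tree kubernetes.io/azure-file plugin, which needs the legacy in-tree Azure cloud provider object; this controller-manager has none, so GetCloudProvider yields nil and every provisioning attempt fails the same way and is retried with backoff. A minimal sketch of the guard pattern behind the message, assuming a hypothetical getCloudProvider lookup (not the in-tree plugin's actual code):

package example

import "fmt"

type cloud struct{} // stand-in for the Azure cloud provider object

// getCloudProvider is a hypothetical lookup that yields nil when the
// controller-manager runs without in-tree Azure cloud-provider configuration.
func getCloudProvider() *cloud { return nil }

func newProvisioner() (*cloud, error) {
	c := getCloudProvider()
	if c == nil {
		// Mirrors the log: the operation aborts and is requeued with backoff.
		return nil, fmt.Errorf("failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead")
	}
	return c, nil
}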
I0310 21:18:01.828350       1 namespaced_resources_deleter.go:502] namespace controller - deleteAllContent - namespace: azurefile-9834
I0310 21:18:01.866426       1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=configmaps, namespace azurefile-9834, name kube-root-ca.crt, uid 045d145d-b1da-4d26-afb1-b51c205a129b, event type delete
I0310 21:18:01.867926       1 publisher.go:186] Finished syncing namespace "azurefile-9834" (1.503234ms)
I0310 21:18:01.925985       1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=secrets, namespace azurefile-9834, name default-token-mr6s4, uid d6551234-936a-4dfd-8834-49b48f5e39a7, event type delete
E0310 21:18:01.939533       1 tokens_controller.go:262] error synchronizing serviceaccount azurefile-9834/default: secrets "default-token-94vlw" is forbidden: unable to create new content in namespace azurefile-9834 because it is being terminated
I0310 21:18:01.981080       1 tokens_controller.go:252] syncServiceAccount(azurefile-9834/default), service account deleted, removing tokens
I0310 21:18:01.981123       1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azurefile-9834, name default, uid 30ca44b1-cdf4-40c6-b0a4-0052d8f3be16, event type delete
I0310 21:18:01.981146       1 serviceaccounts_controller.go:188] Finished syncing namespace "azurefile-9834" (3.3µs)
I0310 21:18:02.004883       1 serviceaccounts_controller.go:188] Finished syncing namespace "azurefile-9834" (3.9µs)
I0310 21:18:02.005242       1 namespaced_resources_deleter.go:556] namespace controller - deleteAllContent - namespace: azurefile-9834, estimate: 0, errors: <nil>
I0310 21:18:02.016454       1 namespace_controller.go:180] Finished syncing namespace "azurefile-9834" (193.129998ms)
... skipping 35 lines ...
I0310 21:18:08.707228       1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-8317/pvc-mhh49]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0310 21:18:08.707251       1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-8317/pvc-mhh49]: no volume found
I0310 21:18:08.707264       1 pv_controller.go:1455] provisionClaim[azurefile-8317/pvc-mhh49]: started
I0310 21:18:08.707275       1 pv_controller.go:1764] scheduleOperation[provision-azurefile-8317/pvc-mhh49[24e8aac0-e317-4903-adbc-f9861c482618]]
I0310 21:18:08.707294       1 pv_controller.go:1496] provisionClaimOperation [azurefile-8317/pvc-mhh49] started, class: "azurefile-8317-kubernetes.io-azure-file-dynamic-sc-7c2tl"
I0310 21:18:08.707301       1 pv_controller.go:1511] provisionClaimOperation [azurefile-8317/pvc-mhh49]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0310 21:18:08.710775       1 azure_provision.go:108] failed to get azure provider
I0310 21:18:08.710801       1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-8317/pvc-mhh49" with StorageClass "azurefile-8317-kubernetes.io-azure-file-dynamic-sc-7c2tl": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0310 21:18:08.710841       1 goroutinemap.go:150] Operation for "provision-azurefile-8317/pvc-mhh49[24e8aac0-e317-4903-adbc-f9861c482618]" failed. No retries permitted until 2023-03-10 21:18:09.71082758 +0000 UTC m=+402.774611772 (durationBeforeRetry 1s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0310 21:18:08.711235       1 event.go:294] "Event occurred" object="azurefile-8317/pvc-mhh49" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
I0310 21:18:09.583954       1 namespace_controller.go:185] Namespace has been deleted azurefile-5368
I0310 21:18:09.583982       1 namespace_controller.go:180] Finished syncing namespace "azurefile-5368" (53.001µs)
I0310 21:18:09.697434       1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0310 21:18:15.659570       1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.RoleBinding total 12 items received
I0310 21:18:17.526914       1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="172.004µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:33642" resp=200
I0310 21:18:18.739773       1 gc_controller.go:161] GC'ing orphaned
... skipping 4 lines ...
I0310 21:18:23.707312       1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-8317/pvc-mhh49]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0310 21:18:23.707339       1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-8317/pvc-mhh49]: no volume found
I0310 21:18:23.707345       1 pv_controller.go:1455] provisionClaim[azurefile-8317/pvc-mhh49]: started
I0310 21:18:23.707354       1 pv_controller.go:1764] scheduleOperation[provision-azurefile-8317/pvc-mhh49[24e8aac0-e317-4903-adbc-f9861c482618]]
I0310 21:18:23.707368       1 pv_controller.go:1496] provisionClaimOperation [azurefile-8317/pvc-mhh49] started, class: "azurefile-8317-kubernetes.io-azure-file-dynamic-sc-7c2tl"
I0310 21:18:23.707376       1 pv_controller.go:1511] provisionClaimOperation [azurefile-8317/pvc-mhh49]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0310 21:18:23.723516       1 azure_provision.go:108] failed to get azure provider
I0310 21:18:23.723545       1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-8317/pvc-mhh49" with StorageClass "azurefile-8317-kubernetes.io-azure-file-dynamic-sc-7c2tl": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0310 21:18:23.723583       1 goroutinemap.go:150] Operation for "provision-azurefile-8317/pvc-mhh49[24e8aac0-e317-4903-adbc-f9861c482618]" failed. No retries permitted until 2023-03-10 21:18:25.72356876 +0000 UTC m=+418.787352852 (durationBeforeRetry 2s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0310 21:18:23.723608       1 event.go:294] "Event occurred" object="azurefile-8317/pvc-mhh49" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
I0310 21:18:26.632211       1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ControllerRevision total 19 items received
I0310 21:18:27.526873       1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="86.101µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:54360" resp=200
I0310 21:18:30.493897       1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 13 items received
I0310 21:18:31.630542       1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Namespace total 35 items received
I0310 21:18:33.187963       1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 8 items received
I0310 21:18:34.185773       1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 9 items received
... skipping 4 lines ...
I0310 21:18:38.707930       1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-8317/pvc-mhh49]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0310 21:18:38.707951       1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-8317/pvc-mhh49]: no volume found
I0310 21:18:38.707962       1 pv_controller.go:1455] provisionClaim[azurefile-8317/pvc-mhh49]: started
I0310 21:18:38.707973       1 pv_controller.go:1764] scheduleOperation[provision-azurefile-8317/pvc-mhh49[24e8aac0-e317-4903-adbc-f9861c482618]]
I0310 21:18:38.707996       1 pv_controller.go:1496] provisionClaimOperation [azurefile-8317/pvc-mhh49] started, class: "azurefile-8317-kubernetes.io-azure-file-dynamic-sc-7c2tl"
I0310 21:18:38.708008       1 pv_controller.go:1511] provisionClaimOperation [azurefile-8317/pvc-mhh49]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0310 21:18:38.723515       1 azure_provision.go:108] failed to get azure provider
I0310 21:18:38.723547       1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-8317/pvc-mhh49" with StorageClass "azurefile-8317-kubernetes.io-azure-file-dynamic-sc-7c2tl": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0310 21:18:38.723582       1 goroutinemap.go:150] Operation for "provision-azurefile-8317/pvc-mhh49[24e8aac0-e317-4903-adbc-f9861c482618]" failed. No retries permitted until 2023-03-10 21:18:42.723568057 +0000 UTC m=+435.787352249 (durationBeforeRetry 4s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0310 21:18:38.723966       1 event.go:294] "Event occurred" object="azurefile-8317/pvc-mhh49" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
I0310 21:18:38.740842       1 gc_controller.go:161] GC'ing orphaned
I0310 21:18:38.740928       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0310 21:18:39.716811       1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0310 21:18:42.815700       1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0310 21:18:43.963089       1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1beta2.PriorityLevelConfiguration total 0 items received
I0310 21:18:45.629460       1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1beta1.CSIStorageCapacity total 0 items received
... skipping 6 lines ...
I0310 21:18:53.708467       1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-8317/pvc-mhh49]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0310 21:18:53.708554       1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-8317/pvc-mhh49]: no volume found
I0310 21:18:53.708567       1 pv_controller.go:1455] provisionClaim[azurefile-8317/pvc-mhh49]: started
I0310 21:18:53.708578       1 pv_controller.go:1764] scheduleOperation[provision-azurefile-8317/pvc-mhh49[24e8aac0-e317-4903-adbc-f9861c482618]]
I0310 21:18:53.708605       1 pv_controller.go:1496] provisionClaimOperation [azurefile-8317/pvc-mhh49] started, class: "azurefile-8317-kubernetes.io-azure-file-dynamic-sc-7c2tl"
I0310 21:18:53.708619       1 pv_controller.go:1511] provisionClaimOperation [azurefile-8317/pvc-mhh49]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0310 21:18:53.710691       1 azure_provision.go:108] failed to get azure provider
I0310 21:18:53.710717       1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-8317/pvc-mhh49" with StorageClass "azurefile-8317-kubernetes.io-azure-file-dynamic-sc-7c2tl": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0310 21:18:53.710751       1 goroutinemap.go:150] Operation for "provision-azurefile-8317/pvc-mhh49[24e8aac0-e317-4903-adbc-f9861c482618]" failed. No retries permitted until 2023-03-10 21:19:01.710738494 +0000 UTC m=+454.774522586 (durationBeforeRetry 8s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0310 21:18:53.710941       1 event.go:294] "Event occurred" object="azurefile-8317/pvc-mhh49" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
I0310 21:18:55.637928       1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.NetworkPolicy total 1 items received
I0310 21:18:57.527183       1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="86.102µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:59362" resp=200
I0310 21:18:58.741746       1 gc_controller.go:161] GC'ing orphaned
I0310 21:18:58.741779       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0310 21:19:00.675765       1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 20 items received
I0310 21:19:04.195697       1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
... skipping 8 lines ...
I0310 21:19:08.708984       1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-8317/pvc-mhh49]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0310 21:19:08.709007       1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-8317/pvc-mhh49]: no volume found
I0310 21:19:08.709059       1 pv_controller.go:1455] provisionClaim[azurefile-8317/pvc-mhh49]: started
I0310 21:19:08.709072       1 pv_controller.go:1764] scheduleOperation[provision-azurefile-8317/pvc-mhh49[24e8aac0-e317-4903-adbc-f9861c482618]]
I0310 21:19:08.709090       1 pv_controller.go:1496] provisionClaimOperation [azurefile-8317/pvc-mhh49] started, class: "azurefile-8317-kubernetes.io-azure-file-dynamic-sc-7c2tl"
I0310 21:19:08.709129       1 pv_controller.go:1511] provisionClaimOperation [azurefile-8317/pvc-mhh49]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0310 21:19:08.713614       1 azure_provision.go:108] failed to get azure provider
I0310 21:19:08.713638       1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-8317/pvc-mhh49" with StorageClass "azurefile-8317-kubernetes.io-azure-file-dynamic-sc-7c2tl": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0310 21:19:08.713719       1 goroutinemap.go:150] Operation for "provision-azurefile-8317/pvc-mhh49[24e8aac0-e317-4903-adbc-f9861c482618]" failed. No retries permitted until 2023-03-10 21:19:24.713681116 +0000 UTC m=+477.777465308 (durationBeforeRetry 16s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0310 21:19:08.714035       1 event.go:294] "Event occurred" object="azurefile-8317/pvc-mhh49" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
I0310 21:19:09.753475       1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0310 21:19:10.652403       1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.CronJob total 0 items received
I0310 21:19:14.658487       1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Ingress total 0 items received
I0310 21:19:17.528591       1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="87.902µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:49536" resp=200
I0310 21:19:18.742252       1 gc_controller.go:161] GC'ing orphaned
I0310 21:19:18.742285       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
... skipping 20 lines ...
I0310 21:19:38.711086       1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-8317/pvc-mhh49]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0310 21:19:38.711125       1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-8317/pvc-mhh49]: no volume found
I0310 21:19:38.711154       1 pv_controller.go:1455] provisionClaim[azurefile-8317/pvc-mhh49]: started
I0310 21:19:38.711188       1 pv_controller.go:1764] scheduleOperation[provision-azurefile-8317/pvc-mhh49[24e8aac0-e317-4903-adbc-f9861c482618]]
I0310 21:19:38.711276       1 pv_controller.go:1496] provisionClaimOperation [azurefile-8317/pvc-mhh49] started, class: "azurefile-8317-kubernetes.io-azure-file-dynamic-sc-7c2tl"
I0310 21:19:38.711309       1 pv_controller.go:1511] provisionClaimOperation [azurefile-8317/pvc-mhh49]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0310 21:19:38.730753       1 azure_provision.go:108] failed to get azure provider
I0310 21:19:38.730784       1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-8317/pvc-mhh49" with StorageClass "azurefile-8317-kubernetes.io-azure-file-dynamic-sc-7c2tl": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0310 21:19:38.730825       1 goroutinemap.go:150] Operation for "provision-azurefile-8317/pvc-mhh49[24e8aac0-e317-4903-adbc-f9861c482618]" failed. No retries permitted until 2023-03-10 21:20:10.730810005 +0000 UTC m=+523.794594197 (durationBeforeRetry 32s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0310 21:19:38.731094       1 event.go:294] "Event occurred" object="azurefile-8317/pvc-mhh49" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
I0310 21:19:38.742541       1 gc_controller.go:161] GC'ing orphaned
I0310 21:19:38.742809       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0310 21:19:39.774053       1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0310 21:19:39.811662       1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0310 21:19:43.610552       1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0310 21:19:43.652982       1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.PersistentVolume total 0 items received
... skipping 42 lines ...
I0310 21:20:23.712327       1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-8317/pvc-mhh49]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0310 21:20:23.712421       1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-8317/pvc-mhh49]: no volume found
I0310 21:20:23.712436       1 pv_controller.go:1455] provisionClaim[azurefile-8317/pvc-mhh49]: started
I0310 21:20:23.712447       1 pv_controller.go:1764] scheduleOperation[provision-azurefile-8317/pvc-mhh49[24e8aac0-e317-4903-adbc-f9861c482618]]
I0310 21:20:23.712486       1 pv_controller.go:1496] provisionClaimOperation [azurefile-8317/pvc-mhh49] started, class: "azurefile-8317-kubernetes.io-azure-file-dynamic-sc-7c2tl"
I0310 21:20:23.712499       1 pv_controller.go:1511] provisionClaimOperation [azurefile-8317/pvc-mhh49]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0310 21:20:23.723306       1 azure_provision.go:108] failed to get azure provider
I0310 21:20:23.723341       1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-8317/pvc-mhh49" with StorageClass "azurefile-8317-kubernetes.io-azure-file-dynamic-sc-7c2tl": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0310 21:20:23.723372       1 goroutinemap.go:150] Operation for "provision-azurefile-8317/pvc-mhh49[24e8aac0-e317-4903-adbc-f9861c482618]" failed. No retries permitted until 2023-03-10 21:21:27.723358015 +0000 UTC m=+600.787142207 (durationBeforeRetry 1m4s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0310 21:20:23.724176       1 event.go:294] "Event occurred" object="azurefile-8317/pvc-mhh49" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
I0310 21:20:27.526830       1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="98.502µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:54308" resp=200
I0310 21:20:35.630521       1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.StatefulSet total 0 items received
I0310 21:20:36.427704       1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 11 items received
I0310 21:20:37.526768       1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="98.502µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:52042" resp=200
I0310 21:20:38.654195       1 reflector.go:382] k8s.io/client-go/informers/factory.go:134: forcing resync
I0310 21:20:38.712889       1 pv_controller_base.go:556] resyncing PV controller
... skipping 68 lines ...
I0310 21:21:38.716774       1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-8317/pvc-mhh49]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0310 21:21:38.716835       1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-8317/pvc-mhh49]: no volume found
I0310 21:21:38.716850       1 pv_controller.go:1455] provisionClaim[azurefile-8317/pvc-mhh49]: started
I0310 21:21:38.716874       1 pv_controller.go:1764] scheduleOperation[provision-azurefile-8317/pvc-mhh49[24e8aac0-e317-4903-adbc-f9861c482618]]
I0310 21:21:38.716917       1 pv_controller.go:1496] provisionClaimOperation [azurefile-8317/pvc-mhh49] started, class: "azurefile-8317-kubernetes.io-azure-file-dynamic-sc-7c2tl"
I0310 21:21:38.716930       1 pv_controller.go:1511] provisionClaimOperation [azurefile-8317/pvc-mhh49]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0310 21:21:38.726259       1 azure_provision.go:108] failed to get azure provider
I0310 21:21:38.726289       1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-8317/pvc-mhh49" with StorageClass "azurefile-8317-kubernetes.io-azure-file-dynamic-sc-7c2tl": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0310 21:21:38.726351       1 goroutinemap.go:150] Operation for "provision-azurefile-8317/pvc-mhh49[24e8aac0-e317-4903-adbc-f9861c482618]" failed. No retries permitted until 2023-03-10 21:23:40.726335811 +0000 UTC m=+733.790119903 (durationBeforeRetry 2m2s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0310 21:21:38.726539       1 event.go:294] "Event occurred" object="azurefile-8317/pvc-mhh49" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
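[Editor's note] Tracking durationBeforeRetry across these goroutinemap errors gives 500ms, 1s, 2s, 4s, 8s, 16s, 32s, 1m4s, and now 2m2s: a doubling backoff whose plateau at 2m2s suggests a cap of two minutes plus the initial 2s-ish slack. A hedged sketch of that shape; the cap value here is inferred from the log, not read from the source:

package example

import "time"

const (
	initialDelay = 500 * time.Millisecond
	maxDelay     = 2*time.Minute + 2*time.Second // inferred from the 2m2s plateau above
)

// nextDelay doubles the previous delay and clamps it at the cap, the same
// progression as the per-operation backoff driving these retries.
func nextDelay(prev time.Duration) time.Duration {
	if prev == 0 {
		return initialDelay
	}
	d := prev * 2
	if d > maxDelay {
		d = maxDelay
	}
	return d
}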
I0310 21:21:38.747341       1 gc_controller.go:161] GC'ing orphaned
I0310 21:21:38.747370       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0310 21:21:38.928683       1 resource_quota_controller.go:194] Resource quota controller queued all resource quota for full calculation of usage
I0310 21:21:38.974349       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-k9b0el-md-0-sv4v2"
I0310 21:21:39.866287       1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0310 21:21:40.000849       1 deployment_controller.go:576] "Started syncing deployment" deployment="kube-system/coredns" startTime="2023-03-10 21:21:40.000787482 +0000 UTC m=+613.064571674"
... skipping 101 lines ...
I0310 21:23:07.113593       1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-1279/pvc-mlnmg]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0310 21:23:07.113610       1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-1279/pvc-mlnmg]: no volume found
I0310 21:23:07.113616       1 pv_controller.go:1455] provisionClaim[azurefile-1279/pvc-mlnmg]: started
I0310 21:23:07.113626       1 pv_controller.go:1764] scheduleOperation[provision-azurefile-1279/pvc-mlnmg[1eefa96e-6277-42a7-bba1-385ccd1e0194]]
I0310 21:23:07.113632       1 pv_controller.go:1775] operation "provision-azurefile-1279/pvc-mlnmg[1eefa96e-6277-42a7-bba1-385ccd1e0194]" is already running, skipping
I0310 21:23:07.113665       1 pvc_protection_controller.go:331] "Got event on PVC" pvc="azurefile-1279/pvc-mlnmg"
I0310 21:23:07.115153       1 azure_provision.go:108] failed to get azure provider
I0310 21:23:07.115179       1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-1279/pvc-mlnmg" with StorageClass "azurefile-1279-kubernetes.io-azure-file-dynamic-sc-76rh6": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0310 21:23:07.115225       1 goroutinemap.go:150] Operation for "provision-azurefile-1279/pvc-mlnmg[1eefa96e-6277-42a7-bba1-385ccd1e0194]" failed. No retries permitted until 2023-03-10 21:23:07.615213274 +0000 UTC m=+700.678997366 (durationBeforeRetry 500ms). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0310 21:23:07.115312       1 event.go:294] "Event occurred" object="azurefile-1279/pvc-mlnmg" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
I0310 21:23:07.527218       1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="110.202µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:41326" resp=200
I0310 21:23:08.045408       1 namespaced_resources_deleter.go:502] namespace controller - deleteAllContent - namespace: azurefile-8317
I0310 21:23:08.080119       1 resource_quota_monitor.go:359] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azurefile-8317, name pvc-mhh49.174b2b7814f323fa, uid 2174d7a7-16bc-49f2-b4ff-5caaaa6ea46b, event type delete
I0310 21:23:08.112299       1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=secrets, namespace azurefile-8317, name default-token-cv4w9, uid 5d96ec56-6c6c-4a51-a329-87c2dc578ed9, event type delete
I0310 21:23:08.123904       1 pv_controller_base.go:670] storeObjectUpdate updating claim "azurefile-8317/pvc-mhh49" with version 3599
I0310 21:23:08.124361       1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-8317/pvc-mhh49]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
... skipping 3 lines ...
I0310 21:23:08.124643       1 pv_controller.go:1777] operation "provision-azurefile-8317/pvc-mhh49[24e8aac0-e317-4903-adbc-f9861c482618]" postponed due to exponential backoff
I0310 21:23:08.124334       1 pvc_protection_controller.go:331] "Got event on PVC" pvc="azurefile-8317/pvc-mhh49"
I0310 21:23:08.124812       1 pvc_protection_controller.go:149] "Processing PVC" PVC="azurefile-8317/pvc-mhh49"
I0310 21:23:08.124875       1 pvc_protection_controller.go:230] "Looking for Pods using PVC in the Informer's cache" PVC="azurefile-8317/pvc-mhh49"
I0310 21:23:08.124925       1 pvc_protection_controller.go:251] "No Pod using PVC was found in the Informer's cache" PVC="azurefile-8317/pvc-mhh49"
I0310 21:23:08.125007       1 pvc_protection_controller.go:256] "Looking for Pods using PVC with a live list" PVC="azurefile-8317/pvc-mhh49"
E0310 21:23:08.128618       1 tokens_controller.go:262] error synchronizing serviceaccount azurefile-8317/default: secrets "default-token-wdvls" is forbidden: unable to create new content in namespace azurefile-8317 because it is being terminated
I0310 21:23:08.140654       1 pvc_protection_controller.go:269] "PVC is unused" PVC="azurefile-8317/pvc-mhh49"
I0310 21:23:08.147638       1 pv_controller_base.go:286] claim "azurefile-8317/pvc-mhh49" deleted
I0310 21:23:08.147679       1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=persistentvolumeclaims, namespace azurefile-8317, name pvc-mhh49, uid 24e8aac0-e317-4903-adbc-f9861c482618, event type delete
I0310 21:23:08.147885       1 pvc_protection_controller.go:207] "Removed protection finalizer from PVC" PVC="azurefile-8317/pvc-mhh49"
I0310 21:23:08.147909       1 pvc_protection_controller.go:152] "Finished processing PVC" PVC="azurefile-8317/pvc-mhh49" duration="23.037003ms"
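[Editor's note] The pvc_protection_controller sequence above (Processing PVC -> PVC is unused -> Removed protection finalizer) is the kubernetes.io/pvc-protection finalizer being dropped once no pod uses the claim, which is what lets the deletion actually complete. An illustrative helper for the final step, not the controller's own code:

package example

// removeFinalizer returns finalizers with the named entry removed.
func removeFinalizer(finalizers []string, name string) []string {
	out := make([]string, 0, len(finalizers))
	for _, f := range finalizers {
		if f != name {
			out = append(out, f)
		}
	}
	return out
}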
I0310 21:23:08.164275       1 tokens_controller.go:252] syncServiceAccount(azurefile-8317/default), service account deleted, removing tokens
... skipping 11 lines ...
I0310 21:23:08.720903       1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-1279/pvc-mlnmg]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0310 21:23:08.720928       1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-1279/pvc-mlnmg]: no volume found
I0310 21:23:08.720941       1 pv_controller.go:1455] provisionClaim[azurefile-1279/pvc-mlnmg]: started
I0310 21:23:08.720953       1 pv_controller.go:1764] scheduleOperation[provision-azurefile-1279/pvc-mlnmg[1eefa96e-6277-42a7-bba1-385ccd1e0194]]
I0310 21:23:08.720974       1 pv_controller.go:1496] provisionClaimOperation [azurefile-1279/pvc-mlnmg] started, class: "azurefile-1279-kubernetes.io-azure-file-dynamic-sc-76rh6"
I0310 21:23:08.720988       1 pv_controller.go:1511] provisionClaimOperation [azurefile-1279/pvc-mlnmg]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0310 21:23:08.725236       1 azure_provision.go:108] failed to get azure provider
I0310 21:23:08.725267       1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-1279/pvc-mlnmg" with StorageClass "azurefile-1279-kubernetes.io-azure-file-dynamic-sc-76rh6": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0310 21:23:08.725300       1 goroutinemap.go:150] Operation for "provision-azurefile-1279/pvc-mlnmg[1eefa96e-6277-42a7-bba1-385ccd1e0194]" failed. No retries permitted until 2023-03-10 21:23:09.725282247 +0000 UTC m=+702.789066339 (durationBeforeRetry 1s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0310 21:23:08.725333       1 event.go:294] "Event occurred" object="azurefile-1279/pvc-mlnmg" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
I0310 21:23:09.372159       1 namespaced_resources_deleter.go:502] namespace controller - deleteAllContent - namespace: azurefile-61
I0310 21:23:09.460980       1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=configmaps, namespace azurefile-61, name kube-root-ca.crt, uid 53c77f3b-08f4-4775-b57a-ae5bc7133217, event type delete
I0310 21:23:09.462722       1 publisher.go:186] Finished syncing namespace "azurefile-61" (1.760439ms)
I0310 21:23:09.514097       1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=secrets, namespace azurefile-61, name default-token-cx7n5, uid 44a7ad4d-88a7-4722-8508-a7c78b44940a, event type delete
E0310 21:23:09.531698       1 tokens_controller.go:262] error synchronizing serviceaccount azurefile-61/default: secrets "default-token-rp8wd" is forbidden: unable to create new content in namespace azurefile-61 because it is being terminated
I0310 21:23:09.546148       1 tokens_controller.go:252] syncServiceAccount(azurefile-61/default), service account deleted, removing tokens
I0310 21:23:09.546693       1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azurefile-61, name default, uid d5e896ca-0661-46d7-a751-d7bc00115980, event type delete
I0310 21:23:09.546715       1 serviceaccounts_controller.go:188] Finished syncing namespace "azurefile-61" (2.7µs)
I0310 21:23:09.563505       1 serviceaccounts_controller.go:188] Finished syncing namespace "azurefile-61" (2.3µs)
I0310 21:23:09.564105       1 namespaced_resources_deleter.go:556] namespace controller - deleteAllContent - namespace: azurefile-61, estimate: 0, errors: <nil>
I0310 21:23:09.580624       1 namespace_controller.go:180] Finished syncing namespace "azurefile-61" (212.157764ms)
... skipping 29 lines ...
I0310 21:23:23.722029       1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-1279/pvc-mlnmg]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0310 21:23:23.722060       1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-1279/pvc-mlnmg]: no volume found
I0310 21:23:23.722066       1 pv_controller.go:1455] provisionClaim[azurefile-1279/pvc-mlnmg]: started
I0310 21:23:23.722077       1 pv_controller.go:1764] scheduleOperation[provision-azurefile-1279/pvc-mlnmg[1eefa96e-6277-42a7-bba1-385ccd1e0194]]
I0310 21:23:23.722093       1 pv_controller.go:1496] provisionClaimOperation [azurefile-1279/pvc-mlnmg] started, class: "azurefile-1279-kubernetes.io-azure-file-dynamic-sc-76rh6"
I0310 21:23:23.722105       1 pv_controller.go:1511] provisionClaimOperation [azurefile-1279/pvc-mlnmg]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0310 21:23:23.730024       1 azure_provision.go:108] failed to get azure provider
I0310 21:23:23.730054       1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-1279/pvc-mlnmg" with StorageClass "azurefile-1279-kubernetes.io-azure-file-dynamic-sc-76rh6": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0310 21:23:23.730080       1 goroutinemap.go:150] Operation for "provision-azurefile-1279/pvc-mlnmg[1eefa96e-6277-42a7-bba1-385ccd1e0194]" failed. No retries permitted until 2023-03-10 21:23:25.730067501 +0000 UTC m=+718.793851593 (durationBeforeRetry 2s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0310 21:23:23.730183       1 event.go:294] "Event occurred" object="azurefile-1279/pvc-mlnmg" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
I0310 21:23:27.526770       1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="97.602µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:47152" resp=200
I0310 21:23:37.526926       1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="84.702µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:57816" resp=200
I0310 21:23:38.564580       1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0310 21:23:38.660592       1 reflector.go:382] k8s.io/client-go/informers/factory.go:134: forcing resync
I0310 21:23:38.722396       1 pv_controller_base.go:556] resyncing PV controller
I0310 21:23:38.722620       1 pv_controller_base.go:670] storeObjectUpdate updating claim "azurefile-1279/pvc-mlnmg" with version 3590
I0310 21:23:38.722645       1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-1279/pvc-mlnmg]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0310 21:23:38.722670       1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-1279/pvc-mlnmg]: no volume found
I0310 21:23:38.722729       1 pv_controller.go:1455] provisionClaim[azurefile-1279/pvc-mlnmg]: started
I0310 21:23:38.722748       1 pv_controller.go:1764] scheduleOperation[provision-azurefile-1279/pvc-mlnmg[1eefa96e-6277-42a7-bba1-385ccd1e0194]]
I0310 21:23:38.722823       1 pv_controller.go:1496] provisionClaimOperation [azurefile-1279/pvc-mlnmg] started, class: "azurefile-1279-kubernetes.io-azure-file-dynamic-sc-76rh6"
I0310 21:23:38.722838       1 pv_controller.go:1511] provisionClaimOperation [azurefile-1279/pvc-mlnmg]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0310 21:23:38.725827       1 azure_provision.go:108] failed to get azure provider
I0310 21:23:38.725851       1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-1279/pvc-mlnmg" with StorageClass "azurefile-1279-kubernetes.io-azure-file-dynamic-sc-76rh6": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0310 21:23:38.725883       1 goroutinemap.go:150] Operation for "provision-azurefile-1279/pvc-mlnmg[1eefa96e-6277-42a7-bba1-385ccd1e0194]" failed. No retries permitted until 2023-03-10 21:23:42.725871185 +0000 UTC m=+735.789655277 (durationBeforeRetry 4s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0310 21:23:38.726173       1 event.go:294] "Event occurred" object="azurefile-1279/pvc-mlnmg" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
I0310 21:23:38.751997       1 gc_controller.go:161] GC'ing orphaned
I0310 21:23:38.752031       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0310 21:23:39.952871       1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0310 21:23:47.529800       1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="112.803µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:38522" resp=200
I0310 21:23:53.661674       1 reflector.go:382] k8s.io/client-go/informers/factory.go:134: forcing resync
I0310 21:23:53.722641       1 pv_controller_base.go:556] resyncing PV controller
I0310 21:23:53.722720       1 pv_controller_base.go:670] storeObjectUpdate updating claim "azurefile-1279/pvc-mlnmg" with version 3590
I0310 21:23:53.722737       1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-1279/pvc-mlnmg]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0310 21:23:53.722760       1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-1279/pvc-mlnmg]: no volume found
I0310 21:23:53.722765       1 pv_controller.go:1455] provisionClaim[azurefile-1279/pvc-mlnmg]: started
I0310 21:23:53.722776       1 pv_controller.go:1764] scheduleOperation[provision-azurefile-1279/pvc-mlnmg[1eefa96e-6277-42a7-bba1-385ccd1e0194]]
I0310 21:23:53.722806       1 pv_controller.go:1496] provisionClaimOperation [azurefile-1279/pvc-mlnmg] started, class: "azurefile-1279-kubernetes.io-azure-file-dynamic-sc-76rh6"
I0310 21:23:53.722813       1 pv_controller.go:1511] provisionClaimOperation [azurefile-1279/pvc-mlnmg]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0310 21:23:53.740633       1 azure_provision.go:108] failed to get azure provider
I0310 21:23:53.740662       1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-1279/pvc-mlnmg" with StorageClass "azurefile-1279-kubernetes.io-azure-file-dynamic-sc-76rh6": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0310 21:23:53.740712       1 goroutinemap.go:150] Operation for "provision-azurefile-1279/pvc-mlnmg[1eefa96e-6277-42a7-bba1-385ccd1e0194]" failed. No retries permitted until 2023-03-10 21:24:01.740698292 +0000 UTC m=+754.804482384 (durationBeforeRetry 8s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0310 21:23:53.740814       1 event.go:294] "Event occurred" object="azurefile-1279/pvc-mlnmg" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
I0310 21:23:57.527539       1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="97.303µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:47180" resp=200
I0310 21:23:58.752113       1 gc_controller.go:161] GC'ing orphaned
I0310 21:23:58.752146       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0310 21:24:01.717017       1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Deployment total 0 items received
I0310 21:24:02.198369       1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0310 21:24:07.526435       1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="95.603µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:37250" resp=200
... skipping 3 lines ...
I0310 21:24:08.723260       1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-1279/pvc-mlnmg]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0310 21:24:08.723313       1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-1279/pvc-mlnmg]: no volume found
I0310 21:24:08.723327       1 pv_controller.go:1455] provisionClaim[azurefile-1279/pvc-mlnmg]: started
I0310 21:24:08.723339       1 pv_controller.go:1764] scheduleOperation[provision-azurefile-1279/pvc-mlnmg[1eefa96e-6277-42a7-bba1-385ccd1e0194]]
I0310 21:24:08.723381       1 pv_controller.go:1496] provisionClaimOperation [azurefile-1279/pvc-mlnmg] started, class: "azurefile-1279-kubernetes.io-azure-file-dynamic-sc-76rh6"
I0310 21:24:08.723392       1 pv_controller.go:1511] provisionClaimOperation [azurefile-1279/pvc-mlnmg]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0310 21:24:08.732205       1 azure_provision.go:108] failed to get azure provider
I0310 21:24:08.732239       1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-1279/pvc-mlnmg" with StorageClass "azurefile-1279-kubernetes.io-azure-file-dynamic-sc-76rh6": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0310 21:24:08.732271       1 goroutinemap.go:150] Operation for "provision-azurefile-1279/pvc-mlnmg[1eefa96e-6277-42a7-bba1-385ccd1e0194]" failed. No retries permitted until 2023-03-10 21:24:24.732257839 +0000 UTC m=+777.796041931 (durationBeforeRetry 16s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0310 21:24:08.732554       1 event.go:294] "Event occurred" object="azurefile-1279/pvc-mlnmg" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
I0310 21:24:09.971973       1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0310 21:24:16.639857       1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ControllerRevision total 0 items received
I0310 21:24:17.527246       1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="84.902µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:43594" resp=200
I0310 21:24:18.752793       1 gc_controller.go:161] GC'ing orphaned
I0310 21:24:18.752826       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0310 21:24:23.663508       1 reflector.go:382] k8s.io/client-go/informers/factory.go:134: forcing resync
... skipping 13 lines ...
I0310 21:24:38.724692       1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-1279/pvc-mlnmg]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0310 21:24:38.724747       1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-1279/pvc-mlnmg]: no volume found
I0310 21:24:38.724754       1 pv_controller.go:1455] provisionClaim[azurefile-1279/pvc-mlnmg]: started
I0310 21:24:38.724765       1 pv_controller.go:1764] scheduleOperation[provision-azurefile-1279/pvc-mlnmg[1eefa96e-6277-42a7-bba1-385ccd1e0194]]
I0310 21:24:38.724790       1 pv_controller.go:1496] provisionClaimOperation [azurefile-1279/pvc-mlnmg] started, class: "azurefile-1279-kubernetes.io-azure-file-dynamic-sc-76rh6"
I0310 21:24:38.724801       1 pv_controller.go:1511] provisionClaimOperation [azurefile-1279/pvc-mlnmg]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0310 21:24:38.727424       1 azure_provision.go:108] failed to get azure provider
I0310 21:24:38.727450       1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-1279/pvc-mlnmg" with StorageClass "azurefile-1279-kubernetes.io-azure-file-dynamic-sc-76rh6": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0310 21:24:38.727483       1 goroutinemap.go:150] Operation for "provision-azurefile-1279/pvc-mlnmg[1eefa96e-6277-42a7-bba1-385ccd1e0194]" failed. No retries permitted until 2023-03-10 21:25:10.727471056 +0000 UTC m=+823.791255248 (durationBeforeRetry 32s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0310 21:24:38.727654       1 event.go:294] "Event occurred" object="azurefile-1279/pvc-mlnmg" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
I0310 21:24:38.752984       1 gc_controller.go:161] GC'ing orphaned
I0310 21:24:38.753027       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0310 21:24:39.989306       1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0310 21:24:42.798372       1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0310 21:24:45.825464       1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0310 21:24:47.527073       1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="87.902µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:34208" resp=200
... skipping 35 lines ...
I0310 21:25:23.726640       1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-1279/pvc-mlnmg]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0310 21:25:23.726692       1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-1279/pvc-mlnmg]: no volume found
I0310 21:25:23.726701       1 pv_controller.go:1455] provisionClaim[azurefile-1279/pvc-mlnmg]: started
I0310 21:25:23.726725       1 pv_controller.go:1764] scheduleOperation[provision-azurefile-1279/pvc-mlnmg[1eefa96e-6277-42a7-bba1-385ccd1e0194]]
I0310 21:25:23.726745       1 pv_controller.go:1496] provisionClaimOperation [azurefile-1279/pvc-mlnmg] started, class: "azurefile-1279-kubernetes.io-azure-file-dynamic-sc-76rh6"
I0310 21:25:23.726758       1 pv_controller.go:1511] provisionClaimOperation [azurefile-1279/pvc-mlnmg]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0310 21:25:23.735154       1 azure_provision.go:108] failed to get azure provider
I0310 21:25:23.735181       1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-1279/pvc-mlnmg" with StorageClass "azurefile-1279-kubernetes.io-azure-file-dynamic-sc-76rh6": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0310 21:25:23.735246       1 goroutinemap.go:150] Operation for "provision-azurefile-1279/pvc-mlnmg[1eefa96e-6277-42a7-bba1-385ccd1e0194]" failed. No retries permitted until 2023-03-10 21:26:27.73519633 +0000 UTC m=+900.798980522 (durationBeforeRetry 1m4s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0310 21:25:23.735327       1 event.go:294] "Event occurred" object="azurefile-1279/pvc-mlnmg" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
I0310 21:25:24.867224       1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.LimitRange total 0 items received
I0310 21:25:27.505369       1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0310 21:25:27.527315       1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="102.402µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:54792" resp=200
I0310 21:25:27.617129       1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0310 21:25:30.192952       1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0310 21:25:32.620326       1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ReplicationController total 0 items received
... skipping 75 lines ...
I0310 21:26:38.730721       1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-1279/pvc-mlnmg]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0310 21:26:38.730753       1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-1279/pvc-mlnmg]: no volume found
I0310 21:26:38.730762       1 pv_controller.go:1455] provisionClaim[azurefile-1279/pvc-mlnmg]: started
I0310 21:26:38.730790       1 pv_controller.go:1764] scheduleOperation[provision-azurefile-1279/pvc-mlnmg[1eefa96e-6277-42a7-bba1-385ccd1e0194]]
I0310 21:26:38.730813       1 pv_controller.go:1496] provisionClaimOperation [azurefile-1279/pvc-mlnmg] started, class: "azurefile-1279-kubernetes.io-azure-file-dynamic-sc-76rh6"
I0310 21:26:38.730821       1 pv_controller.go:1511] provisionClaimOperation [azurefile-1279/pvc-mlnmg]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0310 21:26:38.735855       1 azure_provision.go:108] failed to get azure provider
I0310 21:26:38.735884       1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-1279/pvc-mlnmg" with StorageClass "azurefile-1279-kubernetes.io-azure-file-dynamic-sc-76rh6": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0310 21:26:38.735921       1 goroutinemap.go:150] Operation for "provision-azurefile-1279/pvc-mlnmg[1eefa96e-6277-42a7-bba1-385ccd1e0194]" failed. No retries permitted until 2023-03-10 21:28:40.735908086 +0000 UTC m=+1033.799692278 (durationBeforeRetry 2m2s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0310 21:26:38.736132       1 event.go:294] "Event occurred" object="azurefile-1279/pvc-mlnmg" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
I0310 21:26:38.755685       1 gc_controller.go:161] GC'ing orphaned
I0310 21:26:38.755776       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0310 21:26:38.929278       1 resource_quota_controller.go:194] Resource quota controller queued all resource quota for full calculation of usage
I0310 21:26:40.074853       1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0310 21:26:44.632760       1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.PodDisruptionBudget total 0 items received
I0310 21:26:44.960476       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-k9b0el-md-0-sv4v2"
... skipping 111 lines ...
I0310 21:28:10.471951       1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-8754/pvc-ttndg]: no volume found
I0310 21:28:10.471958       1 pv_controller.go:1455] provisionClaim[azurefile-8754/pvc-ttndg]: started
I0310 21:28:10.471968       1 pv_controller.go:1764] scheduleOperation[provision-azurefile-8754/pvc-ttndg[0190f92e-daf0-402b-afba-31a1642b582c]]
I0310 21:28:10.471973       1 pv_controller.go:1775] operation "provision-azurefile-8754/pvc-ttndg[0190f92e-daf0-402b-afba-31a1642b582c]" is already running, skipping
I0310 21:28:10.472007       1 pvc_protection_controller.go:331] "Got event on PVC" pvc="azurefile-8754/pvc-ttndg"
I0310 21:28:10.472444       1 pv_controller_base.go:670] storeObjectUpdate updating claim "azurefile-8754/pvc-ttndg" with version 4660
I0310 21:28:10.474232       1 azure_provision.go:108] failed to get azure provider
I0310 21:28:10.474316       1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-8754/pvc-ttndg" with StorageClass "azurefile-8754-kubernetes.io-azure-file-dynamic-sc-5nb26": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0310 21:28:10.474387       1 goroutinemap.go:150] Operation for "provision-azurefile-8754/pvc-ttndg[0190f92e-daf0-402b-afba-31a1642b582c]" failed. No retries permitted until 2023-03-10 21:28:10.974372301 +0000 UTC m=+1004.038156493 (durationBeforeRetry 500ms). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0310 21:28:10.474614       1 event.go:294] "Event occurred" object="azurefile-8754/pvc-ttndg" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
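Note the fresh 500ms durationBeforeRetry: backoff is tracked per operation key, so the new claim azurefile-8754/pvc-ttndg restarts the sequence that azurefile-1279/pvc-mlnmg had already driven up to 2m2s. The underlying failure is unchanged: the in-tree kubernetes.io/azure-file plugin asks the controller-manager for an Azure cloud provider and gets nil, consistent with this job installing the external cloud-provider-azure chart earlier in the build. A hedged Go reconstruction of that guard, with local stand-in types (AzureCloud, the function name) as assumptions; only the error text is taken from this log:

package main

import "fmt"

// AzureCloud stands in for the in-tree Azure cloud provider object.
type AzureCloud struct{}

// getAzureCloudProvider sketches the nil check behind the paired lines
// "failed to get azure provider" (azure_provision.go:108) and
// "failed to get Azure Cloud Provider. GetCloudProvider returned <nil>
// instead" (pv_controller.go:1577). With no in-tree Azure provider
// initialized, the assertion fails and dynamic provisioning cannot start.
func getAzureCloudProvider(cloudProvider interface{}) (*AzureCloud, error) {
	azureCloud, ok := cloudProvider.(*AzureCloud)
	if !ok || azureCloud == nil {
		return nil, fmt.Errorf("failed to get Azure Cloud Provider. GetCloudProvider returned %v instead", cloudProvider)
	}
	return azureCloud, nil
}

func main() {
	// The controller-manager in this log has no in-tree cloud provider, i.e. nil.
	if _, err := getAzureCloudProvider(nil); err != nil {
		fmt.Println(err) // the message repeated throughout this log
	}
}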
I0310 21:28:13.916651       1 namespaced_resources_deleter.go:502] namespace controller - deleteAllContent - namespace: azurefile-1279
I0310 21:28:13.934336       1 tokens_controller.go:252] syncServiceAccount(azurefile-1279/default), service account deleted, removing tokens
I0310 21:28:13.934541       1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azurefile-1279, name default, uid 8e1db5f6-86a9-409d-9cd3-4b1e9167a281, event type delete
I0310 21:28:13.934575       1 serviceaccounts_controller.go:188] Finished syncing namespace "azurefile-1279" (1.6µs)
I0310 21:28:13.941858       1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=secrets, namespace azurefile-1279, name default-token-m24jr, uid 92ef2d6e-0497-40b9-a9a4-0d2d9828506d, event type delete
I0310 21:28:13.951647       1 resource_quota_monitor.go:359] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azurefile-1279, name pvc-mlnmg.174b2bbf62652ab9, uid a0a8cc98-4488-47c5-85a5-883eb11bca57, event type delete
... skipping 35 lines ...
I0310 21:28:23.735032       1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-8754/pvc-ttndg]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0310 21:28:23.735065       1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-8754/pvc-ttndg]: no volume found
I0310 21:28:23.735071       1 pv_controller.go:1455] provisionClaim[azurefile-8754/pvc-ttndg]: started
I0310 21:28:23.735118       1 pv_controller.go:1764] scheduleOperation[provision-azurefile-8754/pvc-ttndg[0190f92e-daf0-402b-afba-31a1642b582c]]
I0310 21:28:23.735147       1 pv_controller.go:1496] provisionClaimOperation [azurefile-8754/pvc-ttndg] started, class: "azurefile-8754-kubernetes.io-azure-file-dynamic-sc-5nb26"
I0310 21:28:23.735157       1 pv_controller.go:1511] provisionClaimOperation [azurefile-8754/pvc-ttndg]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0310 21:28:23.743208       1 azure_provision.go:108] failed to get azure provider
I0310 21:28:23.743239       1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-8754/pvc-ttndg" with StorageClass "azurefile-8754-kubernetes.io-azure-file-dynamic-sc-5nb26": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0310 21:28:23.743278       1 goroutinemap.go:150] Operation for "provision-azurefile-8754/pvc-ttndg[0190f92e-daf0-402b-afba-31a1642b582c]" failed. No retries permitted until 2023-03-10 21:28:24.743263672 +0000 UTC m=+1017.807047764 (durationBeforeRetry 1s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0310 21:28:23.743516       1 event.go:294] "Event occurred" object="azurefile-8754/pvc-ttndg" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
I0310 21:28:24.267698       1 namespace_controller.go:185] Namespace has been deleted azurefile-1279
I0310 21:28:24.267849       1 namespace_controller.go:180] Finished syncing namespace "azurefile-1279" (185.704µs)
I0310 21:28:25.432157       1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0310 21:28:27.526875       1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="89.802µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:44952" resp=200
I0310 21:28:36.531548       1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0310 21:28:37.528295       1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="137.903µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:40050" resp=200
... skipping 3 lines ...
I0310 21:28:38.735720       1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-8754/pvc-ttndg]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0310 21:28:38.735752       1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-8754/pvc-ttndg]: no volume found
I0310 21:28:38.735765       1 pv_controller.go:1455] provisionClaim[azurefile-8754/pvc-ttndg]: started
I0310 21:28:38.735776       1 pv_controller.go:1764] scheduleOperation[provision-azurefile-8754/pvc-ttndg[0190f92e-daf0-402b-afba-31a1642b582c]]
I0310 21:28:38.735800       1 pv_controller.go:1496] provisionClaimOperation [azurefile-8754/pvc-ttndg] started, class: "azurefile-8754-kubernetes.io-azure-file-dynamic-sc-5nb26"
I0310 21:28:38.735812       1 pv_controller.go:1511] provisionClaimOperation [azurefile-8754/pvc-ttndg]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0310 21:28:38.739643       1 azure_provision.go:108] failed to get azure provider
I0310 21:28:38.739671       1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-8754/pvc-ttndg" with StorageClass "azurefile-8754-kubernetes.io-azure-file-dynamic-sc-5nb26": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0310 21:28:38.739707       1 goroutinemap.go:150] Operation for "provision-azurefile-8754/pvc-ttndg[0190f92e-daf0-402b-afba-31a1642b582c]" failed. No retries permitted until 2023-03-10 21:28:40.739693816 +0000 UTC m=+1033.803477908 (durationBeforeRetry 2s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0310 21:28:38.739987       1 event.go:294] "Event occurred" object="azurefile-8754/pvc-ttndg" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
I0310 21:28:38.758787       1 gc_controller.go:161] GC'ing orphaned
I0310 21:28:38.758820       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0310 21:28:40.157841       1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0310 21:28:47.527196       1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="87.102µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:45070" resp=200
I0310 21:28:53.673556       1 reflector.go:382] k8s.io/client-go/informers/factory.go:134: forcing resync
I0310 21:28:53.735586       1 pv_controller_base.go:556] resyncing PV controller
I0310 21:28:53.735649       1 pv_controller_base.go:670] storeObjectUpdate updating claim "azurefile-8754/pvc-ttndg" with version 4660
I0310 21:28:53.735801       1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-8754/pvc-ttndg]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0310 21:28:53.735834       1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-8754/pvc-ttndg]: no volume found
I0310 21:28:53.735896       1 pv_controller.go:1455] provisionClaim[azurefile-8754/pvc-ttndg]: started
I0310 21:28:53.735929       1 pv_controller.go:1764] scheduleOperation[provision-azurefile-8754/pvc-ttndg[0190f92e-daf0-402b-afba-31a1642b582c]]
I0310 21:28:53.736037       1 pv_controller.go:1496] provisionClaimOperation [azurefile-8754/pvc-ttndg] started, class: "azurefile-8754-kubernetes.io-azure-file-dynamic-sc-5nb26"
I0310 21:28:53.736145       1 pv_controller.go:1511] provisionClaimOperation [azurefile-8754/pvc-ttndg]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0310 21:28:53.749561       1 azure_provision.go:108] failed to get azure provider
I0310 21:28:53.749657       1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-8754/pvc-ttndg" with StorageClass "azurefile-8754-kubernetes.io-azure-file-dynamic-sc-5nb26": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0310 21:28:53.750022       1 event.go:294] "Event occurred" object="azurefile-8754/pvc-ttndg" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
E0310 21:28:53.750103       1 goroutinemap.go:150] Operation for "provision-azurefile-8754/pvc-ttndg[0190f92e-daf0-402b-afba-31a1642b582c]" failed. No retries permitted until 2023-03-10 21:28:57.750091047 +0000 UTC m=+1050.813875239 (durationBeforeRetry 4s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0310 21:28:57.526541       1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="116.402µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:60910" resp=200
I0310 21:28:58.759020       1 gc_controller.go:161] GC'ing orphaned
I0310 21:28:58.759221       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0310 21:29:02.576865       1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0310 21:29:05.710946       1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Secret total 15 items received
I0310 21:29:07.527865       1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="87.502µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:43726" resp=200
... skipping 3 lines ...
I0310 21:29:08.736227       1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-8754/pvc-ttndg]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0310 21:29:08.736300       1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-8754/pvc-ttndg]: no volume found
I0310 21:29:08.736321       1 pv_controller.go:1455] provisionClaim[azurefile-8754/pvc-ttndg]: started
I0310 21:29:08.736345       1 pv_controller.go:1764] scheduleOperation[provision-azurefile-8754/pvc-ttndg[0190f92e-daf0-402b-afba-31a1642b582c]]
I0310 21:29:08.736393       1 pv_controller.go:1496] provisionClaimOperation [azurefile-8754/pvc-ttndg] started, class: "azurefile-8754-kubernetes.io-azure-file-dynamic-sc-5nb26"
I0310 21:29:08.736426       1 pv_controller.go:1511] provisionClaimOperation [azurefile-8754/pvc-ttndg]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0310 21:29:08.739918       1 azure_provision.go:108] failed to get azure provider
I0310 21:29:08.739946       1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-8754/pvc-ttndg" with StorageClass "azurefile-8754-kubernetes.io-azure-file-dynamic-sc-5nb26": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0310 21:29:08.739981       1 goroutinemap.go:150] Operation for "provision-azurefile-8754/pvc-ttndg[0190f92e-daf0-402b-afba-31a1642b582c]" failed. No retries permitted until 2023-03-10 21:29:16.739968224 +0000 UTC m=+1069.803752416 (durationBeforeRetry 8s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0310 21:29:08.740215       1 event.go:294] "Event occurred" object="azurefile-8754/pvc-ttndg" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
I0310 21:29:10.180372       1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0310 21:29:13.571728       1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0310 21:29:14.244979       1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0310 21:29:14.827821       1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0310 21:29:17.527736       1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="83.602µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:52430" resp=200
I0310 21:29:18.760302       1 gc_controller.go:161] GC'ing orphaned
... skipping 5 lines ...
I0310 21:29:23.737390       1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-8754/pvc-ttndg]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0310 21:29:23.737481       1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-8754/pvc-ttndg]: no volume found
I0310 21:29:23.737493       1 pv_controller.go:1455] provisionClaim[azurefile-8754/pvc-ttndg]: started
I0310 21:29:23.737504       1 pv_controller.go:1764] scheduleOperation[provision-azurefile-8754/pvc-ttndg[0190f92e-daf0-402b-afba-31a1642b582c]]
I0310 21:29:23.737527       1 pv_controller.go:1496] provisionClaimOperation [azurefile-8754/pvc-ttndg] started, class: "azurefile-8754-kubernetes.io-azure-file-dynamic-sc-5nb26"
I0310 21:29:23.737551       1 pv_controller.go:1511] provisionClaimOperation [azurefile-8754/pvc-ttndg]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0310 21:29:23.740268       1 azure_provision.go:108] failed to get azure provider
I0310 21:29:23.740295       1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-8754/pvc-ttndg" with StorageClass "azurefile-8754-kubernetes.io-azure-file-dynamic-sc-5nb26": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0310 21:29:23.740344       1 goroutinemap.go:150] Operation for "provision-azurefile-8754/pvc-ttndg[0190f92e-daf0-402b-afba-31a1642b582c]" failed. No retries permitted until 2023-03-10 21:29:39.740328739 +0000 UTC m=+1092.804112931 (durationBeforeRetry 16s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0310 21:29:23.740595       1 event.go:294] "Event occurred" object="azurefile-8754/pvc-ttndg" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
I0310 21:29:26.816179       1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0310 21:29:27.068330       1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1beta2.FlowSchema total 0 items received
I0310 21:29:27.530420       1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="100.503µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:36694" resp=200
I0310 21:29:32.632115       1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.StorageClass total 10 items received
I0310 21:29:37.208802       1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0310 21:29:37.527209       1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="100.802µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:57398" resp=200
... skipping 22 lines ...
I0310 21:29:53.738328       1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-8754/pvc-ttndg]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0310 21:29:53.738358       1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-8754/pvc-ttndg]: no volume found
I0310 21:29:53.738369       1 pv_controller.go:1455] provisionClaim[azurefile-8754/pvc-ttndg]: started
I0310 21:29:53.738380       1 pv_controller.go:1764] scheduleOperation[provision-azurefile-8754/pvc-ttndg[0190f92e-daf0-402b-afba-31a1642b582c]]
I0310 21:29:53.738424       1 pv_controller.go:1496] provisionClaimOperation [azurefile-8754/pvc-ttndg] started, class: "azurefile-8754-kubernetes.io-azure-file-dynamic-sc-5nb26"
I0310 21:29:53.738461       1 pv_controller.go:1511] provisionClaimOperation [azurefile-8754/pvc-ttndg]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0310 21:29:53.751080       1 azure_provision.go:108] failed to get azure provider
I0310 21:29:53.751108       1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-8754/pvc-ttndg" with StorageClass "azurefile-8754-kubernetes.io-azure-file-dynamic-sc-5nb26": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0310 21:29:53.751276       1 goroutinemap.go:150] Operation for "provision-azurefile-8754/pvc-ttndg[0190f92e-daf0-402b-afba-31a1642b582c]" failed. No retries permitted until 2023-03-10 21:30:25.751132043 +0000 UTC m=+1138.814916235 (durationBeforeRetry 32s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0310 21:29:53.751389       1 event.go:294] "Event occurred" object="azurefile-8754/pvc-ttndg" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
I0310 21:29:56.000252       1 secrets.go:73] Expired bootstrap token in kube-system/bootstrap-token-wdcijb Secret: 2023-03-10T21:29:56Z
I0310 21:29:56.000285       1 tokencleaner.go:194] Deleting expired secret kube-system/bootstrap-token-wdcijb
I0310 21:29:56.016240       1 tokencleaner.go:166] Finished syncing secret "kube-system/bootstrap-token-wdcijb" (16.003675ms)
I0310 21:29:56.016367       1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=secrets, namespace kube-system, name bootstrap-token-wdcijb, uid b8ffb7d6-c1ac-4e51-adb1-954a1c332fbc, event type delete
I0310 21:29:57.527433       1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="86.102µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:53444" resp=200
I0310 21:29:57.808876       1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
... skipping 36 lines ...
I0310 21:30:38.741236       1 pv_controller.go:1455] provisionClaim[azurefile-8754/pvc-ttndg]: started
I0310 21:30:38.741247       1 pv_controller.go:1764] scheduleOperation[provision-azurefile-8754/pvc-ttndg[0190f92e-daf0-402b-afba-31a1642b582c]]
I0310 21:30:38.741270       1 pv_controller.go:1496] provisionClaimOperation [azurefile-8754/pvc-ttndg] started, class: "azurefile-8754-kubernetes.io-azure-file-dynamic-sc-5nb26"
I0310 21:30:38.741282       1 pv_controller.go:1511] provisionClaimOperation [azurefile-8754/pvc-ttndg]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0310 21:30:38.762620       1 gc_controller.go:161] GC'ing orphaned
I0310 21:30:38.762652       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0310 21:30:38.762818       1 azure_provision.go:108] failed to get azure provider
I0310 21:30:38.762843       1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-8754/pvc-ttndg" with StorageClass "azurefile-8754-kubernetes.io-azure-file-dynamic-sc-5nb26": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0310 21:30:38.762914       1 goroutinemap.go:150] Operation for "provision-azurefile-8754/pvc-ttndg[0190f92e-daf0-402b-afba-31a1642b582c]" failed. No retries permitted until 2023-03-10 21:31:42.762888598 +0000 UTC m=+1215.826672690 (durationBeforeRetry 1m4s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0310 21:30:38.763260       1 event.go:294] "Event occurred" object="azurefile-8754/pvc-ttndg" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
I0310 21:30:40.251516       1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0310 21:30:47.526702       1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="97.202µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:50594" resp=200
I0310 21:30:53.679663       1 reflector.go:382] k8s.io/client-go/informers/factory.go:134: forcing resync
I0310 21:30:53.742059       1 pv_controller_base.go:556] resyncing PV controller
I0310 21:30:53.742173       1 pv_controller_base.go:670] storeObjectUpdate updating claim "azurefile-8754/pvc-ttndg" with version 4660
I0310 21:30:53.742243       1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-8754/pvc-ttndg]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
... skipping 63 lines ...
I0310 21:31:53.743712       1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-8754/pvc-ttndg]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0310 21:31:53.743751       1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-8754/pvc-ttndg]: no volume found
I0310 21:31:53.743771       1 pv_controller.go:1455] provisionClaim[azurefile-8754/pvc-ttndg]: started
I0310 21:31:53.743805       1 pv_controller.go:1764] scheduleOperation[provision-azurefile-8754/pvc-ttndg[0190f92e-daf0-402b-afba-31a1642b582c]]
I0310 21:31:53.743841       1 pv_controller.go:1496] provisionClaimOperation [azurefile-8754/pvc-ttndg] started, class: "azurefile-8754-kubernetes.io-azure-file-dynamic-sc-5nb26"
I0310 21:31:53.743874       1 pv_controller.go:1511] provisionClaimOperation [azurefile-8754/pvc-ttndg]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0310 21:31:53.747213       1 azure_provision.go:108] failed to get azure provider
I0310 21:31:53.747241       1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-8754/pvc-ttndg" with StorageClass "azurefile-8754-kubernetes.io-azure-file-dynamic-sc-5nb26": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0310 21:31:53.747766       1 event.go:294] "Event occurred" object="azurefile-8754/pvc-ttndg" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
E0310 21:31:53.747771       1 goroutinemap.go:150] Operation for "provision-azurefile-8754/pvc-ttndg[0190f92e-daf0-402b-afba-31a1642b582c]" failed. No retries permitted until 2023-03-10 21:33:55.7477507 +0000 UTC m=+1348.811534892 (durationBeforeRetry 2m2s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
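At this point the backoff has reached its ceiling: doubling 1m4s would give 2m8s, so the delay is clamped to 2m2s (the same value pvc-mlnmg reached at 21:26:38) and would hold there for as long as the operation kept failing.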
I0310 21:31:54.017093       1 node_lifecycle_controller.go:1046] Node capz-k9b0el-md-0-sv4v2 ReadyCondition updated. Updating timestamp.
I0310 21:31:57.527115       1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="97.402µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:53288" resp=200
I0310 21:31:58.765519       1 gc_controller.go:161] GC'ing orphaned
I0310 21:31:58.765551       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0310 21:31:59.227286       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-k9b0el-md-0-ffl2x"
I0310 21:32:04.018726       1 node_lifecycle_controller.go:1046] Node capz-k9b0el-md-0-ffl2x ReadyCondition updated. Updating timestamp.
... skipping 81 lines ...
I0310 21:33:13.941439       1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-3281/pvc-pqz9m]: no volume found
I0310 21:33:13.941924       1 pv_controller.go:1455] provisionClaim[azurefile-3281/pvc-pqz9m]: started
I0310 21:33:13.942203       1 pv_controller.go:1764] scheduleOperation[provision-azurefile-3281/pvc-pqz9m[b554c279-e1ac-4a5a-811c-f9e074ee91f6]]
I0310 21:33:13.942216       1 pv_controller.go:1775] operation "provision-azurefile-3281/pvc-pqz9m[b554c279-e1ac-4a5a-811c-f9e074ee91f6]" is already running, skipping
I0310 21:33:13.942167       1 pvc_protection_controller.go:331] "Got event on PVC" pvc="azurefile-3281/pvc-pqz9m"
I0310 21:33:13.942174       1 pv_controller_base.go:670] storeObjectUpdate updating claim "azurefile-3281/pvc-pqz9m" with version 5723
I0310 21:33:13.943956       1 azure_provision.go:108] failed to get azure provider
I0310 21:33:13.943981       1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-3281/pvc-pqz9m" with StorageClass "azurefile-3281-kubernetes.io-azure-file-dynamic-sc-hrx8x": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0310 21:33:13.944017       1 goroutinemap.go:150] Operation for "provision-azurefile-3281/pvc-pqz9m[b554c279-e1ac-4a5a-811c-f9e074ee91f6]" failed. No retries permitted until 2023-03-10 21:33:14.444004224 +0000 UTC m=+1307.507788416 (durationBeforeRetry 500ms). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0310 21:33:13.944227       1 event.go:294] "Event occurred" object="azurefile-3281/pvc-pqz9m" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
I0310 21:33:17.381056       1 namespaced_resources_deleter.go:502] namespace controller - deleteAllContent - namespace: azurefile-8754
I0310 21:33:17.425916       1 pv_controller_base.go:670] storeObjectUpdate updating claim "azurefile-8754/pvc-ttndg" with version 5732
I0310 21:33:17.426117       1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-8754/pvc-ttndg]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0310 21:33:17.426283       1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-8754/pvc-ttndg]: no volume found
I0310 21:33:17.426425       1 pv_controller.go:1455] provisionClaim[azurefile-8754/pvc-ttndg]: started
I0310 21:33:17.426556       1 pv_controller.go:1764] scheduleOperation[provision-azurefile-8754/pvc-ttndg[0190f92e-daf0-402b-afba-31a1642b582c]]
... skipping 6 lines ...
I0310 21:33:17.429128       1 pvc_protection_controller.go:269] "PVC is unused" PVC="azurefile-8754/pvc-ttndg"
I0310 21:33:17.435895       1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=secrets, namespace azurefile-8754, name default-token-rj5gk, uid 33ac9c9b-b67f-4f36-b00a-6d08dd34404a, event type delete
I0310 21:33:17.438085       1 pvc_protection_controller.go:207] "Removed protection finalizer from PVC" PVC="azurefile-8754/pvc-ttndg"
I0310 21:33:17.438109       1 pvc_protection_controller.go:152] "Finished processing PVC" PVC="azurefile-8754/pvc-ttndg" duration="11.05285ms"
I0310 21:33:17.438758       1 pv_controller_base.go:286] claim "azurefile-8754/pvc-ttndg" deleted
I0310 21:33:17.438793       1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=persistentvolumeclaims, namespace azurefile-8754, name pvc-ttndg, uid 0190f92e-daf0-402b-afba-31a1642b582c, event type delete
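With the claim deleted, the 2m2s retry scheduled for pvc-ttndg (not permitted until 21:33:55) never fires in this log; the next test's claim, azurefile-3281/pvc-pqz9m, had already restarted the cycle at the initial 500ms backoff at 21:33:13.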
E0310 21:33:17.448320       1 tokens_controller.go:262] error synchronizing serviceaccount azurefile-8754/default: secrets "default-token-dmx8s" is forbidden: unable to create new content in namespace azurefile-8754 because it is being terminated
I0310 21:33:17.484059       1 resource_quota_monitor.go:359] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azurefile-8754, name pvc-ttndg.174b2c0604027ea0, uid 1af5fcff-279f-4b4c-9a45-d9531454a0b6, event type delete
I0310 21:33:17.495792       1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=configmaps, namespace azurefile-8754, name kube-root-ca.crt, uid a3a53559-ae51-4d4b-9346-19edff18cc59, event type delete
I0310 21:33:17.501206       1 publisher.go:186] Finished syncing namespace "azurefile-8754" (5.373122ms)
I0310 21:33:17.518688       1 tokens_controller.go:252] syncServiceAccount(azurefile-8754/default), service account deleted, removing tokens
I0310 21:33:17.519062       1 serviceaccounts_controller.go:188] Finished syncing namespace "azurefile-8754" (2.6µs)
I0310 21:33:17.519094       1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azurefile-8754, name default, uid 74ba06cf-5452-42a1-afc7-15c5fe6c9927, event type delete
... skipping 16 lines ...
I0310 21:33:23.747113       1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-3281/pvc-pqz9m]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0310 21:33:23.747281       1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-3281/pvc-pqz9m]: no volume found
I0310 21:33:23.747296       1 pv_controller.go:1455] provisionClaim[azurefile-3281/pvc-pqz9m]: started
I0310 21:33:23.747308       1 pv_controller.go:1764] scheduleOperation[provision-azurefile-3281/pvc-pqz9m[b554c279-e1ac-4a5a-811c-f9e074ee91f6]]
I0310 21:33:23.747404       1 pv_controller.go:1496] provisionClaimOperation [azurefile-3281/pvc-pqz9m] started, class: "azurefile-3281-kubernetes.io-azure-file-dynamic-sc-hrx8x"
I0310 21:33:23.747417       1 pv_controller.go:1511] provisionClaimOperation [azurefile-3281/pvc-pqz9m]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0310 21:33:23.750613       1 azure_provision.go:108] failed to get azure provider
I0310 21:33:23.750640       1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-3281/pvc-pqz9m" with StorageClass "azurefile-3281-kubernetes.io-azure-file-dynamic-sc-hrx8x": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0310 21:33:23.750725       1 goroutinemap.go:150] Operation for "provision-azurefile-3281/pvc-pqz9m[b554c279-e1ac-4a5a-811c-f9e074ee91f6]" failed. No retries permitted until 2023-03-10 21:33:24.75065603 +0000 UTC m=+1317.814440122 (durationBeforeRetry 1s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0310 21:33:23.751074       1 event.go:294] "Event occurred" object="azurefile-3281/pvc-pqz9m" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
I0310 21:33:24.127428       1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.MutatingWebhookConfiguration total 9 items received
I0310 21:33:27.527168       1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="95.302µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:37438" resp=200
I0310 21:33:27.798506       1 namespace_controller.go:185] Namespace has been deleted azurefile-8754
I0310 21:33:27.798532       1 namespace_controller.go:180] Finished syncing namespace "azurefile-8754" (55.401µs)
I0310 21:33:32.730285       1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Deployment total 3 items received
I0310 21:33:36.550992       1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 5 items received
... skipping 4 lines ...
I0310 21:33:38.748376       1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-3281/pvc-pqz9m]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0310 21:33:38.748418       1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-3281/pvc-pqz9m]: no volume found
I0310 21:33:38.748476       1 pv_controller.go:1455] provisionClaim[azurefile-3281/pvc-pqz9m]: started
I0310 21:33:38.748540       1 pv_controller.go:1764] scheduleOperation[provision-azurefile-3281/pvc-pqz9m[b554c279-e1ac-4a5a-811c-f9e074ee91f6]]
I0310 21:33:38.748606       1 pv_controller.go:1496] provisionClaimOperation [azurefile-3281/pvc-pqz9m] started, class: "azurefile-3281-kubernetes.io-azure-file-dynamic-sc-hrx8x"
I0310 21:33:38.748679       1 pv_controller.go:1511] provisionClaimOperation [azurefile-3281/pvc-pqz9m]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0310 21:33:38.757177       1 azure_provision.go:108] failed to get azure provider
I0310 21:33:38.757308       1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-3281/pvc-pqz9m" with StorageClass "azurefile-3281-kubernetes.io-azure-file-dynamic-sc-hrx8x": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0310 21:33:38.757407       1 goroutinemap.go:150] Operation for "provision-azurefile-3281/pvc-pqz9m[b554c279-e1ac-4a5a-811c-f9e074ee91f6]" failed. No retries permitted until 2023-03-10 21:33:40.757360478 +0000 UTC m=+1333.821144570 (durationBeforeRetry 2s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0310 21:33:38.757772       1 event.go:294] "Event occurred" object="azurefile-3281/pvc-pqz9m" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
I0310 21:33:38.768631       1 gc_controller.go:161] GC'ing orphaned
I0310 21:33:38.768720       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0310 21:33:40.366201       1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0310 21:33:47.527018       1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="94.502µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:48826" resp=200
I0310 21:33:52.437115       1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 2 items received
I0310 21:33:53.688563       1 reflector.go:382] k8s.io/client-go/informers/factory.go:134: forcing resync
... skipping 2 lines ...
I0310 21:33:53.749406       1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-3281/pvc-pqz9m]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0310 21:33:53.749482       1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-3281/pvc-pqz9m]: no volume found
I0310 21:33:53.749537       1 pv_controller.go:1455] provisionClaim[azurefile-3281/pvc-pqz9m]: started
I0310 21:33:53.749552       1 pv_controller.go:1764] scheduleOperation[provision-azurefile-3281/pvc-pqz9m[b554c279-e1ac-4a5a-811c-f9e074ee91f6]]
I0310 21:33:53.749599       1 pv_controller.go:1496] provisionClaimOperation [azurefile-3281/pvc-pqz9m] started, class: "azurefile-3281-kubernetes.io-azure-file-dynamic-sc-hrx8x"
I0310 21:33:53.749614       1 pv_controller.go:1511] provisionClaimOperation [azurefile-3281/pvc-pqz9m]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0310 21:33:53.754020       1 azure_provision.go:108] failed to get azure provider
I0310 21:33:53.754047       1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-3281/pvc-pqz9m" with StorageClass "azurefile-3281-kubernetes.io-azure-file-dynamic-sc-hrx8x": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0310 21:33:53.754085       1 goroutinemap.go:150] Operation for "provision-azurefile-3281/pvc-pqz9m[b554c279-e1ac-4a5a-811c-f9e074ee91f6]" failed. No retries permitted until 2023-03-10 21:33:57.754072203 +0000 UTC m=+1350.817856395 (durationBeforeRetry 4s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0310 21:33:53.754303       1 event.go:294] "Event occurred" object="azurefile-3281/pvc-pqz9m" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
I0310 21:33:54.487910       1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 7 items received
I0310 21:33:57.526689       1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="123.502µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:52798" resp=200
I0310 21:33:58.769392       1 gc_controller.go:161] GC'ing orphaned
I0310 21:33:58.769427       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0310 21:34:03.647526       1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.CertificateSigningRequest total 4 items received
I0310 21:34:06.997116       1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 7 items received
... skipping 5 lines ...
I0310 21:34:08.749701       1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-3281/pvc-pqz9m]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0310 21:34:08.749777       1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-3281/pvc-pqz9m]: no volume found
I0310 21:34:08.749815       1 pv_controller.go:1455] provisionClaim[azurefile-3281/pvc-pqz9m]: started
I0310 21:34:08.749841       1 pv_controller.go:1764] scheduleOperation[provision-azurefile-3281/pvc-pqz9m[b554c279-e1ac-4a5a-811c-f9e074ee91f6]]
I0310 21:34:08.749909       1 pv_controller.go:1496] provisionClaimOperation [azurefile-3281/pvc-pqz9m] started, class: "azurefile-3281-kubernetes.io-azure-file-dynamic-sc-hrx8x"
I0310 21:34:08.750007       1 pv_controller.go:1511] provisionClaimOperation [azurefile-3281/pvc-pqz9m]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0310 21:34:08.754854       1 azure_provision.go:108] failed to get azure provider
I0310 21:34:08.755056       1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-3281/pvc-pqz9m" with StorageClass "azurefile-3281-kubernetes.io-azure-file-dynamic-sc-hrx8x": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0310 21:34:08.755140       1 goroutinemap.go:150] Operation for "provision-azurefile-3281/pvc-pqz9m[b554c279-e1ac-4a5a-811c-f9e074ee91f6]" failed. No retries permitted until 2023-03-10 21:34:16.755115152 +0000 UTC m=+1369.818899344 (durationBeforeRetry 8s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0310 21:34:08.755194       1 event.go:294] "Event occurred" object="azurefile-3281/pvc-pqz9m" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
I0310 21:34:09.199669       1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 2 items received
I0310 21:34:10.372212       1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 7 items received
I0310 21:34:10.384136       1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0310 21:34:10.656286       1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.PersistentVolume total 9 items received
I0310 21:34:12.207635       1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 6 items received
I0310 21:34:14.871793       1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.LimitRange total 9 items received
... skipping 8 lines ...
I0310 21:34:23.750739       1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-3281/pvc-pqz9m]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0310 21:34:23.750784       1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-3281/pvc-pqz9m]: no volume found
I0310 21:34:23.750795       1 pv_controller.go:1455] provisionClaim[azurefile-3281/pvc-pqz9m]: started
I0310 21:34:23.750830       1 pv_controller.go:1764] scheduleOperation[provision-azurefile-3281/pvc-pqz9m[b554c279-e1ac-4a5a-811c-f9e074ee91f6]]
I0310 21:34:23.750884       1 pv_controller.go:1496] provisionClaimOperation [azurefile-3281/pvc-pqz9m] started, class: "azurefile-3281-kubernetes.io-azure-file-dynamic-sc-hrx8x"
I0310 21:34:23.750913       1 pv_controller.go:1511] provisionClaimOperation [azurefile-3281/pvc-pqz9m]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0310 21:34:23.757720       1 azure_provision.go:108] failed to get azure provider
I0310 21:34:23.757766       1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-3281/pvc-pqz9m" with StorageClass "azurefile-3281-kubernetes.io-azure-file-dynamic-sc-hrx8x": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0310 21:34:23.757950       1 goroutinemap.go:150] Operation for "provision-azurefile-3281/pvc-pqz9m[b554c279-e1ac-4a5a-811c-f9e074ee91f6]" failed. No retries permitted until 2023-03-10 21:34:39.757934159 +0000 UTC m=+1392.821718351 (durationBeforeRetry 16s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0310 21:34:23.758052       1 event.go:294] "Event occurred" object="azurefile-3281/pvc-pqz9m" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
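Across these cycles the goroutinemap.go:150 lines show durationBeforeRetry doubling: 8s, 16s, then 32s, 1m4s, and 2m2s further below. A sketch that reproduces the schedule, assuming plain doubling from the 500ms starting value seen later in this log, with a 2m2s ceiling inferred from the largest value observed:

    package main

    import (
        "fmt"
        "time"
    )

    // Reproduces the durationBeforeRetry sequence in this log:
    // 500ms, 1s, 2s, 4s, 8s, 16s, 32s, 1m4s, 2m2s, 2m2s, ...
    // (1m4s doubles past the assumed cap and is pinned at 2m2s).
    func main() {
        delay := 500 * time.Millisecond
        maxDelay := 2*time.Minute + 2*time.Second // assumption inferred from the log
        for i := 1; i <= 10; i++ {
            fmt.Printf("failure %2d -> durationBeforeRetry %v\n", i, delay)
            delay *= 2
            if delay > maxDelay {
                delay = maxDelay
            }
        }
    }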
I0310 21:34:27.526368       1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="92.302µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:56166" resp=200
I0310 21:34:27.552032       1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 2 items received
I0310 21:34:37.527305       1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="122.503µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:43510" resp=200
I0310 21:34:38.689562       1 reflector.go:382] k8s.io/client-go/informers/factory.go:134: forcing resync
I0310 21:34:38.751126       1 pv_controller_base.go:556] resyncing PV controller
I0310 21:34:38.751381       1 pv_controller_base.go:670] storeObjectUpdate updating claim "azurefile-3281/pvc-pqz9m" with version 5723
... skipping 14 lines ...
I0310 21:34:53.752660       1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-3281/pvc-pqz9m]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0310 21:34:53.752756       1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-3281/pvc-pqz9m]: no volume found
I0310 21:34:53.752767       1 pv_controller.go:1455] provisionClaim[azurefile-3281/pvc-pqz9m]: started
I0310 21:34:53.752778       1 pv_controller.go:1764] scheduleOperation[provision-azurefile-3281/pvc-pqz9m[b554c279-e1ac-4a5a-811c-f9e074ee91f6]]
I0310 21:34:53.752799       1 pv_controller.go:1496] provisionClaimOperation [azurefile-3281/pvc-pqz9m] started, class: "azurefile-3281-kubernetes.io-azure-file-dynamic-sc-hrx8x"
I0310 21:34:53.752846       1 pv_controller.go:1511] provisionClaimOperation [azurefile-3281/pvc-pqz9m]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0310 21:34:53.761504       1 azure_provision.go:108] failed to get azure provider
I0310 21:34:53.761534       1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-3281/pvc-pqz9m" with StorageClass "azurefile-3281-kubernetes.io-azure-file-dynamic-sc-hrx8x": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0310 21:34:53.761735       1 goroutinemap.go:150] Operation for "provision-azurefile-3281/pvc-pqz9m[b554c279-e1ac-4a5a-811c-f9e074ee91f6]" failed. No retries permitted until 2023-03-10 21:35:25.761718321 +0000 UTC m=+1438.825502513 (durationBeforeRetry 32s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0310 21:34:53.761851       1 event.go:294] "Event occurred" object="azurefile-3281/pvc-pqz9m" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
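Each retry window is bounded by the PV controller's periodic resync ("resyncing PV controller" above), which revisits every claim on a fixed cadence; the sync timestamps here (21:34:08, :23, :38, :53) are 15 seconds apart. A minimal sketch of such a loop; the 15s period is inferred from the timestamps, not quoted from a flag:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        ticker := time.NewTicker(15 * time.Second)
        defer ticker.Stop()
        for i := 0; i < 3; i++ {
            <-ticker.C
            fmt.Println("resyncing PV controller: requeue all claims and volumes")
            // A pending claim such as azurefile-3281/pvc-pqz9m gets
            // re-examined here even when no API event has arrived for it.
        }
    }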
I0310 21:34:55.645135       1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Job total 4 items received
I0310 21:34:55.657224       1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Lease total 1894 items received
I0310 21:34:57.527358       1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="119.703µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:56436" resp=200
I0310 21:34:58.771426       1 gc_controller.go:161] GC'ing orphaned
I0310 21:34:58.771461       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0310 21:35:04.628128       1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ReplicationController total 11 items received
... skipping 38 lines ...
I0310 21:35:38.754598       1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-3281/pvc-pqz9m]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0310 21:35:38.754633       1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-3281/pvc-pqz9m]: no volume found
I0310 21:35:38.754646       1 pv_controller.go:1455] provisionClaim[azurefile-3281/pvc-pqz9m]: started
I0310 21:35:38.754657       1 pv_controller.go:1764] scheduleOperation[provision-azurefile-3281/pvc-pqz9m[b554c279-e1ac-4a5a-811c-f9e074ee91f6]]
I0310 21:35:38.754707       1 pv_controller.go:1496] provisionClaimOperation [azurefile-3281/pvc-pqz9m] started, class: "azurefile-3281-kubernetes.io-azure-file-dynamic-sc-hrx8x"
I0310 21:35:38.754720       1 pv_controller.go:1511] provisionClaimOperation [azurefile-3281/pvc-pqz9m]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0310 21:35:38.764455       1 azure_provision.go:108] failed to get azure provider
I0310 21:35:38.764485       1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-3281/pvc-pqz9m" with StorageClass "azurefile-3281-kubernetes.io-azure-file-dynamic-sc-hrx8x": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0310 21:35:38.764552       1 goroutinemap.go:150] Operation for "provision-azurefile-3281/pvc-pqz9m[b554c279-e1ac-4a5a-811c-f9e074ee91f6]" failed. No retries permitted until 2023-03-10 21:36:42.764538247 +0000 UTC m=+1515.828322439 (durationBeforeRetry 1m4s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0310 21:35:38.764963       1 event.go:294] "Event occurred" object="azurefile-3281/pvc-pqz9m" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
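The "No retries permitted until ..." bookkeeping is kept per operation key: the goroutine map records, for each named operation, the earliest time it may run again. A hypothetical, pared-down version of that gate:

    package main

    import (
        "fmt"
        "time"
    )

    // backoffGate is a toy stand-in for the per-operation backoff records
    // behind the goroutinemap.go:150 lines above.
    type backoffGate struct {
        next map[string]time.Time // operation name -> earliest next attempt
    }

    func (g *backoffGate) allowed(op string, now time.Time) bool {
        return now.After(g.next[op])
    }

    func (g *backoffGate) failed(op string, now time.Time, delay time.Duration) {
        g.next[op] = now.Add(delay)
    }

    func main() {
        g := &backoffGate{next: map[string]time.Time{}}
        op := "provision-azurefile-3281/pvc-pqz9m[b554c279-e1ac-4a5a-811c-f9e074ee91f6]"
        now := time.Now()
        g.failed(op, now, 64*time.Second) // durationBeforeRetry 1m4s
        fmt.Println("allowed now?", g.allowed(op, now))                       // false
        fmt.Println("allowed after 1m4s?", g.allowed(op, now.Add(65*time.Second))) // true
    }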
I0310 21:35:38.773581       1 gc_controller.go:161] GC'ing orphaned
I0310 21:35:38.773607       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0310 21:35:40.471244       1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
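The quota controller line above means its periodic discovery poll found the same set of quotable resources as last time, so the expensive resync is skipped. A toy version of that comparison:

    package main

    import "fmt"

    // changed reports whether the set of quotable resources differs
    // between two discovery polls; a simplified illustration only.
    func changed(prev, curr map[string]bool) bool {
        if len(prev) != len(curr) {
            return true
        }
        for r := range curr {
            if !prev[r] {
                return true
            }
        }
        return false
    }

    func main() {
        prev := map[string]bool{"pods": true, "secrets": true, "configmaps": true}
        curr := map[string]bool{"pods": true, "secrets": true, "configmaps": true}
        if !changed(prev, curr) {
            fmt.Println("no resource updates from discovery, skipping resource quota sync")
        }
    }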
I0310 21:35:43.641959       1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ConfigMap total 6 items received
I0310 21:35:46.663679       1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Pod total 6 items received
I0310 21:35:47.527339       1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="179.804µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:53670" resp=200
... skipping 65 lines ...
I0310 21:36:53.758039       1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-3281/pvc-pqz9m]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0310 21:36:53.758085       1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-3281/pvc-pqz9m]: no volume found
I0310 21:36:53.758119       1 pv_controller.go:1455] provisionClaim[azurefile-3281/pvc-pqz9m]: started
I0310 21:36:53.758145       1 pv_controller.go:1764] scheduleOperation[provision-azurefile-3281/pvc-pqz9m[b554c279-e1ac-4a5a-811c-f9e074ee91f6]]
I0310 21:36:53.758201       1 pv_controller.go:1496] provisionClaimOperation [azurefile-3281/pvc-pqz9m] started, class: "azurefile-3281-kubernetes.io-azure-file-dynamic-sc-hrx8x"
I0310 21:36:53.758229       1 pv_controller.go:1511] provisionClaimOperation [azurefile-3281/pvc-pqz9m]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0310 21:36:53.762776       1 azure_provision.go:108] failed to get azure provider
I0310 21:36:53.762986       1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-3281/pvc-pqz9m" with StorageClass "azurefile-3281-kubernetes.io-azure-file-dynamic-sc-hrx8x": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0310 21:36:53.763068       1 goroutinemap.go:150] Operation for "provision-azurefile-3281/pvc-pqz9m[b554c279-e1ac-4a5a-811c-f9e074ee91f6]" failed. No retries permitted until 2023-03-10 21:38:55.763020151 +0000 UTC m=+1648.826804343 (durationBeforeRetry 2m2s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0310 21:36:53.763177       1 event.go:294] "Event occurred" object="azurefile-3281/pvc-pqz9m" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
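Note that the provisioner being exercised here is the in-tree kubernetes.io/azure-file plugin, which depends on the controller-manager's cloud provider; the CSI path does not. As an illustration only (not the test suite's own code), this is how a StorageClass targeting the file.csi.azure.com driver could be created with client-go, assuming the driver is installed and a default kubeconfig; the class name is hypothetical:

    package main

    import (
        "context"
        "log"

        storagev1 "k8s.io/api/storage/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }

        sc := &storagev1.StorageClass{
            ObjectMeta:  metav1.ObjectMeta{Name: "azurefile-csi"}, // hypothetical name
            Provisioner: "file.csi.azure.com",                     // CSI driver, not kubernetes.io/azure-file
        }
        if _, err := cs.StorageV1().StorageClasses().Create(context.TODO(), sc, metav1.CreateOptions{}); err != nil {
            log.Fatal(err)
        }
    }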
I0310 21:36:55.651743       1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.EndpointSlice total 7 items received
I0310 21:36:57.526522       1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="82.901µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:43936" resp=200
I0310 21:36:58.777319       1 gc_controller.go:161] GC'ing orphaned
I0310 21:36:58.777421       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0310 21:36:58.783184       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-k9b0el-md-0-sv4v2"
I0310 21:36:59.062892       1 node_lifecycle_controller.go:1046] Node capz-k9b0el-md-0-sv4v2 ReadyCondition updated. Updating timestamp.
... skipping 87 lines ...
I0310 21:38:17.288302       1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-1826/pvc-7h4gc]: no volume found
I0310 21:38:17.288308       1 pv_controller.go:1455] provisionClaim[azurefile-1826/pvc-7h4gc]: started
I0310 21:38:17.288319       1 pv_controller.go:1764] scheduleOperation[provision-azurefile-1826/pvc-7h4gc[55241aba-0ec3-412c-bad6-a28f762d73ac]]
I0310 21:38:17.288318       1 pv_controller_base.go:670] storeObjectUpdate updating claim "azurefile-1826/pvc-7h4gc" with version 6777
I0310 21:38:17.288362       1 pvc_protection_controller.go:331] "Got event on PVC" pvc="azurefile-1826/pvc-7h4gc"
I0310 21:38:17.288324       1 pv_controller.go:1775] operation "provision-azurefile-1826/pvc-7h4gc[55241aba-0ec3-412c-bad6-a28f762d73ac]" is already running, skipping
I0310 21:38:17.290027       1 azure_provision.go:108] failed to get azure provider
I0310 21:38:17.290046       1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-1826/pvc-7h4gc" with StorageClass "azurefile-1826-kubernetes.io-azure-file-dynamic-sc-jhlv4": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0310 21:38:17.290075       1 goroutinemap.go:150] Operation for "provision-azurefile-1826/pvc-7h4gc[55241aba-0ec3-412c-bad6-a28f762d73ac]" failed. No retries permitted until 2023-03-10 21:38:17.790062584 +0000 UTC m=+1610.853846676 (durationBeforeRetry 500ms). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0310 21:38:17.290248       1 event.go:294] "Event occurred" object="azurefile-1826/pvc-7h4gc" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
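Every failed attempt also lands as a Warning ProvisioningFailed event on the claim (the event.go:294 lines), which is what kubectl describe pvc would surface. A sketch of pulling those events with client-go, using the namespace and claim name from the log above and assuming a default kubeconfig:

    package main

    import (
        "context"
        "fmt"
        "log"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        // Field selector narrows to Warning events on the failing PVC.
        events, err := cs.CoreV1().Events("azurefile-1826").List(context.TODO(), metav1.ListOptions{
            FieldSelector: "involvedObject.name=pvc-7h4gc,type=Warning",
        })
        if err != nil {
            log.Fatal(err)
        }
        for _, e := range events.Items {
            fmt.Printf("%s %s: %s\n", e.Type, e.Reason, e.Message)
        }
    }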
I0310 21:38:17.526646       1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="81.602µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:57298" resp=200
I0310 21:38:18.780826       1 gc_controller.go:161] GC'ing orphaned
I0310 21:38:18.780860       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0310 21:38:20.809535       1 namespaced_resources_deleter.go:502] namespace controller - deleteAllContent - namespace: azurefile-3281
I0310 21:38:20.835022       1 resource_quota_monitor.go:359] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azurefile-3281, name pvc-pqz9m.174b2c4cac35de40, uid 8035b4b0-c6d0-410b-8772-d1efaed9d474, event type delete
I0310 21:38:20.866207       1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=configmaps, namespace azurefile-3281, name kube-root-ca.crt, uid 7289ef3f-d6e5-4c3e-ba64-0714a61c396b, event type delete
I0310 21:38:20.869648       1 publisher.go:186] Finished syncing namespace "azurefile-3281" (3.355774ms)
I0310 21:38:20.876997       1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=secrets, namespace azurefile-3281, name default-token-f6f6j, uid fd7186fc-e453-4f26-8f45-c4f660b59eba, event type delete
E0310 21:38:20.889949       1 tokens_controller.go:262] error synchronizing serviceaccount azurefile-3281/default: secrets "default-token-s2bkj" is forbidden: unable to create new content in namespace azurefile-3281 because it is being terminated
I0310 21:38:20.946406       1 tokens_controller.go:252] syncServiceAccount(azurefile-3281/default), service account deleted, removing tokens
I0310 21:38:20.946454       1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azurefile-3281, name default, uid 5856aee6-0aef-4685-9bff-453d6d84b46d, event type delete
I0310 21:38:20.947286       1 serviceaccounts_controller.go:188] Finished syncing namespace "azurefile-3281" (1.4µs)
I0310 21:38:20.965738       1 pv_controller_base.go:670] storeObjectUpdate updating claim "azurefile-3281/pvc-pqz9m" with version 6797
I0310 21:38:20.965769       1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-3281/pvc-pqz9m]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0310 21:38:20.965791       1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-3281/pvc-pqz9m]: no volume found
... skipping 20 lines ...
I0310 21:38:23.761847       1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-1826/pvc-7h4gc]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0310 21:38:23.761877       1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-1826/pvc-7h4gc]: no volume found
I0310 21:38:23.761889       1 pv_controller.go:1455] provisionClaim[azurefile-1826/pvc-7h4gc]: started
I0310 21:38:23.761900       1 pv_controller.go:1764] scheduleOperation[provision-azurefile-1826/pvc-7h4gc[55241aba-0ec3-412c-bad6-a28f762d73ac]]
I0310 21:38:23.761953       1 pv_controller.go:1496] provisionClaimOperation [azurefile-1826/pvc-7h4gc] started, class: "azurefile-1826-kubernetes.io-azure-file-dynamic-sc-jhlv4"
I0310 21:38:23.761966       1 pv_controller.go:1511] provisionClaimOperation [azurefile-1826/pvc-7h4gc]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0310 21:38:23.767126       1 azure_provision.go:108] failed to get azure provider
I0310 21:38:23.767155       1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-1826/pvc-7h4gc" with StorageClass "azurefile-1826-kubernetes.io-azure-file-dynamic-sc-jhlv4": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0310 21:38:23.767187       1 goroutinemap.go:150] Operation for "provision-azurefile-1826/pvc-7h4gc[55241aba-0ec3-412c-bad6-a28f762d73ac]" failed. No retries permitted until 2023-03-10 21:38:24.767172293 +0000 UTC m=+1617.830956485 (durationBeforeRetry 1s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0310 21:38:23.767448       1 event.go:294] "Event occurred" object="azurefile-1826/pvc-7h4gc" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
I0310 21:38:26.010248       1 namespaced_resources_deleter.go:502] namespace controller - deleteAllContent - namespace: azurefile-3281
I0310 21:38:26.210629       1 namespaced_resources_deleter.go:556] namespace controller - deleteAllContent - namespace: azurefile-3281, estimate: 0, errors: <nil>
I0310 21:38:26.211067       1 serviceaccounts_controller.go:188] Finished syncing namespace "azurefile-3281" (2.7µs)
I0310 21:38:26.256016       1 namespace_controller.go:180] Finished syncing namespace "azurefile-3281" (253.45604ms)
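The namespace deleter above walks every resource type in azurefile-3281, deletes the contents, and reports an estimate of work still pending; the namespace is finalized once that estimate reaches 0 with no errors. A toy version of that pass, with the estimate semantics simplified:

    package main

    import "fmt"

    // deleteAllContent issues deletes per resource type and sums a toy
    // "estimate" of items still pending; 0 means the namespace can finalize.
    func deleteAllContent(namespace string, pendingByResource map[string]int) int {
        estimate := 0
        for resource, pending := range pendingByResource {
            fmt.Printf("deleteAllContent - namespace: %s, resource: %s\n", namespace, resource)
            estimate += pending
        }
        return estimate
    }

    func main() {
        est := deleteAllContent("azurefile-3281", map[string]int{
            "events": 0, "configmaps": 0, "secrets": 0, "serviceaccounts": 0,
        })
        fmt.Printf("estimate: %d, errors: <nil>\n", est)
    }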
I0310 21:38:26.829012       1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0310 21:38:27.527199       1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="82.802µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:43372" resp=200
... skipping 7 lines ...
I0310 21:38:38.763131       1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-1826/pvc-7h4gc]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0310 21:38:38.763218       1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-1826/pvc-7h4gc]: no volume found
I0310 21:38:38.763233       1 pv_controller.go:1455] provisionClaim[azurefile-1826/pvc-7h4gc]: started
I0310 21:38:38.763246       1 pv_controller.go:1764] scheduleOperation[provision-azurefile-1826/pvc-7h4gc[55241aba-0ec3-412c-bad6-a28f762d73ac]]
I0310 21:38:38.763312       1 pv_controller.go:1496] provisionClaimOperation [azurefile-1826/pvc-7h4gc] started, class: "azurefile-1826-kubernetes.io-azure-file-dynamic-sc-jhlv4"
I0310 21:38:38.763326       1 pv_controller.go:1511] provisionClaimOperation [azurefile-1826/pvc-7h4gc]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0310 21:38:38.780089       1 azure_provision.go:108] failed to get azure provider
I0310 21:38:38.780118       1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-1826/pvc-7h4gc" with StorageClass "azurefile-1826-kubernetes.io-azure-file-dynamic-sc-jhlv4": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0310 21:38:38.780153       1 goroutinemap.go:150] Operation for "provision-azurefile-1826/pvc-7h4gc[55241aba-0ec3-412c-bad6-a28f762d73ac]" failed. No retries permitted until 2023-03-10 21:38:40.78014062 +0000 UTC m=+1633.843924712 (durationBeforeRetry 2s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0310 21:38:38.780359       1 event.go:294] "Event occurred" object="azurefile-1826/pvc-7h4gc" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
I0310 21:38:38.781104       1 gc_controller.go:161] GC'ing orphaned
I0310 21:38:38.781290       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0310 21:38:38.835843       1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0310 21:38:40.642294       1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0310 21:38:46.989418       1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1beta2.PriorityLevelConfiguration total 0 items received
I0310 21:38:47.527157       1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="106.902µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:34198" resp=200
... skipping 4 lines ...
I0310 21:38:53.763831       1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-1826/pvc-7h4gc]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0310 21:38:53.763869       1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-1826/pvc-7h4gc]: no volume found
I0310 21:38:53.763882       1 pv_controller.go:1455] provisionClaim[azurefile-1826/pvc-7h4gc]: started
I0310 21:38:53.763895       1 pv_controller.go:1764] scheduleOperation[provision-azurefile-1826/pvc-7h4gc[55241aba-0ec3-412c-bad6-a28f762d73ac]]
I0310 21:38:53.763923       1 pv_controller.go:1496] provisionClaimOperation [azurefile-1826/pvc-7h4gc] started, class: "azurefile-1826-kubernetes.io-azure-file-dynamic-sc-jhlv4"
I0310 21:38:53.763936       1 pv_controller.go:1511] provisionClaimOperation [azurefile-1826/pvc-7h4gc]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0310 21:38:53.767787       1 azure_provision.go:108] failed to get azure provider
I0310 21:38:53.767817       1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-1826/pvc-7h4gc" with StorageClass "azurefile-1826-kubernetes.io-azure-file-dynamic-sc-jhlv4": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0310 21:38:53.767999       1 goroutinemap.go:150] Operation for "provision-azurefile-1826/pvc-7h4gc[55241aba-0ec3-412c-bad6-a28f762d73ac]" failed. No retries permitted until 2023-03-10 21:38:57.767982845 +0000 UTC m=+1650.831767037 (durationBeforeRetry 4s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0310 21:38:53.768064       1 event.go:294] "Event occurred" object="azurefile-1826/pvc-7h4gc" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
I0310 21:38:57.526255       1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="86.702µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:42234" resp=200
I0310 21:38:58.781994       1 gc_controller.go:161] GC'ing orphaned
I0310 21:38:58.782034       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0310 21:38:59.629134       1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 7 items received
I0310 21:39:03.492001       1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 6 items received
I0310 21:39:07.526997       1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="92.102µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:59126" resp=200
... skipping 3 lines ...
I0310 21:39:08.764460       1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-1826/pvc-7h4gc]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0310 21:39:08.764492       1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-1826/pvc-7h4gc]: no volume found
I0310 21:39:08.764788       1 pv_controller.go:1455] provisionClaim[azurefile-1826/pvc-7h4gc]: started
I0310 21:39:08.764812       1 pv_controller.go:1764] scheduleOperation[provision-azurefile-1826/pvc-7h4gc[55241aba-0ec3-412c-bad6-a28f762d73ac]]
I0310 21:39:08.764875       1 pv_controller.go:1496] provisionClaimOperation [azurefile-1826/pvc-7h4gc] started, class: "azurefile-1826-kubernetes.io-azure-file-dynamic-sc-jhlv4"
I0310 21:39:08.764894       1 pv_controller.go:1511] provisionClaimOperation [azurefile-1826/pvc-7h4gc]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0310 21:39:08.778295       1 azure_provision.go:108] failed to get azure provider
I0310 21:39:08.778324       1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-1826/pvc-7h4gc" with StorageClass "azurefile-1826-kubernetes.io-azure-file-dynamic-sc-jhlv4": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0310 21:39:08.778397       1 goroutinemap.go:150] Operation for "provision-azurefile-1826/pvc-7h4gc[55241aba-0ec3-412c-bad6-a28f762d73ac]" failed. No retries permitted until 2023-03-10 21:39:16.778379297 +0000 UTC m=+1669.842163489 (durationBeforeRetry 8s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0310 21:39:08.778462       1 event.go:294] "Event occurred" object="azurefile-1826/pvc-7h4gc" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
I0310 21:39:08.858645       1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 70 items received
I0310 21:39:10.661809       1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0310 21:39:17.528185       1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="122.503µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:55288" resp=200
I0310 21:39:18.782594       1 gc_controller.go:161] GC'ing orphaned
I0310 21:39:18.782630       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0310 21:39:20.662017       1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ControllerRevision total 8 items received
... skipping 3 lines ...
I0310 21:39:23.765521       1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-1826/pvc-7h4gc]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0310 21:39:23.765569       1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-1826/pvc-7h4gc]: no volume found
I0310 21:39:23.765576       1 pv_controller.go:1455] provisionClaim[azurefile-1826/pvc-7h4gc]: started
I0310 21:39:23.765609       1 pv_controller.go:1764] scheduleOperation[provision-azurefile-1826/pvc-7h4gc[55241aba-0ec3-412c-bad6-a28f762d73ac]]
I0310 21:39:23.765643       1 pv_controller.go:1496] provisionClaimOperation [azurefile-1826/pvc-7h4gc] started, class: "azurefile-1826-kubernetes.io-azure-file-dynamic-sc-jhlv4"
I0310 21:39:23.765657       1 pv_controller.go:1511] provisionClaimOperation [azurefile-1826/pvc-7h4gc]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0310 21:39:23.771493       1 azure_provision.go:108] failed to get azure provider
I0310 21:39:23.771518       1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-1826/pvc-7h4gc" with StorageClass "azurefile-1826-kubernetes.io-azure-file-dynamic-sc-jhlv4": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0310 21:39:23.771712       1 goroutinemap.go:150] Operation for "provision-azurefile-1826/pvc-7h4gc[55241aba-0ec3-412c-bad6-a28f762d73ac]" failed. No retries permitted until 2023-03-10 21:39:39.771537496 +0000 UTC m=+1692.835321688 (durationBeforeRetry 16s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0310 21:39:23.771828       1 event.go:294] "Event occurred" object="azurefile-1826/pvc-7h4gc" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
I0310 21:39:27.526383       1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="83.202µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:47264" resp=200
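The steady httplog lines are the kubelet's probe (userAgent kube-probe/1.23+) hitting the component's /healthz endpoint every ten seconds and getting 200 back, confirming the controller-manager process itself stays healthy while provisioning fails. A minimal stand-in for such an endpoint; the address and plain HTTP are illustrative:

    package main

    import (
        "fmt"
        "log"
        "net/http"
    )

    func main() {
        http.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
            w.WriteHeader(http.StatusOK)
            fmt.Fprintln(w, "ok")
        })
        // The real component serves health checks on its own configured port.
        log.Fatal(http.ListenAndServe("127.0.0.1:8081", nil))
    }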
I0310 21:39:29.854739       1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0310 21:39:37.527569       1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="98.302µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:36652" resp=200
I0310 21:39:38.681189       1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ClusterRole total 9 items received
I0310 21:39:38.704551       1 reflector.go:382] k8s.io/client-go/informers/factory.go:134: forcing resync
I0310 21:39:38.765658       1 pv_controller_base.go:556] resyncing PV controller
... skipping 15 lines ...
I0310 21:39:53.766748       1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-1826/pvc-7h4gc]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0310 21:39:53.766779       1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-1826/pvc-7h4gc]: no volume found
I0310 21:39:53.766808       1 pv_controller.go:1455] provisionClaim[azurefile-1826/pvc-7h4gc]: started
I0310 21:39:53.766831       1 pv_controller.go:1764] scheduleOperation[provision-azurefile-1826/pvc-7h4gc[55241aba-0ec3-412c-bad6-a28f762d73ac]]
I0310 21:39:53.766846       1 pv_controller.go:1496] provisionClaimOperation [azurefile-1826/pvc-7h4gc] started, class: "azurefile-1826-kubernetes.io-azure-file-dynamic-sc-jhlv4"
I0310 21:39:53.766856       1 pv_controller.go:1511] provisionClaimOperation [azurefile-1826/pvc-7h4gc]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0310 21:39:53.770559       1 azure_provision.go:108] failed to get azure provider
I0310 21:39:53.770585       1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-1826/pvc-7h4gc" with StorageClass "azurefile-1826-kubernetes.io-azure-file-dynamic-sc-jhlv4": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0310 21:39:53.770765       1 goroutinemap.go:150] Operation for "provision-azurefile-1826/pvc-7h4gc[55241aba-0ec3-412c-bad6-a28f762d73ac]" failed. No retries permitted until 2023-03-10 21:40:25.770606214 +0000 UTC m=+1738.834390406 (durationBeforeRetry 32s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0310 21:39:53.770883       1 event.go:294] "Event occurred" object="azurefile-1826/pvc-7h4gc" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
I0310 21:39:54.653014       1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1beta1.CSIStorageCapacity total 8 items received
I0310 21:39:57.527253       1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="96.002µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:41022" resp=200
I0310 21:39:58.130615       1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.MutatingWebhookConfiguration total 0 items received
I0310 21:39:58.783981       1 gc_controller.go:161] GC'ing orphaned
I0310 21:39:58.784039       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0310 21:40:04.846758       1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
... skipping 30 lines ...
I0310 21:40:38.769808       1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-1826/pvc-7h4gc]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0310 21:40:38.769833       1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-1826/pvc-7h4gc]: no volume found
I0310 21:40:38.769909       1 pv_controller.go:1455] provisionClaim[azurefile-1826/pvc-7h4gc]: started
I0310 21:40:38.769927       1 pv_controller.go:1764] scheduleOperation[provision-azurefile-1826/pvc-7h4gc[55241aba-0ec3-412c-bad6-a28f762d73ac]]
I0310 21:40:38.770021       1 pv_controller.go:1496] provisionClaimOperation [azurefile-1826/pvc-7h4gc] started, class: "azurefile-1826-kubernetes.io-azure-file-dynamic-sc-jhlv4"
I0310 21:40:38.770136       1 pv_controller.go:1511] provisionClaimOperation [azurefile-1826/pvc-7h4gc]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0310 21:40:38.773397       1 azure_provision.go:108] failed to get azure provider
I0310 21:40:38.773423       1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-1826/pvc-7h4gc" with StorageClass "azurefile-1826-kubernetes.io-azure-file-dynamic-sc-jhlv4": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0310 21:40:38.773459       1 goroutinemap.go:150] Operation for "provision-azurefile-1826/pvc-7h4gc[55241aba-0ec3-412c-bad6-a28f762d73ac]" failed. No retries permitted until 2023-03-10 21:41:42.773444808 +0000 UTC m=+1815.837229000 (durationBeforeRetry 1m4s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0310 21:40:38.773655       1 event.go:294] "Event occurred" object="azurefile-1826/pvc-7h4gc" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
I0310 21:40:38.785506       1 gc_controller.go:161] GC'ing orphaned
I0310 21:40:38.785715       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0310 21:40:40.725523       1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0310 21:40:47.527937       1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="133.803µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:44536" resp=200
I0310 21:40:49.656222       1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Node total 17 items received
I0310 21:40:53.707921       1 reflector.go:382] k8s.io/client-go/informers/factory.go:134: forcing resync
... skipping 63 lines ...
I0310 21:41:53.773320       1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-1826/pvc-7h4gc]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0310 21:41:53.773344       1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-1826/pvc-7h4gc]: no volume found
I0310 21:41:53.773357       1 pv_controller.go:1455] provisionClaim[azurefile-1826/pvc-7h4gc]: started
I0310 21:41:53.773368       1 pv_controller.go:1764] scheduleOperation[provision-azurefile-1826/pvc-7h4gc[55241aba-0ec3-412c-bad6-a28f762d73ac]]
I0310 21:41:53.773388       1 pv_controller.go:1496] provisionClaimOperation [azurefile-1826/pvc-7h4gc] started, class: "azurefile-1826-kubernetes.io-azure-file-dynamic-sc-jhlv4"
I0310 21:41:53.773399       1 pv_controller.go:1511] provisionClaimOperation [azurefile-1826/pvc-7h4gc]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0310 21:41:53.775876       1 azure_provision.go:108] failed to get azure provider
I0310 21:41:53.776052       1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-1826/pvc-7h4gc" with StorageClass "azurefile-1826-kubernetes.io-azure-file-dynamic-sc-jhlv4": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0310 21:41:53.776140       1 goroutinemap.go:150] Operation for "provision-azurefile-1826/pvc-7h4gc[55241aba-0ec3-412c-bad6-a28f762d73ac]" failed. No retries permitted until 2023-03-10 21:43:55.776090574 +0000 UTC m=+1948.839874766 (durationBeforeRetry 2m2s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0310 21:41:53.776317       1 event.go:294] "Event occurred" object="azurefile-1826/pvc-7h4gc" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
I0310 21:41:54.115302       1 node_lifecycle_controller.go:1046] Node capz-k9b0el-control-plane-gfrn9 ReadyCondition updated. Updating timestamp.
I0310 21:41:57.527085       1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="113.403µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:53288" resp=200
I0310 21:41:58.734422       1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Deployment total 0 items received
I0310 21:41:58.788573       1 gc_controller.go:161] GC'ing orphaned
I0310 21:41:58.788622       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0310 21:41:59.676291       1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.CSINode total 11 items received
... skipping 113 lines ...
I0310 21:43:24.038140       1 pv_controller_base.go:670] storeObjectUpdate updating claim "azurefile-6378/pvc-5vhgt" with version 7849
I0310 21:43:24.038376       1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-6378/pvc-5vhgt]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0310 21:43:24.038465       1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-6378/pvc-5vhgt]: no volume found
I0310 21:43:24.038595       1 pv_controller.go:1455] provisionClaim[azurefile-6378/pvc-5vhgt]: started
I0310 21:43:24.038671       1 pv_controller.go:1764] scheduleOperation[provision-azurefile-6378/pvc-5vhgt[caa67c10-1919-4120-866b-42773234d7ad]]
I0310 21:43:24.038756       1 pv_controller.go:1775] operation "provision-azurefile-6378/pvc-5vhgt[caa67c10-1919-4120-866b-42773234d7ad]" is already running, skipping
I0310 21:43:24.039984       1 azure_provision.go:108] failed to get azure provider
I0310 21:43:24.040129       1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-6378/pvc-5vhgt" with StorageClass "azurefile-6378-kubernetes.io-azure-file-dynamic-sc-69gg9": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0310 21:43:24.040236       1 goroutinemap.go:150] Operation for "provision-azurefile-6378/pvc-5vhgt[caa67c10-1919-4120-866b-42773234d7ad]" failed. No retries permitted until 2023-03-10 21:43:24.540223108 +0000 UTC m=+1917.604007200 (durationBeforeRetry 500ms). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0310 21:43:24.040500       1 event.go:294] "Event occurred" object="azurefile-6378/pvc-5vhgt" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
I0310 21:43:24.158233       1 namespaced_resources_deleter.go:502] namespace controller - deleteAllContent - namespace: azurefile-1826
I0310 21:43:24.206270       1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=secrets, namespace azurefile-1826, name default-token-sk8cr, uid 6a162396-67b1-4f09-9552-4d0d5ac4a098, event type delete
E0310 21:43:24.219187       1 tokens_controller.go:262] error synchronizing serviceaccount azurefile-1826/default: secrets "default-token-j6gtk" is forbidden: unable to create new content in namespace azurefile-1826 because it is being terminated
I0310 21:43:24.243350       1 resource_quota_monitor.go:359] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azurefile-1826, name pvc-7h4gc.174b2c934d0b76a4, uid b2c8f681-5719-4e32-921c-8bda8f549dbe, event type delete
I0310 21:43:24.254815       1 pv_controller_base.go:670] storeObjectUpdate updating claim "azurefile-1826/pvc-7h4gc" with version 7859
I0310 21:43:24.254986       1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-1826/pvc-7h4gc]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0310 21:43:24.255015       1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-1826/pvc-7h4gc]: no volume found
I0310 21:43:24.255021       1 pv_controller.go:1455] provisionClaim[azurefile-1826/pvc-7h4gc]: started
I0310 21:43:24.255031       1 pv_controller.go:1764] scheduleOperation[provision-azurefile-1826/pvc-7h4gc[55241aba-0ec3-412c-bad6-a28f762d73ac]]
... skipping 47 lines ...
I0310 21:43:38.778881       1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-6378/pvc-5vhgt]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0310 21:43:38.778956       1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-6378/pvc-5vhgt]: no volume found
I0310 21:43:38.778971       1 pv_controller.go:1455] provisionClaim[azurefile-6378/pvc-5vhgt]: started
I0310 21:43:38.778994       1 pv_controller.go:1764] scheduleOperation[provision-azurefile-6378/pvc-5vhgt[caa67c10-1919-4120-866b-42773234d7ad]]
I0310 21:43:38.779038       1 pv_controller.go:1496] provisionClaimOperation [azurefile-6378/pvc-5vhgt] started, class: "azurefile-6378-kubernetes.io-azure-file-dynamic-sc-69gg9"
I0310 21:43:38.779050       1 pv_controller.go:1511] provisionClaimOperation [azurefile-6378/pvc-5vhgt]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0310 21:43:38.783930       1 azure_provision.go:108] failed to get azure provider
I0310 21:43:38.783957       1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-6378/pvc-5vhgt" with StorageClass "azurefile-6378-kubernetes.io-azure-file-dynamic-sc-69gg9": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0310 21:43:38.784016       1 goroutinemap.go:150] Operation for "provision-azurefile-6378/pvc-5vhgt[caa67c10-1919-4120-866b-42773234d7ad]" failed. No retries permitted until 2023-03-10 21:43:39.784002585 +0000 UTC m=+1932.847786677 (durationBeforeRetry 1s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0310 21:43:38.784087       1 event.go:294] "Event occurred" object="azurefile-6378/pvc-5vhgt" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
I0310 21:43:38.792329       1 gc_controller.go:161] GC'ing orphaned
I0310 21:43:38.792348       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0310 21:43:40.880409       1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0310 21:43:47.527673       1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="81.901µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:45740" resp=200
I0310 21:43:48.854192       1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0310 21:43:53.716224       1 reflector.go:382] k8s.io/client-go/informers/factory.go:134: forcing resync
... skipping 2 lines ...
I0310 21:43:53.779815       1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-6378/pvc-5vhgt]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0310 21:43:53.779838       1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-6378/pvc-5vhgt]: no volume found
I0310 21:43:53.779843       1 pv_controller.go:1455] provisionClaim[azurefile-6378/pvc-5vhgt]: started
I0310 21:43:53.779853       1 pv_controller.go:1764] scheduleOperation[provision-azurefile-6378/pvc-5vhgt[caa67c10-1919-4120-866b-42773234d7ad]]
I0310 21:43:53.779869       1 pv_controller.go:1496] provisionClaimOperation [azurefile-6378/pvc-5vhgt] started, class: "azurefile-6378-kubernetes.io-azure-file-dynamic-sc-69gg9"
I0310 21:43:53.779876       1 pv_controller.go:1511] provisionClaimOperation [azurefile-6378/pvc-5vhgt]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0310 21:43:53.792086       1 azure_provision.go:108] failed to get azure provider
I0310 21:43:53.792114       1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-6378/pvc-5vhgt" with StorageClass "azurefile-6378-kubernetes.io-azure-file-dynamic-sc-69gg9": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0310 21:43:53.792580       1 event.go:294] "Event occurred" object="azurefile-6378/pvc-5vhgt" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
E0310 21:43:53.792626       1 goroutinemap.go:150] Operation for "provision-azurefile-6378/pvc-5vhgt[caa67c10-1919-4120-866b-42773234d7ad]" failed. No retries permitted until 2023-03-10 21:43:55.79261311 +0000 UTC m=+1948.856397302 (durationBeforeRetry 2s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0310 21:43:57.519151       1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0310 21:43:57.529055       1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="105.002µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:55006" resp=200
I0310 21:43:58.792842       1 gc_controller.go:161] GC'ing orphaned
I0310 21:43:58.792874       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0310 21:44:00.841461       1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0310 21:44:07.527288       1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="102.503µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:44450" resp=200
... skipping 3 lines ...
I0310 21:44:08.779942       1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-6378/pvc-5vhgt]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0310 21:44:08.779968       1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-6378/pvc-5vhgt]: no volume found
I0310 21:44:08.779979       1 pv_controller.go:1455] provisionClaim[azurefile-6378/pvc-5vhgt]: started
I0310 21:44:08.779990       1 pv_controller.go:1764] scheduleOperation[provision-azurefile-6378/pvc-5vhgt[caa67c10-1919-4120-866b-42773234d7ad]]
I0310 21:44:08.780018       1 pv_controller.go:1496] provisionClaimOperation [azurefile-6378/pvc-5vhgt] started, class: "azurefile-6378-kubernetes.io-azure-file-dynamic-sc-69gg9"
I0310 21:44:08.780029       1 pv_controller.go:1511] provisionClaimOperation [azurefile-6378/pvc-5vhgt]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0310 21:44:08.793835       1 azure_provision.go:108] failed to get azure provider
I0310 21:44:08.793866       1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-6378/pvc-5vhgt" with StorageClass "azurefile-6378-kubernetes.io-azure-file-dynamic-sc-69gg9": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0310 21:44:08.793906       1 goroutinemap.go:150] Operation for "provision-azurefile-6378/pvc-5vhgt[caa67c10-1919-4120-866b-42773234d7ad]" failed. No retries permitted until 2023-03-10 21:44:12.793892367 +0000 UTC m=+1965.857676559 (durationBeforeRetry 4s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0310 21:44:08.794197       1 event.go:294] "Event occurred" object="azurefile-6378/pvc-5vhgt" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
I0310 21:44:10.910699       1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0310 21:44:15.657645       1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.EndpointSlice total 4 items received
I0310 21:44:17.527119       1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="79.501µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:41240" resp=200
I0310 21:44:18.793477       1 gc_controller.go:161] GC'ing orphaned
I0310 21:44:18.793509       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0310 21:44:19.717894       1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Secret total 13 items received
... skipping 3 lines ...
I0310 21:44:23.780805       1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-6378/pvc-5vhgt]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0310 21:44:23.780868       1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-6378/pvc-5vhgt]: no volume found
I0310 21:44:23.780877       1 pv_controller.go:1455] provisionClaim[azurefile-6378/pvc-5vhgt]: started
I0310 21:44:23.780888       1 pv_controller.go:1764] scheduleOperation[provision-azurefile-6378/pvc-5vhgt[caa67c10-1919-4120-866b-42773234d7ad]]
I0310 21:44:23.780903       1 pv_controller.go:1496] provisionClaimOperation [azurefile-6378/pvc-5vhgt] started, class: "azurefile-6378-kubernetes.io-azure-file-dynamic-sc-69gg9"
I0310 21:44:23.780937       1 pv_controller.go:1511] provisionClaimOperation [azurefile-6378/pvc-5vhgt]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0310 21:44:23.787030       1 azure_provision.go:108] failed to get azure provider
I0310 21:44:23.787058       1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-6378/pvc-5vhgt" with StorageClass "azurefile-6378-kubernetes.io-azure-file-dynamic-sc-69gg9": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0310 21:44:23.787129       1 goroutinemap.go:150] Operation for "provision-azurefile-6378/pvc-5vhgt[caa67c10-1919-4120-866b-42773234d7ad]" failed. No retries permitted until 2023-03-10 21:44:31.787100462 +0000 UTC m=+1984.850884654 (durationBeforeRetry 8s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0310 21:44:23.787277       1 event.go:294] "Event occurred" object="azurefile-6378/pvc-5vhgt" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
I0310 21:44:24.676226       1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.CSIDriver total 4 items received
I0310 21:44:27.194979       1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1beta1.PodSecurityPolicy total 3 items received
I0310 21:44:27.528332       1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="83.102µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:41466" resp=200
I0310 21:44:37.526591       1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="86.302µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:58960" resp=200
I0310 21:44:38.717684       1 reflector.go:382] k8s.io/client-go/informers/factory.go:134: forcing resync
I0310 21:44:38.780897       1 pv_controller_base.go:556] resyncing PV controller
I0310 21:44:38.780950       1 pv_controller_base.go:670] storeObjectUpdate updating claim "azurefile-6378/pvc-5vhgt" with version 7849
I0310 21:44:38.780967       1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-6378/pvc-5vhgt]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0310 21:44:38.780990       1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-6378/pvc-5vhgt]: no volume found
I0310 21:44:38.780995       1 pv_controller.go:1455] provisionClaim[azurefile-6378/pvc-5vhgt]: started
I0310 21:44:38.781005       1 pv_controller.go:1764] scheduleOperation[provision-azurefile-6378/pvc-5vhgt[caa67c10-1919-4120-866b-42773234d7ad]]
I0310 21:44:38.781019       1 pv_controller.go:1496] provisionClaimOperation [azurefile-6378/pvc-5vhgt] started, class: "azurefile-6378-kubernetes.io-azure-file-dynamic-sc-69gg9"
I0310 21:44:38.781027       1 pv_controller.go:1511] provisionClaimOperation [azurefile-6378/pvc-5vhgt]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0310 21:44:38.793349       1 azure_provision.go:108] failed to get azure provider
I0310 21:44:38.793376       1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-6378/pvc-5vhgt" with StorageClass "azurefile-6378-kubernetes.io-azure-file-dynamic-sc-69gg9": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0310 21:44:38.793419       1 goroutinemap.go:150] Operation for "provision-azurefile-6378/pvc-5vhgt[caa67c10-1919-4120-866b-42773234d7ad]" failed. No retries permitted until 2023-03-10 21:44:54.793401536 +0000 UTC m=+2007.857185728 (durationBeforeRetry 16s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0310 21:44:38.793500       1 event.go:294] "Event occurred" object="azurefile-6378/pvc-5vhgt" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
I0310 21:44:38.793639       1 gc_controller.go:161] GC'ing orphaned
I0310 21:44:38.793651       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0310 21:44:40.940342       1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0310 21:44:47.544400       1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="96.502µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:45514" resp=200
I0310 21:44:49.644314       1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 3 items received
I0310 21:44:51.651337       1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Job total 3 items received
... skipping 20 lines ...
I0310 21:45:08.781657       1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-6378/pvc-5vhgt]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0310 21:45:08.781684       1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-6378/pvc-5vhgt]: no volume found
I0310 21:45:08.781744       1 pv_controller.go:1455] provisionClaim[azurefile-6378/pvc-5vhgt]: started
I0310 21:45:08.781762       1 pv_controller.go:1764] scheduleOperation[provision-azurefile-6378/pvc-5vhgt[caa67c10-1919-4120-866b-42773234d7ad]]
I0310 21:45:08.781810       1 pv_controller.go:1496] provisionClaimOperation [azurefile-6378/pvc-5vhgt] started, class: "azurefile-6378-kubernetes.io-azure-file-dynamic-sc-69gg9"
I0310 21:45:08.781826       1 pv_controller.go:1511] provisionClaimOperation [azurefile-6378/pvc-5vhgt]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0310 21:45:08.784092       1 azure_provision.go:108] failed to get azure provider
I0310 21:45:08.784117       1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-6378/pvc-5vhgt" with StorageClass "azurefile-6378-kubernetes.io-azure-file-dynamic-sc-69gg9": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0310 21:45:08.784178       1 goroutinemap.go:150] Operation for "provision-azurefile-6378/pvc-5vhgt[caa67c10-1919-4120-866b-42773234d7ad]" failed. No retries permitted until 2023-03-10 21:45:40.784164929 +0000 UTC m=+2053.847949021 (durationBeforeRetry 32s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0310 21:45:08.784248       1 event.go:294] "Event occurred" object="azurefile-6378/pvc-5vhgt" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
I0310 21:45:10.214459       1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 4 items received
I0310 21:45:10.958365       1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0310 21:45:12.656232       1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1beta1.CSIStorageCapacity total 3 items received
I0310 21:45:15.138832       1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.MutatingWebhookConfiguration total 4 items received
I0310 21:45:17.527250       1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="84.302µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:43634" resp=200
I0310 21:45:18.794262       1 gc_controller.go:161] GC'ing orphaned
... skipping 32 lines ...
I0310 21:45:53.783599       1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-6378/pvc-5vhgt]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0310 21:45:53.784060       1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-6378/pvc-5vhgt]: no volume found
I0310 21:45:53.784127       1 pv_controller.go:1455] provisionClaim[azurefile-6378/pvc-5vhgt]: started
I0310 21:45:53.784184       1 pv_controller.go:1764] scheduleOperation[provision-azurefile-6378/pvc-5vhgt[caa67c10-1919-4120-866b-42773234d7ad]]
I0310 21:45:53.784302       1 pv_controller.go:1496] provisionClaimOperation [azurefile-6378/pvc-5vhgt] started, class: "azurefile-6378-kubernetes.io-azure-file-dynamic-sc-69gg9"
I0310 21:45:53.784390       1 pv_controller.go:1511] provisionClaimOperation [azurefile-6378/pvc-5vhgt]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0310 21:45:53.800716       1 azure_provision.go:108] failed to get azure provider
I0310 21:45:53.800946       1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-6378/pvc-5vhgt" with StorageClass "azurefile-6378-kubernetes.io-azure-file-dynamic-sc-69gg9": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0310 21:45:53.801028       1 goroutinemap.go:150] Operation for "provision-azurefile-6378/pvc-5vhgt[caa67c10-1919-4120-866b-42773234d7ad]" failed. No retries permitted until 2023-03-10 21:46:57.801012891 +0000 UTC m=+2130.864796983 (durationBeforeRetry 1m4s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0310 21:45:53.801489       1 event.go:294] "Event occurred" object="azurefile-6378/pvc-5vhgt" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
I0310 21:45:57.527334       1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="93.502µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:53372" resp=200
I0310 21:45:58.795944       1 gc_controller.go:161] GC'ing orphaned
I0310 21:45:58.795993       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0310 21:46:04.694977       1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 4 items received
I0310 21:46:05.232245       1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.PriorityClass total 5 items received
I0310 21:46:07.528381       1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="116.603µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:50094" resp=200
... skipping 65 lines ...
I0310 21:47:08.787586       1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-6378/pvc-5vhgt]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0310 21:47:08.787663       1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-6378/pvc-5vhgt]: no volume found
I0310 21:47:08.787678       1 pv_controller.go:1455] provisionClaim[azurefile-6378/pvc-5vhgt]: started
I0310 21:47:08.787690       1 pv_controller.go:1764] scheduleOperation[provision-azurefile-6378/pvc-5vhgt[caa67c10-1919-4120-866b-42773234d7ad]]
I0310 21:47:08.787741       1 pv_controller.go:1496] provisionClaimOperation [azurefile-6378/pvc-5vhgt] started, class: "azurefile-6378-kubernetes.io-azure-file-dynamic-sc-69gg9"
I0310 21:47:08.787757       1 pv_controller.go:1511] provisionClaimOperation [azurefile-6378/pvc-5vhgt]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0310 21:47:08.793613       1 azure_provision.go:108] failed to get azure provider
I0310 21:47:08.793638       1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-6378/pvc-5vhgt" with StorageClass "azurefile-6378-kubernetes.io-azure-file-dynamic-sc-69gg9": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0310 21:47:08.793668       1 goroutinemap.go:150] Operation for "provision-azurefile-6378/pvc-5vhgt[caa67c10-1919-4120-866b-42773234d7ad]" failed. No retries permitted until 2023-03-10 21:49:10.793653206 +0000 UTC m=+2263.857437298 (durationBeforeRetry 2m2s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0310 21:47:08.793759       1 event.go:294] "Event occurred" object="azurefile-6378/pvc-5vhgt" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
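
Because the in-tree plugin can never obtain a cloud provider in this configuration, retrying cannot succeed; dynamic provisioning would have to go through the CSI driver under test instead. A minimal sketch of a StorageClass that routes provisioning to file.csi.azure.com — the class name and parameters here are illustrative assumptions, not taken from this job:

cat <<'EOF' | kubectl --kubeconfig=./kubeconfig apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: azurefile-csi            # hypothetical name
provisioner: file.csi.azure.com
parameters:
  skuName: Standard_LRS          # assumed SKU; any supported account type works
reclaimPolicy: Delete
allowVolumeExpansion: true
EOF
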
I0310 21:47:10.646170       1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Endpoints total 6 items received
I0310 21:47:11.065079       1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0310 21:47:12.881230       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-k9b0el-md-0-sv4v2"
I0310 21:47:14.168152       1 node_lifecycle_controller.go:1046] Node capz-k9b0el-md-0-sv4v2 ReadyCondition updated. Updating timestamp.
I0310 21:47:17.526534       1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="96.002µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:33750" resp=200
I0310 21:47:18.797563       1 gc_controller.go:161] GC'ing orphaned
... skipping 79 lines ...
I0310 21:48:28.705569       1 serviceaccounts_controller.go:188] Finished syncing namespace "azurefile-291" (8.414193ms)
I0310 21:48:29.925328       1 serviceaccounts_controller.go:188] Finished syncing namespace "azurefile-291" (3.4µs)
I0310 21:48:30.048578       1 publisher.go:186] Finished syncing namespace "azurefile-5684" (8.686799ms)
I0310 21:48:30.051226       1 serviceaccounts_controller.go:188] Finished syncing namespace "azurefile-5684" (11.816871ms)
I0310 21:48:30.938892       1 namespaced_resources_deleter.go:502] namespace controller - deleteAllContent - namespace: azurefile-6378
I0310 21:48:30.970217       1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=secrets, namespace azurefile-6378, name default-token-nftlx, uid fee5b735-7f39-462f-b226-0b29d247c377, event type delete
E0310 21:48:30.983167       1 tokens_controller.go:262] error synchronizing serviceaccount azurefile-6378/default: secrets "default-token-h84ds" is forbidden: unable to create new content in namespace azurefile-6378 because it is being terminated
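
Error lines like the one above from tokens_controller are expected noise during namespace teardown: the controller races the namespace deleter by trying to mint a replacement service-account token in a namespace that is already Terminating, and it backs off on its own. They are unrelated to the provisioning failures. To check whether any test namespace is actually stuck rather than just slow (assuming the same kubeconfig as the rest of this job):

kubectl --kubeconfig=./kubeconfig get namespaces --field-selector status.phase=Terminating
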
I0310 21:48:31.019193       1 resource_quota_monitor.go:359] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azurefile-6378, name pvc-5vhgt.174b2cdab8c7797c, uid 1ab708ed-a129-47fe-a3d3-55ec9a62679b, event type delete
I0310 21:48:31.065850       1 pv_controller_base.go:670] storeObjectUpdate updating claim "azurefile-6378/pvc-5vhgt" with version 8944
I0310 21:48:31.065877       1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-6378/pvc-5vhgt]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0310 21:48:31.065897       1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-6378/pvc-5vhgt]: no volume found
I0310 21:48:31.065903       1 pv_controller.go:1455] provisionClaim[azurefile-6378/pvc-5vhgt]: started
I0310 21:48:31.066129       1 pv_controller.go:1764] scheduleOperation[provision-azurefile-6378/pvc-5vhgt[caa67c10-1919-4120-866b-42773234d7ad]]
... skipping 19 lines ...
I0310 21:48:31.142560       1 namespace_controller.go:157] Content remaining in namespace azurefile-6378, waiting 8 seconds
I0310 21:48:31.260141       1 serviceaccounts_controller.go:188] Finished syncing namespace "azurefile-5684" (3.1µs)
I0310 21:48:31.385378       1 publisher.go:186] Finished syncing namespace "azurefile-5363" (8.675198ms)
I0310 21:48:31.385672       1 serviceaccounts_controller.go:188] Finished syncing namespace "azurefile-5363" (9.026107ms)
I0310 21:48:32.264502       1 namespaced_resources_deleter.go:502] namespace controller - deleteAllContent - namespace: azurefile-3418
I0310 21:48:32.382325       1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=secrets, namespace azurefile-3418, name default-token-slgkp, uid 89706f66-0d93-4538-a5a5-f2ce15a1be54, event type delete
E0310 21:48:32.397744       1 tokens_controller.go:262] error synchronizing serviceaccount azurefile-3418/default: secrets "default-token-n8w67" is forbidden: unable to create new content in namespace azurefile-3418 because it is being terminated
I0310 21:48:32.438046       1 tokens_controller.go:252] syncServiceAccount(azurefile-3418/default), service account deleted, removing tokens
I0310 21:48:32.438103       1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azurefile-3418, name default, uid d2af0e30-e632-495b-9e3c-dbddd6738299, event type delete
I0310 21:48:32.438125       1 serviceaccounts_controller.go:188] Finished syncing namespace "azurefile-3418" (1.5µs)
I0310 21:48:32.484858       1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=configmaps, namespace azurefile-3418, name kube-root-ca.crt, uid ed088501-fa17-4a63-9d58-0a51cb89e603, event type delete
I0310 21:48:32.486836       1 publisher.go:186] Finished syncing namespace "azurefile-3418" (1.706539ms)
I0310 21:48:32.501165       1 serviceaccounts_controller.go:188] Finished syncing namespace "azurefile-3418" (2.6µs)
... skipping 4 lines ...
I0310 21:48:32.707982       1 publisher.go:186] Finished syncing namespace "azurefile-266" (5.394624ms)
I0310 21:48:32.710570       1 serviceaccounts_controller.go:188] Finished syncing namespace "azurefile-266" (8.29399ms)
I0310 21:48:33.588512       1 namespaced_resources_deleter.go:502] namespace controller - deleteAllContent - namespace: azurefile-9740
I0310 21:48:33.610633       1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=configmaps, namespace azurefile-9740, name kube-root-ca.crt, uid 6584e851-b675-49e1-8e40-03b8f0a2eb45, event type delete
I0310 21:48:33.612506       1 publisher.go:186] Finished syncing namespace "azurefile-9740" (1.642138ms)
I0310 21:48:33.687747       1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=secrets, namespace azurefile-9740, name default-token-zswb9, uid 8d47d4b0-5c2e-45dd-b6a4-194d1756e899, event type delete
E0310 21:48:33.701107       1 tokens_controller.go:262] error synchronizing serviceaccount azurefile-9740/default: secrets "default-token-759hb" is forbidden: unable to create new content in namespace azurefile-9740 because it is being terminated
I0310 21:48:33.757422       1 tokens_controller.go:252] syncServiceAccount(azurefile-9740/default), service account deleted, removing tokens
I0310 21:48:33.757466       1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azurefile-9740, name default, uid 84739bc4-a7f7-47c3-845c-e9cac040aa8d, event type delete
I0310 21:48:33.757550       1 serviceaccounts_controller.go:188] Finished syncing namespace "azurefile-9740" (1.4µs)
I0310 21:48:33.770478       1 namespaced_resources_deleter.go:556] namespace controller - deleteAllContent - namespace: azurefile-9740, estimate: 0, errors: <nil>
I0310 21:48:33.772025       1 serviceaccounts_controller.go:188] Finished syncing namespace "azurefile-9740" (2.7µs)
I0310 21:48:33.786161       1 namespace_controller.go:180] Finished syncing namespace "azurefile-9740" (201.964211ms)
I0310 21:48:33.894494       1 serviceaccounts_controller.go:188] Finished syncing namespace "azurefile-266" (3.2µs)
I0310 21:48:34.052929       1 publisher.go:186] Finished syncing namespace "azurefile-1143" (19.75065ms)
I0310 21:48:34.053515       1 serviceaccounts_controller.go:188] Finished syncing namespace "azurefile-1143" (20.552269ms)
I0310 21:48:34.930255       1 namespaced_resources_deleter.go:502] namespace controller - deleteAllContent - namespace: azurefile-291
I0310 21:48:34.980305       1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=configmaps, namespace azurefile-291, name kube-root-ca.crt, uid 20dac2d6-62fc-421b-9aff-827f57c8c48d, event type delete
I0310 21:48:34.983228       1 publisher.go:186] Finished syncing namespace "azurefile-291" (2.875665ms)
I0310 21:48:35.103197       1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=secrets, namespace azurefile-291, name default-token-b56th, uid ae0748d2-20eb-4b63-b37f-d41b23e34e67, event type delete
E0310 21:48:35.119620       1 tokens_controller.go:262] error synchronizing serviceaccount azurefile-291/default: secrets "default-token-t7vt5" is forbidden: unable to create new content in namespace azurefile-291 because it is being terminated
I0310 21:48:35.121561       1 tokens_controller.go:252] syncServiceAccount(azurefile-291/default), service account deleted, removing tokens
I0310 21:48:35.121595       1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azurefile-291, name default, uid 033ef18f-b404-4441-ba7a-8b415030b37c, event type delete
I0310 21:48:35.121616       1 serviceaccounts_controller.go:188] Finished syncing namespace "azurefile-291" (1.7µs)
I0310 21:48:35.131915       1 serviceaccounts_controller.go:188] Finished syncing namespace "azurefile-291" (3.601µs)
I0310 21:48:35.132649       1 namespaced_resources_deleter.go:556] namespace controller - deleteAllContent - namespace: azurefile-291, estimate: 0, errors: <nil>
I0310 21:48:35.147928       1 namespace_controller.go:180] Finished syncing namespace "azurefile-291" (221.839764ms)
... skipping 20 lines ...
I0310 21:48:37.213664       1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 6 items received
I0310 21:48:37.502481       1 namespace_controller.go:185] Namespace has been deleted azurefile-3418
I0310 21:48:37.502503       1 namespace_controller.go:180] Finished syncing namespace "azurefile-3418" (46.301µs)
I0310 21:48:37.527205       1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="101.102µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:60810" resp=200
I0310 21:48:37.592233       1 namespaced_resources_deleter.go:502] namespace controller - deleteAllContent - namespace: azurefile-5363
I0310 21:48:37.642531       1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=secrets, namespace azurefile-5363, name default-token-6pv59, uid 38a8f8a7-72de-4a60-ae5c-b2b573c884b4, event type delete
E0310 21:48:37.654677       1 tokens_controller.go:262] error synchronizing serviceaccount azurefile-5363/default: secrets "default-token-c6wkh" is forbidden: unable to create new content in namespace azurefile-5363 because it is being terminated
I0310 21:48:37.713526       1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=configmaps, namespace azurefile-5363, name kube-root-ca.crt, uid d6876b4c-8a5b-491c-86c6-b2939939dc46, event type delete
I0310 21:48:37.717927       1 publisher.go:186] Finished syncing namespace "azurefile-5363" (4.353799ms)
I0310 21:48:37.732949       1 tokens_controller.go:252] syncServiceAccount(azurefile-5363/default), service account deleted, removing tokens
I0310 21:48:37.733079       1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azurefile-5363, name default, uid 80093576-4e31-4fc9-a80b-4dbea8309672, event type delete
I0310 21:48:37.733172       1 serviceaccounts_controller.go:188] Finished syncing namespace "azurefile-5363" (2.1µs)
I0310 21:48:37.763283       1 serviceaccounts_controller.go:188] Finished syncing namespace "azurefile-5363" (2.6µs)
... skipping 56 lines ...
I0310 21:48:41.990637       1 serviceaccounts_controller.go:188] Finished syncing namespace "azurefile-4789" (5.255722ms)
I0310 21:48:41.992807       1 publisher.go:186] Finished syncing namespace "azurefile-4789" (7.589775ms)
I0310 21:48:42.764292       1 namespace_controller.go:185] Namespace has been deleted azurefile-5363
I0310 21:48:42.764319       1 namespace_controller.go:180] Finished syncing namespace "azurefile-5363" (54.201µs)
I0310 21:48:42.898667       1 namespaced_resources_deleter.go:502] namespace controller - deleteAllContent - namespace: azurefile-217
I0310 21:48:42.991766       1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=secrets, namespace azurefile-217, name default-token-vgcqz, uid 41f48572-3815-43bb-84ee-ccc27f8e7c7f, event type delete
E0310 21:48:43.004358       1 tokens_controller.go:262] error synchronizing serviceaccount azurefile-217/default: secrets "default-token-xnnqt" is forbidden: unable to create new content in namespace azurefile-217 because it is being terminated
I0310 21:48:43.032161       1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=configmaps, namespace azurefile-217, name kube-root-ca.crt, uid 968544cc-f6e6-490d-b47f-cccd3bcb0a16, event type delete
I0310 21:48:43.034528       1 publisher.go:186] Finished syncing namespace "azurefile-217" (2.320154ms)
I0310 21:48:43.057211       1 tokens_controller.go:252] syncServiceAccount(azurefile-217/default), service account deleted, removing tokens
I0310 21:48:43.057260       1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azurefile-217, name default, uid a03496de-655a-4f1d-afd4-93651cf47c98, event type delete
I0310 21:48:43.057436       1 serviceaccounts_controller.go:188] Finished syncing namespace "azurefile-217" (1.7µs)
I0310 21:48:43.070861       1 serviceaccounts_controller.go:188] Finished syncing namespace "azurefile-217" (2.9µs)
... skipping 37 lines ...
I0310 21:48:46.861964       1 namespace_controller.go:185] Namespace has been deleted azurefile-8993
I0310 21:48:46.861989       1 namespace_controller.go:180] Finished syncing namespace "azurefile-8993" (59.701µs)
I0310 21:48:46.872699       1 namespaced_resources_deleter.go:502] namespace controller - deleteAllContent - namespace: azurefile-170
I0310 21:48:46.919488       1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=configmaps, namespace azurefile-170, name kube-root-ca.crt, uid 9c96aef2-041a-489d-977f-26933c059a70, event type delete
I0310 21:48:46.921738       1 publisher.go:186] Finished syncing namespace "azurefile-170" (2.158247ms)
I0310 21:48:46.926328       1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=secrets, namespace azurefile-170, name default-token-64t64, uid 6892d4db-b39e-4e57-9c71-26f726c5b076, event type delete
E0310 21:48:46.941484       1 tokens_controller.go:262] error synchronizing serviceaccount azurefile-170/default: secrets "default-token-4kfrz" is forbidden: unable to create new content in namespace azurefile-170 because it is being terminated
I0310 21:48:47.042106       1 tokens_controller.go:252] syncServiceAccount(azurefile-170/default), service account deleted, removing tokens
I0310 21:48:47.042176       1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azurefile-170, name default, uid d7470b34-c671-4f37-969d-c6790aa3512b, event type delete
I0310 21:48:47.042200       1 serviceaccounts_controller.go:188] Finished syncing namespace "azurefile-170" (1.9µs)
I0310 21:48:47.074004       1 namespaced_resources_deleter.go:556] namespace controller - deleteAllContent - namespace: azurefile-170, estimate: 0, errors: <nil>
I0310 21:48:47.074734       1 serviceaccounts_controller.go:188] Finished syncing namespace "azurefile-170" (2.8µs)
I0310 21:48:47.084499       1 namespace_controller.go:180] Finished syncing namespace "azurefile-170" (215.402449ms)
... skipping 19 lines ...
I0310 21:48:49.377688       1 namespace_controller.go:185] Namespace has been deleted azurefile-4036
I0310 21:48:49.377712       1 namespace_controller.go:180] Finished syncing namespace "azurefile-4036" (47.801µs)
I0310 21:48:49.496033       1 namespaced_resources_deleter.go:502] namespace controller - deleteAllContent - namespace: azurefile-8504
I0310 21:48:49.532696       1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=configmaps, namespace azurefile-8504, name kube-root-ca.crt, uid 5dc6f794-1753-4383-87ef-67197b2fd356, event type delete
I0310 21:48:49.534299       1 publisher.go:186] Finished syncing namespace "azurefile-8504" (1.535134ms)
I0310 21:48:49.585883       1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=secrets, namespace azurefile-8504, name default-token-t4mqj, uid 7b034027-553f-4e3d-92a4-15db34bb095c, event type delete
E0310 21:48:49.604476       1 tokens_controller.go:262] error synchronizing serviceaccount azurefile-8504/default: secrets "default-token-csdbn" is forbidden: unable to create new content in namespace azurefile-8504 because it is being terminated
I0310 21:48:49.632276       1 tokens_controller.go:252] syncServiceAccount(azurefile-8504/default), service account deleted, removing tokens
I0310 21:48:49.632320       1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azurefile-8504, name default, uid 1c2b0fd8-7a36-4719-b1a4-52b91859b59e, event type delete
I0310 21:48:49.632376       1 serviceaccounts_controller.go:188] Finished syncing namespace "azurefile-8504" (1.9µs)
I0310 21:48:49.663600       1 serviceaccounts_controller.go:188] Finished syncing namespace "azurefile-8504" (5.1µs)
I0310 21:48:49.664250       1 namespaced_resources_deleter.go:556] namespace controller - deleteAllContent - namespace: azurefile-8504, estimate: 0, errors: <nil>
I0310 21:48:49.681468       1 namespace_controller.go:180] Finished syncing namespace "azurefile-8504" (188.499155ms)
... skipping 82 lines ...
I0310 21:48:57.378937       1 namespace_controller.go:185] Namespace has been deleted azurefile-7103
I0310 21:48:57.378961       1 namespace_controller.go:180] Finished syncing namespace "azurefile-7103" (47.201µs)
I0310 21:48:57.401659       1 namespaced_resources_deleter.go:502] namespace controller - deleteAllContent - namespace: azurefile-9580
I0310 21:48:57.421979       1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=configmaps, namespace azurefile-9580, name kube-root-ca.crt, uid 03b0edb1-3bd9-426d-af49-628e9ba173a3, event type delete
I0310 21:48:57.424059       1 publisher.go:186] Finished syncing namespace "azurefile-9580" (1.702837ms)
I0310 21:48:57.501743       1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=secrets, namespace azurefile-9580, name default-token-8qhvk, uid 4d49679b-c430-4243-b5ed-69a7227871ca, event type delete
E0310 21:48:57.514873       1 tokens_controller.go:262] error synchronizing serviceaccount azurefile-9580/default: secrets "default-token-xwz4f" is forbidden: unable to create new content in namespace azurefile-9580 because it is being terminated
I0310 21:48:57.515076       1 tokens_controller.go:252] syncServiceAccount(azurefile-9580/default), service account deleted, removing tokens
I0310 21:48:57.515220       1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azurefile-9580, name default, uid a595763c-3f0a-4bd8-88ea-4693e1243bd8, event type delete
I0310 21:48:57.515320       1 serviceaccounts_controller.go:188] Finished syncing namespace "azurefile-9580" (2.1µs)
I0310 21:48:57.526531       1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="77.102µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:50146" resp=200
I0310 21:48:57.576396       1 namespaced_resources_deleter.go:556] namespace controller - deleteAllContent - namespace: azurefile-9580, estimate: 0, errors: <nil>
I0310 21:48:57.577320       1 serviceaccounts_controller.go:188] Finished syncing namespace "azurefile-9580" (2.7µs)
... skipping 43 lines ...
[AfterSuite] 
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/suite_test.go:148
------------------------------


Summarizing 6 Failures:
  [FAIL] Dynamic Provisioning [It] should create a volume on demand with mount options [kubernetes.io/azure-file] [file.csi.azure.com] [Windows]
  /home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites/testsuites.go:221
  [FAIL] Dynamic Provisioning [It] should create multiple PV objects, bind to PVCs and attach all to different pods on the same node [kubernetes.io/azure-file] [file.csi.azure.com] [Windows]
  /home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites/testsuites.go:221
  [FAIL] Dynamic Provisioning [It] should create a volume on demand and mount it as readOnly in a pod [kubernetes.io/azure-file] [file.csi.azure.com] [Windows]
  /home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites/testsuites.go:221
  [FAIL] Dynamic Provisioning [It] should create a deployment object, write and read to it, delete the pod and write and read to it again [kubernetes.io/azure-file] [file.csi.azure.com] [Windows]
  /home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites/testsuites.go:221
  [FAIL] Dynamic Provisioning [It] should delete PV with reclaimPolicy "Delete" [kubernetes.io/azure-file] [file.csi.azure.com] [Windows]
  /home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites/testsuites.go:221
  [FAIL] Dynamic Provisioning [It] should create a volume on demand and resize it [kubernetes.io/azure-file] [file.csi.azure.com] [Windows]
  /home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites/testsuites.go:221

Ran 6 of 39 Specs in 1871.294 seconds
FAIL! -- 0 Passed | 6 Failed | 0 Pending | 33 Skipped
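
All six failed specs point at the same line in the shared suite helpers (testsuites.go:221), consistent with one common cause rather than six independent bugs; the controller-side ProvisioningFailed loop earlier in this log is the likely culprit, with every spec timing out waiting on its claim. To iterate on a single spec locally, a sketch using Ginkgo's focus flag — this assumes the repo checkout and the Azure credentials the suite expects are already in place:

cd /home/prow/go/src/sigs.k8s.io/azurefile-csi-driver
go test ./test/e2e -v -timeout 0 \
  -ginkgo.focus='should create a volume on demand with mount options'
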
You're using deprecated Ginkgo functionality:
=============================================
  Support for custom reporters has been removed in V2.  Please read the documentation linked to below for Ginkgo's new behavior and for a migration path:
  Learn more at: https://onsi.github.io/ginkgo/MIGRATING_TO_V2#removed-custom-reporters

To silence deprecations that can be silenced set the following environment variable:
  ACK_GINKGO_DEPRECATIONS=2.4.0
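
The deprecation notice is unrelated to the failures and can be silenced exactly as the message says, before invoking the suite:

export ACK_GINKGO_DEPRECATIONS=2.4.0
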

--- FAIL: TestE2E (1871.30s)
FAIL
FAIL	sigs.k8s.io/azurefile-csi-driver/test/e2e	1871.369s
FAIL
make: *** [Makefile:85: e2e-test] Error 1
NAME                              STATUS   ROLES                  AGE   VERSION                          INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
capz-k9b0el-control-plane-gfrn9   Ready    control-plane,master   37m   v1.23.18-rc.0.1+500bcf6c2b6f54   10.0.0.4      <none>        Ubuntu 18.04.6 LTS   5.4.0-1104-azure   containerd://1.6.18
capz-k9b0el-md-0-ffl2x            Ready    <none>                 34m   v1.23.18-rc.0.1+500bcf6c2b6f54   10.1.0.4      <none>        Ubuntu 18.04.6 LTS   5.4.0-1104-azure   containerd://1.6.18
capz-k9b0el-md-0-sv4v2            Ready    <none>                 34m   v1.23.18-rc.0.1+500bcf6c2b6f54   10.1.0.5      <none>        Ubuntu 18.04.6 LTS   5.4.0-1104-azure   containerd://1.6.18
NAMESPACE          NAME                                                      READY   STATUS    RESTARTS   AGE   IP                NODE                              NOMINATED NODE   READINESS GATES
calico-apiserver   calico-apiserver-86544bbddb-kknwg                         1/1     Running   0          35m   192.168.186.71    capz-k9b0el-control-plane-gfrn9   <none>           <none>
... skipping 163 lines ...