Error lines from build-log.txt
... skipping 831 lines ...
certificate.cert-manager.io "selfsigned-cert" deleted
# Create secret for AzureClusterIdentity
./hack/create-identity-secret.sh
make[2]: Entering directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make[2]: Nothing to be done for 'kubectl'.
make[2]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
Error from server (NotFound): secrets "cluster-identity-secret" not found
secret/cluster-identity-secret created
secret/cluster-identity-secret labeled
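For context, the two secret/... lines above are the visible effect of hack/create-identity-secret.sh. A minimal sketch of what such a script typically runs follows; the client-secret variable and the label key are assumptions, not read from this log:

# Hypothetical sketch of hack/create-identity-secret.sh (variable name and label key assumed):
kubectl create secret generic cluster-identity-secret \
  --from-literal=clientSecret="${AZURE_CLIENT_SECRET}"
# The "secret/cluster-identity-secret labeled" line implies a follow-up label, e.g.:
kubectl label secret cluster-identity-secret \
  clusterctl.cluster.x-k8s.io/move-hierarchy=true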
# Create customized cloud provider configs
./hack/create-custom-cloud-provider-config.sh
make[2]: Entering directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make[2]: Nothing to be done for 'kubectl'.
... skipping 143 lines ...
timeout --foreground 600 bash -c "while ! /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/kubectl-v1.25.6 --kubeconfig=./kubeconfig get nodes | grep control-plane; do sleep 1; done"
Unable to connect to the server: dial tcp 20.120.64.237:6443: i/o timeout
capz-jtbghr-control-plane-99vcl NotReady control-plane,master 17s v1.23.18-rc.0.1+500bcf6c2b6f54
run "/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/kubectl-v1.25.6 --kubeconfig=./kubeconfig ..." to work with the new target cluster
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
namespace/calico-system created
Error from server (NotFound): configmaps "kubeadm-config" not found
configmap/kubeadm-config created
Installing Calico CNI via helm
Cluster CIDR is IPv4
WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/kubeconfig
WARNING: Kubernetes configuration file is world-readable. This is insecure. Location: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/kubeconfig
Release "calico" does not exist. Installing it now.
... skipping 325 lines ...
Mar 15 21:26:36.746: INFO: PersistentVolumeClaim pvc-qzwrb found but phase is Pending instead of Bound.
Mar 15 21:26:38.776: INFO: PersistentVolumeClaim pvc-qzwrb found but phase is Pending instead of Bound.
Mar 15 21:26:40.807: INFO: PersistentVolumeClaim pvc-qzwrb found but phase is Pending instead of Bound.
Mar 15 21:26:42.837: INFO: PersistentVolumeClaim pvc-qzwrb found but phase is Pending instead of Bound.
Mar 15 21:26:44.869: INFO: PersistentVolumeClaim pvc-qzwrb found but phase is Pending instead of Bound.
Mar 15 21:26:46.900: INFO: PersistentVolumeClaim pvc-qzwrb found but phase is Pending instead of Bound.
Mar 15 21:26:48.901: INFO: Unexpected error:
<*errors.errorString | 0xc0005916b0>: {
s: "PersistentVolumeClaims [pvc-qzwrb] not all in phase Bound within 5m0s",
}
Mar 15 21:26:48.901: FAIL: PersistentVolumeClaims [pvc-qzwrb] not all in phase Bound within 5m0s
Full Stack Trace
sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites.(*TestPersistentVolumeClaim).WaitForBound(_)
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites/testsuites.go:221 +0x19d
sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites.(*VolumeDetails).SetupDynamicPersistentVolumeClaim(0xc0008bfc70, {0x2896668?, 0xc0003e81a0}, 0xc0007514a0, {0x7fd5693f2448, 0xc00003af80}, 0x0?)
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites/specs.go:242 +0x6dc
... skipping 3 lines ...
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites/dynamically_provisioned_cmd_volume_tester.go:41 +0xed
sigs.k8s.io/azurefile-csi-driver/test/e2e.glob..func1.3()
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/dynamic_provisioning_test.go:149 +0x5f5
STEP: dump namespace information after failure 03/15/23 21:26:48.901
STEP: Destroying namespace "azurefile-8255" for this suite. 03/15/23 21:26:48.901
------------------------------
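Every Dynamic Provisioning failure in this log has the same shape: the PVC polls as Pending roughly every 2s until WaitForBound gives up at 5m0s (testsuites.go:221). When triaging such a failure by hand, the usual first steps look like this; the claim and namespace names are taken from the failure above, while the controller label selector is an assumption about the driver's deployment:

# Inspect the stuck claim and recent events (names from the log above):
kubectl --kubeconfig ./kubeconfig describe pvc pvc-qzwrb -n azurefile-8255
kubectl --kubeconfig ./kubeconfig get events -n azurefile-8255 --sort-by=.lastTimestamp
# Check the CSI controller/provisioner logs (label selector assumed):
kubectl --kubeconfig ./kubeconfig logs -n kube-system \
  -l app=csi-azurefile-controller --all-containers --tail=100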
• [FAILED] [301.225 seconds]
Dynamic Provisioning
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/dynamic_provisioning_test.go:43
[It] should create a volume on demand with mount options [kubernetes.io/azure-file] [file.csi.azure.com] [Windows]
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/dynamic_provisioning_test.go:106
Begin Captured GinkgoWriter Output >>
... skipping 154 lines ...
Mar 15 21:26:36.746: INFO: PersistentVolumeClaim pvc-qzwrb found but phase is Pending instead of Bound.
Mar 15 21:26:38.776: INFO: PersistentVolumeClaim pvc-qzwrb found but phase is Pending instead of Bound.
Mar 15 21:26:40.807: INFO: PersistentVolumeClaim pvc-qzwrb found but phase is Pending instead of Bound.
Mar 15 21:26:42.837: INFO: PersistentVolumeClaim pvc-qzwrb found but phase is Pending instead of Bound.
Mar 15 21:26:44.869: INFO: PersistentVolumeClaim pvc-qzwrb found but phase is Pending instead of Bound.
Mar 15 21:26:46.900: INFO: PersistentVolumeClaim pvc-qzwrb found but phase is Pending instead of Bound.
Mar 15 21:26:48.901: INFO: Unexpected error:
<*errors.errorString | 0xc0005916b0>: {
s: "PersistentVolumeClaims [pvc-qzwrb] not all in phase Bound within 5m0s",
}
Mar 15 21:26:48.901: FAIL: PersistentVolumeClaims [pvc-qzwrb] not all in phase Bound within 5m0s
Full Stack Trace
sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites.(*TestPersistentVolumeClaim).WaitForBound(_)
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites/testsuites.go:221 +0x19d
sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites.(*VolumeDetails).SetupDynamicPersistentVolumeClaim(0xc0008bfc70, {0x2896668?, 0xc0003e81a0}, 0xc0007514a0, {0x7fd5693f2448, 0xc00003af80}, 0x0?)
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites/specs.go:242 +0x6dc
... skipping 12 lines ...
There were additional failures detected after the initial failure:
[PANICKED]
Test Panicked
In [DeferCleanup (Each)] at: /usr/local/go/src/runtime/panic.go:260
runtime error: invalid memory address or nil pointer dereference
Full Stack Trace
k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo.func1()
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:274 +0x5c
k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo(0xc0003fa1e0)
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:271 +0x139
... skipping 236 lines ...
Mar 15 21:31:38.877: INFO: PersistentVolumeClaim pvc-4mwlj found but phase is Pending instead of Bound.
Mar 15 21:31:40.909: INFO: PersistentVolumeClaim pvc-4mwlj found but phase is Pending instead of Bound.
Mar 15 21:31:42.940: INFO: PersistentVolumeClaim pvc-4mwlj found but phase is Pending instead of Bound.
Mar 15 21:31:44.973: INFO: PersistentVolumeClaim pvc-4mwlj found but phase is Pending instead of Bound.
Mar 15 21:31:47.005: INFO: PersistentVolumeClaim pvc-4mwlj found but phase is Pending instead of Bound.
Mar 15 21:31:49.037: INFO: PersistentVolumeClaim pvc-4mwlj found but phase is Pending instead of Bound.
Mar 15 21:31:51.037: INFO: Unexpected error:
<*errors.errorString | 0xc0000258d0>: {
s: "PersistentVolumeClaims [pvc-4mwlj] not all in phase Bound within 5m0s",
}
Mar 15 21:31:51.038: FAIL: PersistentVolumeClaims [pvc-4mwlj] not all in phase Bound within 5m0s
Full Stack Trace
sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites.(*TestPersistentVolumeClaim).WaitForBound(_)
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites/testsuites.go:221 +0x19d
sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites.(*VolumeDetails).SetupDynamicPersistentVolumeClaim(0xc0005edc90, {0x2896668?, 0xc0000ffa00}, 0xc00099f4a0, {0x7fd5693f2448, 0xc00003af80}, 0x0?)
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites/specs.go:242 +0x6dc
... skipping 3 lines ...
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites/dynamically_provisioned_collocated_pod_tester.go:40 +0x153
sigs.k8s.io/azurefile-csi-driver/test/e2e.glob..func1.6()
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/dynamic_provisioning_test.go:273 +0x5ed
STEP: dump namespace information after failure 03/15/23 21:31:51.039
STEP: Destroying namespace "azurefile-9696" for this suite. 03/15/23 21:31:51.039
------------------------------
• [FAILED] [301.226 seconds]
Dynamic Provisioning
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/dynamic_provisioning_test.go:43
[It] should create multiple PV objects, bind to PVCs and attach all to different pods on the same node [kubernetes.io/azure-file] [file.csi.azure.com] [Windows]
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/dynamic_provisioning_test.go:224
Begin Captured GinkgoWriter Output >>
... skipping 154 lines ...
Mar 15 21:31:38.877: INFO: PersistentVolumeClaim pvc-4mwlj found but phase is Pending instead of Bound.
Mar 15 21:31:40.909: INFO: PersistentVolumeClaim pvc-4mwlj found but phase is Pending instead of Bound.
Mar 15 21:31:42.940: INFO: PersistentVolumeClaim pvc-4mwlj found but phase is Pending instead of Bound.
Mar 15 21:31:44.973: INFO: PersistentVolumeClaim pvc-4mwlj found but phase is Pending instead of Bound.
Mar 15 21:31:47.005: INFO: PersistentVolumeClaim pvc-4mwlj found but phase is Pending instead of Bound.
Mar 15 21:31:49.037: INFO: PersistentVolumeClaim pvc-4mwlj found but phase is Pending instead of Bound.
Mar 15 21:31:51.037: INFO: Unexpected error:
<*errors.errorString | 0xc0000258d0>: {
s: "PersistentVolumeClaims [pvc-4mwlj] not all in phase Bound within 5m0s",
}
Mar 15 21:31:51.038: FAIL: PersistentVolumeClaims [pvc-4mwlj] not all in phase Bound within 5m0s
Full Stack Trace
sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites.(*TestPersistentVolumeClaim).WaitForBound(_)
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites/testsuites.go:221 +0x19d
sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites.(*VolumeDetails).SetupDynamicPersistentVolumeClaim(0xc0005edc90, {0x2896668?, 0xc0000ffa00}, 0xc00099f4a0, {0x7fd5693f2448, 0xc00003af80}, 0x0?)
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites/specs.go:242 +0x6dc
... skipping 12 lines ...
There were additional failures detected after the initial failure:
[PANICKED]
Test Panicked
In [DeferCleanup (Each)] at: /usr/local/go/src/runtime/panic.go:260
runtime error: invalid memory address or nil pointer dereference
Full Stack Trace
k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo.func1()
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:274 +0x5c
k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo(0xc0003fa1e0)
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:271 +0x139
... skipping 166 lines ...
Mar 15 21:36:40.260: INFO: PersistentVolumeClaim pvc-g5qrt found but phase is Pending instead of Bound.
Mar 15 21:36:42.291: INFO: PersistentVolumeClaim pvc-g5qrt found but phase is Pending instead of Bound.
Mar 15 21:36:44.323: INFO: PersistentVolumeClaim pvc-g5qrt found but phase is Pending instead of Bound.
Mar 15 21:36:46.354: INFO: PersistentVolumeClaim pvc-g5qrt found but phase is Pending instead of Bound.
Mar 15 21:36:48.385: INFO: PersistentVolumeClaim pvc-g5qrt found but phase is Pending instead of Bound.
Mar 15 21:36:50.416: INFO: PersistentVolumeClaim pvc-g5qrt found but phase is Pending instead of Bound.
Mar 15 21:36:52.416: INFO: Unexpected error:
<*errors.errorString | 0xc0001229f0>: {
s: "PersistentVolumeClaims [pvc-g5qrt] not all in phase Bound within 5m0s",
}
Mar 15 21:36:52.416: FAIL: PersistentVolumeClaims [pvc-g5qrt] not all in phase Bound within 5m0s
Full Stack Trace
sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites.(*TestPersistentVolumeClaim).WaitForBound(_)
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites/testsuites.go:221 +0x19d
sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites.(*VolumeDetails).SetupDynamicPersistentVolumeClaim(0xc0005edbe0, {0x2896668?, 0xc0003e9380}, 0xc000751b80, {0x7fd5693f2448, 0xc00003af80}, 0x0?)
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites/specs.go:242 +0x6dc
... skipping 3 lines ...
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites/dynamically_provisioned_read_only_volume_tester.go:48 +0x13c
sigs.k8s.io/azurefile-csi-driver/test/e2e.glob..func1.7()
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/dynamic_provisioning_test.go:308 +0x365
STEP: dump namespace information after failure 03/15/23 21:36:52.417
STEP: Destroying namespace "azurefile-6810" for this suite. 03/15/23 21:36:52.417
------------------------------
• [FAILED] [301.355 seconds]
Dynamic Provisioning
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/dynamic_provisioning_test.go:43
[It] should create a volume on demand and mount it as readOnly in a pod [kubernetes.io/azure-file] [file.csi.azure.com] [Windows]
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/dynamic_provisioning_test.go:277
Begin Captured GinkgoWriter Output >>
... skipping 154 lines ...
Mar 15 21:36:40.260: INFO: PersistentVolumeClaim pvc-g5qrt found but phase is Pending instead of Bound.
Mar 15 21:36:42.291: INFO: PersistentVolumeClaim pvc-g5qrt found but phase is Pending instead of Bound.
Mar 15 21:36:44.323: INFO: PersistentVolumeClaim pvc-g5qrt found but phase is Pending instead of Bound.
Mar 15 21:36:46.354: INFO: PersistentVolumeClaim pvc-g5qrt found but phase is Pending instead of Bound.
Mar 15 21:36:48.385: INFO: PersistentVolumeClaim pvc-g5qrt found but phase is Pending instead of Bound.
Mar 15 21:36:50.416: INFO: PersistentVolumeClaim pvc-g5qrt found but phase is Pending instead of Bound.
Mar 15 21:36:52.416: INFO: Unexpected error:
<*errors.errorString | 0xc0001229f0>: {
s: "PersistentVolumeClaims [pvc-g5qrt] not all in phase Bound within 5m0s",
}
Mar 15 21:36:52.416: FAIL: PersistentVolumeClaims [pvc-g5qrt] not all in phase Bound within 5m0s
Full Stack Trace
sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites.(*TestPersistentVolumeClaim).WaitForBound(_)
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites/testsuites.go:221 +0x19d
sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites.(*VolumeDetails).SetupDynamicPersistentVolumeClaim(0xc0005edbe0, {0x2896668?, 0xc0003e9380}, 0xc000751b80, {0x7fd5693f2448, 0xc00003af80}, 0x0?)
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites/specs.go:242 +0x6dc
... skipping 12 lines ...
There were additional failures detected after the initial failure:
[PANICKED]
Test Panicked
In [DeferCleanup (Each)] at: /usr/local/go/src/runtime/panic.go:260
runtime error: invalid memory address or nil pointer dereference
Full Stack Trace
k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo.func1()
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:274 +0x5c
k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo(0xc0003fa1e0)
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:271 +0x139
... skipping 166 lines ...
Mar 15 21:41:41.665: INFO: PersistentVolumeClaim pvc-cgjkq found but phase is Pending instead of Bound.
Mar 15 21:41:43.696: INFO: PersistentVolumeClaim pvc-cgjkq found but phase is Pending instead of Bound.
Mar 15 21:41:45.727: INFO: PersistentVolumeClaim pvc-cgjkq found but phase is Pending instead of Bound.
Mar 15 21:41:47.758: INFO: PersistentVolumeClaim pvc-cgjkq found but phase is Pending instead of Bound.
Mar 15 21:41:49.790: INFO: PersistentVolumeClaim pvc-cgjkq found but phase is Pending instead of Bound.
Mar 15 21:41:51.822: INFO: PersistentVolumeClaim pvc-cgjkq found but phase is Pending instead of Bound.
Mar 15 21:41:53.822: INFO: Unexpected error:
<*errors.errorString | 0xc00047f0e0>: {
s: "PersistentVolumeClaims [pvc-cgjkq] not all in phase Bound within 5m0s",
}
Mar 15 21:41:53.822: FAIL: PersistentVolumeClaims [pvc-cgjkq] not all in phase Bound within 5m0s
Full Stack Trace
sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites.(*TestPersistentVolumeClaim).WaitForBound(_)
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites/testsuites.go:221 +0x19d
sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites.(*PodDetails).SetupDeployment(0xc000b39ea8, {0x2896668?, 0xc000103ba0}, 0xc00081a420, {0x7fd5693f2448, 0xc00003af80}, 0x7fd5a0bcff18?)
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites/specs.go:185 +0x495
sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites.(*DynamicallyProvisionedDeletePodTest).Run(0xc000b39e98, {0x2896668?, 0xc000103ba0?}, 0x10?)
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites/dynamically_provisioned_delete_pod_tester.go:45 +0x55
sigs.k8s.io/azurefile-csi-driver/test/e2e.glob..func1.8()
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/dynamic_provisioning_test.go:345 +0x434
STEP: dump namespace information after failure 03/15/23 21:41:53.823
STEP: Destroying namespace "azurefile-8836" for this suite. 03/15/23 21:41:53.823
------------------------------
• [FAILED] [301.404 seconds]
Dynamic Provisioning
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/dynamic_provisioning_test.go:43
[It] should create a deployment object, write and read to it, delete the pod and write and read to it again [kubernetes.io/azure-file] [file.csi.azure.com] [Windows]
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/dynamic_provisioning_test.go:311
Begin Captured GinkgoWriter Output >>
... skipping 154 lines ...
Mar 15 21:41:41.665: INFO: PersistentVolumeClaim pvc-cgjkq found but phase is Pending instead of Bound.
Mar 15 21:41:43.696: INFO: PersistentVolumeClaim pvc-cgjkq found but phase is Pending instead of Bound.
Mar 15 21:41:45.727: INFO: PersistentVolumeClaim pvc-cgjkq found but phase is Pending instead of Bound.
Mar 15 21:41:47.758: INFO: PersistentVolumeClaim pvc-cgjkq found but phase is Pending instead of Bound.
Mar 15 21:41:49.790: INFO: PersistentVolumeClaim pvc-cgjkq found but phase is Pending instead of Bound.
Mar 15 21:41:51.822: INFO: PersistentVolumeClaim pvc-cgjkq found but phase is Pending instead of Bound.
Mar 15 21:41:53.822: INFO: Unexpected error:
<*errors.errorString | 0xc00047f0e0>: {
s: "PersistentVolumeClaims [pvc-cgjkq] not all in phase Bound within 5m0s",
}
Mar 15 21:41:53.822: FAIL: PersistentVolumeClaims [pvc-cgjkq] not all in phase Bound within 5m0s
Full Stack Trace
sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites.(*TestPersistentVolumeClaim).WaitForBound(_)
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites/testsuites.go:221 +0x19d
sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites.(*PodDetails).SetupDeployment(0xc000b39ea8, {0x2896668?, 0xc000103ba0}, 0xc00081a420, {0x7fd5693f2448, 0xc00003af80}, 0x7fd5a0bcff18?)
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites/specs.go:185 +0x495
... skipping 10 lines ...
There were additional failures detected after the initial failure:
[PANICKED]
Test Panicked
In [DeferCleanup (Each)] at: /usr/local/go/src/runtime/panic.go:260
runtime error: invalid memory address or nil pointer dereference
Full Stack Trace
k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo.func1()
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:274 +0x5c
k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo(0xc0003fa1e0)
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:271 +0x139
... skipping 166 lines ...
Mar 15 21:46:43.020: INFO: PersistentVolumeClaim pvc-7svns found but phase is Pending instead of Bound.
Mar 15 21:46:45.051: INFO: PersistentVolumeClaim pvc-7svns found but phase is Pending instead of Bound.
Mar 15 21:46:47.082: INFO: PersistentVolumeClaim pvc-7svns found but phase is Pending instead of Bound.
Mar 15 21:46:49.121: INFO: PersistentVolumeClaim pvc-7svns found but phase is Pending instead of Bound.
Mar 15 21:46:51.151: INFO: PersistentVolumeClaim pvc-7svns found but phase is Pending instead of Bound.
Mar 15 21:46:53.183: INFO: PersistentVolumeClaim pvc-7svns found but phase is Pending instead of Bound.
Mar 15 21:46:55.183: INFO: Unexpected error:
<*errors.errorString | 0xc00002a570>: {
s: "PersistentVolumeClaims [pvc-7svns] not all in phase Bound within 5m0s",
}
Mar 15 21:46:55.184: FAIL: PersistentVolumeClaims [pvc-7svns] not all in phase Bound within 5m0s
Full Stack Trace
sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites.(*TestPersistentVolumeClaim).WaitForBound(_)
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites/testsuites.go:221 +0x19d
sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites.(*VolumeDetails).SetupDynamicPersistentVolumeClaim(0xc000b39d90, {0x2896668?, 0xc0003e9380}, 0xc0007514a0, {0x7fd5693f2448, 0xc00003af80}, 0xc0005651c0?)
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites/specs.go:242 +0x6dc
sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites.(*DynamicallyProvisionedReclaimPolicyTest).Run(0xc000b39ef8, {0x2896668, 0xc0003e9380}, 0x7?)
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites/dynamically_provisioned_reclaim_policy_tester.go:38 +0xd9
sigs.k8s.io/azurefile-csi-driver/test/e2e.glob..func1.9()
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/dynamic_provisioning_test.go:369 +0x285
STEP: dump namespace information after failure 03/15/23 21:46:55.184
STEP: Destroying namespace "azurefile-9021" for this suite. 03/15/23 21:46:55.184
------------------------------
• [FAILED] [301.360 seconds]
Dynamic Provisioning
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/dynamic_provisioning_test.go:43
[It] should delete PV with reclaimPolicy "Delete" [kubernetes.io/azure-file] [file.csi.azure.com] [Windows]
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/dynamic_provisioning_test.go:348
Begin Captured GinkgoWriter Output >>
... skipping 154 lines ...
Mar 15 21:46:43.020: INFO: PersistentVolumeClaim pvc-7svns found but phase is Pending instead of Bound.
Mar 15 21:46:45.051: INFO: PersistentVolumeClaim pvc-7svns found but phase is Pending instead of Bound.
Mar 15 21:46:47.082: INFO: PersistentVolumeClaim pvc-7svns found but phase is Pending instead of Bound.
Mar 15 21:46:49.121: INFO: PersistentVolumeClaim pvc-7svns found but phase is Pending instead of Bound.
Mar 15 21:46:51.151: INFO: PersistentVolumeClaim pvc-7svns found but phase is Pending instead of Bound.
Mar 15 21:46:53.183: INFO: PersistentVolumeClaim pvc-7svns found but phase is Pending instead of Bound.
Mar 15 21:46:55.183: INFO: Unexpected error:
<*errors.errorString | 0xc00002a570>: {
s: "PersistentVolumeClaims [pvc-7svns] not all in phase Bound within 5m0s",
}
Mar 15 21:46:55.184: FAIL: PersistentVolumeClaims [pvc-7svns] not all in phase Bound within 5m0s
Full Stack Trace
sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites.(*TestPersistentVolumeClaim).WaitForBound(_)
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites/testsuites.go:221 +0x19d
sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites.(*VolumeDetails).SetupDynamicPersistentVolumeClaim(0xc000b39d90, {0x2896668?, 0xc0003e9380}, 0xc0007514a0, {0x7fd5693f2448, 0xc00003af80}, 0xc0005651c0?)
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites/specs.go:242 +0x6dc
... skipping 10 lines ...
There were additional failures detected after the initial failure:
[PANICKED]
Test Panicked
In [DeferCleanup (Each)] at: /usr/local/go/src/runtime/panic.go:260
runtime error: invalid memory address or nil pointer dereference
Full Stack Trace
k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo.func1()
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:274 +0x5c
k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo(0xc0003fa1e0)
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:271 +0x139
... skipping 201 lines ...
Mar 15 21:51:44.648: INFO: PersistentVolumeClaim pvc-zpv7n found but phase is Pending instead of Bound.
Mar 15 21:51:46.679: INFO: PersistentVolumeClaim pvc-zpv7n found but phase is Pending instead of Bound.
Mar 15 21:51:48.710: INFO: PersistentVolumeClaim pvc-zpv7n found but phase is Pending instead of Bound.
Mar 15 21:51:50.742: INFO: PersistentVolumeClaim pvc-zpv7n found but phase is Pending instead of Bound.
Mar 15 21:51:52.772: INFO: PersistentVolumeClaim pvc-zpv7n found but phase is Pending instead of Bound.
Mar 15 21:51:54.813: INFO: PersistentVolumeClaim pvc-zpv7n found but phase is Pending instead of Bound.
Mar 15 21:51:56.814: INFO: Unexpected error:
<*errors.errorString | 0xc000123790>: {
s: "PersistentVolumeClaims [pvc-zpv7n] not all in phase Bound within 5m0s",
}
Mar 15 21:51:56.815: FAIL: PersistentVolumeClaims [pvc-zpv7n] not all in phase Bound within 5m0s
Full Stack Trace
sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites.(*TestPersistentVolumeClaim).WaitForBound(_)
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites/testsuites.go:221 +0x19d
sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites.(*VolumeDetails).SetupDynamicPersistentVolumeClaim(0xc000a8f788, {0x2896668?, 0xc0000ffa00}, 0xc000578dc0, {0x7fd5693f2448, 0xc00003af80}, 0x0?)
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites/specs.go:242 +0x6dc
... skipping 3 lines ...
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites/dynamically_provisioned_resize_volume_tester.go:64 +0x10c
sigs.k8s.io/azurefile-csi-driver/test/e2e.glob..func1.11()
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/dynamic_provisioning_test.go:426 +0x2f5
STEP: dump namespace information after failure 03/15/23 21:51:56.815
STEP: Destroying namespace "azurefile-3532" for this suite. 03/15/23 21:51:56.816
------------------------------
• [FAILED] [301.167 seconds]
Dynamic Provisioning
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/dynamic_provisioning_test.go:43
[It] should create a volume on demand and resize it [kubernetes.io/azure-file] [file.csi.azure.com] [Windows]
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/dynamic_provisioning_test.go:397
Begin Captured GinkgoWriter Output >>
... skipping 154 lines ...
Mar 15 21:51:44.648: INFO: PersistentVolumeClaim pvc-zpv7n found but phase is Pending instead of Bound.
Mar 15 21:51:46.679: INFO: PersistentVolumeClaim pvc-zpv7n found but phase is Pending instead of Bound.
Mar 15 21:51:48.710: INFO: PersistentVolumeClaim pvc-zpv7n found but phase is Pending instead of Bound.
Mar 15 21:51:50.742: INFO: PersistentVolumeClaim pvc-zpv7n found but phase is Pending instead of Bound.
Mar 15 21:51:52.772: INFO: PersistentVolumeClaim pvc-zpv7n found but phase is Pending instead of Bound.
Mar 15 21:51:54.813: INFO: PersistentVolumeClaim pvc-zpv7n found but phase is Pending instead of Bound.
Mar 15 21:51:56.814: INFO: Unexpected error:
<*errors.errorString | 0xc000123790>: {
s: "PersistentVolumeClaims [pvc-zpv7n] not all in phase Bound within 5m0s",
}
Mar 15 21:51:56.815: FAIL: PersistentVolumeClaims [pvc-zpv7n] not all in phase Bound within 5m0s
Full Stack Trace
sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites.(*TestPersistentVolumeClaim).WaitForBound(_)
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites/testsuites.go:221 +0x19d
sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites.(*VolumeDetails).SetupDynamicPersistentVolumeClaim(0xc000a8f788, {0x2896668?, 0xc0000ffa00}, 0xc000578dc0, {0x7fd5693f2448, 0xc00003af80}, 0x0?)
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites/specs.go:242 +0x6dc
... skipping 12 lines ...
There were additional failures detected after the initial failure:
[PANICKED]
Test Panicked
In [DeferCleanup (Each)] at: /usr/local/go/src/runtime/panic.go:260
runtime error: invalid memory address or nil pointer dereference
Full Stack Trace
k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo.func1()
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:274 +0x5c
k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo(0xc0003fa1e0)
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:271 +0x139
... skipping 873 lines ...
<< End Captured GinkgoWriter Output
test case is only available for CSI drivers
In [It] at: /home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/suite_test.go:289
There were additional failures detected after the initial failure:
[FAILED]
create volume "" error: rpc error: code = InvalidArgument desc = Volume ID missing in request
In [AfterEach] at: /home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/pre_provisioning_test.go:74
------------------------------
Pre-Provisioned
should use a pre-provisioned volume and mount it by multiple pods [file.csi.azure.com] [Windows]
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/pre_provisioning_test.go:117
STEP: Creating a kubernetes client 03/15/23 21:52:07.995
... skipping 26 lines ...
<< End Captured GinkgoWriter Output
test case is only available for CSI drivers
In [It] at: /home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/suite_test.go:289
There were additional failures detected after the initial failure:
[FAILED]
create volume "" error: rpc error: code = InvalidArgument desc = Volume ID missing in request
In [AfterEach] at: /home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/pre_provisioning_test.go:74
------------------------------
Pre-Provisioned
should use a pre-provisioned volume and retain PV with reclaimPolicy "Retain" [file.csi.azure.com] [Windows]
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/pre_provisioning_test.go:158
STEP: Creating a kubernetes client 03/15/23 21:52:08.448
... skipping 26 lines ...
<< End Captured GinkgoWriter Output
test case is only available for CSI drivers
In [It] at: /home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/suite_test.go:289
There were additional failures detected after the initial failure:
[FAILED]
create volume "" error: rpc error: code = InvalidArgument desc = Volume ID missing in request
In [AfterEach] at: /home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/pre_provisioning_test.go:74
------------------------------
Pre-Provisioned
should use existing credentials in k8s cluster [file.csi.azure.com] [Windows]
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/pre_provisioning_test.go:186
STEP: Creating a kubernetes client 03/15/23 21:52:08.872
... skipping 26 lines ...
<< End Captured GinkgoWriter Output
test case is only available for CSI drivers
In [It] at: /home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/suite_test.go:289
There were additional failures detected after the initial failure:
[FAILED]
create volume "" error: rpc error: code = InvalidArgument desc = Volume ID missing in request
In [AfterEach] at: /home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/pre_provisioning_test.go:74
------------------------------
Pre-Provisioned
should use provided credentials [file.csi.azure.com] [Windows]
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/pre_provisioning_test.go:230
STEP: Creating a kubernetes client 03/15/23 21:52:09.298
... skipping 26 lines ...
<< End Captured GinkgoWriter Output
test case is only available for CSI drivers
In [It] at: /home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/suite_test.go:289
There were additional failures detected after the initial failure:
[FAILED]
create volume "" error: rpc error: code = InvalidArgument desc = Volume ID missing in request
In [AfterEach] at: /home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/pre_provisioning_test.go:74
------------------------------
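The Pre-Provisioned specs above all fail the same way: the [It] body is rejected with "test case is only available for CSI drivers", and the [AfterEach] cleanup then apparently references a volume that was never provisioned, so the RPC goes out with an empty volume ID ('create volume "" error: ... Volume ID missing in request'). For reference, a statically provisioned azurefile PV supplies that ID itself; the sketch below is illustrative only, with placeholder resource-group, storage-account, and share names:

# Hypothetical static PV; volumeHandle is {resource-group}#{storage-account}#{file-share}
# (all three values below are placeholders):
kubectl --kubeconfig ./kubeconfig apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-azurefile-static
spec:
  capacity:
    storage: 10Gi
  accessModes: ["ReadWriteMany"]
  persistentVolumeReclaimPolicy: Retain
  csi:
    driver: file.csi.azure.com
    volumeHandle: my-rg#mystorageaccount#myshare
    volumeAttributes:
      shareName: myshare
EOF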
[AfterSuite]
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/suite_test.go:148
2023/03/15 21:52:09 ===================controller-manager log=======
print out all nodes status ...
... skipping 1760 lines ...
I0315 21:15:12.386822 1 dynamic_cafile_content.go:156] "Starting controller" name="request-header::/etc/kubernetes/pki/front-proxy-ca.crt"
I0315 21:15:12.386905 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1678914912\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1678914912\" (2023-03-15 20:15:11 +0000 UTC to 2024-03-14 20:15:11 +0000 UTC (now=2023-03-15 21:15:12.386893094 +0000 UTC))"
I0315 21:15:12.386918 1 secure_serving.go:200] Serving securely on 127.0.0.1:10257
I0315 21:15:12.387050 1 leaderelection.go:248] attempting to acquire leader lease kube-system/kube-controller-manager...
I0315 21:15:12.387319 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0315 21:15:12.387562 1 dynamic_cafile_content.go:156] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
E0315 21:15:15.980867 1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: leases.coordination.k8s.io "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-system"
I0315 21:15:15.981007 1 leaderelection.go:253] failed to acquire lease kube-system/kube-controller-manager
E0315 21:15:18.698510 1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: leases.coordination.k8s.io "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-system"
I0315 21:15:18.698923 1 leaderelection.go:253] failed to acquire lease kube-system/kube-controller-manager
I0315 21:15:22.858268 1 leaderelection.go:258] successfully acquired lease kube-system/kube-controller-manager
I0315 21:15:22.858523 1 event.go:294] "Event occurred" object="kube-system/kube-controller-manager" kind="Lease" apiVersion="coordination.k8s.io/v1" type="Normal" reason="LeaderElection" message="capz-jtbghr-control-plane-99vcl_cafecea8-c3d2-40e6-a93d-050cf3e274ae became leader"
I0315 21:15:22.964794 1 request.go:617] Waited for 97.280982ms due to client-side throttling, not priority and fairness, request: GET:https://10.0.0.4:6443/apis/apps/v1
I0315 21:15:22.966245 1 controllermanager.go:576] Starting "csrcleaner"
I0315 21:15:22.966372 1 shared_informer.go:240] Waiting for caches to sync for tokens
I0315 21:15:22.966517 1 reflector.go:219] Starting reflector *v1.ServiceAccount (12h38m2.756731006s) from k8s.io/client-go/informers/factory.go:134
... skipping 16 lines ...
I0315 21:15:24.060611 1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/aws-ebs"
I0315 21:15:24.060618 1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/gce-pd"
I0315 21:15:24.060648 1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/cinder"
I0315 21:15:24.060679 1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/storageos"
I0315 21:15:24.060694 1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/fc"
I0315 21:15:24.060704 1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
I0315 21:15:24.060741 1 csi_plugin.go:262] Cast from VolumeHost to KubeletVolumeHost failed. Skipping CSINode initialization, not running on kubelet
I0315 21:15:24.060763 1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/csi"
I0315 21:15:24.060854 1 controllermanager.go:605] Started "attachdetach"
I0315 21:15:24.060866 1 controllermanager.go:576] Starting "ttl-after-finished"
I0315 21:15:24.061018 1 attach_detach_controller.go:328] Starting attach detach controller
I0315 21:15:24.061028 1 shared_informer.go:240] Waiting for caches to sync for attach detach
I0315 21:15:24.066768 1 controllermanager.go:605] Started "ttl-after-finished"
... skipping 148 lines ...
I0315 21:15:25.666315 1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
I0315 21:15:25.668581 1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/rbd"
I0315 21:15:25.668719 1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/azure-file"
I0315 21:15:25.668811 1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/flocker"
I0315 21:15:25.668892 1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
I0315 21:15:25.668984 1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/storageos"
I0315 21:15:25.669067 1 csi_plugin.go:262] Cast from VolumeHost to KubeletVolumeHost failed. Skipping CSINode initialization, not running on kubelet
I0315 21:15:25.669152 1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/csi"
I0315 21:15:25.669280 1 controllermanager.go:605] Started "persistentvolume-binder"
I0315 21:15:25.669430 1 controllermanager.go:576] Starting "pv-protection"
I0315 21:15:25.669368 1 pv_controller_base.go:310] Starting persistent volume controller
I0315 21:15:25.669617 1 shared_informer.go:240] Waiting for caches to sync for persistent volume
I0315 21:15:25.813055 1 controllermanager.go:605] Started "pv-protection"
... skipping 260 lines ...
I0315 21:15:27.771767 1 shared_informer.go:247] Caches are synced for TTL after finished
I0315 21:15:27.771829 1 shared_informer.go:247] Caches are synced for ReplicaSet
I0315 21:15:27.771893 1 shared_informer.go:247] Caches are synced for PVC protection
I0315 21:15:27.771953 1 shared_informer.go:247] Caches are synced for service account
I0315 21:15:27.772019 1 reflector.go:255] Listing and watching *v1.PartialObjectMetadata from k8s.io/client-go/metadata/metadatainformer/informer.go:90
I0315 21:15:27.772076 1 reflector.go:255] Listing and watching *v1.PartialObjectMetadata from k8s.io/client-go/metadata/metadatainformer/informer.go:90
W0315 21:15:27.772146 1 actual_state_of_world.go:539] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="capz-jtbghr-control-plane-99vcl" does not exist
I0315 21:15:27.772207 1 reflector.go:255] Listing and watching *v1beta1.PodSecurityPolicy from k8s.io/client-go/informers/factory.go:134
I0315 21:15:27.772326 1 resource_quota_monitor.go:298] quota monitor not synced: networking.k8s.io/v1, Resource=ingresses
I0315 21:15:27.788971 1 shared_informer.go:270] caches populated
I0315 21:15:27.789140 1 shared_informer.go:247] Caches are synced for ReplicationController
I0315 21:15:27.801670 1 shared_informer.go:270] caches populated
I0315 21:15:27.801681 1 shared_informer.go:247] Caches are synced for deployment
... skipping 88 lines ...
I0315 21:15:28.325936 1 endpoints_controller.go:381] Finished syncing service "kube-system/kube-dns" endpoints. (440.614192ms)
I0315 21:15:28.328421 1 endpointslicemirroring_controller.go:274] syncEndpoints("kube-system/kube-dns")
I0315 21:15:28.328441 1 endpointslicemirroring_controller.go:309] kube-system/kube-dns Service now has selector, cleaning up any mirrored EndpointSlices
I0315 21:15:28.328453 1 endpointslicemirroring_controller.go:271] Finished syncing EndpointSlices for "kube-system/kube-dns" Endpoints. (35µs)
I0315 21:15:28.328583 1 serviceaccounts_controller.go:188] Finished syncing namespace "kube-node-lease" (13.652688ms)
I0315 21:15:28.329811 1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/coredns" duration="528.101881ms"
I0315 21:15:28.329834 1 deployment_controller.go:490] "Error syncing deployment" deployment="kube-system/coredns" err="Operation cannot be fulfilled on deployments.apps \"coredns\": the object has been modified; please apply your changes to the latest version and try again"
I0315 21:15:28.329855 1 deployment_controller.go:576] "Started syncing deployment" deployment="kube-system/coredns" startTime="2023-03-15 21:15:28.329846521 +0000 UTC m=+17.800842777"
I0315 21:15:28.330183 1 deployment_util.go:775] Deployment "coredns" timed out (false) [last progress check: 2023-03-15 21:15:28 +0000 UTC - now: 2023-03-15 21:15:28.330180018 +0000 UTC m=+17.801176274]
I0315 21:15:28.340463 1 deployment_controller.go:176] "Updating deployment" deployment="kube-system/coredns"
I0315 21:15:28.340828 1 serviceaccounts_controller.go:188] Finished syncing namespace "default" (12.226699ms)
I0315 21:15:28.340874 1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/coredns" duration="11.016009ms"
I0315 21:15:28.341146 1 deployment_controller.go:576] "Started syncing deployment" deployment="kube-system/coredns" startTime="2023-03-15 21:15:28.341101528 +0000 UTC m=+17.812097784"
... skipping 530 lines ...
I0315 21:15:45.090214 1 deployment_controller.go:281] "ReplicaSet updated" replicaSet="kube-system/metrics-server-85c7d488df"
I0315 21:15:45.090382 1 replica_set.go:653] Finished syncing ReplicaSet "kube-system/metrics-server-85c7d488df" (80.953389ms)
I0315 21:15:45.090455 1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/metrics-server-85c7d488df", timestamp:time.Time{wall:0xc0fcab4040905bb6, ext:34480456918, loc:(*time.Location)(0x72c0b80)}}
I0315 21:15:45.090565 1 replica_set_utils.go:59] Updating status for : kube-system/metrics-server-85c7d488df, replicas 0->1 (need 1), fullyLabeledReplicas 0->1, readyReplicas 0->0, availableReplicas 0->0, sequence No: 1->1
I0315 21:15:45.091680 1 endpointslice_controller.go:319] Finished syncing service "kube-system/metrics-server" endpoint slices. (49.812124ms)
I0315 21:15:45.101164 1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/metrics-server" duration="115.037932ms"
I0315 21:15:45.101266 1 deployment_controller.go:490] "Error syncing deployment" deployment="kube-system/metrics-server" err="Operation cannot be fulfilled on deployments.apps \"metrics-server\": the object has been modified; please apply your changes to the latest version and try again"
I0315 21:15:45.101319 1 deployment_controller.go:576] "Started syncing deployment" deployment="kube-system/metrics-server" startTime="2023-03-15 21:15:45.101307468 +0000 UTC m=+34.572303724"
I0315 21:15:45.101671 1 deployment_util.go:775] Deployment "metrics-server" timed out (false) [last progress check: 2023-03-15 21:15:45 +0000 UTC - now: 2023-03-15 21:15:45.101667465 +0000 UTC m=+34.572663721]
I0315 21:15:45.122664 1 deployment_controller.go:281] "ReplicaSet updated" replicaSet="kube-system/metrics-server-85c7d488df"
I0315 21:15:45.125024 1 replica_set.go:653] Finished syncing ReplicaSet "kube-system/metrics-server-85c7d488df" (34.569639ms)
I0315 21:15:45.125061 1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/metrics-server-85c7d488df", timestamp:time.Time{wall:0xc0fcab4040905bb6, ext:34480456918, loc:(*time.Location)(0x72c0b80)}}
I0315 21:15:45.125115 1 replica_set.go:653] Finished syncing ReplicaSet "kube-system/metrics-server-85c7d488df" (59.199µs)
I0315 21:15:45.145417 1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/metrics-server" duration="44.093667ms"
I0315 21:15:45.145448 1 deployment_controller.go:576] "Started syncing deployment" deployment="kube-system/metrics-server" startTime="2023-03-15 21:15:45.145437535 +0000 UTC m=+34.616433791"
I0315 21:15:45.146109 1 deployment_controller.go:176] "Updating deployment" deployment="kube-system/metrics-server"
I0315 21:15:45.164738 1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/metrics-server" duration="19.284054ms"
I0315 21:15:45.164762 1 deployment_controller.go:490] "Error syncing deployment" deployment="kube-system/metrics-server" err="Operation cannot be fulfilled on deployments.apps \"metrics-server\": the object has been modified; please apply your changes to the latest version and try again"
I0315 21:15:45.164791 1 deployment_controller.go:576] "Started syncing deployment" deployment="kube-system/metrics-server" startTime="2023-03-15 21:15:45.164780689 +0000 UTC m=+34.635776945"
I0315 21:15:45.182567 1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/metrics-server" duration="17.768966ms"
I0315 21:15:45.182600 1 deployment_controller.go:576] "Started syncing deployment" deployment="kube-system/metrics-server" startTime="2023-03-15 21:15:45.182587754 +0000 UTC m=+34.653584010"
I0315 21:15:45.182601 1 deployment_controller.go:176] "Updating deployment" deployment="kube-system/metrics-server"
I0315 21:15:45.182897 1 deployment_util.go:775] Deployment "metrics-server" timed out (false) [last progress check: 2023-03-15 21:15:45 +0000 UTC - now: 2023-03-15 21:15:45.182893152 +0000 UTC m=+34.653889408]
I0315 21:15:45.182916 1 progress.go:195] Queueing up deployment "metrics-server" for a progress check after 599s
... skipping 38 lines ...
I0315 21:15:49.152404 1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:1, del:0, key:"calico-system/tigera-operator-6bbf97c9cf", timestamp:time.Time{wall:0xc0fcab4149157842, ext:38623398342, loc:(*time.Location)(0x72c0b80)}}
I0315 21:15:49.152443 1 replica_set.go:563] "Too few replicas" replicaSet="calico-system/tigera-operator-6bbf97c9cf" need=1 creating=1
I0315 21:15:49.152577 1 deployment_controller.go:215] "ReplicaSet added" replicaSet="calico-system/tigera-operator-6bbf97c9cf"
I0315 21:15:49.160080 1 deployment_controller.go:176] "Updating deployment" deployment="calico-system/tigera-operator"
I0315 21:15:49.160177 1 deployment_util.go:775] Deployment "tigera-operator" timed out (false) [last progress check: 2023-03-15 21:15:49.152041089 +0000 UTC m=+38.623037345 - now: 2023-03-15 21:15:49.160155226 +0000 UTC m=+38.631151582]
I0315 21:15:49.163484 1 deployment_controller.go:578] "Finished syncing deployment" deployment="calico-system/tigera-operator" duration="16.170375ms"
I0315 21:15:49.163550 1 deployment_controller.go:490] "Error syncing deployment" deployment="calico-system/tigera-operator" err="Operation cannot be fulfilled on deployments.apps \"tigera-operator\": the object has been modified; please apply your changes to the latest version and try again"
I0315 21:15:49.163573 1 deployment_controller.go:576] "Started syncing deployment" deployment="calico-system/tigera-operator" startTime="2023-03-15 21:15:49.1635635 +0000 UTC m=+38.634559756"
I0315 21:15:49.163840 1 deployment_util.go:775] Deployment "tigera-operator" timed out (false) [last progress check: 2023-03-15 21:15:49 +0000 UTC - now: 2023-03-15 21:15:49.163836398 +0000 UTC m=+38.634832654]
I0315 21:15:49.167770 1 controller_utils.go:581] Controller tigera-operator-6bbf97c9cf created pod tigera-operator-6bbf97c9cf-67576
I0315 21:15:49.167810 1 replica_set_utils.go:59] Updating status for : calico-system/tigera-operator-6bbf97c9cf, replicas 0->0 (need 1), fullyLabeledReplicas 0->0, readyReplicas 0->0, availableReplicas 0->0, sequence No: 0->1
I0315 21:15:49.168000 1 event.go:294] "Event occurred" object="calico-system/tigera-operator-6bbf97c9cf" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: tigera-operator-6bbf97c9cf-67576"
I0315 21:15:49.168108 1 replica_set.go:380] Pod tigera-operator-6bbf97c9cf-67576 created: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"tigera-operator-6bbf97c9cf-67576", GenerateName:"tigera-operator-6bbf97c9cf-", Namespace:"calico-system", SelfLink:"", UID:"c40dd3be-79b5-4e85-b2b9-0279cd0e9897", ResourceVersion:"633", Generation:0, CreationTimestamp:time.Date(2023, time.March, 15, 21, 15, 49, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"tigera-operator", "name":"tigera-operator", "pod-template-hash":"6bbf97c9cf"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"apps/v1", Kind:"ReplicaSet", Name:"tigera-operator-6bbf97c9cf", UID:"13f0d2ab-bcb0-4b9a-bac2-0c4934f4e80e", Controller:(*bool)(0xc002020007), BlockOwnerDeletion:(*bool)(0xc002020008)}}, Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:time.Date(2023, time.March, 15, 21, 15, 49, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001135ce0), Subresource:""}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"var-lib-calico", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001135cf8), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"kube-api-access-nb984", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), 
VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(0xc000380e60), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"tigera-operator", Image:"quay.io/tigera/operator:v1.29.0", Command:[]string{"operator"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource{v1.EnvFromSource{Prefix:"", ConfigMapRef:(*v1.ConfigMapEnvSource)(0xc001135d28), SecretRef:(*v1.SecretEnvSource)(nil)}}, Env:[]v1.EnvVar{v1.EnvVar{Name:"WATCH_NAMESPACE", Value:"", ValueFrom:(*v1.EnvVarSource)(nil)}, v1.EnvVar{Name:"POD_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc000380fc0)}, v1.EnvVar{Name:"OPERATOR_NAME", Value:"tigera-operator", ValueFrom:(*v1.EnvVarSource)(nil)}, v1.EnvVar{Name:"TIGERA_OPERATOR_INIT_IMAGE_VERSION", Value:"v1.29.0", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"var-lib-calico", ReadOnly:true, MountPath:"/var/lib/calico", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-nb984", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002020128), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirstWithHostNet", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"tigera-operator", DeprecatedServiceAccount:"tigera-operator", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000270690), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc00202018c), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc002020190), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc001dae400), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil), OS:(*v1.PodOS)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition(nil), Message:"", Reason:"", 
NominatedNodeName:"", HostIP:"", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:<nil>, InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus(nil), QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}.
... skipping 159 lines ...
I0315 21:15:52.554087 1 daemon_controller.go:1172] Finished syncing daemon set "kube-system/cloud-node-manager" (1.21029ms)
I0315 21:15:52.556128 1 replica_set.go:653] Finished syncing ReplicaSet "kube-system/cloud-controller-manager-687587b686" (23.871417ms)
I0315 21:15:52.556281 1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/cloud-controller-manager-687587b686", timestamp:time.Time{wall:0xc0fcab421fb9fd67, ext:42003279083, loc:(*time.Location)(0x72c0b80)}}
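The "Controller expectations fulfilled" bookkeeping above is how the ReplicaSet controller avoids reacting to its own in-flight work: after issuing creates or deletes it records them (the add and del fields in the dump), decrements the counters as the matching watch events arrive, and only re-syncs the key once both reach zero. A minimal sketch of that idea, as an illustration rather than the controller's actual code:

    package example

    import "sync"

    // expectations tracks outstanding creates/deletes per controller key, in
    // the spirit of the ControlleeExpectations lines in the log above.
    type expectations struct {
        mu  sync.Mutex
        add map[string]int
        del map[string]int
    }

    func newExpectations() *expectations {
        return &expectations{add: map[string]int{}, del: map[string]int{}}
    }

    // expectCreations is called before the controller issues n creates.
    func (e *expectations) expectCreations(key string, n int) {
        e.mu.Lock()
        defer e.mu.Unlock()
        e.add[key] += n
    }

    // creationObserved is called when a watch event confirms one create.
    func (e *expectations) creationObserved(key string) {
        e.mu.Lock()
        defer e.mu.Unlock()
        if e.add[key] > 0 {
            e.add[key]--
        }
    }

    // fulfilled reports whether it is safe to sync: nothing in flight.
    func (e *expectations) fulfilled(key string) bool {
        e.mu.Lock()
        defer e.mu.Unlock()
        return e.add[key] == 0 && e.del[key] == 0
    }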
I0315 21:15:52.555975 1 deployment_controller.go:281] "ReplicaSet updated" replicaSet="kube-system/cloud-controller-manager-687587b686"
I0315 21:15:52.556569 1 replica_set_utils.go:59] Updating status for : kube-system/cloud-controller-manager-687587b686, replicas 0->1 (need 1), fullyLabeledReplicas 0->1, readyReplicas 0->0, availableReplicas 0->0, sequence No: 1->1
I0315 21:15:52.562154 1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/cloud-controller-manager" duration="39.135899ms"
I0315 21:15:52.562336 1 deployment_controller.go:490] "Error syncing deployment" deployment="kube-system/cloud-controller-manager" err="Operation cannot be fulfilled on deployments.apps \"cloud-controller-manager\": the object has been modified; please apply your changes to the latest version and try again"
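The "object has been modified" error above (it recurs below for calico-typha and calico-kube-controllers) is an ordinary optimistic-concurrency conflict: the controller wrote against a stale resourceVersion, re-queues the deployment, and succeeds on the next sync. Client code that must survive such 409s typically re-reads the object inside client-go's conflict-retry helper; a minimal sketch, where the clientset, namespace, name, and replica count are illustrative:

    package example

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/util/retry"
    )

    // scaleDeployment re-reads the Deployment on every attempt so each Update
    // carries a fresh resourceVersion; RetryOnConflict retries only on 409s.
    func scaleDeployment(ctx context.Context, cs kubernetes.Interface, ns, name string, replicas int32) error {
        return retry.RetryOnConflict(retry.DefaultRetry, func() error {
            deploy, err := cs.AppsV1().Deployments(ns).Get(ctx, name, metav1.GetOptions{})
            if err != nil {
                return err
            }
            deploy.Spec.Replicas = &replicas
            _, err = cs.AppsV1().Deployments(ns).Update(ctx, deploy, metav1.UpdateOptions{})
            return err
        })
    }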
I0315 21:15:52.562385 1 deployment_controller.go:576] "Started syncing deployment" deployment="kube-system/cloud-controller-manager" startTime="2023-03-15 21:15:52.562372796 +0000 UTC m=+42.033369052"
I0315 21:15:52.562460 1 replica_set.go:443] Pod cloud-controller-manager-687587b686-r7qq5 updated, objectMeta {Name:cloud-controller-manager-687587b686-r7qq5 GenerateName:cloud-controller-manager-687587b686- Namespace:kube-system SelfLink: UID:d84ee7ef-97f7-471c-ae9a-c05b90bd3507 ResourceVersion:682 Generation:0 CreationTimestamp:2023-03-15 21:15:52 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[component:cloud-controller-manager pod-template-hash:687587b686 tier:control-plane] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:cloud-controller-manager-687587b686 UID:43539c4f-86a3-4c72-9c00-d679608e783b Controller:0xc0021acfa7 BlockOwnerDeletion:0xc0021acfa8}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2023-03-15 21:15:52 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:component":{},"f:pod-template-hash":{},"f:tier":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"43539c4f-86a3-4c72-9c00-d679608e783b\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"cloud-controller-manager\"}":{".":{},"f:args":{},"f:command":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:cpu":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/etc/kubernetes\"}":{".":{},"f:mountPath":{},"f:name":{}},"k:{\"mountPath\":\"/etc/ssl\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}},"k:{\"mountPath\":\"/var/lib/waagent/ManagedIdentity-Settings\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:hostNetwork":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{},"f:topologySpreadConstraints":{".":{},"k:{\"topologyKey\":\"kubernetes.io/hostname\",\"whenUnsatisfiable\":\"DoNotSchedule\"}":{".":{},"f:labelSelector":{},"f:maxSkew":{},"f:topologyKey":{},"f:whenUnsatisfiable":{}}},"f:volumes":{".":{},"k:{\"name\":\"etc-kubernetes\"}":{".":{},"f:hostPath":{".":{},"f:path":{},"f:type":{}},"f:name":{}},"k:{\"name\":\"msi\"}":{".":{},"f:hostPath":{".":{},"f:path":{},"f:type":{}},"f:name":{}},"k:{\"name\":\"ssl-mount\"}":{".":{},"f:hostPath":{".":{},"f:path":{},"f:type":{}},"f:name":{}}}}} Subresource:}]} -> {Name:cloud-controller-manager-687587b686-r7qq5 GenerateName:cloud-controller-manager-687587b686- Namespace:kube-system SelfLink: UID:d84ee7ef-97f7-471c-ae9a-c05b90bd3507 ResourceVersion:687 Generation:0 CreationTimestamp:2023-03-15 21:15:52 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[component:cloud-controller-manager pod-template-hash:687587b686 tier:control-plane] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:cloud-controller-manager-687587b686 UID:43539c4f-86a3-4c72-9c00-d679608e783b Controller:0xc002340317 BlockOwnerDeletion:0xc002340318}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2023-03-15 21:15:52 +0000 UTC FieldsType:FieldsV1 
FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:component":{},"f:pod-template-hash":{},"f:tier":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"43539c4f-86a3-4c72-9c00-d679608e783b\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"cloud-controller-manager\"}":{".":{},"f:args":{},"f:command":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:cpu":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/etc/kubernetes\"}":{".":{},"f:mountPath":{},"f:name":{}},"k:{\"mountPath\":\"/etc/ssl\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}},"k:{\"mountPath\":\"/var/lib/waagent/ManagedIdentity-Settings\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:hostNetwork":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{},"f:topologySpreadConstraints":{".":{},"k:{\"topologyKey\":\"kubernetes.io/hostname\",\"whenUnsatisfiable\":\"DoNotSchedule\"}":{".":{},"f:labelSelector":{},"f:maxSkew":{},"f:topologyKey":{},"f:whenUnsatisfiable":{}}},"f:volumes":{".":{},"k:{\"name\":\"etc-kubernetes\"}":{".":{},"f:hostPath":{".":{},"f:path":{},"f:type":{}},"f:name":{}},"k:{\"name\":\"msi\"}":{".":{},"f:hostPath":{".":{},"f:path":{},"f:type":{}},"f:name":{}},"k:{\"name\":\"ssl-mount\"}":{".":{},"f:hostPath":{".":{},"f:path":{},"f:type":{}},"f:name":{}}}}} Subresource:} {Manager:kube-scheduler Operation:Update APIVersion:v1 Time:2023-03-15 21:15:52 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{".":{},"k:{\"type\":\"PodScheduled\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} Subresource:status}]}.
I0315 21:15:52.562725 1 disruption.go:427] updatePod called on pod "cloud-controller-manager-687587b686-r7qq5"
I0315 21:15:52.562827 1 disruption.go:490] No PodDisruptionBudgets found for pod cloud-controller-manager-687587b686-r7qq5, PodDisruptionBudget controller will avoid syncing.
I0315 21:15:52.562923 1 disruption.go:430] No matching pdb for pod "cloud-controller-manager-687587b686-r7qq5"
I0315 21:15:52.562954 1 pvc_protection_controller.go:380] "Enqueuing PVCs for Pod" pod="kube-system/cloud-controller-manager-687587b686-r7qq5" podUID=d84ee7ef-97f7-471c-ae9a-c05b90bd3507
... skipping 92 lines ...
I0315 21:15:58.387122 1 resource_quota_monitor.go:181] QuotaMonitor using a shared informer for resource "crd.projectcalico.org/v1, Resource=networkpolicies"
I0315 21:15:58.387163 1 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for networkpolicies.crd.projectcalico.org
I0315 21:15:58.387274 1 resource_quota_monitor.go:248] quota synced monitors; added 2, kept 28, removed 0
I0315 21:15:58.387309 1 resource_quota_monitor.go:280] QuotaMonitor started 2 new monitors, 30 currently running
I0315 21:15:58.387317 1 shared_informer.go:240] Waiting for caches to sync for resource quota
I0315 21:15:58.387333 1 resource_quota_monitor.go:298] quota monitor not synced: crd.projectcalico.org/v1, Resource=networkpolicies
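The QuotaMonitor lines above show the resource-quota controller picking up the freshly installed Calico CRDs and creating object-count evaluators for them. Once such a monitor exists, a ResourceQuota can cap those objects with the generic count/<resource>.<group> syntax; a minimal sketch, with an illustrative quota name and limit:

    package example

    import (
        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // Object-count quota for the CRD the monitor just registered:
    // count/networkpolicies.crd.projectcalico.org.
    var npQuota = corev1.ResourceQuota{
        ObjectMeta: metav1.ObjectMeta{Name: "calico-np-quota", Namespace: "calico-system"},
        Spec: corev1.ResourceQuotaSpec{
            Hard: corev1.ResourceList{
                corev1.ResourceName("count/networkpolicies.crd.projectcalico.org"): resource.MustParse("100"),
            },
        },
    }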
W0315 21:15:58.387546 1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
I0315 21:15:58.387689 1 garbagecollector.go:210] syncing garbage collector with updated resources from discovery (attempt 1): added: [crd.projectcalico.org/v1, Resource=bgpconfigurations crd.projectcalico.org/v1, Resource=bgppeers crd.projectcalico.org/v1, Resource=blockaffinities crd.projectcalico.org/v1, Resource=caliconodestatuses crd.projectcalico.org/v1, Resource=clusterinformations crd.projectcalico.org/v1, Resource=felixconfigurations crd.projectcalico.org/v1, Resource=globalnetworkpolicies crd.projectcalico.org/v1, Resource=globalnetworksets crd.projectcalico.org/v1, Resource=hostendpoints crd.projectcalico.org/v1, Resource=ipamblocks crd.projectcalico.org/v1, Resource=ipamconfigs crd.projectcalico.org/v1, Resource=ipamhandles crd.projectcalico.org/v1, Resource=ippools crd.projectcalico.org/v1, Resource=ipreservations crd.projectcalico.org/v1, Resource=kubecontrollersconfigurations crd.projectcalico.org/v1, Resource=networkpolicies crd.projectcalico.org/v1, Resource=networksets operator.tigera.io/v1, Resource=apiservers operator.tigera.io/v1, Resource=imagesets operator.tigera.io/v1, Resource=installations operator.tigera.io/v1, Resource=tigerastatuses], removed: []
I0315 21:15:58.387702 1 garbagecollector.go:216] reset restmapper
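The garbagecollector warning above is a partial-discovery condition: metrics.k8s.io/v1beta1 is served by the metrics-server aggregated API, which is not ready yet, so discovery for that one group fails while the Calico CRD groups are still added and the restmapper is reset. client-go exposes the same distinction to callers; a minimal sketch, where cfg is an assumed *rest.Config:

    package example

    import (
        "fmt"

        "k8s.io/client-go/discovery"
        "k8s.io/client-go/rest"
    )

    // listResources tolerates per-group discovery failures (e.g. a stalled
    // aggregated API like metrics.k8s.io) instead of failing outright.
    func listResources(cfg *rest.Config) error {
        dc, err := discovery.NewDiscoveryClientForConfig(cfg)
        if err != nil {
            return err
        }
        _, resources, err := dc.ServerGroupsAndResources()
        if err != nil {
            if discovery.IsGroupDiscoveryFailedError(err) {
                // Partial failure: resources still holds the groups that
                // did discover, so we can continue with what we have.
                fmt.Printf("partial discovery, continuing: %v\n", err)
            } else {
                return err
            }
        }
        fmt.Printf("discovered %d resource lists\n", len(resources))
        return nil
    }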
I0315 21:15:58.387849 1 reflector.go:219] Starting reflector *v1.PartialObjectMetadata (12h9m8.584403085s) from k8s.io/client-go/metadata/metadatainformer/informer.go:90
I0315 21:15:58.387954 1 reflector.go:255] Listing and watching *v1.PartialObjectMetadata from k8s.io/client-go/metadata/metadatainformer/informer.go:90
I0315 21:15:58.388191 1 reflector.go:219] Starting reflector *v1.PartialObjectMetadata (15h23m56.622270651s) from k8s.io/client-go/metadata/metadatainformer/informer.go:90
I0315 21:15:58.388270 1 reflector.go:255] Listing and watching *v1.PartialObjectMetadata from k8s.io/client-go/metadata/metadatainformer/informer.go:90
... skipping 87 lines ...
I0315 21:15:58.662264 1 replica_set.go:380] Pod calico-typha-56698768cd-z9vjp created: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"calico-typha-56698768cd-z9vjp", GenerateName:"calico-typha-56698768cd-", Namespace:"calico-system", SelfLink:"", UID:"cbf6e56d-c53b-4960-8b0b-a43d7f65927e", ResourceVersion:"738", Generation:0, CreationTimestamp:time.Date(2023, time.March, 15, 21, 15, 58, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-typha", "k8s-app":"calico-typha", "pod-template-hash":"56698768cd"}, Annotations:map[string]string{"hash.operator.tigera.io/system":"bb4746872201725da2dea19756c475aa67d9c1e9", "hash.operator.tigera.io/tigera-ca-private":"4c7c072fa2a1f14615e22dbff1e74913f2ac4236", "hash.operator.tigera.io/typha-certs":"fcf8e94f8c975fff0cb2fe022d34025d05962585"}, OwnerReferences:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"apps/v1", Kind:"ReplicaSet", Name:"calico-typha-56698768cd", UID:"2b3e9688-1181-4659-9b2c-2c48c8e60b86", Controller:(*bool)(0xc00181964e), BlockOwnerDeletion:(*bool)(0xc00181964f)}}, Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:time.Date(2023, time.March, 15, 21, 15, 58, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001323bf0), Subresource:""}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"tigera-ca-bundle", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc0020acec0), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"typha-certs", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0020acf00), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), 
CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"kube-api-access-b76w8", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(0xc001177100), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"calico-typha", Image:"docker.io/calico/typha:v3.25.0", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort{v1.ContainerPort{Name:"calico-typha", HostPort:5473, ContainerPort:5473, Protocol:"TCP", HostIP:""}}, EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"TYPHA_LOGSEVERITYSCREEN", Value:"info", ValueFrom:(*v1.EnvVarSource)(nil)}, v1.EnvVar{Name:"TYPHA_LOGFILEPATH", Value:"none", ValueFrom:(*v1.EnvVarSource)(nil)}, v1.EnvVar{Name:"TYPHA_LOGSEVERITYSYS", Value:"none", ValueFrom:(*v1.EnvVarSource)(nil)}, v1.EnvVar{Name:"TYPHA_CONNECTIONREBALANCINGMODE", Value:"kubernetes", ValueFrom:(*v1.EnvVarSource)(nil)}, v1.EnvVar{Name:"TYPHA_DATASTORETYPE", Value:"kubernetes", ValueFrom:(*v1.EnvVarSource)(nil)}, v1.EnvVar{Name:"TYPHA_HEALTHENABLED", Value:"true", ValueFrom:(*v1.EnvVarSource)(nil)}, v1.EnvVar{Name:"TYPHA_HEALTHPORT", Value:"9098", ValueFrom:(*v1.EnvVarSource)(nil)}, v1.EnvVar{Name:"TYPHA_K8SNAMESPACE", Value:"calico-system", ValueFrom:(*v1.EnvVarSource)(nil)}, v1.EnvVar{Name:"TYPHA_CAFILE", Value:"/etc/pki/tls/certs/tigera-ca-bundle.crt", ValueFrom:(*v1.EnvVarSource)(nil)}, v1.EnvVar{Name:"TYPHA_SERVERCERTFILE", Value:"/typha-certs/tls.crt", ValueFrom:(*v1.EnvVarSource)(nil)}, v1.EnvVar{Name:"TYPHA_SERVERKEYFILE", Value:"/typha-certs/tls.key", ValueFrom:(*v1.EnvVarSource)(nil)}, v1.EnvVar{Name:"TYPHA_FIPSMODEENABLED", Value:"false", 
ValueFrom:(*v1.EnvVarSource)(nil)}, v1.EnvVar{Name:"TYPHA_SHUTDOWNTIMEOUTSECS", Value:"300", ValueFrom:(*v1.EnvVarSource)(nil)}, v1.EnvVar{Name:"TYPHA_CLIENTCN", Value:"typha-client", ValueFrom:(*v1.EnvVarSource)(nil)}, v1.EnvVar{Name:"KUBERNETES_SERVICE_HOST", Value:"10.96.0.1", ValueFrom:(*v1.EnvVarSource)(nil)}, v1.EnvVar{Name:"KUBERNETES_SERVICE_PORT", Value:"443", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"tigera-ca-bundle", ReadOnly:true, MountPath:"/etc/pki/tls/certs/", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"typha-certs", ReadOnly:true, MountPath:"/typha-certs", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-b76w8", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(0xc0020acfc0), ReadinessProbe:(*v1.Probe)(0xc0020ad000), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001819858), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"calico-typha", DeprecatedServiceAccount:"calico-typha", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000594700), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(0xc001323c50), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-cluster-critical", Priority:(*int32)(0xc0018198ec), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0018198f0), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc001c93930), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil), OS:(*v1.PodOS)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition(nil), Message:"", Reason:"", NominatedNodeName:"", HostIP:"", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:<nil>, InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus(nil), QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}.
I0315 21:15:58.662518 1 controller_utils.go:240] Lowered expectations &controller.ControlleeExpectations{add:0, del:0, key:"calico-system/calico-typha-56698768cd", timestamp:time.Time{wall:0xc0fcab43a6130d8d, ext:48109779117, loc:(*time.Location)(0x72c0b80)}}
I0315 21:15:58.664268 1 disruption.go:415] addPod called on pod "calico-typha-56698768cd-z9vjp"
I0315 21:15:58.665275 1 disruption.go:421] addPod "calico-typha-56698768cd-z9vjp" -> PDB "calico-typha"
I0315 21:15:58.664483 1 pvc_protection_controller.go:380] "Enqueuing PVCs for Pod" pod="calico-system/calico-typha-56698768cd-z9vjp" podUID=cbf6e56d-c53b-4960-8b0b-a43d7f65927e
I0315 21:15:58.676810 1 deployment_controller.go:578] "Finished syncing deployment" deployment="calico-system/calico-typha" duration="50.103519ms"
I0315 21:15:58.677085 1 deployment_controller.go:490] "Error syncing deployment" deployment="calico-system/calico-typha" err="Operation cannot be fulfilled on deployments.apps \"calico-typha\": the object has been modified; please apply your changes to the latest version and try again"
I0315 21:15:58.677241 1 deployment_controller.go:576] "Started syncing deployment" deployment="calico-system/calico-typha" startTime="2023-03-15 21:15:58.677226346 +0000 UTC m=+48.148222602"
I0315 21:15:58.677713 1 deployment_util.go:775] Deployment "calico-typha" timed out (false) [last progress check: 2023-03-15 21:15:58 +0000 UTC - now: 2023-03-15 21:15:58.677708443 +0000 UTC m=+48.148704799]
I0315 21:15:58.686849 1 replica_set.go:443] Pod calico-typha-56698768cd-z9vjp updated, objectMeta {Name:calico-typha-56698768cd-z9vjp GenerateName:calico-typha-56698768cd- Namespace:calico-system SelfLink: UID:cbf6e56d-c53b-4960-8b0b-a43d7f65927e ResourceVersion:738 Generation:0 CreationTimestamp:2023-03-15 21:15:58 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[app.kubernetes.io/name:calico-typha k8s-app:calico-typha pod-template-hash:56698768cd] Annotations:map[hash.operator.tigera.io/system:bb4746872201725da2dea19756c475aa67d9c1e9 hash.operator.tigera.io/tigera-ca-private:4c7c072fa2a1f14615e22dbff1e74913f2ac4236 hash.operator.tigera.io/typha-certs:fcf8e94f8c975fff0cb2fe022d34025d05962585] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:calico-typha-56698768cd UID:2b3e9688-1181-4659-9b2c-2c48c8e60b86 Controller:0xc00181964e BlockOwnerDeletion:0xc00181964f}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2023-03-15 21:15:58 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:annotations":{".":{},"f:hash.operator.tigera.io/system":{},"f:hash.operator.tigera.io/tigera-ca-private":{},"f:hash.operator.tigera.io/typha-certs":{}},"f:generateName":{},"f:labels":{".":{},"f:app.kubernetes.io/name":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2b3e9688-1181-4659-9b2c-2c48c8e60b86\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"calico-typha\"}":{".":{},"f:env":{".":{},"k:{\"name\":\"KUBERNETES_SERVICE_HOST\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"KUBERNETES_SERVICE_PORT\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_CAFILE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_CLIENTCN\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_CONNECTIONREBALANCINGMODE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_DATASTORETYPE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_FIPSMODEENABLED\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_HEALTHENABLED\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_HEALTHPORT\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_K8SNAMESPACE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_LOGFILEPATH\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_LOGSEVERITYSCREEN\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_LOGSEVERITYSYS\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_SERVERCERTFILE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_SERVERKEYFILE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_SHUTDOWNTIMEOUTSECS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:host":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":5473,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:hostPort":{},"f:name":{},"f:protocol":{}}},"f:readinessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:host":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/etc/pki/tls/certs/\"}":{".":{},"f:mo
untPath":{},"f:name":{},"f:readOnly":{}},"k:{\"mountPath\":\"/typha-certs\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:hostNetwork":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{},"f:volumes":{".":{},"k:{\"name\":\"tigera-ca-bundle\"}":{".":{},"f:configMap":{".":{},"f:defaultMode":{},"f:name":{}},"f:name":{}},"k:{\"name\":\"typha-certs\"}":{".":{},"f:name":{},"f:secret":{".":{},"f:defaultMode":{},"f:secretName":{}}}}}} Subresource:}]} -> {Name:calico-typha-56698768cd-z9vjp GenerateName:calico-typha-56698768cd- Namespace:calico-system SelfLink: UID:cbf6e56d-c53b-4960-8b0b-a43d7f65927e ResourceVersion:741 Generation:0 CreationTimestamp:2023-03-15 21:15:58 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[app.kubernetes.io/name:calico-typha k8s-app:calico-typha pod-template-hash:56698768cd] Annotations:map[hash.operator.tigera.io/system:bb4746872201725da2dea19756c475aa67d9c1e9 hash.operator.tigera.io/tigera-ca-private:4c7c072fa2a1f14615e22dbff1e74913f2ac4236 hash.operator.tigera.io/typha-certs:fcf8e94f8c975fff0cb2fe022d34025d05962585] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:calico-typha-56698768cd UID:2b3e9688-1181-4659-9b2c-2c48c8e60b86 Controller:0xc001d9a0e7 BlockOwnerDeletion:0xc001d9a0e8}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2023-03-15 21:15:58 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:annotations":{".":{},"f:hash.operator.tigera.io/system":{},"f:hash.operator.tigera.io/tigera-ca-private":{},"f:hash.operator.tigera.io/typha-certs":{}},"f:generateName":{},"f:labels":{".":{},"f:app.kubernetes.io/name":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2b3e9688-1181-4659-9b2c-2c48c8e60b86\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"calico-typha\"}":{".":{},"f:env":{".":{},"k:{\"name\":\"KUBERNETES_SERVICE_HOST\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"KUBERNETES_SERVICE_PORT\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_CAFILE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_CLIENTCN\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_CONNECTIONREBALANCINGMODE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_DATASTORETYPE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_FIPSMODEENABLED\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_HEALTHENABLED\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_HEALTHPORT\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_K8SNAMESPACE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_LOGFILEPATH\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_LOGSEVERITYSCREEN\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_LOGSEVERITYSYS\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_SERVERCERTFILE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_SERVERKEYFILE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_SHUTDOWNTIMEOUTSECS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:host":{},"f:path":{},"f:port":{},"f:scheme":{}}
,"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":5473,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:hostPort":{},"f:name":{},"f:protocol":{}}},"f:readinessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:host":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/etc/pki/tls/certs/\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}},"k:{\"mountPath\":\"/typha-certs\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:hostNetwork":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{},"f:volumes":{".":{},"k:{\"name\":\"tigera-ca-bundle\"}":{".":{},"f:configMap":{".":{},"f:defaultMode":{},"f:name":{}},"f:name":{}},"k:{\"name\":\"typha-certs\"}":{".":{},"f:name":{},"f:secret":{".":{},"f:defaultMode":{},"f:secretName":{}}}}}} Subresource:}]}.
I0315 21:15:58.686972 1 taint_manager.go:401] "Noticed pod update" pod="calico-system/calico-typha-56698768cd-z9vjp"
I0315 21:15:58.686984 1 taint_manager.go:362] "Current tolerations for pod tolerate forever, cancelling any scheduled deletion" pod="calico-system/calico-typha-56698768cd-z9vjp"
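The taint manager's "tolerate forever" decision follows from the pod dump above: both tolerations use Operator:Exists with TolerationSeconds nil, and a nil TolerationSeconds on a NoExecute toleration means the pod is never evicted for that taint, so any scheduled deletion is cancelled. A minimal sketch of the two forms, with an illustrative bound:

    package example

    import corev1 "k8s.io/api/core/v1"

    var bound int64 = 300

    // tolerateForever matches the calico-typha toleration above: a nil
    // TolerationSeconds tolerates the NoExecute taint indefinitely.
    var tolerateForever = corev1.Toleration{
        Operator: corev1.TolerationOpExists,
        Effect:   corev1.TaintEffectNoExecute,
    }

    // tolerateFiveMinutes is instead evicted roughly 300s after the taint
    // appears on the node.
    var tolerateFiveMinutes = corev1.Toleration{
        Operator:          corev1.TolerationOpExists,
        Effect:            corev1.TaintEffectNoExecute,
        TolerationSeconds: &bound,
    }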
I0315 21:15:58.686994 1 disruption.go:427] updatePod called on pod "calico-typha-56698768cd-z9vjp"
... skipping 156 lines ...
I0315 21:15:59.203057 1 deployment_controller.go:176] "Updating deployment" deployment="calico-system/calico-kube-controllers"
I0315 21:15:59.219167 1 deployment_controller.go:281] "ReplicaSet updated" replicaSet="calico-system/calico-kube-controllers-fb49b9cf7"
I0315 21:15:59.219287 1 replica_set.go:653] Finished syncing ReplicaSet "calico-system/calico-kube-controllers-fb49b9cf7" (26.72585ms)
I0315 21:15:59.219307 1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"calico-system/calico-kube-controllers-fb49b9cf7", timestamp:time.Time{wall:0xc0fcab43cb7aab07, ext:48663584807, loc:(*time.Location)(0x72c0b80)}}
I0315 21:15:59.219357 1 replica_set_utils.go:59] Updating status for : calico-system/calico-kube-controllers-fb49b9cf7, replicas 0->1 (need 1), fullyLabeledReplicas 0->1, readyReplicas 0->0, availableReplicas 0->0, sequence No: 1->1
I0315 21:15:59.219533 1 deployment_controller.go:578] "Finished syncing deployment" deployment="calico-system/calico-kube-controllers" duration="36.537494ms"
I0315 21:15:59.219547 1 deployment_controller.go:490] "Error syncing deployment" deployment="calico-system/calico-kube-controllers" err="Operation cannot be fulfilled on deployments.apps \"calico-kube-controllers\": the object has been modified; please apply your changes to the latest version and try again"
I0315 21:15:59.219568 1 deployment_controller.go:576] "Started syncing deployment" deployment="calico-system/calico-kube-controllers" startTime="2023-03-15 21:15:59.219557199 +0000 UTC m=+48.690553455"
I0315 21:15:59.219797 1 deployment_util.go:775] Deployment "calico-kube-controllers" timed out (false) [last progress check: 2023-03-15 21:15:59 +0000 UTC - now: 2023-03-15 21:15:59.219793098 +0000 UTC m=+48.690789354]
I0315 21:15:59.229488 1 replica_set.go:443] Pod calico-kube-controllers-fb49b9cf7-76z7x updated, objectMeta {Name:calico-kube-controllers-fb49b9cf7-76z7x GenerateName:calico-kube-controllers-fb49b9cf7- Namespace:calico-system SelfLink: UID:bace2ef2-c3ff-4b2a-9d13-4bb31bca220b ResourceVersion:788 Generation:0 CreationTimestamp:2023-03-15 21:15:59 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:fb49b9cf7] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:calico-kube-controllers-fb49b9cf7 UID:539c0f2e-b957-463f-932e-1dec9efef88c Controller:0xc00262e7a0 BlockOwnerDeletion:0xc00262e7a1}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2023-03-15 21:15:59 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:app.kubernetes.io/name":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"539c0f2e-b957-463f-932e-1dec9efef88c\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"calico-kube-controllers\"}":{".":{},"f:env":{".":{},"k:{\"name\":\"DATASTORE_TYPE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"ENABLED_CONTROLLERS\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"FIPS_MODE_ENABLED\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"KUBERNETES_SERVICE_HOST\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"KUBERNETES_SERVICE_PORT\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"KUBE_CONTROLLERS_CONFIG_NAME\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:exec":{".":{},"f:command":{}},"f:failureThreshold":{},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:readinessProbe":{".":{},"f:exec":{".":{},"f:command":{}},"f:failureThreshold":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}} Subresource:}]} -> {Name:calico-kube-controllers-fb49b9cf7-76z7x GenerateName:calico-kube-controllers-fb49b9cf7- Namespace:calico-system SelfLink: UID:bace2ef2-c3ff-4b2a-9d13-4bb31bca220b ResourceVersion:794 Generation:0 CreationTimestamp:2023-03-15 21:15:59 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:fb49b9cf7] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:calico-kube-controllers-fb49b9cf7 UID:539c0f2e-b957-463f-932e-1dec9efef88c Controller:0xc0026b5cf0 BlockOwnerDeletion:0xc0026b5cf1}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2023-03-15 21:15:59 +0000 UTC FieldsType:FieldsV1 
FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:app.kubernetes.io/name":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"539c0f2e-b957-463f-932e-1dec9efef88c\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"calico-kube-controllers\"}":{".":{},"f:env":{".":{},"k:{\"name\":\"DATASTORE_TYPE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"ENABLED_CONTROLLERS\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"FIPS_MODE_ENABLED\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"KUBERNETES_SERVICE_HOST\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"KUBERNETES_SERVICE_PORT\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"KUBE_CONTROLLERS_CONFIG_NAME\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:exec":{".":{},"f:command":{}},"f:failureThreshold":{},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:readinessProbe":{".":{},"f:exec":{".":{},"f:command":{}},"f:failureThreshold":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}} Subresource:} {Manager:kube-scheduler Operation:Update APIVersion:v1 Time:2023-03-15 21:15:59 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{".":{},"k:{\"type\":\"PodScheduled\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} Subresource:status}]}.
I0315 21:15:59.229589 1 disruption.go:427] updatePod called on pod "calico-kube-controllers-fb49b9cf7-76z7x"
I0315 21:15:59.229610 1 disruption.go:490] No PodDisruptionBudgets found for pod calico-kube-controllers-fb49b9cf7-76z7x, PodDisruptionBudget controller will avoid syncing.
I0315 21:15:59.229616 1 disruption.go:430] No matching pdb for pod "calico-kube-controllers-fb49b9cf7-76z7x"
... skipping 19 lines ...
I0315 21:16:02.455281 1 replica_set.go:443] Pod calico-typha-56698768cd-z9vjp updated, objectMeta {Name:calico-typha-56698768cd-z9vjp GenerateName:calico-typha-56698768cd- Namespace:calico-system SelfLink: UID:cbf6e56d-c53b-4960-8b0b-a43d7f65927e ResourceVersion:753 Generation:0 CreationTimestamp:2023-03-15 21:15:58 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[app.kubernetes.io/name:calico-typha k8s-app:calico-typha pod-template-hash:56698768cd] Annotations:map[hash.operator.tigera.io/system:bb4746872201725da2dea19756c475aa67d9c1e9 hash.operator.tigera.io/tigera-ca-private:4c7c072fa2a1f14615e22dbff1e74913f2ac4236 hash.operator.tigera.io/typha-certs:fcf8e94f8c975fff0cb2fe022d34025d05962585] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:calico-typha-56698768cd UID:2b3e9688-1181-4659-9b2c-2c48c8e60b86 Controller:0xc001ed4fae BlockOwnerDeletion:0xc001ed4faf}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2023-03-15 21:15:58 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:annotations":{".":{},"f:hash.operator.tigera.io/system":{},"f:hash.operator.tigera.io/tigera-ca-private":{},"f:hash.operator.tigera.io/typha-certs":{}},"f:generateName":{},"f:labels":{".":{},"f:app.kubernetes.io/name":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2b3e9688-1181-4659-9b2c-2c48c8e60b86\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"calico-typha\"}":{".":{},"f:env":{".":{},"k:{\"name\":\"KUBERNETES_SERVICE_HOST\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"KUBERNETES_SERVICE_PORT\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_CAFILE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_CLIENTCN\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_CONNECTIONREBALANCINGMODE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_DATASTORETYPE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_FIPSMODEENABLED\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_HEALTHENABLED\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_HEALTHPORT\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_K8SNAMESPACE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_LOGFILEPATH\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_LOGSEVERITYSCREEN\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_LOGSEVERITYSYS\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_SERVERCERTFILE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_SERVERKEYFILE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_SHUTDOWNTIMEOUTSECS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:host":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":5473,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:hostPort":{},"f:name":{},"f:protocol":{}}},"f:readinessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:host":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/etc/pki/tls/certs/\"}":{".":{},"f:mo
untPath":{},"f:name":{},"f:readOnly":{}},"k:{\"mountPath\":\"/typha-certs\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:hostNetwork":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{},"f:volumes":{".":{},"k:{\"name\":\"tigera-ca-bundle\"}":{".":{},"f:configMap":{".":{},"f:defaultMode":{},"f:name":{}},"f:name":{}},"k:{\"name\":\"typha-certs\"}":{".":{},"f:name":{},"f:secret":{".":{},"f:defaultMode":{},"f:secretName":{}}}}}} Subresource:} {Manager:kubelet Operation:Update APIVersion:v1 Time:2023-03-15 21:15:58 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.0.0.4\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} Subresource:status}]} -> {Name:calico-typha-56698768cd-z9vjp GenerateName:calico-typha-56698768cd- Namespace:calico-system SelfLink: UID:cbf6e56d-c53b-4960-8b0b-a43d7f65927e ResourceVersion:816 Generation:0 CreationTimestamp:2023-03-15 21:15:58 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[app.kubernetes.io/name:calico-typha k8s-app:calico-typha pod-template-hash:56698768cd] Annotations:map[hash.operator.tigera.io/system:bb4746872201725da2dea19756c475aa67d9c1e9 hash.operator.tigera.io/tigera-ca-private:4c7c072fa2a1f14615e22dbff1e74913f2ac4236 hash.operator.tigera.io/typha-certs:fcf8e94f8c975fff0cb2fe022d34025d05962585] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:calico-typha-56698768cd UID:2b3e9688-1181-4659-9b2c-2c48c8e60b86 Controller:0xc002743027 BlockOwnerDeletion:0xc002743028}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2023-03-15 21:15:58 +0000 UTC FieldsType:FieldsV1 
FieldsV1:{"f:metadata":{"f:annotations":{".":{},"f:hash.operator.tigera.io/system":{},"f:hash.operator.tigera.io/tigera-ca-private":{},"f:hash.operator.tigera.io/typha-certs":{}},"f:generateName":{},"f:labels":{".":{},"f:app.kubernetes.io/name":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2b3e9688-1181-4659-9b2c-2c48c8e60b86\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"calico-typha\"}":{".":{},"f:env":{".":{},"k:{\"name\":\"KUBERNETES_SERVICE_HOST\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"KUBERNETES_SERVICE_PORT\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_CAFILE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_CLIENTCN\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_CONNECTIONREBALANCINGMODE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_DATASTORETYPE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_FIPSMODEENABLED\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_HEALTHENABLED\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_HEALTHPORT\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_K8SNAMESPACE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_LOGFILEPATH\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_LOGSEVERITYSCREEN\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_LOGSEVERITYSYS\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_SERVERCERTFILE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_SERVERKEYFILE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_SHUTDOWNTIMEOUTSECS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:host":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":5473,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:hostPort":{},"f:name":{},"f:protocol":{}}},"f:readinessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:host":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/etc/pki/tls/certs/\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}},"k:{\"mountPath\":\"/typha-certs\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:hostNetwork":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{},"f:volumes":{".":{},"k:{\"name\":\"tigera-ca-bundle\"}":{".":{},"f:configMap":{".":{},"f:defaultMode":{},"f:name":{}},"f:name":{}},"k:{\"name\":\"typha-certs\"}":{".":{},"f:name":{},"f:secret":{".":{},"f:defaultMode":{},"f:secretName":{}}}}}} Subresource:} {Manager:kubelet Operation:Update APIVersion:v1 Time:2023-03-15 21:16:02 +0000 UTC FieldsType:FieldsV1 
FieldsV1:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.0.0.4\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} Subresource:status}]}.
I0315 21:16:02.455577 1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"calico-system/calico-typha-56698768cd", timestamp:time.Time{wall:0xc0fcab43a6130d8d, ext:48109779117, loc:(*time.Location)(0x72c0b80)}}
I0315 21:16:02.455659 1 replica_set.go:653] Finished syncing ReplicaSet "calico-system/calico-typha-56698768cd" (87.499µs)
I0315 21:16:02.455689 1 disruption.go:427] updatePod called on pod "calico-typha-56698768cd-z9vjp"
I0315 21:16:02.455701 1 disruption.go:433] updatePod "calico-typha-56698768cd-z9vjp" -> PDB "calico-typha"
I0315 21:16:02.455735 1 disruption.go:558] Finished syncing PodDisruptionBudget "calico-system/calico-typha" (13.1µs)
E0315 21:16:02.455807 1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0315 21:16:02.455814 1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0315 21:16:02.455829 1 plugins.go:752] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0315 21:16:02.456057 1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0315 21:16:02.456063 1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0315 21:16:02.456074 1 plugins.go:752] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0315 21:16:02.456325 1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0315 21:16:02.456331 1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0315 21:16:02.456342 1 plugins.go:752] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0315 21:16:02.469085 1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0315 21:16:02.469100 1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0315 21:16:02.469116 1 plugins.go:752] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0315 21:16:02.469306 1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0315 21:16:02.469312 1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0315 21:16:02.469321 1 plugins.go:752] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0315 21:16:02.469502 1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0315 21:16:02.469507 1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0315 21:16:02.469516 1 plugins.go:752] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
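The repeated driver-call triplets above come from the volume plugin prober exec'ing /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the init argument: the binary does not exist, stdout is empty, and unmarshalling an empty string as the driver's JSON reply fails, so the nodeagent~uds FlexVolume plugin is skipped. This is benign unless a workload actually mounts volumes through that driver. For reference, a FlexVolume driver is a plain executable whose init call prints a JSON status; a minimal sketch of a reply that would satisfy the probe, under the assumption that attach support is not needed:

    package main

    import (
        "encoding/json"
        "fmt"
        "os"
    )

    // The prober runs "<driver> init" and parses stdout as JSON, which is
    // exactly what fails (empty output) in the log lines above.
    func main() {
        if len(os.Args) > 1 && os.Args[1] == "init" {
            out, _ := json.Marshal(map[string]interface{}{
                "status":       "Success",
                "capabilities": map[string]bool{"attach": false},
            })
            fmt.Println(string(out))
            return
        }
        fmt.Println(`{"status":"Not supported"}`)
        os.Exit(1)
    }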
I0315 21:16:02.469657 1 endpoints_controller.go:551] Update endpoints for calico-system/calico-typha, ready: 1 not ready: 0
I0315 21:16:02.469795 1 disruption.go:427] updatePod called on pod "calico-typha-56698768cd-z9vjp"
I0315 21:16:02.469743 1 replica_set.go:443] Pod calico-typha-56698768cd-z9vjp updated, objectMeta {Name:calico-typha-56698768cd-z9vjp GenerateName:calico-typha-56698768cd- Namespace:calico-system SelfLink: UID:cbf6e56d-c53b-4960-8b0b-a43d7f65927e ResourceVersion:816 Generation:0 CreationTimestamp:2023-03-15 21:15:58 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[app.kubernetes.io/name:calico-typha k8s-app:calico-typha pod-template-hash:56698768cd] Annotations:map[hash.operator.tigera.io/system:bb4746872201725da2dea19756c475aa67d9c1e9 hash.operator.tigera.io/tigera-ca-private:4c7c072fa2a1f14615e22dbff1e74913f2ac4236 hash.operator.tigera.io/typha-certs:fcf8e94f8c975fff0cb2fe022d34025d05962585] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:calico-typha-56698768cd UID:2b3e9688-1181-4659-9b2c-2c48c8e60b86 Controller:0xc002743027 BlockOwnerDeletion:0xc002743028}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2023-03-15 21:15:58 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:annotations":{".":{},"f:hash.operator.tigera.io/system":{},"f:hash.operator.tigera.io/tigera-ca-private":{},"f:hash.operator.tigera.io/typha-certs":{}},"f:generateName":{},"f:labels":{".":{},"f:app.kubernetes.io/name":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2b3e9688-1181-4659-9b2c-2c48c8e60b86\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"calico-typha\"}":{".":{},"f:env":{".":{},"k:{\"name\":\"KUBERNETES_SERVICE_HOST\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"KUBERNETES_SERVICE_PORT\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_CAFILE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_CLIENTCN\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_CONNECTIONREBALANCINGMODE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_DATASTORETYPE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_FIPSMODEENABLED\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_HEALTHENABLED\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_HEALTHPORT\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_K8SNAMESPACE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_LOGFILEPATH\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_LOGSEVERITYSCREEN\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_LOGSEVERITYSYS\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_SERVERCERTFILE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_SERVERKEYFILE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_SHUTDOWNTIMEOUTSECS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:host":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":5473,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:hostPort":{},"f:name":{},"f:protocol":{}}},"f:readinessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:host":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/etc/pki/tls/certs/\"}":{".":{},"f:mo
untPath":{},"f:name":{},"f:readOnly":{}},"k:{\"mountPath\":\"/typha-certs\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:hostNetwork":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{},"f:volumes":{".":{},"k:{\"name\":\"tigera-ca-bundle\"}":{".":{},"f:configMap":{".":{},"f:defaultMode":{},"f:name":{}},"f:name":{}},"k:{\"name\":\"typha-certs\"}":{".":{},"f:name":{},"f:secret":{".":{},"f:defaultMode":{},"f:secretName":{}}}}}} Subresource:} {Manager:kubelet Operation:Update APIVersion:v1 Time:2023-03-15 21:16:02 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.0.0.4\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} Subresource:status}]} -> {Name:calico-typha-56698768cd-z9vjp GenerateName:calico-typha-56698768cd- Namespace:calico-system SelfLink: UID:cbf6e56d-c53b-4960-8b0b-a43d7f65927e ResourceVersion:817 Generation:0 CreationTimestamp:2023-03-15 21:15:58 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[app.kubernetes.io/name:calico-typha k8s-app:calico-typha pod-template-hash:56698768cd] Annotations:map[hash.operator.tigera.io/system:bb4746872201725da2dea19756c475aa67d9c1e9 hash.operator.tigera.io/tigera-ca-private:4c7c072fa2a1f14615e22dbff1e74913f2ac4236 hash.operator.tigera.io/typha-certs:fcf8e94f8c975fff0cb2fe022d34025d05962585] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:calico-typha-56698768cd UID:2b3e9688-1181-4659-9b2c-2c48c8e60b86 Controller:0xc002743a07 BlockOwnerDeletion:0xc002743a08}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2023-03-15 21:15:58 +0000 UTC FieldsType:FieldsV1 
FieldsV1:{"f:metadata":{"f:annotations":{".":{},"f:hash.operator.tigera.io/system":{},"f:hash.operator.tigera.io/tigera-ca-private":{},"f:hash.operator.tigera.io/typha-certs":{}},"f:generateName":{},"f:labels":{".":{},"f:app.kubernetes.io/name":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2b3e9688-1181-4659-9b2c-2c48c8e60b86\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"calico-typha\"}":{".":{},"f:env":{".":{},"k:{\"name\":\"KUBERNETES_SERVICE_HOST\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"KUBERNETES_SERVICE_PORT\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_CAFILE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_CLIENTCN\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_CONNECTIONREBALANCINGMODE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_DATASTORETYPE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_FIPSMODEENABLED\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_HEALTHENABLED\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_HEALTHPORT\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_K8SNAMESPACE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_LOGFILEPATH\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_LOGSEVERITYSCREEN\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_LOGSEVERITYSYS\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_SERVERCERTFILE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_SERVERKEYFILE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_SHUTDOWNTIMEOUTSECS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:host":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":5473,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:hostPort":{},"f:name":{},"f:protocol":{}}},"f:readinessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:host":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/etc/pki/tls/certs/\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}},"k:{\"mountPath\":\"/typha-certs\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:hostNetwork":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{},"f:volumes":{".":{},"k:{\"name\":\"tigera-ca-bundle\"}":{".":{},"f:configMap":{".":{},"f:defaultMode":{},"f:name":{}},"f:name":{}},"k:{\"name\":\"typha-certs\"}":{".":{},"f:name":{},"f:secret":{".":{},"f:defaultMode":{},"f:secretName":{}}}}}} Subresource:} {Manager:kubelet Operation:Update APIVersion:v1 Time:2023-03-15 21:16:02 +0000 UTC FieldsType:FieldsV1 
FieldsV1:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.0.0.4\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} Subresource:status}]}.
I0315 21:16:02.469816 1 disruption.go:433] updatePod "calico-typha-56698768cd-z9vjp" -> PDB "calico-typha"
I0315 21:16:02.469828 1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"calico-system/calico-typha-56698768cd", timestamp:time.Time{wall:0xc0fcab43a6130d8d, ext:48109779117, loc:(*time.Location)(0x72c0b80)}}
I0315 21:16:02.469877 1 replica_set_utils.go:59] Updating status for : calico-system/calico-typha-56698768cd, replicas 1->1 (need 1), fullyLabeledReplicas 1->1, readyReplicas 0->1, availableReplicas 0->1, sequence No: 1->1
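
Note on the lines above: "Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, ...}" is the ReplicaSet controller's guard against acting on a stale informer cache. It records how many creates/deletes it has issued, decrements the counters as the watch events arrive, and only recomputes status once everything it did has been observed (or a TTL expires). A minimal sketch of that bookkeeping pattern follows; it is illustrative, not the actual kube-controller-manager implementation:

    package main

    import (
    	"fmt"
    	"sync/atomic"
    	"time"
    )

    // expectations tracks how many pod creations/deletions a controller is
    // still waiting to observe before it trusts its informer cache.
    type expectations struct {
    	add, del  int64
    	timestamp time.Time
    }

    const expectationsTTL = 5 * time.Minute

    func (e *expectations) lowerAdd() { atomic.AddInt64(&e.add, -1) }

    // fulfilled mirrors the "Controller expectations fulfilled" log line:
    // syncing is safe once all expected events were seen or the TTL expired.
    func (e *expectations) fulfilled() bool {
    	return (atomic.LoadInt64(&e.add) <= 0 && atomic.LoadInt64(&e.del) <= 0) ||
    		time.Since(e.timestamp) > expectationsTTL
    }

    func main() {
    	e := &expectations{add: 1, timestamp: time.Now()}
    	fmt.Println(e.fulfilled()) // false: one create still unobserved
    	e.lowerAdd()               // informer delivers the created pod
    	fmt.Println(e.fulfilled()) // true: cache is trustworthy, sync proceeds
    }
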
... skipping 95 lines ...
I0315 21:16:27.883207 1 gc_controller.go:161] GC'ing orphaned
I0315 21:16:27.883227 1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0315 21:16:27.885391 1 pv_controller_base.go:556] resyncing PV controller
E0315 21:16:28.504330 1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0315 21:16:28.504382 1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0315 21:16:28.809541 1 node_lifecycle_controller.go:868] Node capz-jtbghr-control-plane-99vcl is NotReady as of 2023-03-15 21:16:28.809525387 +0000 UTC m=+78.280521643. Adding it to the Taint queue.
W0315 21:16:28.836900 1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
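
Note on the two errors above: metrics.k8s.io/v1beta1 is an aggregated API backed by metrics-server, which is not ready yet, so discovery returns a partial result plus an error. The resource-quota and garbage-collector controllers deliberately skip the sync rather than fail ("no resource updates from discovery, skipping resource quota sync"). The same degraded-discovery result is visible from client-go; a sketch, assuming a reachable cluster and a kubeconfig at ./kubeconfig:

    package main

    import (
    	"fmt"

    	"k8s.io/client-go/discovery"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// "./kubeconfig" is an assumed path; point it at your cluster.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "./kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	dc, err := discovery.NewDiscoveryClientForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	// ServerPreferredResources may return partial results alongside an
    	// aggregate discovery error when an APIService backend is unready.
    	lists, err := dc.ServerPreferredResources()
    	if err != nil && discovery.IsGroupDiscoveryFailedError(err) {
    		fmt.Println("degraded discovery (controllers skip the sync):", err)
    	} else if err != nil {
    		panic(err)
    	}
    	fmt.Println("usable resource lists:", len(lists))
    }
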
I0315 21:16:33.665177 1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="68.099µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:46022" resp=200
I0315 21:16:33.809681 1 node_lifecycle_controller.go:868] Node capz-jtbghr-control-plane-99vcl is NotReady as of 2023-03-15 21:16:33.809665268 +0000 UTC m=+83.280661624. Adding it to the Taint queue.
I0315 21:16:33.988199 1 attach_detach_controller.go:673] processVolumesInUse for node "capz-jtbghr-control-plane-99vcl"
I0315 21:16:34.037390 1 attach_detach_controller.go:673] processVolumesInUse for node "capz-jtbghr-control-plane-99vcl"
I0315 21:16:34.135188 1 attach_detach_controller.go:673] processVolumesInUse for node "capz-jtbghr-control-plane-99vcl"
I0315 21:16:34.135380 1 controller_utils.go:209] "Added taint to node" taint=[] node="capz-jtbghr-control-plane-99vcl"
... skipping 144 lines ...
I0315 21:16:36.674894 1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"calico-system/calico-node", timestamp:time.Time{wall:0xc0fcab4d28378d2b, ext:86145725615, loc:(*time.Location)(0x72c0b80)}}
I0315 21:16:36.674961 1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:0, del:0, key:"calico-system/calico-node", timestamp:time.Time{wall:0xc0fcab4d283b10c5, ext:86145955813, loc:(*time.Location)(0x72c0b80)}}
I0315 21:16:36.674996 1 daemon_controller.go:967] Nodes needing daemon pods for daemon set calico-node: [], creating 0
I0315 21:16:36.675055 1 daemon_controller.go:1029] Pods to delete for daemon set calico-node: [], deleting 0
I0315 21:16:36.675093 1 daemon_controller.go:1112] Updating daemon set status
I0315 21:16:36.675149 1 daemon_controller.go:1172] Finished syncing daemon set "calico-system/calico-node" (1.270689ms)
I0315 21:16:38.810632 1 node_lifecycle_controller.go:1038] ReadyCondition for Node capz-jtbghr-control-plane-99vcl transitioned from &NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:2023-03-15 21:16:03 +0000 UTC,LastTransitionTime:2023-03-15 21:15:07 +0000 UTC,Reason:KubeletNotReady,Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized,} to &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-03-15 21:16:34 +0000 UTC,LastTransitionTime:2023-03-15 21:16:34 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,}
I0315 21:16:38.810738 1 node_lifecycle_controller.go:1046] Node capz-jtbghr-control-plane-99vcl ReadyCondition updated. Updating timestamp.
I0315 21:16:38.810783 1 node_lifecycle_controller.go:892] Node capz-jtbghr-control-plane-99vcl is healthy again, removing all taints
I0315 21:16:38.810860 1 node_lifecycle_controller.go:1190] Controller detected that some Nodes are Ready. Exiting master disruption mode.
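
Note on the transition above: the kubelet reports Ready=False with "cni plugin not initialized" until the Calico install lays down a CNI config; once NetworkReady flips, the node lifecycle controller removes the not-ready taints and exits master disruption mode. The same condition can be read directly with client-go; a sketch, assuming a kubeconfig at ./kubeconfig:

    package main

    import (
    	"context"
    	"fmt"

    	v1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "./kubeconfig") // assumed path
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	node, err := cs.CoreV1().Nodes().Get(context.TODO(),
    		"capz-jtbghr-control-plane-99vcl", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	// The same NodeReady condition the lifecycle controller reacts to.
    	for _, c := range node.Status.Conditions {
    		if c.Type == v1.NodeReady {
    			fmt.Printf("Ready=%s reason=%s: %s\n", c.Status, c.Reason, c.Message)
    		}
    	}
    }
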
I0315 21:16:39.636000 1 disruption.go:427] updatePod called on pod "calico-node-cwqvl"
I0315 21:16:39.636034 1 disruption.go:490] No PodDisruptionBudgets found for pod calico-node-cwqvl, PodDisruptionBudget controller will avoid syncing.
I0315 21:16:39.636039 1 disruption.go:430] No matching pdb for pod "calico-node-cwqvl"
... skipping 201 lines ...
I0315 21:16:47.800996 1 controller_utils.go:206] Controller calico-apiserver/calico-apiserver-5db7789c6c either never recorded expectations, or the ttl expired.
I0315 21:16:47.801068 1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:2, del:0, key:"calico-apiserver/calico-apiserver-5db7789c6c", timestamp:time.Time{wall:0xc0fcab4fefbf4bb1, ext:97272062261, loc:(*time.Location)(0x72c0b80)}}
I0315 21:16:47.801130 1 replica_set.go:563] "Too few replicas" replicaSet="calico-apiserver/calico-apiserver-5db7789c6c" need=2 creating=2
I0315 21:16:47.812790 1 deployment_controller.go:176] "Updating deployment" deployment="calico-apiserver/calico-apiserver"
I0315 21:16:47.813089 1 deployment_util.go:775] Deployment "calico-apiserver" timed out (false) [last progress check: 2023-03-15 21:16:47.799894908 +0000 UTC m=+97.270891164 - now: 2023-03-15 21:16:47.813083675 +0000 UTC m=+97.284079931]
I0315 21:16:47.824245 1 deployment_controller.go:578] "Finished syncing deployment" deployment="calico-apiserver/calico-apiserver" duration="36.172809ms"
I0315 21:16:47.824364 1 deployment_controller.go:490] "Error syncing deployment" deployment="calico-apiserver/calico-apiserver" err="Operation cannot be fulfilled on deployments.apps \"calico-apiserver\": the object has been modified; please apply your changes to the latest version and try again"
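
Note on the error above: "the object has been modified" is an optimistic-concurrency conflict, not a failure. A concurrent write moved the deployment's resourceVersion while the controller was syncing, so the update is rejected and the very next line starts a fresh sync from a newer cache. Client code that writes to contended objects typically wraps the read-modify-write in retry.RetryOnConflict; a sketch (the annotation key is hypothetical, used only to have something to write):

    package main

    import (
    	"context"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    	"k8s.io/client-go/util/retry"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "./kubeconfig") // assumed path
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	// Re-read and re-apply on every 409 Conflict: the standard cure for
    	// "the object has been modified; please apply your changes to the
    	// latest version and try again".
    	err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
    		d, err := cs.AppsV1().Deployments("calico-apiserver").Get(
    			context.TODO(), "calico-apiserver", metav1.GetOptions{})
    		if err != nil {
    			return err
    		}
    		if d.Annotations == nil {
    			d.Annotations = map[string]string{}
    		}
    		d.Annotations["example.invalid/touched"] = "true" // hypothetical change
    		_, err = cs.AppsV1().Deployments("calico-apiserver").Update(
    			context.TODO(), d, metav1.UpdateOptions{})
    		return err
    	})
    	if err != nil {
    		panic(err)
    	}
    }
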
I0315 21:16:47.824595 1 deployment_controller.go:576] "Started syncing deployment" deployment="calico-apiserver/calico-apiserver" startTime="2023-03-15 21:16:47.824582346 +0000 UTC m=+97.295578602"
I0315 21:16:47.825061 1 deployment_util.go:775] Deployment "calico-apiserver" timed out (false) [last progress check: 2023-03-15 21:16:47 +0000 UTC - now: 2023-03-15 21:16:47.825056745 +0000 UTC m=+97.296053001]
I0315 21:16:47.825430 1 controller_utils.go:581] Controller calico-apiserver-5db7789c6c created pod calico-apiserver-5db7789c6c-5zrb2
I0315 21:16:47.825785 1 event.go:294] "Event occurred" object="calico-apiserver/calico-apiserver-5db7789c6c" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: calico-apiserver-5db7789c6c-5zrb2"
I0315 21:16:47.826053 1 endpoints_controller.go:551] Update endpoints for calico-apiserver/calico-api, ready: 0 not ready: 0
I0315 21:16:47.826529 1 replica_set.go:380] Pod calico-apiserver-5db7789c6c-5zrb2 created: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"calico-apiserver-5db7789c6c-5zrb2", GenerateName:"calico-apiserver-5db7789c6c-", Namespace:"calico-apiserver", SelfLink:"", UID:"f35c6702-5b4c-4c70-8fda-c8190a94e552", ResourceVersion:"1073", Generation:0, CreationTimestamp:time.Date(2023, time.March, 15, 21, 16, 47, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5db7789c6c"}, Annotations:map[string]string{"hash.operator.tigera.io/calico-apiserver-certs":"17b2b0749ffbb7248cf5d019f4cadc475f38247b"}, OwnerReferences:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"apps/v1", Kind:"ReplicaSet", Name:"calico-apiserver-5db7789c6c", UID:"693dbe9c-6ef3-47c8-bccc-0267f7d6aaf5", Controller:(*bool)(0xc0027e983e), BlockOwnerDeletion:(*bool)(0xc0027e983f)}}, Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:time.Date(2023, time.March, 15, 21, 16, 47, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002ae2150), Subresource:""}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"calico-apiserver-certs", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0029996c0), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"kube-api-access-rzcjs", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), 
FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(0xc002a928e0), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"calico-apiserver", Image:"docker.io/calico/apiserver:v3.25.0", Command:[]string(nil), Args:[]string{"--secure-port=5443", "--tls-private-key-file=/calico-apiserver-certs/tls.key", "--tls-cert-file=/calico-apiserver-certs/tls.crt"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"DATASTORE_TYPE", Value:"kubernetes", ValueFrom:(*v1.EnvVarSource)(nil)}, v1.EnvVar{Name:"KUBERNETES_SERVICE_HOST", Value:"10.96.0.1", ValueFrom:(*v1.EnvVarSource)(nil)}, v1.EnvVar{Name:"KUBERNETES_SERVICE_PORT", Value:"443", ValueFrom:(*v1.EnvVarSource)(nil)}, v1.EnvVar{Name:"MULTI_INTERFACE_MODE", Value:"none", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"calico-apiserver-certs", ReadOnly:true, MountPath:"/calico-apiserver-certs", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-rzcjs", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(0xc002999780), ReadinessProbe:(*v1.Probe)(0xc0029997c0), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc002aac7e0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0027e9958), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"calico-apiserver", DeprecatedServiceAccount:"calico-apiserver", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000aa9e30), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(0xc002ae2198), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node-role.kubernetes.io/master", Operator:"", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node-role.kubernetes.io/control-plane", Operator:"", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0027e9a30)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0027e9a50)}}, HostAliases:[]v1.HostAlias(nil), 
PriorityClassName:"", Priority:(*int32)(0xc0027e9a58), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0027e9a5c), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc002aec070), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil), OS:(*v1.PodOS)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition(nil), Message:"", Reason:"", NominatedNodeName:"", HostIP:"", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:<nil>, InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus(nil), QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}.
... skipping 135 lines ...
I0315 21:16:56.661475 1 endpointslice_controller.go:319] Finished syncing service "calico-apiserver/calico-api" endpoint slices. (100.399µs)
I0315 21:16:57.484176 1 attach_detach_controller.go:673] processVolumesInUse for node "capz-jtbghr-control-plane-99vcl"
I0315 21:16:57.715770 1 reflector.go:382] k8s.io/client-go/informers/factory.go:134: forcing resync
I0315 21:16:57.888255 1 pv_controller_base.go:556] resyncing PV controller
E0315 21:16:58.515187 1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request, projectcalico.org/v3: the server is currently unable to handle the request
I0315 21:16:58.515242 1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
W0315 21:16:58.856237 1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request projectcalico.org/v3:the server is currently unable to handle the request]
I0315 21:17:03.668167 1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="80.6µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:36758" resp=200
I0315 21:17:04.664913 1 attach_detach_controller.go:673] processVolumesInUse for node "capz-jtbghr-control-plane-99vcl"
I0315 21:17:06.057719 1 endpoints_controller.go:551] Update endpoints for kube-system/metrics-server, ready: 1 not ready: 0
I0315 21:17:06.058205 1 replica_set.go:443] Pod metrics-server-85c7d488df-7hk82 updated, objectMeta {Name:metrics-server-85c7d488df-7hk82 GenerateName:metrics-server-85c7d488df- Namespace:kube-system SelfLink: UID:90e2a656-8eac-4319-b26d-319c63b95220 ResourceVersion:1034 Generation:0 CreationTimestamp:2023-03-15 21:15:45 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[k8s-app:metrics-server pod-template-hash:85c7d488df] Annotations:map[cni.projectcalico.org/containerID:0d9157382211fe5825b52fe5eacb0057356e376030690e1a88ba5adc2a6eb296 cni.projectcalico.org/podIP:192.168.108.196/32 cni.projectcalico.org/podIPs:192.168.108.196/32] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:metrics-server-85c7d488df UID:bd3b114d-9b01-4857-add1-5e899da24452 Controller:0xc00285814e BlockOwnerDeletion:0xc00285814f}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2023-03-15 21:15:45 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bd3b114d-9b01-4857-add1-5e899da24452\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"metrics-server\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":4443,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}}},"f:readinessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{".":{},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:securityContext":{".":{},"f:allowPrivilegeEscalation":{},"f:readOnlyRootFilesystem":{},"f:runAsNonRoot":{},"f:runAsUser":{}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/tmp\"}":{".":{},"f:mountPath":{},"f:name":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{},"f:volumes":{".":{},"k:{\"name\":\"tmp-dir\"}":{".":{},"f:emptyDir":{},"f:name":{}}}}} Subresource:} {Manager:kube-scheduler Operation:Update APIVersion:v1 Time:2023-03-15 21:15:45 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{".":{},"k:{\"type\":\"PodScheduled\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} Subresource:status} {Manager:calico Operation:Update APIVersion:v1 Time:2023-03-15 21:16:40 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} Subresource:status} {Manager:kubelet Operation:Update APIVersion:v1 Time:2023-03-15 21:16:47 +0000 UTC FieldsType:FieldsV1 
FieldsV1:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.108.196\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} Subresource:status}]} -> {Name:metrics-server-85c7d488df-7hk82 GenerateName:metrics-server-85c7d488df- Namespace:kube-system SelfLink: UID:90e2a656-8eac-4319-b26d-319c63b95220 ResourceVersion:1164 Generation:0 CreationTimestamp:2023-03-15 21:15:45 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[k8s-app:metrics-server pod-template-hash:85c7d488df] Annotations:map[cni.projectcalico.org/containerID:0d9157382211fe5825b52fe5eacb0057356e376030690e1a88ba5adc2a6eb296 cni.projectcalico.org/podIP:192.168.108.196/32 cni.projectcalico.org/podIPs:192.168.108.196/32] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:metrics-server-85c7d488df UID:bd3b114d-9b01-4857-add1-5e899da24452 Controller:0xc0030c0e37 BlockOwnerDeletion:0xc0030c0e38}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2023-03-15 21:15:45 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bd3b114d-9b01-4857-add1-5e899da24452\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"metrics-server\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":4443,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}}},"f:readinessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{".":{},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:securityContext":{".":{},"f:allowPrivilegeEscalation":{},"f:readOnlyRootFilesystem":{},"f:runAsNonRoot":{},"f:runAsUser":{}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/tmp\"}":{".":{},"f:mountPath":{},"f:name":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{},"f:volumes":{".":{},"k:{\"name\":\"tmp-dir\"}":{".":{},"f:emptyDir":{},"f:name":{}}}}} Subresource:} {Manager:kube-scheduler Operation:Update APIVersion:v1 Time:2023-03-15 21:15:45 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{".":{},"k:{\"type\":\"PodScheduled\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} Subresource:status} {Manager:calico Operation:Update APIVersion:v1 Time:2023-03-15 21:16:40 +0000 UTC FieldsType:FieldsV1 
FieldsV1:{"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} Subresource:status} {Manager:kubelet Operation:Update APIVersion:v1 Time:2023-03-15 21:17:06 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.108.196\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} Subresource:status}]}.
I0315 21:17:06.058592 1 disruption.go:427] updatePod called on pod "metrics-server-85c7d488df-7hk82"
I0315 21:17:06.058334 1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/metrics-server-85c7d488df", timestamp:time.Time{wall:0xc0fcab4040905bb6, ext:34480456918, loc:(*time.Location)(0x72c0b80)}}
... skipping 186 lines ...
I0315 21:17:43.763791 1 certificate_controller.go:173] Finished syncing certificate request "csr-t7f6z" (200ns)
I0315 21:17:47.886518 1 gc_controller.go:161] GC'ing orphaned
I0315 21:17:47.886543 1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0315 21:17:48.858722 1 taint_manager.go:436] "Noticed node update" node={nodeName:capz-jtbghr-md-0-vnk86}
I0315 21:17:48.858754 1 taint_manager.go:441] "Updating known taints on node" node="capz-jtbghr-md-0-vnk86" taints=[]
I0315 21:17:48.858867 1 attach_detach_controller.go:673] processVolumesInUse for node "capz-jtbghr-md-0-vnk86"
W0315 21:17:48.858886 1 actual_state_of_world.go:539] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="capz-jtbghr-md-0-vnk86" does not exist
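
Note on the warning above: this is a benign startup race. The attach/detach controller handles the node-add event before its informer cache contains the new Node object, so the lookup comes back NotFound and the statusUpdateNeeded field is simply recomputed on the next event. A small sketch of how such a cache miss surfaces (the error is constructed here purely for illustration):

    package main

    import (
    	"fmt"

    	apierrors "k8s.io/apimachinery/pkg/api/errors"
    	"k8s.io/apimachinery/pkg/runtime/schema"
    )

    func main() {
    	// A cache miss for a just-added node surfaces as NotFound; callers
    	// treat it as transient and recompute on the next informer event.
    	err := apierrors.NewNotFound(
    		schema.GroupResource{Resource: "nodes"}, "capz-jtbghr-md-0-vnk86")
    	fmt.Println(apierrors.IsNotFound(err)) // true
    }
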
I0315 21:17:48.859879 1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0fcab405b281ef1, ext:34926610449, loc:(*time.Location)(0x72c0b80)}}
I0315 21:17:48.859998 1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/cloud-node-manager", timestamp:time.Time{wall:0xc0fcab42dac99d03, ext:44920416803, loc:(*time.Location)(0x72c0b80)}}
I0315 21:17:48.860160 1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:1, del:0, key:"kube-system/cloud-node-manager", timestamp:time.Time{wall:0xc0fcab5f3344f29a, ext:158331152826, loc:(*time.Location)(0x72c0b80)}}
I0315 21:17:48.860234 1 daemon_controller.go:967] Nodes needing daemon pods for daemon set cloud-node-manager: [capz-jtbghr-md-0-vnk86], creating 1
I0315 21:17:48.860356 1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:1, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0fcab5f3347eea8, ext:158331348424, loc:(*time.Location)(0x72c0b80)}}
I0315 21:17:48.860435 1 daemon_controller.go:967] Nodes needing daemon pods for daemon set kube-proxy: [capz-jtbghr-md-0-vnk86], creating 1
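
Note on the lines above: each node registration makes the DaemonSet controller recompute which nodes still need a daemon pod; here cloud-node-manager and kube-proxy each create exactly one pod for capz-jtbghr-md-0-vnk86 and raise their expectations by one. A hypothetical helper illustrating the selection rule (nodesNeedingDaemonPods is illustrative; the real logic lives in daemon_controller.go):

    package main

    import (
    	"fmt"

    	v1 "k8s.io/api/core/v1"
    )

    // nodesNeedingDaemonPods: one daemon pod per node matching the
    // DaemonSet's node selector that does not already run one.
    func nodesNeedingDaemonPods(nodes []v1.Node, selector map[string]string,
    	hasPod map[string]bool) []string {
    	var need []string
    	for _, n := range nodes {
    		if matches(n.Labels, selector) && !hasPod[n.Name] {
    			need = append(need, n.Name)
    		}
    	}
    	return need
    }

    func matches(labels, selector map[string]string) bool {
    	for k, v := range selector {
    		if labels[k] != v {
    			return false
    		}
    	}
    	return true
    }

    func main() {
    	var n v1.Node
    	n.Name = "capz-jtbghr-md-0-vnk86"
    	n.Labels = map[string]string{"kubernetes.io/os": "linux"}
    	fmt.Println(nodesNeedingDaemonPods([]v1.Node{n},
    		map[string]string{"kubernetes.io/os": "linux"},
    		map[string]bool{})) // [capz-jtbghr-md-0-vnk86]
    }
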
... skipping 238 lines ...
I0315 21:17:53.892248 1 taint_manager.go:362] "Current tolerations for pod tolerate forever, cancelling any scheduled deletion" pod="kube-system/cloud-node-manager-c69z4"
I0315 21:17:53.892327 1 taint_manager.go:362] "Current tolerations for pod tolerate forever, cancelling any scheduled deletion" pod="kube-system/kube-proxy-8wggp"
I0315 21:17:53.892342 1 taint_manager.go:362] "Current tolerations for pod tolerate forever, cancelling any scheduled deletion" pod="calico-system/calico-node-fl44g"
I0315 21:17:54.092747 1 taint_manager.go:436] "Noticed node update" node={nodeName:capz-jtbghr-md-0-vnl4c}
I0315 21:17:54.092777 1 taint_manager.go:441] "Updating known taints on node" node="capz-jtbghr-md-0-vnl4c" taints=[]
I0315 21:17:54.092819 1 attach_detach_controller.go:673] processVolumesInUse for node "capz-jtbghr-md-0-vnl4c"
W0315 21:17:54.092835 1 actual_state_of_world.go:539] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="capz-jtbghr-md-0-vnl4c" does not exist
I0315 21:17:54.093579 1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"calico-system/calico-node", timestamp:time.Time{wall:0xc0fcab600823dfde, ext:161607565054, loc:(*time.Location)(0x72c0b80)}}
I0315 21:17:54.095394 1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0fcab6038c5bced, ext:162423479309, loc:(*time.Location)(0x72c0b80)}}
I0315 21:17:54.095982 1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:1, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0fcab6085b887b1, ext:163566976097, loc:(*time.Location)(0x72c0b80)}}
I0315 21:17:54.096087 1 daemon_controller.go:967] Nodes needing daemon pods for daemon set kube-proxy: [capz-jtbghr-md-0-vnl4c], creating 1
I0315 21:17:54.095925 1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:1, del:0, key:"calico-system/calico-node", timestamp:time.Time{wall:0xc0fcab6085b79c25, ext:163566915397, loc:(*time.Location)(0x72c0b80)}}
I0315 21:17:54.096394 1 daemon_controller.go:967] Nodes needing daemon pods for daemon set calico-node: [capz-jtbghr-md-0-vnl4c], creating 1
... skipping 180 lines ...
I0315 21:17:56.801191 1 taint_manager.go:401] "Noticed pod update" pod="calico-system/calico-typha-56698768cd-dgq6h"
I0315 21:17:56.801290 1 disruption.go:427] updatePod called on pod "calico-typha-56698768cd-dgq6h"
I0315 21:17:56.801381 1 disruption.go:433] updatePod "calico-typha-56698768cd-dgq6h" -> PDB "calico-typha"
I0315 21:17:56.812950 1 disruption.go:558] Finished syncing PodDisruptionBudget "calico-system/calico-typha" (21.937048ms)
I0315 21:17:56.813487 1 disruption.go:391] update DB "calico-typha"
I0315 21:17:56.813721 1 deployment_controller.go:578] "Finished syncing deployment" deployment="calico-system/calico-typha" duration="26.349618ms"
I0315 21:17:56.813816 1 deployment_controller.go:490] "Error syncing deployment" deployment="calico-system/calico-typha" err="Operation cannot be fulfilled on deployments.apps \"calico-typha\": the object has been modified; please apply your changes to the latest version and try again"
I0315 21:17:56.813920 1 deployment_controller.go:576] "Started syncing deployment" deployment="calico-system/calico-typha" startTime="2023-03-15 21:17:56.813907927 +0000 UTC m=+166.284904183"
I0315 21:17:56.814481 1 replica_set.go:653] Finished syncing ReplicaSet "calico-system/calico-typha-56698768cd" (50.774849ms)
I0315 21:17:56.815010 1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"calico-system/calico-typha-56698768cd", timestamp:time.Time{wall:0xc0fcab612d87118d, ext:166234822929, loc:(*time.Location)(0x72c0b80)}}
I0315 21:17:56.815277 1 replica_set_utils.go:59] Updating status for : calico-system/calico-typha-56698768cd, replicas 1->2 (need 2), fullyLabeledReplicas 1->2, readyReplicas 1->1, availableReplicas 1->1, sequence No: 1->2
I0315 21:17:56.815615 1 deployment_controller.go:281] "ReplicaSet updated" replicaSet="calico-system/calico-typha-56698768cd"
I0315 21:17:56.831523 1 deployment_controller.go:176] "Updating deployment" deployment="calico-system/calico-typha"
I0315 21:17:56.831835 1 deployment_controller.go:578] "Finished syncing deployment" deployment="calico-system/calico-typha" duration="17.916376ms"
I0315 21:17:56.831932 1 disruption.go:558] Finished syncing PodDisruptionBudget "calico-system/calico-typha" (18.84017ms)
E0315 21:17:56.831949 1 disruption.go:534] Error syncing PodDisruptionBudget calico-system/calico-typha, requeuing: Operation cannot be fulfilled on poddisruptionbudgets.policy "calico-typha": the object has been modified; please apply your changes to the latest version and try again
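
Note on the error above: the same conflict pattern at the PDB controller. The sync fails on a stale resourceVersion, the key is requeued, and the immediate follow-up sync (27.599µs, next line) succeeds from a fresher cache. Requeue-with-backoff is the standard controller workqueue pattern; a sketch using client-go's pre-generics workqueue (current for the client-go vintage in this log):

    package main

    import (
    	"fmt"

    	"k8s.io/client-go/util/workqueue"
    )

    func main() {
    	q := workqueue.NewRateLimitingQueue(workqueue.DefaultControllerRateLimiter())
    	key := "calico-system/calico-typha"

    	q.Add(key)
    	item, _ := q.Get()
    	// The sync hit a resourceVersion conflict: requeue with backoff
    	// instead of dropping the key, then mark this attempt finished.
    	q.AddRateLimited(item)
    	q.Done(item)

    	item, _ = q.Get() // redelivered after the (tiny) initial backoff
    	q.Forget(item)    // a successful sync clears the backoff history
    	q.Done(item)
    	fmt.Println("resynced:", item)
    }
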
I0315 21:17:56.831991 1 disruption.go:558] Finished syncing PodDisruptionBudget "calico-system/calico-typha" (27.599µs)
I0315 21:17:56.832074 1 deployment_controller.go:576] "Started syncing deployment" deployment="calico-system/calico-typha" startTime="2023-03-15 21:17:56.831930902 +0000 UTC m=+166.302927258"
I0315 21:17:56.832623 1 progress.go:195] Queueing up deployment "calico-typha" for a progress check after 485s
I0315 21:17:56.832820 1 deployment_controller.go:578] "Finished syncing deployment" deployment="calico-system/calico-typha" duration="879.894µs"
I0315 21:17:56.836887 1 replica_set_utils.go:59] Updating status for : calico-system/calico-typha-56698768cd, replicas 1->2 (need 2), fullyLabeledReplicas 1->2, readyReplicas 1->1, availableReplicas 1->1, sequence No: 2->2
I0315 21:17:56.837016 1 disruption.go:558] Finished syncing PodDisruptionBudget "calico-system/calico-typha" (19.8µs)
... skipping 340 lines ...
I0315 21:18:19.997603 1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:0, del:0, key:"calico-system/csi-node-driver", timestamp:time.Time{wall:0xc0fcab66fb763041, ext:189468597601, loc:(*time.Location)(0x72c0b80)}}
I0315 21:18:19.997609 1 daemon_controller.go:967] Nodes needing daemon pods for daemon set csi-node-driver: [], creating 0
I0315 21:18:19.997622 1 daemon_controller.go:1029] Pods to delete for daemon set csi-node-driver: [], deleting 0
I0315 21:18:19.997632 1 daemon_controller.go:1112] Updating daemon set status
I0315 21:18:19.997657 1 daemon_controller.go:1172] Finished syncing daemon set "calico-system/csi-node-driver" (610.995µs)
I0315 21:18:23.665736 1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="65µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:33958" resp=200
I0315 21:18:23.827313 1 node_lifecycle_controller.go:1038] ReadyCondition for Node capz-jtbghr-md-0-vnk86 transitioned from &NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:2023-03-15 21:17:59 +0000 UTC,LastTransitionTime:2023-03-15 21:17:48 +0000 UTC,Reason:KubeletNotReady,Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized,} to &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-03-15 21:18:19 +0000 UTC,LastTransitionTime:2023-03-15 21:18:19 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,}
I0315 21:18:23.827385 1 node_lifecycle_controller.go:1046] Node capz-jtbghr-md-0-vnk86 ReadyCondition updated. Updating timestamp.
I0315 21:18:23.853093 1 attach_detach_controller.go:673] processVolumesInUse for node "capz-jtbghr-md-0-vnk86"
I0315 21:18:23.853407 1 node_lifecycle_controller.go:892] Node capz-jtbghr-md-0-vnk86 is healthy again, removing all taints
I0315 21:18:23.853492 1 taint_manager.go:436] "Noticed node update" node={nodeName:capz-jtbghr-md-0-vnk86}
I0315 21:18:23.854106 1 taint_manager.go:441] "Updating known taints on node" node="capz-jtbghr-md-0-vnk86" taints=[]
I0315 21:18:23.854199 1 taint_manager.go:462] "All taints were removed from the node. Cancelling all evictions..." node="capz-jtbghr-md-0-vnk86"
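
Note on the lines above: once the taints come off, the taint manager cancels any scheduled NoExecute evictions. While the node was NotReady, daemon pods survived because they tolerate the not-ready/unreachable taints indefinitely, whereas ordinary pods carry the 300-second default tolerations (both shapes appear in the pod dumps earlier in this log). A sketch of the two toleration shapes:

    package main

    import (
    	"fmt"

    	v1 "k8s.io/api/core/v1"
    )

    func main() {
    	// No TolerationSeconds: tolerate the NoExecute taint forever, the
    	// shape daemon pods use, so they are never evicted from the node.
    	forever := v1.Toleration{
    		Key:      v1.TaintNodeNotReady, // "node.kubernetes.io/not-ready"
    		Operator: v1.TolerationOpExists,
    		Effect:   v1.TaintEffectNoExecute,
    	}
    	// The 300s default injected into ordinary pods: evict after 5 min.
    	seconds := int64(300)
    	bounded := forever
    	bounded.TolerationSeconds = &seconds

    	fmt.Println(forever.TolerationSeconds == nil, *bounded.TolerationSeconds)
    }
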
... skipping 72 lines ...
I0315 21:18:28.016316 1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:0, del:0, key:"calico-system/calico-node", timestamp:time.Time{wall:0xc0fcab6900f8ef23, ext:197487310403, loc:(*time.Location)(0x72c0b80)}}
I0315 21:18:28.016363 1 daemon_controller.go:967] Nodes needing daemon pods for daemon set calico-node: [], creating 0
I0315 21:18:28.016408 1 daemon_controller.go:1029] Pods to delete for daemon set calico-node: [], deleting 0
I0315 21:18:28.016461 1 daemon_controller.go:1112] Updating daemon set status
I0315 21:18:28.016502 1 daemon_controller.go:1172] Finished syncing daemon set "calico-system/calico-node" (1.625188ms)
I0315 21:18:28.661452 1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0315 21:18:28.855100 1 node_lifecycle_controller.go:1038] ReadyCondition for Node capz-jtbghr-md-0-vnl4c transitioned from &NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:2023-03-15 21:18:04 +0000 UTC,LastTransitionTime:2023-03-15 21:17:54 +0000 UTC,Reason:KubeletNotReady,Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized,} to &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-03-15 21:18:25 +0000 UTC,LastTransitionTime:2023-03-15 21:18:25 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,}
I0315 21:18:28.855179 1 node_lifecycle_controller.go:1046] Node capz-jtbghr-md-0-vnl4c ReadyCondition updated. Updating timestamp.
I0315 21:18:28.877969 1 node_lifecycle_controller.go:892] Node capz-jtbghr-md-0-vnl4c is healthy again, removing all taints
I0315 21:18:28.878289 1 taint_manager.go:436] "Noticed node update" node={nodeName:capz-jtbghr-md-0-vnl4c}
I0315 21:18:28.878302 1 taint_manager.go:441] "Updating known taints on node" node="capz-jtbghr-md-0-vnl4c" taints=[]
I0315 21:18:28.878314 1 taint_manager.go:462] "All taints were removed from the node. Cancelling all evictions..." node="capz-jtbghr-md-0-vnl4c"
I0315 21:18:28.878691 1 attach_detach_controller.go:673] processVolumesInUse for node "capz-jtbghr-md-0-vnl4c"
... skipping 205 lines ...
I0315 21:18:54.447962 1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:2, del:0, key:"kube-system/csi-azurefile-controller-7b7f546c46", timestamp:time.Time{wall:0xc0fcab6f9ab34fb7, ext:223918955223, loc:(*time.Location)(0x72c0b80)}}
I0315 21:18:54.448051 1 replica_set.go:563] "Too few replicas" replicaSet="kube-system/csi-azurefile-controller-7b7f546c46" need=2 creating=2
I0315 21:18:54.448305 1 event.go:294] "Event occurred" object="kube-system/csi-azurefile-controller" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set csi-azurefile-controller-7b7f546c46 to 2"
I0315 21:18:54.448466 1 deployment_controller.go:215] "ReplicaSet added" replicaSet="kube-system/csi-azurefile-controller-7b7f546c46"
I0315 21:18:54.460793 1 deployment_controller.go:176] "Updating deployment" deployment="kube-system/csi-azurefile-controller"
I0315 21:18:54.461641 1 controller_utils.go:581] Controller csi-azurefile-controller-7b7f546c46 created pod csi-azurefile-controller-7b7f546c46-48f2f
I0315 21:18:54.461293 1 replica_set.go:380] Pod csi-azurefile-controller-7b7f546c46-48f2f created: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"csi-azurefile-controller-7b7f546c46-48f2f", GenerateName:"csi-azurefile-controller-7b7f546c46-", Namespace:"kube-system", SelfLink:"", UID:"29384467-d2f4-4fc8-a432-068a6a0442fd", ResourceVersion:"1729", Generation:0, CreationTimestamp:time.Date(2023, time.March, 15, 21, 18, 54, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"csi-azurefile-controller", "pod-template-hash":"7b7f546c46"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"apps/v1", Kind:"ReplicaSet", Name:"csi-azurefile-controller-7b7f546c46", UID:"165f482e-b8e2-459e-bda2-c0da5293b630", Controller:(*bool)(0xc002117b2e), BlockOwnerDeletion:(*bool)(0xc002117b2f)}}, Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:time.Date(2023, time.March, 15, 21, 18, 54, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0021d63a8), Subresource:""}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"socket-dir", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(0xc0021d63c0), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"azure-cred", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0021d63d8), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), 
VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"kube-api-access-stjpc", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(0xc001a2ac00), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"csi-provisioner", Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-provisioner:v3.3.0", Command:[]string(nil), Args:[]string{"-v=2", "--csi-address=$(ADDRESS)", "--leader-election", "--leader-election-namespace=kube-system", "--timeout=300s", "--extra-create-metadata=true", "--kube-api-qps=50", "--kube-api-burst=100"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:524288000, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-stjpc", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", 
TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"csi-attacher", Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-attacher:v4.0.0", Command:[]string(nil), Args:[]string{"-v=2", "-csi-address=$(ADDRESS)", "-timeout=120s", "--leader-election", "--leader-election-namespace=kube-system", "--kube-api-qps=50", "--kube-api-burst=100"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:524288000, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-stjpc", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"csi-snapshotter", Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-snapshotter:v5.0.1", Command:[]string(nil), Args:[]string{"-v=2", "-csi-address=$(ADDRESS)", "--leader-election", "--leader-election-namespace=kube-system"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:104857600, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-stjpc", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", 
SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"csi-resizer", Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-resizer:v1.6.0", Command:[]string(nil), Args:[]string{"-csi-address=$(ADDRESS)", "-v=2", "--leader-election", "--leader-election-namespace=kube-system", "-handle-volume-inuse-error=false", "-feature-gates=RecoverVolumeExpansionFailure=true", "-timeout=120s"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:524288000, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-stjpc", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"liveness-probe", Image:"mcr.microsoft.com/oss/kubernetes-csi/livenessprobe:v2.8.0", Command:[]string(nil), Args:[]string{"--csi-address=/csi/csi.sock", "--probe-timeout=3s", "--health-port=29612", "--v=2"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:104857600, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-stjpc", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"azurefile", 
Image:"mcr.microsoft.com/k8s/csi/azurefile-csi:latest", Command:[]string(nil), Args:[]string{"--v=5", "--endpoint=$(CSI_ENDPOINT)", "--metrics-address=0.0.0.0:29614", "--user-agent-suffix=OSS-kubectl"}, WorkingDir:"", Ports:[]v1.ContainerPort{v1.ContainerPort{Name:"healthz", HostPort:29612, ContainerPort:29612, Protocol:"TCP", HostIP:""}, v1.ContainerPort{Name:"metrics", HostPort:29614, ContainerPort:29614, Protocol:"TCP", HostIP:""}}, EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"AZURE_CREDENTIAL_FILE", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc001a2ad20)}, v1.EnvVar{Name:"CSI_ENDPOINT", Value:"unix:///csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:209715200, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"azure-cred", ReadOnly:false, MountPath:"/etc/kubernetes/", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-stjpc", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(0xc002369840), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0021f62c0), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"csi-azurefile-controller-sa", DeprecatedServiceAccount:"csi-azurefile-controller-sa", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0008eb1f0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node-role.kubernetes.io/master", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node-role.kubernetes.io/controlplane", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node-role.kubernetes.io/control-plane", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0021f6330)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0021f6350)}}, 
HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-cluster-critical", Priority:(*int32)(0xc0021f6358), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0021f635c), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc00291abd0), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil), OS:(*v1.PodOS)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition(nil), Message:"", Reason:"", NominatedNodeName:"", HostIP:"", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:<nil>, InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus(nil), QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}.
I0315 21:18:54.462160 1 controller_utils.go:240] Lowered expectations &controller.ControlleeExpectations{add:1, del:0, key:"kube-system/csi-azurefile-controller-7b7f546c46", timestamp:time.Time{wall:0xc0fcab6f9ab34fb7, ext:223918955223, loc:(*time.Location)(0x72c0b80)}}
I0315 21:18:54.462213 1 taint_manager.go:401] "Noticed pod update" pod="kube-system/csi-azurefile-controller-7b7f546c46-48f2f"
I0315 21:18:54.462394 1 disruption.go:415] addPod called on pod "csi-azurefile-controller-7b7f546c46-48f2f"
I0315 21:18:54.462699 1 disruption.go:490] No PodDisruptionBudgets found for pod csi-azurefile-controller-7b7f546c46-48f2f, PodDisruptionBudget controller will avoid syncing.
I0315 21:18:54.462710 1 disruption.go:418] No matching pdb for pod "csi-azurefile-controller-7b7f546c46-48f2f"
I0315 21:18:54.462455 1 pvc_protection_controller.go:380] "Enqueuing PVCs for Pod" pod="kube-system/csi-azurefile-controller-7b7f546c46-48f2f" podUID=29384467-d2f4-4fc8-a432-068a6a0442fd
... skipping 2 lines ...
I0315 21:18:54.474584 1 replica_set.go:443] Pod csi-azurefile-controller-7b7f546c46-48f2f updated, objectMeta {Name:csi-azurefile-controller-7b7f546c46-48f2f GenerateName:csi-azurefile-controller-7b7f546c46- Namespace:kube-system SelfLink: UID:29384467-d2f4-4fc8-a432-068a6a0442fd ResourceVersion:1729 Generation:0 CreationTimestamp:2023-03-15 21:18:54 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[app:csi-azurefile-controller pod-template-hash:7b7f546c46] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:csi-azurefile-controller-7b7f546c46 UID:165f482e-b8e2-459e-bda2-c0da5293b630 Controller:0xc002117b2e BlockOwnerDeletion:0xc002117b2f}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2023-03-15 21:18:54 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"165f482e-b8e2-459e-bda2-c0da5293b630\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"azurefile\"}":{".":{},"f:args":{},"f:env":{".":{},"k:{\"name\":\"AZURE_CREDENTIAL_FILE\"}":{".":{},"f:name":{},"f:valueFrom":{".":{},"f:configMapKeyRef":{}}},"k:{\"name\":\"CSI_ENDPOINT\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":29612,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:hostPort":{},"f:name":{},"f:protocol":{}},"k:{\"containerPort\":29614,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:hostPort":{},"f:name":{},"f:protocol":{}}},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}},"k:{\"mountPath\":\"/etc/kubernetes/\"}":{".":{},"f:mountPath":{},"f:name":{}}}},"k:{\"name\":\"csi-attacher\"}":{".":{},"f:args":{},"f:env":{".":{},"k:{\"name\":\"ADDRESS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}}}},"k:{\"name\":\"csi-provisioner\"}":{".":{},"f:args":{},"f:env":{".":{},"k:{\"name\":\"ADDRESS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}}}},"k:{\"name\":\"csi-resizer\"}":{".":{},"f:args":{},"f:env":{".":{},"k:{\"name\":\"ADDRESS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}}}},"k:{\"name\":\"csi-snapshotter\"}":{".":{},"f:args":{},"f:env":
{".":{},"k:{\"name\":\"ADDRESS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}}}},"k:{\"name\":\"liveness-probe\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:hostNetwork":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{},"f:volumes":{".":{},"k:{\"name\":\"azure-cred\"}":{".":{},"f:hostPath":{".":{},"f:path":{},"f:type":{}},"f:name":{}},"k:{\"name\":\"socket-dir\"}":{".":{},"f:emptyDir":{},"f:name":{}}}}} Subresource:}]} -> {Name:csi-azurefile-controller-7b7f546c46-48f2f GenerateName:csi-azurefile-controller-7b7f546c46- Namespace:kube-system SelfLink: UID:29384467-d2f4-4fc8-a432-068a6a0442fd ResourceVersion:1731 Generation:0 CreationTimestamp:2023-03-15 21:18:54 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[app:csi-azurefile-controller pod-template-hash:7b7f546c46] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:csi-azurefile-controller-7b7f546c46 UID:165f482e-b8e2-459e-bda2-c0da5293b630 Controller:0xc0022546f7 BlockOwnerDeletion:0xc0022546f8}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2023-03-15 21:18:54 +0000 UTC FieldsType:FieldsV1 
FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"165f482e-b8e2-459e-bda2-c0da5293b630\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"azurefile\"}":{".":{},"f:args":{},"f:env":{".":{},"k:{\"name\":\"AZURE_CREDENTIAL_FILE\"}":{".":{},"f:name":{},"f:valueFrom":{".":{},"f:configMapKeyRef":{}}},"k:{\"name\":\"CSI_ENDPOINT\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":29612,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:hostPort":{},"f:name":{},"f:protocol":{}},"k:{\"containerPort\":29614,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:hostPort":{},"f:name":{},"f:protocol":{}}},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}},"k:{\"mountPath\":\"/etc/kubernetes/\"}":{".":{},"f:mountPath":{},"f:name":{}}}},"k:{\"name\":\"csi-attacher\"}":{".":{},"f:args":{},"f:env":{".":{},"k:{\"name\":\"ADDRESS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}}}},"k:{\"name\":\"csi-provisioner\"}":{".":{},"f:args":{},"f:env":{".":{},"k:{\"name\":\"ADDRESS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}}}},"k:{\"name\":\"csi-resizer\"}":{".":{},"f:args":{},"f:env":{".":{},"k:{\"name\":\"ADDRESS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}}}},"k:{\"name\":\"csi-snapshotter\"}":{".":{},"f:args":{},"f:env":{".":{},"k:{\"name\":\"ADDRESS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}}}},"k:{\"name\":\"liveness-probe\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:hostNetwork":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPoli
cy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{},"f:volumes":{".":{},"k:{\"name\":\"azure-cred\"}":{".":{},"f:hostPath":{".":{},"f:path":{},"f:type":{}},"f:name":{}},"k:{\"name\":\"socket-dir\"}":{".":{},"f:emptyDir":{},"f:name":{}}}}} Subresource:}]}.
I0315 21:18:54.474930 1 taint_manager.go:401] "Noticed pod update" pod="kube-system/csi-azurefile-controller-7b7f546c46-48f2f"
I0315 21:18:54.475026 1 disruption.go:427] updatePod called on pod "csi-azurefile-controller-7b7f546c46-48f2f"
I0315 21:18:54.475087 1 disruption.go:490] No PodDisruptionBudgets found for pod csi-azurefile-controller-7b7f546c46-48f2f, PodDisruptionBudget controller will avoid syncing.
I0315 21:18:54.475110 1 disruption.go:430] No matching pdb for pod "csi-azurefile-controller-7b7f546c46-48f2f"
I0315 21:18:54.478013 1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/csi-azurefile-controller" duration="40.796115ms"
I0315 21:18:54.478196 1 deployment_controller.go:490] "Error syncing deployment" deployment="kube-system/csi-azurefile-controller" err="Operation cannot be fulfilled on deployments.apps \"csi-azurefile-controller\": the object has been modified; please apply your changes to the latest version and try again"
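The "Operation cannot be fulfilled ... the object has been modified" error above is the apiserver's optimistic-concurrency conflict (HTTP 409): the deployment controller wrote against a stale resourceVersion, and the next sync simply retries against the latest copy. Below is a minimal sketch of how a client-go caller resolves the same conflict with retry.RetryOnConflict; the annotation mutation is a hypothetical stand-in for whatever change is being written, not what the deployment controller actually does.

// Sketch only: resolving a 409 "object has been modified" conflict with
// client-go's retry helper. The mutation below is hypothetical.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/retry"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "./kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// RetryOnConflict re-runs the closure whenever the apiserver rejects the
	// write with a conflict. Because the closure re-reads the object on every
	// attempt, each retry is based on the latest resourceVersion, which is
	// exactly what the error message asks for.
	err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
		d, err := cs.AppsV1().Deployments("kube-system").Get(context.TODO(), "csi-azurefile-controller", metav1.GetOptions{})
		if err != nil {
			return err
		}
		if d.Annotations == nil {
			d.Annotations = map[string]string{}
		}
		d.Annotations["example/touched"] = "true" // hypothetical change
		_, err = cs.AppsV1().Deployments("kube-system").Update(context.TODO(), d, metav1.UpdateOptions{})
		return err
	})
	fmt.Println("update result:", err)
}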
I0315 21:18:54.478784 1 deployment_controller.go:576] "Started syncing deployment" deployment="kube-system/csi-azurefile-controller" startTime="2023-03-15 21:18:54.478766652 +0000 UTC m=+223.949762908"
I0315 21:18:54.478834 1 replica_set.go:380] Pod csi-azurefile-controller-7b7f546c46-7kbbd created: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"csi-azurefile-controller-7b7f546c46-7kbbd", GenerateName:"csi-azurefile-controller-7b7f546c46-", Namespace:"kube-system", SelfLink:"", UID:"f954cbac-d5d2-41db-8a01-907c10abbf80", ResourceVersion:"1733", Generation:0, CreationTimestamp:time.Date(2023, time.March, 15, 21, 18, 54, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"csi-azurefile-controller", "pod-template-hash":"7b7f546c46"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"apps/v1", Kind:"ReplicaSet", Name:"csi-azurefile-controller-7b7f546c46", UID:"165f482e-b8e2-459e-bda2-c0da5293b630", Controller:(*bool)(0xc002255a6e), BlockOwnerDeletion:(*bool)(0xc002255a6f)}}, Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:time.Date(2023, time.March, 15, 21, 18, 54, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002154180), Subresource:""}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"socket-dir", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(0xc002154198), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"azure-cred", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0021541b0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), 
VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"kube-api-access-frhgd", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(0xc000da5180), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"csi-provisioner", Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-provisioner:v3.3.0", Command:[]string(nil), Args:[]string{"-v=2", "--csi-address=$(ADDRESS)", "--leader-election", "--leader-election-namespace=kube-system", "--timeout=300s", "--extra-create-metadata=true", "--kube-api-qps=50", "--kube-api-burst=100"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:524288000, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-frhgd", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", 
TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"csi-attacher", Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-attacher:v4.0.0", Command:[]string(nil), Args:[]string{"-v=2", "-csi-address=$(ADDRESS)", "-timeout=120s", "--leader-election", "--leader-election-namespace=kube-system", "--kube-api-qps=50", "--kube-api-burst=100"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:524288000, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-frhgd", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"csi-snapshotter", Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-snapshotter:v5.0.1", Command:[]string(nil), Args:[]string{"-v=2", "-csi-address=$(ADDRESS)", "--leader-election", "--leader-election-namespace=kube-system"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:104857600, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-frhgd", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", 
SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"csi-resizer", Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-resizer:v1.6.0", Command:[]string(nil), Args:[]string{"-csi-address=$(ADDRESS)", "-v=2", "--leader-election", "--leader-election-namespace=kube-system", "-handle-volume-inuse-error=false", "-feature-gates=RecoverVolumeExpansionFailure=true", "-timeout=120s"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:524288000, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-frhgd", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"liveness-probe", Image:"mcr.microsoft.com/oss/kubernetes-csi/livenessprobe:v2.8.0", Command:[]string(nil), Args:[]string{"--csi-address=/csi/csi.sock", "--probe-timeout=3s", "--health-port=29612", "--v=2"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:104857600, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-frhgd", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"azurefile", 
Image:"mcr.microsoft.com/k8s/csi/azurefile-csi:latest", Command:[]string(nil), Args:[]string{"--v=5", "--endpoint=$(CSI_ENDPOINT)", "--metrics-address=0.0.0.0:29614", "--user-agent-suffix=OSS-kubectl"}, WorkingDir:"", Ports:[]v1.ContainerPort{v1.ContainerPort{Name:"healthz", HostPort:29612, ContainerPort:29612, Protocol:"TCP", HostIP:""}, v1.ContainerPort{Name:"metrics", HostPort:29614, ContainerPort:29614, Protocol:"TCP", HostIP:""}}, EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"AZURE_CREDENTIAL_FILE", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc000da5440)}, v1.EnvVar{Name:"CSI_ENDPOINT", Value:"unix:///csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:209715200, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"azure-cred", ReadOnly:false, MountPath:"/etc/kubernetes/", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-frhgd", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(0xc002746c80), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002255f30), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"csi-azurefile-controller-sa", DeprecatedServiceAccount:"csi-azurefile-controller-sa", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0002168c0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node-role.kubernetes.io/master", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node-role.kubernetes.io/controlplane", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node-role.kubernetes.io/control-plane", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002255fa0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002255fc0)}}, 
HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-cluster-critical", Priority:(*int32)(0xc002255fc8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc002255fcc), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc002a9a140), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil), OS:(*v1.PodOS)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition(nil), Message:"", Reason:"", NominatedNodeName:"", HostIP:"", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:<nil>, InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus(nil), QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}.
I0315 21:18:54.479329 1 controller_utils.go:240] Lowered expectations &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/csi-azurefile-controller-7b7f546c46", timestamp:time.Time{wall:0xc0fcab6f9ab34fb7, ext:223918955223, loc:(*time.Location)(0x72c0b80)}}
I0315 21:18:54.479443 1 taint_manager.go:401] "Noticed pod update" pod="kube-system/csi-azurefile-controller-7b7f546c46-7kbbd"
I0315 21:18:54.479533 1 disruption.go:415] addPod called on pod "csi-azurefile-controller-7b7f546c46-7kbbd"
I0315 21:18:54.479609 1 disruption.go:490] No PodDisruptionBudgets found for pod csi-azurefile-controller-7b7f546c46-7kbbd, PodDisruptionBudget controller will avoid syncing.
I0315 21:18:54.479685 1 disruption.go:418] No matching pdb for pod "csi-azurefile-controller-7b7f546c46-7kbbd"
I0315 21:18:54.479748 1 deployment_util.go:775] Deployment "csi-azurefile-controller" timed out (false) [last progress check: 2023-03-15 21:18:54 +0000 UTC - now: 2023-03-15 21:18:54.479742945 +0000 UTC m=+223.950739201]
... skipping 210 lines ...
I0315 21:19:01.727161 1 taint_manager.go:401] "Noticed pod update" pod="kube-system/csi-snapshot-controller-5b8fcdb667-q2jvs"
I0315 21:19:01.727239 1 disruption.go:415] addPod called on pod "csi-snapshot-controller-5b8fcdb667-q2jvs"
I0315 21:19:01.727256 1 disruption.go:490] No PodDisruptionBudgets found for pod csi-snapshot-controller-5b8fcdb667-q2jvs, PodDisruptionBudget controller will avoid syncing.
I0315 21:19:01.727261 1 disruption.go:418] No matching pdb for pod "csi-snapshot-controller-5b8fcdb667-q2jvs"
I0315 21:19:01.727284 1 pvc_protection_controller.go:380] "Enqueuing PVCs for Pod" pod="kube-system/csi-snapshot-controller-5b8fcdb667-q2jvs" podUID=5473a952-ae7c-4df9-a819-8821e26bd38a
I0315 21:19:01.727401 1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/csi-snapshot-controller" duration="57.940096ms"
I0315 21:19:01.727415 1 deployment_controller.go:490] "Error syncing deployment" deployment="kube-system/csi-snapshot-controller" err="Operation cannot be fulfilled on deployments.apps \"csi-snapshot-controller\": the object has been modified; please apply your changes to the latest version and try again"
I0315 21:19:01.727438 1 deployment_controller.go:576] "Started syncing deployment" deployment="kube-system/csi-snapshot-controller" startTime="2023-03-15 21:19:01.727426965 +0000 UTC m=+231.198423221"
I0315 21:19:01.727660 1 deployment_util.go:775] Deployment "csi-snapshot-controller" timed out (false) [last progress check: 2023-03-15 21:19:01 +0000 UTC - now: 2023-03-15 21:19:01.727656563 +0000 UTC m=+231.198652819]
I0315 21:19:01.728000 1 controller_utils.go:581] Controller csi-snapshot-controller-5b8fcdb667 created pod csi-snapshot-controller-5b8fcdb667-q2jvs
I0315 21:19:01.728031 1 replica_set_utils.go:59] Updating status for : kube-system/csi-snapshot-controller-5b8fcdb667, replicas 0->0 (need 2), fullyLabeledReplicas 0->0, readyReplicas 0->0, availableReplicas 0->0, sequence No: 0->1
I0315 21:19:01.728264 1 event.go:294] "Event occurred" object="kube-system/csi-snapshot-controller-5b8fcdb667" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: csi-snapshot-controller-5b8fcdb667-q2jvs"
I0315 21:19:01.740770 1 deployment_controller.go:176] "Updating deployment" deployment="kube-system/csi-snapshot-controller"
... skipping 296 lines ...
I0315 21:21:48.221445 1 pv_controller_base.go:670] storeObjectUpdate updating claim "azurefile-8255/pvc-qzwrb" with version 2515
I0315 21:21:48.221595 1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-8255/pvc-qzwrb]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0315 21:21:48.221692 1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-8255/pvc-qzwrb]: no volume found
I0315 21:21:48.221768 1 pv_controller.go:1455] provisionClaim[azurefile-8255/pvc-qzwrb]: started
I0315 21:21:48.221858 1 pv_controller.go:1764] scheduleOperation[provision-azurefile-8255/pvc-qzwrb[78b93b26-5f70-40c0-aa5f-c45b6fa166d6]]
I0315 21:21:48.222012 1 pv_controller.go:1775] operation "provision-azurefile-8255/pvc-qzwrb[78b93b26-5f70-40c0-aa5f-c45b6fa166d6]" is already running, skipping
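The "is already running, skipping" line shows the PV controller deduplicating work: provisioning attempts are keyed by an operation name of the form provision-<namespace>/<claim>[uid], and scheduling a name that is already in flight is a no-op. A hypothetical sketch of that pattern follows; it illustrates the idea, it is not the actual kubernetes goroutinemap implementation.

// Hypothetical sketch of named-operation deduplication, matching the
// "operation ... is already running, skipping" behavior above.
package main

import (
	"fmt"
	"sync"
	"time"
)

type opMap struct {
	mu      sync.Mutex
	running map[string]bool
}

// run starts fn under the given operation name unless an operation with the
// same name is already in flight, in which case it skips.
func (m *opMap) run(name string, fn func()) {
	m.mu.Lock()
	if m.running[name] {
		m.mu.Unlock()
		fmt.Printf("operation %q is already running, skipping\n", name)
		return
	}
	m.running[name] = true
	m.mu.Unlock()
	go func() {
		defer func() {
			m.mu.Lock()
			delete(m.running, name)
			m.mu.Unlock()
		}()
		fn()
	}()
}

func main() {
	m := &opMap{running: map[string]bool{}}
	op := "provision-azurefile-8255/pvc-qzwrb"
	m.run(op, func() { time.Sleep(100 * time.Millisecond) }) // starts
	m.run(op, func() {})                                     // skipped: same name in flight
	time.Sleep(200 * time.Millisecond)
}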
I0315 21:21:48.227953 1 azure_provision.go:108] failed to get azure provider
I0315 21:21:48.227998 1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-8255/pvc-qzwrb" with StorageClass "azurefile-8255-kubernetes.io-azure-file-dynamic-sc-9b6fq": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0315 21:21:48.228040 1 goroutinemap.go:150] Operation for "provision-azurefile-8255/pvc-qzwrb[78b93b26-5f70-40c0-aa5f-c45b6fa166d6]" failed. No retries permitted until 2023-03-15 21:21:48.72802818 +0000 UTC m=+398.199024436 (durationBeforeRetry 500ms). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0315 21:21:48.228156 1 event.go:294] "Event occurred" object="azurefile-8255/pvc-qzwrb" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
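This is the root failure the rest of the log keeps repeating: the in-tree kubernetes.io/azure-file provisioner asks the controller-manager host for its Azure cloud provider object, and because this controller-manager is evidently not running with an Azure cloud configuration (e.g. --cloud-provider=azure plus a cloud-config file), the lookup yields nil and provisioning fails before any Azure API call is made. A hypothetical sketch of the nil-provider check; the names are illustrative, not the real kubernetes source.

// Hypothetical sketch of the check behind "failed to get Azure Cloud
// Provider. GetCloudProvider returned <nil> instead". Illustrative only.
package main

import (
	"errors"
	"fmt"
)

// azureCloud stands in for the provider object the volume plugin needs.
type azureCloud struct{}

// getCloudProvider stands in for the host lookup: it yields nil when the
// controller-manager was not started with an Azure cloud configuration.
func getCloudProvider() *azureCloud { return nil }

func newAzureFileProvisioner() (*azureCloud, error) {
	cloud := getCloudProvider()
	if cloud == nil {
		// This is the branch the log keeps hitting: no retry can succeed
		// until the controller-manager itself is reconfigured.
		return nil, errors.New("failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead")
	}
	return cloud, nil
}

func main() {
	if _, err := newAzureFileProvisioner(); err != nil {
		fmt.Println("ProvisioningFailed:", err)
	}
}

Because the nil provider is a configuration problem, no amount of retrying from inside this loop can succeed; the fix lies in the controller-manager's cloud-provider setup (or in provisioning through the CSI driver's own provisioner instead of the in-tree plugin), not in the claim.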
I0315 21:21:48.771930 1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Node total 70 items received
I0315 21:21:51.289961 1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.DaemonSet total 66 items received
I0315 21:21:52.708789 1 namespaced_resources_deleter.go:502] namespace controller - deleteAllContent - namespace: azurefile-5204
I0315 21:21:52.758406 1 serviceaccounts_controller.go:188] Finished syncing namespace "azurefile-5204" (2.8µs)
I0315 21:21:52.758402 1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azurefile-5204, name default, uid 0a46831c-ef09-4c07-9bde-f135e4046b24, event type delete
I0315 21:21:52.758436 1 tokens_controller.go:252] syncServiceAccount(azurefile-5204/default), service account deleted, removing tokens
... skipping 12 lines ...
I0315 21:21:57.899820 1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-8255/pvc-qzwrb]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0315 21:21:57.900052 1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-8255/pvc-qzwrb]: no volume found
I0315 21:21:57.900100 1 pv_controller.go:1455] provisionClaim[azurefile-8255/pvc-qzwrb]: started
I0315 21:21:57.900156 1 pv_controller.go:1764] scheduleOperation[provision-azurefile-8255/pvc-qzwrb[78b93b26-5f70-40c0-aa5f-c45b6fa166d6]]
I0315 21:21:57.900404 1 pv_controller.go:1496] provisionClaimOperation [azurefile-8255/pvc-qzwrb] started, class: "azurefile-8255-kubernetes.io-azure-file-dynamic-sc-9b6fq"
I0315 21:21:57.900432 1 pv_controller.go:1511] provisionClaimOperation [azurefile-8255/pvc-qzwrb]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0315 21:21:57.910642 1 azure_provision.go:108] failed to get azure provider
I0315 21:21:57.910664 1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-8255/pvc-qzwrb" with StorageClass "azurefile-8255-kubernetes.io-azure-file-dynamic-sc-9b6fq": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0315 21:21:57.910767 1 goroutinemap.go:150] Operation for "provision-azurefile-8255/pvc-qzwrb[78b93b26-5f70-40c0-aa5f-c45b6fa166d6]" failed. No retries permitted until 2023-03-15 21:21:58.910684102 +0000 UTC m=+408.381680458 (durationBeforeRetry 1s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0315 21:21:57.910853 1 event.go:294] "Event occurred" object="azurefile-8255/pvc-qzwrb" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
I0315 21:21:58.723394 1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.EndpointSlice total 33 items received
I0315 21:21:58.897277 1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0315 21:22:02.718614 1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Deployment total 60 items received
I0315 21:22:03.665524 1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="66.699µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:46996" resp=200
I0315 21:22:03.758203 1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 13 items received
I0315 21:22:07.894315 1 gc_controller.go:161] GC'ing orphaned
... skipping 4 lines ...
I0315 21:22:12.899957 1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-8255/pvc-qzwrb]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0315 21:22:12.900009 1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-8255/pvc-qzwrb]: no volume found
I0315 21:22:12.900020 1 pv_controller.go:1455] provisionClaim[azurefile-8255/pvc-qzwrb]: started
I0315 21:22:12.900029 1 pv_controller.go:1764] scheduleOperation[provision-azurefile-8255/pvc-qzwrb[78b93b26-5f70-40c0-aa5f-c45b6fa166d6]]
I0315 21:22:12.900103 1 pv_controller.go:1496] provisionClaimOperation [azurefile-8255/pvc-qzwrb] started, class: "azurefile-8255-kubernetes.io-azure-file-dynamic-sc-9b6fq"
I0315 21:22:12.900115 1 pv_controller.go:1511] provisionClaimOperation [azurefile-8255/pvc-qzwrb]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0315 21:22:12.909898 1 azure_provision.go:108] failed to get azure provider
I0315 21:22:12.909919 1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-8255/pvc-qzwrb" with StorageClass "azurefile-8255-kubernetes.io-azure-file-dynamic-sc-9b6fq": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0315 21:22:12.909948 1 goroutinemap.go:150] Operation for "provision-azurefile-8255/pvc-qzwrb[78b93b26-5f70-40c0-aa5f-c45b6fa166d6]" failed. No retries permitted until 2023-03-15 21:22:14.909936208 +0000 UTC m=+424.380932564 (durationBeforeRetry 2s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0315 21:22:12.910104 1 event.go:294] "Event occurred" object="azurefile-8255/pvc-qzwrb" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
I0315 21:22:13.665102 1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="68µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:49602" resp=200
I0315 21:22:19.621045 1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0315 21:22:22.718468 1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.CSINode total 14 items received
I0315 21:22:23.665299 1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="81.399µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:45986" resp=200
I0315 21:22:23.805069 1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 9 items received
I0315 21:22:27.733916 1 reflector.go:382] k8s.io/client-go/informers/factory.go:134: forcing resync
... skipping 4 lines ...
I0315 21:22:27.900616 1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-8255/pvc-qzwrb]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0315 21:22:27.900639 1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-8255/pvc-qzwrb]: no volume found
I0315 21:22:27.900645 1 pv_controller.go:1455] provisionClaim[azurefile-8255/pvc-qzwrb]: started
I0315 21:22:27.900678 1 pv_controller.go:1764] scheduleOperation[provision-azurefile-8255/pvc-qzwrb[78b93b26-5f70-40c0-aa5f-c45b6fa166d6]]
I0315 21:22:27.900699 1 pv_controller.go:1496] provisionClaimOperation [azurefile-8255/pvc-qzwrb] started, class: "azurefile-8255-kubernetes.io-azure-file-dynamic-sc-9b6fq"
I0315 21:22:27.900727 1 pv_controller.go:1511] provisionClaimOperation [azurefile-8255/pvc-qzwrb]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0315 21:22:27.909967 1 azure_provision.go:108] failed to get azure provider
I0315 21:22:27.909991 1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-8255/pvc-qzwrb" with StorageClass "azurefile-8255-kubernetes.io-azure-file-dynamic-sc-9b6fq": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0315 21:22:27.910140 1 goroutinemap.go:150] Operation for "provision-azurefile-8255/pvc-qzwrb[78b93b26-5f70-40c0-aa5f-c45b6fa166d6]" failed. No retries permitted until 2023-03-15 21:22:31.910126146 +0000 UTC m=+441.381122402 (durationBeforeRetry 4s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0315 21:22:27.910259 1 event.go:294] "Event occurred" object="azurefile-8255/pvc-qzwrb" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
I0315 21:22:28.910248 1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0315 21:22:33.665720 1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="66.6µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:49966" resp=200
I0315 21:22:35.719660 1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1beta1.CSIStorageCapacity total 0 items received
I0315 21:22:36.926195 1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0315 21:22:38.420921 1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0315 21:22:38.573747 1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 8 items received
... skipping 4 lines ...
I0315 21:22:42.901225 1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-8255/pvc-qzwrb]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0315 21:22:42.901282 1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-8255/pvc-qzwrb]: no volume found
I0315 21:22:42.901294 1 pv_controller.go:1455] provisionClaim[azurefile-8255/pvc-qzwrb]: started
I0315 21:22:42.901334 1 pv_controller.go:1764] scheduleOperation[provision-azurefile-8255/pvc-qzwrb[78b93b26-5f70-40c0-aa5f-c45b6fa166d6]]
I0315 21:22:42.901380 1 pv_controller.go:1496] provisionClaimOperation [azurefile-8255/pvc-qzwrb] started, class: "azurefile-8255-kubernetes.io-azure-file-dynamic-sc-9b6fq"
I0315 21:22:42.901423 1 pv_controller.go:1511] provisionClaimOperation [azurefile-8255/pvc-qzwrb]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0315 21:22:42.908442 1 azure_provision.go:108] failed to get azure provider
I0315 21:22:42.908464 1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-8255/pvc-qzwrb" with StorageClass "azurefile-8255-kubernetes.io-azure-file-dynamic-sc-9b6fq": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0315 21:22:42.908605 1 goroutinemap.go:150] Operation for "provision-azurefile-8255/pvc-qzwrb[78b93b26-5f70-40c0-aa5f-c45b6fa166d6]" failed. No retries permitted until 2023-03-15 21:22:50.908592117 +0000 UTC m=+460.379588473 (durationBeforeRetry 8s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0315 21:22:42.908711 1 event.go:294] "Event occurred" object="azurefile-8255/pvc-qzwrb" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
I0315 21:22:43.665362 1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="68.399µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:39214" resp=200
I0315 21:22:45.707026 1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Endpoints total 29 items received
I0315 21:22:45.726911 1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.PersistentVolumeClaim total 2 items received
I0315 21:22:47.716675 1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ReplicationController total 0 items received
I0315 21:22:47.895671 1 gc_controller.go:161] GC'ing orphaned
I0315 21:22:47.895696 1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
... skipping 4 lines ...
I0315 21:22:57.901988 1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-8255/pvc-qzwrb]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0315 21:22:57.902039 1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-8255/pvc-qzwrb]: no volume found
I0315 21:22:57.902046 1 pv_controller.go:1455] provisionClaim[azurefile-8255/pvc-qzwrb]: started
I0315 21:22:57.902056 1 pv_controller.go:1764] scheduleOperation[provision-azurefile-8255/pvc-qzwrb[78b93b26-5f70-40c0-aa5f-c45b6fa166d6]]
I0315 21:22:57.902068 1 pv_controller.go:1496] provisionClaimOperation [azurefile-8255/pvc-qzwrb] started, class: "azurefile-8255-kubernetes.io-azure-file-dynamic-sc-9b6fq"
I0315 21:22:57.902075 1 pv_controller.go:1511] provisionClaimOperation [azurefile-8255/pvc-qzwrb]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0315 21:22:57.908980 1 azure_provision.go:108] failed to get azure provider
I0315 21:22:57.908999 1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-8255/pvc-qzwrb" with StorageClass "azurefile-8255-kubernetes.io-azure-file-dynamic-sc-9b6fq": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0315 21:22:57.909111 1 goroutinemap.go:150] Operation for "provision-azurefile-8255/pvc-qzwrb[78b93b26-5f70-40c0-aa5f-c45b6fa166d6]" failed. No retries permitted until 2023-03-15 21:23:13.909016863 +0000 UTC m=+483.380013219 (durationBeforeRetry 16s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0315 21:22:57.909218 1 event.go:294] "Event occurred" object="azurefile-8255/pvc-qzwrb" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
I0315 21:22:58.923108 1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0315 21:22:59.977625 1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0315 21:23:01.719932 1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.CSIDriver total 10 items received
I0315 21:23:03.665045 1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="85.699µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:46850" resp=200
I0315 21:23:06.707739 1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Pod total 191 items received
I0315 21:23:07.896752 1 gc_controller.go:161] GC'ing orphaned
... skipping 23 lines ...
I0315 21:23:27.902861 1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-8255/pvc-qzwrb]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0315 21:23:27.902928 1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-8255/pvc-qzwrb]: no volume found
I0315 21:23:27.902958 1 pv_controller.go:1455] provisionClaim[azurefile-8255/pvc-qzwrb]: started
I0315 21:23:27.903003 1 pv_controller.go:1764] scheduleOperation[provision-azurefile-8255/pvc-qzwrb[78b93b26-5f70-40c0-aa5f-c45b6fa166d6]]
I0315 21:23:27.903039 1 pv_controller.go:1496] provisionClaimOperation [azurefile-8255/pvc-qzwrb] started, class: "azurefile-8255-kubernetes.io-azure-file-dynamic-sc-9b6fq"
I0315 21:23:27.903076 1 pv_controller.go:1511] provisionClaimOperation [azurefile-8255/pvc-qzwrb]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0315 21:23:27.905299 1 azure_provision.go:108] failed to get azure provider
I0315 21:23:27.905356 1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-8255/pvc-qzwrb" with StorageClass "azurefile-8255-kubernetes.io-azure-file-dynamic-sc-9b6fq": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0315 21:23:27.905408 1 goroutinemap.go:150] Operation for "provision-azurefile-8255/pvc-qzwrb[78b93b26-5f70-40c0-aa5f-c45b6fa166d6]" failed. No retries permitted until 2023-03-15 21:23:59.905395394 +0000 UTC m=+529.376391750 (durationBeforeRetry 32s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0315 21:23:27.905458 1 event.go:294] "Event occurred" object="azurefile-8255/pvc-qzwrb" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
I0315 21:23:28.934730 1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0315 21:23:31.721648 1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Namespace total 18 items received
I0315 21:23:31.726356 1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ClusterRole total 39 items received
I0315 21:23:32.716892 1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ControllerRevision total 21 items received
I0315 21:23:32.776263 1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 85 items received
I0315 21:23:33.320230 1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ClusterRoleBinding total 30 items received
... skipping 39 lines ...
I0315 21:24:12.905042 1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-8255/pvc-qzwrb]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0315 21:24:12.905064 1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-8255/pvc-qzwrb]: no volume found
I0315 21:24:12.905070 1 pv_controller.go:1455] provisionClaim[azurefile-8255/pvc-qzwrb]: started
I0315 21:24:12.905079 1 pv_controller.go:1764] scheduleOperation[provision-azurefile-8255/pvc-qzwrb[78b93b26-5f70-40c0-aa5f-c45b6fa166d6]]
I0315 21:24:12.905098 1 pv_controller.go:1496] provisionClaimOperation [azurefile-8255/pvc-qzwrb] started, class: "azurefile-8255-kubernetes.io-azure-file-dynamic-sc-9b6fq"
I0315 21:24:12.905107 1 pv_controller.go:1511] provisionClaimOperation [azurefile-8255/pvc-qzwrb]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0315 21:24:12.908649 1 azure_provision.go:108] failed to get azure provider
I0315 21:24:12.908689 1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-8255/pvc-qzwrb" with StorageClass "azurefile-8255-kubernetes.io-azure-file-dynamic-sc-9b6fq": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0315 21:24:12.908824 1 goroutinemap.go:150] Operation for "provision-azurefile-8255/pvc-qzwrb[78b93b26-5f70-40c0-aa5f-c45b6fa166d6]" failed. No retries permitted until 2023-03-15 21:25:16.908701824 +0000 UTC m=+606.379698180 (durationBeforeRetry 1m4s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0315 21:24:12.908853 1 event.go:294] "Event occurred" object="azurefile-8255/pvc-qzwrb" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
I0315 21:24:13.665446 1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="67.899µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:46596" resp=200
I0315 21:24:22.720620 1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v2.HorizontalPodAutoscaler total 0 items received
I0315 21:24:23.665307 1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="84.7µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:57120" resp=200
I0315 21:24:24.366247 1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1beta1.PodSecurityPolicy total 15 items received
I0315 21:24:25.970330 1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Secret total 84 items received
I0315 21:24:27.739235 1 reflector.go:382] k8s.io/client-go/informers/factory.go:134: forcing resync
... skipping 77 lines ...
I0315 21:25:27.908121 1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-8255/pvc-qzwrb]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0315 21:25:27.908156 1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-8255/pvc-qzwrb]: no volume found
I0315 21:25:27.908162 1 pv_controller.go:1455] provisionClaim[azurefile-8255/pvc-qzwrb]: started
I0315 21:25:27.908212 1 pv_controller.go:1764] scheduleOperation[provision-azurefile-8255/pvc-qzwrb[78b93b26-5f70-40c0-aa5f-c45b6fa166d6]]
I0315 21:25:27.908244 1 pv_controller.go:1496] provisionClaimOperation [azurefile-8255/pvc-qzwrb] started, class: "azurefile-8255-kubernetes.io-azure-file-dynamic-sc-9b6fq"
I0315 21:25:27.908291 1 pv_controller.go:1511] provisionClaimOperation [azurefile-8255/pvc-qzwrb]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0315 21:25:27.911473 1 azure_provision.go:108] failed to get azure provider
I0315 21:25:27.911493 1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-8255/pvc-qzwrb" with StorageClass "azurefile-8255-kubernetes.io-azure-file-dynamic-sc-9b6fq": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0315 21:25:27.911663 1 goroutinemap.go:150] Operation for "provision-azurefile-8255/pvc-qzwrb[78b93b26-5f70-40c0-aa5f-c45b6fa166d6]" failed. No retries permitted until 2023-03-15 21:27:29.911640149 +0000 UTC m=+739.382636405 (durationBeforeRetry 2m2s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0315 21:25:27.911738 1 event.go:294] "Event occurred" object="azurefile-8255/pvc-qzwrb" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
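
The durationBeforeRetry values across these failed operations — 500ms, 1s, 2s, 4s, 8s, 16s, 32s, 1m4s, and finally 2m2s — double on each attempt and then stop growing: exponential backoff with a cap. A sketch that reproduces the schedule (constants inferred from the durationBeforeRetry fields in this log, not quoted from Kubernetes source):

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// Delays as observed above: 500ms doubling per failure, capped at 2m2s.
    	const (
    		initialDelay = 500 * time.Millisecond
    		maxDelay     = 2*time.Minute + 2*time.Second
    	)
    	delay := initialDelay
    	for failures := 1; failures <= 10; failures++ {
    		fmt.Printf("failure %2d -> retry after %v\n", failures, delay)
    		delay *= 2
    		if delay > maxDelay {
    			delay = maxDelay
    		}
    	}
    }
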
I0315 21:25:28.313653 1 resource_quota_controller.go:194] Resource quota controller queued all resource quota for full calculation of usage
I0315 21:25:29.000151 1 deployment_controller.go:576] "Started syncing deployment" deployment="kube-system/coredns" startTime="2023-03-15 21:25:29.000103467 +0000 UTC m=+618.471099723"
I0315 21:25:29.000687 1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/coredns" duration="572.696µs"
I0315 21:25:29.052092 1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0315 21:25:29.418713 1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 13 items received
I0315 21:25:29.920977 1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
... skipping 95 lines ...
I0315 21:26:50.372634 1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-9696/pvc-4mwlj]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0315 21:26:50.372648 1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-9696/pvc-4mwlj]: no volume found
I0315 21:26:50.372684 1 pv_controller.go:1455] provisionClaim[azurefile-9696/pvc-4mwlj]: started
I0315 21:26:50.372692 1 pv_controller.go:1764] scheduleOperation[provision-azurefile-9696/pvc-4mwlj[5fef1225-c812-48f7-8891-a5e6547b0296]]
I0315 21:26:50.372698 1 pv_controller.go:1775] operation "provision-azurefile-9696/pvc-4mwlj[5fef1225-c812-48f7-8891-a5e6547b0296]" is already running, skipping
I0315 21:26:50.372850 1 pv_controller_base.go:670] storeObjectUpdate updating claim "azurefile-9696/pvc-4mwlj" with version 3575
I0315 21:26:50.374088 1 azure_provision.go:108] failed to get azure provider
I0315 21:26:50.374105 1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-9696/pvc-4mwlj" with StorageClass "azurefile-9696-kubernetes.io-azure-file-dynamic-sc-8wfkk": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0315 21:26:50.374152 1 goroutinemap.go:150] Operation for "provision-azurefile-9696/pvc-4mwlj[5fef1225-c812-48f7-8891-a5e6547b0296]" failed. No retries permitted until 2023-03-15 21:26:50.87411584 +0000 UTC m=+700.345112096 (durationBeforeRetry 500ms). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0315 21:26:50.374223 1 event.go:294] "Event occurred" object="azurefile-9696/pvc-4mwlj" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
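
At 21:26:50.372698 above, a resync schedules the provision operation while a previous attempt is still in flight and gets "is already running, skipping": operations are keyed as "provision-<namespace>/<claim>[<uid>]" and at most one goroutine may hold a key at a time. A simplified sketch of that keyed-singleton pattern (operationMap is an illustrative stand-in for the component logging at goroutinemap.go:150):

    package main

    import (
    	"fmt"
    	"sync"
    )

    // operationMap allows at most one running goroutine per operation name,
    // the contract enforced for the "provision-..." keys above.
    type operationMap struct {
    	mu      sync.Mutex
    	running map[string]bool
    }

    func (m *operationMap) Run(name string, op func()) error {
    	m.mu.Lock()
    	if m.running[name] {
    		m.mu.Unlock()
    		return fmt.Errorf("operation %q is already running, skipping", name)
    	}
    	m.running[name] = true
    	m.mu.Unlock()
    	go func() {
    		op()
    		m.mu.Lock()
    		delete(m.running, name)
    		m.mu.Unlock()
    	}()
    	return nil
    }

    func main() {
    	m := &operationMap{running: map[string]bool{}}
    	key := "provision-azurefile-9696/pvc-4mwlj[5fef1225-c812-48f7-8891-a5e6547b0296]"
    	done := make(chan struct{})
    	_ = m.Run(key, func() { <-done }) // first attempt holds the key
    	if err := m.Run(key, func() {}); err != nil {
    		fmt.Println(err) // duplicate is rejected while the first still runs
    	}
    	close(done)
    }
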
I0315 21:26:53.665571 1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="65.7µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:49560" resp=200
I0315 21:26:53.937106 1 namespaced_resources_deleter.go:502] namespace controller - deleteAllContent - namespace: azurefile-8255
I0315 21:26:53.982086 1 pvc_protection_controller.go:331] "Got event on PVC" pvc="azurefile-8255/pvc-qzwrb"
I0315 21:26:53.982128 1 pvc_protection_controller.go:149] "Processing PVC" PVC="azurefile-8255/pvc-qzwrb"
I0315 21:26:53.982137 1 pvc_protection_controller.go:230] "Looking for Pods using PVC in the Informer's cache" PVC="azurefile-8255/pvc-qzwrb"
I0315 21:26:53.982145 1 pvc_protection_controller.go:251] "No Pod using PVC was found in the Informer's cache" PVC="azurefile-8255/pvc-qzwrb"
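
The pvc_protection_controller lines show the claim-protection finalizer flow during teardown: before a PVC can actually be removed, the controller scans the informer cache for any pod still referencing it, and only an unused claim has its finalizer dropped. A toy sketch of that scan (the pod type and podsUsingPVC are illustrative, not the controller's real types):

    package main

    import "fmt"

    // Illustrative pod shape: a name plus the PVC names its volumes reference.
    type pod struct {
    	name   string
    	claims []string
    }

    // podsUsingPVC mirrors the cache scan logged above: a claim is "in use"
    // while any pod volume references it.
    func podsUsingPVC(pods []pod, claim string) []string {
    	var users []string
    	for _, p := range pods {
    		for _, c := range p.claims {
    			if c == claim {
    				users = append(users, p.name)
    			}
    		}
    	}
    	return users
    }

    func main() {
    	pods := []pod{{name: "azurefile-volume-tester", claims: []string{"pvc-qzwrb"}}}
    	fmt.Println(podsUsingPVC(pods, "pvc-qzwrb")) // [azurefile-volume-tester]
    	fmt.Println(podsUsingPVC(nil, "pvc-qzwrb"))  // [] -> finalizer may be removed
    }
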
... skipping 21 lines ...
I0315 21:26:54.112111 1 namespace_controller.go:180] Finished syncing namespace "azurefile-8255" (183.561247ms)
I0315 21:26:54.112122 1 namespace_controller.go:157] Content remaining in namespace azurefile-8255, waiting 8 seconds
I0315 21:26:54.407059 1 namespaced_resources_deleter.go:502] namespace controller - deleteAllContent - namespace: azurefile-7916
I0315 21:26:54.446489 1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=configmaps, namespace azurefile-7916, name kube-root-ca.crt, uid 57331147-7b80-4529-9d79-2dd62ec59909, event type delete
I0315 21:26:54.447989 1 publisher.go:186] Finished syncing namespace "azurefile-7916" (1.46709ms)
I0315 21:26:54.471256 1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=secrets, namespace azurefile-7916, name default-token-6mp2p, uid 9683dd9c-c835-45c2-8e4b-e62520866599, event type delete
E0315 21:26:54.481901 1 tokens_controller.go:262] error synchronizing serviceaccount azurefile-7916/default: secrets "default-token-wcpnh" is forbidden: unable to create new content in namespace azurefile-7916 because it is being terminated
I0315 21:26:54.489531 1 tokens_controller.go:252] syncServiceAccount(azurefile-7916/default), service account deleted, removing tokens
I0315 21:26:54.489709 1 serviceaccounts_controller.go:188] Finished syncing namespace "azurefile-7916" (1.5µs)
I0315 21:26:54.489786 1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azurefile-7916, name default, uid a8ea0800-e879-42d1-bc4e-82c781f7ebc1, event type delete
I0315 21:26:54.544016 1 namespaced_resources_deleter.go:556] namespace controller - deleteAllContent - namespace: azurefile-7916, estimate: 0, errors: <nil>
I0315 21:26:54.544425 1 serviceaccounts_controller.go:188] Finished syncing namespace "azurefile-7916" (2.2µs)
I0315 21:26:54.553274 1 namespace_controller.go:180] Finished syncing namespace "azurefile-7916" (148.371594ms)
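
The tokens_controller error above ("forbidden: unable to create new content in namespace azurefile-7916 because it is being terminated") is expected churn rather than a test failure: the token controller races namespace deletion, and the API server rejects writes into a namespace whose phase is Terminating. A client-go sketch of observing that phase before attempting writes, again assuming the ./kubeconfig used elsewhere in this job:

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "./kubeconfig") // illustrative path
    	if err != nil {
    		panic(err)
    	}
    	client := kubernetes.NewForConfigOrDie(cfg)

    	ns, err := client.CoreV1().Namespaces().Get(context.TODO(), "azurefile-7916", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	// Writes into a Terminating namespace fail with the "forbidden ...
    	// because it is being terminated" error seen in this log.
    	if ns.Status.Phase == corev1.NamespaceTerminating {
    		fmt.Println("namespace is terminating; skip creating new objects")
    	}
    }
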
... skipping 13 lines ...
I0315 21:26:57.911376 1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-9696/pvc-4mwlj]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0315 21:26:57.911436 1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-9696/pvc-4mwlj]: no volume found
I0315 21:26:57.911447 1 pv_controller.go:1455] provisionClaim[azurefile-9696/pvc-4mwlj]: started
I0315 21:26:57.911457 1 pv_controller.go:1764] scheduleOperation[provision-azurefile-9696/pvc-4mwlj[5fef1225-c812-48f7-8891-a5e6547b0296]]
I0315 21:26:57.911477 1 pv_controller.go:1496] provisionClaimOperation [azurefile-9696/pvc-4mwlj] started, class: "azurefile-9696-kubernetes.io-azure-file-dynamic-sc-8wfkk"
I0315 21:26:57.911484 1 pv_controller.go:1511] provisionClaimOperation [azurefile-9696/pvc-4mwlj]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0315 21:26:57.917962 1 azure_provision.go:108] failed to get azure provider
I0315 21:26:57.917982 1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-9696/pvc-4mwlj" with StorageClass "azurefile-9696-kubernetes.io-azure-file-dynamic-sc-8wfkk": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0315 21:26:57.918011 1 goroutinemap.go:150] Operation for "provision-azurefile-9696/pvc-4mwlj[5fef1225-c812-48f7-8891-a5e6547b0296]" failed. No retries permitted until 2023-03-15 21:26:58.917999495 +0000 UTC m=+708.388995851 (durationBeforeRetry 1s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0315 21:26:57.918135 1 event.go:294] "Event occurred" object="azurefile-9696/pvc-4mwlj" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
I0315 21:26:59.099441 1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0315 21:26:59.113980 1 namespaced_resources_deleter.go:502] namespace controller - deleteAllContent - namespace: azurefile-8255
I0315 21:26:59.233578 1 serviceaccounts_controller.go:188] Finished syncing namespace "azurefile-8255" (2.3µs)
I0315 21:26:59.233689 1 namespaced_resources_deleter.go:556] namespace controller - deleteAllContent - namespace: azurefile-8255, estimate: 0, errors: <nil>
I0315 21:26:59.246690 1 namespace_controller.go:180] Finished syncing namespace "azurefile-8255" (134.193198ms)
I0315 21:26:59.545224 1 namespace_controller.go:185] Namespace has been deleted azurefile-7916
... skipping 11 lines ...
I0315 21:27:12.912561 1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-9696/pvc-4mwlj]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0315 21:27:12.912583 1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-9696/pvc-4mwlj]: no volume found
I0315 21:27:12.912589 1 pv_controller.go:1455] provisionClaim[azurefile-9696/pvc-4mwlj]: started
I0315 21:27:12.912599 1 pv_controller.go:1764] scheduleOperation[provision-azurefile-9696/pvc-4mwlj[5fef1225-c812-48f7-8891-a5e6547b0296]]
I0315 21:27:12.912617 1 pv_controller.go:1496] provisionClaimOperation [azurefile-9696/pvc-4mwlj] started, class: "azurefile-9696-kubernetes.io-azure-file-dynamic-sc-8wfkk"
I0315 21:27:12.912629 1 pv_controller.go:1511] provisionClaimOperation [azurefile-9696/pvc-4mwlj]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0315 21:27:12.920035 1 azure_provision.go:108] failed to get azure provider
I0315 21:27:12.920057 1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-9696/pvc-4mwlj" with StorageClass "azurefile-9696-kubernetes.io-azure-file-dynamic-sc-8wfkk": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0315 21:27:12.920440 1 event.go:294] "Event occurred" object="azurefile-9696/pvc-4mwlj" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
E0315 21:27:12.920469 1 goroutinemap.go:150] Operation for "provision-azurefile-9696/pvc-4mwlj[5fef1225-c812-48f7-8891-a5e6547b0296]" failed. No retries permitted until 2023-03-15 21:27:14.920076467 +0000 UTC m=+724.391072723 (durationBeforeRetry 2s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0315 21:27:13.665331 1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="72µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:46036" resp=200
I0315 21:27:16.916439 1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0315 21:27:22.921120 1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0315 21:27:23.665171 1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="80.7µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:47150" resp=200
I0315 21:27:25.291262 1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.DaemonSet total 0 items received
I0315 21:27:27.746743 1 reflector.go:382] k8s.io/client-go/informers/factory.go:134: forcing resync
... skipping 4 lines ...
I0315 21:27:27.913166 1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-9696/pvc-4mwlj]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0315 21:27:27.913188 1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-9696/pvc-4mwlj]: no volume found
I0315 21:27:27.913193 1 pv_controller.go:1455] provisionClaim[azurefile-9696/pvc-4mwlj]: started
I0315 21:27:27.913202 1 pv_controller.go:1764] scheduleOperation[provision-azurefile-9696/pvc-4mwlj[5fef1225-c812-48f7-8891-a5e6547b0296]]
I0315 21:27:27.913215 1 pv_controller.go:1496] provisionClaimOperation [azurefile-9696/pvc-4mwlj] started, class: "azurefile-9696-kubernetes.io-azure-file-dynamic-sc-8wfkk"
I0315 21:27:27.913221 1 pv_controller.go:1511] provisionClaimOperation [azurefile-9696/pvc-4mwlj]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0315 21:27:27.918549 1 azure_provision.go:108] failed to get azure provider
I0315 21:27:27.918571 1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-9696/pvc-4mwlj" with StorageClass "azurefile-9696-kubernetes.io-azure-file-dynamic-sc-8wfkk": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0315 21:27:27.918709 1 goroutinemap.go:150] Operation for "provision-azurefile-9696/pvc-4mwlj[5fef1225-c812-48f7-8891-a5e6547b0296]" failed. No retries permitted until 2023-03-15 21:27:31.918695471 +0000 UTC m=+741.389691827 (durationBeforeRetry 4s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0315 21:27:27.918762 1 event.go:294] "Event occurred" object="azurefile-9696/pvc-4mwlj" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
I0315 21:27:29.114195 1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0315 21:27:31.624706 1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0315 21:27:31.711940 1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.StorageClass total 10 items received
I0315 21:27:33.665624 1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="70.599µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:50614" resp=200
I0315 21:27:41.292629 1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.RuntimeClass total 0 items received
I0315 21:27:42.747723 1 reflector.go:382] k8s.io/client-go/informers/factory.go:134: forcing resync
... skipping 2 lines ...
I0315 21:27:42.913690 1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-9696/pvc-4mwlj]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0315 21:27:42.913754 1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-9696/pvc-4mwlj]: no volume found
I0315 21:27:42.913764 1 pv_controller.go:1455] provisionClaim[azurefile-9696/pvc-4mwlj]: started
I0315 21:27:42.913775 1 pv_controller.go:1764] scheduleOperation[provision-azurefile-9696/pvc-4mwlj[5fef1225-c812-48f7-8891-a5e6547b0296]]
I0315 21:27:42.913797 1 pv_controller.go:1496] provisionClaimOperation [azurefile-9696/pvc-4mwlj] started, class: "azurefile-9696-kubernetes.io-azure-file-dynamic-sc-8wfkk"
I0315 21:27:42.913803 1 pv_controller.go:1511] provisionClaimOperation [azurefile-9696/pvc-4mwlj]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0315 21:27:42.919221 1 azure_provision.go:108] failed to get azure provider
I0315 21:27:42.919243 1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-9696/pvc-4mwlj" with StorageClass "azurefile-9696-kubernetes.io-azure-file-dynamic-sc-8wfkk": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0315 21:27:42.919270 1 goroutinemap.go:150] Operation for "provision-azurefile-9696/pvc-4mwlj[5fef1225-c812-48f7-8891-a5e6547b0296]" failed. No retries permitted until 2023-03-15 21:27:50.919259575 +0000 UTC m=+760.390255831 (durationBeforeRetry 8s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0315 21:27:42.919619 1 event.go:294] "Event occurred" object="azurefile-9696/pvc-4mwlj" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
I0315 21:27:43.665147 1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="72µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:54376" resp=200
I0315 21:27:47.907883 1 gc_controller.go:161] GC'ing orphaned
I0315 21:27:47.907907 1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
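
The gc_controller pair repeating every 20 seconds is the pod garbage collector's two passes: removing pods bound to nodes that no longer exist, and removing terminating pods that never got scheduled. Neither finds work in this run. A toy sketch of the two predicates (the pod struct and function names are illustrative, not the controller's real types):

    package main

    import (
    	"fmt"
    	"time"
    )

    // Illustrative pod shape; only the fields the two GC passes inspect.
    type pod struct {
    	nodeName          string
    	deletionTimestamp *time.Time
    }

    // isOrphaned mirrors the "GC'ing orphaned" pass: the pod claims a node
    // that is no longer present in the cluster.
    func isOrphaned(p pod, liveNodes map[string]bool) bool {
    	return p.nodeName != "" && !liveNodes[p.nodeName]
    }

    // isUnscheduledTerminating mirrors the second pass: deletion was
    // requested but the pod was never bound to any node.
    func isUnscheduledTerminating(p pod) bool {
    	return p.deletionTimestamp != nil && p.nodeName == ""
    }

    func main() {
    	now := time.Now()
    	fmt.Println(isUnscheduledTerminating(pod{deletionTimestamp: &now}))        // true
    	fmt.Println(isOrphaned(pod{nodeName: "vanished-node"}, map[string]bool{})) // true
    }
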
I0315 21:27:53.665706 1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="68.9µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:46348" resp=200
I0315 21:27:57.748606 1 reflector.go:382] k8s.io/client-go/informers/factory.go:134: forcing resync
I0315 21:27:57.913926 1 pv_controller_base.go:556] resyncing PV controller
I0315 21:27:57.914056 1 pv_controller_base.go:670] storeObjectUpdate updating claim "azurefile-9696/pvc-4mwlj" with version 3575
I0315 21:27:57.914079 1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-9696/pvc-4mwlj]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0315 21:27:57.914133 1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-9696/pvc-4mwlj]: no volume found
I0315 21:27:57.914144 1 pv_controller.go:1455] provisionClaim[azurefile-9696/pvc-4mwlj]: started
I0315 21:27:57.914153 1 pv_controller.go:1764] scheduleOperation[provision-azurefile-9696/pvc-4mwlj[5fef1225-c812-48f7-8891-a5e6547b0296]]
I0315 21:27:57.914251 1 pv_controller.go:1496] provisionClaimOperation [azurefile-9696/pvc-4mwlj] started, class: "azurefile-9696-kubernetes.io-azure-file-dynamic-sc-8wfkk"
I0315 21:27:57.914284 1 pv_controller.go:1511] provisionClaimOperation [azurefile-9696/pvc-4mwlj]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0315 21:27:57.928316 1 azure_provision.go:108] failed to get azure provider
I0315 21:27:57.928374 1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-9696/pvc-4mwlj" with StorageClass "azurefile-9696-kubernetes.io-azure-file-dynamic-sc-8wfkk": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0315 21:27:57.928448 1 goroutinemap.go:150] Operation for "provision-azurefile-9696/pvc-4mwlj[5fef1225-c812-48f7-8891-a5e6547b0296]" failed. No retries permitted until 2023-03-15 21:28:13.928422683 +0000 UTC m=+783.399419039 (durationBeforeRetry 16s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0315 21:27:57.928521 1 event.go:294] "Event occurred" object="azurefile-9696/pvc-4mwlj" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
I0315 21:27:59.129224 1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0315 21:28:03.665278 1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="79.999µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:34588" resp=200
I0315 21:28:06.937921 1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0315 21:28:07.908486 1 gc_controller.go:161] GC'ing orphaned
I0315 21:28:07.908513 1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0315 21:28:12.749240 1 reflector.go:382] k8s.io/client-go/informers/factory.go:134: forcing resync
... skipping 16 lines ...
I0315 21:28:27.914752 1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-9696/pvc-4mwlj]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0315 21:28:27.914805 1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-9696/pvc-4mwlj]: no volume found
I0315 21:28:27.914863 1 pv_controller.go:1455] provisionClaim[azurefile-9696/pvc-4mwlj]: started
I0315 21:28:27.914905 1 pv_controller.go:1764] scheduleOperation[provision-azurefile-9696/pvc-4mwlj[5fef1225-c812-48f7-8891-a5e6547b0296]]
I0315 21:28:27.914938 1 pv_controller.go:1496] provisionClaimOperation [azurefile-9696/pvc-4mwlj] started, class: "azurefile-9696-kubernetes.io-azure-file-dynamic-sc-8wfkk"
I0315 21:28:27.914958 1 pv_controller.go:1511] provisionClaimOperation [azurefile-9696/pvc-4mwlj]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0315 21:28:27.918550 1 azure_provision.go:108] failed to get azure provider
I0315 21:28:27.918569 1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-9696/pvc-4mwlj" with StorageClass "azurefile-9696-kubernetes.io-azure-file-dynamic-sc-8wfkk": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0315 21:28:27.918598 1 goroutinemap.go:150] Operation for "provision-azurefile-9696/pvc-4mwlj[5fef1225-c812-48f7-8891-a5e6547b0296]" failed. No retries permitted until 2023-03-15 21:28:59.91858706 +0000 UTC m=+829.389583316 (durationBeforeRetry 32s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0315 21:28:27.918623 1 event.go:294] "Event occurred" object="azurefile-9696/pvc-4mwlj" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
I0315 21:28:29.150647 1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0315 21:28:32.538650 1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0315 21:28:33.664993 1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="82.899µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:51806" resp=200
I0315 21:28:35.424572 1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0315 21:28:42.749784 1 reflector.go:382] k8s.io/client-go/informers/factory.go:134: forcing resync
I0315 21:28:42.914974 1 pv_controller_base.go:556] resyncing PV controller
... skipping 36 lines ...
I0315 21:29:12.915805 1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-9696/pvc-4mwlj]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0315 21:29:12.915827 1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-9696/pvc-4mwlj]: no volume found
I0315 21:29:12.915838 1 pv_controller.go:1455] provisionClaim[azurefile-9696/pvc-4mwlj]: started
I0315 21:29:12.915847 1 pv_controller.go:1764] scheduleOperation[provision-azurefile-9696/pvc-4mwlj[5fef1225-c812-48f7-8891-a5e6547b0296]]
I0315 21:29:12.915901 1 pv_controller.go:1496] provisionClaimOperation [azurefile-9696/pvc-4mwlj] started, class: "azurefile-9696-kubernetes.io-azure-file-dynamic-sc-8wfkk"
I0315 21:29:12.915912 1 pv_controller.go:1511] provisionClaimOperation [azurefile-9696/pvc-4mwlj]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0315 21:29:12.923043 1 azure_provision.go:108] failed to get azure provider
I0315 21:29:12.923068 1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-9696/pvc-4mwlj" with StorageClass "azurefile-9696-kubernetes.io-azure-file-dynamic-sc-8wfkk": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0315 21:29:12.923212 1 goroutinemap.go:150] Operation for "provision-azurefile-9696/pvc-4mwlj[5fef1225-c812-48f7-8891-a5e6547b0296]" failed. No retries permitted until 2023-03-15 21:30:16.923198934 +0000 UTC m=+906.394195190 (durationBeforeRetry 1m4s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0315 21:29:12.923297 1 event.go:294] "Event occurred" object="azurefile-9696/pvc-4mwlj" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
I0315 21:29:13.665505 1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="73.1µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:51756" resp=200
I0315 21:29:21.727933 1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ClusterRole total 0 items received
I0315 21:29:23.575152 1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0315 21:29:23.665065 1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="65.4µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:45764" resp=200
I0315 21:29:27.751578 1 reflector.go:382] k8s.io/client-go/informers/factory.go:134: forcing resync
I0315 21:29:27.910731 1 gc_controller.go:161] GC'ing orphaned
... skipping 68 lines ...
I0315 21:30:27.919382 1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-9696/pvc-4mwlj]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0315 21:30:27.919557 1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-9696/pvc-4mwlj]: no volume found
I0315 21:30:27.919578 1 pv_controller.go:1455] provisionClaim[azurefile-9696/pvc-4mwlj]: started
I0315 21:30:27.919683 1 pv_controller.go:1764] scheduleOperation[provision-azurefile-9696/pvc-4mwlj[5fef1225-c812-48f7-8891-a5e6547b0296]]
I0315 21:30:27.919705 1 pv_controller.go:1496] provisionClaimOperation [azurefile-9696/pvc-4mwlj] started, class: "azurefile-9696-kubernetes.io-azure-file-dynamic-sc-8wfkk"
I0315 21:30:27.919736 1 pv_controller.go:1511] provisionClaimOperation [azurefile-9696/pvc-4mwlj]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0315 21:30:27.923488 1 azure_provision.go:108] failed to get azure provider
I0315 21:30:27.923512 1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-9696/pvc-4mwlj" with StorageClass "azurefile-9696-kubernetes.io-azure-file-dynamic-sc-8wfkk": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0315 21:30:27.923646 1 goroutinemap.go:150] Operation for "provision-azurefile-9696/pvc-4mwlj[5fef1225-c812-48f7-8891-a5e6547b0296]" failed. No retries permitted until 2023-03-15 21:32:29.923632214 +0000 UTC m=+1039.394628470 (durationBeforeRetry 2m2s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0315 21:30:27.923708 1 event.go:294] "Event occurred" object="azurefile-9696/pvc-4mwlj" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
I0315 21:30:28.314494 1 resource_quota_controller.go:194] Resource quota controller queued all resource quota for full calculation of usage
I0315 21:30:29.245137 1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0315 21:30:31.826567 1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Service total 0 items received
I0315 21:30:33.665487 1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="66.9µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:43106" resp=200
I0315 21:30:33.778451 1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0315 21:30:42.754705 1 reflector.go:382] k8s.io/client-go/informers/factory.go:134: forcing resync
... skipping 90 lines ...
I0315 21:31:51.698592 1 pv_controller_base.go:670] storeObjectUpdate updating claim "azurefile-6810/pvc-g5qrt" with version 4632
I0315 21:31:51.698601 1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-6810/pvc-g5qrt]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0315 21:31:51.698614 1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-6810/pvc-g5qrt]: no volume found
I0315 21:31:51.698635 1 pv_controller.go:1455] provisionClaim[azurefile-6810/pvc-g5qrt]: started
I0315 21:31:51.698645 1 pv_controller.go:1764] scheduleOperation[provision-azurefile-6810/pvc-g5qrt[a19050c3-f1ad-4b04-80a9-58ac36ca155c]]
I0315 21:31:51.698650 1 pv_controller.go:1775] operation "provision-azurefile-6810/pvc-g5qrt[a19050c3-f1ad-4b04-80a9-58ac36ca155c]" is already running, skipping
I0315 21:31:51.700174 1 azure_provision.go:108] failed to get azure provider
I0315 21:31:51.700194 1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-6810/pvc-g5qrt" with StorageClass "azurefile-6810-kubernetes.io-azure-file-dynamic-sc-dckp6": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0315 21:31:51.700224 1 goroutinemap.go:150] Operation for "provision-azurefile-6810/pvc-g5qrt[a19050c3-f1ad-4b04-80a9-58ac36ca155c]" failed. No retries permitted until 2023-03-15 21:31:52.200212823 +0000 UTC m=+1001.671209079 (durationBeforeRetry 500ms). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0315 21:31:51.700336 1 event.go:294] "Event occurred" object="azurefile-6810/pvc-g5qrt" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
I0315 21:31:52.773167 1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Event total 11 items received
I0315 21:31:53.665652 1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="83.799µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:57686" resp=200
I0315 21:31:53.729506 1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.PodTemplate total 7 items received
I0315 21:31:55.731737 1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1beta1.CSIStorageCapacity total 0 items received
I0315 21:31:56.089488 1 namespaced_resources_deleter.go:502] namespace controller - deleteAllContent - namespace: azurefile-9696
I0315 21:31:56.131954 1 pvc_protection_controller.go:331] "Got event on PVC" pvc="azurefile-9696/pvc-4mwlj"
... skipping 29 lines ...
I0315 21:31:57.922455 1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-6810/pvc-g5qrt]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0315 21:31:57.922485 1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-6810/pvc-g5qrt]: no volume found
I0315 21:31:57.922506 1 pv_controller.go:1455] provisionClaim[azurefile-6810/pvc-g5qrt]: started
I0315 21:31:57.922545 1 pv_controller.go:1764] scheduleOperation[provision-azurefile-6810/pvc-g5qrt[a19050c3-f1ad-4b04-80a9-58ac36ca155c]]
I0315 21:31:57.922673 1 pv_controller.go:1496] provisionClaimOperation [azurefile-6810/pvc-g5qrt] started, class: "azurefile-6810-kubernetes.io-azure-file-dynamic-sc-dckp6"
I0315 21:31:57.922685 1 pv_controller.go:1511] provisionClaimOperation [azurefile-6810/pvc-g5qrt]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0315 21:31:57.927283 1 azure_provision.go:108] failed to get azure provider
I0315 21:31:57.927302 1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-6810/pvc-g5qrt" with StorageClass "azurefile-6810-kubernetes.io-azure-file-dynamic-sc-dckp6": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0315 21:31:57.927330 1 goroutinemap.go:150] Operation for "provision-azurefile-6810/pvc-g5qrt[a19050c3-f1ad-4b04-80a9-58ac36ca155c]" failed. No retries permitted until 2023-03-15 21:31:58.927318245 +0000 UTC m=+1008.398314501 (durationBeforeRetry 1s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0315 21:31:57.927467 1 event.go:294] "Event occurred" object="azurefile-6810/pvc-g5qrt" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
I0315 21:31:59.297968 1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0315 21:32:00.938868 1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0315 21:32:01.326686 1 namespaced_resources_deleter.go:502] namespace controller - deleteAllContent - namespace: azurefile-9696
I0315 21:32:01.488990 1 serviceaccounts_controller.go:188] Finished syncing namespace "azurefile-9696" (1.9µs)
I0315 21:32:01.489430 1 namespaced_resources_deleter.go:556] namespace controller - deleteAllContent - namespace: azurefile-9696, estimate: 0, errors: <nil>
I0315 21:32:01.498735 1 namespace_controller.go:180] Finished syncing namespace "azurefile-9696" (180.552456ms)
... skipping 11 lines ...
I0315 21:32:12.922901 1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-6810/pvc-g5qrt]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0315 21:32:12.922959 1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-6810/pvc-g5qrt]: no volume found
I0315 21:32:12.922974 1 pv_controller.go:1455] provisionClaim[azurefile-6810/pvc-g5qrt]: started
I0315 21:32:12.922984 1 pv_controller.go:1764] scheduleOperation[provision-azurefile-6810/pvc-g5qrt[a19050c3-f1ad-4b04-80a9-58ac36ca155c]]
I0315 21:32:12.923008 1 pv_controller.go:1496] provisionClaimOperation [azurefile-6810/pvc-g5qrt] started, class: "azurefile-6810-kubernetes.io-azure-file-dynamic-sc-dckp6"
I0315 21:32:12.923055 1 pv_controller.go:1511] provisionClaimOperation [azurefile-6810/pvc-g5qrt]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0315 21:32:12.927788 1 azure_provision.go:108] failed to get azure provider
I0315 21:32:12.927809 1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-6810/pvc-g5qrt" with StorageClass "azurefile-6810-kubernetes.io-azure-file-dynamic-sc-dckp6": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0315 21:32:12.927838 1 goroutinemap.go:150] Operation for "provision-azurefile-6810/pvc-g5qrt[a19050c3-f1ad-4b04-80a9-58ac36ca155c]" failed. No retries permitted until 2023-03-15 21:32:14.927826347 +0000 UTC m=+1024.398822603 (durationBeforeRetry 2s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0315 21:32:12.927968 1 event.go:294] "Event occurred" object="azurefile-6810/pvc-g5qrt" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
I0315 21:32:13.562356 1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0315 21:32:13.665494 1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="80.7µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:51392" resp=200
I0315 21:32:14.295382 1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Ingress total 0 items received
I0315 21:32:15.136614 1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0315 21:32:23.665199 1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="81.799µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:42172" resp=200
I0315 21:32:24.722784 1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ReplicationController total 0 items received
... skipping 5 lines ...
I0315 21:32:27.923317 1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-6810/pvc-g5qrt]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0315 21:32:27.923364 1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-6810/pvc-g5qrt]: no volume found
I0315 21:32:27.923369 1 pv_controller.go:1455] provisionClaim[azurefile-6810/pvc-g5qrt]: started
I0315 21:32:27.923379 1 pv_controller.go:1764] scheduleOperation[provision-azurefile-6810/pvc-g5qrt[a19050c3-f1ad-4b04-80a9-58ac36ca155c]]
I0315 21:32:27.923395 1 pv_controller.go:1496] provisionClaimOperation [azurefile-6810/pvc-g5qrt] started, class: "azurefile-6810-kubernetes.io-azure-file-dynamic-sc-dckp6"
I0315 21:32:27.923404 1 pv_controller.go:1511] provisionClaimOperation [azurefile-6810/pvc-g5qrt]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0315 21:32:27.930943 1 azure_provision.go:108] failed to get azure provider
I0315 21:32:27.930969 1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-6810/pvc-g5qrt" with StorageClass "azurefile-6810-kubernetes.io-azure-file-dynamic-sc-dckp6": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0315 21:32:27.930999 1 goroutinemap.go:150] Operation for "provision-azurefile-6810/pvc-g5qrt[a19050c3-f1ad-4b04-80a9-58ac36ca155c]" failed. No retries permitted until 2023-03-15 21:32:31.930986914 +0000 UTC m=+1041.401983170 (durationBeforeRetry 4s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0315 21:32:27.931189 1 event.go:294] "Event occurred" object="azurefile-6810/pvc-g5qrt" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
I0315 21:32:29.312313 1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0315 21:32:33.665231 1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="68.499µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:34446" resp=200
I0315 21:32:34.422920 1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0315 21:32:38.720271 1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ControllerRevision total 0 items received
I0315 21:32:39.927922 1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0315 21:32:42.761400 1 reflector.go:382] k8s.io/client-go/informers/factory.go:134: forcing resync
... skipping 2 lines ...
I0315 21:32:42.923713 1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-6810/pvc-g5qrt]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0315 21:32:42.923735 1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-6810/pvc-g5qrt]: no volume found
I0315 21:32:42.923745 1 pv_controller.go:1455] provisionClaim[azurefile-6810/pvc-g5qrt]: started
I0315 21:32:42.923754 1 pv_controller.go:1764] scheduleOperation[provision-azurefile-6810/pvc-g5qrt[a19050c3-f1ad-4b04-80a9-58ac36ca155c]]
I0315 21:32:42.923797 1 pv_controller.go:1496] provisionClaimOperation [azurefile-6810/pvc-g5qrt] started, class: "azurefile-6810-kubernetes.io-azure-file-dynamic-sc-dckp6"
I0315 21:32:42.923811 1 pv_controller.go:1511] provisionClaimOperation [azurefile-6810/pvc-g5qrt]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0315 21:32:42.927655 1 azure_provision.go:108] failed to get azure provider
I0315 21:32:42.927674 1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-6810/pvc-g5qrt" with StorageClass "azurefile-6810-kubernetes.io-azure-file-dynamic-sc-dckp6": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0315 21:32:42.927703 1 goroutinemap.go:150] Operation for "provision-azurefile-6810/pvc-g5qrt[a19050c3-f1ad-4b04-80a9-58ac36ca155c]" failed. No retries permitted until 2023-03-15 21:32:50.927692409 +0000 UTC m=+1060.398688765 (durationBeforeRetry 8s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0315 21:32:42.927862 1 event.go:294] "Event occurred" object="azurefile-6810/pvc-g5qrt" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
I0315 21:32:43.665381 1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="77.599µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:38236" resp=200
I0315 21:32:43.926957 1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0315 21:32:47.371224 1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1beta1.PodSecurityPolicy total 0 items received
I0315 21:32:47.918258 1 gc_controller.go:161] GC'ing orphaned
I0315 21:32:47.918312 1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0315 21:32:48.723799 1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.CSIDriver total 0 items received
... skipping 8 lines ...
I0315 21:32:57.924046 1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-6810/pvc-g5qrt]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0315 21:32:57.924089 1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-6810/pvc-g5qrt]: no volume found
I0315 21:32:57.924123 1 pv_controller.go:1455] provisionClaim[azurefile-6810/pvc-g5qrt]: started
I0315 21:32:57.924143 1 pv_controller.go:1764] scheduleOperation[provision-azurefile-6810/pvc-g5qrt[a19050c3-f1ad-4b04-80a9-58ac36ca155c]]
I0315 21:32:57.924187 1 pv_controller.go:1496] provisionClaimOperation [azurefile-6810/pvc-g5qrt] started, class: "azurefile-6810-kubernetes.io-azure-file-dynamic-sc-dckp6"
I0315 21:32:57.924206 1 pv_controller.go:1511] provisionClaimOperation [azurefile-6810/pvc-g5qrt]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0315 21:32:57.927898 1 azure_provision.go:108] failed to get azure provider
I0315 21:32:57.927919 1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-6810/pvc-g5qrt" with StorageClass "azurefile-6810-kubernetes.io-azure-file-dynamic-sc-dckp6": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0315 21:32:57.927950 1 goroutinemap.go:150] Operation for "provision-azurefile-6810/pvc-g5qrt[a19050c3-f1ad-4b04-80a9-58ac36ca155c]" failed. No retries permitted until 2023-03-15 21:33:13.927936936 +0000 UTC m=+1083.398933292 (durationBeforeRetry 16s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0315 21:32:57.927976 1 event.go:294] "Event occurred" object="azurefile-6810/pvc-g5qrt" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
I0315 21:32:58.722342 1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Job total 0 items received
I0315 21:32:59.328332 1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0315 21:33:02.721927 1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.LimitRange total 0 items received
I0315 21:33:03.665588 1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="69.2µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:44706" resp=200
I0315 21:33:07.919257 1 gc_controller.go:161] GC'ing orphaned
I0315 21:33:07.919282 1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
... skipping 27 lines ...
I0315 21:33:27.924490 1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-6810/pvc-g5qrt]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0315 21:33:27.924549 1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-6810/pvc-g5qrt]: no volume found
I0315 21:33:27.924593 1 pv_controller.go:1455] provisionClaim[azurefile-6810/pvc-g5qrt]: started
I0315 21:33:27.924608 1 pv_controller.go:1764] scheduleOperation[provision-azurefile-6810/pvc-g5qrt[a19050c3-f1ad-4b04-80a9-58ac36ca155c]]
I0315 21:33:27.924627 1 pv_controller.go:1496] provisionClaimOperation [azurefile-6810/pvc-g5qrt] started, class: "azurefile-6810-kubernetes.io-azure-file-dynamic-sc-dckp6"
I0315 21:33:27.924687 1 pv_controller.go:1511] provisionClaimOperation [azurefile-6810/pvc-g5qrt]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0315 21:33:27.930141 1 azure_provision.go:108] failed to get azure provider
I0315 21:33:27.930159 1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-6810/pvc-g5qrt" with StorageClass "azurefile-6810-kubernetes.io-azure-file-dynamic-sc-dckp6": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0315 21:33:27.930265 1 goroutinemap.go:150] Operation for "provision-azurefile-6810/pvc-g5qrt[a19050c3-f1ad-4b04-80a9-58ac36ca155c]" failed. No retries permitted until 2023-03-15 21:33:59.930252657 +0000 UTC m=+1129.401248913 (durationBeforeRetry 32s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0315 21:33:27.930375 1 event.go:294] "Event occurred" object="azurefile-6810/pvc-g5qrt" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
I0315 21:33:29.343529 1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0315 21:33:33.665135 1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="70.299µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:41704" resp=200
I0315 21:33:34.924288 1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0315 21:33:36.305062 1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.IngressClass total 9 items received
I0315 21:33:40.975316 1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Secret total 18 items received
I0315 21:33:41.296576 1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.RuntimeClass total 6 items received
... skipping 33 lines ...
I0315 21:34:12.926306 1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-6810/pvc-g5qrt]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0315 21:34:12.926328 1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-6810/pvc-g5qrt]: no volume found
I0315 21:34:12.926339 1 pv_controller.go:1455] provisionClaim[azurefile-6810/pvc-g5qrt]: started
I0315 21:34:12.926349 1 pv_controller.go:1764] scheduleOperation[provision-azurefile-6810/pvc-g5qrt[a19050c3-f1ad-4b04-80a9-58ac36ca155c]]
I0315 21:34:12.926365 1 pv_controller.go:1496] provisionClaimOperation [azurefile-6810/pvc-g5qrt] started, class: "azurefile-6810-kubernetes.io-azure-file-dynamic-sc-dckp6"
I0315 21:34:12.926394 1 pv_controller.go:1511] provisionClaimOperation [azurefile-6810/pvc-g5qrt]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0315 21:34:12.930142 1 azure_provision.go:108] failed to get azure provider
I0315 21:34:12.930163 1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-6810/pvc-g5qrt" with StorageClass "azurefile-6810-kubernetes.io-azure-file-dynamic-sc-dckp6": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0315 21:34:12.930193 1 goroutinemap.go:150] Operation for "provision-azurefile-6810/pvc-g5qrt[a19050c3-f1ad-4b04-80a9-58ac36ca155c]" failed. No retries permitted until 2023-03-15 21:35:16.930181601 +0000 UTC m=+1206.401177857 (durationBeforeRetry 1m4s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0315 21:34:12.930251 1 event.go:294] "Event occurred" object="azurefile-6810/pvc-g5qrt" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
I0315 21:34:13.431958 1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0315 21:34:13.665075 1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="67.3µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:34654" resp=200
I0315 21:34:23.665600 1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="65.399µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:38722" resp=200
I0315 21:34:24.716152 1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Endpoints total 0 items received
I0315 21:34:27.764796 1 reflector.go:382] k8s.io/client-go/informers/factory.go:134: forcing resync
I0315 21:34:27.921761 1 gc_controller.go:161] GC'ing orphaned
... skipping 65 lines ...
I0315 21:35:27.929907 1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-6810/pvc-g5qrt]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0315 21:35:27.929974 1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-6810/pvc-g5qrt]: no volume found
I0315 21:35:27.929985 1 pv_controller.go:1455] provisionClaim[azurefile-6810/pvc-g5qrt]: started
I0315 21:35:27.929995 1 pv_controller.go:1764] scheduleOperation[provision-azurefile-6810/pvc-g5qrt[a19050c3-f1ad-4b04-80a9-58ac36ca155c]]
I0315 21:35:27.930058 1 pv_controller.go:1496] provisionClaimOperation [azurefile-6810/pvc-g5qrt] started, class: "azurefile-6810-kubernetes.io-azure-file-dynamic-sc-dckp6"
I0315 21:35:27.930134 1 pv_controller.go:1511] provisionClaimOperation [azurefile-6810/pvc-g5qrt]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0315 21:35:27.938127 1 azure_provision.go:108] failed to get azure provider
I0315 21:35:27.938185 1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-6810/pvc-g5qrt" with StorageClass "azurefile-6810-kubernetes.io-azure-file-dynamic-sc-dckp6": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0315 21:35:27.938287 1 goroutinemap.go:150] Operation for "provision-azurefile-6810/pvc-g5qrt[a19050c3-f1ad-4b04-80a9-58ac36ca155c]" failed. No retries permitted until 2023-03-15 21:37:29.938273954 +0000 UTC m=+1339.409270210 (durationBeforeRetry 2m2s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0315 21:35:27.938611 1 event.go:294] "Event occurred" object="azurefile-6810/pvc-g5qrt" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
I0315 21:35:28.314651 1 resource_quota_controller.go:194] Resource quota controller queued all resource quota for full calculation of usage
I0315 21:35:29.432901 1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0315 21:35:29.779982 1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Node total 13 items received
I0315 21:35:31.721767 1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Pod total 0 items received
I0315 21:35:31.944785 1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0315 21:35:33.665217 1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="67.299µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:56960" resp=200
... skipping 80 lines ...
I0315 21:36:53.183689 1 pv_controller_base.go:670] storeObjectUpdate updating claim "azurefile-8836/pvc-cgjkq" with version 5689
I0315 21:36:53.183698 1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-8836/pvc-cgjkq]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0315 21:36:53.183712 1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-8836/pvc-cgjkq]: no volume found
I0315 21:36:53.183721 1 pv_controller.go:1455] provisionClaim[azurefile-8836/pvc-cgjkq]: started
I0315 21:36:53.183731 1 pv_controller.go:1764] scheduleOperation[provision-azurefile-8836/pvc-cgjkq[e6194f97-719b-41b9-a0e5-ee3c1b20b61a]]
I0315 21:36:53.183773 1 pv_controller.go:1775] operation "provision-azurefile-8836/pvc-cgjkq[e6194f97-719b-41b9-a0e5-ee3c1b20b61a]" is already running, skipping
I0315 21:36:53.184889 1 azure_provision.go:108] failed to get azure provider
I0315 21:36:53.185022 1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-8836/pvc-cgjkq" with StorageClass "azurefile-8836-kubernetes.io-azure-file-dynamic-sc-5fw62": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0315 21:36:53.185119 1 goroutinemap.go:150] Operation for "provision-azurefile-8836/pvc-cgjkq[e6194f97-719b-41b9-a0e5-ee3c1b20b61a]" failed. No retries permitted until 2023-03-15 21:36:53.68510679 +0000 UTC m=+1303.156103146 (durationBeforeRetry 500ms). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0315 21:36:53.185259 1 event.go:294] "Event occurred" object="azurefile-8836/pvc-cgjkq" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
I0315 21:36:53.665353 1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="70.9µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:35406" resp=200
I0315 21:36:54.931779 1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0315 21:36:55.777255 1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Event total 10 items received
I0315 21:36:57.446238 1 namespaced_resources_deleter.go:502] namespace controller - deleteAllContent - namespace: azurefile-6810
I0315 21:36:57.460504 1 pvc_protection_controller.go:331] "Got event on PVC" pvc="azurefile-6810/pvc-g5qrt"
I0315 21:36:57.460929 1 pv_controller_base.go:670] storeObjectUpdate updating claim "azurefile-6810/pvc-g5qrt" with version 5703
... skipping 28 lines ...
I0315 21:36:57.933221 1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-8836/pvc-cgjkq]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0315 21:36:57.933309 1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-8836/pvc-cgjkq]: no volume found
I0315 21:36:57.933324 1 pv_controller.go:1455] provisionClaim[azurefile-8836/pvc-cgjkq]: started
I0315 21:36:57.933382 1 pv_controller.go:1764] scheduleOperation[provision-azurefile-8836/pvc-cgjkq[e6194f97-719b-41b9-a0e5-ee3c1b20b61a]]
I0315 21:36:57.933419 1 pv_controller.go:1496] provisionClaimOperation [azurefile-8836/pvc-cgjkq] started, class: "azurefile-8836-kubernetes.io-azure-file-dynamic-sc-5fw62"
I0315 21:36:57.933425 1 pv_controller.go:1511] provisionClaimOperation [azurefile-8836/pvc-cgjkq]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0315 21:36:57.935961 1 azure_provision.go:108] failed to get azure provider
I0315 21:36:57.935982 1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-8836/pvc-cgjkq" with StorageClass "azurefile-8836-kubernetes.io-azure-file-dynamic-sc-5fw62": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0315 21:36:57.936005 1 goroutinemap.go:150] Operation for "provision-azurefile-8836/pvc-cgjkq[e6194f97-719b-41b9-a0e5-ee3c1b20b61a]" failed. No retries permitted until 2023-03-15 21:36:58.93599414 +0000 UTC m=+1308.406990496 (durationBeforeRetry 1s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0315 21:36:57.936078 1 event.go:294] "Event occurred" object="azurefile-8836/pvc-cgjkq" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
I0315 21:36:59.491150 1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0315 21:36:59.720672 1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.VolumeAttachment total 8 items received
I0315 21:37:02.593374 1 namespaced_resources_deleter.go:502] namespace controller - deleteAllContent - namespace: azurefile-6810
I0315 21:37:02.706084 1 serviceaccounts_controller.go:188] Finished syncing namespace "azurefile-6810" (2.5µs)
I0315 21:37:02.706124 1 namespaced_resources_deleter.go:556] namespace controller - deleteAllContent - namespace: azurefile-6810, estimate: 0, errors: <nil>
I0315 21:37:02.715251 1 namespace_controller.go:180] Finished syncing namespace "azurefile-6810" (128.374626ms)
... skipping 8 lines ...
I0315 21:37:12.934354 1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-8836/pvc-cgjkq]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0315 21:37:12.934378 1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-8836/pvc-cgjkq]: no volume found
I0315 21:37:12.934429 1 pv_controller.go:1455] provisionClaim[azurefile-8836/pvc-cgjkq]: started
I0315 21:37:12.934455 1 pv_controller.go:1764] scheduleOperation[provision-azurefile-8836/pvc-cgjkq[e6194f97-719b-41b9-a0e5-ee3c1b20b61a]]
I0315 21:37:12.934479 1 pv_controller.go:1496] provisionClaimOperation [azurefile-8836/pvc-cgjkq] started, class: "azurefile-8836-kubernetes.io-azure-file-dynamic-sc-5fw62"
I0315 21:37:12.934486 1 pv_controller.go:1511] provisionClaimOperation [azurefile-8836/pvc-cgjkq]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0315 21:37:12.936652 1 azure_provision.go:108] failed to get azure provider
I0315 21:37:12.936695 1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-8836/pvc-cgjkq" with StorageClass "azurefile-8836-kubernetes.io-azure-file-dynamic-sc-5fw62": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0315 21:37:12.936817 1 goroutinemap.go:150] Operation for "provision-azurefile-8836/pvc-cgjkq[e6194f97-719b-41b9-a0e5-ee3c1b20b61a]" failed. No retries permitted until 2023-03-15 21:37:14.936804159 +0000 UTC m=+1324.407800515 (durationBeforeRetry 2s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0315 21:37:12.936886 1 event.go:294] "Event occurred" object="azurefile-8836/pvc-cgjkq" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
I0315 21:37:13.665339 1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="80.999µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:45152" resp=200
I0315 21:37:15.765614 1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 2 items received
I0315 21:37:23.665321 1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="71.399µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:60712" resp=200
I0315 21:37:27.772239 1 reflector.go:382] k8s.io/client-go/informers/factory.go:134: forcing resync
I0315 21:37:27.929598 1 gc_controller.go:161] GC'ing orphaned
I0315 21:37:27.929627 1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
... skipping 2 lines ...
I0315 21:37:27.934924 1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-8836/pvc-cgjkq]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0315 21:37:27.934976 1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-8836/pvc-cgjkq]: no volume found
I0315 21:37:27.934984 1 pv_controller.go:1455] provisionClaim[azurefile-8836/pvc-cgjkq]: started
I0315 21:37:27.934994 1 pv_controller.go:1764] scheduleOperation[provision-azurefile-8836/pvc-cgjkq[e6194f97-719b-41b9-a0e5-ee3c1b20b61a]]
I0315 21:37:27.935015 1 pv_controller.go:1496] provisionClaimOperation [azurefile-8836/pvc-cgjkq] started, class: "azurefile-8836-kubernetes.io-azure-file-dynamic-sc-5fw62"
I0315 21:37:27.935026 1 pv_controller.go:1511] provisionClaimOperation [azurefile-8836/pvc-cgjkq]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0315 21:37:27.939800 1 azure_provision.go:108] failed to get azure provider
I0315 21:37:27.939830 1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-8836/pvc-cgjkq" with StorageClass "azurefile-8836-kubernetes.io-azure-file-dynamic-sc-5fw62": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0315 21:37:27.940055 1 goroutinemap.go:150] Operation for "provision-azurefile-8836/pvc-cgjkq[e6194f97-719b-41b9-a0e5-ee3c1b20b61a]" failed. No retries permitted until 2023-03-15 21:37:31.939850054 +0000 UTC m=+1341.410846310 (durationBeforeRetry 4s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0315 21:37:27.940268 1 event.go:294] "Event occurred" object="azurefile-8836/pvc-cgjkq" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
I0315 21:37:29.507315 1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0315 21:37:30.732185 1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Deployment total 2 items received
I0315 21:37:33.665516 1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="65.399µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:56356" resp=200
I0315 21:37:33.731500 1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v2.HorizontalPodAutoscaler total 8 items received
I0315 21:37:34.422926 1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 7 items received
I0315 21:37:42.772970 1 reflector.go:382] k8s.io/client-go/informers/factory.go:134: forcing resync
... skipping 2 lines ...
I0315 21:37:42.935573 1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-8836/pvc-cgjkq]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0315 21:37:42.935609 1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-8836/pvc-cgjkq]: no volume found
I0315 21:37:42.935618 1 pv_controller.go:1455] provisionClaim[azurefile-8836/pvc-cgjkq]: started
I0315 21:37:42.935630 1 pv_controller.go:1764] scheduleOperation[provision-azurefile-8836/pvc-cgjkq[e6194f97-719b-41b9-a0e5-ee3c1b20b61a]]
I0315 21:37:42.935653 1 pv_controller.go:1496] provisionClaimOperation [azurefile-8836/pvc-cgjkq] started, class: "azurefile-8836-kubernetes.io-azure-file-dynamic-sc-5fw62"
I0315 21:37:42.935678 1 pv_controller.go:1511] provisionClaimOperation [azurefile-8836/pvc-cgjkq]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0315 21:37:42.951344 1 azure_provision.go:108] failed to get azure provider
I0315 21:37:42.951366 1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-8836/pvc-cgjkq" with StorageClass "azurefile-8836-kubernetes.io-azure-file-dynamic-sc-5fw62": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0315 21:37:42.951418 1 goroutinemap.go:150] Operation for "provision-azurefile-8836/pvc-cgjkq[e6194f97-719b-41b9-a0e5-ee3c1b20b61a]" failed. No retries permitted until 2023-03-15 21:37:50.951405402 +0000 UTC m=+1360.422401758 (durationBeforeRetry 8s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0315 21:37:42.951565 1 event.go:294] "Event occurred" object="azurefile-8836/pvc-cgjkq" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
I0315 21:37:43.665084 1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="68.899µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:59562" resp=200
I0315 21:37:43.729733 1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.CertificateSigningRequest total 2 items received
I0315 21:37:44.732182 1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.PersistentVolume total 9 items received
I0315 21:37:47.930188 1 gc_controller.go:161] GC'ing orphaned
I0315 21:37:47.930212 1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0315 21:37:48.580920 1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 2 items received
... skipping 5 lines ...
I0315 21:37:57.936270 1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-8836/pvc-cgjkq]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0315 21:37:57.936296 1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-8836/pvc-cgjkq]: no volume found
I0315 21:37:57.936306 1 pv_controller.go:1455] provisionClaim[azurefile-8836/pvc-cgjkq]: started
I0315 21:37:57.936316 1 pv_controller.go:1764] scheduleOperation[provision-azurefile-8836/pvc-cgjkq[e6194f97-719b-41b9-a0e5-ee3c1b20b61a]]
I0315 21:37:57.936375 1 pv_controller.go:1496] provisionClaimOperation [azurefile-8836/pvc-cgjkq] started, class: "azurefile-8836-kubernetes.io-azure-file-dynamic-sc-5fw62"
I0315 21:37:57.936387 1 pv_controller.go:1511] provisionClaimOperation [azurefile-8836/pvc-cgjkq]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0315 21:37:57.939886 1 azure_provision.go:108] failed to get azure provider
I0315 21:37:57.939909 1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-8836/pvc-cgjkq" with StorageClass "azurefile-8836-kubernetes.io-azure-file-dynamic-sc-5fw62": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0315 21:37:57.940033 1 goroutinemap.go:150] Operation for "provision-azurefile-8836/pvc-cgjkq[e6194f97-719b-41b9-a0e5-ee3c1b20b61a]" failed. No retries permitted until 2023-03-15 21:38:13.940020333 +0000 UTC m=+1383.411016689 (durationBeforeRetry 16s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0315 21:37:57.940100 1 event.go:294] "Event occurred" object="azurefile-8836/pvc-cgjkq" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
I0315 21:37:59.518665 1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0315 21:38:03.665452 1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="70.699µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:57164" resp=200
I0315 21:38:07.930606 1 gc_controller.go:161] GC'ing orphaned
I0315 21:38:07.930634 1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0315 21:38:11.936713 1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0315 21:38:12.479280 1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 10 items received
... skipping 17 lines ...
I0315 21:38:27.937913 1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-8836/pvc-cgjkq]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0315 21:38:27.937960 1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-8836/pvc-cgjkq]: no volume found
I0315 21:38:27.937984 1 pv_controller.go:1455] provisionClaim[azurefile-8836/pvc-cgjkq]: started
I0315 21:38:27.938005 1 pv_controller.go:1764] scheduleOperation[provision-azurefile-8836/pvc-cgjkq[e6194f97-719b-41b9-a0e5-ee3c1b20b61a]]
I0315 21:38:27.938043 1 pv_controller.go:1496] provisionClaimOperation [azurefile-8836/pvc-cgjkq] started, class: "azurefile-8836-kubernetes.io-azure-file-dynamic-sc-5fw62"
I0315 21:38:27.938071 1 pv_controller.go:1511] provisionClaimOperation [azurefile-8836/pvc-cgjkq]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0315 21:38:27.941967 1 azure_provision.go:108] failed to get azure provider
I0315 21:38:27.941987 1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-8836/pvc-cgjkq" with StorageClass "azurefile-8836-kubernetes.io-azure-file-dynamic-sc-5fw62": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0315 21:38:27.942015 1 goroutinemap.go:150] Operation for "provision-azurefile-8836/pvc-cgjkq[e6194f97-719b-41b9-a0e5-ee3c1b20b61a]" failed. No retries permitted until 2023-03-15 21:38:59.942004113 +0000 UTC m=+1429.413000369 (durationBeforeRetry 32s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0315 21:38:27.942156 1 event.go:294] "Event occurred" object="azurefile-8836/pvc-cgjkq" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
I0315 21:38:28.731905 1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ClusterRole total 3 items received
I0315 21:38:29.082041 1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 10 items received
I0315 21:38:29.530268 1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0315 21:38:31.326204 1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ClusterRoleBinding total 4 items received
I0315 21:38:33.665542 1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="82.8µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:55636" resp=200
I0315 21:38:33.736503 1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1beta1.CSIStorageCapacity total 8 items received
... skipping 36 lines ...
I0315 21:39:12.939415 1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-8836/pvc-cgjkq]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0315 21:39:12.939498 1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-8836/pvc-cgjkq]: no volume found
I0315 21:39:12.939510 1 pv_controller.go:1455] provisionClaim[azurefile-8836/pvc-cgjkq]: started
I0315 21:39:12.939521 1 pv_controller.go:1764] scheduleOperation[provision-azurefile-8836/pvc-cgjkq[e6194f97-719b-41b9-a0e5-ee3c1b20b61a]]
I0315 21:39:12.939573 1 pv_controller.go:1496] provisionClaimOperation [azurefile-8836/pvc-cgjkq] started, class: "azurefile-8836-kubernetes.io-azure-file-dynamic-sc-5fw62"
I0315 21:39:12.939597 1 pv_controller.go:1511] provisionClaimOperation [azurefile-8836/pvc-cgjkq]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0315 21:39:12.949903 1 azure_provision.go:108] failed to get azure provider
I0315 21:39:12.949926 1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-8836/pvc-cgjkq" with StorageClass "azurefile-8836-kubernetes.io-azure-file-dynamic-sc-5fw62": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0315 21:39:12.949977 1 goroutinemap.go:150] Operation for "provision-azurefile-8836/pvc-cgjkq[e6194f97-719b-41b9-a0e5-ee3c1b20b61a]" failed. No retries permitted until 2023-03-15 21:40:16.949943267 +0000 UTC m=+1506.420939523 (durationBeforeRetry 1m4s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0315 21:39:12.950127 1 event.go:294] "Event occurred" object="azurefile-8836/pvc-cgjkq" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
I0315 21:39:13.665845 1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="76.299µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:47440" resp=200
I0315 21:39:17.783202 1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 4 items received
I0315 21:39:17.934728 1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0315 21:39:23.665115 1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="68.099µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:54728" resp=200
I0315 21:39:24.727300 1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ReplicationController total 8 items received
I0315 21:39:27.778787 1 reflector.go:382] k8s.io/client-go/informers/factory.go:134: forcing resync
... skipping 73 lines ...
I0315 21:40:27.942530 1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-8836/pvc-cgjkq]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0315 21:40:27.942594 1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-8836/pvc-cgjkq]: no volume found
I0315 21:40:27.942622 1 pv_controller.go:1455] provisionClaim[azurefile-8836/pvc-cgjkq]: started
I0315 21:40:27.942662 1 pv_controller.go:1764] scheduleOperation[provision-azurefile-8836/pvc-cgjkq[e6194f97-719b-41b9-a0e5-ee3c1b20b61a]]
I0315 21:40:27.942700 1 pv_controller.go:1496] provisionClaimOperation [azurefile-8836/pvc-cgjkq] started, class: "azurefile-8836-kubernetes.io-azure-file-dynamic-sc-5fw62"
I0315 21:40:27.942747 1 pv_controller.go:1511] provisionClaimOperation [azurefile-8836/pvc-cgjkq]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0315 21:40:27.945223 1 azure_provision.go:108] failed to get azure provider
I0315 21:40:27.945244 1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-8836/pvc-cgjkq" with StorageClass "azurefile-8836-kubernetes.io-azure-file-dynamic-sc-5fw62": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0315 21:40:27.945273 1 goroutinemap.go:150] Operation for "provision-azurefile-8836/pvc-cgjkq[e6194f97-719b-41b9-a0e5-ee3c1b20b61a]" failed. No retries permitted until 2023-03-15 21:42:29.945261648 +0000 UTC m=+1639.416258004 (durationBeforeRetry 2m2s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0315 21:40:27.945542 1 event.go:294] "Event occurred" object="azurefile-8836/pvc-cgjkq" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
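
The durationBeforeRetry values across these cycles (500ms, 1s, 2s, 4s, 8s, 16s, 32s, 1m4s, and now 2m2s) trace the per-operation exponential backoff applied by goroutinemap: the delay doubles after every failure and is clamped once doubling would overshoot the cap, which is why 1m4s is followed by 2m2s rather than 2m8s. A standalone sketch that reproduces the schedule; the initial delay and the cap are read off the log lines above, not imported from the goroutinemap package:

    package main

    import (
            "fmt"
            "time"
    )

    func main() {
            // Constants inferred from the observed log, not from Kubernetes source.
            const (
                    initialDelay = 500 * time.Millisecond
                    maxDelay     = 2*time.Minute + 2*time.Second
            )

            delay := initialDelay
            for failure := 1; failure <= 10; failure++ {
                    fmt.Printf("failure %2d: no retries permitted for %v\n", failure, delay)
                    delay *= 2
                    if delay > maxDelay {
                            delay = maxDelay // clamp: 1m4s would double to 2m8s
                    }
            }
    }

Output: 500ms, 1s, 2s, 4s, 8s, 16s, 32s, 1m4s, then 2m2s for every later failure, matching the waits recorded in this log.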
I0315 21:40:28.315487 1 resource_quota_controller.go:194] Resource quota controller queued all resource quota for full calculation of usage
I0315 21:40:29.603416 1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0315 21:40:31.564548 1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 5 items received
I0315 21:40:31.795639 1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 5 items received
I0315 21:40:33.665689 1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="79.4µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:49678" resp=200
I0315 21:40:35.309190 1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.IngressClass total 5 items received
... skipping 87 lines ...
I0315 21:41:54.458438 1 pv_controller_base.go:670] storeObjectUpdate updating claim "azurefile-9021/pvc-7svns" with version 6737
I0315 21:41:54.458462 1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-9021/pvc-7svns]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0315 21:41:54.458497 1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-9021/pvc-7svns]: no volume found
I0315 21:41:54.458514 1 pv_controller.go:1455] provisionClaim[azurefile-9021/pvc-7svns]: started
I0315 21:41:54.458539 1 pv_controller.go:1764] scheduleOperation[provision-azurefile-9021/pvc-7svns[44835163-7fbd-4282-99fe-bfb45e425375]]
I0315 21:41:54.458567 1 pv_controller.go:1775] operation "provision-azurefile-9021/pvc-7svns[44835163-7fbd-4282-99fe-bfb45e425375]" is already running, skipping
I0315 21:41:54.459930 1 azure_provision.go:108] failed to get azure provider
I0315 21:41:54.459951 1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-9021/pvc-7svns" with StorageClass "azurefile-9021-kubernetes.io-azure-file-dynamic-sc-zqgwf": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0315 21:41:54.460085 1 goroutinemap.go:150] Operation for "provision-azurefile-9021/pvc-7svns[44835163-7fbd-4282-99fe-bfb45e425375]" failed. No retries permitted until 2023-03-15 21:41:54.960072939 +0000 UTC m=+1604.431069195 (durationBeforeRetry 500ms). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0315 21:41:54.460219 1 event.go:294] "Event occurred" object="azurefile-9021/pvc-7svns" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
I0315 21:41:57.602718 1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 6 items received
I0315 21:41:57.711584 1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.RoleBinding total 7 items received
I0315 21:41:57.786100 1 reflector.go:382] k8s.io/client-go/informers/factory.go:134: forcing resync
I0315 21:41:57.946260 1 pv_controller_base.go:556] resyncing PV controller
I0315 21:41:57.946339 1 pv_controller_base.go:670] storeObjectUpdate updating claim "azurefile-9021/pvc-7svns" with version 6737
I0315 21:41:57.946356 1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-9021/pvc-7svns]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
... skipping 5 lines ...
I0315 21:41:57.946493 1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-8836/pvc-cgjkq]: no volume found
I0315 21:41:57.946498 1 pv_controller.go:1455] provisionClaim[azurefile-8836/pvc-cgjkq]: started
I0315 21:41:57.946504 1 pv_controller.go:1764] scheduleOperation[provision-azurefile-8836/pvc-cgjkq[e6194f97-719b-41b9-a0e5-ee3c1b20b61a]]
I0315 21:41:57.946509 1 pv_controller.go:1777] operation "provision-azurefile-8836/pvc-cgjkq[e6194f97-719b-41b9-a0e5-ee3c1b20b61a]" postponed due to exponential backoff
I0315 21:41:57.946531 1 pv_controller.go:1496] provisionClaimOperation [azurefile-9021/pvc-7svns] started, class: "azurefile-9021-kubernetes.io-azure-file-dynamic-sc-zqgwf"
I0315 21:41:57.946538 1 pv_controller.go:1511] provisionClaimOperation [azurefile-9021/pvc-7svns]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0315 21:41:57.949261 1 azure_provision.go:108] failed to get azure provider
I0315 21:41:57.949283 1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-9021/pvc-7svns" with StorageClass "azurefile-9021-kubernetes.io-azure-file-dynamic-sc-zqgwf": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0315 21:41:57.949346 1 goroutinemap.go:150] Operation for "provision-azurefile-9021/pvc-7svns[44835163-7fbd-4282-99fe-bfb45e425375]" failed. No retries permitted until 2023-03-15 21:41:58.949299934 +0000 UTC m=+1608.420296190 (durationBeforeRetry 1s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0315 21:41:57.949411 1 event.go:294] "Event occurred" object="azurefile-9021/pvc-7svns" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
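
Two different skip paths appear for a claim while its operation is pending. "is already running, skipping" means another goroutine is still executing the same named operation; "postponed due to exponential backoff" means the operation has already failed and the PV controller's periodic resync (visible above at roughly 15-second intervals) came around again before the backoff window expired. A race-free sketch of that name-keyed gating, with illustrative names rather than the real goroutinemap API:

    package main

    import (
            "fmt"
            "sync"
            "time"
    )

    // opState tracks one named operation: whether a goroutine is currently
    // running it and when a retry is next permitted.
    type opState struct {
            running    bool
            retryAfter time.Time
    }

    // goRoutineMap gates operations by name, loosely modeled on the behavior
    // in this log; field and method names are illustrative.
    type goRoutineMap struct {
            mu  sync.Mutex
            ops map[string]*opState
    }

    func (m *goRoutineMap) Run(name string, backoff time.Duration, fn func() error) {
            m.mu.Lock()
            defer m.mu.Unlock()
            st, ok := m.ops[name]
            if !ok {
                    st = &opState{}
                    m.ops[name] = st
            }
            switch {
            case st.running:
                    fmt.Printf("operation %q is already running, skipping\n", name)
                    return
            case time.Now().Before(st.retryAfter):
                    fmt.Printf("operation %q postponed due to exponential backoff\n", name)
                    return
            }
            st.running = true
            go func() {
                    err := fn()
                    m.mu.Lock()
                    defer m.mu.Unlock()
                    st.running = false
                    if err != nil {
                            st.retryAfter = time.Now().Add(backoff) // doubled elsewhere
                    }
            }()
    }

    func main() {
            m := &goRoutineMap{ops: map[string]*opState{}}
            op := "provision-azurefile-9021/pvc-7svns"
            fail := func() error {
                    time.Sleep(50 * time.Millisecond) // simulate the provision attempt
                    return fmt.Errorf("failed to get Azure Cloud Provider")
            }

            m.Run(op, time.Second, fail) // starts the operation
            m.Run(op, time.Second, fail) // already running, skipping
            time.Sleep(100 * time.Millisecond)
            m.Run(op, time.Second, fail) // postponed due to exponential backoff
    }

In this excerpt both paths fire for pvc-7svns within the same resync pass, while the older pvc-cgjkq operation sits out its much longer backoff window.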
I0315 21:41:58.742027 1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ResourceQuota total 7 items received
I0315 21:41:58.848724 1 namespaced_resources_deleter.go:502] namespace controller - deleteAllContent - namespace: azurefile-8836
I0315 21:41:58.898674 1 tokens_controller.go:252] syncServiceAccount(azurefile-8836/default), service account deleted, removing tokens
I0315 21:41:58.898860 1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azurefile-8836, name default, uid a0381cc4-57f8-484e-9d33-31eabd44b0f0, event type delete
I0315 21:41:58.899005 1 serviceaccounts_controller.go:188] Finished syncing namespace "azurefile-8836" (1.9µs)
I0315 21:41:58.945525 1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=secrets, namespace azurefile-8836, name default-token-smqw9, uid ce24811c-2e1c-4954-b7ce-2d0288a8d187, event type delete
... skipping 37 lines ...
I0315 21:42:12.947129 1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-9021/pvc-7svns]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0315 21:42:12.947159 1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-9021/pvc-7svns]: no volume found
I0315 21:42:12.947178 1 pv_controller.go:1455] provisionClaim[azurefile-9021/pvc-7svns]: started
I0315 21:42:12.947200 1 pv_controller.go:1764] scheduleOperation[provision-azurefile-9021/pvc-7svns[44835163-7fbd-4282-99fe-bfb45e425375]]
I0315 21:42:12.947251 1 pv_controller.go:1496] provisionClaimOperation [azurefile-9021/pvc-7svns] started, class: "azurefile-9021-kubernetes.io-azure-file-dynamic-sc-zqgwf"
I0315 21:42:12.947277 1 pv_controller.go:1511] provisionClaimOperation [azurefile-9021/pvc-7svns]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0315 21:42:12.956082 1 azure_provision.go:108] failed to get azure provider
I0315 21:42:12.956114 1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-9021/pvc-7svns" with StorageClass "azurefile-9021-kubernetes.io-azure-file-dynamic-sc-zqgwf": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0315 21:42:12.956274 1 goroutinemap.go:150] Operation for "provision-azurefile-9021/pvc-7svns[44835163-7fbd-4282-99fe-bfb45e425375]" failed. No retries permitted until 2023-03-15 21:42:14.956234036 +0000 UTC m=+1624.427230292 (durationBeforeRetry 2s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0315 21:42:12.956359 1 event.go:294] "Event occurred" object="azurefile-9021/pvc-7svns" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
I0315 21:42:13.665265 1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="83.5µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:56132" resp=200
I0315 21:42:18.737744 1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ConfigMap total 11 items received
I0315 21:42:23.665748 1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="69.599µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:37190" resp=200
I0315 21:42:27.787222 1 reflector.go:382] k8s.io/client-go/informers/factory.go:134: forcing resync
I0315 21:42:27.941288 1 gc_controller.go:161] GC'ing orphaned
I0315 21:42:27.941313 1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
... skipping 2 lines ...
I0315 21:42:27.947593 1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-9021/pvc-7svns]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0315 21:42:27.947660 1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-9021/pvc-7svns]: no volume found
I0315 21:42:27.947671 1 pv_controller.go:1455] provisionClaim[azurefile-9021/pvc-7svns]: started
I0315 21:42:27.947681 1 pv_controller.go:1764] scheduleOperation[provision-azurefile-9021/pvc-7svns[44835163-7fbd-4282-99fe-bfb45e425375]]
I0315 21:42:27.947702 1 pv_controller.go:1496] provisionClaimOperation [azurefile-9021/pvc-7svns] started, class: "azurefile-9021-kubernetes.io-azure-file-dynamic-sc-zqgwf"
I0315 21:42:27.947711 1 pv_controller.go:1511] provisionClaimOperation [azurefile-9021/pvc-7svns]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0315 21:42:27.956041 1 azure_provision.go:108] failed to get azure provider
I0315 21:42:27.956064 1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-9021/pvc-7svns" with StorageClass "azurefile-9021-kubernetes.io-azure-file-dynamic-sc-zqgwf": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0315 21:42:27.956211 1 goroutinemap.go:150] Operation for "provision-azurefile-9021/pvc-7svns[44835163-7fbd-4282-99fe-bfb45e425375]" failed. No retries permitted until 2023-03-15 21:42:31.956197923 +0000 UTC m=+1641.427194279 (durationBeforeRetry 4s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0315 21:42:27.956297 1 event.go:294] "Event occurred" object="azurefile-9021/pvc-7svns" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
I0315 21:42:29.680550 1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0315 21:42:33.665358 1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="68.899µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:50368" resp=200
I0315 21:42:36.299432 1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.MutatingWebhookConfiguration total 10 items received
I0315 21:42:37.939376 1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0315 21:42:41.747024 1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.PersistentVolumeClaim total 15 items received
I0315 21:42:42.787768 1 reflector.go:382] k8s.io/client-go/informers/factory.go:134: forcing resync
... skipping 2 lines ...
I0315 21:42:42.947963 1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-9021/pvc-7svns]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0315 21:42:42.947986 1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-9021/pvc-7svns]: no volume found
I0315 21:42:42.947992 1 pv_controller.go:1455] provisionClaim[azurefile-9021/pvc-7svns]: started
I0315 21:42:42.948000 1 pv_controller.go:1764] scheduleOperation[provision-azurefile-9021/pvc-7svns[44835163-7fbd-4282-99fe-bfb45e425375]]
I0315 21:42:42.948020 1 pv_controller.go:1496] provisionClaimOperation [azurefile-9021/pvc-7svns] started, class: "azurefile-9021-kubernetes.io-azure-file-dynamic-sc-zqgwf"
I0315 21:42:42.948031 1 pv_controller.go:1511] provisionClaimOperation [azurefile-9021/pvc-7svns]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0315 21:42:42.952539 1 azure_provision.go:108] failed to get azure provider
I0315 21:42:42.952560 1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-9021/pvc-7svns" with StorageClass "azurefile-9021-kubernetes.io-azure-file-dynamic-sc-zqgwf": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0315 21:42:42.952583 1 goroutinemap.go:150] Operation for "provision-azurefile-9021/pvc-7svns[44835163-7fbd-4282-99fe-bfb45e425375]" failed. No retries permitted until 2023-03-15 21:42:50.952571253 +0000 UTC m=+1660.423567509 (durationBeforeRetry 8s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0315 21:42:42.952828 1 event.go:294] "Event occurred" object="azurefile-9021/pvc-7svns" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
I0315 21:42:43.665558 1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="132.299µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:45510" resp=200
I0315 21:42:47.942236 1 gc_controller.go:161] GC'ing orphaned
I0315 21:42:47.942261 1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0315 21:42:48.725940 1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.LimitRange total 10 items received
I0315 21:42:49.935470 1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0315 21:42:52.731314 1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 9 items received
... skipping 7 lines ...
I0315 21:42:57.948565 1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-9021/pvc-7svns]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0315 21:42:57.948627 1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-9021/pvc-7svns]: no volume found
I0315 21:42:57.948690 1 pv_controller.go:1455] provisionClaim[azurefile-9021/pvc-7svns]: started
I0315 21:42:57.948734 1 pv_controller.go:1764] scheduleOperation[provision-azurefile-9021/pvc-7svns[44835163-7fbd-4282-99fe-bfb45e425375]]
I0315 21:42:57.948777 1 pv_controller.go:1496] provisionClaimOperation [azurefile-9021/pvc-7svns] started, class: "azurefile-9021-kubernetes.io-azure-file-dynamic-sc-zqgwf"
I0315 21:42:57.948784 1 pv_controller.go:1511] provisionClaimOperation [azurefile-9021/pvc-7svns]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0315 21:42:57.951326 1 azure_provision.go:108] failed to get azure provider
I0315 21:42:57.951346 1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-9021/pvc-7svns" with StorageClass "azurefile-9021-kubernetes.io-azure-file-dynamic-sc-zqgwf": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0315 21:42:57.951376 1 goroutinemap.go:150] Operation for "provision-azurefile-9021/pvc-7svns[44835163-7fbd-4282-99fe-bfb45e425375]" failed. No retries permitted until 2023-03-15 21:43:13.951363987 +0000 UTC m=+1683.422360243 (durationBeforeRetry 16s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0315 21:42:57.951491 1 event.go:294] "Event occurred" object="azurefile-9021/pvc-7svns" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
I0315 21:42:59.697162 1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0315 21:43:03.665077 1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="67.7µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:60328" resp=200
I0315 21:43:07.943148 1 gc_controller.go:161] GC'ing orphaned
I0315 21:43:07.943173 1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0315 21:43:10.938817 1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0315 21:43:12.788119 1 reflector.go:382] k8s.io/client-go/informers/factory.go:134: forcing resync
... skipping 17 lines ...
I0315 21:43:27.949691 1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-9021/pvc-7svns]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0315 21:43:27.949736 1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-9021/pvc-7svns]: no volume found
I0315 21:43:27.949763 1 pv_controller.go:1455] provisionClaim[azurefile-9021/pvc-7svns]: started
I0315 21:43:27.949800 1 pv_controller.go:1764] scheduleOperation[provision-azurefile-9021/pvc-7svns[44835163-7fbd-4282-99fe-bfb45e425375]]
I0315 21:43:27.949825 1 pv_controller.go:1496] provisionClaimOperation [azurefile-9021/pvc-7svns] started, class: "azurefile-9021-kubernetes.io-azure-file-dynamic-sc-zqgwf"
I0315 21:43:27.949859 1 pv_controller.go:1511] provisionClaimOperation [azurefile-9021/pvc-7svns]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0315 21:43:27.953057 1 azure_provision.go:108] failed to get azure provider
I0315 21:43:27.953081 1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-9021/pvc-7svns" with StorageClass "azurefile-9021-kubernetes.io-azure-file-dynamic-sc-zqgwf": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0315 21:43:27.953214 1 goroutinemap.go:150] Operation for "provision-azurefile-9021/pvc-7svns[44835163-7fbd-4282-99fe-bfb45e425375]" failed. No retries permitted until 2023-03-15 21:43:59.953202367 +0000 UTC m=+1729.424198623 (durationBeforeRetry 32s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0315 21:43:27.953359 1 event.go:294] "Event occurred" object="azurefile-9021/pvc-7svns" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
I0315 21:43:29.710211 1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0315 21:43:33.665648 1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="71.799µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:55106" resp=200
I0315 21:43:39.330365 1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ClusterRoleBinding total 0 items received
I0315 21:43:42.789390 1 reflector.go:382] k8s.io/client-go/informers/factory.go:134: forcing resync
I0315 21:43:42.950182 1 pv_controller_base.go:556] resyncing PV controller
I0315 21:43:42.950595 1 pv_controller_base.go:670] storeObjectUpdate updating claim "azurefile-9021/pvc-7svns" with version 6737
... skipping 30 lines ...
I0315 21:44:12.951264 1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-9021/pvc-7svns]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0315 21:44:12.951283 1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-9021/pvc-7svns]: no volume found
I0315 21:44:12.951295 1 pv_controller.go:1455] provisionClaim[azurefile-9021/pvc-7svns]: started
I0315 21:44:12.951303 1 pv_controller.go:1764] scheduleOperation[provision-azurefile-9021/pvc-7svns[44835163-7fbd-4282-99fe-bfb45e425375]]
I0315 21:44:12.951322 1 pv_controller.go:1496] provisionClaimOperation [azurefile-9021/pvc-7svns] started, class: "azurefile-9021-kubernetes.io-azure-file-dynamic-sc-zqgwf"
I0315 21:44:12.951329 1 pv_controller.go:1511] provisionClaimOperation [azurefile-9021/pvc-7svns]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0315 21:44:12.961175 1 azure_provision.go:108] failed to get azure provider
I0315 21:44:12.961196 1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-9021/pvc-7svns" with StorageClass "azurefile-9021-kubernetes.io-azure-file-dynamic-sc-zqgwf": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0315 21:44:12.961319 1 goroutinemap.go:150] Operation for "provision-azurefile-9021/pvc-7svns[44835163-7fbd-4282-99fe-bfb45e425375]" failed. No retries permitted until 2023-03-15 21:45:16.961209755 +0000 UTC m=+1806.432206011 (durationBeforeRetry 1m4s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0315 21:44:12.961591 1 event.go:294] "Event occurred" object="azurefile-9021/pvc-7svns" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
I0315 21:44:13.665510 1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="69.699µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:35904" resp=200
I0315 21:44:18.435079 1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 8 items received
I0315 21:44:18.831454 1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Service total 0 items received
I0315 21:44:23.665486 1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="68.7µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:33728" resp=200
I0315 21:44:27.790794 1 reflector.go:382] k8s.io/client-go/informers/factory.go:134: forcing resync
I0315 21:44:27.944917 1 gc_controller.go:161] GC'ing orphaned
... skipping 64 lines ...
I0315 21:45:27.955594 1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-9021/pvc-7svns]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0315 21:45:27.955616 1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-9021/pvc-7svns]: no volume found
I0315 21:45:27.955621 1 pv_controller.go:1455] provisionClaim[azurefile-9021/pvc-7svns]: started
I0315 21:45:27.955630 1 pv_controller.go:1764] scheduleOperation[provision-azurefile-9021/pvc-7svns[44835163-7fbd-4282-99fe-bfb45e425375]]
I0315 21:45:27.955641 1 pv_controller.go:1496] provisionClaimOperation [azurefile-9021/pvc-7svns] started, class: "azurefile-9021-kubernetes.io-azure-file-dynamic-sc-zqgwf"
I0315 21:45:27.955647 1 pv_controller.go:1511] provisionClaimOperation [azurefile-9021/pvc-7svns]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0315 21:45:27.963702 1 azure_provision.go:108] failed to get azure provider
I0315 21:45:27.963781 1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-9021/pvc-7svns" with StorageClass "azurefile-9021-kubernetes.io-azure-file-dynamic-sc-zqgwf": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0315 21:45:27.963852 1 goroutinemap.go:150] Operation for "provision-azurefile-9021/pvc-7svns[44835163-7fbd-4282-99fe-bfb45e425375]" failed. No retries permitted until 2023-03-15 21:47:29.963825585 +0000 UTC m=+1939.434821941 (durationBeforeRetry 2m2s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0315 21:45:27.964018 1 event.go:294] "Event occurred" object="azurefile-9021/pvc-7svns" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
I0315 21:45:28.315737 1 resource_quota_controller.go:194] Resource quota controller queued all resource quota for full calculation of usage
I0315 21:45:29.113962 1 node_lifecycle_controller.go:1046] Node capz-jtbghr-md-0-vnk86 ReadyCondition updated. Updating timestamp.
I0315 21:45:29.788397 1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0315 21:45:31.994147 1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0315 21:45:33.664895 1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="80.399µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:39710" resp=200
I0315 21:45:42.794261 1 reflector.go:382] k8s.io/client-go/informers/factory.go:134: forcing resync
... skipping 86 lines ...
I0315 21:46:56.164465 1 pv_controller_base.go:670] storeObjectUpdate updating claim "azurefile-3532/pvc-zpv7n" with version 7794
I0315 21:46:56.164601 1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-3532/pvc-zpv7n]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0315 21:46:56.164698 1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-3532/pvc-zpv7n]: no volume found
I0315 21:46:56.164709 1 pv_controller.go:1455] provisionClaim[azurefile-3532/pvc-zpv7n]: started
I0315 21:46:56.164764 1 pv_controller.go:1764] scheduleOperation[provision-azurefile-3532/pvc-zpv7n[98a04efb-a89f-40ab-abdf-ef7cc9155e92]]
I0315 21:46:56.164775 1 pv_controller.go:1775] operation "provision-azurefile-3532/pvc-zpv7n[98a04efb-a89f-40ab-abdf-ef7cc9155e92]" is already running, skipping
I0315 21:46:56.165732 1 azure_provision.go:108] failed to get azure provider
I0315 21:46:56.165751 1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-3532/pvc-zpv7n" with StorageClass "azurefile-3532-kubernetes.io-azure-file-dynamic-sc-7rhtv": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0315 21:46:56.165785 1 goroutinemap.go:150] Operation for "provision-azurefile-3532/pvc-zpv7n[98a04efb-a89f-40ab-abdf-ef7cc9155e92]" failed. No retries permitted until 2023-03-15 21:46:56.665774382 +0000 UTC m=+1906.136770738 (durationBeforeRetry 500ms). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0315 21:46:56.165954 1 event.go:294] "Event occurred" object="azurefile-3532/pvc-zpv7n" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
I0315 21:46:57.796588 1 reflector.go:382] k8s.io/client-go/informers/factory.go:134: forcing resync
I0315 21:46:57.958685 1 pv_controller_base.go:556] resyncing PV controller
I0315 21:46:57.958852 1 pv_controller_base.go:670] storeObjectUpdate updating claim "azurefile-9021/pvc-7svns" with version 6737
I0315 21:46:57.958875 1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-9021/pvc-7svns]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0315 21:46:57.958894 1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-9021/pvc-7svns]: no volume found
I0315 21:46:57.958904 1 pv_controller.go:1455] provisionClaim[azurefile-9021/pvc-7svns]: started
... skipping 3 lines ...
I0315 21:46:57.958945 1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-3532/pvc-zpv7n]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0315 21:46:57.958953 1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-3532/pvc-zpv7n]: no volume found
I0315 21:46:57.958959 1 pv_controller.go:1455] provisionClaim[azurefile-3532/pvc-zpv7n]: started
I0315 21:46:57.958964 1 pv_controller.go:1764] scheduleOperation[provision-azurefile-3532/pvc-zpv7n[98a04efb-a89f-40ab-abdf-ef7cc9155e92]]
I0315 21:46:57.958980 1 pv_controller.go:1496] provisionClaimOperation [azurefile-3532/pvc-zpv7n] started, class: "azurefile-3532-kubernetes.io-azure-file-dynamic-sc-7rhtv"
I0315 21:46:57.958988 1 pv_controller.go:1511] provisionClaimOperation [azurefile-3532/pvc-zpv7n]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0315 21:46:57.961807 1 azure_provision.go:108] failed to get azure provider
I0315 21:46:57.961826 1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-3532/pvc-zpv7n" with StorageClass "azurefile-3532-kubernetes.io-azure-file-dynamic-sc-7rhtv": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0315 21:46:57.961855 1 goroutinemap.go:150] Operation for "provision-azurefile-3532/pvc-zpv7n[98a04efb-a89f-40ab-abdf-ef7cc9155e92]" failed. No retries permitted until 2023-03-15 21:46:58.961843721 +0000 UTC m=+1908.432839977 (durationBeforeRetry 1s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0315 21:46:57.961983 1 event.go:294] "Event occurred" object="azurefile-3532/pvc-zpv7n" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
I0315 21:46:59.837292 1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0315 21:47:00.209412 1 namespaced_resources_deleter.go:502] namespace controller - deleteAllContent - namespace: azurefile-9021
I0315 21:47:00.251830 1 resource_quota_monitor.go:359] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azurefile-9021, name pvc-7svns.174cb5acb3edc3ab, uid 0a9615b2-ecd5-4d2e-88cd-2946a2016fa3, event type delete
I0315 21:47:00.268027 1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azurefile-9021, name default, uid ff546833-f5a0-43d7-95eb-cf80e53f9f39, event type delete
I0315 21:47:00.268190 1 serviceaccounts_controller.go:188] Finished syncing namespace "azurefile-9021" (2µs)
I0315 21:47:00.268320 1 tokens_controller.go:252] syncServiceAccount(azurefile-9021/default), service account deleted, removing tokens
... skipping 19 lines ...
I0315 21:47:00.390711 1 serviceaccounts_controller.go:188] Finished syncing namespace "azurefile-9021" (2.4µs)
I0315 21:47:00.390958 1 namespaced_resources_deleter.go:556] namespace controller - deleteAllContent - namespace: azurefile-9021, estimate: 15, errors: <nil>
I0315 21:47:00.390974 1 namespace_controller.go:180] Finished syncing namespace "azurefile-9021" (184.152853ms)
I0315 21:47:00.391042 1 namespace_controller.go:157] Content remaining in namespace azurefile-9021, waiting 8 seconds
I0315 21:47:00.672050 1 namespaced_resources_deleter.go:502] namespace controller - deleteAllContent - namespace: azurefile-6183
I0315 21:47:00.692195 1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=secrets, namespace azurefile-6183, name default-token-rkbj7, uid dbca1692-e7b8-4606-933a-938839f96be2, event type delete
E0315 21:47:00.704408 1 tokens_controller.go:262] error synchronizing serviceaccount azurefile-6183/default: secrets "default-token-t2btg" is forbidden: unable to create new content in namespace azurefile-6183 because it is being terminated
I0315 21:47:00.821031 1 tokens_controller.go:252] syncServiceAccount(azurefile-6183/default), service account deleted, removing tokens
I0315 21:47:00.821306 1 serviceaccounts_controller.go:188] Finished syncing namespace "azurefile-6183" (2µs)
I0315 21:47:00.821333 1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azurefile-6183, name default, uid c230483f-9b11-4168-a344-a8728016065b, event type delete
I0315 21:47:00.837817 1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=configmaps, namespace azurefile-6183, name kube-root-ca.crt, uid 8cad6a15-af61-4f85-9b63-1d8f69349d68, event type delete
I0315 21:47:00.839048 1 publisher.go:186] Finished syncing namespace "azurefile-6183" (1.115192ms)
I0315 21:47:00.861662 1 serviceaccounts_controller.go:188] Finished syncing namespace "azurefile-6183" (1.4µs)
... skipping 18 lines ...
I0315 21:47:12.959713 1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-3532/pvc-zpv7n]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0315 21:47:12.959789 1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-3532/pvc-zpv7n]: no volume found
I0315 21:47:12.959800 1 pv_controller.go:1455] provisionClaim[azurefile-3532/pvc-zpv7n]: started
I0315 21:47:12.959809 1 pv_controller.go:1764] scheduleOperation[provision-azurefile-3532/pvc-zpv7n[98a04efb-a89f-40ab-abdf-ef7cc9155e92]]
I0315 21:47:12.959828 1 pv_controller.go:1496] provisionClaimOperation [azurefile-3532/pvc-zpv7n] started, class: "azurefile-3532-kubernetes.io-azure-file-dynamic-sc-7rhtv"
I0315 21:47:12.959905 1 pv_controller.go:1511] provisionClaimOperation [azurefile-3532/pvc-zpv7n]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0315 21:47:12.965687 1 azure_provision.go:108] failed to get azure provider
I0315 21:47:12.965708 1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-3532/pvc-zpv7n" with StorageClass "azurefile-3532-kubernetes.io-azure-file-dynamic-sc-7rhtv": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0315 21:47:12.965737 1 goroutinemap.go:150] Operation for "provision-azurefile-3532/pvc-zpv7n[98a04efb-a89f-40ab-abdf-ef7cc9155e92]" failed. No retries permitted until 2023-03-15 21:47:14.965726363 +0000 UTC m=+1924.436722719 (durationBeforeRetry 2s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0315 21:47:12.965818 1 event.go:294] "Event occurred" object="azurefile-3532/pvc-zpv7n" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
I0315 21:47:13.665921 1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="71.9µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:49792" resp=200
I0315 21:47:18.426470 1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 2 items received
I0315 21:47:20.735188 1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.PersistentVolume total 0 items received
I0315 21:47:23.665103 1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="67.599µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:51406" resp=200
I0315 21:47:23.733399 1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ControllerRevision total 1 items received
I0315 21:47:23.945835 1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
... skipping 5 lines ...
I0315 21:47:27.960256 1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-3532/pvc-zpv7n]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0315 21:47:27.960283 1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-3532/pvc-zpv7n]: no volume found
I0315 21:47:27.960307 1 pv_controller.go:1455] provisionClaim[azurefile-3532/pvc-zpv7n]: started
I0315 21:47:27.960326 1 pv_controller.go:1764] scheduleOperation[provision-azurefile-3532/pvc-zpv7n[98a04efb-a89f-40ab-abdf-ef7cc9155e92]]
I0315 21:47:27.960348 1 pv_controller.go:1496] provisionClaimOperation [azurefile-3532/pvc-zpv7n] started, class: "azurefile-3532-kubernetes.io-azure-file-dynamic-sc-7rhtv"
I0315 21:47:27.960372 1 pv_controller.go:1511] provisionClaimOperation [azurefile-3532/pvc-zpv7n]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0315 21:47:27.965237 1 azure_provision.go:108] failed to get azure provider
I0315 21:47:27.965259 1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-3532/pvc-zpv7n" with StorageClass "azurefile-3532-kubernetes.io-azure-file-dynamic-sc-7rhtv": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0315 21:47:27.965289 1 goroutinemap.go:150] Operation for "provision-azurefile-3532/pvc-zpv7n[98a04efb-a89f-40ab-abdf-ef7cc9155e92]" failed. No retries permitted until 2023-03-15 21:47:31.965277097 +0000 UTC m=+1941.436273353 (durationBeforeRetry 4s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0315 21:47:27.965595 1 event.go:294] "Event occurred" object="azurefile-3532/pvc-zpv7n" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
I0315 21:47:29.856059 1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0315 21:47:31.433298 1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 2 items received
I0315 21:47:31.610178 1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0315 21:47:32.733192 1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Role total 2 items received
I0315 21:47:33.666644 1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="70.399µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:49806" resp=200
I0315 21:47:40.481916 1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
... skipping 4 lines ...
I0315 21:47:42.961302 1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-3532/pvc-zpv7n]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0315 21:47:42.961328 1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-3532/pvc-zpv7n]: no volume found
I0315 21:47:42.961339 1 pv_controller.go:1455] provisionClaim[azurefile-3532/pvc-zpv7n]: started
I0315 21:47:42.961347 1 pv_controller.go:1764] scheduleOperation[provision-azurefile-3532/pvc-zpv7n[98a04efb-a89f-40ab-abdf-ef7cc9155e92]]
I0315 21:47:42.961364 1 pv_controller.go:1496] provisionClaimOperation [azurefile-3532/pvc-zpv7n] started, class: "azurefile-3532-kubernetes.io-azure-file-dynamic-sc-7rhtv"
I0315 21:47:42.961374 1 pv_controller.go:1511] provisionClaimOperation [azurefile-3532/pvc-zpv7n]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0315 21:47:42.965253 1 azure_provision.go:108] failed to get azure provider
I0315 21:47:42.966041 1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-3532/pvc-zpv7n" with StorageClass "azurefile-3532-kubernetes.io-azure-file-dynamic-sc-7rhtv": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0315 21:47:42.966183 1 goroutinemap.go:150] Operation for "provision-azurefile-3532/pvc-zpv7n[98a04efb-a89f-40ab-abdf-ef7cc9155e92]" failed. No retries permitted until 2023-03-15 21:47:50.966167943 +0000 UTC m=+1960.437164299 (durationBeforeRetry 8s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0315 21:47:42.966515 1 event.go:294] "Event occurred" object="azurefile-3532/pvc-zpv7n" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
I0315 21:47:43.665222 1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="69.399µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:53286" resp=200
I0315 21:47:45.716021 1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.CronJob total 3 items received
I0315 21:47:46.671252 1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0315 21:47:47.731779 1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.EndpointSlice total 0 items received
I0315 21:47:47.952060 1 gc_controller.go:161] GC'ing orphaned
I0315 21:47:47.952085 1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
... skipping 5 lines ...
I0315 21:47:57.961754 1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-3532/pvc-zpv7n]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0315 21:47:57.961816 1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-3532/pvc-zpv7n]: no volume found
I0315 21:47:57.961828 1 pv_controller.go:1455] provisionClaim[azurefile-3532/pvc-zpv7n]: started
I0315 21:47:57.961837 1 pv_controller.go:1764] scheduleOperation[provision-azurefile-3532/pvc-zpv7n[98a04efb-a89f-40ab-abdf-ef7cc9155e92]]
I0315 21:47:57.961852 1 pv_controller.go:1496] provisionClaimOperation [azurefile-3532/pvc-zpv7n] started, class: "azurefile-3532-kubernetes.io-azure-file-dynamic-sc-7rhtv"
I0315 21:47:57.961863 1 pv_controller.go:1511] provisionClaimOperation [azurefile-3532/pvc-zpv7n]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0315 21:47:57.963542 1 azure_provision.go:108] failed to get azure provider
I0315 21:47:57.963561 1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-3532/pvc-zpv7n" with StorageClass "azurefile-3532-kubernetes.io-azure-file-dynamic-sc-7rhtv": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0315 21:47:57.963584 1 goroutinemap.go:150] Operation for "provision-azurefile-3532/pvc-zpv7n[98a04efb-a89f-40ab-abdf-ef7cc9155e92]" failed. No retries permitted until 2023-03-15 21:48:13.963573719 +0000 UTC m=+1983.434569975 (durationBeforeRetry 16s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0315 21:47:57.963862 1 event.go:294] "Event occurred" object="azurefile-3532/pvc-zpv7n" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
I0315 21:47:59.872903 1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0315 21:48:01.442100 1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0315 21:48:01.816542 1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 3 items received
I0315 21:48:02.942069 1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 76 items received
I0315 21:48:03.665046 1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="70µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:58294" resp=200
I0315 21:48:07.741222 1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Namespace total 22 items received
... skipping 22 lines ...
I0315 21:48:27.963616 1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-3532/pvc-zpv7n]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0315 21:48:27.963672 1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-3532/pvc-zpv7n]: no volume found
I0315 21:48:27.963683 1 pv_controller.go:1455] provisionClaim[azurefile-3532/pvc-zpv7n]: started
I0315 21:48:27.963693 1 pv_controller.go:1764] scheduleOperation[provision-azurefile-3532/pvc-zpv7n[98a04efb-a89f-40ab-abdf-ef7cc9155e92]]
I0315 21:48:27.963709 1 pv_controller.go:1496] provisionClaimOperation [azurefile-3532/pvc-zpv7n] started, class: "azurefile-3532-kubernetes.io-azure-file-dynamic-sc-7rhtv"
I0315 21:48:27.963718 1 pv_controller.go:1511] provisionClaimOperation [azurefile-3532/pvc-zpv7n]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0315 21:48:27.968275 1 azure_provision.go:108] failed to get azure provider
I0315 21:48:27.968298 1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-3532/pvc-zpv7n" with StorageClass "azurefile-3532-kubernetes.io-azure-file-dynamic-sc-7rhtv": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0315 21:48:27.968368 1 goroutinemap.go:150] Operation for "provision-azurefile-3532/pvc-zpv7n[98a04efb-a89f-40ab-abdf-ef7cc9155e92]" failed. No retries permitted until 2023-03-15 21:48:59.968354324 +0000 UTC m=+2029.439350580 (durationBeforeRetry 32s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0315 21:48:27.968425 1 event.go:294] "Event occurred" object="azurefile-3532/pvc-zpv7n" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
I0315 21:48:28.084711 1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 8 items received
I0315 21:48:29.887849 1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0315 21:48:32.681702 1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 4 items received
I0315 21:48:33.664758 1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="88.999µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:39894" resp=200
I0315 21:48:35.951379 1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0315 21:48:42.801096 1 reflector.go:382] k8s.io/client-go/informers/factory.go:134: forcing resync
... skipping 31 lines ...
I0315 21:49:12.965726 1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-3532/pvc-zpv7n]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0315 21:49:12.965756 1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-3532/pvc-zpv7n]: no volume found
I0315 21:49:12.965789 1 pv_controller.go:1455] provisionClaim[azurefile-3532/pvc-zpv7n]: started
I0315 21:49:12.965804 1 pv_controller.go:1764] scheduleOperation[provision-azurefile-3532/pvc-zpv7n[98a04efb-a89f-40ab-abdf-ef7cc9155e92]]
I0315 21:49:12.965822 1 pv_controller.go:1496] provisionClaimOperation [azurefile-3532/pvc-zpv7n] started, class: "azurefile-3532-kubernetes.io-azure-file-dynamic-sc-7rhtv"
I0315 21:49:12.965828 1 pv_controller.go:1511] provisionClaimOperation [azurefile-3532/pvc-zpv7n]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0315 21:49:12.975245 1 azure_provision.go:108] failed to get azure provider
I0315 21:49:12.975267 1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-3532/pvc-zpv7n" with StorageClass "azurefile-3532-kubernetes.io-azure-file-dynamic-sc-7rhtv": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0315 21:49:12.975338 1 goroutinemap.go:150] Operation for "provision-azurefile-3532/pvc-zpv7n[98a04efb-a89f-40ab-abdf-ef7cc9155e92]" failed. No retries permitted until 2023-03-15 21:50:16.975325365 +0000 UTC m=+2106.446321721 (durationBeforeRetry 1m4s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0315 21:49:12.975398 1 event.go:294] "Event occurred" object="azurefile-3532/pvc-zpv7n" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
I0315 21:49:13.665819 1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="68.599µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:40538" resp=200
I0315 21:49:15.573209 1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0315 21:49:18.313854 1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.IngressClass total 3 items received
I0315 21:49:19.950439 1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0315 21:49:23.665354 1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="66.8µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:53258" resp=200
I0315 21:49:27.803234 1 reflector.go:382] k8s.io/client-go/informers/factory.go:134: forcing resync
... skipping 75 lines ...
I0315 21:50:27.968363 1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-3532/pvc-zpv7n]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0315 21:50:27.968432 1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-3532/pvc-zpv7n]: no volume found
I0315 21:50:27.968486 1 pv_controller.go:1455] provisionClaim[azurefile-3532/pvc-zpv7n]: started
I0315 21:50:27.968501 1 pv_controller.go:1764] scheduleOperation[provision-azurefile-3532/pvc-zpv7n[98a04efb-a89f-40ab-abdf-ef7cc9155e92]]
I0315 21:50:27.968521 1 pv_controller.go:1496] provisionClaimOperation [azurefile-3532/pvc-zpv7n] started, class: "azurefile-3532-kubernetes.io-azure-file-dynamic-sc-7rhtv"
I0315 21:50:27.968560 1 pv_controller.go:1511] provisionClaimOperation [azurefile-3532/pvc-zpv7n]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0315 21:50:27.970509 1 azure_provision.go:108] failed to get azure provider
I0315 21:50:27.970530 1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-3532/pvc-zpv7n" with StorageClass "azurefile-3532-kubernetes.io-azure-file-dynamic-sc-7rhtv": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0315 21:50:27.970562 1 goroutinemap.go:150] Operation for "provision-azurefile-3532/pvc-zpv7n[98a04efb-a89f-40ab-abdf-ef7cc9155e92]" failed. No retries permitted until 2023-03-15 21:52:29.970550933 +0000 UTC m=+2239.441547289 (durationBeforeRetry 2m2s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0315 21:50:27.970858 1 event.go:294] "Event occurred" object="azurefile-3532/pvc-zpv7n" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
I0315 21:50:28.316418 1 resource_quota_controller.go:194] Resource quota controller queued all resource quota for full calculation of usage
I0315 21:50:29.960032 1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0315 21:50:29.989521 1 attach_detach_controller.go:673] processVolumesInUse for node "capz-jtbghr-md-0-vnk86"
I0315 21:50:31.738359 1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.CSINode total 6 items received
I0315 21:50:33.665611 1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="66.6µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:47320" resp=200
I0315 21:50:34.158190 1 node_lifecycle_controller.go:1046] Node capz-jtbghr-md-0-vnk86 ReadyCondition updated. Updating timestamp.
... skipping 110 lines ...
I0315 21:52:01.841382 1 namespaced_resources_deleter.go:502] namespace controller - deleteAllContent - namespace: azurefile-3532
I0315 21:52:01.865275 1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=configmaps, namespace azurefile-3532, name kube-root-ca.crt, uid c7ab6172-4c25-4796-9cc7-4338f6b98def, event type delete
I0315 21:52:01.867181 1 publisher.go:186] Finished syncing namespace "azurefile-3532" (1.866987ms)
I0315 21:52:01.873068 1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=secrets, namespace azurefile-3532, name default-token-cpww2, uid 1710393c-e9f6-4ac9-bccf-3def194b617e, event type delete
I0315 21:52:01.882689 1 publisher.go:186] Finished syncing namespace "azurefile-7939" (4.825165ms)
I0315 21:52:01.885969 1 serviceaccounts_controller.go:188] Finished syncing namespace "azurefile-7939" (8.030741ms)
E0315 21:52:01.894281 1 tokens_controller.go:262] error synchronizing serviceaccount azurefile-3532/default: secrets "default-token-5p8hw" is forbidden: unable to create new content in namespace azurefile-3532 because it is being terminated
I0315 21:52:01.941878 1 resource_quota_monitor.go:359] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azurefile-3532, name pvc-zpv7n.174cb5f2f2fd622a, uid d8be8b74-729d-475f-a118-535d46bda475, event type delete
I0315 21:52:01.989088 1 pvc_protection_controller.go:331] "Got event on PVC" pvc="azurefile-3532/pvc-zpv7n"
I0315 21:52:01.989125 1 pvc_protection_controller.go:149] "Processing PVC" PVC="azurefile-3532/pvc-zpv7n"
I0315 21:52:01.989134 1 pvc_protection_controller.go:230] "Looking for Pods using PVC in the Informer's cache" PVC="azurefile-3532/pvc-zpv7n"
I0315 21:52:01.989143 1 pvc_protection_controller.go:251] "No Pod using PVC was found in the Informer's cache" PVC="azurefile-3532/pvc-zpv7n"
I0315 21:52:01.989149 1 pvc_protection_controller.go:256] "Looking for Pods using PVC with a live list" PVC="azurefile-3532/pvc-zpv7n"
... skipping 17 lines ...
I0315 21:52:02.083401 1 namespace_controller.go:157] Content remaining in namespace azurefile-3532, waiting 8 seconds
I0315 21:52:02.296901 1 serviceaccounts_controller.go:188] Finished syncing namespace "azurefile-7939" (2.4µs)
I0315 21:52:02.316427 1 namespaced_resources_deleter.go:502] namespace controller - deleteAllContent - namespace: azurefile-8779
I0315 21:52:02.361078 1 serviceaccounts_controller.go:188] Finished syncing namespace "azurefile-5998" (28.60889ms)
I0315 21:52:02.361164 1 publisher.go:186] Finished syncing namespace "azurefile-5998" (28.829689ms)
I0315 21:52:02.403003 1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=secrets, namespace azurefile-8779, name default-token-5mklb, uid 71139ce7-96ff-4bff-825f-d98a7a77538b, event type delete
E0315 21:52:02.416035 1 tokens_controller.go:262] error synchronizing serviceaccount azurefile-8779/default: secrets "default-token-l8fkp" is forbidden: unable to create new content in namespace azurefile-8779 because it is being terminated
I0315 21:52:02.452223 1 tokens_controller.go:252] syncServiceAccount(azurefile-8779/default), service account deleted, removing tokens
I0315 21:52:02.452257 1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azurefile-8779, name default, uid 9899cfc5-b58f-46e2-b800-15bafa845efe, event type delete
I0315 21:52:02.452390 1 serviceaccounts_controller.go:188] Finished syncing namespace "azurefile-8779" (1.8µs)
I0315 21:52:02.558678 1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=configmaps, namespace azurefile-8779, name kube-root-ca.crt, uid 6ee96175-5766-4242-af13-914b61b0f1e3, event type delete
I0315 21:52:02.561157 1 publisher.go:186] Finished syncing namespace "azurefile-8779" (2.256184ms)
I0315 21:52:02.578451 1 serviceaccounts_controller.go:188] Finished syncing namespace "azurefile-8779" (1.8µs)
... skipping 28 lines ...
I0315 21:52:03.602015 1 serviceaccounts_controller.go:188] Finished syncing namespace "azurefile-4527" (2.1µs)
I0315 21:52:03.643330 1 publisher.go:186] Finished syncing namespace "azurefile-8208" (6.528652ms)
I0315 21:52:03.645689 1 serviceaccounts_controller.go:188] Finished syncing namespace "azurefile-8208" (8.730936ms)
I0315 21:52:03.664732 1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="73.3µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:51812" resp=200
I0315 21:52:03.696155 1 namespaced_resources_deleter.go:502] namespace controller - deleteAllContent - namespace: azurefile-2431
I0315 21:52:03.732833 1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=secrets, namespace azurefile-2431, name default-token-dszcf, uid 9d766836-89b0-4551-ac80-16a98360c0c0, event type delete
E0315 21:52:03.746544 1 tokens_controller.go:262] error synchronizing serviceaccount azurefile-2431/default: secrets "default-token-fbtbk" is forbidden: unable to create new content in namespace azurefile-2431 because it is being terminated
I0315 21:52:03.756524 1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=configmaps, namespace azurefile-2431, name kube-root-ca.crt, uid 67402545-ebf3-43f6-be66-843e4b7badd1, event type delete
I0315 21:52:03.757808 1 publisher.go:186] Finished syncing namespace "azurefile-2431" (1.183291ms)
I0315 21:52:03.777530 1 tokens_controller.go:252] syncServiceAccount(azurefile-2431/default), service account deleted, removing tokens
I0315 21:52:03.777770 1 serviceaccounts_controller.go:188] Finished syncing namespace "azurefile-2431" (1.7µs)
I0315 21:52:03.777792 1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azurefile-2431, name default, uid a46b0a17-4507-4757-bfeb-f4c9fcf3650a, event type delete
I0315 21:52:03.818627 1 serviceaccounts_controller.go:188] Finished syncing namespace "azurefile-2431" (2µs)
... skipping 27 lines ...
I0315 21:52:04.723375 1 namespace_controller.go:180] Finished syncing namespace "azurefile-6877" (140.333371ms)
I0315 21:52:04.901110 1 serviceaccounts_controller.go:188] Finished syncing namespace "azurefile-7411" (3µs)
I0315 21:52:04.940830 1 publisher.go:186] Finished syncing namespace "azurefile-6426" (4.719465ms)
I0315 21:52:04.942573 1 serviceaccounts_controller.go:188] Finished syncing namespace "azurefile-6426" (6.291354ms)
I0315 21:52:05.042329 1 namespaced_resources_deleter.go:502] namespace controller - deleteAllContent - namespace: azurefile-379
I0315 21:52:05.069102 1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=secrets, namespace azurefile-379, name default-token-s8l6x, uid 4f28730d-0059-490e-9789-8f0fd16978cf, event type delete
E0315 21:52:05.079311 1 tokens_controller.go:262] error synchronizing serviceaccount azurefile-379/default: secrets "default-token-qp7b2" is forbidden: unable to create new content in namespace azurefile-379 because it is being terminated
I0315 21:52:05.087248 1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=configmaps, namespace azurefile-379, name kube-root-ca.crt, uid f813762a-3597-4b4f-b084-1262b92473f6, event type delete
I0315 21:52:05.088482 1 publisher.go:186] Finished syncing namespace "azurefile-379" (1.204491ms)
I0315 21:52:05.145262 1 tokens_controller.go:252] syncServiceAccount(azurefile-379/default), service account deleted, removing tokens
I0315 21:52:05.145294 1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azurefile-379, name default, uid 9de7fa48-df1e-43e8-86de-b6e1b279257e, event type delete
I0315 21:52:05.145420 1 serviceaccounts_controller.go:188] Finished syncing namespace "azurefile-379" (1.6µs)
I0315 21:52:05.164994 1 serviceaccounts_controller.go:188] Finished syncing namespace "azurefile-379" (2.2µs)
... skipping 58 lines ...
I0315 21:52:07.155299 1 publisher.go:186] Finished syncing namespace "azurefile-3084" (12.196217ms)
I0315 21:52:07.257435 1 serviceaccounts_controller.go:188] Finished syncing namespace "azurefile-3532" (2.2µs)
I0315 21:52:07.257799 1 namespaced_resources_deleter.go:556] namespace controller - deleteAllContent - namespace: azurefile-3532, estimate: 0, errors: <nil>
I0315 21:52:07.266513 1 namespace_controller.go:180] Finished syncing namespace "azurefile-3532" (183.41135ms)
I0315 21:52:07.302558 1 namespaced_resources_deleter.go:502] namespace controller - deleteAllContent - namespace: azurefile-7939
I0315 21:52:07.319410 1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=secrets, namespace azurefile-7939, name default-token-jhzht, uid ebe7d1f4-b85f-4fb0-8750-e502dd4dacff, event type delete
E0315 21:52:07.329826 1 tokens_controller.go:262] error synchronizing serviceaccount azurefile-7939/default: serviceaccounts "default" not found
I0315 21:52:07.330270 1 tokens_controller.go:252] syncServiceAccount(azurefile-7939/default), service account deleted, removing tokens
I0315 21:52:07.330081 1 serviceaccounts_controller.go:188] Finished syncing namespace "azurefile-7939" (1.9µs)
I0315 21:52:07.330497 1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azurefile-7939, name default, uid f5f54886-ce01-4a92-be07-f1d9cc948d38, event type delete
I0315 21:52:07.335906 1 tokens_controller.go:252] syncServiceAccount(azurefile-7939/default), service account deleted, removing tokens
I0315 21:52:07.338061 1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=configmaps, namespace azurefile-7939, name kube-root-ca.crt, uid 249ecc8b-70b6-463c-af1a-5eb6076785af, event type delete
I0315 21:52:07.339672 1 publisher.go:186] Finished syncing namespace "azurefile-7939" (1.581289ms)
... skipping 6 lines ...
I0315 21:52:07.595583 1 publisher.go:186] Finished syncing namespace "azurefile-7056" (5.80686ms)
I0315 21:52:07.597734 1 serviceaccounts_controller.go:188] Finished syncing namespace "azurefile-7056" (7.767047ms)
I0315 21:52:07.736095 1 namespaced_resources_deleter.go:502] namespace controller - deleteAllContent - namespace: azurefile-5998
I0315 21:52:07.751773 1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=configmaps, namespace azurefile-5998, name kube-root-ca.crt, uid 589518a1-8ba8-43b1-aede-00bdd7447df8, event type delete
I0315 21:52:07.753163 1 publisher.go:186] Finished syncing namespace "azurefile-5998" (1.311891ms)
I0315 21:52:07.784927 1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=secrets, namespace azurefile-5998, name default-token-84hvd, uid fc92ea74-fbe2-482f-9836-00a737cd83b1, event type delete
E0315 21:52:07.798829 1 tokens_controller.go:262] error synchronizing serviceaccount azurefile-5998/default: secrets "default-token-zcm94" is forbidden: unable to create new content in namespace azurefile-5998 because it is being terminated
I0315 21:52:07.831079 1 tokens_controller.go:252] syncServiceAccount(azurefile-5998/default), service account deleted, removing tokens
I0315 21:52:07.831215 1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azurefile-5998, name default, uid bfabc074-e253-42c9-b669-4ec89da37107, event type delete
I0315 21:52:07.831269 1 serviceaccounts_controller.go:188] Finished syncing namespace "azurefile-5998" (3.2µs)
I0315 21:52:07.856780 1 serviceaccounts_controller.go:188] Finished syncing namespace "azurefile-5998" (2.2µs)
I0315 21:52:07.856969 1 namespaced_resources_deleter.go:556] namespace controller - deleteAllContent - namespace: azurefile-5998, estimate: 0, errors: <nil>
I0315 21:52:07.865584 1 namespace_controller.go:180] Finished syncing namespace "azurefile-5998" (131.9263ms)
... skipping 3 lines ...
I0315 21:52:07.960823 1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0315 21:52:07.982953 1 serviceaccounts_controller.go:188] Finished syncing namespace "azurefile-7056" (2.4µs)
I0315 21:52:08.026831 1 publisher.go:186] Finished syncing namespace "azurefile-5364" (8.311043ms)
I0315 21:52:08.028937 1 serviceaccounts_controller.go:188] Finished syncing namespace "azurefile-5364" (10.24083ms)
I0315 21:52:08.164649 1 namespaced_resources_deleter.go:502] namespace controller - deleteAllContent - namespace: azurefile-1296
I0315 21:52:08.244060 1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=secrets, namespace azurefile-1296, name default-token-pfxp7, uid dbffa6df-6c5f-4454-8502-7604c7dcf0e1, event type delete
E0315 21:52:08.260774 1 tokens_controller.go:262] error synchronizing serviceaccount azurefile-1296/default: secrets "default-token-w6n66" is forbidden: unable to create new content in namespace azurefile-1296 because it is being terminated
I0315 21:52:08.261482 1 tokens_controller.go:252] syncServiceAccount(azurefile-1296/default), service account deleted, removing tokens
I0315 21:52:08.261615 1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azurefile-1296, name default, uid 553902f6-1604-416a-a66a-11fa0f78df17, event type delete
I0315 21:52:08.261647 1 serviceaccounts_controller.go:188] Finished syncing namespace "azurefile-1296" (13.9µs)
I0315 21:52:08.272181 1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=configmaps, namespace azurefile-1296, name kube-root-ca.crt, uid 18aa40d4-b01c-4611-b17c-de1501836c22, event type delete
I0315 21:52:08.273642 1 publisher.go:186] Finished syncing namespace "azurefile-1296" (1.300991ms)
I0315 21:52:08.308791 1 serviceaccounts_controller.go:188] Finished syncing namespace "azurefile-1296" (1.9µs)
... skipping 35 lines ...
I0315 21:52:09.327486 1 publisher.go:186] Finished syncing namespace "azurefile-5651" (4.902265ms)
I0315 21:52:09.329177 1 serviceaccounts_controller.go:188] Finished syncing namespace "azurefile-5651" (6.439256ms)
I0315 21:52:09.466318 1 namespaced_resources_deleter.go:502] namespace controller - deleteAllContent - namespace: azurefile-9825
I0315 21:52:09.491015 1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=configmaps, namespace azurefile-9825, name kube-root-ca.crt, uid 6fa17529-3607-4945-9a2a-2d65bac09723, event type delete
I0315 21:52:09.492541 1 publisher.go:186] Finished syncing namespace "azurefile-9825" (1.42579ms)
I0315 21:52:09.510603 1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=secrets, namespace azurefile-9825, name default-token-ngj8x, uid ca113391-6910-49d9-bd59-4b29a7313023, event type delete
E0315 21:52:09.520559 1 tokens_controller.go:262] error synchronizing serviceaccount azurefile-9825/default: secrets "default-token-m7hv8" is forbidden: unable to create new content in namespace azurefile-9825 because it is being terminated
I0315 21:52:09.540026 1 tokens_controller.go:252] syncServiceAccount(azurefile-9825/default), service account deleted, removing tokens
I0315 21:52:09.540148 1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azurefile-9825, name default, uid da722d4e-a68c-4d28-b777-ba08d75a1e6b, event type delete
I0315 21:52:09.540221 1 serviceaccounts_controller.go:188] Finished syncing namespace "azurefile-9825" (1.3µs)
I0315 21:52:09.586710 1 serviceaccounts_controller.go:188] Finished syncing namespace "azurefile-9825" (2µs)
I0315 21:52:09.587066 1 namespaced_resources_deleter.go:556] namespace controller - deleteAllContent - namespace: azurefile-9825, estimate: 0, errors: <nil>
I0315 21:52:09.599121 1 namespace_controller.go:180] Finished syncing namespace "azurefile-9825" (134.629463ms)
I0315 21:52:09.712957 1 namespace_controller.go:185] Namespace has been deleted azurefile-6877
I0315 21:52:09.712974 1 namespace_controller.go:180] Finished syncing namespace "azurefile-6877" (38.8µs)
I0315 21:52:09.723720 1 serviceaccounts_controller.go:188] Finished syncing namespace "azurefile-5651" (2.5µs)
I0315 21:52:09.904002 1 namespaced_resources_deleter.go:502] namespace controller - deleteAllContent - namespace: azurefile-7411
I0315 21:52:09.920144 1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=secrets, namespace azurefile-7411, name default-token-24p7s, uid f01aa583-54a8-4fb7-b5c2-fc68130306c9, event type delete
E0315 21:52:09.930423 1 tokens_controller.go:262] error synchronizing serviceaccount azurefile-7411/default: secrets "default-token-xcxfr" is forbidden: unable to create new content in namespace azurefile-7411 because it is being terminated
I0315 21:52:09.950914 1 tokens_controller.go:252] syncServiceAccount(azurefile-7411/default), service account deleted, removing tokens
I0315 21:52:09.951132 1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azurefile-7411, name default, uid 8c84d29e-9c67-4801-8bde-dc3156ab9478, event type delete
I0315 21:52:09.951149 1 serviceaccounts_controller.go:188] Finished syncing namespace "azurefile-7411" (2.5µs)
I0315 21:52:10.016911 1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=configmaps, namespace azurefile-7411, name kube-root-ca.crt, uid 59d0527b-1a58-4de4-9b1d-7725020ff8bd, event type delete
I0315 21:52:10.019055 1 publisher.go:186] Finished syncing namespace "azurefile-7411" (2.112685ms)
I0315 21:52:10.029423 1 serviceaccounts_controller.go:188] Finished syncing namespace "azurefile-7411" (2.1µs)
... skipping 25 lines ...
I0315 21:52:10.885388 1 serviceaccounts_controller.go:188] Finished syncing namespace "azurefile-1189" (1.9µs)
I0315 21:52:10.899931 1 namespace_controller.go:180] Finished syncing namespace "azurefile-1189" (133.801269ms)
I0315 21:52:11.087038 1 namespace_controller.go:185] Namespace has been deleted azurefile-3305
I0315 21:52:11.087072 1 namespace_controller.go:180] Finished syncing namespace "azurefile-3305" (66.8µs)
I0315 21:52:11.207726 1 namespaced_resources_deleter.go:502] namespace controller - deleteAllContent - namespace: azurefile-4429
I0315 21:52:11.242658 1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=secrets, namespace azurefile-4429, name default-token-z7qqb, uid 7ac2ee6b-0d62-42cb-8253-d8fa123c14ce, event type delete
E0315 21:52:11.253821 1 tokens_controller.go:262] error synchronizing serviceaccount azurefile-4429/default: secrets "default-token-r4bwh" is forbidden: unable to create new content in namespace azurefile-4429 because it is being terminated
2023/03/15 21:52:11 ===================================================
------------------------------
[AfterSuite] PASSED [1.805 seconds]
[AfterSuite]
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/suite_test.go:148
------------------------------
Summarizing 6 Failures:
[FAIL] Dynamic Provisioning [It] should create a volume on demand with mount options [kubernetes.io/azure-file] [file.csi.azure.com] [Windows]
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites/testsuites.go:221
[FAIL] Dynamic Provisioning [It] should create multiple PV objects, bind to PVCs and attach all to different pods on the same node [kubernetes.io/azure-file] [file.csi.azure.com] [Windows]
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites/testsuites.go:221
[FAIL] Dynamic Provisioning [It] should create a volume on demand and mount it as readOnly in a pod [kubernetes.io/azure-file] [file.csi.azure.com] [Windows]
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites/testsuites.go:221
[FAIL] Dynamic Provisioning [It] should create a deployment object, write and read to it, delete the pod and write and read to it again [kubernetes.io/azure-file] [file.csi.azure.com] [Windows]
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites/testsuites.go:221
[FAIL] Dynamic Provisioning [It] should delete PV with reclaimPolicy "Delete" [kubernetes.io/azure-file] [file.csi.azure.com] [Windows]
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites/testsuites.go:221
[FAIL] Dynamic Provisioning [It] should create a volume on demand and resize it [kubernetes.io/azure-file] [file.csi.azure.com] [Windows]
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites/testsuites.go:221
Ran 6 of 39 Specs in 1824.459 seconds
FAIL! -- 0 Passed | 6 Failed | 0 Pending | 33 Skipped
You're using deprecated Ginkgo functionality:
=============================================
Support for custom reporters has been removed in V2. Please read the documentation linked to below for Ginkgo's new behavior and for a migration path:
Learn more at: https://onsi.github.io/ginkgo/MIGRATING_TO_V2#removed-custom-reporters
To silence deprecations that can be silenced set the following environment variable:
ACK_GINKGO_DEPRECATIONS=2.4.0
--- FAIL: TestE2E (1824.46s)
FAIL
FAIL sigs.k8s.io/azurefile-csi-driver/test/e2e 1824.541s
FAIL
make: *** [Makefile:85: e2e-test] Error 1
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
capz-jtbghr-control-plane-99vcl Ready control-plane,master 36m v1.23.18-rc.0.1+500bcf6c2b6f54 10.0.0.4 <none> Ubuntu 18.04.6 LTS 5.4.0-1104-azure containerd://1.6.18
capz-jtbghr-md-0-vnk86 Ready <none> 34m v1.23.18-rc.0.1+500bcf6c2b6f54 10.1.0.4 <none> Ubuntu 18.04.6 LTS 5.4.0-1104-azure containerd://1.6.18
capz-jtbghr-md-0-vnl4c Ready <none> 34m v1.23.18-rc.0.1+500bcf6c2b6f54 10.1.0.5 <none> Ubuntu 18.04.6 LTS 5.4.0-1104-azure containerd://1.6.18
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
calico-apiserver calico-apiserver-5db7789c6c-5zrb2 1/1 Running 0 35m 192.168.108.198 capz-jtbghr-control-plane-99vcl <none> <none>
... skipping 163 lines ...