Error lines from build-log.txt
... skipping 148 lines ...
Image Tag is c6c469c
Build Linux Azure amd64 cloud controller manager
make: Entering directory '/home/prow/go/src/sigs.k8s.io/cloud-provider-azure'
make ARCH=amd64 build-ccm-image
make[1]: Entering directory '/home/prow/go/src/sigs.k8s.io/cloud-provider-azure'
docker buildx inspect img-builder > /dev/null || docker buildx create --name img-builder --use
ERROR: no builder "img-builder" found
img-builder
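Note: the ERROR above is the expected miss branch of the probe on a fresh CI node; the recipe falls through the `||` and creates the builder, whose name is echoed on the next line. A minimal sketch of the same idempotent pattern (with stderr silenced, which the Makefile recipe above does not do):
# create the buildx builder only when the inspect probe misses
docker buildx inspect img-builder > /dev/null 2>&1 || docker buildx create --name img-builder --use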
# enable qemu for arm64 build
# https://github.com/docker/buildx/issues/464#issuecomment-741507760
docker run --privileged --rm tonistiigi/binfmt --uninstall qemu-aarch64
Unable to find image 'tonistiigi/binfmt:latest' locally
latest: Pulling from tonistiigi/binfmt
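Per the linked buildx issue, the uninstall above precedes a re-registration of the qemu-aarch64 handler so arm64 stages can run under emulation. The documented re-install invocation for the tonistiigi/binfmt image is sketched below as an assumption; the step this build actually ran is in the skipped output and may differ:
# re-register the arm64 handler (binfmt_misc fix-binary flag)
docker run --privileged --rm tonistiigi/binfmt --install arm64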
... skipping 1642 lines ...
certificate.cert-manager.io "selfsigned-cert" deleted
# Create secret for AzureClusterIdentity
./hack/create-identity-secret.sh
make[2]: Entering directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make[2]: Nothing to be done for 'kubectl'.
make[2]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
Error from server (NotFound): secrets "cluster-identity-secret" not found
secret/cluster-identity-secret created
secret/cluster-identity-secret labeled
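The NotFound error two lines up is benign: create-identity-secret.sh checks whether the secret already exists before creating and labeling it. A hedged sketch of that check-then-create flow (the secret key name and label below are assumptions, not taken from this log):
# recreate the AzureClusterIdentity secret only if absent
kubectl get secret cluster-identity-secret > /dev/null 2>&1 || \
  kubectl create secret generic cluster-identity-secret --from-literal=clientSecret="${AZURE_CLIENT_SECRET}"
kubectl label secret cluster-identity-secret clusterctl.cluster.x-k8s.io/move=true --overwrite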
# Create customized cloud provider configs
./hack/create-custom-cloud-provider-config.sh
make[2]: Entering directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make[2]: Nothing to be done for 'kubectl'.
... skipping 143 lines ...
timeout --foreground 600 bash -c "while ! /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/kubectl-v1.25.6 --kubeconfig=./kubeconfig get nodes | grep control-plane; do sleep 1; done"
Unable to connect to the server: dial tcp 20.65.16.78:6443: i/o timeout
capz-7bsgqo-control-plane-78g6l NotReady <none> 1s v1.23.18-rc.0.7+1635c380b26a1d
run "/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/kubectl-v1.25.6 --kubeconfig=./kubeconfig ..." to work with the new target cluster
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
namespace/calico-system created
Error from server (NotFound): configmaps "kubeadm-config" not found
Error from server (NotFound): configmaps "kubeadm-config" not found
error: no objects passed to apply
Installing Calico CNI via helm
Cluster CIDR is IPv4
WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/kubeconfig
WARNING: Kubernetes configuration file is world-readable. This is insecure. Location: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/kubeconfig
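Helm prints these two warnings whenever the kubeconfig's file mode is looser than 0600; they are harmless in a throwaway CI cluster but trivial to silence (path taken from the warning itself):
# tighten kubeconfig permissions to quiet the helm warnings
chmod 600 /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/kubeconfig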
Release "calico" does not exist. Installing it now.
NAME: calico
... skipping 324 lines ...
Mar 23 21:34:46.511: INFO: PersistentVolumeClaim pvc-5jc6f found but phase is Pending instead of Bound.
Mar 23 21:34:48.543: INFO: PersistentVolumeClaim pvc-5jc6f found but phase is Pending instead of Bound.
Mar 23 21:34:50.577: INFO: PersistentVolumeClaim pvc-5jc6f found but phase is Pending instead of Bound.
Mar 23 21:34:52.610: INFO: PersistentVolumeClaim pvc-5jc6f found but phase is Pending instead of Bound.
Mar 23 21:34:54.643: INFO: PersistentVolumeClaim pvc-5jc6f found but phase is Pending instead of Bound.
Mar 23 21:34:56.678: INFO: PersistentVolumeClaim pvc-5jc6f found but phase is Pending instead of Bound.
Mar 23 21:34:58.680: INFO: Unexpected error:
<*errors.errorString | 0xc000792e10>: {
s: "PersistentVolumeClaims [pvc-5jc6f] not all in phase Bound within 5m0s",
}
Mar 23 21:34:58.681: FAIL: PersistentVolumeClaims [pvc-5jc6f] not all in phase Bound within 5m0s
Full Stack Trace
sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites.(*TestPersistentVolumeClaim).WaitForBound(_)
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites/testsuites.go:221 +0x19d
sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites.(*VolumeDetails).SetupDynamicPersistentVolumeClaim(0xc000853c70, {0x2896668?, 0xc0001f3860}, 0xc000b50b00, {0x7f9f783518d8, 0xc00081ce40}, 0x0?)
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites/specs.go:242 +0x6dc
... skipping 3 lines ...
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites/dynamically_provisioned_cmd_volume_tester.go:41 +0xed
sigs.k8s.io/azurefile-csi-driver/test/e2e.glob..func1.3()
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/dynamic_provisioning_test.go:149 +0x5f5
STEP: dump namespace information after failure 03/23/23 21:34:58.681
STEP: Destroying namespace "azurefile-3154" for this suite. 03/23/23 21:34:58.682
------------------------------
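Every Dynamic Provisioning failure below has the same shape: the test polls the PVC roughly every 2s and gives up when it is still Pending at the 5m0s bound timeout, so the suspect is the CSI provisioning path rather than the individual tests. A hedged first-pass triage against the workload cluster (claim and namespace names taken from the failure above; the controller label and container name are assumptions based on the driver's default manifests):
# why is the claim stuck in Pending?
kubectl describe pvc pvc-5jc6f -n azurefile-3154
# is the azurefile CSI controller up, and what does the provisioner say?
kubectl get pods -n kube-system -l app=csi-azurefile-controller
kubectl logs -n kube-system -l app=csi-azurefile-controller -c azurefile --tail=100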
• [FAILED] [301.497 seconds]
Dynamic Provisioning
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/dynamic_provisioning_test.go:43
[It] should create a volume on demand with mount options [kubernetes.io/azure-file] [file.csi.azure.com] [Windows]
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/dynamic_provisioning_test.go:106
Begin Captured GinkgoWriter Output >>
... skipping 154 lines ...
Mar 23 21:34:46.511: INFO: PersistentVolumeClaim pvc-5jc6f found but phase is Pending instead of Bound.
Mar 23 21:34:48.543: INFO: PersistentVolumeClaim pvc-5jc6f found but phase is Pending instead of Bound.
Mar 23 21:34:50.577: INFO: PersistentVolumeClaim pvc-5jc6f found but phase is Pending instead of Bound.
Mar 23 21:34:52.610: INFO: PersistentVolumeClaim pvc-5jc6f found but phase is Pending instead of Bound.
Mar 23 21:34:54.643: INFO: PersistentVolumeClaim pvc-5jc6f found but phase is Pending instead of Bound.
Mar 23 21:34:56.678: INFO: PersistentVolumeClaim pvc-5jc6f found but phase is Pending instead of Bound.
Mar 23 21:34:58.680: INFO: Unexpected error:
<*errors.errorString | 0xc000792e10>: {
s: "PersistentVolumeClaims [pvc-5jc6f] not all in phase Bound within 5m0s",
}
Mar 23 21:34:58.681: FAIL: PersistentVolumeClaims [pvc-5jc6f] not all in phase Bound within 5m0s
Full Stack Trace
sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites.(*TestPersistentVolumeClaim).WaitForBound(_)
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites/testsuites.go:221 +0x19d
sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites.(*VolumeDetails).SetupDynamicPersistentVolumeClaim(0xc000853c70, {0x2896668?, 0xc0001f3860}, 0xc000b50b00, {0x7f9f783518d8, 0xc00081ce40}, 0x0?)
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites/specs.go:242 +0x6dc
... skipping 12 lines ...
There were additional failures detected after the initial failure:
[PANICKED]
Test Panicked
In [DeferCleanup (Each)] at: /usr/local/go/src/runtime/panic.go:260
runtime error: invalid memory address or nil pointer dereference
Full Stack Trace
k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo.func1()
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:274 +0x5c
k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo(0xc0002ae1e0)
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:271 +0x139
... skipping 236 lines ...
Mar 23 21:39:49.381: INFO: PersistentVolumeClaim pvc-mqvvj found but phase is Pending instead of Bound.
Mar 23 21:39:51.415: INFO: PersistentVolumeClaim pvc-mqvvj found but phase is Pending instead of Bound.
Mar 23 21:39:53.449: INFO: PersistentVolumeClaim pvc-mqvvj found but phase is Pending instead of Bound.
Mar 23 21:39:55.481: INFO: PersistentVolumeClaim pvc-mqvvj found but phase is Pending instead of Bound.
Mar 23 21:39:57.517: INFO: PersistentVolumeClaim pvc-mqvvj found but phase is Pending instead of Bound.
Mar 23 21:39:59.551: INFO: PersistentVolumeClaim pvc-mqvvj found but phase is Pending instead of Bound.
Mar 23 21:40:01.552: INFO: Unexpected error:
<*errors.errorString | 0xc000793ad0>: {
s: "PersistentVolumeClaims [pvc-mqvvj] not all in phase Bound within 5m0s",
}
Mar 23 21:40:01.553: FAIL: PersistentVolumeClaims [pvc-mqvvj] not all in phase Bound within 5m0s
Full Stack Trace
sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites.(*TestPersistentVolumeClaim).WaitForBound(_)
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites/testsuites.go:221 +0x19d
sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites.(*VolumeDetails).SetupDynamicPersistentVolumeClaim(0xc000d53c90, {0x2896668?, 0xc000b924e0}, 0xc000bea000, {0x7f9f783518d8, 0xc00081ce40}, 0x0?)
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites/specs.go:242 +0x6dc
... skipping 3 lines ...
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites/dynamically_provisioned_collocated_pod_tester.go:40 +0x153
sigs.k8s.io/azurefile-csi-driver/test/e2e.glob..func1.6()
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/dynamic_provisioning_test.go:273 +0x5ed
STEP: dump namespace information after failure 03/23/23 21:40:01.554
STEP: Destroying namespace "azurefile-342" for this suite. 03/23/23 21:40:01.555
------------------------------
• [FAILED] [301.683 seconds]
Dynamic Provisioning
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/dynamic_provisioning_test.go:43
[It] should create multiple PV objects, bind to PVCs and attach all to different pods on the same node [kubernetes.io/azure-file] [file.csi.azure.com] [Windows]
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/dynamic_provisioning_test.go:224
Begin Captured GinkgoWriter Output >>
... skipping 154 lines ...
Mar 23 21:39:49.381: INFO: PersistentVolumeClaim pvc-mqvvj found but phase is Pending instead of Bound.
Mar 23 21:39:51.415: INFO: PersistentVolumeClaim pvc-mqvvj found but phase is Pending instead of Bound.
Mar 23 21:39:53.449: INFO: PersistentVolumeClaim pvc-mqvvj found but phase is Pending instead of Bound.
Mar 23 21:39:55.481: INFO: PersistentVolumeClaim pvc-mqvvj found but phase is Pending instead of Bound.
Mar 23 21:39:57.517: INFO: PersistentVolumeClaim pvc-mqvvj found but phase is Pending instead of Bound.
Mar 23 21:39:59.551: INFO: PersistentVolumeClaim pvc-mqvvj found but phase is Pending instead of Bound.
Mar 23 21:40:01.552: INFO: Unexpected error:
<*errors.errorString | 0xc000793ad0>: {
s: "PersistentVolumeClaims [pvc-mqvvj] not all in phase Bound within 5m0s",
}
Mar 23 21:40:01.553: FAIL: PersistentVolumeClaims [pvc-mqvvj] not all in phase Bound within 5m0s
Full Stack Trace
sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites.(*TestPersistentVolumeClaim).WaitForBound(_)
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites/testsuites.go:221 +0x19d
sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites.(*VolumeDetails).SetupDynamicPersistentVolumeClaim(0xc000d53c90, {0x2896668?, 0xc000b924e0}, 0xc000bea000, {0x7f9f783518d8, 0xc00081ce40}, 0x0?)
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites/specs.go:242 +0x6dc
... skipping 12 lines ...
There were additional failures detected after the initial failure:
[PANICKED]
Test Panicked
In [DeferCleanup (Each)] at: /usr/local/go/src/runtime/panic.go:260
runtime error: invalid memory address or nil pointer dereference
Full Stack Trace
k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo.func1()
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:274 +0x5c
k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo(0xc0002ae1e0)
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:271 +0x139
... skipping 166 lines ...
Mar 23 21:44:51.059: INFO: PersistentVolumeClaim pvc-jkm4r found but phase is Pending instead of Bound.
Mar 23 21:44:53.093: INFO: PersistentVolumeClaim pvc-jkm4r found but phase is Pending instead of Bound.
Mar 23 21:44:55.126: INFO: PersistentVolumeClaim pvc-jkm4r found but phase is Pending instead of Bound.
Mar 23 21:44:57.161: INFO: PersistentVolumeClaim pvc-jkm4r found but phase is Pending instead of Bound.
Mar 23 21:44:59.194: INFO: PersistentVolumeClaim pvc-jkm4r found but phase is Pending instead of Bound.
Mar 23 21:45:01.226: INFO: PersistentVolumeClaim pvc-jkm4r found but phase is Pending instead of Bound.
Mar 23 21:45:03.227: INFO: Unexpected error:
<*errors.errorString | 0xc0003f5100>: {
s: "PersistentVolumeClaims [pvc-jkm4r] not all in phase Bound within 5m0s",
}
Mar 23 21:45:03.228: FAIL: PersistentVolumeClaims [pvc-jkm4r] not all in phase Bound within 5m0s
Full Stack Trace
sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites.(*TestPersistentVolumeClaim).WaitForBound(_)
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites/testsuites.go:221 +0x19d
sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites.(*VolumeDetails).SetupDynamicPersistentVolumeClaim(0xc00083dbe0, {0x2896668?, 0xc000b92680}, 0xc000ce8580, {0x7f9f783518d8, 0xc00081ce40}, 0x0?)
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites/specs.go:242 +0x6dc
... skipping 3 lines ...
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites/dynamically_provisioned_read_only_volume_tester.go:48 +0x13c
sigs.k8s.io/azurefile-csi-driver/test/e2e.glob..func1.7()
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/dynamic_provisioning_test.go:308 +0x365
STEP: dump namespace information after failure 03/23/23 21:45:03.228
STEP: Destroying namespace "azurefile-6538" for this suite. 03/23/23 21:45:03.229
------------------------------
• [FAILED] [301.670 seconds]
Dynamic Provisioning
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/dynamic_provisioning_test.go:43
[It] should create a volume on demand and mount it as readOnly in a pod [kubernetes.io/azure-file] [file.csi.azure.com] [Windows]
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/dynamic_provisioning_test.go:277
Begin Captured GinkgoWriter Output >>
... skipping 154 lines ...
Mar 23 21:44:51.059: INFO: PersistentVolumeClaim pvc-jkm4r found but phase is Pending instead of Bound.
Mar 23 21:44:53.093: INFO: PersistentVolumeClaim pvc-jkm4r found but phase is Pending instead of Bound.
Mar 23 21:44:55.126: INFO: PersistentVolumeClaim pvc-jkm4r found but phase is Pending instead of Bound.
Mar 23 21:44:57.161: INFO: PersistentVolumeClaim pvc-jkm4r found but phase is Pending instead of Bound.
Mar 23 21:44:59.194: INFO: PersistentVolumeClaim pvc-jkm4r found but phase is Pending instead of Bound.
Mar 23 21:45:01.226: INFO: PersistentVolumeClaim pvc-jkm4r found but phase is Pending instead of Bound.
Mar 23 21:45:03.227: INFO: Unexpected error:
<*errors.errorString | 0xc0003f5100>: {
s: "PersistentVolumeClaims [pvc-jkm4r] not all in phase Bound within 5m0s",
}
Mar 23 21:45:03.228: FAIL: PersistentVolumeClaims [pvc-jkm4r] not all in phase Bound within 5m0s
Full Stack Trace
sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites.(*TestPersistentVolumeClaim).WaitForBound(_)
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites/testsuites.go:221 +0x19d
sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites.(*VolumeDetails).SetupDynamicPersistentVolumeClaim(0xc00083dbe0, {0x2896668?, 0xc000b92680}, 0xc000ce8580, {0x7f9f783518d8, 0xc00081ce40}, 0x0?)
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites/specs.go:242 +0x6dc
... skipping 12 lines ...
There were additional failures detected after the initial failure:
[PANICKED]
Test Panicked
In [DeferCleanup (Each)] at: /usr/local/go/src/runtime/panic.go:260
runtime error: invalid memory address or nil pointer dereference
Full Stack Trace
k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo.func1()
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:274 +0x5c
k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo(0xc0002ae1e0)
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:271 +0x139
... skipping 166 lines ...
Mar 23 21:49:52.564: INFO: PersistentVolumeClaim pvc-hmjkr found but phase is Pending instead of Bound.
Mar 23 21:49:54.597: INFO: PersistentVolumeClaim pvc-hmjkr found but phase is Pending instead of Bound.
Mar 23 21:49:56.629: INFO: PersistentVolumeClaim pvc-hmjkr found but phase is Pending instead of Bound.
Mar 23 21:49:58.662: INFO: PersistentVolumeClaim pvc-hmjkr found but phase is Pending instead of Bound.
Mar 23 21:50:00.696: INFO: PersistentVolumeClaim pvc-hmjkr found but phase is Pending instead of Bound.
Mar 23 21:50:02.729: INFO: PersistentVolumeClaim pvc-hmjkr found but phase is Pending instead of Bound.
Mar 23 21:50:04.729: INFO: Unexpected error:
<*errors.errorString | 0xc0004e6de0>: {
s: "PersistentVolumeClaims [pvc-hmjkr] not all in phase Bound within 5m0s",
}
Mar 23 21:50:04.730: FAIL: PersistentVolumeClaims [pvc-hmjkr] not all in phase Bound within 5m0s
Full Stack Trace
sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites.(*TestPersistentVolumeClaim).WaitForBound(_)
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites/testsuites.go:221 +0x19d
sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites.(*PodDetails).SetupDeployment(0xc000a45ea8, {0x2896668?, 0xc000c391e0}, 0xc000acf340, {0x7f9f783518d8, 0xc00081ce40}, 0x7f9fa19b8f18?)
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites/specs.go:185 +0x495
sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites.(*DynamicallyProvisionedDeletePodTest).Run(0xc000a45e98, {0x2896668?, 0xc000c391e0?}, 0x10?)
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites/dynamically_provisioned_delete_pod_tester.go:45 +0x55
sigs.k8s.io/azurefile-csi-driver/test/e2e.glob..func1.8()
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/dynamic_provisioning_test.go:345 +0x434
STEP: dump namespace information after failure 03/23/23 21:50:04.73
STEP: Destroying namespace "azurefile-6841" for this suite. 03/23/23 21:50:04.731
------------------------------
• [FAILED] [301.500 seconds]
Dynamic Provisioning
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/dynamic_provisioning_test.go:43
[It] should create a deployment object, write and read to it, delete the pod and write and read to it again [kubernetes.io/azure-file] [file.csi.azure.com] [Windows]
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/dynamic_provisioning_test.go:311
Begin Captured GinkgoWriter Output >>
... skipping 154 lines ...
Mar 23 21:49:52.564: INFO: PersistentVolumeClaim pvc-hmjkr found but phase is Pending instead of Bound.
Mar 23 21:49:54.597: INFO: PersistentVolumeClaim pvc-hmjkr found but phase is Pending instead of Bound.
Mar 23 21:49:56.629: INFO: PersistentVolumeClaim pvc-hmjkr found but phase is Pending instead of Bound.
Mar 23 21:49:58.662: INFO: PersistentVolumeClaim pvc-hmjkr found but phase is Pending instead of Bound.
Mar 23 21:50:00.696: INFO: PersistentVolumeClaim pvc-hmjkr found but phase is Pending instead of Bound.
Mar 23 21:50:02.729: INFO: PersistentVolumeClaim pvc-hmjkr found but phase is Pending instead of Bound.
Mar 23 21:50:04.729: INFO: Unexpected error:
<*errors.errorString | 0xc0004e6de0>: {
s: "PersistentVolumeClaims [pvc-hmjkr] not all in phase Bound within 5m0s",
}
Mar 23 21:50:04.730: FAIL: PersistentVolumeClaims [pvc-hmjkr] not all in phase Bound within 5m0s
Full Stack Trace
sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites.(*TestPersistentVolumeClaim).WaitForBound(_)
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites/testsuites.go:221 +0x19d
sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites.(*PodDetails).SetupDeployment(0xc000a45ea8, {0x2896668?, 0xc000c391e0}, 0xc000acf340, {0x7f9f783518d8, 0xc00081ce40}, 0x7f9fa19b8f18?)
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites/specs.go:185 +0x495
... skipping 10 lines ...
There were additional failures detected after the initial failure:
[PANICKED]
Test Panicked
In [DeferCleanup (Each)] at: /usr/local/go/src/runtime/panic.go:260
runtime error: invalid memory address or nil pointer dereference
Full Stack Trace
k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo.func1()
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:274 +0x5c
k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo(0xc0002ae1e0)
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:271 +0x139
... skipping 166 lines ...
Mar 23 21:54:54.091: INFO: PersistentVolumeClaim pvc-6nt2z found but phase is Pending instead of Bound.
Mar 23 21:54:56.124: INFO: PersistentVolumeClaim pvc-6nt2z found but phase is Pending instead of Bound.
Mar 23 21:54:58.158: INFO: PersistentVolumeClaim pvc-6nt2z found but phase is Pending instead of Bound.
Mar 23 21:55:00.191: INFO: PersistentVolumeClaim pvc-6nt2z found but phase is Pending instead of Bound.
Mar 23 21:55:02.223: INFO: PersistentVolumeClaim pvc-6nt2z found but phase is Pending instead of Bound.
Mar 23 21:55:04.257: INFO: PersistentVolumeClaim pvc-6nt2z found but phase is Pending instead of Bound.
Mar 23 21:55:06.257: INFO: Unexpected error:
<*errors.errorString | 0xc0008867a0>: {
s: "PersistentVolumeClaims [pvc-6nt2z] not all in phase Bound within 5m0s",
}
Mar 23 21:55:06.257: FAIL: PersistentVolumeClaims [pvc-6nt2z] not all in phase Bound within 5m0s
Full Stack Trace
sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites.(*TestPersistentVolumeClaim).WaitForBound(_)
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites/testsuites.go:221 +0x19d
sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites.(*VolumeDetails).SetupDynamicPersistentVolumeClaim(0xc0009dfd90, {0x2896668?, 0xc000c39380}, 0xc000874580, {0x7f9f783518d8, 0xc00081ce40}, 0xc000312c00?)
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites/specs.go:242 +0x6dc
sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites.(*DynamicallyProvisionedReclaimPolicyTest).Run(0xc0009dfef8, {0x2896668, 0xc000c39380}, 0x7?)
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites/dynamically_provisioned_reclaim_policy_tester.go:38 +0xd9
sigs.k8s.io/azurefile-csi-driver/test/e2e.glob..func1.9()
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/dynamic_provisioning_test.go:369 +0x285
STEP: dump namespace information after failure 03/23/23 21:55:06.258
STEP: Destroying namespace "azurefile-5280" for this suite. 03/23/23 21:55:06.258
------------------------------
• [FAILED] [301.527 seconds]
Dynamic Provisioning
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/dynamic_provisioning_test.go:43
[It] should delete PV with reclaimPolicy "Delete" [kubernetes.io/azure-file] [file.csi.azure.com] [Windows]
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/dynamic_provisioning_test.go:348
Begin Captured GinkgoWriter Output >>
... skipping 154 lines ...
Mar 23 21:54:54.091: INFO: PersistentVolumeClaim pvc-6nt2z found but phase is Pending instead of Bound.
Mar 23 21:54:56.124: INFO: PersistentVolumeClaim pvc-6nt2z found but phase is Pending instead of Bound.
Mar 23 21:54:58.158: INFO: PersistentVolumeClaim pvc-6nt2z found but phase is Pending instead of Bound.
Mar 23 21:55:00.191: INFO: PersistentVolumeClaim pvc-6nt2z found but phase is Pending instead of Bound.
Mar 23 21:55:02.223: INFO: PersistentVolumeClaim pvc-6nt2z found but phase is Pending instead of Bound.
Mar 23 21:55:04.257: INFO: PersistentVolumeClaim pvc-6nt2z found but phase is Pending instead of Bound.
Mar 23 21:55:06.257: INFO: Unexpected error:
<*errors.errorString | 0xc0008867a0>: {
s: "PersistentVolumeClaims [pvc-6nt2z] not all in phase Bound within 5m0s",
}
Mar 23 21:55:06.257: FAIL: PersistentVolumeClaims [pvc-6nt2z] not all in phase Bound within 5m0s
Full Stack Trace
sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites.(*TestPersistentVolumeClaim).WaitForBound(_)
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites/testsuites.go:221 +0x19d
sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites.(*VolumeDetails).SetupDynamicPersistentVolumeClaim(0xc0009dfd90, {0x2896668?, 0xc000c39380}, 0xc000874580, {0x7f9f783518d8, 0xc00081ce40}, 0xc000312c00?)
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites/specs.go:242 +0x6dc
... skipping 10 lines ...
There were additional failures detected after the initial failure:
[PANICKED]
Test Panicked
In [DeferCleanup (Each)] at: /usr/local/go/src/runtime/panic.go:260
runtime error: invalid memory address or nil pointer dereference
Full Stack Trace
k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo.func1()
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:274 +0x5c
k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo(0xc0002ae1e0)
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:271 +0x139
... skipping 201 lines ...
Mar 23 21:59:56.045: INFO: PersistentVolumeClaim pvc-qgk7x found but phase is Pending instead of Bound.
Mar 23 21:59:58.082: INFO: PersistentVolumeClaim pvc-qgk7x found but phase is Pending instead of Bound.
Mar 23 22:00:00.116: INFO: PersistentVolumeClaim pvc-qgk7x found but phase is Pending instead of Bound.
Mar 23 22:00:02.149: INFO: PersistentVolumeClaim pvc-qgk7x found but phase is Pending instead of Bound.
Mar 23 22:00:04.182: INFO: PersistentVolumeClaim pvc-qgk7x found but phase is Pending instead of Bound.
Mar 23 22:00:06.215: INFO: PersistentVolumeClaim pvc-qgk7x found but phase is Pending instead of Bound.
Mar 23 22:00:08.216: INFO: Unexpected error:
<*errors.errorString | 0xc0007938e0>: {
s: "PersistentVolumeClaims [pvc-qgk7x] not all in phase Bound within 5m0s",
}
Mar 23 22:00:08.216: FAIL: PersistentVolumeClaims [pvc-qgk7x] not all in phase Bound within 5m0s
Full Stack Trace
sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites.(*TestPersistentVolumeClaim).WaitForBound(_)
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites/testsuites.go:221 +0x19d
sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites.(*VolumeDetails).SetupDynamicPersistentVolumeClaim(0xc000b59788, {0x2896668?, 0xc000b92b60}, 0xc000610160, {0x7f9f783518d8, 0xc00081ce40}, 0x0?)
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites/specs.go:242 +0x6dc
... skipping 3 lines ...
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites/dynamically_provisioned_resize_volume_tester.go:64 +0x10c
sigs.k8s.io/azurefile-csi-driver/test/e2e.glob..func1.11()
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/dynamic_provisioning_test.go:426 +0x2f5
STEP: dump namespace information after failure 03/23/23 22:00:08.217
STEP: Destroying namespace "azurefile-572" for this suite. 03/23/23 22:00:08.217
------------------------------
• [FAILED] [301.464 seconds]
Dynamic Provisioning
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/dynamic_provisioning_test.go:43
[It] should create a volume on demand and resize it [kubernetes.io/azure-file] [file.csi.azure.com] [Windows]
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/dynamic_provisioning_test.go:397
Begin Captured GinkgoWriter Output >>
... skipping 154 lines ...
Mar 23 21:59:56.045: INFO: PersistentVolumeClaim pvc-qgk7x found but phase is Pending instead of Bound.
Mar 23 21:59:58.082: INFO: PersistentVolumeClaim pvc-qgk7x found but phase is Pending instead of Bound.
Mar 23 22:00:00.116: INFO: PersistentVolumeClaim pvc-qgk7x found but phase is Pending instead of Bound.
Mar 23 22:00:02.149: INFO: PersistentVolumeClaim pvc-qgk7x found but phase is Pending instead of Bound.
Mar 23 22:00:04.182: INFO: PersistentVolumeClaim pvc-qgk7x found but phase is Pending instead of Bound.
Mar 23 22:00:06.215: INFO: PersistentVolumeClaim pvc-qgk7x found but phase is Pending instead of Bound.
Mar 23 22:00:08.216: INFO: Unexpected error:
<*errors.errorString | 0xc0007938e0>: {
s: "PersistentVolumeClaims [pvc-qgk7x] not all in phase Bound within 5m0s",
}
Mar 23 22:00:08.216: FAIL: PersistentVolumeClaims [pvc-qgk7x] not all in phase Bound within 5m0s
Full Stack Trace
sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites.(*TestPersistentVolumeClaim).WaitForBound(_)
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites/testsuites.go:221 +0x19d
sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites.(*VolumeDetails).SetupDynamicPersistentVolumeClaim(0xc000b59788, {0x2896668?, 0xc000b92b60}, 0xc000610160, {0x7f9f783518d8, 0xc00081ce40}, 0x0?)
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites/specs.go:242 +0x6dc
... skipping 12 lines ...
There were additional failures detected after the initial failure:
[PANICKED]
Test Panicked
In [DeferCleanup (Each)] at: /usr/local/go/src/runtime/panic.go:260
runtime error: invalid memory address or nil pointer dereference
Full Stack Trace
k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo.func1()
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:274 +0x5c
k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo(0xc0002ae1e0)
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:271 +0x139
... skipping 873 lines ...
<< End Captured GinkgoWriter Output
test case is only available for CSI drivers
In [It] at: /home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/suite_test.go:289
There were additional failures detected after the initial failure:
[FAILED]
create volume "" error: rpc error: code = InvalidArgument desc = Volume ID missing in request
In [AfterEach] at: /home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/pre_provisioning_test.go:74
------------------------------
Pre-Provisioned
should use a pre-provisioned volume and mount it by multiple pods [file.csi.azure.com] [Windows]
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/pre_provisioning_test.go:117
STEP: Creating a kubernetes client 03/23/23 22:00:20.118
... skipping 26 lines ...
<< End Captured GinkgoWriter Output
test case is only available for CSI drivers
In [It] at: /home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/suite_test.go:289
There were additional failures detected after the initial failure:
[FAILED]
create volume "" error: rpc error: code = InvalidArgument desc = Volume ID missing in request
In [AfterEach] at: /home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/pre_provisioning_test.go:74
------------------------------
Pre-Provisioned
should use a pre-provisioned volume and retain PV with reclaimPolicy "Retain" [file.csi.azure.com] [Windows]
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/pre_provisioning_test.go:158
STEP: Creating a kubernetes client 03/23/23 22:00:20.787
... skipping 26 lines ...
<< End Captured GinkgoWriter Output
test case is only available for CSI drivers
In [It] at: /home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/suite_test.go:289
There were additional failures detected after the initial failure:
[FAILED]
create volume "" error: rpc error: code = InvalidArgument desc = Volume ID missing in request
In [AfterEach] at: /home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/pre_provisioning_test.go:74
------------------------------
Pre-Provisioned
should use existing credentials in k8s cluster [file.csi.azure.com] [Windows]
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/pre_provisioning_test.go:186
STEP: Creating a kubernetes client 03/23/23 22:00:21.245
... skipping 26 lines ...
<< End Captured GinkgoWriter Output
test case is only available for CSI drivers
In [It] at: /home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/suite_test.go:289
There were additional failures detected after the initial failure:
[FAILED]
create volume "" error: rpc error: code = InvalidArgument desc = Volume ID missing in request
In [AfterEach] at: /home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/pre_provisioning_test.go:74
------------------------------
Pre-Provisioned
should use provided credentials [file.csi.azure.com] [Windows]
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/pre_provisioning_test.go:230
STEP: Creating a kubernetes client 03/23/23 22:00:21.712
... skipping 28 lines ...
<< End Captured GinkgoWriter Output
test case is only available for CSI drivers
In [It] at: /home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/suite_test.go:289
There were additional failures detected after the initial failure:
[FAILED]
create volume "" error: rpc error: code = InvalidArgument desc = Volume ID missing in request
In [AfterEach] at: /home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/pre_provisioning_test.go:74
------------------------------
[AfterSuite]
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/suite_test.go:148
2023/03/23 22:00:24 ===================controller-manager log=======
print out all nodes status ...
... skipping 1747 lines ...
I0323 21:24:38.797003 1 tlsconfig.go:200] "Loaded serving cert" certName="Generated self signed cert" certDetail="\"localhost@1679606678\" [serving] validServingFor=[127.0.0.1,127.0.0.1,localhost] issuer=\"localhost-ca@1679606678\" (2023-03-23 20:24:37 +0000 UTC to 2024-03-22 20:24:37 +0000 UTC (now=2023-03-23 21:24:38.796986468 +0000 UTC))"
I0323 21:24:38.797108 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1679606678\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1679606678\" (2023-03-23 20:24:38 +0000 UTC to 2024-03-22 20:24:38 +0000 UTC (now=2023-03-23 21:24:38.797094077 +0000 UTC))"
I0323 21:24:38.797128 1 secure_serving.go:200] Serving securely on 127.0.0.1:10257
I0323 21:24:38.797278 1 leaderelection.go:248] attempting to acquire leader lease kube-system/kube-controller-manager...
I0323 21:24:38.797596 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0323 21:24:38.796762 1 dynamic_cafile_content.go:156] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
E0323 21:24:43.798062 1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: Get "https://10.0.0.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=5s": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
I0323 21:24:43.798110 1 leaderelection.go:253] failed to acquire lease kube-system/kube-controller-manager
E0323 21:24:47.962443 1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: leases.coordination.k8s.io "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-system"
I0323 21:24:47.962468 1 leaderelection.go:253] failed to acquire lease kube-system/kube-controller-manager
I0323 21:24:50.170683 1 leaderelection.go:258] successfully acquired lease kube-system/kube-controller-manager
I0323 21:24:50.171234 1 event.go:294] "Event occurred" object="kube-system/kube-controller-manager" kind="Lease" apiVersion="coordination.k8s.io/v1" type="Normal" reason="LeaderElection" message="capz-7bsgqo-control-plane-78g6l_86ce7aac-6e10-4445-bd51-4f6422e4dedf became leader"
I0323 21:24:51.373357 1 request.go:617] Waited for 95.996408ms due to client-side throttling, not priority and fairness, request: GET:https://10.0.0.4:6443/apis/certificates.k8s.io/v1
I0323 21:24:51.375914 1 controllermanager.go:576] Starting "persistentvolume-binder"
I0323 21:24:51.376233 1 shared_informer.go:240] Waiting for caches to sync for tokens
I0323 21:24:51.376617 1 reflector.go:219] Starting reflector *v1.ServiceAccount (14h26m29.995671722s) from k8s.io/client-go/informers/factory.go:134
... skipping 14 lines ...
I0323 21:24:51.561697 1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/rbd"
I0323 21:24:51.561732 1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/aws-ebs"
I0323 21:24:51.561746 1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/azure-file"
I0323 21:24:51.561778 1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/flocker"
I0323 21:24:51.561796 1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
I0323 21:24:51.561856 1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/storageos"
I0323 21:24:51.561966 1 csi_plugin.go:262] Cast from VolumeHost to KubeletVolumeHost failed. Skipping CSINode initialization, not running on kubelet
I0323 21:24:51.561980 1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/csi"
I0323 21:24:51.562075 1 controllermanager.go:605] Started "persistentvolume-binder"
I0323 21:24:51.562088 1 controllermanager.go:576] Starting "podgc"
I0323 21:24:51.562187 1 pv_controller_base.go:310] Starting persistent volume controller
I0323 21:24:51.562201 1 shared_informer.go:240] Waiting for caches to sync for persistent volume
I0323 21:24:51.611627 1 controllermanager.go:605] Started "podgc"
... skipping 48 lines ...
I0323 21:24:52.723351 1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/cinder"
I0323 21:24:52.723608 1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/azure-disk"
I0323 21:24:52.723622 1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/vsphere-volume"
I0323 21:24:52.723671 1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/storageos"
I0323 21:24:52.723811 1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/fc"
I0323 21:24:52.723828 1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
I0323 21:24:52.723857 1 csi_plugin.go:262] Cast from VolumeHost to KubeletVolumeHost failed. Skipping CSINode initialization, not running on kubelet
I0323 21:24:52.723882 1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/csi"
I0323 21:24:52.724012 1 controllermanager.go:605] Started "attachdetach"
I0323 21:24:52.724026 1 controllermanager.go:576] Starting "replicaset"
I0323 21:24:52.724124 1 attach_detach_controller.go:328] Starting attach detach controller
I0323 21:24:52.724210 1 shared_informer.go:240] Waiting for caches to sync for attach detach
I0323 21:24:52.906767 1 controllermanager.go:605] Started "replicaset"
... skipping 251 lines ...
I0323 21:24:57.368606 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-serving
I0323 21:24:57.366995 1 daemon_controller.go:226] Adding daemon set kube-proxy
I0323 21:24:57.367022 1 attach_detach_controller.go:673] processVolumesInUse for node "capz-7bsgqo-control-plane-78g6l"
I0323 21:24:57.368509 1 disruption.go:415] addPod called on pod "etcd-capz-7bsgqo-control-plane-78g6l"
I0323 21:24:57.369118 1 disruption.go:490] No PodDisruptionBudgets found for pod etcd-capz-7bsgqo-control-plane-78g6l, PodDisruptionBudget controller will avoid syncing.
I0323 21:24:57.369234 1 disruption.go:418] No matching pdb for pod "etcd-capz-7bsgqo-control-plane-78g6l"
W0323 21:24:57.369073 1 actual_state_of_world.go:539] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="capz-7bsgqo-control-plane-78g6l" does not exist
I0323 21:24:57.369203 1 shared_informer.go:270] caches populated
I0323 21:24:57.369769 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-legacy-unknown
I0323 21:24:57.369209 1 shared_informer.go:270] caches populated
I0323 21:24:57.369962 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-client
I0323 21:24:57.369215 1 shared_informer.go:270] caches populated
I0323 21:24:57.370147 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kube-apiserver-client
... skipping 271 lines ...
I0323 21:24:57.876222 1 request.go:617] Waited for 467.180773ms due to client-side throttling, not priority and fairness, request: GET:https://10.0.0.4:6443/apis/scheduling.k8s.io/v1/priorityclasses?limit=500&resourceVersion=0
I0323 21:24:57.876397 1 controller_utils.go:206] Controller kube-system/kube-proxy either never recorded expectations, or the ttl expired.
I0323 21:24:57.876460 1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:1, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0ff4eca743dab35, ext:20015448627, loc:(*time.Location)(0x72c0b80)}}
I0323 21:24:57.876480 1 daemon_controller.go:967] Nodes needing daemon pods for daemon set kube-proxy: [capz-7bsgqo-control-plane-78g6l], creating 1
I0323 21:24:57.876732 1 publisher.go:186] Finished syncing namespace "kube-node-lease" (100.519289ms)
I0323 21:24:57.883380 1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/coredns" duration="482.68131ms"
I0323 21:24:57.883408 1 deployment_controller.go:490] "Error syncing deployment" deployment="kube-system/coredns" err="Operation cannot be fulfilled on deployments.apps \"coredns\": the object has been modified; please apply your changes to the latest version and try again"
I0323 21:24:57.883433 1 deployment_controller.go:576] "Started syncing deployment" deployment="kube-system/coredns" startTime="2023-03-23 21:24:57.883421708 +0000 UTC m=+20.022413578"
I0323 21:24:57.885137 1 deployment_util.go:775] Deployment "coredns" timed out (false) [last progress check: 2023-03-23 21:24:57 +0000 UTC - now: 2023-03-23 21:24:57.885130346 +0000 UTC m=+20.024122316]
I0323 21:24:57.885532 1 serviceaccounts_controller.go:188] Finished syncing namespace "kube-node-lease" (109.552185ms)
I0323 21:24:57.909452 1 graph_builder.go:279] garbage controller monitor not yet synced: crd.projectcalico.org/v1, Resource=ipamhandles
I0323 21:24:57.923410 1 request.go:617] Waited for 514.223698ms due to client-side throttling, not priority and fairness, request: GET:https://10.0.0.4:6443/apis/flowcontrol.apiserver.k8s.io/v1beta2/flowschemas?limit=500&resourceVersion=0
I0323 21:24:57.951987 1 graph_builder.go:279] garbage controller monitor not yet synced: crd.projectcalico.org/v1, Resource=ipamhandles
... skipping 156 lines ...
I0323 21:25:00.140096 1 disruption.go:418] No matching pdb for pod "tigera-operator-6bbf97c9cf-ff4d5"
I0323 21:25:00.140156 1 taint_manager.go:401] "Noticed pod update" pod="calico-system/tigera-operator-6bbf97c9cf-ff4d5"
I0323 21:25:00.140235 1 pvc_protection_controller.go:380] "Enqueuing PVCs for Pod" pod="calico-system/tigera-operator-6bbf97c9cf-ff4d5" podUID=1032e1fe-7039-4094-afd9-2bd74e17d728
I0323 21:25:00.140366 1 controller_utils.go:581] Controller tigera-operator-6bbf97c9cf created pod tigera-operator-6bbf97c9cf-ff4d5
I0323 21:25:00.140460 1 replica_set_utils.go:59] Updating status for : calico-system/tigera-operator-6bbf97c9cf, replicas 0->0 (need 1), fullyLabeledReplicas 0->0, readyReplicas 0->0, availableReplicas 0->0, sequence No: 0->1
I0323 21:25:00.140589 1 deployment_controller.go:578] "Finished syncing deployment" deployment="calico-system/tigera-operator" duration="32.998397ms"
I0323 21:25:00.140613 1 deployment_controller.go:490] "Error syncing deployment" deployment="calico-system/tigera-operator" err="Operation cannot be fulfilled on deployments.apps \"tigera-operator\": the object has been modified; please apply your changes to the latest version and try again"
I0323 21:25:00.140684 1 deployment_controller.go:576] "Started syncing deployment" deployment="calico-system/tigera-operator" startTime="2023-03-23 21:25:00.140670756 +0000 UTC m=+22.279662626"
I0323 21:25:00.140950 1 event.go:294] "Event occurred" object="calico-system/tigera-operator-6bbf97c9cf" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: tigera-operator-6bbf97c9cf-ff4d5"
I0323 21:25:00.141047 1 deployment_util.go:775] Deployment "tigera-operator" timed out (false) [last progress check: 2023-03-23 21:25:00 +0000 UTC - now: 2023-03-23 21:25:00.141042151 +0000 UTC m=+22.280034021]
I0323 21:25:00.151430 1 replica_set.go:443] Pod tigera-operator-6bbf97c9cf-ff4d5 updated, objectMeta {Name:tigera-operator-6bbf97c9cf-ff4d5 GenerateName:tigera-operator-6bbf97c9cf- Namespace:calico-system SelfLink: UID:1032e1fe-7039-4094-afd9-2bd74e17d728 ResourceVersion:504 Generation:0 CreationTimestamp:2023-03-23 21:25:00 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[k8s-app:tigera-operator name:tigera-operator pod-template-hash:6bbf97c9cf] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:tigera-operator-6bbf97c9cf UID:f2479fca-3ac5-4fa8-a4ab-e90b34d6e38d Controller:0xc001688297 BlockOwnerDeletion:0xc001688298}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2023-03-23 21:25:00 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f2479fca-3ac5-4fa8-a4ab-e90b34d6e38d\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"tigera-operator\"}":{".":{},"f:command":{},"f:env":{".":{},"k:{\"name\":\"OPERATOR_NAME\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"POD_NAME\"}":{".":{},"f:name":{},"f:valueFrom":{".":{},"f:fieldRef":{}}},"k:{\"name\":\"TIGERA_OPERATOR_INIT_IMAGE_VERSION\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"WATCH_NAMESPACE\"}":{".":{},"f:name":{}}},"f:envFrom":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/var/lib/calico\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:hostNetwork":{},"f:nodeSelector":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{},"f:volumes":{".":{},"k:{\"name\":\"var-lib-calico\"}":{".":{},"f:hostPath":{".":{},"f:path":{},"f:type":{}},"f:name":{}}}}} Subresource:}]} -> {Name:tigera-operator-6bbf97c9cf-ff4d5 GenerateName:tigera-operator-6bbf97c9cf- Namespace:calico-system SelfLink: UID:1032e1fe-7039-4094-afd9-2bd74e17d728 ResourceVersion:506 Generation:0 CreationTimestamp:2023-03-23 21:25:00 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[k8s-app:tigera-operator name:tigera-operator pod-template-hash:6bbf97c9cf] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:tigera-operator-6bbf97c9cf UID:f2479fca-3ac5-4fa8-a4ab-e90b34d6e38d Controller:0xc001689117 BlockOwnerDeletion:0xc001689118}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2023-03-23 21:25:00 +0000 UTC FieldsType:FieldsV1 
FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f2479fca-3ac5-4fa8-a4ab-e90b34d6e38d\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"tigera-operator\"}":{".":{},"f:command":{},"f:env":{".":{},"k:{\"name\":\"OPERATOR_NAME\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"POD_NAME\"}":{".":{},"f:name":{},"f:valueFrom":{".":{},"f:fieldRef":{}}},"k:{\"name\":\"TIGERA_OPERATOR_INIT_IMAGE_VERSION\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"WATCH_NAMESPACE\"}":{".":{},"f:name":{}}},"f:envFrom":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/var/lib/calico\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:hostNetwork":{},"f:nodeSelector":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{},"f:volumes":{".":{},"k:{\"name\":\"var-lib-calico\"}":{".":{},"f:hostPath":{".":{},"f:path":{},"f:type":{}},"f:name":{}}}}} Subresource:}]}.
I0323 21:25:00.151523 1 disruption.go:427] updatePod called on pod "tigera-operator-6bbf97c9cf-ff4d5"
I0323 21:25:00.151601 1 disruption.go:490] No PodDisruptionBudgets found for pod tigera-operator-6bbf97c9cf-ff4d5, PodDisruptionBudget controller will avoid syncing.
... skipping 118 lines ...
I0323 21:25:02.487771 1 replica_set.go:443] Pod cloud-controller-manager-f4566f566-l4k99 updated, objectMeta {Name:cloud-controller-manager-f4566f566-l4k99 GenerateName:cloud-controller-manager-f4566f566- Namespace:kube-system SelfLink: UID:67b9d58c-1983-4f00-91ac-0dd47baf8ba3 ResourceVersion:549 Generation:0 CreationTimestamp:2023-03-23 21:25:02 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[component:cloud-controller-manager pod-template-hash:f4566f566 tier:control-plane] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:cloud-controller-manager-f4566f566 UID:cfd34737-80c0-4799-a01b-7e1b525d00c5 Controller:0xc0024f5137 BlockOwnerDeletion:0xc0024f5138}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2023-03-23 21:25:02 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:component":{},"f:pod-template-hash":{},"f:tier":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cfd34737-80c0-4799-a01b-7e1b525d00c5\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"cloud-controller-manager\"}":{".":{},"f:args":{},"f:command":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:cpu":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/etc/kubernetes\"}":{".":{},"f:mountPath":{},"f:name":{}},"k:{\"mountPath\":\"/etc/ssl\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}},"k:{\"mountPath\":\"/var/lib/waagent/ManagedIdentity-Settings\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:hostNetwork":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{},"f:topologySpreadConstraints":{".":{},"k:{\"topologyKey\":\"kubernetes.io/hostname\",\"whenUnsatisfiable\":\"DoNotSchedule\"}":{".":{},"f:labelSelector":{},"f:maxSkew":{},"f:topologyKey":{},"f:whenUnsatisfiable":{}}},"f:volumes":{".":{},"k:{\"name\":\"etc-kubernetes\"}":{".":{},"f:hostPath":{".":{},"f:path":{},"f:type":{}},"f:name":{}},"k:{\"name\":\"msi\"}":{".":{},"f:hostPath":{".":{},"f:path":{},"f:type":{}},"f:name":{}},"k:{\"name\":\"ssl-mount\"}":{".":{},"f:hostPath":{".":{},"f:path":{},"f:type":{}},"f:name":{}}}}} Subresource:}]} -> {Name:cloud-controller-manager-f4566f566-l4k99 GenerateName:cloud-controller-manager-f4566f566- Namespace:kube-system SelfLink: UID:67b9d58c-1983-4f00-91ac-0dd47baf8ba3 ResourceVersion:553 Generation:0 CreationTimestamp:2023-03-23 21:25:02 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[component:cloud-controller-manager pod-template-hash:f4566f566 tier:control-plane] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:cloud-controller-manager-f4566f566 UID:cfd34737-80c0-4799-a01b-7e1b525d00c5 Controller:0xc0025ae9b7 BlockOwnerDeletion:0xc0025ae9b8}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2023-03-23 21:25:02 +0000 UTC FieldsType:FieldsV1 
FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:component":{},"f:pod-template-hash":{},"f:tier":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cfd34737-80c0-4799-a01b-7e1b525d00c5\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"cloud-controller-manager\"}":{".":{},"f:args":{},"f:command":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:cpu":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/etc/kubernetes\"}":{".":{},"f:mountPath":{},"f:name":{}},"k:{\"mountPath\":\"/etc/ssl\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}},"k:{\"mountPath\":\"/var/lib/waagent/ManagedIdentity-Settings\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:hostNetwork":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{},"f:topologySpreadConstraints":{".":{},"k:{\"topologyKey\":\"kubernetes.io/hostname\",\"whenUnsatisfiable\":\"DoNotSchedule\"}":{".":{},"f:labelSelector":{},"f:maxSkew":{},"f:topologyKey":{},"f:whenUnsatisfiable":{}}},"f:volumes":{".":{},"k:{\"name\":\"etc-kubernetes\"}":{".":{},"f:hostPath":{".":{},"f:path":{},"f:type":{}},"f:name":{}},"k:{\"name\":\"msi\"}":{".":{},"f:hostPath":{".":{},"f:path":{},"f:type":{}},"f:name":{}},"k:{\"name\":\"ssl-mount\"}":{".":{},"f:hostPath":{".":{},"f:path":{},"f:type":{}},"f:name":{}}}}} Subresource:} {Manager:kube-scheduler Operation:Update APIVersion:v1 Time:2023-03-23 21:25:02 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{".":{},"k:{\"type\":\"PodScheduled\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} Subresource:status}]}.
I0323 21:25:02.487884 1 disruption.go:427] updatePod called on pod "cloud-controller-manager-f4566f566-l4k99"
I0323 21:25:02.487910 1 disruption.go:490] No PodDisruptionBudgets found for pod cloud-controller-manager-f4566f566-l4k99, PodDisruptionBudget controller will avoid syncing.
I0323 21:25:02.487915 1 disruption.go:430] No matching pdb for pod "cloud-controller-manager-f4566f566-l4k99"
I0323 21:25:02.487956 1 deployment_controller.go:281] "ReplicaSet updated" replicaSet="kube-system/cloud-controller-manager-f4566f566"
I0323 21:25:02.488274 1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/cloud-controller-manager" duration="72.008258ms"
I0323 21:25:02.488307 1 deployment_controller.go:490] "Error syncing deployment" deployment="kube-system/cloud-controller-manager" err="Operation cannot be fulfilled on deployments.apps \"cloud-controller-manager\": the object has been modified; please apply your changes to the latest version and try again"
I0323 21:25:02.488329 1 deployment_controller.go:576] "Started syncing deployment" deployment="kube-system/cloud-controller-manager" startTime="2023-03-23 21:25:02.488319688 +0000 UTC m=+24.627311558"
I0323 21:25:02.488813 1 deployment_util.go:775] Deployment "cloud-controller-manager" timed out (false) [last progress check: 2023-03-23 21:25:02 +0000 UTC - now: 2023-03-23 21:25:02.48880888 +0000 UTC m=+24.627800750]
I0323 21:25:02.489085 1 replica_set.go:653] Finished syncing ReplicaSet "kube-system/cloud-controller-manager-f4566f566" (53.259755ms)
I0323 21:25:02.489109 1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/cloud-controller-manager-f4566f566", timestamp:time.Time{wall:0xc0ff4ecb99fa9ac4, ext:24574845890, loc:(*time.Location)(0x72c0b80)}}
I0323 21:25:02.489188 1 replica_set_utils.go:59] Updating status for : kube-system/cloud-controller-manager-f4566f566, replicas 0->1 (need 1), fullyLabeledReplicas 0->1, readyReplicas 0->0, availableReplicas 0->0, sequence No: 1->1
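
"Operation cannot be fulfilled ... the object has been modified" (here and again below) is an ordinary optimistic-concurrency conflict (HTTP 409), not a failure: the deployment controller raced with its own replica-set status writes and simply re-queued the sync. Client code hitting the same 409 typically re-reads and retries via client-go's retry helper; a hedged sketch (the annotation key is illustrative and clientset construction is omitted):

    package sketch

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/util/retry"
    )

    // bumpAnnotation re-reads and re-applies a change until the write lands
    // on the latest resourceVersion.
    func bumpAnnotation(cs kubernetes.Interface) error {
        return retry.RetryOnConflict(retry.DefaultRetry, func() error {
            d, err := cs.AppsV1().Deployments("kube-system").Get(
                context.TODO(), "cloud-controller-manager", metav1.GetOptions{})
            if err != nil {
                return err
            }
            if d.Annotations == nil {
                d.Annotations = map[string]string{}
            }
            d.Annotations["example/touched"] = "true" // hypothetical key
            _, err = cs.AppsV1().Deployments("kube-system").Update(
                context.TODO(), d, metav1.UpdateOptions{})
            return err // a Conflict here triggers another Get+Update round
        })
    }
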
I0323 21:25:02.503119 1 disruption.go:427] updatePod called on pod "kube-scheduler-capz-7bsgqo-control-plane-78g6l"
... skipping 4 lines ...
I0323 21:25:02.515040 1 deployment_controller.go:281] "ReplicaSet updated" replicaSet="kube-system/cloud-controller-manager-f4566f566"
I0323 21:25:02.515060 1 deployment_controller.go:176] "Updating deployment" deployment="kube-system/cloud-controller-manager"
I0323 21:25:02.515198 1 replica_set.go:653] Finished syncing ReplicaSet "kube-system/cloud-controller-manager-f4566f566" (26.095486ms)
I0323 21:25:02.515213 1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/cloud-controller-manager-f4566f566", timestamp:time.Time{wall:0xc0ff4ecb99fa9ac4, ext:24574845890, loc:(*time.Location)(0x72c0b80)}}
I0323 21:25:02.515351 1 replica_set.go:653] Finished syncing ReplicaSet "kube-system/cloud-controller-manager-f4566f566" (141.297µs)
I0323 21:25:02.521287 1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/cloud-controller-manager" duration="6.449697ms"
I0323 21:25:02.521315 1 deployment_controller.go:490] "Error syncing deployment" deployment="kube-system/cloud-controller-manager" err="Operation cannot be fulfilled on deployments.apps \"cloud-controller-manager\": the object has been modified; please apply your changes to the latest version and try again"
I0323 21:25:02.521353 1 deployment_controller.go:576] "Started syncing deployment" deployment="kube-system/cloud-controller-manager" startTime="2023-03-23 21:25:02.521340764 +0000 UTC m=+24.660332634"
I0323 21:25:02.553546 1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/cloud-controller-manager" duration="32.18919ms"
I0323 21:25:02.553593 1 deployment_controller.go:176] "Updating deployment" deployment="kube-system/cloud-controller-manager"
I0323 21:25:02.553609 1 deployment_controller.go:576] "Started syncing deployment" deployment="kube-system/cloud-controller-manager" startTime="2023-03-23 21:25:02.553566553 +0000 UTC m=+24.692558423"
I0323 21:25:02.557140 1 deployment_util.go:775] Deployment "cloud-controller-manager" timed out (false) [last progress check: 2023-03-23 21:25:02 +0000 UTC - now: 2023-03-23 21:25:02.557132897 +0000 UTC m=+24.696124867]
I0323 21:25:02.557178 1 progress.go:195] Queueing up deployment "cloud-controller-manager" for a progress check after 599s
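
The 599s figure follows from the deployment's progress deadline: with the default progressDeadlineSeconds of 600 and a progress check recorded about half a second earlier, the controller re-queues for the remaining time. A toy reproduction of that arithmetic (the real logic, in the progress.go cited above, keys off the Progressing condition's LastUpdateTime):

    package main

    import (
        "fmt"
        "time"
    )

    // requeueAfter returns how long until the progress deadline would expire.
    func requeueAfter(deadline time.Duration, lastCheck, now time.Time) time.Duration {
        remaining := deadline - now.Sub(lastCheck)
        if remaining < 0 {
            return 0 // deadline already exceeded; sync immediately
        }
        return remaining
    }

    func main() {
        // Timestamps approximating the log entries above.
        last := time.Date(2023, 3, 23, 21, 25, 2, 0, time.UTC)
        now := last.Add(557 * time.Millisecond)
        fmt.Printf("requeue after %ds\n",
            int(requeueAfter(600*time.Second, last, now).Seconds())) // 599
    }
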
... skipping 107 lines ...
I0323 21:25:08.215827 1 replica_set.go:653] Finished syncing ReplicaSet "calico-system/calico-typha-cf64d56d8" (56.908103ms)
I0323 21:25:08.215861 1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"calico-system/calico-typha-cf64d56d8", timestamp:time.Time{wall:0xc0ff4ecd09795991, ext:30297939599, loc:(*time.Location)(0x72c0b80)}}
I0323 21:25:08.215960 1 replica_set_utils.go:59] Updating status for : calico-system/calico-typha-cf64d56d8, replicas 0->1 (need 1), fullyLabeledReplicas 0->1, readyReplicas 0->0, availableReplicas 0->0, sequence No: 1->1
I0323 21:25:08.216217 1 deployment_controller.go:578] "Finished syncing deployment" deployment="calico-system/calico-typha" duration="70.970381ms"
I0323 21:25:08.216613 1 disruption.go:391] update DB "calico-typha"
I0323 21:25:08.216680 1 disruption.go:558] Finished syncing PodDisruptionBudget "calico-system/calico-typha" (48.207µs)
I0323 21:25:08.216711 1 deployment_controller.go:490] "Error syncing deployment" deployment="calico-system/calico-typha" err="Operation cannot be fulfilled on deployments.apps \"calico-typha\": the object has been modified; please apply your changes to the latest version and try again"
I0323 21:25:08.216760 1 deployment_controller.go:576] "Started syncing deployment" deployment="calico-system/calico-typha" startTime="2023-03-23 21:25:08.216746057 +0000 UTC m=+30.355737927"
I0323 21:25:08.217284 1 deployment_util.go:775] Deployment "calico-typha" timed out (false) [last progress check: 2023-03-23 21:25:08 +0000 UTC - now: 2023-03-23 21:25:08.21726853 +0000 UTC m=+30.356260500]
I0323 21:25:08.227064 1 replica_set.go:443] Pod calico-typha-cf64d56d8-pcc6r updated, objectMeta {Name:calico-typha-cf64d56d8-pcc6r GenerateName:calico-typha-cf64d56d8- Namespace:calico-system SelfLink: UID:64070c0f-16b5-4a40-becb-41ac5f69af8b ResourceVersion:616 Generation:0 CreationTimestamp:2023-03-23 21:25:08 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[app.kubernetes.io/name:calico-typha k8s-app:calico-typha pod-template-hash:cf64d56d8] Annotations:map[hash.operator.tigera.io/system:bb4746872201725da2dea19756c475aa67d9c1e9 hash.operator.tigera.io/tigera-ca-private:2636e6008dd8a0ebca600b306bf1c739165ac8d8 hash.operator.tigera.io/typha-certs:dd7a72e8a592b85f714c69f160cc1a1d171dda4a] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:calico-typha-cf64d56d8 UID:58d635fc-6096-488f-9106-30624b2aad71 Controller:0xc000f35937 BlockOwnerDeletion:0xc000f35938}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2023-03-23 21:25:08 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:annotations":{".":{},"f:hash.operator.tigera.io/system":{},"f:hash.operator.tigera.io/tigera-ca-private":{},"f:hash.operator.tigera.io/typha-certs":{}},"f:generateName":{},"f:labels":{".":{},"f:app.kubernetes.io/name":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"58d635fc-6096-488f-9106-30624b2aad71\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"calico-typha\"}":{".":{},"f:env":{".":{},"k:{\"name\":\"KUBERNETES_SERVICE_HOST\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"KUBERNETES_SERVICE_PORT\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_CAFILE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_CLIENTCN\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_CONNECTIONREBALANCINGMODE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_DATASTORETYPE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_FIPSMODEENABLED\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_HEALTHENABLED\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_HEALTHPORT\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_K8SNAMESPACE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_LOGFILEPATH\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_LOGSEVERITYSCREEN\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_LOGSEVERITYSYS\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_SERVERCERTFILE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_SERVERKEYFILE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_SHUTDOWNTIMEOUTSECS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:host":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":5473,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:hostPort":{},"f:name":{},"f:protocol":{}}},"f:readinessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:host":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/etc/pki/tls/certs/\"}":{".":{},"f:mountPa
th":{},"f:name":{},"f:readOnly":{}},"k:{\"mountPath\":\"/typha-certs\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:hostNetwork":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{},"f:volumes":{".":{},"k:{\"name\":\"tigera-ca-bundle\"}":{".":{},"f:configMap":{".":{},"f:defaultMode":{},"f:name":{}},"f:name":{}},"k:{\"name\":\"typha-certs\"}":{".":{},"f:name":{},"f:secret":{".":{},"f:defaultMode":{},"f:secretName":{}}}}}} Subresource:}]} -> {Name:calico-typha-cf64d56d8-pcc6r GenerateName:calico-typha-cf64d56d8- Namespace:calico-system SelfLink: UID:64070c0f-16b5-4a40-becb-41ac5f69af8b ResourceVersion:618 Generation:0 CreationTimestamp:2023-03-23 21:25:08 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[app.kubernetes.io/name:calico-typha k8s-app:calico-typha pod-template-hash:cf64d56d8] Annotations:map[hash.operator.tigera.io/system:bb4746872201725da2dea19756c475aa67d9c1e9 hash.operator.tigera.io/tigera-ca-private:2636e6008dd8a0ebca600b306bf1c739165ac8d8 hash.operator.tigera.io/typha-certs:dd7a72e8a592b85f714c69f160cc1a1d171dda4a] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:calico-typha-cf64d56d8 UID:58d635fc-6096-488f-9106-30624b2aad71 Controller:0xc001649b47 BlockOwnerDeletion:0xc001649b48}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2023-03-23 21:25:08 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:annotations":{".":{},"f:hash.operator.tigera.io/system":{},"f:hash.operator.tigera.io/tigera-ca-private":{},"f:hash.operator.tigera.io/typha-certs":{}},"f:generateName":{},"f:labels":{".":{},"f:app.kubernetes.io/name":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"58d635fc-6096-488f-9106-30624b2aad71\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"calico-typha\"}":{".":{},"f:env":{".":{},"k:{\"name\":\"KUBERNETES_SERVICE_HOST\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"KUBERNETES_SERVICE_PORT\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_CAFILE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_CLIENTCN\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_CONNECTIONREBALANCINGMODE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_DATASTORETYPE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_FIPSMODEENABLED\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_HEALTHENABLED\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_HEALTHPORT\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_K8SNAMESPACE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_LOGFILEPATH\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_LOGSEVERITYSCREEN\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_LOGSEVERITYSYS\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_SERVERCERTFILE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_SERVERKEYFILE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_SHUTDOWNTIMEOUTSECS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:host":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:perio
dSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":5473,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:hostPort":{},"f:name":{},"f:protocol":{}}},"f:readinessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:host":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/etc/pki/tls/certs/\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}},"k:{\"mountPath\":\"/typha-certs\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:hostNetwork":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{},"f:volumes":{".":{},"k:{\"name\":\"tigera-ca-bundle\"}":{".":{},"f:configMap":{".":{},"f:defaultMode":{},"f:name":{}},"f:name":{}},"k:{\"name\":\"typha-certs\"}":{".":{},"f:name":{},"f:secret":{".":{},"f:defaultMode":{},"f:secretName":{}}}}}} Subresource:}]}.
I0323 21:25:08.227226 1 disruption.go:427] updatePod called on pod "calico-typha-cf64d56d8-pcc6r"
I0323 21:25:08.227243 1 disruption.go:433] updatePod "calico-typha-cf64d56d8-pcc6r" -> PDB "calico-typha"
I0323 21:25:08.227266 1 disruption.go:558] Finished syncing PodDisruptionBudget "calico-system/calico-typha" (14.002µs)
... skipping 7 lines ...
I0323 21:25:08.238781 1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"calico-system/calico-typha-cf64d56d8", timestamp:time.Time{wall:0xc0ff4ecd09795991, ext:30297939599, loc:(*time.Location)(0x72c0b80)}}
I0323 21:25:08.238920 1 replica_set.go:653] Finished syncing ReplicaSet "calico-system/calico-typha-cf64d56d8" (142.72µs)
I0323 21:25:08.251174 1 deployment_controller.go:578] "Finished syncing deployment" deployment="calico-system/calico-typha" duration="13.124246ms"
I0323 21:25:08.251204 1 deployment_controller.go:576] "Started syncing deployment" deployment="calico-system/calico-typha" startTime="2023-03-23 21:25:08.251193401 +0000 UTC m=+30.390185271"
I0323 21:25:08.251289 1 deployment_controller.go:176] "Updating deployment" deployment="calico-system/calico-typha"
I0323 21:25:08.256565 1 deployment_controller.go:578] "Finished syncing deployment" deployment="calico-system/calico-typha" duration="5.361254ms"
I0323 21:25:08.256588 1 deployment_controller.go:490] "Error syncing deployment" deployment="calico-system/calico-typha" err="Operation cannot be fulfilled on deployments.apps \"calico-typha\": the object has been modified; please apply your changes to the latest version and try again"
I0323 21:25:08.256655 1 deployment_controller.go:576] "Started syncing deployment" deployment="calico-system/calico-typha" startTime="2023-03-23 21:25:08.256645868 +0000 UTC m=+30.395637838"
I0323 21:25:08.257123 1 deployment_util.go:775] Deployment "calico-typha" timed out (false) [last progress check: 2023-03-23 21:25:08 +0000 UTC - now: 2023-03-23 21:25:08.257119834 +0000 UTC m=+30.396111704]
I0323 21:25:08.257149 1 progress.go:195] Queueing up deployment "calico-typha" for a progress check after 599s
I0323 21:25:08.257178 1 deployment_controller.go:578] "Finished syncing deployment" deployment="calico-system/calico-typha" duration="526.274µs"
I0323 21:25:08.262320 1 deployment_controller.go:576] "Started syncing deployment" deployment="calico-system/calico-typha" startTime="2023-03-23 21:25:08.262303163 +0000 UTC m=+30.401295033"
I0323 21:25:08.262800 1 deployment_util.go:775] Deployment "calico-typha" timed out (false) [last progress check: 2023-03-23 21:25:08 +0000 UTC - now: 2023-03-23 21:25:08.262796733 +0000 UTC m=+30.401788603]
... skipping 95 lines ...
I0323 21:25:08.780650 1 pvc_protection_controller.go:380] "Enqueuing PVCs for Pod" pod="calico-system/calico-kube-controllers-fb49b9cf7-k69xp" podUID=5b1ac32d-b838-4905-8e8b-26a9e5852190
I0323 21:25:08.780838 1 disruption.go:427] updatePod called on pod "calico-kube-controllers-fb49b9cf7-k69xp"
I0323 21:25:08.780901 1 disruption.go:490] No PodDisruptionBudgets found for pod calico-kube-controllers-fb49b9cf7-k69xp, PodDisruptionBudget controller will avoid syncing.
I0323 21:25:08.780978 1 disruption.go:430] No matching pdb for pod "calico-kube-controllers-fb49b9cf7-k69xp"
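
The pvc_protection_controller entry a few lines up shows the claim-protection machinery reacting to the same pod event: every pod change enqueues the PVCs that pod references, so the kubernetes.io/pvc-protection finalizer is only released once no pod is using a claim. The enqueue step amounts to the following (sketch; the real controller pushes namespace/name keys onto a workqueue):

    package sketch

    import v1 "k8s.io/api/core/v1"

    // claimsForPod collects the namespace/name keys of every PVC the pod mounts.
    func claimsForPod(pod *v1.Pod) []string {
        var keys []string
        for _, vol := range pod.Spec.Volumes {
            if pvc := vol.PersistentVolumeClaim; pvc != nil {
                keys = append(keys, pod.Namespace+"/"+pvc.ClaimName)
            }
        }
        return keys
    }
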
I0323 21:25:08.780675 1 replica_set.go:443] Pod calico-kube-controllers-fb49b9cf7-k69xp updated, objectMeta {Name:calico-kube-controllers-fb49b9cf7-k69xp GenerateName:calico-kube-controllers-fb49b9cf7- Namespace:calico-system SelfLink: UID:5b1ac32d-b838-4905-8e8b-26a9e5852190 ResourceVersion:663 Generation:0 CreationTimestamp:2023-03-23 21:25:08 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:fb49b9cf7] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:calico-kube-controllers-fb49b9cf7 UID:cd5e7a26-695a-4ce2-84a2-8f9536e154eb Controller:0xc002164e40 BlockOwnerDeletion:0xc002164e41}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2023-03-23 21:25:08 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:app.kubernetes.io/name":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cd5e7a26-695a-4ce2-84a2-8f9536e154eb\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"calico-kube-controllers\"}":{".":{},"f:env":{".":{},"k:{\"name\":\"DATASTORE_TYPE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"ENABLED_CONTROLLERS\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"FIPS_MODE_ENABLED\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"KUBERNETES_SERVICE_HOST\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"KUBERNETES_SERVICE_PORT\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"KUBE_CONTROLLERS_CONFIG_NAME\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:exec":{".":{},"f:command":{}},"f:failureThreshold":{},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:readinessProbe":{".":{},"f:exec":{".":{},"f:command":{}},"f:failureThreshold":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}} Subresource:}]} -> {Name:calico-kube-controllers-fb49b9cf7-k69xp GenerateName:calico-kube-controllers-fb49b9cf7- Namespace:calico-system SelfLink: UID:5b1ac32d-b838-4905-8e8b-26a9e5852190 ResourceVersion:665 Generation:0 CreationTimestamp:2023-03-23 21:25:08 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:fb49b9cf7] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:calico-kube-controllers-fb49b9cf7 UID:cd5e7a26-695a-4ce2-84a2-8f9536e154eb Controller:0xc002082c00 BlockOwnerDeletion:0xc002082c01}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2023-03-23 21:25:08 +0000 UTC FieldsType:FieldsV1 
FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:app.kubernetes.io/name":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cd5e7a26-695a-4ce2-84a2-8f9536e154eb\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"calico-kube-controllers\"}":{".":{},"f:env":{".":{},"k:{\"name\":\"DATASTORE_TYPE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"ENABLED_CONTROLLERS\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"FIPS_MODE_ENABLED\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"KUBERNETES_SERVICE_HOST\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"KUBERNETES_SERVICE_PORT\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"KUBE_CONTROLLERS_CONFIG_NAME\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:exec":{".":{},"f:command":{}},"f:failureThreshold":{},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:readinessProbe":{".":{},"f:exec":{".":{},"f:command":{}},"f:failureThreshold":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}} Subresource:} {Manager:kube-scheduler Operation:Update APIVersion:v1 Time:2023-03-23 21:25:08 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{".":{},"k:{\"type\":\"PodScheduled\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} Subresource:status}]}.
I0323 21:25:08.787503 1 deployment_controller.go:578] "Finished syncing deployment" deployment="calico-system/calico-kube-controllers" duration="54.109309ms"
I0323 21:25:08.787638 1 deployment_controller.go:490] "Error syncing deployment" deployment="calico-system/calico-kube-controllers" err="Operation cannot be fulfilled on deployments.apps \"calico-kube-controllers\": the object has been modified; please apply your changes to the latest version and try again"
I0323 21:25:08.787751 1 deployment_controller.go:576] "Started syncing deployment" deployment="calico-system/calico-kube-controllers" startTime="2023-03-23 21:25:08.787738155 +0000 UTC m=+30.926730125"
I0323 21:25:08.788107 1 deployment_util.go:775] Deployment "calico-kube-controllers" timed out (false) [last progress check: 2023-03-23 21:25:08 +0000 UTC - now: 2023-03-23 21:25:08.788102506 +0000 UTC m=+30.927094376]
I0323 21:25:08.788458 1 replica_set.go:653] Finished syncing ReplicaSet "calico-system/calico-kube-controllers-fb49b9cf7" (43.325692ms)
I0323 21:25:08.788574 1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"calico-system/calico-kube-controllers-fb49b9cf7", timestamp:time.Time{wall:0xc0ff4ecd2c6a529c, ext:30884157338, loc:(*time.Location)(0x72c0b80)}}
I0323 21:25:08.788706 1 replica_set_utils.go:59] Updating status for : calico-system/calico-kube-controllers-fb49b9cf7, replicas 0->1 (need 1), fullyLabeledReplicas 0->1, readyReplicas 0->0, availableReplicas 0->0, sequence No: 0->1
I0323 21:25:08.789078 1 deployment_controller.go:281] "ReplicaSet updated" replicaSet="calico-system/calico-kube-controllers-fb49b9cf7"
... skipping 28 lines ...
I0323 21:25:09.908223 1 daemon_controller.go:1029] Pods to delete for daemon set kube-proxy: [], deleting 0
I0323 21:25:09.908228 1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0ff4ecd7621da24, ext:32047180066, loc:(*time.Location)(0x72c0b80)}}
I0323 21:25:09.908251 1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0ff4ecd7622c9fb, ext:32047241565, loc:(*time.Location)(0x72c0b80)}}
I0323 21:25:09.908275 1 daemon_controller.go:967] Nodes needing daemon pods for daemon set kube-proxy: [], creating 0
I0323 21:25:09.908302 1 daemon_controller.go:1029] Pods to delete for daemon set kube-proxy: [], deleting 0
I0323 21:25:09.908315 1 daemon_controller.go:1112] Updating daemon set status
E0323 21:25:09.911217 1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0323 21:25:09.911231 1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0323 21:25:09.911253 1 plugins.go:752] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0323 21:25:09.912015 1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0323 21:25:09.912025 1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0323 21:25:09.912040 1 plugins.go:752] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0323 21:25:09.912609 1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0323 21:25:09.912621 1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0323 21:25:09.912635 1 plugins.go:752] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0323 21:25:09.913208 1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0323 21:25:09.913214 1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0323 21:25:09.913227 1 plugins.go:752] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
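
The repeating driver-call/plugins triple is the FlexVolume prober: the controller manager execs the driver binary under the kubelet plugin directory with the init argument and expects a JSON status object on stdout. The nodeagent~uds binary does not exist in this container, so exec fails, stdout is empty, and unmarshalling "" yields "unexpected end of JSON input" -- noisy but evidently harmless to this run. A sketch of the probe under those assumptions (DriverStatus fields follow the FlexVolume output convention; the error wrapping is illustrative):

    package sketch

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // DriverStatus mirrors the JSON a FlexVolume driver must print,
    // e.g. {"status":"Success","capabilities":{"attach":false}}.
    type DriverStatus struct {
        Status       string          `json:"status"`
        Message      string          `json:"message,omitempty"`
        Capabilities map[string]bool `json:"capabilities,omitempty"`
    }

    // probe runs `<driver> init` and decodes stdout. An absent binary yields
    // an exec error and empty output, reproducing both errors logged above.
    func probe(driverPath string) (*DriverStatus, error) {
        out, err := exec.Command(driverPath, "init").CombinedOutput()
        if err != nil {
            return nil, fmt.Errorf("driver call failed: %v, output: %q", err, out)
        }
        var st DriverStatus
        if err := json.Unmarshal(out, &st); err != nil {
            return nil, fmt.Errorf("failed to unmarshal output %q: %v", out, err)
        }
        return &st, nil
    }
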
I0323 21:25:09.928665 1 daemon_controller.go:247] Updating daemon set kube-proxy
I0323 21:25:09.932296 1 daemon_controller.go:1172] Finished syncing daemon set "kube-system/kube-proxy" (24.467202ms)
I0323 21:25:09.935368 1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0ff4ecd7622c9fb, ext:32047241565, loc:(*time.Location)(0x72c0b80)}}
I0323 21:25:09.935436 1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0ff4ecd77c193d7, ext:32074425045, loc:(*time.Location)(0x72c0b80)}}
I0323 21:25:09.935446 1 daemon_controller.go:967] Nodes needing daemon pods for daemon set kube-proxy: [], creating 0
I0323 21:25:09.935470 1 daemon_controller.go:1029] Pods to delete for daemon set kube-proxy: [], deleting 0
... skipping 7 lines ...
I0323 21:25:10.317903 1 replica_set.go:443] Pod tigera-operator-6bbf97c9cf-ff4d5 updated, objectMeta {Name:tigera-operator-6bbf97c9cf-ff4d5 GenerateName:tigera-operator-6bbf97c9cf- Namespace:calico-system SelfLink: UID:1032e1fe-7039-4094-afd9-2bd74e17d728 ResourceVersion:566 Generation:0 CreationTimestamp:2023-03-23 21:25:00 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[k8s-app:tigera-operator name:tigera-operator pod-template-hash:6bbf97c9cf] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:tigera-operator-6bbf97c9cf UID:f2479fca-3ac5-4fa8-a4ab-e90b34d6e38d Controller:0xc00265ea57 BlockOwnerDeletion:0xc00265ea58}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2023-03-23 21:25:00 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f2479fca-3ac5-4fa8-a4ab-e90b34d6e38d\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"tigera-operator\"}":{".":{},"f:command":{},"f:env":{".":{},"k:{\"name\":\"OPERATOR_NAME\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"POD_NAME\"}":{".":{},"f:name":{},"f:valueFrom":{".":{},"f:fieldRef":{}}},"k:{\"name\":\"TIGERA_OPERATOR_INIT_IMAGE_VERSION\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"WATCH_NAMESPACE\"}":{".":{},"f:name":{}}},"f:envFrom":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/var/lib/calico\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:hostNetwork":{},"f:nodeSelector":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{},"f:volumes":{".":{},"k:{\"name\":\"var-lib-calico\"}":{".":{},"f:hostPath":{".":{},"f:path":{},"f:type":{}},"f:name":{}}}}} Subresource:} {Manager:kubelet Operation:Update APIVersion:v1 Time:2023-03-23 21:25:03 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.0.0.4\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} Subresource:status}]} -> {Name:tigera-operator-6bbf97c9cf-ff4d5 GenerateName:tigera-operator-6bbf97c9cf- Namespace:calico-system SelfLink: UID:1032e1fe-7039-4094-afd9-2bd74e17d728 ResourceVersion:683 Generation:0 CreationTimestamp:2023-03-23 21:25:00 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[k8s-app:tigera-operator name:tigera-operator pod-template-hash:6bbf97c9cf] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:tigera-operator-6bbf97c9cf UID:f2479fca-3ac5-4fa8-a4ab-e90b34d6e38d Controller:0xc00250fa27 BlockOwnerDeletion:0xc00250fa28}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2023-03-23 21:25:00 +0000 UTC FieldsType:FieldsV1 
FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f2479fca-3ac5-4fa8-a4ab-e90b34d6e38d\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"tigera-operator\"}":{".":{},"f:command":{},"f:env":{".":{},"k:{\"name\":\"OPERATOR_NAME\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"POD_NAME\"}":{".":{},"f:name":{},"f:valueFrom":{".":{},"f:fieldRef":{}}},"k:{\"name\":\"TIGERA_OPERATOR_INIT_IMAGE_VERSION\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"WATCH_NAMESPACE\"}":{".":{},"f:name":{}}},"f:envFrom":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/var/lib/calico\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:hostNetwork":{},"f:nodeSelector":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{},"f:volumes":{".":{},"k:{\"name\":\"var-lib-calico\"}":{".":{},"f:hostPath":{".":{},"f:path":{},"f:type":{}},"f:name":{}}}}} Subresource:} {Manager:kubelet Operation:Update APIVersion:v1 Time:2023-03-23 21:25:10 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.0.0.4\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} Subresource:status}]}.
I0323 21:25:10.318020 1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"calico-system/tigera-operator-6bbf97c9cf", timestamp:time.Time{wall:0xc0ff4ecb070d8244, ext:22257317698, loc:(*time.Location)(0x72c0b80)}}
I0323 21:25:10.318103 1 replica_set_utils.go:59] Updating status for : calico-system/tigera-operator-6bbf97c9cf, replicas 1->1 (need 1), fullyLabeledReplicas 1->1, readyReplicas 0->1, availableReplicas 0->1, sequence No: 1->1
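
Status transitions like "readyReplicas 0->1" above are written through the status subresource -- the same mechanism that gives the pod diffs earlier in this log their Subresource:status managedFields entries. A sketch of the controller-side write (clientset wiring omitted):

    package sketch

    import (
        "context"

        appsv1 "k8s.io/api/apps/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // updateRSStatus bumps readyReplicas via the status subresource, as the
    // replica-set controller does after recounting pods.
    func updateRSStatus(cs kubernetes.Interface, rs *appsv1.ReplicaSet, ready int32) error {
        rs = rs.DeepCopy()
        rs.Status.ReadyReplicas = ready
        _, err := cs.AppsV1().ReplicaSets(rs.Namespace).UpdateStatus(
            context.TODO(), rs, metav1.UpdateOptions{})
        return err
    }
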
I0323 21:25:10.318449 1 disruption.go:427] updatePod called on pod "tigera-operator-6bbf97c9cf-ff4d5"
I0323 21:25:10.318479 1 disruption.go:490] No PodDisruptionBudgets found for pod tigera-operator-6bbf97c9cf-ff4d5, PodDisruptionBudget controller will avoid syncing.
I0323 21:25:10.318484 1 disruption.go:430] No matching pdb for pod "tigera-operator-6bbf97c9cf-ff4d5"
E0323 21:25:10.318695 1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0323 21:25:10.318709 1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0323 21:25:10.318726 1 plugins.go:752] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0323 21:25:10.319053 1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0323 21:25:10.319063 1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0323 21:25:10.319076 1 plugins.go:752] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
I0323 21:25:10.339814 1 deployment_controller.go:281] "ReplicaSet updated" replicaSet="calico-system/tigera-operator-6bbf97c9cf"
I0323 21:25:10.340167 1 replica_set.go:653] Finished syncing ReplicaSet "calico-system/tigera-operator-6bbf97c9cf" (22.15029ms)
I0323 21:25:10.340257 1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"calico-system/tigera-operator-6bbf97c9cf", timestamp:time.Time{wall:0xc0ff4ecb070d8244, ext:22257317698, loc:(*time.Location)(0x72c0b80)}}
I0323 21:25:10.340333 1 replica_set.go:653] Finished syncing ReplicaSet "calico-system/tigera-operator-6bbf97c9cf" (86.999µs)
I0323 21:25:10.340390 1 deployment_controller.go:576] "Started syncing deployment" deployment="calico-system/tigera-operator" startTime="2023-03-23 21:25:10.340216924 +0000 UTC m=+32.479208894"
I0323 21:25:10.370209 1 deployment_controller.go:176] "Updating deployment" deployment="calico-system/tigera-operator"
... skipping 15 lines ...
I0323 21:25:10.668711 1 event.go:294] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kube-proxy-jfh5b"
I0323 21:25:10.668795 1 disruption.go:427] updatePod called on pod "kube-proxy-jfh5b"
I0323 21:25:10.668810 1 disruption.go:490] No PodDisruptionBudgets found for pod kube-proxy-jfh5b, PodDisruptionBudget controller will avoid syncing.
I0323 21:25:10.668815 1 disruption.go:430] No matching pdb for pod "kube-proxy-jfh5b"
I0323 21:25:10.668832 1 daemon_controller.go:630] Pod kube-proxy-jfh5b deleted.
I0323 21:25:10.668859 1 controller_utils.go:240] Lowered expectations &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0ff4ecda6e5babd, ext:32791581627, loc:(*time.Location)(0x72c0b80)}}
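
"Lowered expectations" is the other half of the ledger sketched above: the daemonset controller deleted kube-proxy-jfh5b (a rolling update of the daemon set), emitted the SuccessfulDelete event, and decremented its delete expectation once the watch confirmed the deletion. Emitting such an event from controller code looks roughly like this (sketch; recorder construction via an event broadcaster is omitted):

    package sketch

    import (
        v1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/runtime"
        "k8s.io/client-go/tools/record"
    )

    // reportDelete records the Normal/SuccessfulDelete event against the
    // owning object, as seen in the "Event occurred" line above.
    func reportDelete(recorder record.EventRecorder, owner runtime.Object, pod *v1.Pod) {
        recorder.Eventf(owner, v1.EventTypeNormal, "SuccessfulDelete",
            "Deleted pod: %v", pod.Name)
    }
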
E0323 21:25:10.674902 1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0323 21:25:10.674917 1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0323 21:25:10.674940 1 plugins.go:752] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0323 21:25:10.675274 1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0323 21:25:10.675288 1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0323 21:25:10.675301 1 plugins.go:752] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0323 21:25:10.675550 1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0323 21:25:10.675557 1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0323 21:25:10.675570 1 plugins.go:752] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0323 21:25:10.676000 1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0323 21:25:10.676008 1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0323 21:25:10.676021 1 plugins.go:752] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
I0323 21:25:10.680539 1 daemon_controller.go:1172] Finished syncing daemon set "kube-system/kube-proxy" (37.525444ms)
I0323 21:25:10.681030 1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0ff4ecda6e5babd, ext:32791581627, loc:(*time.Location)(0x72c0b80)}}
I0323 21:25:10.681076 1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0ff4ecda8985a22, ext:32820065056, loc:(*time.Location)(0x72c0b80)}}
I0323 21:25:10.681086 1 daemon_controller.go:967] Nodes needing daemon pods for daemon set kube-proxy: [], creating 0
I0323 21:25:10.681107 1 daemon_controller.go:1029] Pods to delete for daemon set kube-proxy: [], deleting 0
I0323 21:25:10.681111 1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0ff4ecda8985a22, ext:32820065056, loc:(*time.Location)(0x72c0b80)}}
... skipping 12 lines ...
I0323 21:25:10.699104 1 daemon_controller.go:1029] Pods to delete for daemon set cloud-node-manager: [], deleting 0
I0323 21:25:10.699109 1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/cloud-node-manager", timestamp:time.Time{wall:0xc0ff4ecda9aaf12b, ext:32838060585, loc:(*time.Location)(0x72c0b80)}}
I0323 21:25:10.699178 1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/cloud-node-manager", timestamp:time.Time{wall:0xc0ff4ecda9ac944e, ext:32838167884, loc:(*time.Location)(0x72c0b80)}}
I0323 21:25:10.699185 1 daemon_controller.go:967] Nodes needing daemon pods for daemon set cloud-node-manager: [], creating 0
I0323 21:25:10.699898 1 daemon_controller.go:1029] Pods to delete for daemon set cloud-node-manager: [], deleting 0
I0323 21:25:10.699927 1 daemon_controller.go:1112] Updating daemon set status
E0323 21:25:10.700237 1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0323 21:25:10.700252 1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0323 21:25:10.700268 1 plugins.go:752] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
I0323 21:25:10.714277 1 daemon_controller.go:1172] Finished syncing daemon set "kube-system/kube-proxy" (33.71208ms)
I0323 21:25:10.714734 1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0ff4ecda8994356, ext:32820124856, loc:(*time.Location)(0x72c0b80)}}
I0323 21:25:10.714773 1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0ff4ecdaa9a89b2, ext:32853762736, loc:(*time.Location)(0x72c0b80)}}
I0323 21:25:10.714782 1 daemon_controller.go:967] Nodes needing daemon pods for daemon set kube-proxy: [], creating 0
I0323 21:25:10.714800 1 daemon_controller.go:1029] Pods to delete for daemon set kube-proxy: [], deleting 0
I0323 21:25:10.714805 1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0ff4ecdaa9a89b2, ext:32853762736, loc:(*time.Location)(0x72c0b80)}}
... skipping 15 lines ...
I0323 21:25:10.715497 1 daemon_controller.go:1112] Updating daemon set status
I0323 21:25:10.715518 1 daemon_controller.go:1172] Finished syncing daemon set "kube-system/cloud-node-manager" (443.796µs)
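
The microsecond-scale durations come from the standard timing pattern: each sync handler stamps time.Now() on entry and logs time.Since on exit, so a near-instant re-sync right after real work indicates a cache-hit no-op. The pattern, roughly:

    package sketch

    import (
        "time"

        "k8s.io/klog/v2"
    )

    // syncHandler shows the timing pattern behind
    // "Finished syncing daemon set ... (443.796µs)".
    func syncHandler(key string) error {
        start := time.Now()
        defer func() {
            klog.V(4).Infof("Finished syncing daemon set %q (%v)", key, time.Since(start))
        }()
        // ... actual reconcile work elided ...
        return nil
    }
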
I0323 21:25:11.105303 1 disruption.go:427] updatePod called on pod "calico-node-55pgx"
I0323 21:25:11.105354 1 disruption.go:490] No PodDisruptionBudgets found for pod calico-node-55pgx, PodDisruptionBudget controller will avoid syncing.
I0323 21:25:11.105360 1 disruption.go:430] No matching pdb for pod "calico-node-55pgx"
I0323 21:25:11.105378 1 daemon_controller.go:570] Pod calico-node-55pgx updated.
E0323 21:25:11.106306 1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0323 21:25:11.106323 1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0323 21:25:11.106346 1 plugins.go:752] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0323 21:25:11.106811 1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0323 21:25:11.106821 1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0323 21:25:11.106833 1 plugins.go:752] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0323 21:25:11.107173 1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0323 21:25:11.107182 1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0323 21:25:11.107196 1 plugins.go:752] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0323 21:25:11.107532 1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0323 21:25:11.107548 1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0323 21:25:11.107559 1 plugins.go:752] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0323 21:25:11.107858 1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0323 21:25:11.107866 1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0323 21:25:11.107883 1 plugins.go:752] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0323 21:25:11.108178 1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0323 21:25:11.108187 1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0323 21:25:11.108205 1 plugins.go:752] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0323 21:25:11.108489 1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0323 21:25:11.108503 1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0323 21:25:11.108518 1 plugins.go:752] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0323 21:25:11.108813 1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0323 21:25:11.108824 1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0323 21:25:11.108841 1 plugins.go:752] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0323 21:25:11.109122 1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0323 21:25:11.109130 1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0323 21:25:11.109147 1 plugins.go:752] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0323 21:25:11.109445 1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0323 21:25:11.109453 1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0323 21:25:11.109470 1 plugins.go:752] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0323 21:25:11.109775 1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0323 21:25:11.109785 1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0323 21:25:11.109803 1 plugins.go:752] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0323 21:25:11.110173 1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0323 21:25:11.110190 1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0323 21:25:11.110202 1 plugins.go:752] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
I0323 21:25:11.113250 1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"calico-system/calico-node", timestamp:time.Time{wall:0xc0ff4ecd15d4aeab, ext:30505251853, loc:(*time.Location)(0x72c0b80)}}
I0323 21:25:11.113506 1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:0, del:0, key:"calico-system/calico-node", timestamp:time.Time{wall:0xc0ff4ecdc6c3df19, ext:33252491899, loc:(*time.Location)(0x72c0b80)}}
I0323 21:25:11.113740 1 daemon_controller.go:967] Nodes needing daemon pods for daemon set calico-node: [], creating 0
I0323 21:25:11.114030 1 daemon_controller.go:1029] Pods to delete for daemon set calico-node: [], deleting 0
I0323 21:25:11.114107 1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"calico-system/calico-node", timestamp:time.Time{wall:0xc0ff4ecdc6c3df19, ext:33252491899, loc:(*time.Location)(0x72c0b80)}}
I0323 21:25:11.114277 1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:0, del:0, key:"calico-system/calico-node", timestamp:time.Time{wall:0xc0ff4ecdc6cfae81, ext:33253265791, loc:(*time.Location)(0x72c0b80)}}
... skipping 37 lines ...
I0323 21:25:14.232521 1 replica_set.go:443] Pod calico-typha-cf64d56d8-pcc6r updated, objectMeta {Name:calico-typha-cf64d56d8-pcc6r GenerateName:calico-typha-cf64d56d8- Namespace:calico-system SelfLink: UID:64070c0f-16b5-4a40-becb-41ac5f69af8b ResourceVersion:618 Generation:0 CreationTimestamp:2023-03-23 21:25:08 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[app.kubernetes.io/name:calico-typha k8s-app:calico-typha pod-template-hash:cf64d56d8] Annotations:map[hash.operator.tigera.io/system:bb4746872201725da2dea19756c475aa67d9c1e9 hash.operator.tigera.io/tigera-ca-private:2636e6008dd8a0ebca600b306bf1c739165ac8d8 hash.operator.tigera.io/typha-certs:dd7a72e8a592b85f714c69f160cc1a1d171dda4a] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:calico-typha-cf64d56d8 UID:58d635fc-6096-488f-9106-30624b2aad71 Controller:0xc001649b47 BlockOwnerDeletion:0xc001649b48}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2023-03-23 21:25:08 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:annotations":{".":{},"f:hash.operator.tigera.io/system":{},"f:hash.operator.tigera.io/tigera-ca-private":{},"f:hash.operator.tigera.io/typha-certs":{}},"f:generateName":{},"f:labels":{".":{},"f:app.kubernetes.io/name":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"58d635fc-6096-488f-9106-30624b2aad71\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"calico-typha\"}":{".":{},"f:env":{".":{},"k:{\"name\":\"KUBERNETES_SERVICE_HOST\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"KUBERNETES_SERVICE_PORT\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_CAFILE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_CLIENTCN\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_CONNECTIONREBALANCINGMODE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_DATASTORETYPE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_FIPSMODEENABLED\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_HEALTHENABLED\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_HEALTHPORT\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_K8SNAMESPACE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_LOGFILEPATH\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_LOGSEVERITYSCREEN\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_LOGSEVERITYSYS\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_SERVERCERTFILE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_SERVERKEYFILE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_SHUTDOWNTIMEOUTSECS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:host":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":5473,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:hostPort":{},"f:name":{},"f:protocol":{}}},"f:readinessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:host":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/etc/pki/tls/certs/\"}":{".":{},"f:mountPa
th":{},"f:name":{},"f:readOnly":{}},"k:{\"mountPath\":\"/typha-certs\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:hostNetwork":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{},"f:volumes":{".":{},"k:{\"name\":\"tigera-ca-bundle\"}":{".":{},"f:configMap":{".":{},"f:defaultMode":{},"f:name":{}},"f:name":{}},"k:{\"name\":\"typha-certs\"}":{".":{},"f:name":{},"f:secret":{".":{},"f:defaultMode":{},"f:secretName":{}}}}}} Subresource:}]} -> {Name:calico-typha-cf64d56d8-pcc6r GenerateName:calico-typha-cf64d56d8- Namespace:calico-system SelfLink: UID:64070c0f-16b5-4a40-becb-41ac5f69af8b ResourceVersion:712 Generation:0 CreationTimestamp:2023-03-23 21:25:08 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[app.kubernetes.io/name:calico-typha k8s-app:calico-typha pod-template-hash:cf64d56d8] Annotations:map[hash.operator.tigera.io/system:bb4746872201725da2dea19756c475aa67d9c1e9 hash.operator.tigera.io/tigera-ca-private:2636e6008dd8a0ebca600b306bf1c739165ac8d8 hash.operator.tigera.io/typha-certs:dd7a72e8a592b85f714c69f160cc1a1d171dda4a] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:calico-typha-cf64d56d8 UID:58d635fc-6096-488f-9106-30624b2aad71 Controller:0xc0028b85b0 BlockOwnerDeletion:0xc0028b85b1}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2023-03-23 21:25:08 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:annotations":{".":{},"f:hash.operator.tigera.io/system":{},"f:hash.operator.tigera.io/tigera-ca-private":{},"f:hash.operator.tigera.io/typha-certs":{}},"f:generateName":{},"f:labels":{".":{},"f:app.kubernetes.io/name":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"58d635fc-6096-488f-9106-30624b2aad71\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"calico-typha\"}":{".":{},"f:env":{".":{},"k:{\"name\":\"KUBERNETES_SERVICE_HOST\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"KUBERNETES_SERVICE_PORT\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_CAFILE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_CLIENTCN\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_CONNECTIONREBALANCINGMODE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_DATASTORETYPE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_FIPSMODEENABLED\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_HEALTHENABLED\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_HEALTHPORT\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_K8SNAMESPACE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_LOGFILEPATH\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_LOGSEVERITYSCREEN\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_LOGSEVERITYSYS\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_SERVERCERTFILE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_SERVERKEYFILE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_SHUTDOWNTIMEOUTSECS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:host":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:perio
dSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":5473,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:hostPort":{},"f:name":{},"f:protocol":{}}},"f:readinessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:host":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/etc/pki/tls/certs/\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}},"k:{\"mountPath\":\"/typha-certs\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:hostNetwork":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{},"f:volumes":{".":{},"k:{\"name\":\"tigera-ca-bundle\"}":{".":{},"f:configMap":{".":{},"f:defaultMode":{},"f:name":{}},"f:name":{}},"k:{\"name\":\"typha-certs\"}":{".":{},"f:name":{},"f:secret":{".":{},"f:defaultMode":{},"f:secretName":{}}}}}} Subresource:} {Manager:kubelet Operation:Update APIVersion:v1 Time:2023-03-23 21:25:14 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.0.0.4\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} Subresource:status}]}.
I0323 21:25:14.232700 1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"calico-system/calico-typha-cf64d56d8", timestamp:time.Time{wall:0xc0ff4ecd09795991, ext:30297939599, loc:(*time.Location)(0x72c0b80)}}
I0323 21:25:14.232816 1 replica_set.go:653] Finished syncing ReplicaSet "calico-system/calico-typha-cf64d56d8" (119.998µs)
I0323 21:25:14.233176 1 disruption.go:427] updatePod called on pod "calico-typha-cf64d56d8-pcc6r"
I0323 21:25:14.233225 1 disruption.go:433] updatePod "calico-typha-cf64d56d8-pcc6r" -> PDB "calico-typha"
I0323 21:25:14.233289 1 disruption.go:558] Finished syncing PodDisruptionBudget "calico-system/calico-typha" (16.7µs)
E0323 21:25:14.233574 1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0323 21:25:14.233610 1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0323 21:25:14.233660 1 plugins.go:752] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0323 21:25:14.237523 1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0323 21:25:14.237624 1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0323 21:25:14.237688 1 plugins.go:752] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0323 21:25:14.238024 1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0323 21:25:14.238093 1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0323 21:25:14.238150 1 plugins.go:752] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
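Note: the repeated E/W/E triplet above (driver-call.go, then plugins.go) is the volume-plugin prober, which kube-controller-manager runs just as kubelet does. It scans /usr/libexec/kubernetes/kubelet-plugins/volume/exec/, execs each driver with the single argument "init", and expects a JSON status object on stdout. The nodeagent~uds directory (installed for Calico's node agent) is present on this host but the uds binary is not, so fork/exec fails, the captured output is empty, and unmarshalling "" fails with "unexpected end of JSON input". The sketch below is illustrative only (driverStatus is a simplified stand-in for the real FlexVolume status shape, not the actual driver-call.go code), but it reproduces both messages:

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // driverStatus is a simplified stand-in for the JSON a FlexVolume
    // driver must print in response to "init".
    type driverStatus struct {
        Status  string `json:"status"`
        Message string `json:"message,omitempty"`
    }

    // callDriver execs the driver and then unmarshals whatever it printed,
    // mirroring the two failures in the log: fork/exec fails because the
    // binary is missing, and json.Unmarshal of the empty output fails with
    // "unexpected end of JSON input".
    func callDriver(path string, args ...string) (*driverStatus, error) {
        out, execErr := exec.Command(path, args...).CombinedOutput()
        if execErr != nil {
            fmt.Printf("driver call failed: executable: %s, args: %v, error: %v, output: %q\n",
                path, args, execErr, string(out))
        }
        var st driverStatus
        if err := json.Unmarshal(out, &st); err != nil {
            return nil, fmt.Errorf("failed to unmarshal output %q: %v", string(out), err)
        }
        return &st, nil
    }

    func main() {
        _, err := callDriver("/usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds", "init")
        fmt.Println(err)
    }

The triplet is noisy but harmless for this job: nothing on this host actually uses the driver, and the prober simply retries on later plugin events, which is why it recurs throughout the log.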
I0323 21:25:14.251985 1 daemon_controller.go:1172] Finished syncing daemon set "kube-system/csi-proxy" (3.042460752s)
I0323 21:25:14.300637 1 daemon_controller.go:394] ControllerRevision kube-proxy-windows-6465448bb8 added.
I0323 21:25:14.300677 1 endpointslicemirroring_controller.go:274] syncEndpoints("calico-system/calico-typha")
I0323 21:25:14.300687 1 endpointslicemirroring_controller.go:309] calico-system/calico-typha Service now has selector, cleaning up any mirrored EndpointSlices
I0323 21:25:14.300702 1 endpointslicemirroring_controller.go:271] Finished syncing EndpointSlices for "calico-system/calico-typha" Endpoints. (39.3µs)
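Note: the EndpointSlice mirroring controller mirrors hand-written Endpoints objects into EndpointSlices only for Services that have no selector; the moment a Service carries a selector (as calico-typha does here), the regular endpoint controllers own its slices and any previously mirrored ones are removed. Roughly, the gate looks like this (illustrative helper, not the controller's code):

    package example

    import v1 "k8s.io/api/core/v1"

    // needsMirroring: only selectorless Services get their custom Endpoints
    // mirrored into EndpointSlices; "Service now has selector" means any
    // previously mirrored slices must be cleaned up.
    func needsMirroring(svc *v1.Service) bool {
        return len(svc.Spec.Selector) == 0
    }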
I0323 21:25:14.301700 1 controller_utils.go:206] Controller kube-system/kube-proxy-windows either never recorded expectations, or the ttl expired.
... skipping 102 lines ...
I0323 21:25:14.543930 1 pvc_protection_controller.go:380] "Enqueuing PVCs for Pod" pod="kube-system/metrics-server-85c7d488df-dzhdc" podUID=59122582-782f-4ff7-98ab-96e4b22de3bf
I0323 21:25:14.544010 1 replica_set.go:443] Pod metrics-server-85c7d488df-dzhdc updated, objectMeta {Name:metrics-server-85c7d488df-dzhdc GenerateName:metrics-server-85c7d488df- Namespace:kube-system SelfLink: UID:59122582-782f-4ff7-98ab-96e4b22de3bf ResourceVersion:750 Generation:0 CreationTimestamp:2023-03-23 21:25:14 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[k8s-app:metrics-server pod-template-hash:85c7d488df] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:metrics-server-85c7d488df UID:45a26edc-a4b6-40f4-85e0-ef9982aa6f92 Controller:0xc002a7d2ee BlockOwnerDeletion:0xc002a7d2ef}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2023-03-23 21:25:14 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45a26edc-a4b6-40f4-85e0-ef9982aa6f92\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"metrics-server\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":4443,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}}},"f:readinessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{".":{},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:securityContext":{".":{},"f:allowPrivilegeEscalation":{},"f:readOnlyRootFilesystem":{},"f:runAsNonRoot":{},"f:runAsUser":{}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/tmp\"}":{".":{},"f:mountPath":{},"f:name":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{},"f:volumes":{".":{},"k:{\"name\":\"tmp-dir\"}":{".":{},"f:emptyDir":{},"f:name":{}}}}} Subresource:}]} -> {Name:metrics-server-85c7d488df-dzhdc GenerateName:metrics-server-85c7d488df- Namespace:kube-system SelfLink: UID:59122582-782f-4ff7-98ab-96e4b22de3bf ResourceVersion:756 Generation:0 CreationTimestamp:2023-03-23 21:25:14 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[k8s-app:metrics-server pod-template-hash:85c7d488df] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:metrics-server-85c7d488df UID:45a26edc-a4b6-40f4-85e0-ef9982aa6f92 Controller:0xc002b66ea7 BlockOwnerDeletion:0xc002b66ea8}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2023-03-23 21:25:14 +0000 UTC FieldsType:FieldsV1 
FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45a26edc-a4b6-40f4-85e0-ef9982aa6f92\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"metrics-server\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":4443,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}}},"f:readinessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{".":{},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:securityContext":{".":{},"f:allowPrivilegeEscalation":{},"f:readOnlyRootFilesystem":{},"f:runAsNonRoot":{},"f:runAsUser":{}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/tmp\"}":{".":{},"f:mountPath":{},"f:name":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{},"f:volumes":{".":{},"k:{\"name\":\"tmp-dir\"}":{".":{},"f:emptyDir":{},"f:name":{}}}}} Subresource:} {Manager:kube-scheduler Operation:Update APIVersion:v1 Time:2023-03-23 21:25:14 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{".":{},"k:{\"type\":\"PodScheduled\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} Subresource:status}]}.
I0323 21:25:14.544745 1 disruption.go:427] updatePod called on pod "metrics-server-85c7d488df-dzhdc"
I0323 21:25:14.544776 1 disruption.go:490] No PodDisruptionBudgets found for pod metrics-server-85c7d488df-dzhdc, PodDisruptionBudget controller will avoid syncing.
I0323 21:25:14.544798 1 disruption.go:430] No matching pdb for pod "metrics-server-85c7d488df-dzhdc"
I0323 21:25:14.544972 1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/metrics-server" duration="65.31535ms"
I0323 21:25:14.545094 1 deployment_controller.go:490] "Error syncing deployment" deployment="kube-system/metrics-server" err="Operation cannot be fulfilled on deployments.apps \"metrics-server\": the object has been modified; please apply your changes to the latest version and try again"
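Note: "the object has been modified; please apply your changes to the latest version and try again" is the API server rejecting a write made with a stale resourceVersion (optimistic concurrency); the deployment controller handles it by requeueing, which is why "Started syncing deployment" follows immediately. Client code resolves the same Conflict by re-reading and retrying, e.g. with client-go's retry helper. A minimal sketch (scaleMetricsServer is an illustrative helper, not code from this job):

    package example

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/util/retry"
    )

    // scaleMetricsServer re-reads the Deployment on every attempt so the
    // update always carries the newest resourceVersion; RetryOnConflict
    // retries only when the returned error is a 409 Conflict.
    func scaleMetricsServer(cs kubernetes.Interface, replicas int32) error {
        return retry.RetryOnConflict(retry.DefaultRetry, func() error {
            d, err := cs.AppsV1().Deployments("kube-system").Get(
                context.TODO(), "metrics-server", metav1.GetOptions{})
            if err != nil {
                return err
            }
            d.Spec.Replicas = &replicas
            _, err = cs.AppsV1().Deployments("kube-system").Update(
                context.TODO(), d, metav1.UpdateOptions{})
            return err // a Conflict here triggers another attempt
        })
    }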
I0323 21:25:14.545195 1 deployment_controller.go:576] "Started syncing deployment" deployment="kube-system/metrics-server" startTime="2023-03-23 21:25:14.545181077 +0000 UTC m=+36.684172947"
I0323 21:25:14.545654 1 deployment_util.go:775] Deployment "metrics-server" timed out (false) [last progress check: 2023-03-23 21:25:14 +0000 UTC - now: 2023-03-23 21:25:14.545649229 +0000 UTC m=+36.684641099]
I0323 21:25:14.555799 1 replica_set.go:653] Finished syncing ReplicaSet "kube-system/metrics-server-85c7d488df" (18.828516ms)
I0323 21:25:14.555940 1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/metrics-server-85c7d488df", timestamp:time.Time{wall:0xc0ff4ece9d2718fa, ext:36628093532, loc:(*time.Location)(0x72c0b80)}}
I0323 21:25:14.556032 1 controller_utils.go:938] Ignoring inactive pod kube-system/kube-proxy-jfh5b in state Running, deletion time 2023-03-23 21:25:40 +0000 UTC
I0323 21:25:14.556120 1 replica_set_utils.go:59] Updating status for : kube-system/metrics-server-85c7d488df, replicas 0->1 (need 1), fullyLabeledReplicas 0->1, readyReplicas 0->0, availableReplicas 0->0, sequence No: 1->1
... skipping 27 lines ...
I0323 21:25:15.423987 1 daemon_controller.go:1029] Pods to delete for daemon set kube-proxy: [], deleting 0
I0323 21:25:15.423991 1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0ff4eced944ff69, ext:37562944103, loc:(*time.Location)(0x72c0b80)}}
I0323 21:25:15.424019 1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0ff4eced945fe84, ext:37563009410, loc:(*time.Location)(0x72c0b80)}}
I0323 21:25:15.424025 1 daemon_controller.go:967] Nodes needing daemon pods for daemon set kube-proxy: [], creating 0
I0323 21:25:15.424430 1 daemon_controller.go:1029] Pods to delete for daemon set kube-proxy: [], deleting 0
I0323 21:25:15.427795 1 daemon_controller.go:1112] Updating daemon set status
E0323 21:25:15.431832 1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0323 21:25:15.431848 1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0323 21:25:15.431883 1 plugins.go:752] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
I0323 21:25:15.438263 1 pvc_protection_controller.go:380] "Enqueuing PVCs for Pod" pod="kube-system/kube-proxy-jfh5b" podUID=fb41f27c-562f-4080-8c51-0b54208dbe4c
I0323 21:25:15.438329 1 disruption.go:427] updatePod called on pod "kube-proxy-jfh5b"
I0323 21:25:15.438362 1 disruption.go:490] No PodDisruptionBudgets found for pod kube-proxy-jfh5b, PodDisruptionBudget controller will avoid syncing.
I0323 21:25:15.438367 1 disruption.go:430] No matching pdb for pod "kube-proxy-jfh5b"
I0323 21:25:15.438413 1 daemon_controller.go:630] Pod kube-proxy-jfh5b deleted.
I0323 21:25:15.438419 1 controller_utils.go:240] Lowered expectations &controller.ControlleeExpectations{add:0, del:-1, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0ff4eced945fe84, ext:37563009410, loc:(*time.Location)(0x72c0b80)}}
I0323 21:25:15.438466 1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=pods, namespace kube-system, name kube-proxy-jfh5b, uid fb41f27c-562f-4080-8c51-0b54208dbe4c, event type update
E0323 21:25:15.443857 1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0323 21:25:15.445180 1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0323 21:25:15.445257 1 plugins.go:752] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
I0323 21:25:15.449217 1 pvc_protection_controller.go:380] "Enqueuing PVCs for Pod" pod="kube-system/kube-proxy-jfh5b" podUID=fb41f27c-562f-4080-8c51-0b54208dbe4c
I0323 21:25:15.449718 1 deployment_controller.go:357] "Pod deleted" pod="kube-system/kube-proxy-jfh5b"
I0323 21:25:15.449758 1 disruption.go:456] deletePod called on pod "kube-proxy-jfh5b"
I0323 21:25:15.449779 1 disruption.go:490] No PodDisruptionBudgets found for pod kube-proxy-jfh5b, PodDisruptionBudget controller will avoid syncing.
I0323 21:25:15.449788 1 disruption.go:459] No matching pdb for pod "kube-proxy-jfh5b"
I0323 21:25:15.449850 1 taint_manager.go:386] "Noticed pod deletion" pod="kube-system/kube-proxy-jfh5b"
I0323 21:25:15.449879 1 daemon_controller.go:630] Pod kube-proxy-jfh5b deleted.
I0323 21:25:15.449892 1 controller_utils.go:240] Lowered expectations &controller.ControlleeExpectations{add:0, del:-2, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0ff4eced945fe84, ext:37563009410, loc:(*time.Location)(0x72c0b80)}}
I0323 21:25:15.450012 1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=pods, namespace kube-system, name kube-proxy-jfh5b, uid fb41f27c-562f-4080-8c51-0b54208dbe4c, event type delete
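Note: the "expectations" lines are the controllers' create/delete bookkeeping: after issuing N creates or deletes for a key, a controller skips further syncs of that key until watch events account for all N. Each observed deletion decrements del, so two delete-style events for kube-proxy-jfh5b drive it to del:-1 and then del:-2; negative counts still satisfy the check, which is why "Controller expectations fulfilled ... del:-2" appears below. A rough sketch of the idea (type and method names are illustrative, not the actual controller_utils API):

    package example

    import "sync"

    // controlleeExpectations tracks how many creates (add) and deletes (del)
    // a controller is still waiting to observe for one key.
    type controlleeExpectations struct {
        mu       sync.Mutex
        add, del int64
    }

    // fulfilled reports whether every expected event has been seen; values
    // can go negative when more events arrive than were expected.
    func (e *controlleeExpectations) fulfilled() bool {
        e.mu.Lock()
        defer e.mu.Unlock()
        return e.add <= 0 && e.del <= 0
    }

    func (e *controlleeExpectations) observeCreation() { e.mu.Lock(); e.add--; e.mu.Unlock() }
    func (e *controlleeExpectations) observeDeletion() { e.mu.Lock(); e.del--; e.mu.Unlock() }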
E0323 21:25:15.451429 1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0323 21:25:15.451447 1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0323 21:25:15.451539 1 plugins.go:752] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
... skipping 27 lines ...
I0323 21:25:15.473304 1 daemon_controller.go:247] Updating daemon set kube-proxy
I0323 21:25:15.473499 1 daemon_controller.go:1172] Finished syncing daemon set "kube-system/kube-proxy" (50.477913ms)
I0323 21:25:15.474046 1 endpointslice_controller.go:319] Finished syncing service "kube-system/metrics-server" endpoint slices. (15.268698ms)
I0323 21:25:15.474228 1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:-2, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0ff4eced945fe84, ext:37563009410, loc:(*time.Location)(0x72c0b80)}}
I0323 21:25:15.474281 1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:1, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0ff4ecedc44eb49, ext:37613270699, loc:(*time.Location)(0x72c0b80)}}
I0323 21:25:15.474290 1 daemon_controller.go:967] Nodes needing daemon pods for daemon set kube-proxy: [capz-7bsgqo-control-plane-78g6l], creating 1
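Note: a DaemonSet sync pass is a set difference: nodes that should run a daemon pod but don't get a create ("Nodes needing daemon pods ... creating 1" — the control-plane node has just registered, so kube-proxy needs a pod there), and nodes running one that no longer should get a delete ("Pods to delete ... deleting 0"). Schematically (illustrative helper, not the controller's code):

    package example

    // nodesToCreateAndDelete mirrors the two decisions logged by
    // daemon_controller.go: create where a daemon pod should run but none
    // does, delete where one runs but no longer should.
    func nodesToCreateAndDelete(shouldRun, running map[string]bool) (create, remove []string) {
        for node := range shouldRun {
            if !running[node] {
                create = append(create, node)
            }
        }
        for node := range running {
            if !shouldRun[node] {
                remove = append(remove, node)
            }
        }
        return create, remove
    }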
... skipping 23 lines ...
I0323 21:25:15.497942 1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0ff4eceddadf9e0, ext:37636932830, loc:(*time.Location)(0x72c0b80)}}
I0323 21:25:15.497948 1 daemon_controller.go:967] Nodes needing daemon pods for daemon set kube-proxy: [], creating 0
I0323 21:25:15.497969 1 daemon_controller.go:1029] Pods to delete for daemon set kube-proxy: [], deleting 0
I0323 21:25:15.497980 1 daemon_controller.go:1112] Updating daemon set status
I0323 21:25:15.497999 1 daemon_controller.go:1172] Finished syncing daemon set "kube-system/kube-proxy" (671.474µs)
I0323 21:25:15.498499 1 taint_manager.go:401] "Noticed pod update" pod="kube-system/kube-proxy-d8826"
E0323 21:25:15.498552 1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0323 21:25:15.498564 1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0323 21:25:15.498584 1 plugins.go:752] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0323 21:25:15.498812 1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0323 21:25:15.498818 1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0323 21:25:15.498829 1 plugins.go:752] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
I0323 21:25:15.501260 1 disruption.go:427] updatePod called on pod "kube-proxy-d8826"
I0323 21:25:15.501284 1 disruption.go:490] No PodDisruptionBudgets found for pod kube-proxy-d8826, PodDisruptionBudget controller will avoid syncing.
I0323 21:25:15.501290 1 disruption.go:430] No matching pdb for pod "kube-proxy-d8826"
I0323 21:25:15.501335 1 controller_utils.go:122] "Update ready status of pods on node" node="capz-7bsgqo-control-plane-78g6l"
I0323 21:25:15.501462 1 daemon_controller.go:570] Pod kube-proxy-d8826 updated.
I0323 21:25:15.501954 1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0ff4eceddadf9e0, ext:37636932830, loc:(*time.Location)(0x72c0b80)}}
... skipping 3 lines ...
I0323 21:25:15.502046 1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0ff4eceddec0354, ext:37640998482, loc:(*time.Location)(0x72c0b80)}}
I0323 21:25:15.502079 1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0ff4ecedded197c, ext:37641069690, loc:(*time.Location)(0x72c0b80)}}
I0323 21:25:15.502087 1 daemon_controller.go:967] Nodes needing daemon pods for daemon set kube-proxy: [], creating 0
I0323 21:25:15.502103 1 daemon_controller.go:1029] Pods to delete for daemon set kube-proxy: [], deleting 0
I0323 21:25:15.502113 1 daemon_controller.go:1112] Updating daemon set status
I0323 21:25:15.502131 1 daemon_controller.go:1172] Finished syncing daemon set "kube-system/kube-proxy" (659.674µs)
E0323 21:25:15.502378 1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0323 21:25:15.502388 1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0323 21:25:15.502406 1 plugins.go:752] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0323 21:25:15.503282 1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0323 21:25:15.503292 1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0323 21:25:15.503306 1 plugins.go:752] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
I0323 21:25:15.514437 1 disruption.go:427] updatePod called on pod "kube-proxy-d8826"
I0323 21:25:15.514496 1 disruption.go:490] No PodDisruptionBudgets found for pod kube-proxy-d8826, PodDisruptionBudget controller will avoid syncing.
I0323 21:25:15.514502 1 disruption.go:430] No matching pdb for pod "kube-proxy-d8826"
I0323 21:25:15.514530 1 daemon_controller.go:570] Pod kube-proxy-d8826 updated.
I0323 21:25:15.515004 1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0ff4ecedded197c, ext:37641069690, loc:(*time.Location)(0x72c0b80)}}
I0323 21:25:15.515051 1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0ff4ecedeb30666, ext:37654041032, loc:(*time.Location)(0x72c0b80)}}
... skipping 2 lines ...
I0323 21:25:15.515088 1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0ff4ecedeb30666, ext:37654041032, loc:(*time.Location)(0x72c0b80)}}
I0323 21:25:15.515114 1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0ff4ecedeb3fe15, ext:37654104339, loc:(*time.Location)(0x72c0b80)}}
I0323 21:25:15.515120 1 daemon_controller.go:967] Nodes needing daemon pods for daemon set kube-proxy: [], creating 0
I0323 21:25:15.515174 1 daemon_controller.go:1029] Pods to delete for daemon set kube-proxy: [], deleting 0
I0323 21:25:15.515186 1 daemon_controller.go:1112] Updating daemon set status
I0323 21:25:15.515204 1 daemon_controller.go:1172] Finished syncing daemon set "kube-system/kube-proxy" (665.074µs)
E0323 21:25:15.515433 1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0323 21:25:15.515456 1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0323 21:25:15.515473 1 plugins.go:752] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
... skipping 9 lines ...
I0323 21:25:17.270027 1 disruption.go:427] updatePod called on pod "calico-typha-cf64d56d8-pcc6r"
I0323 21:25:17.270352 1 disruption.go:433] updatePod "calico-typha-cf64d56d8-pcc6r" -> PDB "calico-typha"
I0323 21:25:17.270479 1 disruption.go:558] Finished syncing PodDisruptionBudget "calico-system/calico-typha" (27.703µs)
I0323 21:25:17.270643 1 replica_set.go:443] Pod calico-typha-cf64d56d8-pcc6r updated, objectMeta {Name:calico-typha-cf64d56d8-pcc6r GenerateName:calico-typha-cf64d56d8- Namespace:calico-system SelfLink: UID:64070c0f-16b5-4a40-becb-41ac5f69af8b ResourceVersion:712 Generation:0 CreationTimestamp:2023-03-23 21:25:08 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[app.kubernetes.io/name:calico-typha k8s-app:calico-typha pod-template-hash:cf64d56d8] Annotations:map[hash.operator.tigera.io/system:bb4746872201725da2dea19756c475aa67d9c1e9 hash.operator.tigera.io/tigera-ca-private:2636e6008dd8a0ebca600b306bf1c739165ac8d8 hash.operator.tigera.io/typha-certs:dd7a72e8a592b85f714c69f160cc1a1d171dda4a] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:calico-typha-cf64d56d8 UID:58d635fc-6096-488f-9106-30624b2aad71 Controller:0xc0028b85b0 BlockOwnerDeletion:0xc0028b85b1}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2023-03-23 21:25:08 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:annotations":{".":{},"f:hash.operator.tigera.io/system":{},"f:hash.operator.tigera.io/tigera-ca-private":{},"f:hash.operator.tigera.io/typha-certs":{}},"f:generateName":{},"f:labels":{".":{},"f:app.kubernetes.io/name":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"58d635fc-6096-488f-9106-30624b2aad71\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"calico-typha\"}":{".":{},"f:env":{".":{},"k:{\"name\":\"KUBERNETES_SERVICE_HOST\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"KUBERNETES_SERVICE_PORT\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_CAFILE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_CLIENTCN\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_CONNECTIONREBALANCINGMODE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_DATASTORETYPE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_FIPSMODEENABLED\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_HEALTHENABLED\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_HEALTHPORT\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_K8SNAMESPACE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_LOGFILEPATH\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_LOGSEVERITYSCREEN\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_LOGSEVERITYSYS\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_SERVERCERTFILE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_SERVERKEYFILE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_SHUTDOWNTIMEOUTSECS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:host":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":5473,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:hostPort":{},"f:name":{},"f:protocol":{}}},"f:readinessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:host":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/etc/pki/tls/certs/\"}":{".":{},"f:mountPa
th":{},"f:name":{},"f:readOnly":{}},"k:{\"mountPath\":\"/typha-certs\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:hostNetwork":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{},"f:volumes":{".":{},"k:{\"name\":\"tigera-ca-bundle\"}":{".":{},"f:configMap":{".":{},"f:defaultMode":{},"f:name":{}},"f:name":{}},"k:{\"name\":\"typha-certs\"}":{".":{},"f:name":{},"f:secret":{".":{},"f:defaultMode":{},"f:secretName":{}}}}}} Subresource:} {Manager:kubelet Operation:Update APIVersion:v1 Time:2023-03-23 21:25:14 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.0.0.4\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} Subresource:status}]} -> {Name:calico-typha-cf64d56d8-pcc6r GenerateName:calico-typha-cf64d56d8- Namespace:calico-system SelfLink: UID:64070c0f-16b5-4a40-becb-41ac5f69af8b ResourceVersion:789 Generation:0 CreationTimestamp:2023-03-23 21:25:08 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[app.kubernetes.io/name:calico-typha k8s-app:calico-typha pod-template-hash:cf64d56d8] Annotations:map[hash.operator.tigera.io/system:bb4746872201725da2dea19756c475aa67d9c1e9 hash.operator.tigera.io/tigera-ca-private:2636e6008dd8a0ebca600b306bf1c739165ac8d8 hash.operator.tigera.io/typha-certs:dd7a72e8a592b85f714c69f160cc1a1d171dda4a] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:calico-typha-cf64d56d8 UID:58d635fc-6096-488f-9106-30624b2aad71 Controller:0xc002036830 BlockOwnerDeletion:0xc002036831}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2023-03-23 21:25:08 +0000 UTC FieldsType:FieldsV1 
FieldsV1:{"f:metadata":{"f:annotations":{".":{},"f:hash.operator.tigera.io/system":{},"f:hash.operator.tigera.io/tigera-ca-private":{},"f:hash.operator.tigera.io/typha-certs":{}},"f:generateName":{},"f:labels":{".":{},"f:app.kubernetes.io/name":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"58d635fc-6096-488f-9106-30624b2aad71\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"calico-typha\"}":{".":{},"f:env":{".":{},"k:{\"name\":\"KUBERNETES_SERVICE_HOST\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"KUBERNETES_SERVICE_PORT\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_CAFILE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_CLIENTCN\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_CONNECTIONREBALANCINGMODE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_DATASTORETYPE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_FIPSMODEENABLED\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_HEALTHENABLED\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_HEALTHPORT\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_K8SNAMESPACE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_LOGFILEPATH\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_LOGSEVERITYSCREEN\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_LOGSEVERITYSYS\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_SERVERCERTFILE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_SERVERKEYFILE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_SHUTDOWNTIMEOUTSECS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:host":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":5473,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:hostPort":{},"f:name":{},"f:protocol":{}}},"f:readinessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:host":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/etc/pki/tls/certs/\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}},"k:{\"mountPath\":\"/typha-certs\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:hostNetwork":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{},"f:volumes":{".":{},"k:{\"name\":\"tigera-ca-bundle\"}":{".":{},"f:configMap":{".":{},"f:defaultMode":{},"f:name":{}},"f:name":{}},"k:{\"name\":\"typha-certs\"}":{".":{},"f:name":{},"f:secret":{".":{},"f:defaultMode":{},"f:secretName":{}}}}}} Subresource:} {Manager:kubelet Operation:Update APIVersion:v1 Time:2023-03-23 21:25:17 +0000 UTC FieldsType:FieldsV1 
FieldsV1:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.0.0.4\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} Subresource:status}]}.
I0323 21:25:17.270937 1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"calico-system/calico-typha-cf64d56d8", timestamp:time.Time{wall:0xc0ff4ecd09795991, ext:30297939599, loc:(*time.Location)(0x72c0b80)}}
I0323 21:25:17.271122 1 replica_set.go:653] Finished syncing ReplicaSet "calico-system/calico-typha-cf64d56d8" (196.922µs)
E0323 21:25:17.271359 1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0323 21:25:17.271495 1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0323 21:25:17.271613 1 plugins.go:752] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0323 21:25:17.272002 1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0323 21:25:17.272117 1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0323 21:25:17.272232 1 plugins.go:752] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0323 21:25:17.272649 1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0323 21:25:17.272804 1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0323 21:25:17.272903 1 plugins.go:752] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
I0323 21:25:17.329360 1 disruption.go:427] updatePod called on pod "kube-proxy-d8826"
I0323 21:25:17.329390 1 disruption.go:490] No PodDisruptionBudgets found for pod kube-proxy-d8826, PodDisruptionBudget controller will avoid syncing.
I0323 21:25:17.329395 1 disruption.go:430] No matching pdb for pod "kube-proxy-d8826"
I0323 21:25:17.329419 1 daemon_controller.go:570] Pod kube-proxy-d8826 updated.
E0323 21:25:17.329477 1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0323 21:25:17.329488 1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0323 21:25:17.329512 1 plugins.go:752] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0323 21:25:17.329691 1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0323 21:25:17.329697 1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0323 21:25:17.329707 1 plugins.go:752] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
I0323 21:25:17.329933 1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0ff4ecedeb3fe15, ext:37654104339, loc:(*time.Location)(0x72c0b80)}}
I0323 21:25:17.329982 1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0ff4ecf53ab17dd, ext:39468971839, loc:(*time.Location)(0x72c0b80)}}
I0323 21:25:17.329993 1 daemon_controller.go:967] Nodes needing daemon pods for daemon set kube-proxy: [], creating 0
I0323 21:25:17.330016 1 daemon_controller.go:1029] Pods to delete for daemon set kube-proxy: [], deleting 0
I0323 21:25:17.330021 1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0ff4ecf53ab17dd, ext:39468971839, loc:(*time.Location)(0x72c0b80)}}
I0323 21:25:17.330048 1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0ff4ecf53ac1c0c, ext:39469038346, loc:(*time.Location)(0x72c0b80)}}
I0323 21:25:17.330055 1 daemon_controller.go:967] Nodes needing daemon pods for daemon set kube-proxy: [], creating 0
I0323 21:25:17.330068 1 daemon_controller.go:1029] Pods to delete for daemon set kube-proxy: [], deleting 0
I0323 21:25:17.330078 1 daemon_controller.go:1112] Updating daemon set status
E0323 21:25:17.330464 1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0323 21:25:17.330475 1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0323 21:25:17.330492 1 plugins.go:752] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0323 21:25:17.330761 1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0323 21:25:17.330769 1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0323 21:25:17.330780 1 plugins.go:752] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
I0323 21:25:17.371985 1 daemon_controller.go:247] Updating daemon set kube-proxy
I0323 21:25:17.372146 1 daemon_controller.go:1172] Finished syncing daemon set "kube-system/kube-proxy" (42.711152ms)
I0323 21:25:17.372617 1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0ff4ecf53ac1c0c, ext:39469038346, loc:(*time.Location)(0x72c0b80)}}
I0323 21:25:17.372663 1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0ff4ecf56365b7e, ext:39511653088, loc:(*time.Location)(0x72c0b80)}}
I0323 21:25:17.372672 1 daemon_controller.go:967] Nodes needing daemon pods for daemon set kube-proxy: [], creating 0
I0323 21:25:17.372698 1 daemon_controller.go:1029] Pods to delete for daemon set kube-proxy: [], deleting 0
... skipping 7 lines ...
I0323 21:25:17.378960 1 node_lifecycle_controller.go:868] Node capz-7bsgqo-control-plane-78g6l is NotReady as of 2023-03-23 21:25:17.378949203 +0000 UTC m=+39.517941173. Adding it to the Taint queue.
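Note: the node lifecycle controller reports the control-plane node NotReady because its kubelet has registered while the CNI is still coming up (the calico-typha pod above is only just turning Ready), so the node goes onto the taint queue and keeps the well-known node.kubernetes.io/not-ready taint until its Ready condition flips. This is expected during bootstrap. For reference, the taint looks like this (assembled from the real core/v1 types; the controller applies NoSchedule and NoExecute variants of the same key in different passes):

    package example

    import v1 "k8s.io/api/core/v1"

    // notReadyTaint keeps new pods off a NotReady node; the NoExecute
    // variant of the same key additionally evicts running pods that lack
    // a matching toleration.
    var notReadyTaint = v1.Taint{
        Key:    "node.kubernetes.io/not-ready",
        Effect: v1.TaintEffectNoSchedule,
    }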
I0323 21:25:17.379018 1 replica_set.go:443] Pod calico-typha-cf64d56d8-pcc6r updated, objectMeta {Name:calico-typha-cf64d56d8-pcc6r GenerateName:calico-typha-cf64d56d8- Namespace:calico-system SelfLink: UID:64070c0f-16b5-4a40-becb-41ac5f69af8b ResourceVersion:789 Generation:0 CreationTimestamp:2023-03-23 21:25:08 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[app.kubernetes.io/name:calico-typha k8s-app:calico-typha pod-template-hash:cf64d56d8] Annotations:map[hash.operator.tigera.io/system:bb4746872201725da2dea19756c475aa67d9c1e9 hash.operator.tigera.io/tigera-ca-private:2636e6008dd8a0ebca600b306bf1c739165ac8d8 hash.operator.tigera.io/typha-certs:dd7a72e8a592b85f714c69f160cc1a1d171dda4a] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:calico-typha-cf64d56d8 UID:58d635fc-6096-488f-9106-30624b2aad71 Controller:0xc002036830 BlockOwnerDeletion:0xc002036831}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2023-03-23 21:25:08 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:annotations":{".":{},"f:hash.operator.tigera.io/system":{},"f:hash.operator.tigera.io/tigera-ca-private":{},"f:hash.operator.tigera.io/typha-certs":{}},"f:generateName":{},"f:labels":{".":{},"f:app.kubernetes.io/name":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"58d635fc-6096-488f-9106-30624b2aad71\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"calico-typha\"}":{".":{},"f:env":{".":{},"k:{\"name\":\"KUBERNETES_SERVICE_HOST\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"KUBERNETES_SERVICE_PORT\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_CAFILE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_CLIENTCN\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_CONNECTIONREBALANCINGMODE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_DATASTORETYPE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_FIPSMODEENABLED\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_HEALTHENABLED\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_HEALTHPORT\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_K8SNAMESPACE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_LOGFILEPATH\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_LOGSEVERITYSCREEN\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_LOGSEVERITYSYS\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_SERVERCERTFILE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_SERVERKEYFILE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_SHUTDOWNTIMEOUTSECS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:host":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":5473,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:hostPort":{},"f:name":{},"f:protocol":{}}},"f:readinessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:host":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/etc/pki/tls/certs/\"}":{".":{},"f:mountPa
th":{},"f:name":{},"f:readOnly":{}},"k:{\"mountPath\":\"/typha-certs\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:hostNetwork":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{},"f:volumes":{".":{},"k:{\"name\":\"tigera-ca-bundle\"}":{".":{},"f:configMap":{".":{},"f:defaultMode":{},"f:name":{}},"f:name":{}},"k:{\"name\":\"typha-certs\"}":{".":{},"f:name":{},"f:secret":{".":{},"f:defaultMode":{},"f:secretName":{}}}}}} Subresource:} {Manager:kubelet Operation:Update APIVersion:v1 Time:2023-03-23 21:25:17 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.0.0.4\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} Subresource:status}]} -> {Name:calico-typha-cf64d56d8-pcc6r GenerateName:calico-typha-cf64d56d8- Namespace:calico-system SelfLink: UID:64070c0f-16b5-4a40-becb-41ac5f69af8b ResourceVersion:792 Generation:0 CreationTimestamp:2023-03-23 21:25:08 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[app.kubernetes.io/name:calico-typha k8s-app:calico-typha pod-template-hash:cf64d56d8] Annotations:map[hash.operator.tigera.io/system:bb4746872201725da2dea19756c475aa67d9c1e9 hash.operator.tigera.io/tigera-ca-private:2636e6008dd8a0ebca600b306bf1c739165ac8d8 hash.operator.tigera.io/typha-certs:dd7a72e8a592b85f714c69f160cc1a1d171dda4a] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:calico-typha-cf64d56d8 UID:58d635fc-6096-488f-9106-30624b2aad71 Controller:0xc0016bfa50 BlockOwnerDeletion:0xc0016bfa51}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2023-03-23 21:25:08 +0000 UTC FieldsType:FieldsV1 
FieldsV1:{"f:metadata":{"f:annotations":{".":{},"f:hash.operator.tigera.io/system":{},"f:hash.operator.tigera.io/tigera-ca-private":{},"f:hash.operator.tigera.io/typha-certs":{}},"f:generateName":{},"f:labels":{".":{},"f:app.kubernetes.io/name":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"58d635fc-6096-488f-9106-30624b2aad71\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"calico-typha\"}":{".":{},"f:env":{".":{},"k:{\"name\":\"KUBERNETES_SERVICE_HOST\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"KUBERNETES_SERVICE_PORT\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_CAFILE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_CLIENTCN\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_CONNECTIONREBALANCINGMODE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_DATASTORETYPE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_FIPSMODEENABLED\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_HEALTHENABLED\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_HEALTHPORT\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_K8SNAMESPACE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_LOGFILEPATH\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_LOGSEVERITYSCREEN\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_LOGSEVERITYSYS\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_SERVERCERTFILE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_SERVERKEYFILE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"TYPHA_SHUTDOWNTIMEOUTSECS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:host":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":5473,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:hostPort":{},"f:name":{},"f:protocol":{}}},"f:readinessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:host":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/etc/pki/tls/certs/\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}},"k:{\"mountPath\":\"/typha-certs\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:hostNetwork":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{},"f:volumes":{".":{},"k:{\"name\":\"tigera-ca-bundle\"}":{".":{},"f:configMap":{".":{},"f:defaultMode":{},"f:name":{}},"f:name":{}},"k:{\"name\":\"typha-certs\"}":{".":{},"f:name":{},"f:secret":{".":{},"f:defaultMode":{},"f:secretName":{}}}}}} Subresource:} {Manager:kubelet Operation:Update APIVersion:v1 Time:2023-03-23 21:25:17 +0000 UTC FieldsType:FieldsV1 
FieldsV1:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.0.0.4\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} Subresource:status}]}.
I0323 21:25:17.379116 1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"calico-system/calico-typha-cf64d56d8", timestamp:time.Time{wall:0xc0ff4ecd09795991, ext:30297939599, loc:(*time.Location)(0x72c0b80)}}
I0323 21:25:17.379198 1 replica_set_utils.go:59] Updating status for : calico-system/calico-typha-cf64d56d8, replicas 1->1 (need 1), fullyLabeledReplicas 1->1, readyReplicas 0->1, availableReplicas 0->1, sequence No: 1->1
I0323 21:25:17.379604 1 disruption.go:427] updatePod called on pod "calico-typha-cf64d56d8-pcc6r"
I0323 21:25:17.379620 1 disruption.go:433] updatePod "calico-typha-cf64d56d8-pcc6r" -> PDB "calico-typha"
E0323 21:25:17.379939 1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0323 21:25:17.379948 1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0323 21:25:17.379965 1 plugins.go:752] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0323 21:25:17.380151 1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0323 21:25:17.380156 1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0323 21:25:17.380166 1 plugins.go:752] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0323 21:25:17.380351 1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0323 21:25:17.380356 1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0323 21:25:17.380365 1 plugins.go:752] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
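The three repeated error/warning pairs above come from the controller-manager's periodic FlexVolume plugin probe: it execs each driver binary under /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ with `init` and parses the JSON reply, so the missing `nodeagent~uds/uds` binary produces empty output and the `unexpected end of JSON input` failure on every probe cycle. A minimal sketch of that call pattern, assuming a hypothetical `DriverStatus` type (not the actual kubelet struct):

```go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// DriverStatus is a hypothetical stand-in for a FlexVolume driver's JSON reply.
type DriverStatus struct {
	Status  string `json:"status"`
	Message string `json:"message,omitempty"`
}

// callDriver execs a FlexVolume driver binary with one command (e.g. "init")
// and unmarshals its JSON output. A missing binary fails in exec (the
// "fork/exec ... no such file or directory" warning), and an empty reply fails
// in json.Unmarshal with "unexpected end of JSON input", as in the log above.
func callDriver(executable string, args ...string) (*DriverStatus, error) {
	out, err := exec.Command(executable, args...).CombinedOutput()
	if err != nil {
		return nil, fmt.Errorf("driver call failed: %v, output: %q", err, string(out))
	}
	var st DriverStatus
	if err := json.Unmarshal(out, &st); err != nil {
		return nil, fmt.Errorf("failed to unmarshal output for command: %s, output: %q, error: %v",
			args[0], string(out), err)
	}
	return &st, nil
}

func main() {
	_, err := callDriver("/usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds", "init")
	if err != nil {
		fmt.Println(err)
	}
}
```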
I0323 21:25:17.418147 1 endpointslice_controller.go:319] Finished syncing service "calico-system/calico-typha" endpoint slices. (38.740319ms)
I0323 21:25:17.419271 1 gc_controller.go:161] GC'ing orphaned
I0323 21:25:17.419284 1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0323 21:25:17.424204 1 endpointslicemirroring_controller.go:274] syncEndpoints("calico-system/calico-typha")
I0323 21:25:17.424222 1 endpointslicemirroring_controller.go:309] calico-system/calico-typha Service now has selector, cleaning up any mirrored EndpointSlices
I0323 21:25:17.424240 1 endpointslicemirroring_controller.go:271] Finished syncing EndpointSlices for "calico-system/calico-typha" Endpoints. (51.705µs)
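The EndpointSlice and mirroring controllers above key their work off the Service: slices belong to a Service via the kubernetes.io/service-name label, and a Service that has a selector (as calico-typha does) gets any mirrored slices cleaned up. A short client-go sketch of listing a Service's slices by that label; the kubeconfig path is an assumption, and the namespace/service names are taken from this run purely for illustration:

```go
package main

import (
	"context"
	"fmt"
	"log"

	discoveryv1 "k8s.io/api/discovery/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "./kubeconfig") // path is an assumption
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// EndpointSlices are tied to their Service by the kubernetes.io/service-name
	// label, which is the set the sync in the log iterates over.
	slices, err := cs.DiscoveryV1().EndpointSlices("calico-system").List(context.TODO(),
		metav1.ListOptions{LabelSelector: discoveryv1.LabelServiceName + "=calico-typha"})
	if err != nil {
		log.Fatal(err)
	}
	for _, s := range slices.Items {
		fmt.Printf("%s: %d endpoints\n", s.Name, len(s.Endpoints))
	}
}
```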
... skipping 44 lines ...
I0323 21:25:25.484280 1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="94.607µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:36602" resp=200
I0323 21:25:27.332331 1 reflector.go:382] k8s.io/client-go/informers/factory.go:134: forcing resync
I0323 21:25:27.380281 1 node_lifecycle_controller.go:868] Node capz-7bsgqo-control-plane-78g6l is NotReady as of 2023-03-23 21:25:27.380256346 +0000 UTC m=+49.519248316. Adding it to the Taint queue.
E0323 21:25:27.559828 1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0323 21:25:27.560039 1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0323 21:25:27.665198 1 pv_controller_base.go:556] resyncing PV controller
W0323 21:25:28.325052 1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
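The `unable to retrieve the complete list of server APIs` and `failed to discover some groups` lines are the same aggregated-API discovery failure seen from two controllers: metrics.k8s.io/v1beta1 is served by the metrics-server pod, which at this point is still starting, so discovery returns partial results plus a per-group error and the resource quota and garbage collector controllers skip or warn rather than fail. A sketch of how a client observes the same condition through client-go (kubeconfig path is an assumption):

```go
package main

import (
	"fmt"
	"log"

	"k8s.io/client-go/discovery"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "./kubeconfig") // path is an assumption
	if err != nil {
		log.Fatal(err)
	}
	dc, err := discovery.NewDiscoveryClientForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// ServerPreferredResources returns the groups it could discover plus an
	// aggregate ErrGroupDiscoveryFailed when an aggregated API (here
	// metrics.k8s.io) cannot answer yet.
	resources, err := dc.ServerPreferredResources()
	if err != nil && discovery.IsGroupDiscoveryFailedError(err) {
		fmt.Println("partial discovery:", err)
	} else if err != nil {
		log.Fatal(err)
	}
	fmt.Println("discovered", len(resources), "group/versions")
}
```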
I0323 21:25:28.632001 1 daemon_controller.go:570] Pod calico-node-55pgx updated.
I0323 21:25:28.632455 1 disruption.go:427] updatePod called on pod "calico-node-55pgx"
I0323 21:25:28.632575 1 disruption.go:490] No PodDisruptionBudgets found for pod calico-node-55pgx, PodDisruptionBudget controller will avoid syncing.
I0323 21:25:28.632653 1 disruption.go:430] No matching pdb for pod "calico-node-55pgx"
I0323 21:25:28.633411 1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"calico-system/calico-node", timestamp:time.Time{wall:0xc0ff4ed0931bb572, ext:44459574896, loc:(*time.Location)(0x72c0b80)}}
I0323 21:25:28.633569 1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:0, del:0, key:"calico-system/calico-node", timestamp:time.Time{wall:0xc0ff4ed225c374fd, ext:50772558431, loc:(*time.Location)(0x72c0b80)}}
... skipping 152 lines ...
I0323 21:25:42.066571 1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"calico-system/calico-kube-controllers-fb49b9cf7", timestamp:time.Time{wall:0xc0ff4ecd2c6a529c, ext:30884157338, loc:(*time.Location)(0x72c0b80)}}
I0323 21:25:42.066642 1 replica_set.go:653] Finished syncing ReplicaSet "calico-system/calico-kube-controllers-fb49b9cf7" (78.706µs)
I0323 21:25:42.066670 1 disruption.go:427] updatePod called on pod "calico-kube-controllers-fb49b9cf7-k69xp"
I0323 21:25:42.066683 1 disruption.go:490] No PodDisruptionBudgets found for pod calico-kube-controllers-fb49b9cf7-k69xp, PodDisruptionBudget controller will avoid syncing.
I0323 21:25:42.066688 1 disruption.go:430] No matching pdb for pod "calico-kube-controllers-fb49b9cf7-k69xp"
I0323 21:25:42.332575 1 reflector.go:382] k8s.io/client-go/informers/factory.go:134: forcing resync
I0323 21:25:42.382374 1 node_lifecycle_controller.go:1038] ReadyCondition for Node capz-7bsgqo-control-plane-78g6l transitioned from &NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:2023-03-23 21:25:29 +0000 UTC,LastTransitionTime:2023-03-23 21:24:30 +0000 UTC,Reason:KubeletNotReady,Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized,} to &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-03-23 21:25:41 +0000 UTC,LastTransitionTime:2023-03-23 21:25:41 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,}
I0323 21:25:42.382458 1 node_lifecycle_controller.go:1046] Node capz-7bsgqo-control-plane-78g6l ReadyCondition updated. Updating timestamp.
I0323 21:25:42.382504 1 node_lifecycle_controller.go:892] Node capz-7bsgqo-control-plane-78g6l is healthy again, removing all taints
I0323 21:25:42.382519 1 node_lifecycle_controller.go:1190] Controller detected that some Nodes are Ready. Exiting master disruption mode.
I0323 21:25:42.668378 1 pv_controller_base.go:556] resyncing PV controller
I0323 21:25:43.666932 1 replica_set.go:443] Pod metrics-server-85c7d488df-dzhdc updated, objectMeta {Name:metrics-server-85c7d488df-dzhdc GenerateName:metrics-server-85c7d488df- Namespace:kube-system SelfLink: UID:59122582-782f-4ff7-98ab-96e4b22de3bf ResourceVersion:857 Generation:0 CreationTimestamp:2023-03-23 21:25:14 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[k8s-app:metrics-server pod-template-hash:85c7d488df] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:metrics-server-85c7d488df UID:45a26edc-a4b6-40f4-85e0-ef9982aa6f92 Controller:0xc001f35c1e BlockOwnerDeletion:0xc001f35c1f}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2023-03-23 21:25:14 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45a26edc-a4b6-40f4-85e0-ef9982aa6f92\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"metrics-server\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":4443,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}}},"f:readinessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{".":{},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:securityContext":{".":{},"f:allowPrivilegeEscalation":{},"f:readOnlyRootFilesystem":{},"f:runAsNonRoot":{},"f:runAsUser":{}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/tmp\"}":{".":{},"f:mountPath":{},"f:name":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{},"f:volumes":{".":{},"k:{\"name\":\"tmp-dir\"}":{".":{},"f:emptyDir":{},"f:name":{}}}}} Subresource:} {Manager:kube-scheduler Operation:Update APIVersion:v1 Time:2023-03-23 21:25:14 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{".":{},"k:{\"type\":\"PodScheduled\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} Subresource:status}]} -> {Name:metrics-server-85c7d488df-dzhdc GenerateName:metrics-server-85c7d488df- Namespace:kube-system SelfLink: UID:59122582-782f-4ff7-98ab-96e4b22de3bf ResourceVersion:881 Generation:0 CreationTimestamp:2023-03-23 21:25:14 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[k8s-app:metrics-server pod-template-hash:85c7d488df] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:metrics-server-85c7d488df UID:45a26edc-a4b6-40f4-85e0-ef9982aa6f92 Controller:0xc0027c244e BlockOwnerDeletion:0xc0027c244f}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2023-03-23 21:25:14 +0000 UTC FieldsType:FieldsV1 
FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45a26edc-a4b6-40f4-85e0-ef9982aa6f92\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"metrics-server\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":4443,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}}},"f:readinessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{".":{},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:securityContext":{".":{},"f:allowPrivilegeEscalation":{},"f:readOnlyRootFilesystem":{},"f:runAsNonRoot":{},"f:runAsUser":{}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/tmp\"}":{".":{},"f:mountPath":{},"f:name":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{},"f:volumes":{".":{},"k:{\"name\":\"tmp-dir\"}":{".":{},"f:emptyDir":{},"f:name":{}}}}} Subresource:} {Manager:kube-scheduler Operation:Update APIVersion:v1 Time:2023-03-23 21:25:14 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{".":{},"k:{\"type\":\"PodScheduled\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} Subresource:status} {Manager:kubelet Operation:Update APIVersion:v1 Time:2023-03-23 21:25:43 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} Subresource:status}]}.
I0323 21:25:43.667046 1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/metrics-server-85c7d488df", timestamp:time.Time{wall:0xc0ff4ece9d2718fa, ext:36628093532, loc:(*time.Location)(0x72c0b80)}}
... skipping 95 lines ...
I0323 21:25:58.283631 1 replica_set.go:443] Pod coredns-bd6b6df9f-rjvrk updated, objectMeta {Name:coredns-bd6b6df9f-rjvrk GenerateName:coredns-bd6b6df9f- Namespace:kube-system SelfLink: UID:40880cbf-eadf-459a-bf3b-106285c91d58 ResourceVersion:868 Generation:0 CreationTimestamp:2023-03-23 21:24:57 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[k8s-app:kube-dns pod-template-hash:bd6b6df9f] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:coredns-bd6b6df9f UID:9d9a6e7d-d37f-44d6-a4d1-62124787b609 Controller:0xc0023c8200 BlockOwnerDeletion:0xc0023c8201}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2023-03-23 21:24:57 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9d9a6e7d-d37f-44d6-a4d1-62124787b609\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":53,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}},"k:{\"containerPort\":53,\"protocol\":\"UDP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}},"k:{\"containerPort\":9153,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}}},"f:readinessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:securityContext":{".":{},"f:allowPrivilegeEscalation":{},"f:capabilities":{".":{},"f:add":{},"f:drop":{}},"f:readOnlyRootFilesystem":{}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/etc/coredns\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{},"f:volumes":{".":{},"k:{\"name\":\"config-volume\"}":{".":{},"f:configMap":{".":{},"f:defaultMode":{},"f:items":{},"f:name":{}},"f:name":{}}}}} Subresource:} {Manager:kube-scheduler Operation:Update APIVersion:v1 Time:2023-03-23 21:24:58 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{".":{},"k:{\"type\":\"PodScheduled\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} Subresource:status} {Manager:kubelet Operation:Update APIVersion:v1 Time:2023-03-23 21:25:41 +0000 UTC FieldsType:FieldsV1 
FieldsV1:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} Subresource:status}]} -> {Name:coredns-bd6b6df9f-rjvrk GenerateName:coredns-bd6b6df9f- Namespace:kube-system SelfLink: UID:40880cbf-eadf-459a-bf3b-106285c91d58 ResourceVersion:950 Generation:0 CreationTimestamp:2023-03-23 21:24:57 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[k8s-app:kube-dns pod-template-hash:bd6b6df9f] Annotations:map[cni.projectcalico.org/podIP: cni.projectcalico.org/podIPs:] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:coredns-bd6b6df9f UID:9d9a6e7d-d37f-44d6-a4d1-62124787b609 Controller:0xc002c9f3e0 BlockOwnerDeletion:0xc002c9f3e1}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2023-03-23 21:24:57 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9d9a6e7d-d37f-44d6-a4d1-62124787b609\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":53,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}},"k:{\"containerPort\":53,\"protocol\":\"UDP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}},"k:{\"containerPort\":9153,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}}},"f:readinessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:securityContext":{".":{},"f:allowPrivilegeEscalation":{},"f:capabilities":{".":{},"f:add":{},"f:drop":{}},"f:readOnlyRootFilesystem":{}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/etc/coredns\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{},"f:volumes":{".":{},"k:{\"name\":\"config-volume\"}":{".":{},"f:configMap":{".":{},"f:defaultMode":{},"f:items":{},"f:name":{}},"f:name":{}}}}} Subresource:} {Manager:kube-scheduler Operation:Update APIVersion:v1 Time:2023-03-23 21:24:58 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{".":{},"k:{\"type\":\"PodScheduled\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} Subresource:status} {Manager:kubelet Operation:Update APIVersion:v1 Time:2023-03-23 21:25:41 +0000 UTC 
FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} Subresource:status} {Manager:calico Operation:Update APIVersion:v1 Time:2023-03-23 21:25:58 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} Subresource:status}]}.
I0323 21:25:58.283788 1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/coredns-bd6b6df9f", timestamp:time.Time{wall:0xc0ff4eca62587857, ext:19715215189, loc:(*time.Location)(0x72c0b80)}}
I0323 21:25:58.283907 1 replica_set.go:653] Finished syncing ReplicaSet "kube-system/coredns-bd6b6df9f" (112.82µs)
I0323 21:25:58.283937 1 disruption.go:427] updatePod called on pod "coredns-bd6b6df9f-rjvrk"
I0323 21:25:58.283947 1 disruption.go:490] No PodDisruptionBudgets found for pod coredns-bd6b6df9f-rjvrk, PodDisruptionBudget controller will avoid syncing.
I0323 21:25:58.283953 1 disruption.go:430] No matching pdb for pod "coredns-bd6b6df9f-rjvrk"
W0323 21:25:58.353919 1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
I0323 21:25:58.625996 1 replica_set.go:443] Pod metrics-server-85c7d488df-dzhdc updated, objectMeta {Name:metrics-server-85c7d488df-dzhdc GenerateName:metrics-server-85c7d488df- Namespace:kube-system SelfLink: UID:59122582-782f-4ff7-98ab-96e4b22de3bf ResourceVersion:940 Generation:0 CreationTimestamp:2023-03-23 21:25:14 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[k8s-app:metrics-server pod-template-hash:85c7d488df] Annotations:map[cni.projectcalico.org/podIP: cni.projectcalico.org/podIPs:] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:metrics-server-85c7d488df UID:45a26edc-a4b6-40f4-85e0-ef9982aa6f92 Controller:0xc002a560be BlockOwnerDeletion:0xc002a560bf}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2023-03-23 21:25:14 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45a26edc-a4b6-40f4-85e0-ef9982aa6f92\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"metrics-server\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":4443,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}}},"f:readinessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{".":{},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:securityContext":{".":{},"f:allowPrivilegeEscalation":{},"f:readOnlyRootFilesystem":{},"f:runAsNonRoot":{},"f:runAsUser":{}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/tmp\"}":{".":{},"f:mountPath":{},"f:name":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{},"f:volumes":{".":{},"k:{\"name\":\"tmp-dir\"}":{".":{},"f:emptyDir":{},"f:name":{}}}}} Subresource:} {Manager:kube-scheduler Operation:Update APIVersion:v1 Time:2023-03-23 21:25:14 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{".":{},"k:{\"type\":\"PodScheduled\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} Subresource:status} {Manager:kubelet Operation:Update APIVersion:v1 Time:2023-03-23 21:25:43 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} Subresource:status} {Manager:calico Operation:Update APIVersion:v1 Time:2023-03-23 21:25:55 +0000 UTC FieldsType:FieldsV1 
FieldsV1:{"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} Subresource:status}]} -> {Name:metrics-server-85c7d488df-dzhdc GenerateName:metrics-server-85c7d488df- Namespace:kube-system SelfLink: UID:59122582-782f-4ff7-98ab-96e4b22de3bf ResourceVersion:953 Generation:0 CreationTimestamp:2023-03-23 21:25:14 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[k8s-app:metrics-server pod-template-hash:85c7d488df] Annotations:map[cni.projectcalico.org/containerID:0e85b963811692e5f488ada390e7c6a7148354911b34089c886de273fdd7ef1c cni.projectcalico.org/podIP:192.168.169.193/32 cni.projectcalico.org/podIPs:192.168.169.193/32] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:metrics-server-85c7d488df UID:45a26edc-a4b6-40f4-85e0-ef9982aa6f92 Controller:0xc0020951e7 BlockOwnerDeletion:0xc0020951e8}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2023-03-23 21:25:14 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45a26edc-a4b6-40f4-85e0-ef9982aa6f92\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"metrics-server\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":4443,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}}},"f:readinessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{".":{},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:securityContext":{".":{},"f:allowPrivilegeEscalation":{},"f:readOnlyRootFilesystem":{},"f:runAsNonRoot":{},"f:runAsUser":{}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/tmp\"}":{".":{},"f:mountPath":{},"f:name":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{},"f:volumes":{".":{},"k:{\"name\":\"tmp-dir\"}":{".":{},"f:emptyDir":{},"f:name":{}}}}} Subresource:} {Manager:kube-scheduler Operation:Update APIVersion:v1 Time:2023-03-23 21:25:14 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{".":{},"k:{\"type\":\"PodScheduled\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} Subresource:status} {Manager:kubelet Operation:Update APIVersion:v1 Time:2023-03-23 21:25:43 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} Subresource:status} {Manager:calico 
Operation:Update APIVersion:v1 Time:2023-03-23 21:25:58 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} Subresource:status}]}.
I0323 21:25:58.626118 1 disruption.go:427] updatePod called on pod "metrics-server-85c7d488df-dzhdc"
I0323 21:25:58.626144 1 disruption.go:490] No PodDisruptionBudgets found for pod metrics-server-85c7d488df-dzhdc, PodDisruptionBudget controller will avoid syncing.
I0323 21:25:58.626149 1 disruption.go:430] No matching pdb for pod "metrics-server-85c7d488df-dzhdc"
I0323 21:25:58.626131 1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/metrics-server-85c7d488df", timestamp:time.Time{wall:0xc0ff4ece9d2718fa, ext:36628093532, loc:(*time.Location)(0x72c0b80)}}
I0323 21:25:58.626216 1 replica_set.go:653] Finished syncing ReplicaSet "kube-system/metrics-server-85c7d488df" (90.216µs)
... skipping 186 lines ...
I0323 21:26:07.301704 1 replica_set.go:443] Pod calico-apiserver-57bb57f4c5-jbmrf updated, objectMeta {Name:calico-apiserver-57bb57f4c5-jbmrf GenerateName:calico-apiserver-57bb57f4c5- Namespace:calico-apiserver SelfLink: UID:9107a98f-956e-4078-b277-8bdf463f5ffa ResourceVersion:1069 Generation:0 CreationTimestamp:2023-03-23 21:26:07 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:57bb57f4c5] Annotations:map[hash.operator.tigera.io/calico-apiserver-certs:f0439f1e0dd51f1583d0fefb25009fb1ba052b21] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:calico-apiserver-57bb57f4c5 UID:b3fab7e8-d1fb-4d86-83ed-c13349a96703 Controller:0xc0015b9f2e BlockOwnerDeletion:0xc0015b9f2f}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2023-03-23 21:26:07 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:annotations":{".":{},"f:hash.operator.tigera.io/calico-apiserver-certs":{}},"f:generateName":{},"f:labels":{".":{},"f:apiserver":{},"f:app.kubernetes.io/name":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b3fab7e8-d1fb-4d86-83ed-c13349a96703\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"calico-apiserver\"}":{".":{},"f:args":{},"f:env":{".":{},"k:{\"name\":\"DATASTORE_TYPE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"KUBERNETES_SERVICE_HOST\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"KUBERNETES_SERVICE_PORT\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"MULTI_INTERFACE_MODE\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:readinessProbe":{".":{},"f:exec":{".":{},"f:command":{}},"f:failureThreshold":{},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{},"f:securityContext":{".":{},"f:privileged":{},"f:runAsUser":{}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/calico-apiserver-certs\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{},"f:volumes":{".":{},"k:{\"name\":\"calico-apiserver-certs\"}":{".":{},"f:name":{},"f:secret":{".":{},"f:defaultMode":{},"f:secretName":{}}}}}} Subresource:}]} -> {Name:calico-apiserver-57bb57f4c5-jbmrf GenerateName:calico-apiserver-57bb57f4c5- Namespace:calico-apiserver SelfLink: UID:9107a98f-956e-4078-b277-8bdf463f5ffa ResourceVersion:1074 Generation:0 CreationTimestamp:2023-03-23 21:26:07 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:57bb57f4c5] Annotations:map[hash.operator.tigera.io/calico-apiserver-certs:f0439f1e0dd51f1583d0fefb25009fb1ba052b21] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:calico-apiserver-57bb57f4c5 UID:b3fab7e8-d1fb-4d86-83ed-c13349a96703 
Controller:0xc00252ca67 BlockOwnerDeletion:0xc00252ca68}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2023-03-23 21:26:07 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:annotations":{".":{},"f:hash.operator.tigera.io/calico-apiserver-certs":{}},"f:generateName":{},"f:labels":{".":{},"f:apiserver":{},"f:app.kubernetes.io/name":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b3fab7e8-d1fb-4d86-83ed-c13349a96703\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"calico-apiserver\"}":{".":{},"f:args":{},"f:env":{".":{},"k:{\"name\":\"DATASTORE_TYPE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"KUBERNETES_SERVICE_HOST\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"KUBERNETES_SERVICE_PORT\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"MULTI_INTERFACE_MODE\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:readinessProbe":{".":{},"f:exec":{".":{},"f:command":{}},"f:failureThreshold":{},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{},"f:securityContext":{".":{},"f:privileged":{},"f:runAsUser":{}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/calico-apiserver-certs\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{},"f:volumes":{".":{},"k:{\"name\":\"calico-apiserver-certs\"}":{".":{},"f:name":{},"f:secret":{".":{},"f:defaultMode":{},"f:secretName":{}}}}}} Subresource:}]}.
I0323 21:26:07.301926 1 disruption.go:427] updatePod called on pod "calico-apiserver-57bb57f4c5-jbmrf"
I0323 21:26:07.301946 1 disruption.go:490] No PodDisruptionBudgets found for pod calico-apiserver-57bb57f4c5-jbmrf, PodDisruptionBudget controller will avoid syncing.
I0323 21:26:07.301952 1 disruption.go:430] No matching pdb for pod "calico-apiserver-57bb57f4c5-jbmrf"
I0323 21:26:07.302034 1 taint_manager.go:401] "Noticed pod update" pod="calico-apiserver/calico-apiserver-57bb57f4c5-jbmrf"
I0323 21:26:07.305772 1 deployment_controller.go:578] "Finished syncing deployment" deployment="calico-apiserver/calico-apiserver" duration="50.214422ms"
I0323 21:26:07.305932 1 deployment_controller.go:490] "Error syncing deployment" deployment="calico-apiserver/calico-apiserver" err="Operation cannot be fulfilled on deployments.apps \"calico-apiserver\": the object has been modified; please apply your changes to the latest version and try again"
I0323 21:26:07.306048 1 deployment_controller.go:576] "Started syncing deployment" deployment="calico-apiserver/calico-apiserver" startTime="2023-03-23 21:26:07.306032126 +0000 UTC m=+89.445023996"
I0323 21:26:07.306568 1 deployment_util.go:775] Deployment "calico-apiserver" timed out (false) [last progress check: 2023-03-23 21:26:07 +0000 UTC - now: 2023-03-23 21:26:07.306560531 +0000 UTC m=+89.445552501]
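The `Error syncing deployment ... the object has been modified` line two entries up is ordinary optimistic concurrency: the controller's update raced another writer's resourceVersion bump, so the work item is requeued and retried against the latest object, which is why a fresh sync starts immediately afterwards. Client code handles the same conflict with client-go's retry helper; a sketch, where the kubeconfig path, namespace/name, and the annotation mutation are all placeholders:

```go
package main

import (
	"context"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/retry"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "./kubeconfig") // path is an assumption
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// RetryOnConflict re-fetches the object and reapplies the mutation whenever
	// the apiserver answers "the object has been modified", the same error the
	// deployment controller logs above before requeuing.
	err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
		d, err := cs.AppsV1().Deployments("calico-apiserver").Get(
			context.TODO(), "calico-apiserver", metav1.GetOptions{})
		if err != nil {
			return err
		}
		if d.Annotations == nil {
			d.Annotations = map[string]string{}
		}
		d.Annotations["example/touched"] = "true" // hypothetical mutation
		_, err = cs.AppsV1().Deployments("calico-apiserver").Update(
			context.TODO(), d, metav1.UpdateOptions{})
		return err
	})
	if err != nil {
		log.Fatal(err)
	}
}
```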
I0323 21:26:07.307100 1 endpoints_controller.go:551] Update endpoints for calico-apiserver/calico-api, ready: 0 not ready: 0
I0323 21:26:07.308716 1 pvc_protection_controller.go:380] "Enqueuing PVCs for Pod" pod="calico-apiserver/calico-apiserver-57bb57f4c5-7mmxs" podUID=5944eb4f-3f37-4d06-8c33-1d2453d1d6ef
I0323 21:26:07.308773 1 replica_set.go:380] Pod calico-apiserver-57bb57f4c5-7mmxs created: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"calico-apiserver-57bb57f4c5-7mmxs", GenerateName:"calico-apiserver-57bb57f4c5-", Namespace:"calico-apiserver", SelfLink:"", UID:"5944eb4f-3f37-4d06-8c33-1d2453d1d6ef", ResourceVersion:"1075", Generation:0, CreationTimestamp:time.Date(2023, time.March, 23, 21, 26, 7, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57bb57f4c5"}, Annotations:map[string]string{"hash.operator.tigera.io/calico-apiserver-certs":"f0439f1e0dd51f1583d0fefb25009fb1ba052b21"}, OwnerReferences:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"apps/v1", Kind:"ReplicaSet", Name:"calico-apiserver-57bb57f4c5", UID:"b3fab7e8-d1fb-4d86-83ed-c13349a96703", Controller:(*bool)(0xc00252d97e), BlockOwnerDeletion:(*bool)(0xc00252d97f)}}, Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:time.Date(2023, time.March, 23, 21, 26, 7, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001447038), Subresource:""}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"calico-apiserver-certs", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0016359c0), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"kube-api-access-mm6kd", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), 
FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(0xc00024bd60), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"calico-apiserver", Image:"docker.io/calico/apiserver:v3.25.0", Command:[]string(nil), Args:[]string{"--secure-port=5443", "--tls-private-key-file=/calico-apiserver-certs/tls.key", "--tls-cert-file=/calico-apiserver-certs/tls.crt"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"DATASTORE_TYPE", Value:"kubernetes", ValueFrom:(*v1.EnvVarSource)(nil)}, v1.EnvVar{Name:"KUBERNETES_SERVICE_HOST", Value:"10.96.0.1", ValueFrom:(*v1.EnvVarSource)(nil)}, v1.EnvVar{Name:"KUBERNETES_SERVICE_PORT", Value:"443", ValueFrom:(*v1.EnvVarSource)(nil)}, v1.EnvVar{Name:"MULTI_INTERFACE_MODE", Value:"none", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"calico-apiserver-certs", ReadOnly:true, MountPath:"/calico-apiserver-certs", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-mm6kd", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(0xc001635a80), ReadinessProbe:(*v1.Probe)(0xc001635ac0), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc0025d8a80), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00252da98), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"calico-apiserver", DeprecatedServiceAccount:"calico-apiserver", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0003e3c70), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(0xc001447080), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node-role.kubernetes.io/master", Operator:"", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node-role.kubernetes.io/control-plane", Operator:"", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00252db70)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00252db90)}}, HostAliases:[]v1.HostAlias(nil), 
PriorityClassName:"", Priority:(*int32)(0xc00252db98), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00252db9c), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc0027a0e10), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil), OS:(*v1.PodOS)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition(nil), Message:"", Reason:"", NominatedNodeName:"", HostIP:"", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:<nil>, InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus(nil), QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}.
I0323 21:26:07.309131 1 controller_utils.go:240] Lowered expectations &controller.ControlleeExpectations{add:0, del:0, key:"calico-apiserver/calico-apiserver-57bb57f4c5", timestamp:time.Time{wall:0xc0ff4edbcfe05e19, ext:89405354263, loc:(*time.Location)(0x72c0b80)}}
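The `Setting expectations` / `Lowered expectations` / `Controller expectations fulfilled` triples threaded through this log are the replica-set and daemon-set controllers' cache bookkeeping: before issuing N creates a controller records add:N, each observed watch event decrements the count, and the controller only trusts its informer cache (and syncs again) once the count drains to zero. A toy version of that pattern, not the real controller_utils types:

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// expectations is a toy version of the controller bookkeeping in the log:
// add counts creations still expected, del counts deletions still expected.
type expectations struct {
	add, del int64
}

func (e *expectations) expectCreations(n int64) { atomic.StoreInt64(&e.add, n) }
func (e *expectations) creationObserved()       { atomic.AddInt64(&e.add, -1) }

// fulfilled reports whether every expected event has been observed, i.e. the
// informer cache has caught up and a sync may proceed.
func (e *expectations) fulfilled() bool {
	return atomic.LoadInt64(&e.add) <= 0 && atomic.LoadInt64(&e.del) <= 0
}

func main() {
	var e expectations
	e.expectCreations(1) // "Setting expectations ... add:1, del:0"
	fmt.Println("fulfilled:", e.fulfilled())
	e.creationObserved() // watch event arrives: "Lowered expectations"
	fmt.Println("fulfilled:", e.fulfilled())
}
```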
... skipping 334 lines ...
I0323 21:26:49.074855 1 certificate_controller.go:87] Updating certificate request csr-z6q67
I0323 21:26:49.075532 1 certificate_controller.go:173] Finished syncing certificate request "csr-z6q67" (9.042041ms)
I0323 21:26:49.075630 1 certificate_controller.go:173] Finished syncing certificate request "csr-z6q67" (1.1µs)
I0323 21:26:53.963266 1 taint_manager.go:436] "Noticed node update" node={nodeName:capz-7bsgqo-md-0-gzp2s}
I0323 21:26:53.963291 1 taint_manager.go:441] "Updating known taints on node" node="capz-7bsgqo-md-0-gzp2s" taints=[]
I0323 21:26:53.963307 1 attach_detach_controller.go:673] processVolumesInUse for node "capz-7bsgqo-md-0-gzp2s"
W0323 21:26:53.963318 1 actual_state_of_world.go:539] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="capz-7bsgqo-md-0-gzp2s" does not exist
I0323 21:26:53.965504 1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0ff4ecf563758a5, ext:39511717795, loc:(*time.Location)(0x72c0b80)}}
I0323 21:26:53.965789 1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:1, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0ff4ee77990b1d6, ext:136104775992, loc:(*time.Location)(0x72c0b80)}}
I0323 21:26:53.965807 1 daemon_controller.go:967] Nodes needing daemon pods for daemon set kube-proxy: [capz-7bsgqo-md-0-gzp2s], creating 1
I0323 21:26:53.966396 1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/cloud-node-manager", timestamp:time.Time{wall:0xc0ff4ecdaaa51c50, ext:32854455630, loc:(*time.Location)(0x72c0b80)}}
I0323 21:26:53.966444 1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:1, del:0, key:"kube-system/cloud-node-manager", timestamp:time.Time{wall:0xc0ff4ee7799abe2a, ext:136105434508, loc:(*time.Location)(0x72c0b80)}}
I0323 21:26:53.966476 1 daemon_controller.go:967] Nodes needing daemon pods for daemon set cloud-node-manager: [capz-7bsgqo-md-0-gzp2s], creating 1
... skipping 83 lines ...
I0323 21:26:54.152371 1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0ff4ee78914f5f6, ext:136291360600, loc:(*time.Location)(0x72c0b80)}}
I0323 21:26:54.152379 1 daemon_controller.go:967] Nodes needing daemon pods for daemon set kube-proxy: [], creating 0
I0323 21:26:54.152395 1 daemon_controller.go:1029] Pods to delete for daemon set kube-proxy: [], deleting 0
I0323 21:26:54.152428 1 daemon_controller.go:1112] Updating daemon set status
I0323 21:26:54.152457 1 daemon_controller.go:1172] Finished syncing daemon set "kube-system/kube-proxy" (1.282832ms)
I0323 21:26:54.157750 1 attach_detach_controller.go:673] processVolumesInUse for node "capz-7bsgqo-md-0-9mzmk"
W0323 21:26:54.157905 1 actual_state_of_world.go:539] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="capz-7bsgqo-md-0-9mzmk" does not exist
I0323 21:26:54.158801 1 taint_manager.go:436] "Noticed node update" node={nodeName:capz-7bsgqo-md-0-9mzmk}
I0323 21:26:54.158943 1 taint_manager.go:441] "Updating known taints on node" node="capz-7bsgqo-md-0-9mzmk" taints=[]
I0323 21:26:54.159695 1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0ff4ee78914f5f6, ext:136291360600, loc:(*time.Location)(0x72c0b80)}}
I0323 21:26:54.159916 1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:1, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0ff4ee789881245, ext:136298904487, loc:(*time.Location)(0x72c0b80)}}
I0323 21:26:54.160014 1 daemon_controller.go:967] Nodes needing daemon pods for daemon set kube-proxy: [capz-7bsgqo-md-0-9mzmk], creating 1
I0323 21:26:54.167064 1 controller_utils.go:581] Controller calico-node created pod calico-node-72rvs
... skipping 219 lines ...
I0323 21:26:56.712334 1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"calico-system/calico-typha-cf64d56d8", timestamp:time.Time{wall:0xc0ff4ee827d41b5a, ext:138807204028, loc:(*time.Location)(0x72c0b80)}}
I0323 21:26:56.712427 1 replica_set_utils.go:59] Updating status for : calico-system/calico-typha-cf64d56d8, replicas 1->2 (need 2), fullyLabeledReplicas 1->2, readyReplicas 1->1, availableReplicas 1->1, sequence No: 2->2
I0323 21:26:56.712597 1 deployment_controller.go:578] "Finished syncing deployment" deployment="calico-system/calico-typha" duration="17.342627ms"
I0323 21:26:56.712733 1 deployment_controller.go:576] "Started syncing deployment" deployment="calico-system/calico-typha" startTime="2023-03-23 21:26:56.712719953 +0000 UTC m=+138.851711923"
I0323 21:26:56.713250 1 progress.go:195] Queueing up deployment "calico-typha" for a progress check after 500s
I0323 21:26:56.712695 1 disruption.go:558] Finished syncing PodDisruptionBudget "calico-system/calico-typha" (3.084576ms)
E0323 21:26:56.713484 1 disruption.go:534] Error syncing PodDisruptionBudget calico-system/calico-typha, requeuing: Operation cannot be fulfilled on poddisruptionbudgets.policy "calico-typha": the object has been modified; please apply your changes to the latest version and try again
I0323 21:26:56.713552 1 disruption.go:558] Finished syncing PodDisruptionBudget "calico-system/calico-typha" (47.301µs)
I0323 21:26:56.713463 1 deployment_controller.go:578] "Finished syncing deployment" deployment="calico-system/calico-typha" duration="732.618µs"
I0323 21:26:56.718858 1 disruption.go:558] Finished syncing PodDisruptionBudget "calico-system/calico-typha" (22.601µs)
I0323 21:26:56.720957 1 deployment_controller.go:281] "ReplicaSet updated" replicaSet="calico-system/calico-typha-cf64d56d8"
I0323 21:26:56.720986 1 deployment_controller.go:576] "Started syncing deployment" deployment="calico-system/calico-typha" startTime="2023-03-23 21:26:56.720976256 +0000 UTC m=+138.859968126"
I0323 21:26:56.722044 1 replica_set.go:653] Finished syncing ReplicaSet "calico-system/calico-typha-cf64d56d8" (9.712439ms)
... skipping 379 lines ...
I0323 21:27:24.954234 1 daemon_controller.go:1029] Pods to delete for daemon set csi-node-driver: [], deleting 0
I0323 21:27:24.954250 1 daemon_controller.go:1112] Updating daemon set status
I0323 21:27:24.954313 1 daemon_controller.go:1172] Finished syncing daemon set "calico-system/csi-node-driver" (1.47183ms)
I0323 21:27:25.034353 1 attach_detach_controller.go:673] processVolumesInUse for node "capz-7bsgqo-md-0-9mzmk"
I0323 21:27:25.484638 1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="84.002µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:60780" resp=200
I0323 21:27:27.338232 1 reflector.go:382] k8s.io/client-go/informers/factory.go:134: forcing resync
I0323 21:27:27.397361 1 node_lifecycle_controller.go:1038] ReadyCondition for Node capz-7bsgqo-md-0-gzp2s transitioned from &NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:2023-03-23 21:26:54 +0000 UTC,LastTransitionTime:2023-03-23 21:26:51 +0000 UTC,Reason:KubeletNotReady,Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized,} to &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-03-23 21:27:24 +0000 UTC,LastTransitionTime:2023-03-23 21:27:24 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,}
I0323 21:27:27.397452 1 node_lifecycle_controller.go:1046] Node capz-7bsgqo-md-0-gzp2s ReadyCondition updated. Updating timestamp.
I0323 21:27:27.418771 1 attach_detach_controller.go:673] processVolumesInUse for node "capz-7bsgqo-md-0-gzp2s"
I0323 21:27:27.419239 1 taint_manager.go:436] "Noticed node update" node={nodeName:capz-7bsgqo-md-0-gzp2s}
I0323 21:27:27.419973 1 taint_manager.go:441] "Updating known taints on node" node="capz-7bsgqo-md-0-gzp2s" taints=[]
I0323 21:27:27.420102 1 taint_manager.go:462] "All taints were removed from the node. Cancelling all evictions..." node="capz-7bsgqo-md-0-gzp2s"
I0323 21:27:27.420638 1 node_lifecycle_controller.go:892] Node capz-7bsgqo-md-0-gzp2s is healthy again, removing all taints
... skipping 85 lines ...
I0323 21:27:35.349748 1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:0, del:0, key:"calico-system/csi-node-driver", timestamp:time.Time{wall:0xc0ff4ef1d4d8b3b0, ext:177488737966, loc:(*time.Location)(0x72c0b80)}}
I0323 21:27:35.349756 1 daemon_controller.go:967] Nodes needing daemon pods for daemon set csi-node-driver: [], creating 0
I0323 21:27:35.349773 1 daemon_controller.go:1029] Pods to delete for daemon set csi-node-driver: [], deleting 0
I0323 21:27:35.349797 1 daemon_controller.go:1112] Updating daemon set status
I0323 21:27:35.349845 1 daemon_controller.go:1172] Finished syncing daemon set "calico-system/csi-node-driver" (725.317µs)
I0323 21:27:35.484650 1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="97.902µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:58620" resp=200
I0323 21:27:37.422415 1 node_lifecycle_controller.go:1038] ReadyCondition for Node capz-7bsgqo-md-0-9mzmk transitioned from &NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:2023-03-23 21:27:25 +0000 UTC,LastTransitionTime:2023-03-23 21:26:54 +0000 UTC,Reason:KubeletNotReady,Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized,} to &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-03-23 21:27:35 +0000 UTC,LastTransitionTime:2023-03-23 21:27:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,}
I0323 21:27:37.422475 1 node_lifecycle_controller.go:1046] Node capz-7bsgqo-md-0-9mzmk ReadyCondition updated. Updating timestamp.
I0323 21:27:37.424480 1 gc_controller.go:161] GC'ing orphaned
I0323 21:27:37.424506 1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0323 21:27:37.441242 1 attach_detach_controller.go:673] processVolumesInUse for node "capz-7bsgqo-md-0-9mzmk"
I0323 21:27:37.441438 1 taint_manager.go:436] "Noticed node update" node={nodeName:capz-7bsgqo-md-0-9mzmk}
I0323 21:27:37.441483 1 taint_manager.go:441] "Updating known taints on node" node="capz-7bsgqo-md-0-9mzmk" taints=[]
... skipping 198 lines ...
I0323 21:28:01.868776 1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:2, del:0, key:"kube-system/csi-azurefile-controller-7b7f546c46", timestamp:time.Time{wall:0xc0ff4ef873c80a54, ext:204007739830, loc:(*time.Location)(0x72c0b80)}}
I0323 21:28:01.868876 1 replica_set.go:563] "Too few replicas" replicaSet="kube-system/csi-azurefile-controller-7b7f546c46" need=2 creating=2
I0323 21:28:01.868934 1 event.go:294] "Event occurred" object="kube-system/csi-azurefile-controller" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set csi-azurefile-controller-7b7f546c46 to 2"
I0323 21:28:01.883943 1 deployment_util.go:775] Deployment "csi-azurefile-controller" timed out (false) [last progress check: 2023-03-23 21:28:01.868410453 +0000 UTC m=+204.007402323 - now: 2023-03-23 21:28:01.883933077 +0000 UTC m=+204.022924947]
I0323 21:28:01.884160 1 deployment_controller.go:176] "Updating deployment" deployment="kube-system/csi-azurefile-controller"
I0323 21:28:01.901212 1 pvc_protection_controller.go:380] "Enqueuing PVCs for Pod" pod="kube-system/csi-azurefile-controller-7b7f546c46-xvlvr" podUID=359b4716-f4b3-4aac-bb0f-6a1e65a31f68
I0323 21:28:01.901265 1 replica_set.go:380] Pod csi-azurefile-controller-7b7f546c46-xvlvr created: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"csi-azurefile-controller-7b7f546c46-xvlvr", GenerateName:"csi-azurefile-controller-7b7f546c46-", Namespace:"kube-system", SelfLink:"", UID:"359b4716-f4b3-4aac-bb0f-6a1e65a31f68", ResourceVersion:"1702", Generation:0, CreationTimestamp:time.Date(2023, time.March, 23, 21, 28, 1, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"csi-azurefile-controller", "pod-template-hash":"7b7f546c46"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"apps/v1", Kind:"ReplicaSet", Name:"csi-azurefile-controller-7b7f546c46", UID:"980f9127-932e-4560-a936-744bba7359b0", Controller:(*bool)(0xc0030a481e), BlockOwnerDeletion:(*bool)(0xc0030a481f)}}, Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:time.Date(2023, time.March, 23, 21, 28, 1, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0030c0540), Subresource:""}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"socket-dir", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(0xc0030c0558), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"azure-cred", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0030c0570), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), 
VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"kube-api-access-z6jw5", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(0xc003315a20), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"csi-provisioner", Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-provisioner:v3.3.0", Command:[]string(nil), Args:[]string{"-v=2", "--csi-address=$(ADDRESS)", "--leader-election", "--leader-election-namespace=kube-system", "--timeout=300s", "--extra-create-metadata=true", "--kube-api-qps=50", "--kube-api-burst=100"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:524288000, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-z6jw5", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", 
TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"csi-attacher", Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-attacher:v4.0.0", Command:[]string(nil), Args:[]string{"-v=2", "-csi-address=$(ADDRESS)", "-timeout=120s", "--leader-election", "--leader-election-namespace=kube-system", "--kube-api-qps=50", "--kube-api-burst=100"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:524288000, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-z6jw5", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"csi-snapshotter", Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-snapshotter:v5.0.1", Command:[]string(nil), Args:[]string{"-v=2", "-csi-address=$(ADDRESS)", "--leader-election", "--leader-election-namespace=kube-system"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:104857600, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-z6jw5", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", 
SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"csi-resizer", Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-resizer:v1.6.0", Command:[]string(nil), Args:[]string{"-csi-address=$(ADDRESS)", "-v=2", "--leader-election", "--leader-election-namespace=kube-system", "-handle-volume-inuse-error=false", "-feature-gates=RecoverVolumeExpansionFailure=true", "-timeout=120s"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:524288000, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-z6jw5", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"liveness-probe", Image:"mcr.microsoft.com/oss/kubernetes-csi/livenessprobe:v2.8.0", Command:[]string(nil), Args:[]string{"--csi-address=/csi/csi.sock", "--probe-timeout=3s", "--health-port=29612", "--v=2"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:104857600, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-z6jw5", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"azurefile", 
Image:"mcr.microsoft.com/k8s/csi/azurefile-csi:latest", Command:[]string(nil), Args:[]string{"--v=5", "--endpoint=$(CSI_ENDPOINT)", "--metrics-address=0.0.0.0:29614", "--user-agent-suffix=OSS-kubectl"}, WorkingDir:"", Ports:[]v1.ContainerPort{v1.ContainerPort{Name:"healthz", HostPort:29612, ContainerPort:29612, Protocol:"TCP", HostIP:""}, v1.ContainerPort{Name:"metrics", HostPort:29614, ContainerPort:29614, Protocol:"TCP", HostIP:""}}, EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"AZURE_CREDENTIAL_FILE", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc003315b40)}, v1.EnvVar{Name:"CSI_ENDPOINT", Value:"unix:///csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:209715200, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"azure-cred", ReadOnly:false, MountPath:"/etc/kubernetes/", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-z6jw5", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(0xc002822f00), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0030a4d60), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"csi-azurefile-controller-sa", DeprecatedServiceAccount:"csi-azurefile-controller-sa", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0001a52d0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node-role.kubernetes.io/master", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node-role.kubernetes.io/controlplane", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node-role.kubernetes.io/control-plane", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0030a4dd0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0030a4df0)}}, 
HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-cluster-critical", Priority:(*int32)(0xc0030a4df8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0030a4dfc), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc0023afd20), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil), OS:(*v1.PodOS)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition(nil), Message:"", Reason:"", NominatedNodeName:"", HostIP:"", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:<nil>, InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus(nil), QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}.
I0323 21:28:01.901667 1 controller_utils.go:240] Lowered expectations &controller.ControlleeExpectations{add:1, del:0, key:"kube-system/csi-azurefile-controller-7b7f546c46", timestamp:time.Time{wall:0xc0ff4ef873c80a54, ext:204007739830, loc:(*time.Location)(0x72c0b80)}}
I0323 21:28:01.901699 1 disruption.go:415] addPod called on pod "csi-azurefile-controller-7b7f546c46-xvlvr"
I0323 21:28:01.901733 1 disruption.go:490] No PodDisruptionBudgets found for pod csi-azurefile-controller-7b7f546c46-xvlvr, PodDisruptionBudget controller will avoid syncing.
I0323 21:28:01.901739 1 disruption.go:418] No matching pdb for pod "csi-azurefile-controller-7b7f546c46-xvlvr"
I0323 21:28:01.901764 1 taint_manager.go:401] "Noticed pod update" pod="kube-system/csi-azurefile-controller-7b7f546c46-xvlvr"
I0323 21:28:01.901871 1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/csi-azurefile-controller" duration="46.622771ms"
I0323 21:28:01.901893 1 deployment_controller.go:490] "Error syncing deployment" deployment="kube-system/csi-azurefile-controller" err="Operation cannot be fulfilled on deployments.apps \"csi-azurefile-controller\": the object has been modified; please apply your changes to the latest version and try again"
I0323 21:28:01.901921 1 deployment_controller.go:576] "Started syncing deployment" deployment="kube-system/csi-azurefile-controller" startTime="2023-03-23 21:28:01.901908951 +0000 UTC m=+204.040900821"
I0323 21:28:01.901983 1 controller_utils.go:581] Controller csi-azurefile-controller-7b7f546c46 created pod csi-azurefile-controller-7b7f546c46-xvlvr
I0323 21:28:01.902389 1 event.go:294] "Event occurred" object="kube-system/csi-azurefile-controller-7b7f546c46" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: csi-azurefile-controller-7b7f546c46-xvlvr"
I0323 21:28:01.902613 1 deployment_util.go:775] Deployment "csi-azurefile-controller" timed out (false) [last progress check: 2023-03-23 21:28:01 +0000 UTC - now: 2023-03-23 21:28:01.902609566 +0000 UTC m=+204.041601536]
I0323 21:28:01.924877 1 deployment_controller.go:176] "Updating deployment" deployment="kube-system/csi-azurefile-controller"
I0323 21:28:01.925800 1 replica_set.go:443] Pod csi-azurefile-controller-7b7f546c46-xvlvr updated, objectMeta {Name:csi-azurefile-controller-7b7f546c46-xvlvr GenerateName:csi-azurefile-controller-7b7f546c46- Namespace:kube-system SelfLink: UID:359b4716-f4b3-4aac-bb0f-6a1e65a31f68 ResourceVersion:1702 Generation:0 CreationTimestamp:2023-03-23 21:28:01 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[app:csi-azurefile-controller pod-template-hash:7b7f546c46] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:csi-azurefile-controller-7b7f546c46 UID:980f9127-932e-4560-a936-744bba7359b0 Controller:0xc0030a481e BlockOwnerDeletion:0xc0030a481f}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2023-03-23 21:28:01 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"980f9127-932e-4560-a936-744bba7359b0\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"azurefile\"}":{".":{},"f:args":{},"f:env":{".":{},"k:{\"name\":\"AZURE_CREDENTIAL_FILE\"}":{".":{},"f:name":{},"f:valueFrom":{".":{},"f:configMapKeyRef":{}}},"k:{\"name\":\"CSI_ENDPOINT\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":29612,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:hostPort":{},"f:name":{},"f:protocol":{}},"k:{\"containerPort\":29614,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:hostPort":{},"f:name":{},"f:protocol":{}}},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}},"k:{\"mountPath\":\"/etc/kubernetes/\"}":{".":{},"f:mountPath":{},"f:name":{}}}},"k:{\"name\":\"csi-attacher\"}":{".":{},"f:args":{},"f:env":{".":{},"k:{\"name\":\"ADDRESS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}}}},"k:{\"name\":\"csi-provisioner\"}":{".":{},"f:args":{},"f:env":{".":{},"k:{\"name\":\"ADDRESS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}}}},"k:{\"name\":\"csi-resizer\"}":{".":{},"f:args":{},"f:env":{".":{},"k:{\"name\":\"ADDRESS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}}}},"k:{\"name\":\"csi-snapshotter\"}":{".":{},"f:args":{},"f:env":
{".":{},"k:{\"name\":\"ADDRESS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}}}},"k:{\"name\":\"liveness-probe\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:hostNetwork":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{},"f:volumes":{".":{},"k:{\"name\":\"azure-cred\"}":{".":{},"f:hostPath":{".":{},"f:path":{},"f:type":{}},"f:name":{}},"k:{\"name\":\"socket-dir\"}":{".":{},"f:emptyDir":{},"f:name":{}}}}} Subresource:}]} -> {Name:csi-azurefile-controller-7b7f546c46-xvlvr GenerateName:csi-azurefile-controller-7b7f546c46- Namespace:kube-system SelfLink: UID:359b4716-f4b3-4aac-bb0f-6a1e65a31f68 ResourceVersion:1704 Generation:0 CreationTimestamp:2023-03-23 21:28:01 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[app:csi-azurefile-controller pod-template-hash:7b7f546c46] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:csi-azurefile-controller-7b7f546c46 UID:980f9127-932e-4560-a936-744bba7359b0 Controller:0xc002c9eb9e BlockOwnerDeletion:0xc002c9eb9f}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2023-03-23 21:28:01 +0000 UTC FieldsType:FieldsV1 
FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"980f9127-932e-4560-a936-744bba7359b0\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"azurefile\"}":{".":{},"f:args":{},"f:env":{".":{},"k:{\"name\":\"AZURE_CREDENTIAL_FILE\"}":{".":{},"f:name":{},"f:valueFrom":{".":{},"f:configMapKeyRef":{}}},"k:{\"name\":\"CSI_ENDPOINT\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":29612,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:hostPort":{},"f:name":{},"f:protocol":{}},"k:{\"containerPort\":29614,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:hostPort":{},"f:name":{},"f:protocol":{}}},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}},"k:{\"mountPath\":\"/etc/kubernetes/\"}":{".":{},"f:mountPath":{},"f:name":{}}}},"k:{\"name\":\"csi-attacher\"}":{".":{},"f:args":{},"f:env":{".":{},"k:{\"name\":\"ADDRESS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}}}},"k:{\"name\":\"csi-provisioner\"}":{".":{},"f:args":{},"f:env":{".":{},"k:{\"name\":\"ADDRESS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}}}},"k:{\"name\":\"csi-resizer\"}":{".":{},"f:args":{},"f:env":{".":{},"k:{\"name\":\"ADDRESS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}}}},"k:{\"name\":\"csi-snapshotter\"}":{".":{},"f:args":{},"f:env":{".":{},"k:{\"name\":\"ADDRESS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}}}},"k:{\"name\":\"liveness-probe\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:hostNetwork":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPoli
cy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{},"f:volumes":{".":{},"k:{\"name\":\"azure-cred\"}":{".":{},"f:hostPath":{".":{},"f:path":{},"f:type":{}},"f:name":{}},"k:{\"name\":\"socket-dir\"}":{".":{},"f:emptyDir":{},"f:name":{}}}}} Subresource:}]}.
... skipping 11 lines ...
I0323 21:28:01.928386 1 progress.go:195] Queueing up deployment "csi-azurefile-controller" for a progress check after 599s
I0323 21:28:01.928500 1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/csi-azurefile-controller" duration="1.850338ms"
I0323 21:28:01.928336 1 taint_manager.go:401] "Noticed pod update" pod="kube-system/csi-azurefile-controller-7b7f546c46-94sxj"
I0323 21:28:01.928352 1 disruption.go:415] addPod called on pod "csi-azurefile-controller-7b7f546c46-94sxj"
I0323 21:28:01.928526 1 disruption.go:490] No PodDisruptionBudgets found for pod csi-azurefile-controller-7b7f546c46-94sxj, PodDisruptionBudget controller will avoid syncing.
I0323 21:28:01.928530 1 disruption.go:418] No matching pdb for pod "csi-azurefile-controller-7b7f546c46-94sxj"
I0323 21:28:01.928029 1 replica_set.go:380] Pod csi-azurefile-controller-7b7f546c46-94sxj created: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"csi-azurefile-controller-7b7f546c46-94sxj", GenerateName:"csi-azurefile-controller-7b7f546c46-", Namespace:"kube-system", SelfLink:"", UID:"fe56270c-f977-45c8-b44a-fa57c7b83354", ResourceVersion:"1705", Generation:0, CreationTimestamp:time.Date(2023, time.March, 23, 21, 28, 1, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"csi-azurefile-controller", "pod-template-hash":"7b7f546c46"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"apps/v1", Kind:"ReplicaSet", Name:"csi-azurefile-controller-7b7f546c46", UID:"980f9127-932e-4560-a936-744bba7359b0", Controller:(*bool)(0xc00265e867), BlockOwnerDeletion:(*bool)(0xc00265e868)}}, Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:time.Date(2023, time.March, 23, 21, 28, 1, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002b45398), Subresource:""}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"socket-dir", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(0xc002b453b0), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"azure-cred", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc002b453c8), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), 
VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"kube-api-access-sng7j", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(0xc001e46860), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"csi-provisioner", Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-provisioner:v3.3.0", Command:[]string(nil), Args:[]string{"-v=2", "--csi-address=$(ADDRESS)", "--leader-election", "--leader-election-namespace=kube-system", "--timeout=300s", "--extra-create-metadata=true", "--kube-api-qps=50", "--kube-api-burst=100"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:524288000, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-sng7j", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", 
TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"csi-attacher", Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-attacher:v4.0.0", Command:[]string(nil), Args:[]string{"-v=2", "-csi-address=$(ADDRESS)", "-timeout=120s", "--leader-election", "--leader-election-namespace=kube-system", "--kube-api-qps=50", "--kube-api-burst=100"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:524288000, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-sng7j", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"csi-snapshotter", Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-snapshotter:v5.0.1", Command:[]string(nil), Args:[]string{"-v=2", "-csi-address=$(ADDRESS)", "--leader-election", "--leader-election-namespace=kube-system"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:104857600, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-sng7j", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", 
SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"csi-resizer", Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-resizer:v1.6.0", Command:[]string(nil), Args:[]string{"-csi-address=$(ADDRESS)", "-v=2", "--leader-election", "--leader-election-namespace=kube-system", "-handle-volume-inuse-error=false", "-feature-gates=RecoverVolumeExpansionFailure=true", "-timeout=120s"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:524288000, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-sng7j", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"liveness-probe", Image:"mcr.microsoft.com/oss/kubernetes-csi/livenessprobe:v2.8.0", Command:[]string(nil), Args:[]string{"--csi-address=/csi/csi.sock", "--probe-timeout=3s", "--health-port=29612", "--v=2"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:104857600, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-sng7j", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"azurefile", 
Image:"mcr.microsoft.com/k8s/csi/azurefile-csi:latest", Command:[]string(nil), Args:[]string{"--v=5", "--endpoint=$(CSI_ENDPOINT)", "--metrics-address=0.0.0.0:29614", "--user-agent-suffix=OSS-kubectl"}, WorkingDir:"", Ports:[]v1.ContainerPort{v1.ContainerPort{Name:"healthz", HostPort:29612, ContainerPort:29612, Protocol:"TCP", HostIP:""}, v1.ContainerPort{Name:"metrics", HostPort:29614, ContainerPort:29614, Protocol:"TCP", HostIP:""}}, EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"AZURE_CREDENTIAL_FILE", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc001e46980)}, v1.EnvVar{Name:"CSI_ENDPOINT", Value:"unix:///csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:209715200, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"azure-cred", ReadOnly:false, MountPath:"/etc/kubernetes/", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-sng7j", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(0xc0016b4740), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00265ec10), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"csi-azurefile-controller-sa", DeprecatedServiceAccount:"csi-azurefile-controller-sa", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000527730), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node-role.kubernetes.io/master", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node-role.kubernetes.io/controlplane", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node-role.kubernetes.io/control-plane", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00265ec80)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00265eca0)}}, 
HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-cluster-critical", Priority:(*int32)(0xc00265eca8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00265ecac), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc002925ee0), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil), OS:(*v1.PodOS)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition(nil), Message:"", Reason:"", NominatedNodeName:"", HostIP:"", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:<nil>, InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus(nil), QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}.
I0323 21:28:01.928741 1 controller_utils.go:240] Lowered expectations &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/csi-azurefile-controller-7b7f546c46", timestamp:time.Time{wall:0xc0ff4ef873c80a54, ext:204007739830, loc:(*time.Location)(0x72c0b80)}}
I0323 21:28:01.942582 1 disruption.go:427] updatePod called on pod "csi-azurefile-controller-7b7f546c46-94sxj"
I0323 21:28:01.942607 1 disruption.go:490] No PodDisruptionBudgets found for pod csi-azurefile-controller-7b7f546c46-94sxj, PodDisruptionBudget controller will avoid syncing.
I0323 21:28:01.942614 1 disruption.go:430] No matching pdb for pod "csi-azurefile-controller-7b7f546c46-94sxj"
I0323 21:28:01.942736 1 taint_manager.go:401] "Noticed pod update" pod="kube-system/csi-azurefile-controller-7b7f546c46-94sxj"
I0323 21:28:01.942706 1 replica_set.go:443] Pod csi-azurefile-controller-7b7f546c46-94sxj updated, objectMeta {Name:csi-azurefile-controller-7b7f546c46-94sxj GenerateName:csi-azurefile-controller-7b7f546c46- Namespace:kube-system SelfLink: UID:fe56270c-f977-45c8-b44a-fa57c7b83354 ResourceVersion:1705 Generation:0 CreationTimestamp:2023-03-23 21:28:01 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[app:csi-azurefile-controller pod-template-hash:7b7f546c46] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:csi-azurefile-controller-7b7f546c46 UID:980f9127-932e-4560-a936-744bba7359b0 Controller:0xc00265e867 BlockOwnerDeletion:0xc00265e868}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2023-03-23 21:28:01 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"980f9127-932e-4560-a936-744bba7359b0\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"azurefile\"}":{".":{},"f:args":{},"f:env":{".":{},"k:{\"name\":\"AZURE_CREDENTIAL_FILE\"}":{".":{},"f:name":{},"f:valueFrom":{".":{},"f:configMapKeyRef":{}}},"k:{\"name\":\"CSI_ENDPOINT\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":29612,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:hostPort":{},"f:name":{},"f:protocol":{}},"k:{\"containerPort\":29614,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:hostPort":{},"f:name":{},"f:protocol":{}}},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}},"k:{\"mountPath\":\"/etc/kubernetes/\"}":{".":{},"f:mountPath":{},"f:name":{}}}},"k:{\"name\":\"csi-attacher\"}":{".":{},"f:args":{},"f:env":{".":{},"k:{\"name\":\"ADDRESS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}}}},"k:{\"name\":\"csi-provisioner\"}":{".":{},"f:args":{},"f:env":{".":{},"k:{\"name\":\"ADDRESS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}}}},"k:{\"name\":\"csi-resizer\"}":{".":{},"f:args":{},"f:env":{".":{},"k:{\"name\":\"ADDRESS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}}}},"k:{\"name\":\"csi-snapshotter\"}":{".":{},"f:args":{},"f:env":
{".":{},"k:{\"name\":\"ADDRESS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}}}},"k:{\"name\":\"liveness-probe\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:hostNetwork":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{},"f:volumes":{".":{},"k:{\"name\":\"azure-cred\"}":{".":{},"f:hostPath":{".":{},"f:path":{},"f:type":{}},"f:name":{}},"k:{\"name\":\"socket-dir\"}":{".":{},"f:emptyDir":{},"f:name":{}}}}} Subresource:}]} -> {Name:csi-azurefile-controller-7b7f546c46-94sxj GenerateName:csi-azurefile-controller-7b7f546c46- Namespace:kube-system SelfLink: UID:fe56270c-f977-45c8-b44a-fa57c7b83354 ResourceVersion:1707 Generation:0 CreationTimestamp:2023-03-23 21:28:01 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[app:csi-azurefile-controller pod-template-hash:7b7f546c46] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:csi-azurefile-controller-7b7f546c46 UID:980f9127-932e-4560-a936-744bba7359b0 Controller:0xc0026a62b7 BlockOwnerDeletion:0xc0026a62b8}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2023-03-23 21:28:01 +0000 UTC FieldsType:FieldsV1 
FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"980f9127-932e-4560-a936-744bba7359b0\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"azurefile\"}":{".":{},"f:args":{},"f:env":{".":{},"k:{\"name\":\"AZURE_CREDENTIAL_FILE\"}":{".":{},"f:name":{},"f:valueFrom":{".":{},"f:configMapKeyRef":{}}},"k:{\"name\":\"CSI_ENDPOINT\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":29612,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:hostPort":{},"f:name":{},"f:protocol":{}},"k:{\"containerPort\":29614,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:hostPort":{},"f:name":{},"f:protocol":{}}},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}},"k:{\"mountPath\":\"/etc/kubernetes/\"}":{".":{},"f:mountPath":{},"f:name":{}}}},"k:{\"name\":\"csi-attacher\"}":{".":{},"f:args":{},"f:env":{".":{},"k:{\"name\":\"ADDRESS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}}}},"k:{\"name\":\"csi-provisioner\"}":{".":{},"f:args":{},"f:env":{".":{},"k:{\"name\":\"ADDRESS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}}}},"k:{\"name\":\"csi-resizer\"}":{".":{},"f:args":{},"f:env":{".":{},"k:{\"name\":\"ADDRESS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}}}},"k:{\"name\":\"csi-snapshotter\"}":{".":{},"f:args":{},"f:env":{".":{},"k:{\"name\":\"ADDRESS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}}}},"k:{\"name\":\"liveness-probe\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:hostNetwork":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPoli
cy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{},"f:volumes":{".":{},"k:{\"name\":\"azure-cred\"}":{".":{},"f:hostPath":{".":{},"f:path":{},"f:type":{}},"f:name":{}},"k:{\"name\":\"socket-dir\"}":{".":{},"f:emptyDir":{},"f:name":{}}}}} Subresource:}]}.
... skipping 165 lines ...
I0323 21:28:06.703392 1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:2, del:0, key:"kube-system/csi-snapshot-controller-5b8fcdb667", timestamp:time.Time{wall:0xc0ff4ef9a9ece01b, ext:208842381693, loc:(*time.Location)(0x72c0b80)}}
I0323 21:28:06.703422 1 replica_set.go:563] "Too few replicas" replicaSet="kube-system/csi-snapshot-controller-5b8fcdb667" need=2 creating=2
I0323 21:28:06.703889 1 event.go:294] "Event occurred" object="kube-system/csi-snapshot-controller" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set csi-snapshot-controller-5b8fcdb667 to 2"
I0323 21:28:06.712356 1 deployment_controller.go:176] "Updating deployment" deployment="kube-system/csi-snapshot-controller"
I0323 21:28:06.712502 1 deployment_util.go:775] Deployment "csi-snapshot-controller" timed out (false) [last progress check: 2023-03-23 21:28:06.703339222 +0000 UTC m=+208.842331392 - now: 2023-03-23 21:28:06.712496689 +0000 UTC m=+208.851488559]
I0323 21:28:06.722953 1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/csi-snapshot-controller" duration="32.851698ms"
I0323 21:28:06.722976 1 deployment_controller.go:490] "Error syncing deployment" deployment="kube-system/csi-snapshot-controller" err="Operation cannot be fulfilled on deployments.apps \"csi-snapshot-controller\": the object has been modified; please apply your changes to the latest version and try again"
I0323 21:28:06.723001 1 deployment_controller.go:576] "Started syncing deployment" deployment="kube-system/csi-snapshot-controller" startTime="2023-03-23 21:28:06.72298958 +0000 UTC m=+208.861981450"
I0323 21:28:06.723309 1 deployment_util.go:775] Deployment "csi-snapshot-controller" timed out (false) [last progress check: 2023-03-23 21:28:06 +0000 UTC - now: 2023-03-23 21:28:06.723304686 +0000 UTC m=+208.862296556]
I0323 21:28:06.723795 1 controller_utils.go:581] Controller csi-snapshot-controller-5b8fcdb667 created pod csi-snapshot-controller-5b8fcdb667-kxmfw
I0323 21:28:06.724087 1 event.go:294] "Event occurred" object="kube-system/csi-snapshot-controller-5b8fcdb667" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: csi-snapshot-controller-5b8fcdb667-kxmfw"
I0323 21:28:06.724352 1 replica_set.go:380] Pod csi-snapshot-controller-5b8fcdb667-kxmfw created: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"csi-snapshot-controller-5b8fcdb667-kxmfw", GenerateName:"csi-snapshot-controller-5b8fcdb667-", Namespace:"kube-system", SelfLink:"", UID:"522c9ad6-5434-459f-89e9-c450c3f10129", ResourceVersion:"1800", Generation:0, CreationTimestamp:time.Date(2023, time.March, 23, 21, 28, 6, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"csi-snapshot-controller", "pod-template-hash":"5b8fcdb667"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"apps/v1", Kind:"ReplicaSet", Name:"csi-snapshot-controller-5b8fcdb667", UID:"733cc21d-124c-4bd8-ab40-4ff39ab68efc", Controller:(*bool)(0xc0024f4ba7), BlockOwnerDeletion:(*bool)(0xc0024f4ba8)}}, Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:time.Date(2023, time.March, 23, 21, 28, 6, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001dbeaf8), Subresource:""}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-api-access-8z2pk", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(0xc001c23380), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"csi-snapshot-controller", Image:"mcr.microsoft.com/oss/kubernetes-csi/snapshot-controller:v5.0.1", Command:[]string(nil), Args:[]string{"--v=2", "--leader-election=true", "--leader-election-namespace=kube-system"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:104857600, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, 
s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-8z2pk", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0024f4c48), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"csi-snapshot-controller-sa", DeprecatedServiceAccount:"csi-snapshot-controller-sa", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0002176c0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node-role.kubernetes.io/master", Operator:"Equal", Value:"true", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node-role.kubernetes.io/controlplane", Operator:"Equal", Value:"true", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node-role.kubernetes.io/control-plane", Operator:"Equal", Value:"true", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0024f4cd0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0024f4cf0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-cluster-critical", Priority:(*int32)(0xc0024f4cf8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0024f4cfc), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc002969340), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil), OS:(*v1.PodOS)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition(nil), Message:"", Reason:"", NominatedNodeName:"", HostIP:"", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:<nil>, InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus(nil), QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}.
I0323 21:28:06.724533 1 controller_utils.go:240] Lowered expectations &controller.ControlleeExpectations{add:1, del:0, key:"kube-system/csi-snapshot-controller-5b8fcdb667", timestamp:time.Time{wall:0xc0ff4ef9a9ece01b, ext:208842381693, loc:(*time.Location)(0x72c0b80)}}
... skipping 314 lines ...
I0323 21:29:57.732518 1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-3154/pvc-5jc6f]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0323 21:29:57.732607 1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-3154/pvc-5jc6f]: no volume found
I0323 21:29:57.732681 1 pv_controller.go:1455] provisionClaim[azurefile-3154/pvc-5jc6f]: started
I0323 21:29:57.732757 1 pv_controller.go:1764] scheduleOperation[provision-azurefile-3154/pvc-5jc6f[082f0559-de08-4f65-9b1e-5ede343ee839]]
I0323 21:29:57.732827 1 pv_controller.go:1775] operation "provision-azurefile-3154/pvc-5jc6f[082f0559-de08-4f65-9b1e-5ede343ee839]" is already running, skipping
I0323 21:29:57.732912 1 pvc_protection_controller.go:331] "Got event on PVC" pvc="azurefile-3154/pvc-5jc6f"
I0323 21:29:57.734079 1 azure_provision.go:108] failed to get azure provider
I0323 21:29:57.734105 1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-3154/pvc-5jc6f" with StorageClass "azurefile-3154-kubernetes.io-azure-file-dynamic-sc-9nb66": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0323 21:29:57.734166 1 goroutinemap.go:150] Operation for "provision-azurefile-3154/pvc-5jc6f[082f0559-de08-4f65-9b1e-5ede343ee839]" failed. No retries permitted until 2023-03-23 21:29:58.2341516 +0000 UTC m=+320.373143470 (durationBeforeRetry 500ms). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0323 21:29:57.734292 1 event.go:294] "Event occurred" object="azurefile-3154/pvc-5jc6f" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
I0323 21:29:57.950175 1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0323 21:30:02.212988 1 namespaced_resources_deleter.go:502] namespace controller - deleteAllContent - namespace: azurefile-8508
I0323 21:30:02.298872 1 tokens_controller.go:252] syncServiceAccount(azurefile-8508/default), service account deleted, removing tokens
I0323 21:30:02.299081 1 serviceaccounts_controller.go:188] Finished syncing namespace "azurefile-8508" (2.2µs)
I0323 21:30:02.299236 1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azurefile-8508, name default, uid c923e80f-3b96-4b12-9e9a-8330a529c9e2, event type delete
I0323 21:30:02.304397 1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=secrets, namespace azurefile-8508, name default-token-fjjlj, uid aa33da51-bd30-4b88-9a71-1193c7d1b4e5, event type delete
... skipping 15 lines ...
I0323 21:30:12.681469 1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-3154/pvc-5jc6f]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0323 21:30:12.681502 1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-3154/pvc-5jc6f]: no volume found
I0323 21:30:12.681553 1 pv_controller.go:1455] provisionClaim[azurefile-3154/pvc-5jc6f]: started
I0323 21:30:12.681583 1 pv_controller.go:1764] scheduleOperation[provision-azurefile-3154/pvc-5jc6f[082f0559-de08-4f65-9b1e-5ede343ee839]]
I0323 21:30:12.681636 1 pv_controller.go:1496] provisionClaimOperation [azurefile-3154/pvc-5jc6f] started, class: "azurefile-3154-kubernetes.io-azure-file-dynamic-sc-9nb66"
I0323 21:30:12.681649 1 pv_controller.go:1511] provisionClaimOperation [azurefile-3154/pvc-5jc6f]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0323 21:30:12.691525 1 azure_provision.go:108] failed to get azure provider
I0323 21:30:12.691547 1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-3154/pvc-5jc6f" with StorageClass "azurefile-3154-kubernetes.io-azure-file-dynamic-sc-9nb66": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0323 21:30:12.691709 1 goroutinemap.go:150] Operation for "provision-azurefile-3154/pvc-5jc6f[082f0559-de08-4f65-9b1e-5ede343ee839]" failed. No retries permitted until 2023-03-23 21:30:13.691565805 +0000 UTC m=+335.830557775 (durationBeforeRetry 1s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0323 21:30:12.691820 1 event.go:294] "Event occurred" object="azurefile-3154/pvc-5jc6f" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
I0323 21:30:15.484814 1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="93.902µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:37062" resp=200
I0323 21:30:15.813722 1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0323 21:30:16.922355 1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.PriorityClass total 0 items received
I0323 21:30:17.430985 1 gc_controller.go:161] GC'ing orphaned
I0323 21:30:17.431021 1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0323 21:30:19.375155 1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ResourceQuota total 0 items received
... skipping 5 lines ...
I0323 21:30:27.682551 1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-3154/pvc-5jc6f]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0323 21:30:27.682616 1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-3154/pvc-5jc6f]: no volume found
I0323 21:30:27.682634 1 pv_controller.go:1455] provisionClaim[azurefile-3154/pvc-5jc6f]: started
I0323 21:30:27.682653 1 pv_controller.go:1764] scheduleOperation[provision-azurefile-3154/pvc-5jc6f[082f0559-de08-4f65-9b1e-5ede343ee839]]
I0323 21:30:27.682667 1 pv_controller.go:1496] provisionClaimOperation [azurefile-3154/pvc-5jc6f] started, class: "azurefile-3154-kubernetes.io-azure-file-dynamic-sc-9nb66"
I0323 21:30:27.682674 1 pv_controller.go:1511] provisionClaimOperation [azurefile-3154/pvc-5jc6f]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0323 21:30:27.690071 1 azure_provision.go:108] failed to get azure provider
I0323 21:30:27.690097 1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-3154/pvc-5jc6f" with StorageClass "azurefile-3154-kubernetes.io-azure-file-dynamic-sc-9nb66": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0323 21:30:27.690123 1 goroutinemap.go:150] Operation for "provision-azurefile-3154/pvc-5jc6f[082f0559-de08-4f65-9b1e-5ede343ee839]" failed. No retries permitted until 2023-03-23 21:30:29.690111659 +0000 UTC m=+351.829103629 (durationBeforeRetry 2s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0323 21:30:27.690362 1 event.go:294] "Event occurred" object="azurefile-3154/pvc-5jc6f" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
I0323 21:30:27.962821 1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0323 21:30:29.014630 1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 23 items received
I0323 21:30:35.485306 1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="99.901µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:46222" resp=200
I0323 21:30:37.431768 1 gc_controller.go:161] GC'ing orphaned
I0323 21:30:37.431802 1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0323 21:30:42.238111 1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 9 items received
... skipping 3 lines ...
I0323 21:30:42.683128 1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-3154/pvc-5jc6f]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0323 21:30:42.683252 1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-3154/pvc-5jc6f]: no volume found
I0323 21:30:42.683302 1 pv_controller.go:1455] provisionClaim[azurefile-3154/pvc-5jc6f]: started
I0323 21:30:42.683346 1 pv_controller.go:1764] scheduleOperation[provision-azurefile-3154/pvc-5jc6f[082f0559-de08-4f65-9b1e-5ede343ee839]]
I0323 21:30:42.683453 1 pv_controller.go:1496] provisionClaimOperation [azurefile-3154/pvc-5jc6f] started, class: "azurefile-3154-kubernetes.io-azure-file-dynamic-sc-9nb66"
I0323 21:30:42.683501 1 pv_controller.go:1511] provisionClaimOperation [azurefile-3154/pvc-5jc6f]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0323 21:30:42.687099 1 azure_provision.go:108] failed to get azure provider
I0323 21:30:42.687125 1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-3154/pvc-5jc6f" with StorageClass "azurefile-3154-kubernetes.io-azure-file-dynamic-sc-9nb66": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0323 21:30:42.687297 1 goroutinemap.go:150] Operation for "provision-azurefile-3154/pvc-5jc6f[082f0559-de08-4f65-9b1e-5ede343ee839]" failed. No retries permitted until 2023-03-23 21:30:46.687281762 +0000 UTC m=+368.826273732 (durationBeforeRetry 4s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0323 21:30:42.687407 1 event.go:294] "Event occurred" object="azurefile-3154/pvc-5jc6f" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
I0323 21:30:45.484815 1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="94.201µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:52406" resp=200
I0323 21:30:53.348037 1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.LimitRange total 0 items received
I0323 21:30:55.484366 1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="87.101µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:51274" resp=200
I0323 21:30:56.359765 1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Deployment total 59 items received
I0323 21:30:57.349001 1 reflector.go:382] k8s.io/client-go/informers/factory.go:134: forcing resync
I0323 21:30:57.432575 1 gc_controller.go:161] GC'ing orphaned
... skipping 3 lines ...
I0323 21:30:57.684332 1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-3154/pvc-5jc6f]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0323 21:30:57.684417 1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-3154/pvc-5jc6f]: no volume found
I0323 21:30:57.684431 1 pv_controller.go:1455] provisionClaim[azurefile-3154/pvc-5jc6f]: started
I0323 21:30:57.684443 1 pv_controller.go:1764] scheduleOperation[provision-azurefile-3154/pvc-5jc6f[082f0559-de08-4f65-9b1e-5ede343ee839]]
I0323 21:30:57.684534 1 pv_controller.go:1496] provisionClaimOperation [azurefile-3154/pvc-5jc6f] started, class: "azurefile-3154-kubernetes.io-azure-file-dynamic-sc-9nb66"
I0323 21:30:57.684547 1 pv_controller.go:1511] provisionClaimOperation [azurefile-3154/pvc-5jc6f]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0323 21:30:57.689043 1 azure_provision.go:108] failed to get azure provider
I0323 21:30:57.689064 1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-3154/pvc-5jc6f" with StorageClass "azurefile-3154-kubernetes.io-azure-file-dynamic-sc-9nb66": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0323 21:30:57.689105 1 goroutinemap.go:150] Operation for "provision-azurefile-3154/pvc-5jc6f[082f0559-de08-4f65-9b1e-5ede343ee839]" failed. No retries permitted until 2023-03-23 21:31:05.689092688 +0000 UTC m=+387.828084558 (durationBeforeRetry 8s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0323 21:30:57.689312 1 event.go:294] "Event occurred" object="azurefile-3154/pvc-5jc6f" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
I0323 21:30:57.985334 1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0323 21:30:58.374462 1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.CSINode total 13 items received
I0323 21:31:05.370327 1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.CSIDriver total 9 items received
I0323 21:31:05.484854 1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="86.801µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:36660" resp=200
I0323 21:31:12.349240 1 reflector.go:382] k8s.io/client-go/informers/factory.go:134: forcing resync
I0323 21:31:12.684478 1 pv_controller_base.go:556] resyncing PV controller
I0323 21:31:12.684584 1 pv_controller_base.go:670] storeObjectUpdate updating claim "azurefile-3154/pvc-5jc6f" with version 2290
I0323 21:31:12.684740 1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-3154/pvc-5jc6f]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0323 21:31:12.684805 1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-3154/pvc-5jc6f]: no volume found
I0323 21:31:12.684818 1 pv_controller.go:1455] provisionClaim[azurefile-3154/pvc-5jc6f]: started
I0323 21:31:12.684830 1 pv_controller.go:1764] scheduleOperation[provision-azurefile-3154/pvc-5jc6f[082f0559-de08-4f65-9b1e-5ede343ee839]]
I0323 21:31:12.684887 1 pv_controller.go:1496] provisionClaimOperation [azurefile-3154/pvc-5jc6f] started, class: "azurefile-3154-kubernetes.io-azure-file-dynamic-sc-9nb66"
I0323 21:31:12.684901 1 pv_controller.go:1511] provisionClaimOperation [azurefile-3154/pvc-5jc6f]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0323 21:31:12.692986 1 azure_provision.go:108] failed to get azure provider
I0323 21:31:12.693011 1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-3154/pvc-5jc6f" with StorageClass "azurefile-3154-kubernetes.io-azure-file-dynamic-sc-9nb66": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0323 21:31:12.693166 1 goroutinemap.go:150] Operation for "provision-azurefile-3154/pvc-5jc6f[082f0559-de08-4f65-9b1e-5ede343ee839]" failed. No retries permitted until 2023-03-23 21:31:28.693150432 +0000 UTC m=+410.832142402 (durationBeforeRetry 16s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0323 21:31:12.693441 1 event.go:294] "Event occurred" object="azurefile-3154/pvc-5jc6f" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
I0323 21:31:13.379374 1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Node total 71 items received
I0323 21:31:14.987536 1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1beta2.PriorityLevelConfiguration total 0 items received
I0323 21:31:15.483822 1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="96.701µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:49552" resp=200
I0323 21:31:17.433734 1 gc_controller.go:161] GC'ing orphaned
I0323 21:31:17.433765 1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0323 21:31:19.378141 1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.RoleBinding total 12 items received
... skipping 24 lines ...
I0323 21:31:42.686342 1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-3154/pvc-5jc6f]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0323 21:31:42.686367 1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-3154/pvc-5jc6f]: no volume found
I0323 21:31:42.686372 1 pv_controller.go:1455] provisionClaim[azurefile-3154/pvc-5jc6f]: started
I0323 21:31:42.686410 1 pv_controller.go:1764] scheduleOperation[provision-azurefile-3154/pvc-5jc6f[082f0559-de08-4f65-9b1e-5ede343ee839]]
I0323 21:31:42.686429 1 pv_controller.go:1496] provisionClaimOperation [azurefile-3154/pvc-5jc6f] started, class: "azurefile-3154-kubernetes.io-azure-file-dynamic-sc-9nb66"
I0323 21:31:42.686435 1 pv_controller.go:1511] provisionClaimOperation [azurefile-3154/pvc-5jc6f]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0323 21:31:42.692343 1 azure_provision.go:108] failed to get azure provider
I0323 21:31:42.692371 1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-3154/pvc-5jc6f" with StorageClass "azurefile-3154-kubernetes.io-azure-file-dynamic-sc-9nb66": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0323 21:31:42.692455 1 goroutinemap.go:150] Operation for "provision-azurefile-3154/pvc-5jc6f[082f0559-de08-4f65-9b1e-5ede343ee839]" failed. No retries permitted until 2023-03-23 21:32:14.69242281 +0000 UTC m=+456.831414680 (durationBeforeRetry 32s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0323 21:31:42.692591 1 event.go:294] "Event occurred" object="azurefile-3154/pvc-5jc6f" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
I0323 21:31:45.484858 1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="105.301µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:35614" resp=200
I0323 21:31:46.348985 1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Service total 11 items received
I0323 21:31:48.370763 1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.StatefulSet total 0 items received
I0323 21:31:49.476055 1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.NetworkPolicy total 1 items received
I0323 21:31:49.891458 1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 8 items received
I0323 21:31:50.376396 1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.PersistentVolumeClaim total 2 items received
... skipping 32 lines ...
I0323 21:32:27.687412 1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-3154/pvc-5jc6f]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0323 21:32:27.687454 1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-3154/pvc-5jc6f]: no volume found
I0323 21:32:27.687486 1 pv_controller.go:1455] provisionClaim[azurefile-3154/pvc-5jc6f]: started
I0323 21:32:27.687517 1 pv_controller.go:1764] scheduleOperation[provision-azurefile-3154/pvc-5jc6f[082f0559-de08-4f65-9b1e-5ede343ee839]]
I0323 21:32:27.687537 1 pv_controller.go:1496] provisionClaimOperation [azurefile-3154/pvc-5jc6f] started, class: "azurefile-3154-kubernetes.io-azure-file-dynamic-sc-9nb66"
I0323 21:32:27.687544 1 pv_controller.go:1511] provisionClaimOperation [azurefile-3154/pvc-5jc6f]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0323 21:32:27.691376 1 azure_provision.go:108] failed to get azure provider
I0323 21:32:27.691406 1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-3154/pvc-5jc6f" with StorageClass "azurefile-3154-kubernetes.io-azure-file-dynamic-sc-9nb66": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0323 21:32:27.691548 1 goroutinemap.go:150] Operation for "provision-azurefile-3154/pvc-5jc6f[082f0559-de08-4f65-9b1e-5ede343ee839]" failed. No retries permitted until 2023-03-23 21:33:31.691429608 +0000 UTC m=+533.830421478 (durationBeforeRetry 1m4s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0323 21:32:27.691717 1 event.go:294] "Event occurred" object="azurefile-3154/pvc-5jc6f" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
I0323 21:32:28.045668 1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0323 21:32:35.484367 1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="88.1µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:48714" resp=200
I0323 21:32:37.435636 1 gc_controller.go:161] GC'ing orphaned
I0323 21:32:37.435672 1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0323 21:32:40.378398 1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Pod total 195 items received
I0323 21:32:42.352620 1 reflector.go:382] k8s.io/client-go/informers/factory.go:134: forcing resync
... skipping 66 lines ...
I0323 21:33:42.690690 1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-3154/pvc-5jc6f]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0323 21:33:42.690724 1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-3154/pvc-5jc6f]: no volume found
I0323 21:33:42.690736 1 pv_controller.go:1455] provisionClaim[azurefile-3154/pvc-5jc6f]: started
I0323 21:33:42.690748 1 pv_controller.go:1764] scheduleOperation[provision-azurefile-3154/pvc-5jc6f[082f0559-de08-4f65-9b1e-5ede343ee839]]
I0323 21:33:42.690791 1 pv_controller.go:1496] provisionClaimOperation [azurefile-3154/pvc-5jc6f] started, class: "azurefile-3154-kubernetes.io-azure-file-dynamic-sc-9nb66"
I0323 21:33:42.690803 1 pv_controller.go:1511] provisionClaimOperation [azurefile-3154/pvc-5jc6f]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0323 21:33:42.695299 1 azure_provision.go:108] failed to get azure provider
I0323 21:33:42.695330 1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-3154/pvc-5jc6f" with StorageClass "azurefile-3154-kubernetes.io-azure-file-dynamic-sc-9nb66": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0323 21:33:42.695393 1 goroutinemap.go:150] Operation for "provision-azurefile-3154/pvc-5jc6f[082f0559-de08-4f65-9b1e-5ede343ee839]" failed. No retries permitted until 2023-03-23 21:35:44.695380264 +0000 UTC m=+666.834372134 (durationBeforeRetry 2m2s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0323 21:33:42.695452 1 event.go:294] "Event occurred" object="azurefile-3154/pvc-5jc6f" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
I0323 21:33:43.383130 1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Secret total 84 items received
I0323 21:33:43.607010 1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0323 21:33:45.485024 1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="91.5µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:53322" resp=200
I0323 21:33:46.521078 1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0323 21:33:47.903968 1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 11 items received
I0323 21:33:48.371761 1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.CertificateSigningRequest total 6 items received
... skipping 108 lines ...
I0323 21:35:00.512786 1 pv_controller_base.go:670] storeObjectUpdate updating claim "azurefile-342/pvc-mqvvj" with version 3355
I0323 21:35:00.512974 1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-342/pvc-mqvvj]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0323 21:35:00.513017 1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-342/pvc-mqvvj]: no volume found
I0323 21:35:00.513023 1 pv_controller.go:1455] provisionClaim[azurefile-342/pvc-mqvvj]: started
I0323 21:35:00.513033 1 pv_controller.go:1764] scheduleOperation[provision-azurefile-342/pvc-mqvvj[0e37d8e4-78d3-4d3d-bf08-6331924947ed]]
I0323 21:35:00.513037 1 pv_controller.go:1775] operation "provision-azurefile-342/pvc-mqvvj[0e37d8e4-78d3-4d3d-bf08-6331924947ed]" is already running, skipping
I0323 21:35:00.514565 1 azure_provision.go:108] failed to get azure provider
I0323 21:35:00.514583 1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-342/pvc-mqvvj" with StorageClass "azurefile-342-kubernetes.io-azure-file-dynamic-sc-6n4hn": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0323 21:35:00.514652 1 goroutinemap.go:150] Operation for "provision-azurefile-342/pvc-mqvvj[0e37d8e4-78d3-4d3d-bf08-6331924947ed]" failed. No retries permitted until 2023-03-23 21:35:01.014636072 +0000 UTC m=+623.153628042 (durationBeforeRetry 500ms). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0323 21:35:00.514674 1 event.go:294] "Event occurred" object="azurefile-342/pvc-mqvvj" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
I0323 21:35:01.000614 1 deployment_controller.go:576] "Started syncing deployment" deployment="calico-system/tigera-operator" startTime="2023-03-23 21:35:01.000481798 +0000 UTC m=+623.139473768"
I0323 21:35:01.001423 1 deployment_controller.go:578] "Finished syncing deployment" deployment="calico-system/tigera-operator" duration="927.006µs"
I0323 21:35:01.439025 1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0323 21:35:03.000541 1 deployment_controller.go:576] "Started syncing deployment" deployment="kube-system/cloud-controller-manager" startTime="2023-03-23 21:35:03.000482534 +0000 UTC m=+625.139474504"
I0323 21:35:03.001160 1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/cloud-controller-manager" duration="662.704µs"
I0323 21:35:03.707205 1 namespaced_resources_deleter.go:502] namespace controller - deleteAllContent - namespace: azurefile-3154
... skipping 23 lines ...
I0323 21:35:03.873367 1 serviceaccounts_controller.go:188] Finished syncing namespace "azurefile-3154" (2.4µs)
I0323 21:35:03.873846 1 namespaced_resources_deleter.go:556] namespace controller - deleteAllContent - namespace: azurefile-3154, estimate: 15, errors: <nil>
I0323 21:35:03.873864 1 namespace_controller.go:180] Finished syncing namespace "azurefile-3154" (173.425444ms)
I0323 21:35:03.873871 1 namespace_controller.go:157] Content remaining in namespace azurefile-3154, waiting 8 seconds
I0323 21:35:04.289123 1 namespaced_resources_deleter.go:502] namespace controller - deleteAllContent - namespace: azurefile-2738
I0323 21:35:04.312112 1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=secrets, namespace azurefile-2738, name default-token-btdl7, uid cebcdeb9-1eea-4bc1-9523-4a8ec652abb8, event type delete
E0323 21:35:04.351278 1 tokens_controller.go:262] error synchronizing serviceaccount azurefile-2738/default: secrets "default-token-r7dfh" is forbidden: unable to create new content in namespace azurefile-2738 because it is being terminated
I0323 21:35:04.444178 1 tokens_controller.go:252] syncServiceAccount(azurefile-2738/default), service account deleted, removing tokens
I0323 21:35:04.444253 1 serviceaccounts_controller.go:188] Finished syncing namespace "azurefile-2738" (1.6µs)
I0323 21:35:04.444296 1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azurefile-2738, name default, uid f08ffd12-7322-4113-ad96-01c47dab5ca7, event type delete
I0323 21:35:04.457702 1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=configmaps, namespace azurefile-2738, name kube-root-ca.crt, uid 525979fd-ffe8-4571-8746-4303c0a331b7, event type delete
I0323 21:35:04.459826 1 publisher.go:186] Finished syncing namespace "azurefile-2738" (2.186715ms)
I0323 21:35:04.492457 1 serviceaccounts_controller.go:188] Finished syncing namespace "azurefile-2738" (2.7µs)
I0323 21:35:04.493037 1 namespaced_resources_deleter.go:556] namespace controller - deleteAllContent - namespace: azurefile-2738, estimate: 0, errors: <nil>
I0323 21:35:04.503024 1 namespace_controller.go:180] Finished syncing namespace "azurefile-2738" (216.789827ms)
I0323 21:35:04.890111 1 namespaced_resources_deleter.go:502] namespace controller - deleteAllContent - namespace: azurefile-7427
I0323 21:35:04.939348 1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=secrets, namespace azurefile-7427, name default-token-7t9qv, uid 1b878ac1-ffd2-4999-8aca-db36916a1407, event type delete
E0323 21:35:04.951276 1 tokens_controller.go:262] error synchronizing serviceaccount azurefile-7427/default: secrets "default-token-89qrb" is forbidden: unable to create new content in namespace azurefile-7427 because it is being terminated
I0323 21:35:05.009721 1 tokens_controller.go:252] syncServiceAccount(azurefile-7427/default), service account deleted, removing tokens
I0323 21:35:05.009890 1 serviceaccounts_controller.go:188] Finished syncing namespace "azurefile-7427" (2.8µs)
I0323 21:35:05.009983 1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azurefile-7427, name default, uid f2c41f26-519b-4f8a-8ac8-d56b622ff1d1, event type delete
I0323 21:35:05.020147 1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=configmaps, namespace azurefile-7427, name kube-root-ca.crt, uid 37abb382-c6d8-4843-a8d5-d14ac2cc4791, event type delete
I0323 21:35:05.021863 1 publisher.go:186] Finished syncing namespace "azurefile-7427" (1.866212ms)
I0323 21:35:05.031609 1 serviceaccounts_controller.go:188] Finished syncing namespace "azurefile-7427" (1.3µs)
... skipping 19 lines ...
I0323 21:35:12.694555 1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-342/pvc-mqvvj]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0323 21:35:12.694636 1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-342/pvc-mqvvj]: no volume found
I0323 21:35:12.694670 1 pv_controller.go:1455] provisionClaim[azurefile-342/pvc-mqvvj]: started
I0323 21:35:12.694739 1 pv_controller.go:1764] scheduleOperation[provision-azurefile-342/pvc-mqvvj[0e37d8e4-78d3-4d3d-bf08-6331924947ed]]
I0323 21:35:12.694805 1 pv_controller.go:1496] provisionClaimOperation [azurefile-342/pvc-mqvvj] started, class: "azurefile-342-kubernetes.io-azure-file-dynamic-sc-6n4hn"
I0323 21:35:12.694838 1 pv_controller.go:1511] provisionClaimOperation [azurefile-342/pvc-mqvvj]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0323 21:35:12.701007 1 azure_provision.go:108] failed to get azure provider
I0323 21:35:12.701033 1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-342/pvc-mqvvj" with StorageClass "azurefile-342-kubernetes.io-azure-file-dynamic-sc-6n4hn": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0323 21:35:12.701183 1 goroutinemap.go:150] Operation for "provision-azurefile-342/pvc-mqvvj[0e37d8e4-78d3-4d3d-bf08-6331924947ed]" failed. No retries permitted until 2023-03-23 21:35:13.701167757 +0000 UTC m=+635.840159727 (durationBeforeRetry 1s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0323 21:35:12.701285 1 event.go:294] "Event occurred" object="azurefile-342/pvc-mqvvj" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
I0323 21:35:13.987337 1 namespace_controller.go:185] Namespace has been deleted azurefile-3154
I0323 21:35:13.987427 1 namespace_controller.go:180] Finished syncing namespace "azurefile-3154" (114.8µs)
I0323 21:35:15.000610 1 deployment_controller.go:576] "Started syncing deployment" deployment="kube-system/metrics-server" startTime="2023-03-23 21:35:15.000574338 +0000 UTC m=+637.139566208"
I0323 21:35:15.001064 1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/metrics-server" duration="474.303µs"
I0323 21:35:15.484203 1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="84.9µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:58408" resp=200
I0323 21:35:16.439469 1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
... skipping 6 lines ...
I0323 21:35:27.695311 1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-342/pvc-mqvvj]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0323 21:35:27.695349 1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-342/pvc-mqvvj]: no volume found
I0323 21:35:27.695356 1 pv_controller.go:1455] provisionClaim[azurefile-342/pvc-mqvvj]: started
I0323 21:35:27.695388 1 pv_controller.go:1764] scheduleOperation[provision-azurefile-342/pvc-mqvvj[0e37d8e4-78d3-4d3d-bf08-6331924947ed]]
I0323 21:35:27.695432 1 pv_controller.go:1496] provisionClaimOperation [azurefile-342/pvc-mqvvj] started, class: "azurefile-342-kubernetes.io-azure-file-dynamic-sc-6n4hn"
I0323 21:35:27.695444 1 pv_controller.go:1511] provisionClaimOperation [azurefile-342/pvc-mqvvj]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0323 21:35:27.702335 1 azure_provision.go:108] failed to get azure provider
I0323 21:35:27.702368 1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-342/pvc-mqvvj" with StorageClass "azurefile-342-kubernetes.io-azure-file-dynamic-sc-6n4hn": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0323 21:35:27.702421 1 goroutinemap.go:150] Operation for "provision-azurefile-342/pvc-mqvvj[0e37d8e4-78d3-4d3d-bf08-6331924947ed]" failed. No retries permitted until 2023-03-23 21:35:29.702407317 +0000 UTC m=+651.841399187 (durationBeforeRetry 2s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0323 21:35:27.702568 1 event.go:294] "Event occurred" object="azurefile-342/pvc-mqvvj" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
I0323 21:35:28.158330 1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0323 21:35:28.375253 1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1beta1.CSIStorageCapacity total 0 items received
I0323 21:35:28.413494 1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0323 21:35:35.484366 1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="92.801µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:60868" resp=200
I0323 21:35:36.435349 1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0323 21:35:37.442747 1 gc_controller.go:161] GC'ing orphaned
... skipping 4 lines ...
I0323 21:35:42.695391 1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-342/pvc-mqvvj]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0323 21:35:42.695487 1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-342/pvc-mqvvj]: no volume found
I0323 21:35:42.695527 1 pv_controller.go:1455] provisionClaim[azurefile-342/pvc-mqvvj]: started
I0323 21:35:42.695545 1 pv_controller.go:1764] scheduleOperation[provision-azurefile-342/pvc-mqvvj[0e37d8e4-78d3-4d3d-bf08-6331924947ed]]
I0323 21:35:42.695570 1 pv_controller.go:1496] provisionClaimOperation [azurefile-342/pvc-mqvvj] started, class: "azurefile-342-kubernetes.io-azure-file-dynamic-sc-6n4hn"
I0323 21:35:42.695644 1 pv_controller.go:1511] provisionClaimOperation [azurefile-342/pvc-mqvvj]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0323 21:35:42.700582 1 azure_provision.go:108] failed to get azure provider
I0323 21:35:42.700605 1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-342/pvc-mqvvj" with StorageClass "azurefile-342-kubernetes.io-azure-file-dynamic-sc-6n4hn": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0323 21:35:42.700756 1 goroutinemap.go:150] Operation for "provision-azurefile-342/pvc-mqvvj[0e37d8e4-78d3-4d3d-bf08-6331924947ed]" failed. No retries permitted until 2023-03-23 21:35:46.700739798 +0000 UTC m=+668.839731768 (durationBeforeRetry 4s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0323 21:35:42.700794 1 event.go:294] "Event occurred" object="azurefile-342/pvc-mqvvj" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
I0323 21:35:45.484312 1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="103.901µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:56230" resp=200
I0323 21:35:55.484773 1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="117.5µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:41722" resp=200
I0323 21:35:57.363549 1 reflector.go:382] k8s.io/client-go/informers/factory.go:134: forcing resync
I0323 21:35:57.443777 1 gc_controller.go:161] GC'ing orphaned
I0323 21:35:57.443879 1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0323 21:35:57.695660 1 pv_controller_base.go:556] resyncing PV controller
I0323 21:35:57.695729 1 pv_controller_base.go:670] storeObjectUpdate updating claim "azurefile-342/pvc-mqvvj" with version 3355
I0323 21:35:57.695746 1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-342/pvc-mqvvj]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0323 21:35:57.695773 1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-342/pvc-mqvvj]: no volume found
I0323 21:35:57.695786 1 pv_controller.go:1455] provisionClaim[azurefile-342/pvc-mqvvj]: started
I0323 21:35:57.695798 1 pv_controller.go:1764] scheduleOperation[provision-azurefile-342/pvc-mqvvj[0e37d8e4-78d3-4d3d-bf08-6331924947ed]]
I0323 21:35:57.695818 1 pv_controller.go:1496] provisionClaimOperation [azurefile-342/pvc-mqvvj] started, class: "azurefile-342-kubernetes.io-azure-file-dynamic-sc-6n4hn"
I0323 21:35:57.695830 1 pv_controller.go:1511] provisionClaimOperation [azurefile-342/pvc-mqvvj]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0323 21:35:57.710486 1 azure_provision.go:108] failed to get azure provider
I0323 21:35:57.710509 1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-342/pvc-mqvvj" with StorageClass "azurefile-342-kubernetes.io-azure-file-dynamic-sc-6n4hn": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0323 21:35:57.710676 1 goroutinemap.go:150] Operation for "provision-azurefile-342/pvc-mqvvj[0e37d8e4-78d3-4d3d-bf08-6331924947ed]" failed. No retries permitted until 2023-03-23 21:36:05.710661043 +0000 UTC m=+687.849652913 (durationBeforeRetry 8s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0323 21:35:57.710785 1 event.go:294] "Event occurred" object="azurefile-342/pvc-mqvvj" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
I0323 21:35:58.172524 1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0323 21:36:03.439556 1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0323 21:36:05.484568 1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="98.901µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:52750" resp=200
I0323 21:36:08.000532 1 deployment_controller.go:576] "Started syncing deployment" deployment="calico-apiserver/calico-apiserver" startTime="2023-03-23 21:36:08.000493463 +0000 UTC m=+690.139485433"
I0323 21:36:08.001188 1 deployment_controller.go:578] "Finished syncing deployment" deployment="calico-apiserver/calico-apiserver" duration="678.803µs"
I0323 21:36:10.509358 1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
... skipping 3 lines ...
I0323 21:36:12.696800 1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-342/pvc-mqvvj]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0323 21:36:12.696855 1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-342/pvc-mqvvj]: no volume found
I0323 21:36:12.696876 1 pv_controller.go:1455] provisionClaim[azurefile-342/pvc-mqvvj]: started
I0323 21:36:12.696906 1 pv_controller.go:1764] scheduleOperation[provision-azurefile-342/pvc-mqvvj[0e37d8e4-78d3-4d3d-bf08-6331924947ed]]
I0323 21:36:12.696949 1 pv_controller.go:1496] provisionClaimOperation [azurefile-342/pvc-mqvvj] started, class: "azurefile-342-kubernetes.io-azure-file-dynamic-sc-6n4hn"
I0323 21:36:12.696983 1 pv_controller.go:1511] provisionClaimOperation [azurefile-342/pvc-mqvvj]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0323 21:36:12.701436 1 azure_provision.go:108] failed to get azure provider
I0323 21:36:12.701464 1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-342/pvc-mqvvj" with StorageClass "azurefile-342-kubernetes.io-azure-file-dynamic-sc-6n4hn": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0323 21:36:12.701624 1 goroutinemap.go:150] Operation for "provision-azurefile-342/pvc-mqvvj[0e37d8e4-78d3-4d3d-bf08-6331924947ed]" failed. No retries permitted until 2023-03-23 21:36:28.701608471 +0000 UTC m=+710.840600341 (durationBeforeRetry 16s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0323 21:36:12.701723 1 event.go:294] "Event occurred" object="azurefile-342/pvc-mqvvj" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
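
[editor's note] Note the durationBeforeRetry progression across these repeats: 8s above, 16s here, then 32s, 1m4s, and finally a 2m2s cap further down. goroutinemap retries each named operation with exponential backoff, doubling from an initial 500ms (the earlier 500ms-4s steps fall in the skipped lines; the fresh claim in azurefile-6538 later in this log starts over at 500ms), which is why the failure bursts spread out while the test's 5-minute bind timeout runs down. A sketch of the schedule, with the constants inferred from this log rather than taken from the source:

package main

import (
	"fmt"
	"time"
)

const (
	initialDelay = 500 * time.Millisecond          // first retry delay seen in the log
	maxDelay     = 2*time.Minute + 2*time.Second   // cap, per the repeated 2m2s entries
)

func main() {
	d := initialDelay
	for attempt := 1; attempt <= 10; attempt++ {
		fmt.Printf("attempt %d: durationBeforeRetry %v\n", attempt, d)
		d *= 2
		if d > maxDelay {
			d = maxDelay
		}
	}
}
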
I0323 21:36:13.425851 1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 40 items received
I0323 21:36:15.484560 1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="104.001µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:56972" resp=200
I0323 21:36:17.443953 1 gc_controller.go:161] GC'ing orphaned
I0323 21:36:17.443984 1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0323 21:36:19.380250 1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.RoleBinding total 0 items received
I0323 21:36:25.484501 1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="100.601µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:44146" resp=200
... skipping 15 lines ...
I0323 21:36:42.698018 1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-342/pvc-mqvvj]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0323 21:36:42.698045 1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-342/pvc-mqvvj]: no volume found
I0323 21:36:42.698056 1 pv_controller.go:1455] provisionClaim[azurefile-342/pvc-mqvvj]: started
I0323 21:36:42.698068 1 pv_controller.go:1764] scheduleOperation[provision-azurefile-342/pvc-mqvvj[0e37d8e4-78d3-4d3d-bf08-6331924947ed]]
I0323 21:36:42.698102 1 pv_controller.go:1496] provisionClaimOperation [azurefile-342/pvc-mqvvj] started, class: "azurefile-342-kubernetes.io-azure-file-dynamic-sc-6n4hn"
I0323 21:36:42.698110 1 pv_controller.go:1511] provisionClaimOperation [azurefile-342/pvc-mqvvj]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0323 21:36:42.703744 1 azure_provision.go:108] failed to get azure provider
I0323 21:36:42.703768 1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-342/pvc-mqvvj" with StorageClass "azurefile-342-kubernetes.io-azure-file-dynamic-sc-6n4hn": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0323 21:36:42.703812 1 goroutinemap.go:150] Operation for "provision-azurefile-342/pvc-mqvvj[0e37d8e4-78d3-4d3d-bf08-6331924947ed]" failed. No retries permitted until 2023-03-23 21:37:14.703800747 +0000 UTC m=+756.842792617 (durationBeforeRetry 32s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0323 21:36:42.703883 1 event.go:294] "Event occurred" object="azurefile-342/pvc-mqvvj" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
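
[editor's note] Each retry also records a Warning event on the claim itself (event.go:294), so the failure is visible from the test side as well: kubectl describe pvc pvc-mqvvj -n azurefile-342 would show the accumulating ProvisioningFailed messages that the suite sees while it waits for the claim to bind.
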
I0323 21:36:45.483805 1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="103.5µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:43156" resp=200
I0323 21:36:52.379598 1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Endpoints total 0 items received
I0323 21:36:55.484811 1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="97.5µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:54316" resp=200
I0323 21:36:57.365798 1 reflector.go:382] k8s.io/client-go/informers/factory.go:134: forcing resync
I0323 21:36:57.445085 1 gc_controller.go:161] GC'ing orphaned
I0323 21:36:57.445120 1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
... skipping 28 lines ...
I0323 21:37:27.699293 1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-342/pvc-mqvvj]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0323 21:37:27.699314 1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-342/pvc-mqvvj]: no volume found
I0323 21:37:27.699327 1 pv_controller.go:1455] provisionClaim[azurefile-342/pvc-mqvvj]: started
I0323 21:37:27.699337 1 pv_controller.go:1764] scheduleOperation[provision-azurefile-342/pvc-mqvvj[0e37d8e4-78d3-4d3d-bf08-6331924947ed]]
I0323 21:37:27.699355 1 pv_controller.go:1496] provisionClaimOperation [azurefile-342/pvc-mqvvj] started, class: "azurefile-342-kubernetes.io-azure-file-dynamic-sc-6n4hn"
I0323 21:37:27.699366 1 pv_controller.go:1511] provisionClaimOperation [azurefile-342/pvc-mqvvj]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0323 21:37:27.706724 1 azure_provision.go:108] failed to get azure provider
I0323 21:37:27.706749 1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-342/pvc-mqvvj" with StorageClass "azurefile-342-kubernetes.io-azure-file-dynamic-sc-6n4hn": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0323 21:37:27.706921 1 goroutinemap.go:150] Operation for "provision-azurefile-342/pvc-mqvvj[0e37d8e4-78d3-4d3d-bf08-6331924947ed]" failed. No retries permitted until 2023-03-23 21:38:31.706905122 +0000 UTC m=+833.845896992 (durationBeforeRetry 1m4s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0323 21:37:27.707034 1 event.go:294] "Event occurred" object="azurefile-342/pvc-mqvvj" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
I0323 21:37:28.233590 1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0323 21:37:35.485558 1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="102.001µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:52876" resp=200
I0323 21:37:37.446119 1 gc_controller.go:161] GC'ing orphaned
I0323 21:37:37.446151 1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0323 21:37:38.382648 1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Node total 9 items received
I0323 21:37:42.368956 1 reflector.go:382] k8s.io/client-go/informers/factory.go:134: forcing resync
... skipping 67 lines ...
I0323 21:38:42.702750 1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-342/pvc-mqvvj]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0323 21:38:42.702901 1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-342/pvc-mqvvj]: no volume found
I0323 21:38:42.702998 1 pv_controller.go:1455] provisionClaim[azurefile-342/pvc-mqvvj]: started
I0323 21:38:42.703117 1 pv_controller.go:1764] scheduleOperation[provision-azurefile-342/pvc-mqvvj[0e37d8e4-78d3-4d3d-bf08-6331924947ed]]
I0323 21:38:42.703202 1 pv_controller.go:1496] provisionClaimOperation [azurefile-342/pvc-mqvvj] started, class: "azurefile-342-kubernetes.io-azure-file-dynamic-sc-6n4hn"
I0323 21:38:42.703232 1 pv_controller.go:1511] provisionClaimOperation [azurefile-342/pvc-mqvvj]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0323 21:38:42.707845 1 azure_provision.go:108] failed to get azure provider
I0323 21:38:42.707870 1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-342/pvc-mqvvj" with StorageClass "azurefile-342-kubernetes.io-azure-file-dynamic-sc-6n4hn": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0323 21:38:42.708003 1 goroutinemap.go:150] Operation for "provision-azurefile-342/pvc-mqvvj[0e37d8e4-78d3-4d3d-bf08-6331924947ed]" failed. No retries permitted until 2023-03-23 21:40:44.707988574 +0000 UTC m=+966.846980544 (durationBeforeRetry 2m2s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0323 21:38:42.708068 1 event.go:294] "Event occurred" object="azurefile-342/pvc-mqvvj" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
I0323 21:38:45.484535 1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="89.501µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:39800" resp=200
I0323 21:38:47.286706 1 attach_detach_controller.go:673] processVolumesInUse for node "capz-7bsgqo-control-plane-78g6l"
I0323 21:38:47.551075 1 node_lifecycle_controller.go:1046] Node capz-7bsgqo-control-plane-78g6l ReadyCondition updated. Updating timestamp.
I0323 21:38:48.787102 1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1beta1.PodSecurityPolicy total 0 items received
I0323 21:38:55.484269 1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="95.4µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:47540" resp=200
I0323 21:38:57.372407 1 reflector.go:382] k8s.io/client-go/informers/factory.go:134: forcing resync
... skipping 94 lines ...
I0323 21:40:02.291685 1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-6538/pvc-jkm4r]: no volume found
I0323 21:40:02.291738 1 pv_controller.go:1455] provisionClaim[azurefile-6538/pvc-jkm4r]: started
I0323 21:40:02.291764 1 pv_controller.go:1764] scheduleOperation[provision-azurefile-6538/pvc-jkm4r[66ba176e-7b4e-4748-a8f7-389a4f104904]]
I0323 21:40:02.291839 1 pv_controller.go:1775] operation "provision-azurefile-6538/pvc-jkm4r[66ba176e-7b4e-4748-a8f7-389a4f104904]" is already running, skipping
I0323 21:40:02.291946 1 pvc_protection_controller.go:331] "Got event on PVC" pvc="azurefile-6538/pvc-jkm4r"
I0323 21:40:02.292191 1 pv_controller_base.go:670] storeObjectUpdate updating claim "azurefile-6538/pvc-jkm4r" with version 4420
I0323 21:40:02.293670 1 azure_provision.go:108] failed to get azure provider
I0323 21:40:02.293689 1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-6538/pvc-jkm4r" with StorageClass "azurefile-6538-kubernetes.io-azure-file-dynamic-sc-l28c8": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0323 21:40:02.293799 1 goroutinemap.go:150] Operation for "provision-azurefile-6538/pvc-jkm4r[66ba176e-7b4e-4748-a8f7-389a4f104904]" failed. No retries permitted until 2023-03-23 21:40:02.793702811 +0000 UTC m=+924.932694781 (durationBeforeRetry 500ms). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0323 21:40:02.293987 1 event.go:294] "Event occurred" object="azurefile-6538/pvc-jkm4r" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
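
[editor's note] Two details worth noting in this burst for the next test's claim: pvc_protection_controller is merely reacting to the PVC event (it manages the kubernetes.io/pvc-protection finalizer), and the "operation ... is already running, skipping" line shows goroutinemap's other job besides backoff: deduplicating operations by name so only one provisioning attempt per claim runs at a time. A generic sketch of that dedupe pattern (illustrative, not the actual implementation):

package main

import (
	"fmt"
	"sync"
)

type goRoutineMap struct {
	mu      sync.Mutex
	running map[string]bool
}

// Run starts op under the given name unless an operation with that name is
// already in flight, in which case it is skipped with an error.
func (g *goRoutineMap) Run(name string, op func()) error {
	g.mu.Lock()
	if g.running[name] {
		g.mu.Unlock()
		return fmt.Errorf("operation %q is already running, skipping", name)
	}
	g.running[name] = true
	g.mu.Unlock()

	go func() {
		defer func() {
			g.mu.Lock()
			delete(g.running, name)
			g.mu.Unlock()
		}()
		op()
	}()
	return nil
}

func main() {
	g := &goRoutineMap{running: map[string]bool{}}
	done := make(chan struct{})
	_ = g.Run("provision-azurefile-6538/pvc-jkm4r", func() { <-done })
	if err := g.Run("provision-azurefile-6538/pvc-jkm4r", func() {}); err != nil {
		fmt.Println(err) // second schedule is rejected while the first still runs
	}
	close(done)
}
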
I0323 21:40:04.379226 1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.CronJob total 0 items received
I0323 21:40:04.480367 1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.NetworkPolicy total 0 items received
I0323 21:40:05.484553 1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="95.2µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:43952" resp=200
I0323 21:40:06.573787 1 namespaced_resources_deleter.go:502] namespace controller - deleteAllContent - namespace: azurefile-342
I0323 21:40:06.601980 1 tokens_controller.go:252] syncServiceAccount(azurefile-342/default), service account deleted, removing tokens
I0323 21:40:06.602196 1 serviceaccounts_controller.go:188] Finished syncing namespace "azurefile-342" (2.2µs)
... skipping 33 lines ...
I0323 21:40:12.707171 1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-6538/pvc-jkm4r]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0323 21:40:12.707197 1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-6538/pvc-jkm4r]: no volume found
I0323 21:40:12.707203 1 pv_controller.go:1455] provisionClaim[azurefile-6538/pvc-jkm4r]: started
I0323 21:40:12.707214 1 pv_controller.go:1764] scheduleOperation[provision-azurefile-6538/pvc-jkm4r[66ba176e-7b4e-4748-a8f7-389a4f104904]]
I0323 21:40:12.707229 1 pv_controller.go:1496] provisionClaimOperation [azurefile-6538/pvc-jkm4r] started, class: "azurefile-6538-kubernetes.io-azure-file-dynamic-sc-l28c8"
I0323 21:40:12.707235 1 pv_controller.go:1511] provisionClaimOperation [azurefile-6538/pvc-jkm4r]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0323 21:40:12.710366 1 azure_provision.go:108] failed to get azure provider
I0323 21:40:12.710389 1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-6538/pvc-jkm4r" with StorageClass "azurefile-6538-kubernetes.io-azure-file-dynamic-sc-l28c8": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0323 21:40:12.710417 1 goroutinemap.go:150] Operation for "provision-azurefile-6538/pvc-jkm4r[66ba176e-7b4e-4748-a8f7-389a4f104904]" failed. No retries permitted until 2023-03-23 21:40:13.710405007 +0000 UTC m=+935.849396877 (durationBeforeRetry 1s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0323 21:40:12.710663 1 event.go:294] "Event occurred" object="azurefile-6538/pvc-jkm4r" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
I0323 21:40:13.000305 1 tokencleaner.go:166] Finished syncing secret "kube-system/bootstrap-token-nemi3t" (20.8µs)
I0323 21:40:13.000347 1 tokencleaner.go:166] Finished syncing secret "kube-system/bootstrap-token-j66lg8" (4.2µs)
I0323 21:40:15.484243 1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="86.3µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:36048" resp=200
I0323 21:40:16.841216 1 namespace_controller.go:185] Namespace has been deleted azurefile-342
I0323 21:40:16.841414 1 namespace_controller.go:180] Finished syncing namespace "azurefile-342" (223.701µs)
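
[editor's note] Teardown of the finished test namespace follows the usual sequence: namespaced_resources_deleter removes every object in azurefile-342 ("deleteAllContent"), the tokens controller drops the default service-account tokens, and the namespace controller completes the sync once nothing remains. A small helper of the kind a test harness might use to wait for that to finish; the function is hypothetical, but it is built on real client-go/apimachinery calls:

package teardown

import (
	"context"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForNamespaceGone polls until the namespace has been fully finalized
// and deleted, i.e. GET returns NotFound.
func waitForNamespaceGone(client kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollImmediate(time.Second, timeout, func() (bool, error) {
		_, err := client.CoreV1().Namespaces().Get(context.TODO(), name, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			return true, nil
		}
		return false, err
	})
}
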
I0323 21:40:17.450947 1 gc_controller.go:161] GC'ing orphaned
... skipping 9 lines ...
I0323 21:40:27.708120 1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-6538/pvc-jkm4r]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0323 21:40:27.708149 1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-6538/pvc-jkm4r]: no volume found
I0323 21:40:27.708159 1 pv_controller.go:1455] provisionClaim[azurefile-6538/pvc-jkm4r]: started
I0323 21:40:27.708170 1 pv_controller.go:1764] scheduleOperation[provision-azurefile-6538/pvc-jkm4r[66ba176e-7b4e-4748-a8f7-389a4f104904]]
I0323 21:40:27.708187 1 pv_controller.go:1496] provisionClaimOperation [azurefile-6538/pvc-jkm4r] started, class: "azurefile-6538-kubernetes.io-azure-file-dynamic-sc-l28c8"
I0323 21:40:27.708198 1 pv_controller.go:1511] provisionClaimOperation [azurefile-6538/pvc-jkm4r]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0323 21:40:27.710289 1 azure_provision.go:108] failed to get azure provider
I0323 21:40:27.710499 1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-6538/pvc-jkm4r" with StorageClass "azurefile-6538-kubernetes.io-azure-file-dynamic-sc-l28c8": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0323 21:40:27.710631 1 goroutinemap.go:150] Operation for "provision-azurefile-6538/pvc-jkm4r[66ba176e-7b4e-4748-a8f7-389a4f104904]" failed. No retries permitted until 2023-03-23 21:40:29.710598671 +0000 UTC m=+951.849590641 (durationBeforeRetry 2s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0323 21:40:27.710983 1 event.go:294] "Event occurred" object="azurefile-6538/pvc-jkm4r" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
I0323 21:40:28.356985 1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0323 21:40:35.484923 1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="109.301µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:32938" resp=200
I0323 21:40:35.622029 1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0323 21:40:37.451622 1 gc_controller.go:161] GC'ing orphaned
I0323 21:40:37.451654 1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0323 21:40:40.930905 1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1beta2.FlowSchema total 0 items received
... skipping 4 lines ...
I0323 21:40:42.709017 1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-6538/pvc-jkm4r]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0323 21:40:42.709045 1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-6538/pvc-jkm4r]: no volume found
I0323 21:40:42.709052 1 pv_controller.go:1455] provisionClaim[azurefile-6538/pvc-jkm4r]: started
I0323 21:40:42.709061 1 pv_controller.go:1764] scheduleOperation[provision-azurefile-6538/pvc-jkm4r[66ba176e-7b4e-4748-a8f7-389a4f104904]]
I0323 21:40:42.709080 1 pv_controller.go:1496] provisionClaimOperation [azurefile-6538/pvc-jkm4r] started, class: "azurefile-6538-kubernetes.io-azure-file-dynamic-sc-l28c8"
I0323 21:40:42.709086 1 pv_controller.go:1511] provisionClaimOperation [azurefile-6538/pvc-jkm4r]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0323 21:40:42.713538 1 azure_provision.go:108] failed to get azure provider
I0323 21:40:42.713564 1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-6538/pvc-jkm4r" with StorageClass "azurefile-6538-kubernetes.io-azure-file-dynamic-sc-l28c8": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0323 21:40:42.713600 1 goroutinemap.go:150] Operation for "provision-azurefile-6538/pvc-jkm4r[66ba176e-7b4e-4748-a8f7-389a4f104904]" failed. No retries permitted until 2023-03-23 21:40:46.713586085 +0000 UTC m=+968.852578055 (durationBeforeRetry 4s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0323 21:40:42.713861 1 event.go:294] "Event occurred" object="azurefile-6538/pvc-jkm4r" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
I0323 21:40:43.606607 1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.PersistentVolume total 6 items received
I0323 21:40:45.484506 1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="113.308µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:42500" resp=200
I0323 21:40:48.377277 1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ReplicaSet total 0 items received
I0323 21:40:48.455152 1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0323 21:40:48.459095 1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ControllerRevision total 0 items received
I0323 21:40:49.030782 1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ValidatingWebhookConfiguration total 0 items received
... skipping 7 lines ...
I0323 21:40:57.710033 1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-6538/pvc-jkm4r]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0323 21:40:57.710116 1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-6538/pvc-jkm4r]: no volume found
I0323 21:40:57.710127 1 pv_controller.go:1455] provisionClaim[azurefile-6538/pvc-jkm4r]: started
I0323 21:40:57.710188 1 pv_controller.go:1764] scheduleOperation[provision-azurefile-6538/pvc-jkm4r[66ba176e-7b4e-4748-a8f7-389a4f104904]]
I0323 21:40:57.710264 1 pv_controller.go:1496] provisionClaimOperation [azurefile-6538/pvc-jkm4r] started, class: "azurefile-6538-kubernetes.io-azure-file-dynamic-sc-l28c8"
I0323 21:40:57.710275 1 pv_controller.go:1511] provisionClaimOperation [azurefile-6538/pvc-jkm4r]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0323 21:40:57.715624 1 azure_provision.go:108] failed to get azure provider
I0323 21:40:57.715709 1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-6538/pvc-jkm4r" with StorageClass "azurefile-6538-kubernetes.io-azure-file-dynamic-sc-l28c8": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0323 21:40:57.715787 1 goroutinemap.go:150] Operation for "provision-azurefile-6538/pvc-jkm4r[66ba176e-7b4e-4748-a8f7-389a4f104904]" failed. No retries permitted until 2023-03-23 21:41:05.715746774 +0000 UTC m=+987.854738744 (durationBeforeRetry 8s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0323 21:40:57.715865 1 event.go:294] "Event occurred" object="azurefile-6538/pvc-jkm4r" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
I0323 21:40:58.375450 1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0323 21:41:02.386195 1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Secret total 16 items received
I0323 21:41:04.255254 1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 6 items received
I0323 21:41:05.484202 1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="113.203µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:37238" resp=200
I0323 21:41:06.992911 1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1beta2.PriorityLevelConfiguration total 0 items received
I0323 21:41:12.376602 1 reflector.go:382] k8s.io/client-go/informers/factory.go:134: forcing resync
... skipping 2 lines ...
I0323 21:41:12.710940 1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-6538/pvc-jkm4r]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0323 21:41:12.710966 1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-6538/pvc-jkm4r]: no volume found
I0323 21:41:12.710971 1 pv_controller.go:1455] provisionClaim[azurefile-6538/pvc-jkm4r]: started
I0323 21:41:12.710983 1 pv_controller.go:1764] scheduleOperation[provision-azurefile-6538/pvc-jkm4r[66ba176e-7b4e-4748-a8f7-389a4f104904]]
I0323 21:41:12.711006 1 pv_controller.go:1496] provisionClaimOperation [azurefile-6538/pvc-jkm4r] started, class: "azurefile-6538-kubernetes.io-azure-file-dynamic-sc-l28c8"
I0323 21:41:12.711013 1 pv_controller.go:1511] provisionClaimOperation [azurefile-6538/pvc-jkm4r]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0323 21:41:12.723235 1 azure_provision.go:108] failed to get azure provider
I0323 21:41:12.723257 1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-6538/pvc-jkm4r" with StorageClass "azurefile-6538-kubernetes.io-azure-file-dynamic-sc-l28c8": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0323 21:41:12.723289 1 goroutinemap.go:150] Operation for "provision-azurefile-6538/pvc-jkm4r[66ba176e-7b4e-4748-a8f7-389a4f104904]" failed. No retries permitted until 2023-03-23 21:41:28.723276892 +0000 UTC m=+1010.862268862 (durationBeforeRetry 16s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0323 21:41:12.723559 1 event.go:294] "Event occurred" object="azurefile-6538/pvc-jkm4r" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
I0323 21:41:15.484617 1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="86.203µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:41948" resp=200
I0323 21:41:17.453107 1 gc_controller.go:161] GC'ing orphaned
I0323 21:41:17.453517 1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0323 21:41:17.778172 1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0323 21:41:20.806039 1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0323 21:41:25.484121 1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="79.903µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:58858" resp=200
... skipping 20 lines ...
I0323 21:41:42.712408 1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-6538/pvc-jkm4r]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0323 21:41:42.712656 1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-6538/pvc-jkm4r]: no volume found
I0323 21:41:42.712696 1 pv_controller.go:1455] provisionClaim[azurefile-6538/pvc-jkm4r]: started
I0323 21:41:42.712762 1 pv_controller.go:1764] scheduleOperation[provision-azurefile-6538/pvc-jkm4r[66ba176e-7b4e-4748-a8f7-389a4f104904]]
I0323 21:41:42.712812 1 pv_controller.go:1496] provisionClaimOperation [azurefile-6538/pvc-jkm4r] started, class: "azurefile-6538-kubernetes.io-azure-file-dynamic-sc-l28c8"
I0323 21:41:42.712824 1 pv_controller.go:1511] provisionClaimOperation [azurefile-6538/pvc-jkm4r]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0323 21:41:42.721573 1 azure_provision.go:108] failed to get azure provider
I0323 21:41:42.721602 1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-6538/pvc-jkm4r" with StorageClass "azurefile-6538-kubernetes.io-azure-file-dynamic-sc-l28c8": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0323 21:41:42.721658 1 goroutinemap.go:150] Operation for "provision-azurefile-6538/pvc-jkm4r[66ba176e-7b4e-4748-a8f7-389a4f104904]" failed. No retries permitted until 2023-03-23 21:42:14.721644286 +0000 UTC m=+1056.860636156 (durationBeforeRetry 32s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0323 21:41:42.721836 1 event.go:294] "Event occurred" object="azurefile-6538/pvc-jkm4r" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
I0323 21:41:45.484581 1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="80.702µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:38028" resp=200
I0323 21:41:50.778820 1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0323 21:41:55.484181 1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="86.102µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:33284" resp=200
I0323 21:41:55.625540 1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0323 21:41:56.241894 1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0323 21:41:57.378154 1 reflector.go:382] k8s.io/client-go/informers/factory.go:134: forcing resync
... skipping 41 lines ...
I0323 21:42:27.714078 1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-6538/pvc-jkm4r]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0323 21:42:27.714103 1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-6538/pvc-jkm4r]: no volume found
I0323 21:42:27.714109 1 pv_controller.go:1455] provisionClaim[azurefile-6538/pvc-jkm4r]: started
I0323 21:42:27.714123 1 pv_controller.go:1764] scheduleOperation[provision-azurefile-6538/pvc-jkm4r[66ba176e-7b4e-4748-a8f7-389a4f104904]]
I0323 21:42:27.714145 1 pv_controller.go:1496] provisionClaimOperation [azurefile-6538/pvc-jkm4r] started, class: "azurefile-6538-kubernetes.io-azure-file-dynamic-sc-l28c8"
I0323 21:42:27.714158 1 pv_controller.go:1511] provisionClaimOperation [azurefile-6538/pvc-jkm4r]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0323 21:42:27.721412 1 azure_provision.go:108] failed to get azure provider
I0323 21:42:27.721436 1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-6538/pvc-jkm4r" with StorageClass "azurefile-6538-kubernetes.io-azure-file-dynamic-sc-l28c8": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0323 21:42:27.721561 1 goroutinemap.go:150] Operation for "provision-azurefile-6538/pvc-jkm4r[66ba176e-7b4e-4748-a8f7-389a4f104904]" failed. No retries permitted until 2023-03-23 21:43:31.721547738 +0000 UTC m=+1133.860539708 (durationBeforeRetry 1m4s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0323 21:42:27.721683 1 event.go:294] "Event occurred" object="azurefile-6538/pvc-jkm4r" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
I0323 21:42:28.500416 1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0323 21:42:35.484375 1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="117.504µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:45956" resp=200
I0323 21:42:37.454791 1 gc_controller.go:161] GC'ing orphaned
I0323 21:42:37.454825 1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0323 21:42:42.380069 1 reflector.go:382] k8s.io/client-go/informers/factory.go:134: forcing resync
I0323 21:42:42.525723 1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
... skipping 52 lines ...
I0323 21:43:42.717794 1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-6538/pvc-jkm4r]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0323 21:43:42.717838 1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-6538/pvc-jkm4r]: no volume found
I0323 21:43:42.717871 1 pv_controller.go:1455] provisionClaim[azurefile-6538/pvc-jkm4r]: started
I0323 21:43:42.717915 1 pv_controller.go:1764] scheduleOperation[provision-azurefile-6538/pvc-jkm4r[66ba176e-7b4e-4748-a8f7-389a4f104904]]
I0323 21:43:42.717936 1 pv_controller.go:1496] provisionClaimOperation [azurefile-6538/pvc-jkm4r] started, class: "azurefile-6538-kubernetes.io-azure-file-dynamic-sc-l28c8"
I0323 21:43:42.717947 1 pv_controller.go:1511] provisionClaimOperation [azurefile-6538/pvc-jkm4r]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0323 21:43:42.723402 1 azure_provision.go:108] failed to get azure provider
I0323 21:43:42.723423 1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-6538/pvc-jkm4r" with StorageClass "azurefile-6538-kubernetes.io-azure-file-dynamic-sc-l28c8": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0323 21:43:42.723454 1 goroutinemap.go:150] Operation for "provision-azurefile-6538/pvc-jkm4r[66ba176e-7b4e-4748-a8f7-389a4f104904]" failed. No retries permitted until 2023-03-23 21:45:44.723441153 +0000 UTC m=+1266.862433023 (durationBeforeRetry 2m2s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0323 21:43:42.723477 1 event.go:294] "Event occurred" object="azurefile-6538/pvc-jkm4r" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
I0323 21:43:43.439595 1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0323 21:43:44.260421 1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0323 21:43:45.396573 1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Node total 10 items received
I0323 21:43:45.484569 1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="82.702µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:39092" resp=200
I0323 21:43:52.736082 1 attach_detach_controller.go:673] processVolumesInUse for node "capz-7bsgqo-control-plane-78g6l"
I0323 21:43:55.485579 1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="97.003µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:60886" resp=200
... skipping 90 lines ...
I0323 21:45:03.804244 1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-6841/pvc-hmjkr]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0323 21:45:03.804262 1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-6841/pvc-hmjkr]: no volume found
I0323 21:45:03.804267 1 pv_controller.go:1455] provisionClaim[azurefile-6841/pvc-hmjkr]: started
I0323 21:45:03.804373 1 pv_controller.go:1764] scheduleOperation[provision-azurefile-6841/pvc-hmjkr[8dfb7368-c1c5-42cf-8e51-c69d81d2953a]]
I0323 21:45:03.804384 1 pv_controller.go:1775] operation "provision-azurefile-6841/pvc-hmjkr[8dfb7368-c1c5-42cf-8e51-c69d81d2953a]" is already running, skipping
I0323 21:45:03.804400 1 pvc_protection_controller.go:331] "Got event on PVC" pvc="azurefile-6841/pvc-hmjkr"
I0323 21:45:03.805792 1 azure_provision.go:108] failed to get azure provider
I0323 21:45:03.805811 1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-6841/pvc-hmjkr" with StorageClass "azurefile-6841-kubernetes.io-azure-file-dynamic-sc-9dfrn": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0323 21:45:03.805902 1 goroutinemap.go:150] Operation for "provision-azurefile-6841/pvc-hmjkr[8dfb7368-c1c5-42cf-8e51-c69d81d2953a]" failed. No retries permitted until 2023-03-23 21:45:04.305826259 +0000 UTC m=+1226.444818129 (durationBeforeRetry 500ms). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0323 21:45:03.806125 1 event.go:294] "Event occurred" object="azurefile-6841/pvc-hmjkr" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
I0323 21:45:05.483714 1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="96.102µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:38220" resp=200
I0323 21:45:08.256705 1 namespaced_resources_deleter.go:502] namespace controller - deleteAllContent - namespace: azurefile-6538
I0323 21:45:08.300020 1 tokens_controller.go:252] syncServiceAccount(azurefile-6538/default), service account deleted, removing tokens
I0323 21:45:08.300318 1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azurefile-6538, name default, uid a3167803-4b3f-41a1-adf7-5a3b8d8d68d8, event type delete
I0323 21:45:08.300291 1 serviceaccounts_controller.go:188] Finished syncing namespace "azurefile-6538" (2.1µs)
I0323 21:45:08.306091 1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=secrets, namespace azurefile-6538, name default-token-f6qq8, uid 83de628e-653f-4dee-b372-a28fd31c6662, event type delete
... skipping 27 lines ...
I0323 21:45:12.722983 1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-6841/pvc-hmjkr]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0323 21:45:12.723010 1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-6841/pvc-hmjkr]: no volume found
I0323 21:45:12.723015 1 pv_controller.go:1455] provisionClaim[azurefile-6841/pvc-hmjkr]: started
I0323 21:45:12.723028 1 pv_controller.go:1764] scheduleOperation[provision-azurefile-6841/pvc-hmjkr[8dfb7368-c1c5-42cf-8e51-c69d81d2953a]]
I0323 21:45:12.723051 1 pv_controller.go:1496] provisionClaimOperation [azurefile-6841/pvc-hmjkr] started, class: "azurefile-6841-kubernetes.io-azure-file-dynamic-sc-9dfrn"
I0323 21:45:12.723058 1 pv_controller.go:1511] provisionClaimOperation [azurefile-6841/pvc-hmjkr]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0323 21:45:12.733772 1 azure_provision.go:108] failed to get azure provider
I0323 21:45:12.733803 1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-6841/pvc-hmjkr" with StorageClass "azurefile-6841-kubernetes.io-azure-file-dynamic-sc-9dfrn": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0323 21:45:12.733834 1 goroutinemap.go:150] Operation for "provision-azurefile-6841/pvc-hmjkr[8dfb7368-c1c5-42cf-8e51-c69d81d2953a]" failed. No retries permitted until 2023-03-23 21:45:13.733819682 +0000 UTC m=+1235.872811552 (durationBeforeRetry 1s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0323 21:45:12.734063 1 event.go:294] "Event occurred" object="azurefile-6841/pvc-hmjkr" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
I0323 21:45:13.382737 1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.StatefulSet total 8 items received
I0323 21:45:13.411865 1 namespaced_resources_deleter.go:502] namespace controller - deleteAllContent - namespace: azurefile-6538
I0323 21:45:13.552566 1 namespaced_resources_deleter.go:556] namespace controller - deleteAllContent - namespace: azurefile-6538, estimate: 0, errors: <nil>
I0323 21:45:13.552611 1 serviceaccounts_controller.go:188] Finished syncing namespace "azurefile-6538" (2.5µs)
I0323 21:45:13.566043 1 namespace_controller.go:180] Finished syncing namespace "azurefile-6538" (156.32196ms)
I0323 21:45:15.483988 1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="92.101µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:42148" resp=200
... skipping 10 lines ...
I0323 21:45:27.723791 1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-6841/pvc-hmjkr]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0323 21:45:27.723831 1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-6841/pvc-hmjkr]: no volume found
I0323 21:45:27.723839 1 pv_controller.go:1455] provisionClaim[azurefile-6841/pvc-hmjkr]: started
I0323 21:45:27.723874 1 pv_controller.go:1764] scheduleOperation[provision-azurefile-6841/pvc-hmjkr[8dfb7368-c1c5-42cf-8e51-c69d81d2953a]]
I0323 21:45:27.723915 1 pv_controller.go:1496] provisionClaimOperation [azurefile-6841/pvc-hmjkr] started, class: "azurefile-6841-kubernetes.io-azure-file-dynamic-sc-9dfrn"
I0323 21:45:27.723950 1 pv_controller.go:1511] provisionClaimOperation [azurefile-6841/pvc-hmjkr]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0323 21:45:27.733772 1 azure_provision.go:108] failed to get azure provider
I0323 21:45:27.733809 1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-6841/pvc-hmjkr" with StorageClass "azurefile-6841-kubernetes.io-azure-file-dynamic-sc-9dfrn": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0323 21:45:27.733886 1 goroutinemap.go:150] Operation for "provision-azurefile-6841/pvc-hmjkr[8dfb7368-c1c5-42cf-8e51-c69d81d2953a]" failed. No retries permitted until 2023-03-23 21:45:29.733852809 +0000 UTC m=+1251.872844779 (durationBeforeRetry 2s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0323 21:45:27.734052 1 event.go:294] "Event occurred" object="azurefile-6841/pvc-hmjkr" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
I0323 21:45:27.831999 1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.IngressClass total 8 items received
I0323 21:45:28.637040 1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0323 21:45:31.376554 1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.CertificateSigningRequest total 0 items received
I0323 21:45:32.450977 1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0323 21:45:33.383826 1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.CronJob total 7 items received
I0323 21:45:33.461800 1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
... skipping 8 lines ...
I0323 21:45:42.725049 1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-6841/pvc-hmjkr]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0323 21:45:42.725123 1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-6841/pvc-hmjkr]: no volume found
I0323 21:45:42.725143 1 pv_controller.go:1455] provisionClaim[azurefile-6841/pvc-hmjkr]: started
I0323 21:45:42.725166 1 pv_controller.go:1764] scheduleOperation[provision-azurefile-6841/pvc-hmjkr[8dfb7368-c1c5-42cf-8e51-c69d81d2953a]]
I0323 21:45:42.725208 1 pv_controller.go:1496] provisionClaimOperation [azurefile-6841/pvc-hmjkr] started, class: "azurefile-6841-kubernetes.io-azure-file-dynamic-sc-9dfrn"
I0323 21:45:42.725226 1 pv_controller.go:1511] provisionClaimOperation [azurefile-6841/pvc-hmjkr]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0323 21:45:42.737592 1 azure_provision.go:108] failed to get azure provider
I0323 21:45:42.737614 1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-6841/pvc-hmjkr" with StorageClass "azurefile-6841-kubernetes.io-azure-file-dynamic-sc-9dfrn": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0323 21:45:42.737650 1 goroutinemap.go:150] Operation for "provision-azurefile-6841/pvc-hmjkr[8dfb7368-c1c5-42cf-8e51-c69d81d2953a]" failed. No retries permitted until 2023-03-23 21:45:46.737636668 +0000 UTC m=+1268.876628638 (durationBeforeRetry 4s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0323 21:45:42.737810 1 event.go:294] "Event occurred" object="azurefile-6841/pvc-hmjkr" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
I0323 21:45:43.390516 1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.CSIDriver total 0 items received
I0323 21:45:44.391832 1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.RoleBinding total 0 items received
I0323 21:45:45.384048 1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Endpoints total 2 items received
I0323 21:45:45.484008 1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="89.602µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:37838" resp=200
I0323 21:45:46.383801 1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Pod total 0 items received
I0323 21:45:47.382360 1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.PodDisruptionBudget total 0 items received
... skipping 7 lines ...
I0323 21:45:57.725490 1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-6841/pvc-hmjkr]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0323 21:45:57.725642 1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-6841/pvc-hmjkr]: no volume found
I0323 21:45:57.725656 1 pv_controller.go:1455] provisionClaim[azurefile-6841/pvc-hmjkr]: started
I0323 21:45:57.725669 1 pv_controller.go:1764] scheduleOperation[provision-azurefile-6841/pvc-hmjkr[8dfb7368-c1c5-42cf-8e51-c69d81d2953a]]
I0323 21:45:57.725749 1 pv_controller.go:1496] provisionClaimOperation [azurefile-6841/pvc-hmjkr] started, class: "azurefile-6841-kubernetes.io-azure-file-dynamic-sc-9dfrn"
I0323 21:45:57.725761 1 pv_controller.go:1511] provisionClaimOperation [azurefile-6841/pvc-hmjkr]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0323 21:45:57.731625 1 azure_provision.go:108] failed to get azure provider
I0323 21:45:57.731661 1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-6841/pvc-hmjkr" with StorageClass "azurefile-6841-kubernetes.io-azure-file-dynamic-sc-9dfrn": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0323 21:45:57.731698 1 goroutinemap.go:150] Operation for "provision-azurefile-6841/pvc-hmjkr[8dfb7368-c1c5-42cf-8e51-c69d81d2953a]" failed. No retries permitted until 2023-03-23 21:46:05.73168485 +0000 UTC m=+1287.870676720 (durationBeforeRetry 8s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0323 21:45:57.731897 1 event.go:294] "Event occurred" object="azurefile-6841/pvc-hmjkr" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
I0323 21:45:58.656729 1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0323 21:46:05.484228 1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="97.701µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:42420" resp=200
I0323 21:46:12.387551 1 reflector.go:382] k8s.io/client-go/informers/factory.go:134: forcing resync
I0323 21:46:12.725410 1 pv_controller_base.go:556] resyncing PV controller
I0323 21:46:12.725807 1 pv_controller_base.go:670] storeObjectUpdate updating claim "azurefile-6841/pvc-hmjkr" with version 5473
I0323 21:46:12.725864 1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-6841/pvc-hmjkr]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0323 21:46:12.726086 1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-6841/pvc-hmjkr]: no volume found
I0323 21:46:12.726102 1 pv_controller.go:1455] provisionClaim[azurefile-6841/pvc-hmjkr]: started
I0323 21:46:12.726129 1 pv_controller.go:1764] scheduleOperation[provision-azurefile-6841/pvc-hmjkr[8dfb7368-c1c5-42cf-8e51-c69d81d2953a]]
I0323 21:46:12.726156 1 pv_controller.go:1496] provisionClaimOperation [azurefile-6841/pvc-hmjkr] started, class: "azurefile-6841-kubernetes.io-azure-file-dynamic-sc-9dfrn"
I0323 21:46:12.726171 1 pv_controller.go:1511] provisionClaimOperation [azurefile-6841/pvc-hmjkr]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0323 21:46:12.741642 1 azure_provision.go:108] failed to get azure provider
I0323 21:46:12.741663 1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-6841/pvc-hmjkr" with StorageClass "azurefile-6841-kubernetes.io-azure-file-dynamic-sc-9dfrn": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0323 21:46:12.741720 1 goroutinemap.go:150] Operation for "provision-azurefile-6841/pvc-hmjkr[8dfb7368-c1c5-42cf-8e51-c69d81d2953a]" failed. No retries permitted until 2023-03-23 21:46:28.741707461 +0000 UTC m=+1310.880699331 (durationBeforeRetry 16s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0323 21:46:12.741865 1 event.go:294] "Event occurred" object="azurefile-6841/pvc-hmjkr" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
I0323 21:46:15.484365 1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="99.402µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:34916" resp=200
I0323 21:46:17.462501 1 gc_controller.go:161] GC'ing orphaned
I0323 21:46:17.462560 1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0323 21:46:25.484312 1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="117.002µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:54430" resp=200
I0323 21:46:27.388130 1 reflector.go:382] k8s.io/client-go/informers/factory.go:134: forcing resync
I0323 21:46:27.725908 1 pv_controller_base.go:556] resyncing PV controller
... skipping 15 lines ...
I0323 21:46:42.727408 1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-6841/pvc-hmjkr]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0323 21:46:42.727456 1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-6841/pvc-hmjkr]: no volume found
I0323 21:46:42.727469 1 pv_controller.go:1455] provisionClaim[azurefile-6841/pvc-hmjkr]: started
I0323 21:46:42.727489 1 pv_controller.go:1764] scheduleOperation[provision-azurefile-6841/pvc-hmjkr[8dfb7368-c1c5-42cf-8e51-c69d81d2953a]]
I0323 21:46:42.727521 1 pv_controller.go:1496] provisionClaimOperation [azurefile-6841/pvc-hmjkr] started, class: "azurefile-6841-kubernetes.io-azure-file-dynamic-sc-9dfrn"
I0323 21:46:42.727528 1 pv_controller.go:1511] provisionClaimOperation [azurefile-6841/pvc-hmjkr]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0323 21:46:42.732903 1 azure_provision.go:108] failed to get azure provider
I0323 21:46:42.732928 1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-6841/pvc-hmjkr" with StorageClass "azurefile-6841-kubernetes.io-azure-file-dynamic-sc-9dfrn": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0323 21:46:42.732958 1 goroutinemap.go:150] Operation for "provision-azurefile-6841/pvc-hmjkr[8dfb7368-c1c5-42cf-8e51-c69d81d2953a]" failed. No retries permitted until 2023-03-23 21:47:14.732943153 +0000 UTC m=+1356.871935023 (durationBeforeRetry 32s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0323 21:46:42.732987 1 event.go:294] "Event occurred" object="azurefile-6841/pvc-hmjkr" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
I0323 21:46:43.608927 1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.PersistentVolume total 0 items received
I0323 21:46:45.483975 1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="116.001µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:46622" resp=200
I0323 21:46:47.793031 1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1beta1.PodSecurityPolicy total 1 items received
I0323 21:46:50.371286 1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Deployment total 1 items received
I0323 21:46:50.384202 1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Namespace total 19 items received
I0323 21:46:50.791619 1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 1 items received
... skipping 41 lines ...
I0323 21:47:27.728457 1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-6841/pvc-hmjkr]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0323 21:47:27.728514 1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-6841/pvc-hmjkr]: no volume found
I0323 21:47:27.728527 1 pv_controller.go:1455] provisionClaim[azurefile-6841/pvc-hmjkr]: started
I0323 21:47:27.728550 1 pv_controller.go:1764] scheduleOperation[provision-azurefile-6841/pvc-hmjkr[8dfb7368-c1c5-42cf-8e51-c69d81d2953a]]
I0323 21:47:27.728603 1 pv_controller.go:1496] provisionClaimOperation [azurefile-6841/pvc-hmjkr] started, class: "azurefile-6841-kubernetes.io-azure-file-dynamic-sc-9dfrn"
I0323 21:47:27.728618 1 pv_controller.go:1511] provisionClaimOperation [azurefile-6841/pvc-hmjkr]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0323 21:47:27.731877 1 azure_provision.go:108] failed to get azure provider
I0323 21:47:27.731904 1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-6841/pvc-hmjkr" with StorageClass "azurefile-6841-kubernetes.io-azure-file-dynamic-sc-9dfrn": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0323 21:47:27.732073 1 goroutinemap.go:150] Operation for "provision-azurefile-6841/pvc-hmjkr[8dfb7368-c1c5-42cf-8e51-c69d81d2953a]" failed. No retries permitted until 2023-03-23 21:48:31.73205882 +0000 UTC m=+1433.871050790 (durationBeforeRetry 1m4s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0323 21:47:27.732138 1 event.go:294] "Event occurred" object="azurefile-6841/pvc-hmjkr" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
I0323 21:47:28.710666 1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0323 21:47:35.358592 1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.LimitRange total 9 items received
I0323 21:47:35.484687 1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="109.202µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:35876" resp=200
I0323 21:47:36.537281 1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0323 21:47:37.464355 1 gc_controller.go:161] GC'ing orphaned
I0323 21:47:37.464390 1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
... skipping 61 lines ...
I0323 21:48:42.730912 1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-6841/pvc-hmjkr]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0323 21:48:42.731106 1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-6841/pvc-hmjkr]: no volume found
I0323 21:48:42.731123 1 pv_controller.go:1455] provisionClaim[azurefile-6841/pvc-hmjkr]: started
I0323 21:48:42.731159 1 pv_controller.go:1764] scheduleOperation[provision-azurefile-6841/pvc-hmjkr[8dfb7368-c1c5-42cf-8e51-c69d81d2953a]]
I0323 21:48:42.731183 1 pv_controller.go:1496] provisionClaimOperation [azurefile-6841/pvc-hmjkr] started, class: "azurefile-6841-kubernetes.io-azure-file-dynamic-sc-9dfrn"
I0323 21:48:42.731193 1 pv_controller.go:1511] provisionClaimOperation [azurefile-6841/pvc-hmjkr]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0323 21:48:42.735510 1 azure_provision.go:108] failed to get azure provider
I0323 21:48:42.735536 1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-6841/pvc-hmjkr" with StorageClass "azurefile-6841-kubernetes.io-azure-file-dynamic-sc-9dfrn": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0323 21:48:42.735570 1 goroutinemap.go:150] Operation for "provision-azurefile-6841/pvc-hmjkr[8dfb7368-c1c5-42cf-8e51-c69d81d2953a]" failed. No retries permitted until 2023-03-23 21:50:44.735556579 +0000 UTC m=+1566.874548449 (durationBeforeRetry 2m2s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0323 21:48:42.735725 1 event.go:294] "Event occurred" object="azurefile-6841/pvc-hmjkr" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
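Note: the "No retries permitted until ..." lines implement per-operation exponential backoff. Across the retries for each claim in this log, durationBeforeRetry starts at 500ms and doubles on every failed attempt (500ms, 1s, 2s, 4s, 8s, 16s, 32s, 1m4s), topping out at the 2m2s seen here. A self-contained sketch that reproduces the observed progression (the 2m2s cap is inferred from this log, not quoted from the goroutinemap source):

package main

import (
	"fmt"
	"time"
)

func main() {
	const maxBackoff = 2*time.Minute + 2*time.Second // largest value observed in this log
	d := 500 * time.Millisecond                      // initial durationBeforeRetry
	for attempt := 1; attempt <= 10; attempt++ {
		fmt.Printf("attempt %2d: no retries permitted for %v\n", attempt, d)
		d *= 2 // double after every failure...
		if d > maxBackoff {
			d = maxBackoff // ...up to the cap
		}
	}
}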
I0323 21:48:44.394891 1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Job total 11 items received
I0323 21:48:45.484810 1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="112.099µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:59168" resp=200
I0323 21:48:46.257639 1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 9 items received
I0323 21:48:55.484018 1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="92.801µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:43706" resp=200
I0323 21:48:57.394368 1 reflector.go:382] k8s.io/client-go/informers/factory.go:134: forcing resync
I0323 21:48:57.466693 1 gc_controller.go:161] GC'ing orphaned
... skipping 89 lines ...
I0323 21:50:05.316968 1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-5280/pvc-6nt2z]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0323 21:50:05.316983 1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-5280/pvc-6nt2z]: no volume found
I0323 21:50:05.316987 1 pv_controller.go:1455] provisionClaim[azurefile-5280/pvc-6nt2z]: started
I0323 21:50:05.316994 1 pv_controller.go:1764] scheduleOperation[provision-azurefile-5280/pvc-6nt2z[b1d5107f-e21d-4e80-ba42-0d3161892739]]
I0323 21:50:05.317605 1 pv_controller.go:1775] operation "provision-azurefile-5280/pvc-6nt2z[b1d5107f-e21d-4e80-ba42-0d3161892739]" is already running, skipping
I0323 21:50:05.317648 1 pvc_protection_controller.go:331] "Got event on PVC" pvc="azurefile-5280/pvc-6nt2z"
I0323 21:50:05.318394 1 azure_provision.go:108] failed to get azure provider
I0323 21:50:05.318424 1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-5280/pvc-6nt2z" with StorageClass "azurefile-5280-kubernetes.io-azure-file-dynamic-sc-9vm5g": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0323 21:50:05.318543 1 goroutinemap.go:150] Operation for "provision-azurefile-5280/pvc-6nt2z[b1d5107f-e21d-4e80-ba42-0d3161892739]" failed. No retries permitted until 2023-03-23 21:50:05.81844008 +0000 UTC m=+1527.957431950 (durationBeforeRetry 500ms). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0323 21:50:05.318782 1 event.go:294] "Event occurred" object="azurefile-5280/pvc-6nt2z" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
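Note: the "operation ... is already running, skipping" line above shows why concurrent syncs of the same claim do not stack up: provisioning operations are deduplicated by operation name. A minimal sketch of name-keyed deduplication with hypothetical types (opMap is illustrative, not the actual k8s.io/kubernetes goroutinemap implementation, which layers the exponential backoff from the "No retries permitted until" lines on top of this):

package main

import (
	"fmt"
	"sync"
)

// opMap runs at most one operation per name at a time.
type opMap struct {
	mu      sync.Mutex
	running map[string]bool
}

func (m *opMap) Run(name string, op func()) {
	m.mu.Lock()
	if m.running[name] {
		m.mu.Unlock()
		fmt.Printf("operation %q is already running, skipping\n", name)
		return
	}
	m.running[name] = true
	m.mu.Unlock()
	go func() {
		defer func() { // mark the operation finished, whatever happens
			m.mu.Lock()
			delete(m.running, name)
			m.mu.Unlock()
		}()
		op()
	}()
}

func main() {
	m := &opMap{running: make(map[string]bool)}
	done := make(chan struct{})
	m.Run("provision-azurefile-5280/pvc-6nt2z", func() { <-done }) // long-running attempt
	m.Run("provision-azurefile-5280/pvc-6nt2z", func() {})         // deduplicated: prints "skipping"
	close(done)
}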
I0323 21:50:05.484119 1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="102.301µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:54666" resp=200
I0323 21:50:09.457616 1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 6 items received
I0323 21:50:09.748995 1 namespaced_resources_deleter.go:502] namespace controller - deleteAllContent - namespace: azurefile-6841
I0323 21:50:09.811645 1 resource_quota_monitor.go:359] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azurefile-6841, name pvc-hmjkr.174f2a7d5448c84f, uid a7071703-c13a-4b6f-8174-82e9fe5f0009, event type delete
I0323 21:50:09.830344 1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=secrets, namespace azurefile-6841, name default-token-6zwrs, uid db473726-9b1f-4f2d-95ed-86cad671ff63, event type delete
E0323 21:50:09.841140 1 tokens_controller.go:262] error synchronizing serviceaccount azurefile-6841/default: secrets "default-token-9cv5s" is forbidden: unable to create new content in namespace azurefile-6841 because it is being terminated
I0323 21:50:09.884251 1 tokens_controller.go:252] syncServiceAccount(azurefile-6841/default), service account deleted, removing tokens
I0323 21:50:09.884286 1 serviceaccounts_controller.go:188] Finished syncing namespace "azurefile-6841" (1.6µs)
I0323 21:50:09.884316 1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azurefile-6841, name default, uid 49ad4a51-0c1c-41da-9473-0708d765c344, event type delete
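Note: the tokens_controller error above ("unable to create new content in namespace azurefile-6841 because it is being terminated") is a benign race during namespace teardown: the namespace deleter removes the default token secret, the tokens controller briefly tries to mint a replacement, the API server rejects creates in a terminating namespace, and the retry stops once the service account itself is deleted a moment later. The same pattern repeats for each azurefile-* namespace below.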
I0323 21:50:09.911576 1 pvc_protection_controller.go:331] "Got event on PVC" pvc="azurefile-6841/pvc-hmjkr"
I0323 21:50:09.911654 1 pvc_protection_controller.go:149] "Processing PVC" PVC="azurefile-6841/pvc-hmjkr"
I0323 21:50:09.911670 1 pvc_protection_controller.go:230] "Looking for Pods using PVC in the Informer's cache" PVC="azurefile-6841/pvc-hmjkr"
... skipping 22 lines ...
I0323 21:50:12.734459 1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-5280/pvc-6nt2z]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0323 21:50:12.734491 1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-5280/pvc-6nt2z]: no volume found
I0323 21:50:12.734503 1 pv_controller.go:1455] provisionClaim[azurefile-5280/pvc-6nt2z]: started
I0323 21:50:12.734513 1 pv_controller.go:1764] scheduleOperation[provision-azurefile-5280/pvc-6nt2z[b1d5107f-e21d-4e80-ba42-0d3161892739]]
I0323 21:50:12.734535 1 pv_controller.go:1496] provisionClaimOperation [azurefile-5280/pvc-6nt2z] started, class: "azurefile-5280-kubernetes.io-azure-file-dynamic-sc-9vm5g"
I0323 21:50:12.734545 1 pv_controller.go:1511] provisionClaimOperation [azurefile-5280/pvc-6nt2z]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0323 21:50:12.741537 1 azure_provision.go:108] failed to get azure provider
I0323 21:50:12.741562 1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-5280/pvc-6nt2z" with StorageClass "azurefile-5280-kubernetes.io-azure-file-dynamic-sc-9vm5g": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0323 21:50:12.741618 1 goroutinemap.go:150] Operation for "provision-azurefile-5280/pvc-6nt2z[b1d5107f-e21d-4e80-ba42-0d3161892739]" failed. No retries permitted until 2023-03-23 21:50:13.741603765 +0000 UTC m=+1535.880595635 (durationBeforeRetry 1s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0323 21:50:12.741909 1 event.go:294] "Event occurred" object="azurefile-5280/pvc-6nt2z" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
I0323 21:50:14.956833 1 namespaced_resources_deleter.go:502] namespace controller - deleteAllContent - namespace: azurefile-6841
I0323 21:50:15.070459 1 serviceaccounts_controller.go:188] Finished syncing namespace "azurefile-6841" (3.9µs)
I0323 21:50:15.070695 1 namespaced_resources_deleter.go:556] namespace controller - deleteAllContent - namespace: azurefile-6841, estimate: 0, errors: <nil>
I0323 21:50:15.082129 1 namespace_controller.go:180] Finished syncing namespace "azurefile-6841" (133.019611ms)
I0323 21:50:15.460120 1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0323 21:50:15.483925 1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="93.301µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:33404" resp=200
... skipping 8 lines ...
I0323 21:50:27.735000 1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-5280/pvc-6nt2z]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0323 21:50:27.735035 1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-5280/pvc-6nt2z]: no volume found
I0323 21:50:27.735047 1 pv_controller.go:1455] provisionClaim[azurefile-5280/pvc-6nt2z]: started
I0323 21:50:27.735059 1 pv_controller.go:1764] scheduleOperation[provision-azurefile-5280/pvc-6nt2z[b1d5107f-e21d-4e80-ba42-0d3161892739]]
I0323 21:50:27.735088 1 pv_controller.go:1496] provisionClaimOperation [azurefile-5280/pvc-6nt2z] started, class: "azurefile-5280-kubernetes.io-azure-file-dynamic-sc-9vm5g"
I0323 21:50:27.735100 1 pv_controller.go:1511] provisionClaimOperation [azurefile-5280/pvc-6nt2z]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0323 21:50:27.740666 1 azure_provision.go:108] failed to get azure provider
I0323 21:50:27.740689 1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-5280/pvc-6nt2z" with StorageClass "azurefile-5280-kubernetes.io-azure-file-dynamic-sc-9vm5g": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0323 21:50:27.740723 1 goroutinemap.go:150] Operation for "provision-azurefile-5280/pvc-6nt2z[b1d5107f-e21d-4e80-ba42-0d3161892739]" failed. No retries permitted until 2023-03-23 21:50:29.740711971 +0000 UTC m=+1551.879703941 (durationBeforeRetry 2s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0323 21:50:27.740810 1 event.go:294] "Event occurred" object="azurefile-5280/pvc-6nt2z" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
I0323 21:50:28.787525 1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 7 items received
I0323 21:50:28.826889 1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0323 21:50:35.484468 1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="88.701µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:35152" resp=200
I0323 21:50:37.470272 1 gc_controller.go:161] GC'ing orphaned
I0323 21:50:37.470304 1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0323 21:50:42.264473 1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 5 items received
... skipping 3 lines ...
I0323 21:50:42.735910 1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-5280/pvc-6nt2z]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0323 21:50:42.735942 1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-5280/pvc-6nt2z]: no volume found
I0323 21:50:42.735951 1 pv_controller.go:1455] provisionClaim[azurefile-5280/pvc-6nt2z]: started
I0323 21:50:42.735965 1 pv_controller.go:1764] scheduleOperation[provision-azurefile-5280/pvc-6nt2z[b1d5107f-e21d-4e80-ba42-0d3161892739]]
I0323 21:50:42.735980 1 pv_controller.go:1496] provisionClaimOperation [azurefile-5280/pvc-6nt2z] started, class: "azurefile-5280-kubernetes.io-azure-file-dynamic-sc-9vm5g"
I0323 21:50:42.736030 1 pv_controller.go:1511] provisionClaimOperation [azurefile-5280/pvc-6nt2z]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0323 21:50:42.739558 1 azure_provision.go:108] failed to get azure provider
I0323 21:50:42.739581 1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-5280/pvc-6nt2z" with StorageClass "azurefile-5280-kubernetes.io-azure-file-dynamic-sc-9vm5g": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0323 21:50:42.739639 1 goroutinemap.go:150] Operation for "provision-azurefile-5280/pvc-6nt2z[b1d5107f-e21d-4e80-ba42-0d3161892739]" failed. No retries permitted until 2023-03-23 21:50:46.739625628 +0000 UTC m=+1568.878617598 (durationBeforeRetry 4s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0323 21:50:42.739948 1 event.go:294] "Event occurred" object="azurefile-5280/pvc-6nt2z" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
I0323 21:50:43.388267 1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ResourceQuota total 6 items received
I0323 21:50:43.393578 1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.CSIDriver total 5 items received
I0323 21:50:45.483943 1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="108.601µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:51124" resp=200
I0323 21:50:47.379059 1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.PodTemplate total 10 items received
I0323 21:50:47.539899 1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 10 items received
I0323 21:50:49.460798 1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
... skipping 7 lines ...
I0323 21:50:57.736030 1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-5280/pvc-6nt2z]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0323 21:50:57.736185 1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-5280/pvc-6nt2z]: no volume found
I0323 21:50:57.736269 1 pv_controller.go:1455] provisionClaim[azurefile-5280/pvc-6nt2z]: started
I0323 21:50:57.736413 1 pv_controller.go:1764] scheduleOperation[provision-azurefile-5280/pvc-6nt2z[b1d5107f-e21d-4e80-ba42-0d3161892739]]
I0323 21:50:57.736447 1 pv_controller.go:1496] provisionClaimOperation [azurefile-5280/pvc-6nt2z] started, class: "azurefile-5280-kubernetes.io-azure-file-dynamic-sc-9vm5g"
I0323 21:50:57.736512 1 pv_controller.go:1511] provisionClaimOperation [azurefile-5280/pvc-6nt2z]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0323 21:50:57.741490 1 azure_provision.go:108] failed to get azure provider
I0323 21:50:57.741515 1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-5280/pvc-6nt2z" with StorageClass "azurefile-5280-kubernetes.io-azure-file-dynamic-sc-9vm5g": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0323 21:50:57.741659 1 goroutinemap.go:150] Operation for "provision-azurefile-5280/pvc-6nt2z[b1d5107f-e21d-4e80-ba42-0d3161892739]" failed. No retries permitted until 2023-03-23 21:51:05.741623762 +0000 UTC m=+1587.880615732 (durationBeforeRetry 8s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0323 21:50:57.741771 1 event.go:294] "Event occurred" object="azurefile-5280/pvc-6nt2z" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
I0323 21:50:58.846707 1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0323 21:51:05.484269 1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="97.401µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:51052" resp=200
I0323 21:51:12.402434 1 reflector.go:382] k8s.io/client-go/informers/factory.go:134: forcing resync
I0323 21:51:12.737216 1 pv_controller_base.go:556] resyncing PV controller
I0323 21:51:12.737411 1 pv_controller_base.go:670] storeObjectUpdate updating claim "azurefile-5280/pvc-6nt2z" with version 6525
I0323 21:51:12.737439 1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-5280/pvc-6nt2z]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0323 21:51:12.737467 1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-5280/pvc-6nt2z]: no volume found
I0323 21:51:12.737533 1 pv_controller.go:1455] provisionClaim[azurefile-5280/pvc-6nt2z]: started
I0323 21:51:12.737554 1 pv_controller.go:1764] scheduleOperation[provision-azurefile-5280/pvc-6nt2z[b1d5107f-e21d-4e80-ba42-0d3161892739]]
I0323 21:51:12.737621 1 pv_controller.go:1496] provisionClaimOperation [azurefile-5280/pvc-6nt2z] started, class: "azurefile-5280-kubernetes.io-azure-file-dynamic-sc-9vm5g"
I0323 21:51:12.737634 1 pv_controller.go:1511] provisionClaimOperation [azurefile-5280/pvc-6nt2z]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0323 21:51:12.752153 1 azure_provision.go:108] failed to get azure provider
I0323 21:51:12.752176 1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-5280/pvc-6nt2z" with StorageClass "azurefile-5280-kubernetes.io-azure-file-dynamic-sc-9vm5g": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0323 21:51:12.752213 1 goroutinemap.go:150] Operation for "provision-azurefile-5280/pvc-6nt2z[b1d5107f-e21d-4e80-ba42-0d3161892739]" failed. No retries permitted until 2023-03-23 21:51:28.752198707 +0000 UTC m=+1610.891190677 (durationBeforeRetry 16s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0323 21:51:12.752418 1 event.go:294] "Event occurred" object="azurefile-5280/pvc-6nt2z" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
I0323 21:51:15.484157 1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="107.401µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:56110" resp=200
I0323 21:51:17.470984 1 gc_controller.go:161] GC'ing orphaned
I0323 21:51:17.471016 1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0323 21:51:24.382295 1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1beta1.CSIStorageCapacity total 7 items received
I0323 21:51:25.484390 1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="77.301µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:42980" resp=200
I0323 21:51:26.405230 1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ConfigMap total 12 items received
... skipping 16 lines ...
I0323 21:51:42.739107 1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-5280/pvc-6nt2z]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0323 21:51:42.739179 1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-5280/pvc-6nt2z]: no volume found
I0323 21:51:42.739192 1 pv_controller.go:1455] provisionClaim[azurefile-5280/pvc-6nt2z]: started
I0323 21:51:42.739219 1 pv_controller.go:1764] scheduleOperation[provision-azurefile-5280/pvc-6nt2z[b1d5107f-e21d-4e80-ba42-0d3161892739]]
I0323 21:51:42.739250 1 pv_controller.go:1496] provisionClaimOperation [azurefile-5280/pvc-6nt2z] started, class: "azurefile-5280-kubernetes.io-azure-file-dynamic-sc-9vm5g"
I0323 21:51:42.739261 1 pv_controller.go:1511] provisionClaimOperation [azurefile-5280/pvc-6nt2z]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0323 21:51:42.744428 1 azure_provision.go:108] failed to get azure provider
I0323 21:51:42.744452 1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-5280/pvc-6nt2z" with StorageClass "azurefile-5280-kubernetes.io-azure-file-dynamic-sc-9vm5g": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0323 21:51:42.744513 1 goroutinemap.go:150] Operation for "provision-azurefile-5280/pvc-6nt2z[b1d5107f-e21d-4e80-ba42-0d3161892739]" failed. No retries permitted until 2023-03-23 21:52:14.744498291 +0000 UTC m=+1656.883490261 (durationBeforeRetry 32s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0323 21:51:42.744583 1 event.go:294] "Event occurred" object="azurefile-5280/pvc-6nt2z" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
I0323 21:51:45.483858 1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="100.1µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:46656" resp=200
I0323 21:51:47.401669 1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Node total 15 items received
I0323 21:51:48.396665 1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ClusterRole total 7 items received
I0323 21:51:49.433177 1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0323 21:51:55.484846 1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="89.9µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:49256" resp=200
I0323 21:51:57.407362 1 reflector.go:382] k8s.io/client-go/informers/factory.go:134: forcing resync
... skipping 32 lines ...
I0323 21:52:27.740584 1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-5280/pvc-6nt2z]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0323 21:52:27.740712 1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-5280/pvc-6nt2z]: no volume found
I0323 21:52:27.740768 1 pv_controller.go:1455] provisionClaim[azurefile-5280/pvc-6nt2z]: started
I0323 21:52:27.740786 1 pv_controller.go:1764] scheduleOperation[provision-azurefile-5280/pvc-6nt2z[b1d5107f-e21d-4e80-ba42-0d3161892739]]
I0323 21:52:27.740807 1 pv_controller.go:1496] provisionClaimOperation [azurefile-5280/pvc-6nt2z] started, class: "azurefile-5280-kubernetes.io-azure-file-dynamic-sc-9vm5g"
I0323 21:52:27.740814 1 pv_controller.go:1511] provisionClaimOperation [azurefile-5280/pvc-6nt2z]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0323 21:52:27.747257 1 azure_provision.go:108] failed to get azure provider
I0323 21:52:27.747282 1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-5280/pvc-6nt2z" with StorageClass "azurefile-5280-kubernetes.io-azure-file-dynamic-sc-9vm5g": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0323 21:52:27.747429 1 goroutinemap.go:150] Operation for "provision-azurefile-5280/pvc-6nt2z[b1d5107f-e21d-4e80-ba42-0d3161892739]" failed. No retries permitted until 2023-03-23 21:53:31.74730197 +0000 UTC m=+1733.886293840 (durationBeforeRetry 1m4s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0323 21:52:27.747818 1 event.go:294] "Event occurred" object="azurefile-5280/pvc-6nt2z" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
I0323 21:52:28.908463 1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0323 21:52:30.385264 1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.StatefulSet total 0 items received
I0323 21:52:31.366662 1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Service total 0 items received
I0323 21:52:35.485017 1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="100.401µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:48260" resp=200
I0323 21:52:37.473507 1 gc_controller.go:161] GC'ing orphaned
I0323 21:52:37.473654 1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
... skipping 61 lines ...
I0323 21:53:42.743753 1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-5280/pvc-6nt2z]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0323 21:53:42.743781 1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-5280/pvc-6nt2z]: no volume found
I0323 21:53:42.743793 1 pv_controller.go:1455] provisionClaim[azurefile-5280/pvc-6nt2z]: started
I0323 21:53:42.743805 1 pv_controller.go:1764] scheduleOperation[provision-azurefile-5280/pvc-6nt2z[b1d5107f-e21d-4e80-ba42-0d3161892739]]
I0323 21:53:42.743825 1 pv_controller.go:1496] provisionClaimOperation [azurefile-5280/pvc-6nt2z] started, class: "azurefile-5280-kubernetes.io-azure-file-dynamic-sc-9vm5g"
I0323 21:53:42.743836 1 pv_controller.go:1511] provisionClaimOperation [azurefile-5280/pvc-6nt2z]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0323 21:53:42.752831 1 azure_provision.go:108] failed to get azure provider
I0323 21:53:42.752860 1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-5280/pvc-6nt2z" with StorageClass "azurefile-5280-kubernetes.io-azure-file-dynamic-sc-9vm5g": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0323 21:53:42.753050 1 goroutinemap.go:150] Operation for "provision-azurefile-5280/pvc-6nt2z[b1d5107f-e21d-4e80-ba42-0d3161892739]" failed. No retries permitted until 2023-03-23 21:55:44.753033723 +0000 UTC m=+1866.892025693 (durationBeforeRetry 2m2s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0323 21:53:42.753164 1 event.go:294] "Event occurred" object="azurefile-5280/pvc-6nt2z" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
I0323 21:53:44.385945 1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Namespace total 10 items received
I0323 21:53:44.648707 1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ClusterRoleBinding total 0 items received
I0323 21:53:45.484757 1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="84.9µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:60810" resp=200
I0323 21:53:51.829651 1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 9 items received
I0323 21:53:55.486339 1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="90.901µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:42290" resp=200
I0323 21:53:57.398460 1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ReplicaSet total 0 items received
... skipping 99 lines ...
I0323 21:55:07.286864 1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-572/pvc-qgk7x]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0323 21:55:07.286990 1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-572/pvc-qgk7x]: no volume found
I0323 21:55:07.287206 1 pv_controller.go:1455] provisionClaim[azurefile-572/pvc-qgk7x]: started
I0323 21:55:07.287392 1 pv_controller.go:1764] scheduleOperation[provision-azurefile-572/pvc-qgk7x[a0713b29-2dd4-473e-b1d5-1c5c124286d1]]
I0323 21:55:07.287520 1 pv_controller.go:1775] operation "provision-azurefile-572/pvc-qgk7x[a0713b29-2dd4-473e-b1d5-1c5c124286d1]" is already running, skipping
I0323 21:55:07.286798 1 pvc_protection_controller.go:331] "Got event on PVC" pvc="azurefile-572/pvc-qgk7x"
I0323 21:55:07.291543 1 azure_provision.go:108] failed to get azure provider
I0323 21:55:07.291695 1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-572/pvc-qgk7x" with StorageClass "azurefile-572-kubernetes.io-azure-file-dynamic-sc-n9qf7": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0323 21:55:07.291847 1 goroutinemap.go:150] Operation for "provision-azurefile-572/pvc-qgk7x[a0713b29-2dd4-473e-b1d5-1c5c124286d1]" failed. No retries permitted until 2023-03-23 21:55:07.79183499 +0000 UTC m=+1829.930826860 (durationBeforeRetry 500ms). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0323 21:55:07.292068 1 event.go:294] "Event occurred" object="azurefile-572/pvc-qgk7x" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
I0323 21:55:09.381519 1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.CertificateSigningRequest total 10 items received
I0323 21:55:11.276027 1 namespaced_resources_deleter.go:502] namespace controller - deleteAllContent - namespace: azurefile-5280
I0323 21:55:11.302906 1 pv_controller_base.go:670] storeObjectUpdate updating claim "azurefile-5280/pvc-6nt2z" with version 7599
I0323 21:55:11.302931 1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-5280/pvc-6nt2z]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0323 21:55:11.302950 1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-5280/pvc-6nt2z]: no volume found
I0323 21:55:11.302955 1 pv_controller.go:1455] provisionClaim[azurefile-5280/pvc-6nt2z]: started
... skipping 10 lines ...
I0323 21:55:11.313083 1 pvc_protection_controller.go:207] "Removed protection finalizer from PVC" PVC="azurefile-5280/pvc-6nt2z"
I0323 21:55:11.313184 1 pvc_protection_controller.go:152] "Finished processing PVC" PVC="azurefile-5280/pvc-6nt2z" duration="10.132149ms"
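Note: deletion of a claim goes through the pvc-protection finalizer. On each "Got event on PVC" the controller checks the informer's cache for pods still using the claim, and only when none are found does it remove the finalizer, as in the two lines above; that removal is what finally lets the PVC object be deleted.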
I0323 21:55:11.315836 1 resource_quota_monitor.go:359] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azurefile-5280, name pvc-6nt2z.174f2ac387d62870, uid 0a048134-34fd-4c88-aa4f-e35604ac84e5, event type delete
I0323 21:55:11.344076 1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=configmaps, namespace azurefile-5280, name kube-root-ca.crt, uid e31c11de-e25a-4a60-906a-0070575a8f6c, event type delete
I0323 21:55:11.345138 1 publisher.go:186] Finished syncing namespace "azurefile-5280" (1.196806ms)
I0323 21:55:11.386054 1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=secrets, namespace azurefile-5280, name default-token-64twl, uid 51651b04-e22b-488f-9c8a-0cfd3fec7a50, event type delete
E0323 21:55:11.407547 1 tokens_controller.go:262] error synchronizing serviceaccount azurefile-5280/default: secrets "default-token-swrc8" is forbidden: unable to create new content in namespace azurefile-5280 because it is being terminated
I0323 21:55:11.423376 1 tokens_controller.go:252] syncServiceAccount(azurefile-5280/default), service account deleted, removing tokens
I0323 21:55:11.423550 1 serviceaccounts_controller.go:188] Finished syncing namespace "azurefile-5280" (1.8µs)
I0323 21:55:11.423645 1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azurefile-5280, name default, uid 464ddae2-e032-494a-a07f-2af524672b6a, event type delete
I0323 21:55:11.437388 1 serviceaccounts_controller.go:188] Finished syncing namespace "azurefile-5280" (1.7µs)
I0323 21:55:11.437702 1 namespaced_resources_deleter.go:556] namespace controller - deleteAllContent - namespace: azurefile-5280, estimate: 15, errors: <nil>
I0323 21:55:11.437836 1 namespace_controller.go:180] Finished syncing namespace "azurefile-5280" (166.651593ms)
I0323 21:55:11.437940 1 namespace_controller.go:157] Content remaining in namespace azurefile-5280, waiting 8 seconds
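Note: "estimate: 15" is the namespace deleter reporting that content still remains (the PVC above was only just deleted), so the namespace controller requeues instead of finishing, hence "Content remaining in namespace azurefile-5280, waiting 8 seconds". The follow-up pass at 21:55:16 below returns estimate: 0 and the namespace sync completes.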
I0323 21:55:11.768071 1 namespaced_resources_deleter.go:502] namespace controller - deleteAllContent - namespace: azurefile-4585
I0323 21:55:11.796603 1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=secrets, namespace azurefile-4585, name default-token-kbzxp, uid d53e53f3-2b16-4d90-9f8b-f92983afccd7, event type delete
I0323 21:55:11.807514 1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=configmaps, namespace azurefile-4585, name kube-root-ca.crt, uid a59b20c6-336b-4fd2-b503-97a2ea4fd74d, event type delete
I0323 21:55:11.808936 1 publisher.go:186] Finished syncing namespace "azurefile-4585" (1.553207ms)
E0323 21:55:11.809755 1 tokens_controller.go:262] error synchronizing serviceaccount azurefile-4585/default: secrets "default-token-z8pps" is forbidden: unable to create new content in namespace azurefile-4585 because it is being terminated
I0323 21:55:11.868767 1 tokens_controller.go:252] syncServiceAccount(azurefile-4585/default), service account deleted, removing tokens
I0323 21:55:11.868945 1 serviceaccounts_controller.go:188] Finished syncing namespace "azurefile-4585" (2.1µs)
I0323 21:55:11.868962 1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azurefile-4585, name default, uid 6791295a-b15a-40b7-a39e-40e2de9e3485, event type delete
I0323 21:55:11.910184 1 serviceaccounts_controller.go:188] Finished syncing namespace "azurefile-4585" (2.1µs)
I0323 21:55:11.910315 1 namespaced_resources_deleter.go:556] namespace controller - deleteAllContent - namespace: azurefile-4585, estimate: 0, errors: <nil>
I0323 21:55:11.921454 1 namespace_controller.go:180] Finished syncing namespace "azurefile-4585" (156.326044ms)
... skipping 3 lines ...
I0323 21:55:12.749124 1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-572/pvc-qgk7x]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0323 21:55:12.749181 1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-572/pvc-qgk7x]: no volume found
I0323 21:55:12.749207 1 pv_controller.go:1455] provisionClaim[azurefile-572/pvc-qgk7x]: started
I0323 21:55:12.749238 1 pv_controller.go:1764] scheduleOperation[provision-azurefile-572/pvc-qgk7x[a0713b29-2dd4-473e-b1d5-1c5c124286d1]]
I0323 21:55:12.749255 1 pv_controller.go:1496] provisionClaimOperation [azurefile-572/pvc-qgk7x] started, class: "azurefile-572-kubernetes.io-azure-file-dynamic-sc-n9qf7"
I0323 21:55:12.749301 1 pv_controller.go:1511] provisionClaimOperation [azurefile-572/pvc-qgk7x]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0323 21:55:12.752901 1 azure_provision.go:108] failed to get azure provider
I0323 21:55:12.752928 1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-572/pvc-qgk7x" with StorageClass "azurefile-572-kubernetes.io-azure-file-dynamic-sc-n9qf7": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0323 21:55:12.753014 1 goroutinemap.go:150] Operation for "provision-azurefile-572/pvc-qgk7x[a0713b29-2dd4-473e-b1d5-1c5c124286d1]" failed. No retries permitted until 2023-03-23 21:55:13.752984103 +0000 UTC m=+1835.891976073 (durationBeforeRetry 1s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0323 21:55:12.753112 1 event.go:294] "Event occurred" object="azurefile-572/pvc-qgk7x" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
I0323 21:55:15.484796 1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="94.4µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:53074" resp=200
I0323 21:55:16.446713 1 namespaced_resources_deleter.go:502] namespace controller - deleteAllContent - namespace: azurefile-5280
I0323 21:55:16.572896 1 serviceaccounts_controller.go:188] Finished syncing namespace "azurefile-5280" (2.7µs)
I0323 21:55:16.573152 1 namespaced_resources_deleter.go:556] namespace controller - deleteAllContent - namespace: azurefile-5280, estimate: 0, errors: <nil>
I0323 21:55:16.584810 1 namespace_controller.go:180] Finished syncing namespace "azurefile-5280" (147.1095ms)
I0323 21:55:16.911097 1 namespace_controller.go:185] Namespace has been deleted azurefile-4585
... skipping 14 lines ...
I0323 21:55:27.749415 1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-572/pvc-qgk7x]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0323 21:55:27.749443 1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-572/pvc-qgk7x]: no volume found
I0323 21:55:27.749451 1 pv_controller.go:1455] provisionClaim[azurefile-572/pvc-qgk7x]: started
I0323 21:55:27.749480 1 pv_controller.go:1764] scheduleOperation[provision-azurefile-572/pvc-qgk7x[a0713b29-2dd4-473e-b1d5-1c5c124286d1]]
I0323 21:55:27.749515 1 pv_controller.go:1496] provisionClaimOperation [azurefile-572/pvc-qgk7x] started, class: "azurefile-572-kubernetes.io-azure-file-dynamic-sc-n9qf7"
I0323 21:55:27.749522 1 pv_controller.go:1511] provisionClaimOperation [azurefile-572/pvc-qgk7x]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0323 21:55:27.751718 1 azure_provision.go:108] failed to get azure provider
I0323 21:55:27.751743 1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-572/pvc-qgk7x" with StorageClass "azurefile-572-kubernetes.io-azure-file-dynamic-sc-n9qf7": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0323 21:55:27.751966 1 goroutinemap.go:150] Operation for "provision-azurefile-572/pvc-qgk7x[a0713b29-2dd4-473e-b1d5-1c5c124286d1]" failed. No retries permitted until 2023-03-23 21:55:29.751757184 +0000 UTC m=+1851.890749054 (durationBeforeRetry 2s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0323 21:55:27.752484 1 event.go:294] "Event occurred" object="azurefile-572/pvc-qgk7x" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
I0323 21:55:28.454767 1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0323 21:55:29.021789 1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0323 21:55:31.832080 1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 10 items received
I0323 21:55:34.261145 1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 4 items received
I0323 21:55:35.486309 1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="95.101µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:41656" resp=200
I0323 21:55:37.481588 1 gc_controller.go:161] GC'ing orphaned
... skipping 4 lines ...
I0323 21:55:42.750145 1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-572/pvc-qgk7x]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0323 21:55:42.750325 1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-572/pvc-qgk7x]: no volume found
I0323 21:55:42.750341 1 pv_controller.go:1455] provisionClaim[azurefile-572/pvc-qgk7x]: started
I0323 21:55:42.750353 1 pv_controller.go:1764] scheduleOperation[provision-azurefile-572/pvc-qgk7x[a0713b29-2dd4-473e-b1d5-1c5c124286d1]]
I0323 21:55:42.750415 1 pv_controller.go:1496] provisionClaimOperation [azurefile-572/pvc-qgk7x] started, class: "azurefile-572-kubernetes.io-azure-file-dynamic-sc-n9qf7"
I0323 21:55:42.750422 1 pv_controller.go:1511] provisionClaimOperation [azurefile-572/pvc-qgk7x]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0323 21:55:42.756758 1 azure_provision.go:108] failed to get azure provider
I0323 21:55:42.756780 1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-572/pvc-qgk7x" with StorageClass "azurefile-572-kubernetes.io-azure-file-dynamic-sc-n9qf7": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0323 21:55:42.756806 1 goroutinemap.go:150] Operation for "provision-azurefile-572/pvc-qgk7x[a0713b29-2dd4-473e-b1d5-1c5c124286d1]" failed. No retries permitted until 2023-03-23 21:55:46.756793435 +0000 UTC m=+1868.895785305 (durationBeforeRetry 4s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0323 21:55:42.756954 1 event.go:294] "Event occurred" object="azurefile-572/pvc-qgk7x" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
I0323 21:55:45.484351 1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="87.6µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:42226" resp=200
I0323 21:55:54.041415 1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ValidatingWebhookConfiguration total 1 items received
I0323 21:55:54.392945 1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Secret total 13 items received
I0323 21:55:54.789077 1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0323 21:55:55.484780 1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="98.3µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:54680" resp=200
I0323 21:55:57.417238 1 reflector.go:382] k8s.io/client-go/informers/factory.go:134: forcing resync
... skipping 4 lines ...
I0323 21:55:57.750440 1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-572/pvc-qgk7x]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0323 21:55:57.750468 1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-572/pvc-qgk7x]: no volume found
I0323 21:55:57.750479 1 pv_controller.go:1455] provisionClaim[azurefile-572/pvc-qgk7x]: started
I0323 21:55:57.750490 1 pv_controller.go:1764] scheduleOperation[provision-azurefile-572/pvc-qgk7x[a0713b29-2dd4-473e-b1d5-1c5c124286d1]]
I0323 21:55:57.750504 1 pv_controller.go:1496] provisionClaimOperation [azurefile-572/pvc-qgk7x] started, class: "azurefile-572-kubernetes.io-azure-file-dynamic-sc-n9qf7"
I0323 21:55:57.750510 1 pv_controller.go:1511] provisionClaimOperation [azurefile-572/pvc-qgk7x]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0323 21:55:57.757493 1 azure_provision.go:108] failed to get azure provider
I0323 21:55:57.757537 1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-572/pvc-qgk7x" with StorageClass "azurefile-572-kubernetes.io-azure-file-dynamic-sc-n9qf7": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0323 21:55:57.757573 1 goroutinemap.go:150] Operation for "provision-azurefile-572/pvc-qgk7x[a0713b29-2dd4-473e-b1d5-1c5c124286d1]" failed. No retries permitted until 2023-03-23 21:56:05.757560292 +0000 UTC m=+1887.896552262 (durationBeforeRetry 8s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0323 21:55:57.757852 1 event.go:294] "Event occurred" object="azurefile-572/pvc-qgk7x" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
I0323 21:55:58.393038 1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.DaemonSet total 0 items received
I0323 21:55:58.648393 1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0323 21:55:59.037025 1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0323 21:55:59.361110 1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.LimitRange total 0 items received
I0323 21:56:05.486199 1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="85.2µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:39396" resp=200
I0323 21:56:08.466506 1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
... skipping 3 lines ...
I0323 21:56:12.751660 1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-572/pvc-qgk7x]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0323 21:56:12.751696 1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-572/pvc-qgk7x]: no volume found
I0323 21:56:12.751709 1 pv_controller.go:1455] provisionClaim[azurefile-572/pvc-qgk7x]: started
I0323 21:56:12.751721 1 pv_controller.go:1764] scheduleOperation[provision-azurefile-572/pvc-qgk7x[a0713b29-2dd4-473e-b1d5-1c5c124286d1]]
I0323 21:56:12.751746 1 pv_controller.go:1496] provisionClaimOperation [azurefile-572/pvc-qgk7x] started, class: "azurefile-572-kubernetes.io-azure-file-dynamic-sc-n9qf7"
I0323 21:56:12.751756 1 pv_controller.go:1511] provisionClaimOperation [azurefile-572/pvc-qgk7x]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0323 21:56:12.760563 1 azure_provision.go:108] failed to get azure provider
I0323 21:56:12.760588 1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-572/pvc-qgk7x" with StorageClass "azurefile-572-kubernetes.io-azure-file-dynamic-sc-n9qf7": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0323 21:56:12.760616 1 goroutinemap.go:150] Operation for "provision-azurefile-572/pvc-qgk7x[a0713b29-2dd4-473e-b1d5-1c5c124286d1]" failed. No retries permitted until 2023-03-23 21:56:28.760603889 +0000 UTC m=+1910.899595859 (durationBeforeRetry 16s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0323 21:56:12.760867 1 event.go:294] "Event occurred" object="azurefile-572/pvc-qgk7x" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
I0323 21:56:15.484924 1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="91.2µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:45474" resp=200
I0323 21:56:17.482772 1 gc_controller.go:161] GC'ing orphaned
I0323 21:56:17.482859 1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0323 21:56:22.649040 1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0323 21:56:24.359829 1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Role total 3 items received
I0323 21:56:25.484494 1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="80.3µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:45316" resp=200
... skipping 20 lines ...
I0323 21:56:42.753677 1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-572/pvc-qgk7x]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0323 21:56:42.753750 1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-572/pvc-qgk7x]: no volume found
I0323 21:56:42.753763 1 pv_controller.go:1455] provisionClaim[azurefile-572/pvc-qgk7x]: started
I0323 21:56:42.753783 1 pv_controller.go:1764] scheduleOperation[provision-azurefile-572/pvc-qgk7x[a0713b29-2dd4-473e-b1d5-1c5c124286d1]]
I0323 21:56:42.753847 1 pv_controller.go:1496] provisionClaimOperation [azurefile-572/pvc-qgk7x] started, class: "azurefile-572-kubernetes.io-azure-file-dynamic-sc-n9qf7"
I0323 21:56:42.753891 1 pv_controller.go:1511] provisionClaimOperation [azurefile-572/pvc-qgk7x]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0323 21:56:42.761218 1 azure_provision.go:108] failed to get azure provider
I0323 21:56:42.761359 1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-572/pvc-qgk7x" with StorageClass "azurefile-572-kubernetes.io-azure-file-dynamic-sc-n9qf7": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0323 21:56:42.761457 1 goroutinemap.go:150] Operation for "provision-azurefile-572/pvc-qgk7x[a0713b29-2dd4-473e-b1d5-1c5c124286d1]" failed. No retries permitted until 2023-03-23 21:57:14.761397668 +0000 UTC m=+1956.900389538 (durationBeforeRetry 32s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0323 21:56:42.761525 1 event.go:294] "Event occurred" object="azurefile-572/pvc-qgk7x" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
I0323 21:56:45.484423 1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="84.7µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:44276" resp=200
I0323 21:56:46.383537 1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.PodTemplate total 2 items received
I0323 21:56:46.912189 1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 2 items received
I0323 21:56:51.711356 1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.MutatingWebhookConfiguration total 3 items received
I0323 21:56:52.002004 1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1beta2.PriorityLevelConfiguration total 1 items received
I0323 21:56:55.484279 1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="90.501µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:43784" resp=200
... skipping 32 lines ...
I0323 21:57:27.755987 1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-572/pvc-qgk7x]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0323 21:57:27.756059 1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-572/pvc-qgk7x]: no volume found
I0323 21:57:27.756089 1 pv_controller.go:1455] provisionClaim[azurefile-572/pvc-qgk7x]: started
I0323 21:57:27.756127 1 pv_controller.go:1764] scheduleOperation[provision-azurefile-572/pvc-qgk7x[a0713b29-2dd4-473e-b1d5-1c5c124286d1]]
I0323 21:57:27.756179 1 pv_controller.go:1496] provisionClaimOperation [azurefile-572/pvc-qgk7x] started, class: "azurefile-572-kubernetes.io-azure-file-dynamic-sc-n9qf7"
I0323 21:57:27.756237 1 pv_controller.go:1511] provisionClaimOperation [azurefile-572/pvc-qgk7x]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0323 21:57:27.771373 1 azure_provision.go:108] failed to get azure provider
I0323 21:57:27.771395 1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-572/pvc-qgk7x" with StorageClass "azurefile-572-kubernetes.io-azure-file-dynamic-sc-n9qf7": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0323 21:57:27.771438 1 goroutinemap.go:150] Operation for "provision-azurefile-572/pvc-qgk7x[a0713b29-2dd4-473e-b1d5-1c5c124286d1]" failed. No retries permitted until 2023-03-23 21:58:31.77142663 +0000 UTC m=+2033.910418500 (durationBeforeRetry 1m4s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0323 21:57:27.771517 1 event.go:294] "Event occurred" object="azurefile-572/pvc-qgk7x" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
I0323 21:57:29.095064 1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0323 21:57:32.790734 1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0323 21:57:35.484819 1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="103.2µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:52038" resp=200
I0323 21:57:37.485678 1 gc_controller.go:161] GC'ing orphaned
I0323 21:57:37.485713 1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0323 21:57:40.469942 1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 2 items received
... skipping 58 lines ...
I0323 21:58:42.759874 1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azurefile-572/pvc-qgk7x]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0323 21:58:42.759938 1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azurefile-572/pvc-qgk7x]: no volume found
I0323 21:58:42.759951 1 pv_controller.go:1455] provisionClaim[azurefile-572/pvc-qgk7x]: started
I0323 21:58:42.759974 1 pv_controller.go:1764] scheduleOperation[provision-azurefile-572/pvc-qgk7x[a0713b29-2dd4-473e-b1d5-1c5c124286d1]]
I0323 21:58:42.760029 1 pv_controller.go:1496] provisionClaimOperation [azurefile-572/pvc-qgk7x] started, class: "azurefile-572-kubernetes.io-azure-file-dynamic-sc-n9qf7"
I0323 21:58:42.760075 1 pv_controller.go:1511] provisionClaimOperation [azurefile-572/pvc-qgk7x]: plugin name: kubernetes.io/azure-file, provisioner name: kubernetes.io/azure-file
I0323 21:58:42.769519 1 azure_provision.go:108] failed to get azure provider
I0323 21:58:42.769545 1 pv_controller.go:1577] failed to create provisioner for claim "azurefile-572/pvc-qgk7x" with StorageClass "azurefile-572-kubernetes.io-azure-file-dynamic-sc-n9qf7": failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
E0323 21:58:42.769596 1 goroutinemap.go:150] Operation for "provision-azurefile-572/pvc-qgk7x[a0713b29-2dd4-473e-b1d5-1c5c124286d1]" failed. No retries permitted until 2023-03-23 22:00:44.769581023 +0000 UTC m=+2166.908572893 (durationBeforeRetry 2m2s). Error: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
I0323 21:58:42.769973 1 event.go:294] "Event occurred" object="azurefile-572/pvc-qgk7x" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to create provisioner: failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead"
I0323 21:58:45.484746 1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="91.003µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:57548" resp=200
I0323 21:58:46.267081 1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 3 items received
I0323 21:58:53.538353 1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 3 items received
I0323 21:58:54.465961 1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0323 21:58:55.484414 1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="82.703µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:35970" resp=200
I0323 21:58:56.617981 1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.PersistentVolume total 2 items received
... skipping 134 lines ...
I0323 22:00:13.320358 1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=persistentvolumeclaims, namespace azurefile-572, name pvc-qgk7x, uid a0713b29-2dd4-473e-b1d5-1c5c124286d1, event type delete
I0323 22:00:13.320212 1 pvc_protection_controller.go:207] "Removed protection finalizer from PVC" PVC="azurefile-572/pvc-qgk7x"
I0323 22:00:13.320466 1 pvc_protection_controller.go:152] "Finished processing PVC" PVC="azurefile-572/pvc-qgk7x" duration="14.614533ms"
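Note: none of the azurefile-* claims in this section ever left Pending; each PVC was retried on backoff until its test namespace was torn down and the claim itself deleted, as with pvc-qgk7x here. Dynamic provisioning via the in-tree azure-file plugin cannot succeed while GetCloudProvider returns nil, which suggests a cluster-configuration mismatch (in-tree provisioner on a cluster without an in-tree cloud) rather than a timing flake.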
I0323 22:00:13.371378 1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=configmaps, namespace azurefile-572, name kube-root-ca.crt, uid de3e4e6e-f298-4864-abcf-38c875b25946, event type delete
I0323 22:00:13.372414 1 publisher.go:186] Finished syncing namespace "azurefile-572" (1.180535ms)
I0323 22:00:13.377463 1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=secrets, namespace azurefile-572, name default-token-t2fps, uid 11317a52-6d84-4bc6-9f76-a6332c8ba57c, event type delete
E0323 22:00:13.393178 1 tokens_controller.go:262] error synchronizing serviceaccount azurefile-572/default: secrets "default-token-vc2x9" is forbidden: unable to create new content in namespace azurefile-572 because it is being terminated
I0323 22:00:13.393487 1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azurefile-572, name default, uid 1fc7e941-c8f3-49ea-a24e-caf83df0aa44, event type delete
I0323 22:00:13.393676 1 tokens_controller.go:252] syncServiceAccount(azurefile-572/default), service account deleted, removing tokens
I0323 22:00:13.393739 1 serviceaccounts_controller.go:188] Finished syncing namespace "azurefile-572" (3µs)
I0323 22:00:13.503582 1 serviceaccounts_controller.go:188] Finished syncing namespace "azurefile-572" (2.7µs)
I0323 22:00:13.504209 1 namespaced_resources_deleter.go:556] namespace controller - deleteAllContent - namespace: azurefile-572, estimate: 15, errors: <nil>
I0323 22:00:13.504407 1 namespace_controller.go:180] Finished syncing namespace "azurefile-572" (272.886681ms)
... skipping 4 lines ...
I0323 22:00:13.709641 1 namespaced_resources_deleter.go:502] namespace controller - deleteAllContent - namespace: azurefile-7680
I0323 22:00:13.753985 1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=configmaps, namespace azurefile-7680, name kube-root-ca.crt, uid 96d1de99-54ac-45f0-a7f1-d9536def278d, event type delete
I0323 22:00:13.755197 1 publisher.go:186] Finished syncing namespace "azurefile-7680" (1.336039ms)
I0323 22:00:13.793179 1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=secrets, namespace azurefile-7680, name default-token-5l9pg, uid 89cb43d0-fde6-42ca-8bf6-8a8e6931e4ee, event type delete
I0323 22:00:13.802599 1 serviceaccounts_controller.go:188] Finished syncing namespace "azurefile-7680" (2.2µs)
I0323 22:00:13.802646 1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azurefile-7680, name default, uid 87cc79d6-d2b0-43c8-88ed-8895030f532b, event type delete
E0323 22:00:13.803266 1 tokens_controller.go:262] error synchronizing serviceaccount azurefile-7680/default: serviceaccounts "default" not found
I0323 22:00:13.803312 1 tokens_controller.go:252] syncServiceAccount(azurefile-7680/default), service account deleted, removing tokens
I0323 22:00:13.809144 1 tokens_controller.go:252] syncServiceAccount(azurefile-7680/default), service account deleted, removing tokens
I0323 22:00:13.841895 1 serviceaccounts_controller.go:188] Finished syncing namespace "azurefile-7680" (2.1µs)
I0323 22:00:13.842065 1 namespaced_resources_deleter.go:556] namespace controller - deleteAllContent - namespace: azurefile-7680, estimate: 0, errors: <nil>
I0323 22:00:13.855288 1 namespace_controller.go:180] Finished syncing namespace "azurefile-7680" (147.602372ms)
I0323 22:00:13.974401 1 serviceaccounts_controller.go:188] Finished syncing namespace "azurefile-8337" (2.5µs)
... skipping 78 lines ...
I0323 22:00:16.787449 1 namespace_controller.go:180] Finished syncing namespace "azurefile-2003" (166.661636ms)
I0323 22:00:16.806581 1 serviceaccounts_controller.go:188] Finished syncing namespace "azurefile-1614" (1.9µs)
I0323 22:00:16.854027 1 publisher.go:186] Finished syncing namespace "azurefile-5431" (6.489592ms)
I0323 22:00:16.857262 1 serviceaccounts_controller.go:188] Finished syncing namespace "azurefile-5431" (9.79119ms)
I0323 22:00:17.105676 1 namespaced_resources_deleter.go:502] namespace controller - deleteAllContent - namespace: azurefile-8459
I0323 22:00:17.144659 1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=secrets, namespace azurefile-8459, name default-token-hffql, uid b361851c-e5cb-4d00-aa25-57f640c2ebe9, event type delete
E0323 22:00:17.156446 1 tokens_controller.go:262] error synchronizing serviceaccount azurefile-8459/default: secrets "default-token-995bg" is forbidden: unable to create new content in namespace azurefile-8459 because it is being terminated
I0323 22:00:17.165114 1 tokens_controller.go:252] syncServiceAccount(azurefile-8459/default), service account deleted, removing tokens
I0323 22:00:17.165236 1 serviceaccounts_controller.go:188] Finished syncing namespace "azurefile-8459" (1.8µs)
I0323 22:00:17.165326 1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azurefile-8459, name default, uid ea261537-cd11-44e1-ba9d-1f94efb94e50, event type delete
I0323 22:00:17.239691 1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=configmaps, namespace azurefile-8459, name kube-root-ca.crt, uid cae41e21-682d-4d43-a7e8-7fc51116c74b, event type delete
I0323 22:00:17.240601 1 publisher.go:186] Finished syncing namespace "azurefile-8459" (1.02483ms)
I0323 22:00:17.260725 1 namespaced_resources_deleter.go:556] namespace controller - deleteAllContent - namespace: azurefile-8459, estimate: 0, errors: <nil>
... skipping 32 lines ...
I0323 22:00:18.264080 1 serviceaccounts_controller.go:188] Finished syncing namespace "azurefile-1694" (9.649586ms)
I0323 22:00:18.508095 1 namespaced_resources_deleter.go:502] namespace controller - deleteAllContent - namespace: azurefile-572
I0323 22:00:18.519466 1 namespaced_resources_deleter.go:502] namespace controller - deleteAllContent - namespace: azurefile-4880
I0323 22:00:18.609893 1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=configmaps, namespace azurefile-4880, name kube-root-ca.crt, uid df78daf9-f32e-4626-b949-1d7032236a1c, event type delete
I0323 22:00:18.615812 1 publisher.go:186] Finished syncing namespace "azurefile-4880" (6.044979ms)
I0323 22:00:18.652103 1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=secrets, namespace azurefile-4880, name default-token-lkpxp, uid a3a2b4ca-7e95-4934-af5c-953e245d3a60, event type delete
E0323 22:00:18.665989 1 tokens_controller.go:262] error synchronizing serviceaccount azurefile-4880/default: serviceaccounts "default" not found
I0323 22:00:18.666174 1 tokens_controller.go:252] syncServiceAccount(azurefile-4880/default), service account deleted, removing tokens
I0323 22:00:18.666283 1 serviceaccounts_controller.go:188] Finished syncing namespace "azurefile-4880" (1.8µs)
I0323 22:00:18.666309 1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azurefile-4880, name default, uid 60624e1e-9c6d-4739-a694-9a6c63991b22, event type delete
I0323 22:00:18.671038 1 tokens_controller.go:252] syncServiceAccount(azurefile-4880/default), service account deleted, removing tokens
I0323 22:00:18.678991 1 namespaced_resources_deleter.go:556] namespace controller - deleteAllContent - namespace: azurefile-572, estimate: 0, errors: <nil>
I0323 22:00:18.679693 1 serviceaccounts_controller.go:188] Finished syncing namespace "azurefile-572" (1.3µs)
... skipping 7 lines ...
I0323 22:00:18.843243 1 namespace_controller.go:185] Namespace has been deleted azurefile-7680
I0323 22:00:18.843265 1 namespace_controller.go:180] Finished syncing namespace "azurefile-7680" (41.702µs)
I0323 22:00:18.978079 1 namespaced_resources_deleter.go:502] namespace controller - deleteAllContent - namespace: azurefile-8337
I0323 22:00:19.004773 1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=configmaps, namespace azurefile-8337, name kube-root-ca.crt, uid 1419e712-b9bc-49f6-b446-e9c62846bfc3, event type delete
I0323 22:00:19.006296 1 publisher.go:186] Finished syncing namespace "azurefile-8337" (1.629549ms)
I0323 22:00:19.075795 1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=secrets, namespace azurefile-8337, name default-token-xgmj5, uid 174d3a7b-2e22-4ee7-8e67-2d9944926e5c, event type delete
E0323 22:00:19.088634 1 tokens_controller.go:262] error synchronizing serviceaccount azurefile-8337/default: secrets "default-token-dkdrd" is forbidden: unable to create new content in namespace azurefile-8337 because it is being terminated
I0323 22:00:19.097317 1 tokens_controller.go:252] syncServiceAccount(azurefile-8337/default), service account deleted, removing tokens
I0323 22:00:19.097353 1 serviceaccounts_controller.go:188] Finished syncing namespace "azurefile-8337" (2.7µs)
I0323 22:00:19.097474 1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azurefile-8337, name default, uid 60f0a1a7-99a6-4a7c-b8d2-fb8b0b88958f, event type delete
I0323 22:00:19.137916 1 namespaced_resources_deleter.go:556] namespace controller - deleteAllContent - namespace: azurefile-8337, estimate: 0, errors: <nil>
I0323 22:00:19.141262 1 serviceaccounts_controller.go:188] Finished syncing namespace "azurefile-8337" (1.9µs)
I0323 22:00:19.146475 1 serviceaccounts_controller.go:188] Finished syncing namespace "azurefile-9364" (2.2µs)
... skipping 16 lines ...
I0323 22:00:19.666167 1 serviceaccounts_controller.go:188] Finished syncing namespace "azurefile-6399" (4.361829ms)
I0323 22:00:19.668392 1 publisher.go:186] Finished syncing namespace "azurefile-6399" (6.41439ms)
I0323 22:00:19.846224 1 namespace_controller.go:185] Namespace has been deleted azurefile-9832
I0323 22:00:19.846249 1 namespace_controller.go:180] Finished syncing namespace "azurefile-9832" (50.402µs)
I0323 22:00:19.955606 1 namespaced_resources_deleter.go:502] namespace controller - deleteAllContent - namespace: azurefile-1578
I0323 22:00:19.989313 1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=secrets, namespace azurefile-1578, name default-token-bb4b7, uid 3e354842-21a5-4a84-bd17-88f72c832ef4, event type delete
E0323 22:00:19.999926 1 tokens_controller.go:262] error synchronizing serviceaccount azurefile-1578/default: secrets "default-token-khpw2" is forbidden: unable to create new content in namespace azurefile-1578 because it is being terminated
I0323 22:00:20.028670 1 tokens_controller.go:252] syncServiceAccount(azurefile-1578/default), service account deleted, removing tokens
I0323 22:00:20.029038 1 serviceaccounts_controller.go:188] Finished syncing namespace "azurefile-1578" (2.901µs)
I0323 22:00:20.029057 1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azurefile-1578, name default, uid 35f4fbce-358d-4038-8651-760211a34bc9, event type delete
I0323 22:00:20.036645 1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=configmaps, namespace azurefile-1578, name kube-root-ca.crt, uid 5988b197-ddee-4fe8-9924-39750fd16aca, event type delete
I0323 22:00:20.037759 1 publisher.go:186] Finished syncing namespace "azurefile-1578" (1.223363ms)
I0323 22:00:20.092624 1 serviceaccounts_controller.go:188] Finished syncing namespace "azurefile-1578" (2.6µs)
... skipping 3 lines ...
I0323 22:00:20.150634 1 serviceaccounts_controller.go:188] Finished syncing namespace "azurefile-2802" (10.887959ms)
I0323 22:00:20.327182 1 namespace_controller.go:185] Namespace has been deleted azurefile-4929
I0323 22:00:20.327208 1 namespace_controller.go:180] Finished syncing namespace "azurefile-4929" (47.502µs)
I0323 22:00:20.500862 1 namespaced_resources_deleter.go:502] namespace controller - deleteAllContent - namespace: azurefile-3699
I0323 22:00:20.501558 1 publisher.go:186] Finished syncing namespace "azurefile-2802" (361.618177ms)
I0323 22:00:20.560660 1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=secrets, namespace azurefile-3699, name default-token-k4mx5, uid c38c6e7d-42c8-48be-adbc-173ac7d5045a, event type delete
E0323 22:00:20.574169 1 tokens_controller.go:262] error synchronizing serviceaccount azurefile-3699/default: secrets "default-token-t5qbh" is forbidden: unable to create new content in namespace azurefile-3699 because it is being terminated
I0323 22:00:20.596914 1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=configmaps, namespace azurefile-3699, name kube-root-ca.crt, uid 8ddb6d68-b7e4-4c9a-95dd-ef9f64044f42, event type delete
I0323 22:00:20.598204 1 publisher.go:186] Finished syncing namespace "azurefile-3699" (1.35587ms)
I0323 22:00:20.612573 1 tokens_controller.go:252] syncServiceAccount(azurefile-3699/default), service account deleted, removing tokens
I0323 22:00:20.612745 1 serviceaccounts_controller.go:188] Finished syncing namespace "azurefile-3699" (3.2µs)
I0323 22:00:20.612765 1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azurefile-3699, name default, uid 6a01859b-6e20-4af7-ad45-04b42eba316d, event type delete
I0323 22:00:20.646106 1 serviceaccounts_controller.go:188] Finished syncing namespace "azurefile-3699" (1.6µs)
... skipping 4 lines ...
I0323 22:00:20.799296 1 namespace_controller.go:180] Finished syncing namespace "azurefile-613" (247.413µs)
I0323 22:00:20.805189 1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 2 items received
I0323 22:00:20.809089 1 serviceaccounts_controller.go:188] Finished syncing namespace "azurefile-3796" (5.397977ms)
I0323 22:00:20.811238 1 publisher.go:186] Finished syncing namespace "azurefile-3796" (7.533987ms)
I0323 22:00:20.898301 1 namespaced_resources_deleter.go:502] namespace controller - deleteAllContent - namespace: azurefile-6625
I0323 22:00:20.959353 1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=secrets, namespace azurefile-6625, name default-token-jk9xc, uid 87e1e249-f8b5-4fd7-b618-3602a682eb7e, event type delete
E0323 22:00:20.971100 1 tokens_controller.go:262] error synchronizing serviceaccount azurefile-6625/default: secrets "default-token-8dp2q" is forbidden: unable to create new content in namespace azurefile-6625 because it is being terminated
I0323 22:00:20.983236 1 tokens_controller.go:252] syncServiceAccount(azurefile-6625/default), service account deleted, removing tokens
I0323 22:00:20.983276 1 serviceaccounts_controller.go:188] Finished syncing namespace "azurefile-6625" (2µs)
I0323 22:00:20.983295 1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azurefile-6625, name default, uid 0f2fdd2a-7681-4cae-8bbd-fe685289c915, event type delete
I0323 22:00:21.001103 1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=configmaps, namespace azurefile-6625, name kube-root-ca.crt, uid df0fa0b6-32ed-4ec7-b47b-8f2466e6d591, event type delete
I0323 22:00:21.002631 1 publisher.go:186] Finished syncing namespace "azurefile-6625" (1.689687ms)
I0323 22:00:21.050481 1 serviceaccounts_controller.go:188] Finished syncing namespace "azurefile-6625" (3.5µs)
... skipping 5 lines ...
I0323 22:00:21.285895 1 namespace_controller.go:185] Namespace has been deleted azurefile-2999
I0323 22:00:21.285913 1 namespace_controller.go:180] Finished syncing namespace "azurefile-2999" (35.302µs)
I0323 22:00:21.358582 1 namespaced_resources_deleter.go:502] namespace controller - deleteAllContent - namespace: azurefile-3561
I0323 22:00:21.397651 1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=configmaps, namespace azurefile-3561, name kube-root-ca.crt, uid 507da902-902a-4151-8abd-741b856c4a37, event type delete
I0323 22:00:21.399299 1 publisher.go:186] Finished syncing namespace "azurefile-3561" (1.657585ms)
I0323 22:00:21.407075 1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=secrets, namespace azurefile-3561, name default-token-d55fp, uid bbff24d5-1655-4604-a788-24fad64932bb, event type delete
E0323 22:00:21.426220 1 tokens_controller.go:262] error synchronizing serviceaccount azurefile-3561/default: secrets "default-token-jrdz5" is forbidden: unable to create new content in namespace azurefile-3561 because it is being terminated
I0323 22:00:21.431411 1 serviceaccounts_controller.go:188] Finished syncing namespace "azurefile-3561" (2.6µs)
I0323 22:00:21.431449 1 tokens_controller.go:252] syncServiceAccount(azurefile-3561/default), service account deleted, removing tokens
I0323 22:00:21.431504 1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azurefile-3561, name default, uid 7cf828be-d799-4bd0-9892-36ad47f10b2e, event type delete
I0323 22:00:21.501303 1 serviceaccounts_controller.go:188] Finished syncing namespace "azurefile-3561" (2.7µs)
I0323 22:00:21.501596 1 namespaced_resources_deleter.go:556] namespace controller - deleteAllContent - namespace: azurefile-3561, estimate: 0, errors: <nil>
I0323 22:00:21.511509 1 namespace_controller.go:180] Finished syncing namespace "azurefile-3561" (158.559473ms)
I0323 22:00:21.688858 1 serviceaccounts_controller.go:188] Finished syncing namespace "azurefile-2630" (2.5µs)
I0323 22:00:21.778176 1 namespace_controller.go:185] Namespace has been deleted azurefile-2003
I0323 22:00:21.778572 1 namespace_controller.go:180] Finished syncing namespace "azurefile-2003" (451.723µs)
I0323 22:00:21.809811 1 namespaced_resources_deleter.go:502] namespace controller - deleteAllContent - namespace: azurefile-1614
I0323 22:00:21.828137 1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=secrets, namespace azurefile-1614, name default-token-ghlsq, uid 310a05b3-52ed-46d6-ae55-4fc038a5e3c7, event type delete
E0323 22:00:21.840230 1 tokens_controller.go:262] error synchronizing serviceaccount azurefile-1614/default: secrets "default-token-r42kp" is forbidden: unable to create new content in namespace azurefile-1614 because it is being terminated
I0323 22:00:21.922184 1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=configmaps, namespace azurefile-1614, name kube-root-ca.crt, uid 012f3aec-f4c1-4bfa-bc36-6b657c9f692d, event type delete
I0323 22:00:21.923869 1 publisher.go:186] Finished syncing namespace "azurefile-1614" (1.863696ms)
I0323 22:00:21.975407 1 tokens_controller.go:252] syncServiceAccount(azurefile-1614/default), service account deleted, removing tokens
I0323 22:00:21.975527 1 serviceaccounts_controller.go:188] Finished syncing namespace "azurefile-1614" (2.4µs)
I0323 22:00:21.975606 1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azurefile-1614, name default, uid f481e81a-75c8-4e48-b29a-5c0008cca921, event type delete
I0323 22:00:21.992106 1 serviceaccounts_controller.go:188] Finished syncing namespace "azurefile-1614" (2.3µs)
... skipping 13 lines ...
I0323 22:00:22.406389 1 namespace_controller.go:180] Finished syncing namespace "azurefile-5431" (134.221312ms)
I0323 22:00:22.456773 1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0323 22:00:22.704627 1 namespace_controller.go:185] Namespace has been deleted azurefile-3054
I0323 22:00:22.704654 1 namespace_controller.go:180] Finished syncing namespace "azurefile-3054" (51.603µs)
I0323 22:00:22.757383 1 namespaced_resources_deleter.go:502] namespace controller - deleteAllContent - namespace: azurefile-4325
I0323 22:00:22.828668 1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=secrets, namespace azurefile-4325, name default-token-pzqcz, uid 717606bb-afa9-4029-994e-025b46d6a29c, event type delete
E0323 22:00:22.840967 1 tokens_controller.go:262] error synchronizing serviceaccount azurefile-4325/default: secrets "default-token-tqssr" is forbidden: unable to create new content in namespace azurefile-4325 because it is being terminated
I0323 22:00:22.859527 1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=configmaps, namespace azurefile-4325, name kube-root-ca.crt, uid b8c3b094-1267-4652-abe2-b3d55252623e, event type delete
I0323 22:00:22.861674 1 publisher.go:186] Finished syncing namespace "azurefile-4325" (2.178912ms)
I0323 22:00:22.869714 1 tokens_controller.go:252] syncServiceAccount(azurefile-4325/default), service account deleted, removing tokens
I0323 22:00:22.869842 1 serviceaccounts_controller.go:188] Finished syncing namespace "azurefile-4325" (2.4µs)
I0323 22:00:22.869860 1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azurefile-4325, name default, uid d707fe81-5588-455e-8cab-5ab00b5f074b, event type delete
I0323 22:00:22.900426 1 serviceaccounts_controller.go:188] Finished syncing namespace "azurefile-4325" (2.201µs)
... skipping 2 lines ...
I0323 22:00:23.193282 1 namespace_controller.go:185] Namespace has been deleted azurefile-9308
I0323 22:00:23.193303 1 namespace_controller.go:180] Finished syncing namespace "azurefile-9308" (45.702µs)
I0323 22:00:23.215326 1 namespaced_resources_deleter.go:502] namespace controller - deleteAllContent - namespace: azurefile-4812
I0323 22:00:23.252762 1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=configmaps, namespace azurefile-4812, name kube-root-ca.crt, uid 24c1bf8a-e975-41e3-aefa-e3af69b9dee2, event type delete
I0323 22:00:23.254836 1 publisher.go:186] Finished syncing namespace "azurefile-4812" (2.235915ms)
I0323 22:00:23.319295 1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=secrets, namespace azurefile-4812, name default-token-k4w7h, uid 13b1248a-65bc-4306-8138-41284c389a40, event type delete
E0323 22:00:23.334993 1 tokens_controller.go:262] error synchronizing serviceaccount azurefile-4812/default: secrets "default-token-5k6mf" is forbidden: unable to create new content in namespace azurefile-4812 because it is being terminated
I0323 22:00:23.366293 1 tokens_controller.go:252] syncServiceAccount(azurefile-4812/default), service account deleted, removing tokens
I0323 22:00:23.366500 1 serviceaccounts_controller.go:188] Finished syncing namespace "azurefile-4812" (3.7µs)
I0323 22:00:23.366630 1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azurefile-4812, name default, uid 91c7c062-268f-48f6-95dc-cbb382db38a3, event type delete
I0323 22:00:23.401887 1 serviceaccounts_controller.go:188] Finished syncing namespace "azurefile-4812" (2.6µs)
I0323 22:00:23.402792 1 namespaced_resources_deleter.go:556] namespace controller - deleteAllContent - namespace: azurefile-4812, estimate: 0, errors: <nil>
I0323 22:00:23.456234 1 namespace_controller.go:180] Finished syncing namespace "azurefile-4812" (243.620537ms)
... skipping 2 lines ...
I0323 22:00:23.698316 1 namespaced_resources_deleter.go:502] namespace controller - deleteAllContent - namespace: azurefile-1694
I0323 22:00:23.711844 1 namespace_controller.go:185] Namespace has been deleted azurefile-4880
I0323 22:00:23.711914 1 namespace_controller.go:180] Finished syncing namespace "azurefile-4880" (83.804µs)
I0323 22:00:23.716931 1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=configmaps, namespace azurefile-1694, name kube-root-ca.crt, uid 3e7a4ee6-e91d-4566-b469-8106602ab0f9, event type delete
I0323 22:00:23.718458 1 publisher.go:186] Finished syncing namespace "azurefile-1694" (1.641185ms)
I0323 22:00:23.737420 1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=secrets, namespace azurefile-1694, name default-token-pv7j7, uid 9ebf22eb-6426-4997-82b4-7b5ee2fea2de, event type delete
E0323 22:00:23.752930 1 tokens_controller.go:262] error synchronizing serviceaccount azurefile-1694/default: secrets "default-token-w2zfc" is forbidden: unable to create new content in namespace azurefile-1694 because it is being terminated
I0323 22:00:23.793504 1 publisher.go:186] Finished syncing namespace "azurefile-9902" (15.476497ms)
I0323 22:00:23.795293 1 serviceaccounts_controller.go:188] Finished syncing namespace "azurefile-9902" (17.424296ms)
I0323 22:00:23.825617 1 tokens_controller.go:252] syncServiceAccount(azurefile-1694/default), service account deleted, removing tokens
I0323 22:00:23.825649 1 serviceaccounts_controller.go:188] Finished syncing namespace "azurefile-1694" (1.4µs)
I0323 22:00:23.825667 1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azurefile-1694, name default, uid 4e6e45d5-26a4-43c0-bd42-fd6a03149ab8, event type delete
I0323 22:00:23.890074 1 namespaced_resources_deleter.go:556] namespace controller - deleteAllContent - namespace: azurefile-1694, estimate: 0, errors: <nil>
I0323 22:00:23.890099 1 serviceaccounts_controller.go:188] Finished syncing namespace "azurefile-1694" (3.3µs)
I0323 22:00:23.903969 1 namespace_controller.go:180] Finished syncing namespace "azurefile-1694" (208.304219ms)
I0323 22:00:24.141841 1 namespace_controller.go:185] Namespace has been deleted azurefile-8337
I0323 22:00:24.141865 1 namespace_controller.go:180] Finished syncing namespace "azurefile-8337" (50.703µs)
I0323 22:00:24.149806 1 namespaced_resources_deleter.go:502] namespace controller - deleteAllContent - namespace: azurefile-9364
I0323 22:00:24.167363 1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=secrets, namespace azurefile-9364, name default-token-bng56, uid a89de7bb-2e41-497d-a6e4-07f3c3bf00ca, event type delete
E0323 22:00:24.181669 1 tokens_controller.go:262] error synchronizing serviceaccount azurefile-9364/default: secrets "default-token-cqvjv" is forbidden: unable to create new content in namespace azurefile-9364 because it is being terminated
I0323 22:00:24.209173 1 serviceaccounts_controller.go:188] Finished syncing namespace "azurefile-9902" (1.8µs)
I0323 22:00:24.213901 1 tokens_controller.go:252] syncServiceAccount(azurefile-9364/default), service account deleted, removing tokens
I0323 22:00:24.213941 1 serviceaccounts_controller.go:188] Finished syncing namespace "azurefile-9364" (2.2µs)
I0323 22:00:24.213982 1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azurefile-9364, name default, uid e5baee5a-ea6b-46bf-9a59-8c5e8c809237, event type delete
I0323 22:00:24.276993 1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=configmaps, namespace azurefile-9364, name kube-root-ca.crt, uid 62b7e893-ff4d-4e12-a9fe-1214ff8ea50e, event type delete
I0323 22:00:24.279030 1 publisher.go:186] Finished syncing namespace "azurefile-9364" (2.162511ms)
... skipping 15 lines ...
I0323 22:00:25.093785 1 namespace_controller.go:185] Namespace has been deleted azurefile-1578
I0323 22:00:25.093805 1 namespace_controller.go:180] Finished syncing namespace "azurefile-1578" (50.903µs)
I0323 22:00:25.099775 1 namespaced_resources_deleter.go:502] namespace controller - deleteAllContent - namespace: azurefile-6399
I0323 22:00:25.136079 1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=configmaps, namespace azurefile-6399, name kube-root-ca.crt, uid b856329b-1d63-48fd-b60a-843c3de46099, event type delete
I0323 22:00:25.137827 1 publisher.go:186] Finished syncing namespace "azurefile-6399" (1.890897ms)
I0323 22:00:25.174585 1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=secrets, namespace azurefile-6399, name default-token-w9z76, uid 915ad82c-108b-466c-a818-d2b64c1bcc7d, event type delete
E0323 22:00:25.189778 1 tokens_controller.go:262] error synchronizing serviceaccount azurefile-6399/default: secrets "default-token-2g848" is forbidden: unable to create new content in namespace azurefile-6399 because it is being terminated
I0323 22:00:25.196463 1 tokens_controller.go:252] syncServiceAccount(azurefile-6399/default), service account deleted, removing tokens
I0323 22:00:25.196643 1 serviceaccounts_controller.go:188] Finished syncing namespace "azurefile-6399" (2.1µs)
I0323 22:00:25.196770 1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azurefile-6399, name default, uid f93908b3-d80b-4656-909b-d655f98f247a, event type delete
I0323 22:00:25.282134 1 namespaced_resources_deleter.go:556] namespace controller - deleteAllContent - namespace: azurefile-6399, estimate: 0, errors: <nil>
I0323 22:00:25.282175 1 serviceaccounts_controller.go:188] Finished syncing namespace "azurefile-6399" (3.3µs)
I0323 22:00:25.306561 1 namespace_controller.go:180] Finished syncing namespace "azurefile-6399" (209.576467ms)
I0323 22:00:25.483839 1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="109.805µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:57412" resp=200
I0323 22:00:25.647043 1 namespace_controller.go:185] Namespace has been deleted azurefile-3699
I0323 22:00:25.647094 1 namespace_controller.go:180] Finished syncing namespace "azurefile-3699" (150.508µs)
I0323 22:00:25.771368 1 namespaced_resources_deleter.go:502] namespace controller - deleteAllContent - namespace: azurefile-2802
I0323 22:00:25.814986 1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=secrets, namespace azurefile-2802, name default-token-rrhks, uid 778fb5eb-8ca3-4d7d-b26b-2943a9e8b9e6, event type delete
I0323 22:00:25.827113 1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=configmaps, namespace azurefile-2802, name kube-root-ca.crt, uid 1b59a29a-d239-44d1-ab01-3f07c79027d8, event type delete
I0323 22:00:25.829124 1 publisher.go:186] Finished syncing namespace "azurefile-2802" (2.199513ms)
E0323 22:00:25.830753 1 tokens_controller.go:262] error synchronizing serviceaccount azurefile-2802/default: secrets "default-token-gv9m5" is forbidden: unable to create new content in namespace azurefile-2802 because it is being terminated
I0323 22:00:25.845967 1 tokens_controller.go:252] syncServiceAccount(azurefile-2802/default), service account deleted, removing tokens
I0323 22:00:25.846130 1 serviceaccounts_controller.go:188] Finished syncing namespace "azurefile-2802" (1.9µs)
I0323 22:00:25.846247 1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azurefile-2802, name default, uid 6165b497-7c96-4bd4-b453-46bac267f84d, event type delete
2023/03/23 22:00:26 ===================================================
------------------------------
[AfterSuite] PASSED [1.913 seconds]
[AfterSuite]
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/suite_test.go:148
------------------------------
Summarizing 6 Failures:
[FAIL] Dynamic Provisioning [It] should create a volume on demand with mount options [kubernetes.io/azure-file] [file.csi.azure.com] [Windows]
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites/testsuites.go:221
[FAIL] Dynamic Provisioning [It] should create multiple PV objects, bind to PVCs and attach all to different pods on the same node [kubernetes.io/azure-file] [file.csi.azure.com] [Windows]
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites/testsuites.go:221
[FAIL] Dynamic Provisioning [It] should create a volume on demand and mount it as readOnly in a pod [kubernetes.io/azure-file] [file.csi.azure.com] [Windows]
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites/testsuites.go:221
[FAIL] Dynamic Provisioning [It] should create a deployment object, write and read to it, delete the pod and write and read to it again [kubernetes.io/azure-file] [file.csi.azure.com] [Windows]
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites/testsuites.go:221
[FAIL] Dynamic Provisioning [It] should delete PV with reclaimPolicy "Delete" [kubernetes.io/azure-file] [file.csi.azure.com] [Windows]
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites/testsuites.go:221
[FAIL] Dynamic Provisioning [It] should create a volume on demand and resize it [kubernetes.io/azure-file] [file.csi.azure.com] [Windows]
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/testsuites/testsuites.go:221
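# All six failures point at the same source line, testsuites.go:221 (the wait for
# PVCs to reach Bound), so they are six symptoms of the single provisioning error
# above rather than independent bugs. To iterate on one spec without the full
# ~30-minute suite, a sketch (assumes the Ginkgo v2 CLI on PATH and the same
# exported cluster kubeconfig/credentials the suite used; the repo's Makefile may
# expose its own focus knob instead):
ginkgo -v --focus='should create a volume on demand with mount options' ./test/e2e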
Ran 6 of 39 Specs in 1829.534 seconds
FAIL! -- 0 Passed | 6 Failed | 0 Pending | 33 Skipped
You're using deprecated Ginkgo functionality:
=============================================
Support for custom reporters has been removed in V2. Please read the documentation linked to below for Ginkgo's new behavior and for a migration path:
Learn more at: https://onsi.github.io/ginkgo/MIGRATING_TO_V2#removed-custom-reporters
To silence deprecations that can be silenced set the following environment variable:
ACK_GINKGO_DEPRECATIONS=2.4.0
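# The deprecation notice is unrelated to the failures; per the message above it
# can be silenced when invoking the suite's make target, e.g.:
ACK_GINKGO_DEPRECATIONS=2.4.0 make e2e-test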
--- FAIL: TestE2E (1829.54s)
FAIL
FAIL sigs.k8s.io/azurefile-csi-driver/test/e2e 1829.609s
FAIL
make: *** [Makefile:85: e2e-test] Error 1
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
capz-7bsgqo-control-plane-78g6l Ready control-plane,master 35m v1.23.18-rc.0.7+1635c380b26a1d 10.0.0.4 <none> Ubuntu 18.04.6 LTS 5.4.0-1104-azure containerd://1.6.18
capz-7bsgqo-md-0-9mzmk Ready <none> 33m v1.23.18-rc.0.7+1635c380b26a1d 10.1.0.5 <none> Ubuntu 18.04.6 LTS 5.4.0-1104-azure containerd://1.6.18
capz-7bsgqo-md-0-gzp2s Ready <none> 33m v1.23.18-rc.0.7+1635c380b26a1d 10.1.0.4 <none> Ubuntu 18.04.6 LTS 5.4.0-1104-azure containerd://1.6.18
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
calico-apiserver calico-apiserver-57bb57f4c5-7mmxs 1/1 Running 0 34m 192.168.169.199 capz-7bsgqo-control-plane-78g6l <none> <none>
... skipping 164 lines ...