Result: success
Tests: 0 failed / 12 succeeded
Started: 2022-09-07 09:26
Elapsed: 44m8s
Revision:
Uploader: crier

No Test Failures!


12 passed tests
47 skipped tests

Error lines from build-log.txt

... skipping 702 lines ...
certificate.cert-manager.io "selfsigned-cert" deleted
# Create secret for AzureClusterIdentity
./hack/create-identity-secret.sh
make[2]: Entering directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make[2]: Nothing to be done for 'kubectl'.
make[2]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
Error from server (NotFound): secrets "cluster-identity-secret" not found
secret/cluster-identity-secret created
secret/cluster-identity-secret labeled
# Create customized cloud provider configs
./hack/create-custom-cloud-provider-config.sh
make[2]: Entering directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make[2]: Nothing to be done for 'kubectl'.
... skipping 141 lines ...
# Wait for the kubeconfig to become available.
timeout --foreground 300 bash -c "while ! /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/kubectl-v1.22.4 get secrets | grep capz-4ay9k6-kubeconfig; do sleep 1; done"
capz-4ay9k6-kubeconfig                 cluster.x-k8s.io/secret   1      0s
# Get kubeconfig and store it locally.
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/kubectl-v1.22.4 get secrets capz-4ay9k6-kubeconfig -o json | jq -r .data.value | base64 --decode > ./kubeconfig
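The kubeconfig-extraction pipeline above (jq pulls `.data.value` out of the Secret JSON, base64 decodes it) can be reproduced against a stand-in document — the JSON below is illustrative, not the actual cluster secret:

```shell
# Stand-in for `kubectl get secrets ... -o json`: a minimal Secret-shaped JSON
# whose .data.value holds a base64-encoded payload (a fake kubeconfig marker).
value=$(printf 'apiVersion: v1 # fake kubeconfig' | base64 | tr -d '\n')
secret=$(printf '{"data":{"value":"%s"}}' "$value")

# Same extraction as the job performs: pull .data.value with jq, base64-decode it.
echo "$secret" | jq -r .data.value | base64 --decode
```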
timeout --foreground 600 bash -c "while ! /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/kubectl-v1.22.4 --kubeconfig=./kubeconfig get nodes | grep control-plane; do sleep 1; done"
error: the server doesn't have a resource type "nodes"
capz-4ay9k6-control-plane-kk9g2   NotReady   control-plane   1s    v1.26.0-alpha.0.393+4b9575acb84a72
run "/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/kubectl-v1.22.4 --kubeconfig=./kubeconfig ..." to work with the new target cluster
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
Waiting for 1 control plane machine(s), 2 worker machine(s), and  windows machine(s) to become Ready
node/capz-4ay9k6-control-plane-kk9g2 condition met
node/capz-4ay9k6-md-0-b8ndn condition met
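Both waits above (for the kubeconfig secret and for a Ready control-plane node) use the same `timeout --foreground N bash -c "while ! ...; do sleep 1; done"` shape. A minimal self-contained sketch of that pattern, with a hypothetical `poll_until` helper and a marker file standing in for the cluster resource:

```shell
# poll_until CONDITION TIMEOUT [INTERVAL] — hypothetical helper: re-evaluates
# CONDITION every INTERVAL seconds, failing once TIMEOUT seconds have elapsed.
poll_until() {
  cond=$1; limit=$2; interval=${3:-1}; elapsed=0
  until eval "$cond"; do
    if [ "$elapsed" -ge "$limit" ]; then return 1; fi
    sleep "$interval"; elapsed=$((elapsed + interval))
  done
}

# A marker file stands in for "the kubeconfig secret exists" / "a node is Ready".
marker=$(mktemp -u)
( sleep 1; : > "$marker" ) &   # the resource "appears" after ~1s
poll_until "[ -f \"$marker\" ]" 5 1 && echo "resource ready"
rm -f "$marker"
```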
... skipping 62 lines ...
Dynamic Provisioning [single-az] 
  should create a volume on demand with mount options [kubernetes.io/azure-disk] [disk.csi.azure.com] [Windows]
  /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/dynamic_provisioning_test.go:101
STEP: Creating a kubernetes client
Sep  7 09:41:07.755: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/kubeconfig
STEP: Building a namespace api object, basename azuredisk
Sep  7 09:41:08.090: INFO: Error listing PodSecurityPolicies; assuming PodSecurityPolicy is disabled: the server could not find the requested resource
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
2022/09/07 09:41:08 Check driver pods if restarts ...
check the driver pods if restarts ...
======================================================================================
2022/09/07 09:41:08 Check successfully
Sep  7 09:41:08.568: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/kubeconfig
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
Sep  7 09:41:08.675: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-fs48x" in namespace "azuredisk-8081" to be "Succeeded or Failed"
Sep  7 09:41:08.712: INFO: Pod "azuredisk-volume-tester-fs48x": Phase="Pending", Reason="", readiness=false. Elapsed: 37.09775ms
Sep  7 09:41:10.748: INFO: Pod "azuredisk-volume-tester-fs48x": Phase="Pending", Reason="", readiness=false. Elapsed: 2.072942436s
Sep  7 09:41:12.784: INFO: Pod "azuredisk-volume-tester-fs48x": Phase="Pending", Reason="", readiness=false. Elapsed: 4.109305429s
Sep  7 09:41:14.854: INFO: Pod "azuredisk-volume-tester-fs48x": Phase="Pending", Reason="", readiness=false. Elapsed: 6.179207383s
Sep  7 09:41:16.889: INFO: Pod "azuredisk-volume-tester-fs48x": Phase="Pending", Reason="", readiness=false. Elapsed: 8.21361978s
Sep  7 09:41:18.924: INFO: Pod "azuredisk-volume-tester-fs48x": Phase="Pending", Reason="", readiness=false. Elapsed: 10.248799869s
Sep  7 09:41:20.960: INFO: Pod "azuredisk-volume-tester-fs48x": Phase="Pending", Reason="", readiness=false. Elapsed: 12.284943663s
Sep  7 09:41:22.995: INFO: Pod "azuredisk-volume-tester-fs48x": Phase="Pending", Reason="", readiness=false. Elapsed: 14.319947651s
Sep  7 09:41:25.035: INFO: Pod "azuredisk-volume-tester-fs48x": Phase="Pending", Reason="", readiness=false. Elapsed: 16.359568413s
Sep  7 09:41:27.071: INFO: Pod "azuredisk-volume-tester-fs48x": Phase="Pending", Reason="", readiness=false. Elapsed: 18.395933031s
Sep  7 09:41:29.109: INFO: Pod "azuredisk-volume-tester-fs48x": Phase="Pending", Reason="", readiness=false. Elapsed: 20.43342397s
Sep  7 09:41:31.145: INFO: Pod "azuredisk-volume-tester-fs48x": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.470279341s
STEP: Saw pod success
Sep  7 09:41:31.145: INFO: Pod "azuredisk-volume-tester-fs48x" satisfied condition "Succeeded or Failed"
Sep  7 09:41:31.145: INFO: deleting Pod "azuredisk-8081"/"azuredisk-volume-tester-fs48x"
Sep  7 09:41:31.194: INFO: Pod azuredisk-volume-tester-fs48x has the following logs: hello world

STEP: Deleting pod azuredisk-volume-tester-fs48x in namespace azuredisk-8081
STEP: validating provisioned PV
STEP: checking the PV
... skipping 97 lines ...
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod has 'FailedMount' event
Sep  7 09:42:26.947: INFO: deleting Pod "azuredisk-5466"/"azuredisk-volume-tester-gwxqw"
Sep  7 09:42:26.994: INFO: Error getting logs for pod azuredisk-volume-tester-gwxqw: the server rejected our request for an unknown reason (get pods azuredisk-volume-tester-gwxqw)
STEP: Deleting pod azuredisk-volume-tester-gwxqw in namespace azuredisk-5466
STEP: validating provisioned PV
STEP: checking the PV
Sep  7 09:42:27.096: INFO: deleting PVC "azuredisk-5466"/"pvc-q88vh"
Sep  7 09:42:27.097: INFO: Deleting PersistentVolumeClaim "pvc-q88vh"
STEP: waiting for claim's PV "pvc-1c0b0153-abd6-49b7-87f8-a7b5cdee4062" to be deleted
... skipping 58 lines ...
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
Sep  7 09:44:54.242: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-m2g4j" in namespace "azuredisk-2888" to be "Succeeded or Failed"
Sep  7 09:44:54.276: INFO: Pod "azuredisk-volume-tester-m2g4j": Phase="Pending", Reason="", readiness=false. Elapsed: 33.392803ms
Sep  7 09:44:56.312: INFO: Pod "azuredisk-volume-tester-m2g4j": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069570436s
Sep  7 09:44:58.347: INFO: Pod "azuredisk-volume-tester-m2g4j": Phase="Pending", Reason="", readiness=false. Elapsed: 4.105099229s
Sep  7 09:45:00.382: INFO: Pod "azuredisk-volume-tester-m2g4j": Phase="Pending", Reason="", readiness=false. Elapsed: 6.13993655s
Sep  7 09:45:02.418: INFO: Pod "azuredisk-volume-tester-m2g4j": Phase="Pending", Reason="", readiness=false. Elapsed: 8.175462334s
Sep  7 09:45:04.452: INFO: Pod "azuredisk-volume-tester-m2g4j": Phase="Pending", Reason="", readiness=false. Elapsed: 10.20985114s
... skipping 3 lines ...
Sep  7 09:45:12.591: INFO: Pod "azuredisk-volume-tester-m2g4j": Phase="Pending", Reason="", readiness=false. Elapsed: 18.348568224s
Sep  7 09:45:14.626: INFO: Pod "azuredisk-volume-tester-m2g4j": Phase="Pending", Reason="", readiness=false. Elapsed: 20.383731073s
Sep  7 09:45:16.663: INFO: Pod "azuredisk-volume-tester-m2g4j": Phase="Pending", Reason="", readiness=false. Elapsed: 22.420381429s
Sep  7 09:45:18.700: INFO: Pod "azuredisk-volume-tester-m2g4j": Phase="Pending", Reason="", readiness=false. Elapsed: 24.457924425s
Sep  7 09:45:20.736: INFO: Pod "azuredisk-volume-tester-m2g4j": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.493447007s
STEP: Saw pod success
Sep  7 09:45:20.736: INFO: Pod "azuredisk-volume-tester-m2g4j" satisfied condition "Succeeded or Failed"
Sep  7 09:45:20.736: INFO: deleting Pod "azuredisk-2888"/"azuredisk-volume-tester-m2g4j"
Sep  7 09:45:20.781: INFO: Pod azuredisk-volume-tester-m2g4j has the following logs: e2e-test

STEP: Deleting pod azuredisk-volume-tester-m2g4j in namespace azuredisk-2888
STEP: validating provisioned PV
STEP: checking the PV
... skipping 36 lines ...
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod's command exits with an error
Sep  7 09:45:41.984: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-qhvlt" in namespace "azuredisk-5429" to be "Error status code"
Sep  7 09:45:42.017: INFO: Pod "azuredisk-volume-tester-qhvlt": Phase="Pending", Reason="", readiness=false. Elapsed: 33.332943ms
Sep  7 09:45:44.052: INFO: Pod "azuredisk-volume-tester-qhvlt": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068103134s
Sep  7 09:45:46.087: INFO: Pod "azuredisk-volume-tester-qhvlt": Phase="Pending", Reason="", readiness=false. Elapsed: 4.103002029s
Sep  7 09:45:48.122: INFO: Pod "azuredisk-volume-tester-qhvlt": Phase="Pending", Reason="", readiness=false. Elapsed: 6.138098376s
Sep  7 09:45:50.157: INFO: Pod "azuredisk-volume-tester-qhvlt": Phase="Pending", Reason="", readiness=false. Elapsed: 8.172914525s
Sep  7 09:45:52.191: INFO: Pod "azuredisk-volume-tester-qhvlt": Phase="Pending", Reason="", readiness=false. Elapsed: 10.207315887s
Sep  7 09:45:54.227: INFO: Pod "azuredisk-volume-tester-qhvlt": Phase="Pending", Reason="", readiness=false. Elapsed: 12.242974768s
Sep  7 09:45:56.263: INFO: Pod "azuredisk-volume-tester-qhvlt": Phase="Pending", Reason="", readiness=false. Elapsed: 14.279381339s
Sep  7 09:45:58.300: INFO: Pod "azuredisk-volume-tester-qhvlt": Phase="Pending", Reason="", readiness=false. Elapsed: 16.315690043s
Sep  7 09:46:00.334: INFO: Pod "azuredisk-volume-tester-qhvlt": Phase="Pending", Reason="", readiness=false. Elapsed: 18.349882496s
Sep  7 09:46:02.370: INFO: Pod "azuredisk-volume-tester-qhvlt": Phase="Pending", Reason="", readiness=false. Elapsed: 20.386426539s
Sep  7 09:46:04.406: INFO: Pod "azuredisk-volume-tester-qhvlt": Phase="Failed", Reason="", readiness=false. Elapsed: 22.422170954s
STEP: Saw pod failure
Sep  7 09:46:04.407: INFO: Pod "azuredisk-volume-tester-qhvlt" satisfied condition "Error status code"
STEP: checking that pod logs contain expected message
Sep  7 09:46:04.452: INFO: deleting Pod "azuredisk-5429"/"azuredisk-volume-tester-qhvlt"
Sep  7 09:46:04.489: INFO: Pod azuredisk-volume-tester-qhvlt has the following logs: touch: /mnt/test-1/data: Read-only file system

STEP: Deleting pod azuredisk-volume-tester-qhvlt in namespace azuredisk-5429
STEP: validating provisioned PV
... skipping 377 lines ...
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
Sep  7 09:53:22.085: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-l7kxz" in namespace "azuredisk-9241" to be "Succeeded or Failed"
Sep  7 09:53:22.119: INFO: Pod "azuredisk-volume-tester-l7kxz": Phase="Pending", Reason="", readiness=false. Elapsed: 33.709494ms
Sep  7 09:53:24.154: INFO: Pod "azuredisk-volume-tester-l7kxz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068584911s
Sep  7 09:53:26.200: INFO: Pod "azuredisk-volume-tester-l7kxz": Phase="Pending", Reason="", readiness=false. Elapsed: 4.114330178s
Sep  7 09:53:28.235: INFO: Pod "azuredisk-volume-tester-l7kxz": Phase="Pending", Reason="", readiness=false. Elapsed: 6.149992104s
Sep  7 09:53:30.272: INFO: Pod "azuredisk-volume-tester-l7kxz": Phase="Pending", Reason="", readiness=false. Elapsed: 8.186385242s
Sep  7 09:53:32.307: INFO: Pod "azuredisk-volume-tester-l7kxz": Phase="Pending", Reason="", readiness=false. Elapsed: 10.221730873s
... skipping 9 lines ...
Sep  7 09:53:52.670: INFO: Pod "azuredisk-volume-tester-l7kxz": Phase="Pending", Reason="", readiness=false. Elapsed: 30.585087552s
Sep  7 09:53:54.706: INFO: Pod "azuredisk-volume-tester-l7kxz": Phase="Pending", Reason="", readiness=false. Elapsed: 32.621144933s
Sep  7 09:53:56.743: INFO: Pod "azuredisk-volume-tester-l7kxz": Phase="Pending", Reason="", readiness=false. Elapsed: 34.657488063s
Sep  7 09:53:58.779: INFO: Pod "azuredisk-volume-tester-l7kxz": Phase="Pending", Reason="", readiness=false. Elapsed: 36.693312902s
Sep  7 09:54:00.814: INFO: Pod "azuredisk-volume-tester-l7kxz": Phase="Succeeded", Reason="", readiness=false. Elapsed: 38.728777893s
STEP: Saw pod success
Sep  7 09:54:00.814: INFO: Pod "azuredisk-volume-tester-l7kxz" satisfied condition "Succeeded or Failed"
Sep  7 09:54:00.814: INFO: deleting Pod "azuredisk-9241"/"azuredisk-volume-tester-l7kxz"
Sep  7 09:54:00.860: INFO: Pod azuredisk-volume-tester-l7kxz has the following logs: hello world
hello world
hello world

STEP: Deleting pod azuredisk-volume-tester-l7kxz in namespace azuredisk-9241
... skipping 67 lines ...
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
Sep  7 09:54:42.550: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-sngh8" in namespace "azuredisk-9336" to be "Succeeded or Failed"
Sep  7 09:54:42.585: INFO: Pod "azuredisk-volume-tester-sngh8": Phase="Pending", Reason="", readiness=false. Elapsed: 35.568046ms
Sep  7 09:54:44.619: INFO: Pod "azuredisk-volume-tester-sngh8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069209845s
Sep  7 09:54:46.653: INFO: Pod "azuredisk-volume-tester-sngh8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.103043051s
Sep  7 09:54:48.687: INFO: Pod "azuredisk-volume-tester-sngh8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.137525489s
Sep  7 09:54:50.721: INFO: Pod "azuredisk-volume-tester-sngh8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.171921085s
Sep  7 09:54:52.756: INFO: Pod "azuredisk-volume-tester-sngh8": Phase="Pending", Reason="", readiness=false. Elapsed: 10.206344765s
... skipping 6 lines ...
Sep  7 09:55:06.998: INFO: Pod "azuredisk-volume-tester-sngh8": Phase="Pending", Reason="", readiness=false. Elapsed: 24.44839462s
Sep  7 09:55:09.034: INFO: Pod "azuredisk-volume-tester-sngh8": Phase="Pending", Reason="", readiness=false. Elapsed: 26.48483751s
Sep  7 09:55:11.070: INFO: Pod "azuredisk-volume-tester-sngh8": Phase="Pending", Reason="", readiness=false. Elapsed: 28.520925241s
Sep  7 09:55:13.107: INFO: Pod "azuredisk-volume-tester-sngh8": Phase="Running", Reason="", readiness=false. Elapsed: 30.557287976s
Sep  7 09:55:15.142: INFO: Pod "azuredisk-volume-tester-sngh8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 32.592692458s
STEP: Saw pod success
Sep  7 09:55:15.142: INFO: Pod "azuredisk-volume-tester-sngh8" satisfied condition "Succeeded or Failed"
Sep  7 09:55:15.142: INFO: deleting Pod "azuredisk-9336"/"azuredisk-volume-tester-sngh8"
Sep  7 09:55:15.178: INFO: Pod azuredisk-volume-tester-sngh8 has the following logs: 100+0 records in
100+0 records out
104857600 bytes (100.0MB) copied, 0.064089 seconds, 1.5GB/s
hello world
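The `100+0 records in ... 104857600 bytes` lines above are `dd` output from inside the tester pod, confirming the mounted disk accepts a 100 MiB write. A hedged local re-creation — writing to a temp file rather than the real PVC mount, whose path here is only illustrative:

```shell
# Write 100 MiB of zeros, as the tester pod does against its mounted volume;
# /tmp stands in for the real mount path (something like /mnt/test-1).
out=/tmp/azuredisk-dd-sketch.bin
dd if=/dev/zero of="$out" bs=1M count=100 2>&1   # dd reports to stderr; merge it

# Confirm the byte count matches dd's report: 100 * 1048576 = 104857600 bytes.
wc -c < "$out"
rm -f "$out"
```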

... skipping 116 lines ...
STEP: creating a PVC
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
Sep  7 09:55:57.906: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-jjpzf" in namespace "azuredisk-8591" to be "Succeeded or Failed"
Sep  7 09:55:57.943: INFO: Pod "azuredisk-volume-tester-jjpzf": Phase="Pending", Reason="", readiness=false. Elapsed: 36.647516ms
Sep  7 09:55:59.976: INFO: Pod "azuredisk-volume-tester-jjpzf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069962509s
Sep  7 09:56:02.012: INFO: Pod "azuredisk-volume-tester-jjpzf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.106423848s
Sep  7 09:56:04.049: INFO: Pod "azuredisk-volume-tester-jjpzf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.142793337s
Sep  7 09:56:06.085: INFO: Pod "azuredisk-volume-tester-jjpzf": Phase="Pending", Reason="", readiness=false. Elapsed: 8.178977307s
Sep  7 09:56:08.121: INFO: Pod "azuredisk-volume-tester-jjpzf": Phase="Pending", Reason="", readiness=false. Elapsed: 10.21506176s
... skipping 6 lines ...
Sep  7 09:56:22.372: INFO: Pod "azuredisk-volume-tester-jjpzf": Phase="Pending", Reason="", readiness=false. Elapsed: 24.465614908s
Sep  7 09:56:24.408: INFO: Pod "azuredisk-volume-tester-jjpzf": Phase="Pending", Reason="", readiness=false. Elapsed: 26.501731755s
Sep  7 09:56:26.443: INFO: Pod "azuredisk-volume-tester-jjpzf": Phase="Pending", Reason="", readiness=false. Elapsed: 28.537410131s
Sep  7 09:56:28.480: INFO: Pod "azuredisk-volume-tester-jjpzf": Phase="Pending", Reason="", readiness=false. Elapsed: 30.573615372s
Sep  7 09:56:30.515: INFO: Pod "azuredisk-volume-tester-jjpzf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 32.609080496s
STEP: Saw pod success
Sep  7 09:56:30.515: INFO: Pod "azuredisk-volume-tester-jjpzf" satisfied condition "Succeeded or Failed"
Sep  7 09:56:30.515: INFO: deleting Pod "azuredisk-8591"/"azuredisk-volume-tester-jjpzf"
Sep  7 09:56:30.551: INFO: Pod azuredisk-volume-tester-jjpzf has the following logs: hello world

STEP: Deleting pod azuredisk-volume-tester-jjpzf in namespace azuredisk-8591
STEP: validating provisioned PV
STEP: checking the PV
... skipping 425 lines ...

    test case is only available for CSI drivers

    /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/suite_test.go:304
------------------------------
Pre-Provisioned [single-az] 
  should fail when maxShares is invalid [disk.csi.azure.com][windows]
  /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/pre_provisioning_test.go:163
STEP: Creating a kubernetes client
Sep  7 10:00:46.057: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/kubeconfig
STEP: Building a namespace api object, basename azuredisk
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
... skipping 3 lines ...

S [SKIPPING] [0.317 seconds]
Pre-Provisioned
/home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/pre_provisioning_test.go:37
  [single-az]
  /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/pre_provisioning_test.go:69
    should fail when maxShares is invalid [disk.csi.azure.com][windows] [It]
    /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/pre_provisioning_test.go:163

    test case is only available for CSI drivers

    /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/suite_test.go:304
------------------------------
... skipping 242 lines ...
I0907 09:36:17.830275       1 tlsconfig.go:200] "Loaded serving cert" certName="Generated self signed cert" certDetail="\"localhost@1662543376\" [serving] validServingFor=[127.0.0.1,127.0.0.1,localhost] issuer=\"localhost-ca@1662543376\" (2022-09-07 08:36:15 +0000 UTC to 2023-09-07 08:36:15 +0000 UTC (now=2022-09-07 09:36:17.83024584 +0000 UTC))"
I0907 09:36:17.830530       1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1662543377\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1662543376\" (2022-09-07 08:36:16 +0000 UTC to 2023-09-07 08:36:16 +0000 UTC (now=2022-09-07 09:36:17.830505033 +0000 UTC))"
I0907 09:36:17.830560       1 secure_serving.go:210] Serving securely on 127.0.0.1:10257
I0907 09:36:17.830871       1 leaderelection.go:248] attempting to acquire leader lease kube-system/kube-controller-manager...
I0907 09:36:17.831616       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0907 09:36:17.832086       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
E0907 09:36:22.831644       1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: Get "https://10.0.0.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=5s": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
I0907 09:36:22.831734       1 leaderelection.go:253] failed to acquire lease kube-system/kube-controller-manager
I0907 09:36:27.173882       1 leaderelection.go:258] successfully acquired lease kube-system/kube-controller-manager
I0907 09:36:27.174281       1 event.go:294] "Event occurred" object="kube-system/kube-controller-manager" fieldPath="" kind="Lease" apiVersion="coordination.k8s.io/v1" type="Normal" reason="LeaderElection" message="capz-4ay9k6-control-plane-kk9g2_c5827703-7a50-4ad0-9bc2-e5d1ac8de76e became leader"
W0907 09:36:27.242965       1 plugins.go:131] WARNING: azure built-in cloud provider is now deprecated. The Azure provider is deprecated and will be removed in a future release. Please use https://github.com/kubernetes-sigs/cloud-provider-azure
I0907 09:36:27.243580       1 azure_auth.go:232] Using AzurePublicCloud environment
I0907 09:36:27.243627       1 azure_auth.go:117] azure: using client_id+client_secret to retrieve access token
I0907 09:36:27.243710       1 azure_interfaceclient.go:63] Azure InterfacesClient (read ops) using rate limit config: QPS=1, bucket=5
... skipping 29 lines ...
I0907 09:36:27.245663       1 reflector.go:257] Listing and watching *v1.ServiceAccount from vendor/k8s.io/client-go/informers/factory.go:134
I0907 09:36:27.245700       1 reflector.go:221] Starting reflector *v1.Secret (12h44m59.679485398s) from vendor/k8s.io/client-go/informers/factory.go:134
I0907 09:36:27.245784       1 reflector.go:257] Listing and watching *v1.Secret from vendor/k8s.io/client-go/informers/factory.go:134
I0907 09:36:27.245533       1 shared_informer.go:255] Waiting for caches to sync for tokens
I0907 09:36:27.245517       1 reflector.go:221] Starting reflector *v1.Node (12h44m59.679485398s) from vendor/k8s.io/client-go/informers/factory.go:134
I0907 09:36:27.246235       1 reflector.go:257] Listing and watching *v1.Node from vendor/k8s.io/client-go/informers/factory.go:134
W0907 09:36:27.267320       1 azure_config.go:53] Failed to get cloud-config from secret: failed to get secret azure-cloud-provider: secrets "azure-cloud-provider" is forbidden: User "system:serviceaccount:kube-system:azure-cloud-provider" cannot get resource "secrets" in API group "" in the namespace "kube-system", skip initializing from secret
I0907 09:36:27.267344       1 controllermanager.go:573] Starting "root-ca-cert-publisher"
I0907 09:36:27.273755       1 controllermanager.go:602] Started "root-ca-cert-publisher"
I0907 09:36:27.274003       1 controllermanager.go:573] Starting "cronjob"
I0907 09:36:27.273971       1 publisher.go:107] Starting root CA certificate configmap publisher
I0907 09:36:27.274305       1 shared_informer.go:255] Waiting for caches to sync for crt configmap
I0907 09:36:27.282435       1 controllermanager.go:602] Started "cronjob"
... skipping 77 lines ...
I0907 09:36:28.349100       1 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/azure-disk"
I0907 09:36:28.349114       1 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/vsphere-volume"
I0907 09:36:28.349134       1 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
I0907 09:36:28.349196       1 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/rbd"
I0907 09:36:28.349229       1 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/azure-file"
I0907 09:36:28.349275       1 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
I0907 09:36:28.349324       1 csi_plugin.go:257] Cast from VolumeHost to KubeletVolumeHost failed. Skipping CSINode initialization, not running on kubelet
I0907 09:36:28.349365       1 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/csi"
I0907 09:36:28.349454       1 controllermanager.go:602] Started "persistentvolume-binder"
I0907 09:36:28.349500       1 controllermanager.go:573] Starting "pv-protection"
I0907 09:36:28.349600       1 pv_controller_base.go:318] Starting persistent volume controller
I0907 09:36:28.349610       1 shared_informer.go:255] Waiting for caches to sync for persistent volume
I0907 09:36:28.502467       1 controllermanager.go:602] Started "pv-protection"
... skipping 101 lines ...
I0907 09:36:30.499612       1 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
I0907 09:36:30.499638       1 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/rbd"
I0907 09:36:30.499659       1 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/aws-ebs"
I0907 09:36:30.499693       1 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/gce-pd"
I0907 09:36:30.499728       1 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/fc"
I0907 09:36:30.499746       1 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
I0907 09:36:30.499768       1 csi_plugin.go:257] Cast from VolumeHost to KubeletVolumeHost failed. Skipping CSINode initialization, not running on kubelet
I0907 09:36:30.499782       1 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/csi"
I0907 09:36:30.499869       1 controllermanager.go:602] Started "attachdetach"
I0907 09:36:30.499913       1 controllermanager.go:573] Starting "pvc-protection"
I0907 09:36:30.500073       1 attach_detach_controller.go:328] Starting attach detach controller
I0907 09:36:30.500088       1 shared_informer.go:255] Waiting for caches to sync for attach detach
I0907 09:36:30.648620       1 controllermanager.go:602] Started "pvc-protection"
... skipping 5 lines ...
I0907 09:36:30.798734       1 endpointslice_controller.go:261] Starting endpoint slice controller
I0907 09:36:30.798750       1 shared_informer.go:255] Waiting for caches to sync for endpoint_slice
I0907 09:36:30.949132       1 controllermanager.go:602] Started "endpointslicemirroring"
I0907 09:36:30.949165       1 controllermanager.go:573] Starting "replicationcontroller"
I0907 09:36:30.949301       1 endpointslicemirroring_controller.go:216] Starting EndpointSliceMirroring controller
I0907 09:36:30.949313       1 shared_informer.go:255] Waiting for caches to sync for endpoint_slice_mirroring
I0907 09:36:31.013776       1 topologycache.go:183] Ignoring node capz-4ay9k6-control-plane-kk9g2 because it is not ready: [{MemoryPressure False 2022-09-07 09:36:03 +0000 UTC 2022-09-07 09:36:03 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2022-09-07 09:36:03 +0000 UTC 2022-09-07 09:36:03 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2022-09-07 09:36:03 +0000 UTC 2022-09-07 09:36:03 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready False 2022-09-07 09:36:03 +0000 UTC 2022-09-07 09:36:03 +0000 UTC KubeletNotReady [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful, container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized, CSINode is not yet initialized]}]
I0907 09:36:31.014263       1 topologycache.go:215] Insufficient node info for topology hints (0 zones, %!s(int64=0) CPU, true)
I0907 09:36:31.014194       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-4ay9k6-control-plane-kk9g2"
W0907 09:36:31.014373       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="capz-4ay9k6-control-plane-kk9g2" does not exist
I0907 09:36:31.033420       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-4ay9k6-control-plane-kk9g2"
I0907 09:36:31.098060       1 controllermanager.go:602] Started "replicationcontroller"
I0907 09:36:31.098090       1 controllermanager.go:573] Starting "csrsigning"
I0907 09:36:31.098315       1 replica_set.go:205] Starting replicationcontroller controller
I0907 09:36:31.098332       1 shared_informer.go:255] Waiting for caches to sync for ReplicationController
I0907 09:36:31.148141       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="csr-controller::/etc/kubernetes/pki/ca.crt::/etc/kubernetes/pki/ca.key"
... skipping 334 lines ...
I0907 09:36:32.459875       1 replica_set.go:577] "Too few replicas" replicaSet="kube-system/coredns-84994b8c4" need=2 creating=2
I0907 09:36:32.460664       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-84994b8c4 to 2"
I0907 09:36:32.472679       1 deployment_util.go:775] Deployment "coredns" timed out (false) [last progress check: 2022-09-07 09:36:32.460297013 +0000 UTC m=+16.833077082 - now: 2022-09-07 09:36:32.472666795 +0000 UTC m=+16.845446764]
I0907 09:36:32.473210       1 daemon_controller.go:228] Adding daemon set kube-proxy
I0907 09:36:32.474581       1 deployment_controller.go:183] "Updating deployment" deployment="kube-system/coredns"
I0907 09:36:32.480201       1 deployment_controller.go:585] "Finished syncing deployment" deployment="kube-system/coredns" duration="37.166244ms"
I0907 09:36:32.480232       1 deployment_controller.go:497] "Error syncing deployment" deployment="kube-system/coredns" err="Operation cannot be fulfilled on deployments.apps \"coredns\": the object has been modified; please apply your changes to the latest version and try again"
I0907 09:36:32.480260       1 deployment_controller.go:583] "Started syncing deployment" deployment="kube-system/coredns" startTime="2022-09-07 09:36:32.480248361 +0000 UTC m=+16.853028330"
I0907 09:36:32.480841       1 deployment_util.go:775] Deployment "coredns" timed out (false) [last progress check: 2022-09-07 09:36:32 +0000 UTC - now: 2022-09-07 09:36:32.480832451 +0000 UTC m=+16.853612420]
I0907 09:36:32.488311       1 deployment_controller.go:585] "Finished syncing deployment" deployment="kube-system/coredns" duration="8.049158ms"
I0907 09:36:32.488346       1 deployment_controller.go:583] "Started syncing deployment" deployment="kube-system/coredns" startTime="2022-09-07 09:36:32.488331318 +0000 UTC m=+16.861111287"
I0907 09:36:32.488889       1 deployment_util.go:775] Deployment "coredns" timed out (false) [last progress check: 2022-09-07 09:36:32 +0000 UTC - now: 2022-09-07 09:36:32.488880509 +0000 UTC m=+16.861660578]
I0907 09:36:32.489211       1 deployment_controller.go:183] "Updating deployment" deployment="kube-system/coredns"
I0907 09:36:32.494937       1 deployment_controller.go:585] "Finished syncing deployment" deployment="kube-system/coredns" duration="6.594284ms"
I0907 09:36:32.494961       1 deployment_controller.go:497] "Error syncing deployment" deployment="kube-system/coredns" err="Operation cannot be fulfilled on deployments.apps \"coredns\": the object has been modified; please apply your changes to the latest version and try again"
I0907 09:36:32.494986       1 deployment_controller.go:583] "Started syncing deployment" deployment="kube-system/coredns" startTime="2022-09-07 09:36:32.494974001 +0000 UTC m=+16.867754070"
I0907 09:36:32.495550       1 deployment_util.go:775] Deployment "coredns" timed out (false) [last progress check: 2022-09-07 09:36:32 +0000 UTC - now: 2022-09-07 09:36:32.495542891 +0000 UTC m=+16.868322860]
I0907 09:36:32.495583       1 progress.go:195] Queueing up deployment "coredns" for a progress check after 599s
I0907 09:36:32.495599       1 deployment_controller.go:585] "Finished syncing deployment" deployment="kube-system/coredns" duration="617.589µs"
I0907 09:36:32.500267       1 deployment_controller.go:583] "Started syncing deployment" deployment="kube-system/coredns" startTime="2022-09-07 09:36:32.500249908 +0000 UTC m=+16.873029877"
I0907 09:36:32.500869       1 deployment_util.go:775] Deployment "coredns" timed out (false) [last progress check: 2022-09-07 09:36:32 +0000 UTC - now: 2022-09-07 09:36:32.500863497 +0000 UTC m=+16.873643566]
... skipping 338 lines ...
I0907 09:36:57.171284       1 event.go:294] "Event occurred" object="kube-system/metrics-server-76f7667fbf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-76f7667fbf-hd56d"
I0907 09:36:57.204666       1 replica_set.go:667] Finished syncing ReplicaSet "kube-system/metrics-server-76f7667fbf" (120.299384ms)
I0907 09:36:57.204955       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/metrics-server-76f7667fbf", timestamp:time.Time{wall:0xc0be376e450896eb, ext:41457229072, loc:(*time.Location)(0x6f10040)}}
I0907 09:36:57.204800       1 deployment_controller.go:288] "ReplicaSet updated" replicaSet="kube-system/metrics-server-76f7667fbf"
I0907 09:36:57.205421       1 replica_set_utils.go:59] Updating status for : kube-system/metrics-server-76f7667fbf, replicas 0->1 (need 1), fullyLabeledReplicas 0->1, readyReplicas 0->0, availableReplicas 0->0, sequence No: 1->1
I0907 09:36:57.210837       1 deployment_controller.go:585] "Finished syncing deployment" deployment="kube-system/metrics-server" duration="136.698182ms"
I0907 09:36:57.210873       1 deployment_controller.go:497] "Error syncing deployment" deployment="kube-system/metrics-server" err="Operation cannot be fulfilled on deployments.apps \"metrics-server\": the object has been modified; please apply your changes to the latest version and try again"
I0907 09:36:57.210922       1 deployment_controller.go:583] "Started syncing deployment" deployment="kube-system/metrics-server" startTime="2022-09-07 09:36:57.210907486 +0000 UTC m=+41.583687555"
I0907 09:36:57.211668       1 deployment_util.go:775] Deployment "metrics-server" timed out (false) [last progress check: 2022-09-07 09:36:57 +0000 UTC - now: 2022-09-07 09:36:57.211661086 +0000 UTC m=+41.584441055]
I0907 09:36:57.211829       1 pvc_protection_controller.go:380] "Enqueuing PVCs for Pod" pod="kube-system/metrics-server-76f7667fbf-hd56d" podUID=76b9cca3-7985-492b-b6d2-06f7a81f98b5
I0907 09:36:57.211897       1 replica_set.go:457] Pod metrics-server-76f7667fbf-hd56d updated, objectMeta {Name:metrics-server-76f7667fbf-hd56d GenerateName:metrics-server-76f7667fbf- Namespace:kube-system SelfLink: UID:76b9cca3-7985-492b-b6d2-06f7a81f98b5 ResourceVersion:427 Generation:0 CreationTimestamp:2022-09-07 09:36:57 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[k8s-app:metrics-server pod-template-hash:76f7667fbf] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:metrics-server-76f7667fbf UID:b0702a49-40ad-450c-918e-5f8491c1cd82 Controller:0xc00221958e BlockOwnerDeletion:0xc00221958f}] Finalizers:[] ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-09-07 09:36:57 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b0702a49-40ad-450c-918e-5f8491c1cd82\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"metrics-server\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":4443,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}}},"f:readinessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{".":{},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:securityContext":{".":{},"f:readOnlyRootFilesystem":{},"f:runAsNonRoot":{},"f:runAsUser":{}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/tmp\"}":{".":{},"f:mountPath":{},"f:name":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":
{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{},"f:volumes":{".":{},"k:{\"name\":\"tmp-dir\"}":{".":{},"f:emptyDir":{},"f:name":{}}}}} Subresource:}]} -> {Name:metrics-server-76f7667fbf-hd56d GenerateName:metrics-server-76f7667fbf- Namespace:kube-system SelfLink: UID:76b9cca3-7985-492b-b6d2-06f7a81f98b5 ResourceVersion:435 Generation:0 CreationTimestamp:2022-09-07 09:36:57 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[k8s-app:metrics-server pod-template-hash:76f7667fbf] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:metrics-server-76f7667fbf UID:b0702a49-40ad-450c-918e-5f8491c1cd82 Controller:0xc00225a44e BlockOwnerDeletion:0xc00225a44f}] Finalizers:[] ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-09-07 09:36:57 +0000 UTC FieldsType:FieldsV1 
FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b0702a49-40ad-450c-918e-5f8491c1cd82\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"metrics-server\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":4443,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}}},"f:readinessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{".":{},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:securityContext":{".":{},"f:readOnlyRootFilesystem":{},"f:runAsNonRoot":{},"f:runAsUser":{}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/tmp\"}":{".":{},"f:mountPath":{},"f:name":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{},"f:volumes":{".":{},"k:{\"name\":\"tmp-dir\"}":{".":{},"f:emptyDir":{},"f:name":{}}}}} Subresource:} {Manager:kube-scheduler Operation:Update APIVersion:v1 Time:2022-09-07 09:36:57 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{".":{},"k:{\"type\":\"PodScheduled\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} Subresource:status}]}.
I0907 09:36:57.211759       1 disruption.go:494] updatePod called on pod "metrics-server-76f7667fbf-hd56d"
I0907 09:36:57.212203       1 disruption.go:570] No PodDisruptionBudgets found for pod metrics-server-76f7667fbf-hd56d, PodDisruptionBudget controller will avoid syncing.
... skipping 43 lines ...
I0907 09:36:58.671822       1 disruption.go:448] add DB "calico-kube-controllers"
I0907 09:36:58.671844       1 deployment_controller.go:288] "ReplicaSet updated" replicaSet="kube-system/calico-kube-controllers-755ff8d7b5"
I0907 09:36:58.671896       1 replica_set.go:667] Finished syncing ReplicaSet "kube-system/calico-kube-controllers-755ff8d7b5" (36.163458ms)
I0907 09:36:58.672489       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-kube-controllers-755ff8d7b5", timestamp:time.Time{wall:0xc0be376ea5e57be4, ext:43008576721, loc:(*time.Location)(0x6f10040)}}
I0907 09:36:58.672696       1 replica_set_utils.go:59] Updating status for : kube-system/calico-kube-controllers-755ff8d7b5, replicas 0->1 (need 1), fullyLabeledReplicas 0->1, readyReplicas 0->0, availableReplicas 0->0, sequence No: 1->1
I0907 09:36:58.671902       1 deployment_controller.go:585] "Finished syncing deployment" deployment="kube-system/calico-kube-controllers" duration="48.125744ms"
I0907 09:36:58.672927       1 deployment_controller.go:497] "Error syncing deployment" deployment="kube-system/calico-kube-controllers" err="Operation cannot be fulfilled on deployments.apps \"calico-kube-controllers\": the object has been modified; please apply your changes to the latest version and try again"
I0907 09:36:58.672988       1 deployment_controller.go:583] "Started syncing deployment" deployment="kube-system/calico-kube-controllers" startTime="2022-09-07 09:36:58.672946709 +0000 UTC m=+43.045726678"
I0907 09:36:58.673455       1 deployment_util.go:775] Deployment "calico-kube-controllers" timed out (false) [last progress check: 2022-09-07 09:36:58 +0000 UTC - now: 2022-09-07 09:36:58.673448309 +0000 UTC m=+43.046228378]
I0907 09:36:58.676944       1 deployment_controller.go:288] "ReplicaSet updated" replicaSet="kube-system/calico-kube-controllers-755ff8d7b5"
I0907 09:36:58.677226       1 replica_set.go:667] Finished syncing ReplicaSet "kube-system/calico-kube-controllers-755ff8d7b5" (4.740894ms)
I0907 09:36:58.677385       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-kube-controllers-755ff8d7b5", timestamp:time.Time{wall:0xc0be376ea5e57be4, ext:43008576721, loc:(*time.Location)(0x6f10040)}}
I0907 09:36:58.677582       1 replica_set.go:667] Finished syncing ReplicaSet "kube-system/calico-kube-controllers-755ff8d7b5" (201.1µs)
... skipping 107 lines ...
I0907 09:37:01.841878       1 reflector.go:257] Listing and watching *v1.PartialObjectMetadata from vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90
I0907 09:37:01.841811       1 reflector.go:221] Starting reflector *v1.PartialObjectMetadata (12h32m10.390923385s) from vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90
I0907 09:37:01.841917       1 reflector.go:257] Listing and watching *v1.PartialObjectMetadata from vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90
I0907 09:37:01.942458       1 shared_informer.go:285] caches populated
I0907 09:37:01.942510       1 shared_informer.go:262] Caches are synced for resource quota
I0907 09:37:01.942520       1 resource_quota_controller.go:462] synced quota controller
W0907 09:37:02.255781       1 garbagecollector.go:752] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
I0907 09:37:02.255945       1 garbagecollector.go:220] syncing garbage collector with updated resources from discovery (attempt 1): added: [crd.projectcalico.org/v1, Resource=bgpconfigurations crd.projectcalico.org/v1, Resource=bgppeers crd.projectcalico.org/v1, Resource=blockaffinities crd.projectcalico.org/v1, Resource=caliconodestatuses crd.projectcalico.org/v1, Resource=clusterinformations crd.projectcalico.org/v1, Resource=felixconfigurations crd.projectcalico.org/v1, Resource=globalnetworkpolicies crd.projectcalico.org/v1, Resource=globalnetworksets crd.projectcalico.org/v1, Resource=hostendpoints crd.projectcalico.org/v1, Resource=ipamblocks crd.projectcalico.org/v1, Resource=ipamconfigs crd.projectcalico.org/v1, Resource=ipamhandles crd.projectcalico.org/v1, Resource=ippools crd.projectcalico.org/v1, Resource=ipreservations crd.projectcalico.org/v1, Resource=kubecontrollersconfigurations crd.projectcalico.org/v1, Resource=networkpolicies crd.projectcalico.org/v1, Resource=networksets], removed: []
I0907 09:37:02.255961       1 garbagecollector.go:226] reset restmapper
E0907 09:37:02.259441       1 memcache.go:206] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0907 09:37:02.268509       1 memcache.go:104] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0907 09:37:02.269371       1 graph_builder.go:176] using a shared informer for resource "crd.projectcalico.org/v1, Resource=caliconodestatuses", kind "crd.projectcalico.org/v1, Kind=CalicoNodeStatus"
I0907 09:37:02.269420       1 graph_builder.go:176] using a shared informer for resource "crd.projectcalico.org/v1, Resource=blockaffinities", kind "crd.projectcalico.org/v1, Kind=BlockAffinity"
... skipping 191 lines ...
I0907 09:37:14.217329       1 replica_set.go:457] Pod metrics-server-76f7667fbf-hd56d updated, objectMeta {Name:metrics-server-76f7667fbf-hd56d GenerateName:metrics-server-76f7667fbf- Namespace:kube-system SelfLink: UID:76b9cca3-7985-492b-b6d2-06f7a81f98b5 ResourceVersion:558 Generation:0 CreationTimestamp:2022-09-07 09:36:57 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[k8s-app:metrics-server pod-template-hash:76f7667fbf] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:metrics-server-76f7667fbf UID:b0702a49-40ad-450c-918e-5f8491c1cd82 Controller:0xc001e3c30e BlockOwnerDeletion:0xc001e3c30f}] Finalizers:[] ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-09-07 09:36:57 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b0702a49-40ad-450c-918e-5f8491c1cd82\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"metrics-server\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":4443,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}}},"f:readinessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{".":{},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:securityContext":{".":{},"f:readOnlyRootFilesystem":{},"f:runAsNonRoot":{},"f:runAsUser":{}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/tmp\"}":{".":{},"f:mountPath":{},"f:name":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":
{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{},"f:volumes":{".":{},"k:{\"name\":\"tmp-dir\"}":{".":{},"f:emptyDir":{},"f:name":{}}}}} Subresource:} {Manager:kube-scheduler Operation:Update APIVersion:v1 Time:2022-09-07 09:36:57 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{".":{},"k:{\"type\":\"PodScheduled\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} Subresource:status}]} -> {Name:metrics-server-76f7667fbf-hd56d GenerateName:metrics-server-76f7667fbf- Namespace:kube-system SelfLink: UID:76b9cca3-7985-492b-b6d2-06f7a81f98b5 ResourceVersion:567 Generation:0 CreationTimestamp:2022-09-07 09:36:57 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[k8s-app:metrics-server pod-template-hash:76f7667fbf] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:metrics-server-76f7667fbf UID:b0702a49-40ad-450c-918e-5f8491c1cd82 Controller:0xc001e80dde BlockOwnerDeletion:0xc001e80ddf}] Finalizers:[] ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-09-07 09:36:57 +0000 UTC FieldsType:FieldsV1 
FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b0702a49-40ad-450c-918e-5f8491c1cd82\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"metrics-server\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":4443,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}}},"f:readinessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{".":{},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:securityContext":{".":{},"f:readOnlyRootFilesystem":{},"f:runAsNonRoot":{},"f:runAsUser":{}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/tmp\"}":{".":{},"f:mountPath":{},"f:name":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{},"f:volumes":{".":{},"k:{\"name\":\"tmp-dir\"}":{".":{},"f:emptyDir":{},"f:name":{}}}}} Subresource:} {Manager:kube-scheduler Operation:Update APIVersion:v1 Time:2022-09-07 09:36:57 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{".":{},"k:{\"type\":\"PodScheduled\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} Subresource:status} {Manager:kubelet Operation:Update APIVersion:v1 Time:2022-09-07 09:37:14 +0000 UTC FieldsType:FieldsV1 
FieldsV1:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} Subresource:status}]}.
I0907 09:37:14.217522       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/metrics-server-76f7667fbf", timestamp:time.Time{wall:0xc0be376e450896eb, ext:41457229072, loc:(*time.Location)(0x6f10040)}}
I0907 09:37:14.217627       1 replica_set.go:667] Finished syncing ReplicaSet "kube-system/metrics-server-76f7667fbf" (95.999µs)
I0907 09:37:15.914798       1 tokencleaner.go:166] Finished syncing secret "kube-system/bootstrap-token-4l0fcx" (292.096µs)
I0907 09:37:15.960988       1 tokencleaner.go:166] Finished syncing secret "kube-system/bootstrap-token-f8cv7e" (32.8µs)
I0907 09:37:16.615975       1 reflector.go:281] vendor/k8s.io/client-go/informers/factory.go:134: forcing resync
I0907 09:37:16.697897       1 node_lifecycle_controller.go:1084] ReadyCondition for Node capz-4ay9k6-control-plane-kk9g2 transitioned from &NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:2022-09-07 09:36:43 +0000 UTC,LastTransitionTime:2022-09-07 09:36:03 +0000 UTC,Reason:KubeletNotReady,Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized,} to &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-09-07 09:37:14 +0000 UTC,LastTransitionTime:2022-09-07 09:37:14 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,}
I0907 09:37:16.697976       1 node_lifecycle_controller.go:1092] Node capz-4ay9k6-control-plane-kk9g2 ReadyCondition updated. Updating timestamp.
I0907 09:37:16.698000       1 node_lifecycle_controller.go:938] Node capz-4ay9k6-control-plane-kk9g2 is healthy again, removing all taints
I0907 09:37:16.698019       1 node_lifecycle_controller.go:1236] Controller detected that some Nodes are Ready. Exiting master disruption mode.
I0907 09:37:16.754679       1 pv_controller_base.go:612] resyncing PV controller
I0907 09:37:18.503031       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-4ay9k6-control-plane-kk9g2"
I0907 09:37:18.530344       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-4ay9k6-control-plane-kk9g2"
... skipping 104 lines ...
I0907 09:37:32.382554       1 replica_set.go:667] Finished syncing ReplicaSet "kube-system/metrics-server-76f7667fbf" (1.066851ms)
I0907 09:37:32.392209       1 endpointslice_controller.go:319] Finished syncing service "kube-system/metrics-server" endpoint slices. (10.994897ms)
I0907 09:37:32.407131       1 endpointslicemirroring_controller.go:278] syncEndpoints("kube-system/metrics-server")
I0907 09:37:32.407159       1 endpointslicemirroring_controller.go:313] kube-system/metrics-server Service now has selector, cleaning up any mirrored EndpointSlices
I0907 09:37:32.407314       1 endpointslicemirroring_controller.go:275] Finished syncing EndpointSlices for "kube-system/metrics-server" Endpoints. (186.691µs)
I0907 09:37:32.407544       1 endpoints_controller.go:369] Finished syncing service "kube-system/metrics-server" endpoints. (28.392901ms)
W0907 09:37:33.011952       1 garbagecollector.go:752] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
I0907 09:37:33.392381       1 endpointslice_controller.go:319] Finished syncing service "kube-system/metrics-server" endpoint slices. (228.29µs)
I0907 09:37:34.498282       1 disruption.go:494] updatePod called on pod "coredns-84994b8c4-fsv6n"
I0907 09:37:34.498376       1 disruption.go:570] No PodDisruptionBudgets found for pod coredns-84994b8c4-fsv6n, PodDisruptionBudget controller will avoid syncing.
I0907 09:37:34.498384       1 disruption.go:497] No matching pdb for pod "coredns-84994b8c4-fsv6n"
I0907 09:37:34.498462       1 replica_set.go:457] Pod coredns-84994b8c4-fsv6n updated, objectMeta {Name:coredns-84994b8c4-fsv6n GenerateName:coredns-84994b8c4- Namespace:kube-system SelfLink: UID:4081f881-a2d2-4790-8552-0ad9c88cd625 ResourceVersion:615 Generation:0 CreationTimestamp:2022-09-07 09:36:32 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[k8s-app:kube-dns pod-template-hash:84994b8c4] Annotations:map[cni.projectcalico.org/containerID:2d700e66979ce2748ddaf7e9e81338ba06a7b00ae08021ff50c5c762df884912 cni.projectcalico.org/podIP:192.168.143.65/32 cni.projectcalico.org/podIPs:192.168.143.65/32] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:coredns-84994b8c4 UID:ba527b75-9441-4860-be07-58412d63eb89 Controller:0xc002218cb0 BlockOwnerDeletion:0xc002218cb1}] Finalizers:[] ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-09-07 09:36:32 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba527b75-9441-4860-be07-58412d63eb89\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":53,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}},"k:{\"containerPort\":53,\"protocol\":\"UDP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}},"k:{\"containerPort\":9153,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}}},"f:readinessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path
":{},"f:port":{},"f:scheme":{}},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:securityContext":{".":{},"f:allowPrivilegeEscalation":{},"f:capabilities":{".":{},"f:add":{},"f:drop":{}},"f:readOnlyRootFilesystem":{}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/etc/coredns\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{},"f:volumes":{".":{},"k:{\"name\":\"config-volume\"}":{".":{},"f:configMap":{".":{},"f:defaultMode":{},"f:items":{},"f:name":{}},"f:name":{}}}}} Subresource:} {Manager:kube-scheduler Operation:Update APIVersion:v1 Time:2022-09-07 09:36:32 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{".":{},"k:{\"type\":\"PodScheduled\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} Subresource:status} {Manager:Go-http-client Operation:Update APIVersion:v1 Time:2022-09-07 09:37:25 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} Subresource:status} {Manager:kubelet Operation:Update APIVersion:v1 Time:2022-09-07 09:37:26 +0000 UTC FieldsType:FieldsV1 
FieldsV1:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.143.65\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} Subresource:status}]} -> {Name:coredns-84994b8c4-fsv6n GenerateName:coredns-84994b8c4- Namespace:kube-system SelfLink: UID:4081f881-a2d2-4790-8552-0ad9c88cd625 ResourceVersion:656 Generation:0 CreationTimestamp:2022-09-07 09:36:32 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[k8s-app:kube-dns pod-template-hash:84994b8c4] Annotations:map[cni.projectcalico.org/containerID:2d700e66979ce2748ddaf7e9e81338ba06a7b00ae08021ff50c5c762df884912 cni.projectcalico.org/podIP:192.168.143.65/32 cni.projectcalico.org/podIPs:192.168.143.65/32] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:coredns-84994b8c4 UID:ba527b75-9441-4860-be07-58412d63eb89 Controller:0xc0026088d0 BlockOwnerDeletion:0xc0026088d1}] Finalizers:[] ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-09-07 09:36:32 +0000 UTC FieldsType:FieldsV1 
FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba527b75-9441-4860-be07-58412d63eb89\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":53,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}},"k:{\"containerPort\":53,\"protocol\":\"UDP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}},"k:{\"containerPort\":9153,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}}},"f:readinessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:securityContext":{".":{},"f:allowPrivilegeEscalation":{},"f:capabilities":{".":{},"f:add":{},"f:drop":{}},"f:readOnlyRootFilesystem":{}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/etc/coredns\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{},"f:volumes":{".":{},"k:{\"name\":\"config-volume\"}":{".":{},"f:configMap":{".":{},"f:defaultMode":{},"f:items":{},"f:name":{}},"f:name":{}}}}} Subresource:} 
{Manager:kube-scheduler Operation:Update APIVersion:v1 Time:2022-09-07 09:36:32 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{".":{},"k:{\"type\":\"PodScheduled\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} Subresource:status} {Manager:Go-http-client Operation:Update APIVersion:v1 Time:2022-09-07 09:37:25 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} Subresource:status} {Manager:kubelet Operation:Update APIVersion:v1 Time:2022-09-07 09:37:34 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.143.65\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} Subresource:status}]}.
I0907 09:37:34.498698       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/coredns-84994b8c4", timestamp:time.Time{wall:0xc0be37681b68a889, ext:16832623790, loc:(*time.Location)(0x6f10040)}}
... skipping 169 lines ...
I0907 09:38:23.161608       1 controller.go:753] Finished updateLoadBalancerHosts
I0907 09:38:23.161708       1 controller.go:694] Successfully updated 0 out of 0 load balancers to direct traffic to the updated set of nodes
I0907 09:38:23.161717       1 controller.go:686] It took 0.000151002 seconds to finish syncNodes
I0907 09:38:23.161877       1 taint_manager.go:466] "Noticed node update" node={nodeName:capz-4ay9k6-md-0-b8ndn}
I0907 09:38:23.161894       1 taint_manager.go:471] "Updating known taints on node" node="capz-4ay9k6-md-0-b8ndn" taints=[]
I0907 09:38:23.162038       1 topologycache.go:179] Ignoring node capz-4ay9k6-control-plane-kk9g2 because it has an excluded label
I0907 09:38:23.162050       1 topologycache.go:183] Ignoring node capz-4ay9k6-md-0-b8ndn because it is not ready: [{MemoryPressure False 2022-09-07 09:38:23 +0000 UTC 2022-09-07 09:38:23 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2022-09-07 09:38:23 +0000 UTC 2022-09-07 09:38:23 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2022-09-07 09:38:23 +0000 UTC 2022-09-07 09:38:23 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready False 2022-09-07 09:38:23 +0000 UTC 2022-09-07 09:38:23 +0000 UTC KubeletNotReady [container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized, failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "capz-4ay9k6-md-0-b8ndn" not found]}]
I0907 09:38:23.162200       1 topologycache.go:215] Insufficient node info for topology hints (0 zones, %!s(int64=0) CPU, true)
I0907 09:38:23.162217       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-4ay9k6-md-0-b8ndn"
W0907 09:38:23.162230       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="capz-4ay9k6-md-0-b8ndn" does not exist
I0907 09:38:23.163468       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0be3769dcae0791, ext:23853947318, loc:(*time.Location)(0x6f10040)}}
I0907 09:38:23.163646       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:1, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0be3783c9c0f8d1, ext:127536421622, loc:(*time.Location)(0x6f10040)}}
I0907 09:38:23.163763       1 daemon_controller.go:974] Nodes needing daemon pods for daemon set kube-proxy: [capz-4ay9k6-md-0-b8ndn], creating 1
I0907 09:38:23.165394       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc0be37766487b969, ext:73985654570, loc:(*time.Location)(0x6f10040)}}
I0907 09:38:23.165467       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:1, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc0be3783c9dcc679, ext:127538243642, loc:(*time.Location)(0x6f10040)}}
I0907 09:38:23.165480       1 daemon_controller.go:974] Nodes needing daemon pods for daemon set calico-node: [capz-4ay9k6-md-0-b8ndn], creating 1
... skipping 80 lines ...
I0907 09:38:26.220842       1 controller.go:728] Running updateLoadBalancerHosts(len(services)==0, workers==1)
I0907 09:38:26.220852       1 controller.go:753] Finished updateLoadBalancerHosts
I0907 09:38:26.220873       1 controller.go:694] Successfully updated 0 out of 0 load balancers to direct traffic to the updated set of nodes
I0907 09:38:26.220880       1 controller.go:686] It took 5.3e-05 seconds to finish syncNodes
I0907 09:38:26.221502       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0be3783cc3498df, ext:127577553568, loc:(*time.Location)(0x6f10040)}}
I0907 09:38:26.221622       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:1, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0be37848d359e11, ext:130594397750, loc:(*time.Location)(0x6f10040)}}
I0907 09:38:26.222395       1 topologycache.go:183] Ignoring node capz-4ay9k6-md-0-b8ndn because it is not ready: [{MemoryPressure False 2022-09-07 09:38:23 +0000 UTC 2022-09-07 09:38:23 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2022-09-07 09:38:23 +0000 UTC 2022-09-07 09:38:23 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2022-09-07 09:38:23 +0000 UTC 2022-09-07 09:38:23 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready False 2022-09-07 09:38:23 +0000 UTC 2022-09-07 09:38:23 +0000 UTC KubeletNotReady [container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized, failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "capz-4ay9k6-md-0-b8ndn" not found]}]
I0907 09:38:26.222418       1 daemon_controller.go:974] Nodes needing daemon pods for daemon set kube-proxy: [capz-4ay9k6-md-0-dcnxq], creating 1
I0907 09:38:26.222446       1 topologycache.go:183] Ignoring node capz-4ay9k6-md-0-dcnxq because it is not ready: [{MemoryPressure False 2022-09-07 09:38:26 +0000 UTC 2022-09-07 09:38:26 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2022-09-07 09:38:26 +0000 UTC 2022-09-07 09:38:26 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2022-09-07 09:38:26 +0000 UTC 2022-09-07 09:38:26 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready False 2022-09-07 09:38:26 +0000 UTC 2022-09-07 09:38:26 +0000 UTC KubeletNotReady [container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized, failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "capz-4ay9k6-md-0-dcnxq" not found]}]
I0907 09:38:26.222465       1 topologycache.go:179] Ignoring node capz-4ay9k6-control-plane-kk9g2 because it has an excluded label
I0907 09:38:26.222473       1 topologycache.go:215] Insufficient node info for topology hints (0 zones, %!s(int64=0) CPU, true)
I0907 09:38:26.222487       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-4ay9k6-md-0-dcnxq"
W0907 09:38:26.222499       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="capz-4ay9k6-md-0-dcnxq" does not exist
I0907 09:38:26.224116       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc0be3783cd28340c, ext:127593518541, loc:(*time.Location)(0x6f10040)}}
I0907 09:38:26.224398       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:1, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc0be37848d5ff34b, ext:130597172080, loc:(*time.Location)(0x6f10040)}}
I0907 09:38:26.224425       1 daemon_controller.go:974] Nodes needing daemon pods for daemon set calico-node: [capz-4ay9k6-md-0-dcnxq], creating 1
I0907 09:38:26.243613       1 disruption.go:479] addPod called on pod "calico-node-mgrkm"
I0907 09:38:26.243800       1 disruption.go:570] No PodDisruptionBudgets found for pod calico-node-mgrkm, PodDisruptionBudget controller will avoid syncing.
I0907 09:38:26.243817       1 disruption.go:482] No matching pdb for pod "calico-node-mgrkm"
... skipping 414 lines ...
I0907 09:38:54.121538       1 controller.go:690] Syncing backends for all LB services.
I0907 09:38:54.121923       1 controller.go:728] Running updateLoadBalancerHosts(len(services)==0, workers==1)
I0907 09:38:54.121939       1 controller.go:753] Finished updateLoadBalancerHosts
I0907 09:38:54.121945       1 controller.go:694] Successfully updated 0 out of 0 load balancers to direct traffic to the updated set of nodes
I0907 09:38:54.121954       1 controller.go:686] It took 0.000416305 seconds to finish syncNodes
I0907 09:38:54.121755       1 topologycache.go:179] Ignoring node capz-4ay9k6-control-plane-kk9g2 because it has an excluded label
I0907 09:38:54.121985       1 topologycache.go:183] Ignoring node capz-4ay9k6-md-0-dcnxq because it is not ready: [{MemoryPressure False 2022-09-07 09:38:46 +0000 UTC 2022-09-07 09:38:26 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2022-09-07 09:38:46 +0000 UTC 2022-09-07 09:38:26 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2022-09-07 09:38:46 +0000 UTC 2022-09-07 09:38:26 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready False 2022-09-07 09:38:46 +0000 UTC 2022-09-07 09:38:26 +0000 UTC KubeletNotReady container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized}]
I0907 09:38:54.122069       1 topologycache.go:215] Insufficient node info for topology hints (1 zones, %!s(int64=2000) CPU, true)
I0907 09:38:54.140859       1 controller_utils.go:217] "Made sure that node has no taint" node="capz-4ay9k6-md-0-b8ndn" taint=[&Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoSchedule,TimeAdded:<nil>,}]
I0907 09:38:54.143436       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-4ay9k6-md-0-b8ndn"
I0907 09:38:56.448312       1 controller_utils.go:205] "Added taint to node" taint=[] node="capz-4ay9k6-md-0-dcnxq"
I0907 09:38:56.448521       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-4ay9k6-md-0-dcnxq"
I0907 09:38:56.448677       1 controller.go:690] Syncing backends for all LB services.
... skipping 2 lines ...
I0907 09:38:56.448893       1 controller.go:694] Successfully updated 0 out of 0 load balancers to direct traffic to the updated set of nodes
I0907 09:38:56.448949       1 controller.go:686] It took 0.000272704 seconds to finish syncNodes
I0907 09:38:56.449200       1 topologycache.go:179] Ignoring node capz-4ay9k6-control-plane-kk9g2 because it has an excluded label
I0907 09:38:56.449258       1 topologycache.go:215] Insufficient node info for topology hints (1 zones, %!s(int64=4000) CPU, true)
I0907 09:38:56.503479       1 controller_utils.go:217] "Made sure that node has no taint" node="capz-4ay9k6-md-0-dcnxq" taint=[&Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoSchedule,TimeAdded:<nil>,}]
I0907 09:38:56.504186       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-4ay9k6-md-0-dcnxq"
I0907 09:38:56.714094       1 node_lifecycle_controller.go:1084] ReadyCondition for Node capz-4ay9k6-md-0-b8ndn transitioned from &NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:2022-09-07 09:38:43 +0000 UTC,LastTransitionTime:2022-09-07 09:38:23 +0000 UTC,Reason:KubeletNotReady,Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized,} to &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-09-07 09:38:54 +0000 UTC,LastTransitionTime:2022-09-07 09:38:54 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,}
I0907 09:38:56.714173       1 node_lifecycle_controller.go:1092] Node capz-4ay9k6-md-0-b8ndn ReadyCondition updated. Updating timestamp.
I0907 09:38:56.721476       1 node_lifecycle_controller.go:938] Node capz-4ay9k6-md-0-b8ndn is healthy again, removing all taints
I0907 09:38:56.721930       1 node_lifecycle_controller.go:1084] ReadyCondition for Node capz-4ay9k6-md-0-dcnxq transitioned from &NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:2022-09-07 09:38:46 +0000 UTC,LastTransitionTime:2022-09-07 09:38:26 +0000 UTC,Reason:KubeletNotReady,Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized,} to &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-09-07 09:38:56 +0000 UTC,LastTransitionTime:2022-09-07 09:38:56 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,}
I0907 09:38:56.722341       1 node_lifecycle_controller.go:1092] Node capz-4ay9k6-md-0-dcnxq ReadyCondition updated. Updating timestamp.
I0907 09:38:56.723577       1 taint_manager.go:466] "Noticed node update" node={nodeName:capz-4ay9k6-md-0-b8ndn}
I0907 09:38:56.723607       1 taint_manager.go:471] "Updating known taints on node" node="capz-4ay9k6-md-0-b8ndn" taints=[]
I0907 09:38:56.723623       1 taint_manager.go:492] "All taints were removed from the node. Cancelling all evictions..." node="capz-4ay9k6-md-0-b8ndn"
I0907 09:38:56.724676       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-4ay9k6-md-0-b8ndn"
I0907 09:38:56.734617       1 taint_manager.go:466] "Noticed node update" node={nodeName:capz-4ay9k6-md-0-dcnxq}
... skipping 161 lines ...
I0907 09:39:01.284645       1 replica_set.go:577] "Too few replicas" replicaSet="kube-system/csi-azuredisk-controller-6dbf65647f" need=2 creating=2
I0907 09:39:01.285682       1 event.go:294] "Event occurred" object="kube-system/csi-azuredisk-controller" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set csi-azuredisk-controller-6dbf65647f to 2"
I0907 09:39:01.294849       1 taint_manager.go:431] "Noticed pod update" pod="kube-system/csi-azuredisk-controller-6dbf65647f-27pdv"
I0907 09:39:01.294886       1 disruption.go:479] addPod called on pod "csi-azuredisk-controller-6dbf65647f-27pdv"
I0907 09:39:01.294920       1 disruption.go:570] No PodDisruptionBudgets found for pod csi-azuredisk-controller-6dbf65647f-27pdv, PodDisruptionBudget controller will avoid syncing.
I0907 09:39:01.294925       1 disruption.go:482] No matching pdb for pod "csi-azuredisk-controller-6dbf65647f-27pdv"
I0907 09:39:01.294973       1 replica_set.go:394] Pod csi-azuredisk-controller-6dbf65647f-27pdv created: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"csi-azuredisk-controller-6dbf65647f-27pdv", GenerateName:"csi-azuredisk-controller-6dbf65647f-", Namespace:"kube-system", SelfLink:"", UID:"fdef60d8-cf72-4f0e-908d-e83565a5dd42", ResourceVersion:"960", Generation:0, CreationTimestamp:time.Date(2022, time.September, 7, 9, 39, 1, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"csi-azuredisk-controller", "pod-template-hash":"6dbf65647f"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"apps/v1", Kind:"ReplicaSet", Name:"csi-azuredisk-controller-6dbf65647f", UID:"b62865fe-a944-4946-9f9b-48210c7e90e2", Controller:(*bool)(0xc0015e2f27), BlockOwnerDeletion:(*bool)(0xc0015e2f28)}}, Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.September, 7, 9, 39, 1, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0025cbf80), Subresource:""}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"socket-dir", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(0xc0025cbf98), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), 
Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"azure-cred", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0025cbfb0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), 
Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"kube-api-access-ppddn", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(0xc0025143c0), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"csi-provisioner", Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-provisioner:v3.2.0", Command:[]string(nil), Args:[]string{"--feature-gates=Topology=true", "--csi-address=$(ADDRESS)", "--v=2", "--timeout=15s", "--leader-election", "--leader-election-namespace=kube-system", "--worker-threads=40", "--extra-create-metadata=true", "--strict-topology=true"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", 
ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:524288000, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-ppddn", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"csi-attacher", Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-attacher:v3.5.0", Command:[]string(nil), Args:[]string{"-v=2", "-csi-address=$(ADDRESS)", "-timeout=600s", "-leader-election", "--leader-election-namespace=kube-system", "-worker-threads=500"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:524288000, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500Mi", Format:"BinarySI"}}, 
Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-ppddn", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"csi-snapshotter", Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-snapshotter:v5.0.1", Command:[]string(nil), Args:[]string{"-csi-address=$(ADDRESS)", "-leader-election", "--leader-election-namespace=kube-system", "--v=2"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:104857600, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", 
Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-ppddn", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"csi-resizer", Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-resizer:v1.5.0", Command:[]string(nil), Args:[]string{"-csi-address=$(ADDRESS)", "-v=2", "-leader-election", "--leader-election-namespace=kube-system", "-handle-volume-inuse-error=false", "-feature-gates=RecoverVolumeExpansionFailure=true", "-timeout=240s"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:524288000, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, 
v1.VolumeMount{Name:"kube-api-access-ppddn", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"liveness-probe", Image:"mcr.microsoft.com/oss/kubernetes-csi/livenessprobe:v2.7.0", Command:[]string(nil), Args:[]string{"--csi-address=/csi/csi.sock", "--probe-timeout=3s", "--health-port=29602", "--v=2"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:104857600, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-ppddn", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), 
TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"azuredisk", Image:"mcr.microsoft.com/k8s/csi/azuredisk-csi:latest", Command:[]string(nil), Args:[]string{"--v=5", "--endpoint=$(CSI_ENDPOINT)", "--metrics-address=0.0.0.0:29604", "--user-agent-suffix=OSS-kubectl", "--disable-avset-nodes=false", "--allow-empty-cloud-config=false"}, WorkingDir:"", Ports:[]v1.ContainerPort{v1.ContainerPort{Name:"healthz", HostPort:29602, ContainerPort:29602, Protocol:"TCP", HostIP:""}, v1.ContainerPort{Name:"metrics", HostPort:29604, ContainerPort:29604, Protocol:"TCP", HostIP:""}}, EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"AZURE_CREDENTIAL_FILE", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc002514560)}, v1.EnvVar{Name:"CSI_ENDPOINT", Value:"unix:///csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:524288000, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"azure-cred", ReadOnly:false, MountPath:"/etc/kubernetes/", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-ppddn", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", 
MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(0xc001786e00), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0015e3590), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"csi-azuredisk-controller-sa", DeprecatedServiceAccount:"csi-azuredisk-controller-sa", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0007b08c0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node-role.kubernetes.io/master", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node-role.kubernetes.io/controlplane", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node-role.kubernetes.io/control-plane", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0015e3610)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0015e3640)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-cluster-critical", Priority:(*int32)(0xc0015e3648), 
DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0015e364c), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc000abf050), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil), OS:(*v1.PodOS)(nil), HostUsers:(*bool)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition(nil), Message:"", Reason:"", NominatedNodeName:"", HostIP:"", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:<nil>, InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus(nil), QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}.
I0907 09:39:01.295714       1 controller_utils.go:240] Lowered expectations &controller.ControlleeExpectations{add:1, del:0, key:"kube-system/csi-azuredisk-controller-6dbf65647f", timestamp:time.Time{wall:0xc0be378d50f6c871, ext:165657388694, loc:(*time.Location)(0x6f10040)}}
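The "Lowered expectations" lines above come from the ReplicaSet controller's expectation bookkeeping: before issuing pod creations it records how many it expects, and each observed "created pod" watch event lowers the count; a new sync only runs once expectations are fulfilled. A minimal self-contained sketch of that mechanism (names mirror the log fields `add`/`del`, but this is an illustration, not the real kube-controller-manager code):

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// ControlleeExpectations is a simplified sketch of the counter the controller
// logs as "Lowered expectations &controller.ControlleeExpectations{add:..., del:...}".
type ControlleeExpectations struct {
	add int64 // creations still expected to be observed
	del int64 // deletions still expected to be observed
}

// ExpectCreations records that the controller is about to create n pods.
func (e *ControlleeExpectations) ExpectCreations(n int64) { atomic.AddInt64(&e.add, n) }

// CreationObserved lowers the expectation count when a pod-created event arrives.
func (e *ControlleeExpectations) CreationObserved() { atomic.AddInt64(&e.add, -1) }

// Fulfilled reports whether another sync may run: everything expected was observed.
func (e *ControlleeExpectations) Fulfilled() bool {
	return atomic.LoadInt64(&e.add) <= 0 && atomic.LoadInt64(&e.del) <= 0
}

func main() {
	exp := &ControlleeExpectations{}
	exp.ExpectCreations(2)       // the csi-azuredisk-controller ReplicaSet needs 2 pods
	exp.CreationObserved()       // first pod-created event: add 2 -> 1 (as logged)
	fmt.Println(exp.Fulfilled()) // false: one creation still outstanding
	exp.CreationObserved()       // second pod-created event: add 1 -> 0
	fmt.Println(exp.Fulfilled()) // true: the next sync is allowed to proceed
}
```

This matches the two log entries: the first pod lowers the counter to `add:1`, the second to `add:0`, after which the "Controller expectations fulfilled" line appears.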
I0907 09:39:01.295763       1 pvc_protection_controller.go:380] "Enqueuing PVCs for Pod" pod="kube-system/csi-azuredisk-controller-6dbf65647f-27pdv" podUID=fdef60d8-cf72-4f0e-908d-e83565a5dd42
I0907 09:39:01.299332       1 controller_utils.go:581] Controller csi-azuredisk-controller-6dbf65647f created pod csi-azuredisk-controller-6dbf65647f-27pdv
I0907 09:39:01.299872       1 event.go:294] "Event occurred" object="kube-system/csi-azuredisk-controller-6dbf65647f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: csi-azuredisk-controller-6dbf65647f-27pdv"
I0907 09:39:01.309971       1 deployment_util.go:775] Deployment "csi-azuredisk-controller" timed out (false) [last progress check: 2022-09-07 09:39:01.285202233 +0000 UTC m=+165.657982302 - now: 2022-09-07 09:39:01.309961539 +0000 UTC m=+165.682741508]
I0907 09:39:01.310090       1 deployment_controller.go:183] "Updating deployment" deployment="kube-system/csi-azuredisk-controller"
I0907 09:39:01.319378       1 taint_manager.go:431] "Noticed pod update" pod="kube-system/csi-azuredisk-controller-6dbf65647f-nknjj"
I0907 09:39:01.319489       1 disruption.go:479] addPod called on pod "csi-azuredisk-controller-6dbf65647f-nknjj"
I0907 09:39:01.319589       1 disruption.go:570] No PodDisruptionBudgets found for pod csi-azuredisk-controller-6dbf65647f-nknjj, PodDisruptionBudget controller will avoid syncing.
I0907 09:39:01.319649       1 disruption.go:482] No matching pdb for pod "csi-azuredisk-controller-6dbf65647f-nknjj"
I0907 09:39:01.319739       1 replica_set.go:394] Pod csi-azuredisk-controller-6dbf65647f-nknjj created: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"csi-azuredisk-controller-6dbf65647f-nknjj", GenerateName:"csi-azuredisk-controller-6dbf65647f-", Namespace:"kube-system", SelfLink:"", UID:"4c327efe-8223-4500-961f-db6de0540924", ResourceVersion:"963", Generation:0, CreationTimestamp:time.Date(2022, time.September, 7, 9, 39, 1, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"csi-azuredisk-controller", "pod-template-hash":"6dbf65647f"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"apps/v1", Kind:"ReplicaSet", Name:"csi-azuredisk-controller-6dbf65647f", UID:"b62865fe-a944-4946-9f9b-48210c7e90e2", Controller:(*bool)(0xc000bb25e7), BlockOwnerDeletion:(*bool)(0xc000bb25e8)}}, Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.September, 7, 9, 39, 1, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002689560), Subresource:""}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"socket-dir", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(0xc002689578), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), 
Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"azure-cred", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc002689590), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), 
Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"kube-api-access-rqh6x", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(0xc001322d20), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"csi-provisioner", Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-provisioner:v3.2.0", Command:[]string(nil), Args:[]string{"--feature-gates=Topology=true", "--csi-address=$(ADDRESS)", "--v=2", "--timeout=15s", "--leader-election", "--leader-election-namespace=kube-system", "--worker-threads=40", "--extra-create-metadata=true", "--strict-topology=true"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", 
ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:524288000, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-rqh6x", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"csi-attacher", Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-attacher:v3.5.0", Command:[]string(nil), Args:[]string{"-v=2", "-csi-address=$(ADDRESS)", "-timeout=600s", "-leader-election", "--leader-election-namespace=kube-system", "-worker-threads=500"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:524288000, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500Mi", Format:"BinarySI"}}, 
Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-rqh6x", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"csi-snapshotter", Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-snapshotter:v5.0.1", Command:[]string(nil), Args:[]string{"-csi-address=$(ADDRESS)", "-leader-election", "--leader-election-namespace=kube-system", "--v=2"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:104857600, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", 
Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-rqh6x", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"csi-resizer", Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-resizer:v1.5.0", Command:[]string(nil), Args:[]string{"-csi-address=$(ADDRESS)", "-v=2", "-leader-election", "--leader-election-namespace=kube-system", "-handle-volume-inuse-error=false", "-feature-gates=RecoverVolumeExpansionFailure=true", "-timeout=240s"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:524288000, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, 
v1.VolumeMount{Name:"kube-api-access-rqh6x", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"liveness-probe", Image:"mcr.microsoft.com/oss/kubernetes-csi/livenessprobe:v2.7.0", Command:[]string(nil), Args:[]string{"--csi-address=/csi/csi.sock", "--probe-timeout=3s", "--health-port=29602", "--v=2"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:104857600, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-rqh6x", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), 
TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"azuredisk", Image:"mcr.microsoft.com/k8s/csi/azuredisk-csi:latest", Command:[]string(nil), Args:[]string{"--v=5", "--endpoint=$(CSI_ENDPOINT)", "--metrics-address=0.0.0.0:29604", "--user-agent-suffix=OSS-kubectl", "--disable-avset-nodes=false", "--allow-empty-cloud-config=false"}, WorkingDir:"", Ports:[]v1.ContainerPort{v1.ContainerPort{Name:"healthz", HostPort:29602, ContainerPort:29602, Protocol:"TCP", HostIP:""}, v1.ContainerPort{Name:"metrics", HostPort:29604, ContainerPort:29604, Protocol:"TCP", HostIP:""}}, EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"AZURE_CREDENTIAL_FILE", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc001322e40)}, v1.EnvVar{Name:"CSI_ENDPOINT", Value:"unix:///csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:524288000, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"azure-cred", ReadOnly:false, MountPath:"/etc/kubernetes/", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-rqh6x", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", 
MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(0xc000e0d000), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc000bb2e70), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"csi-azuredisk-controller-sa", DeprecatedServiceAccount:"csi-azuredisk-controller-sa", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000869b20), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node-role.kubernetes.io/master", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node-role.kubernetes.io/controlplane", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node-role.kubernetes.io/control-plane", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc000bb2f20)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc000bb2fa0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-cluster-critical", Priority:(*int32)(0xc000bb2fa8), 
DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc000bb2fac), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc0018573f0), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil), OS:(*v1.PodOS)(nil), HostUsers:(*bool)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition(nil), Message:"", Reason:"", NominatedNodeName:"", HostIP:"", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:<nil>, InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus(nil), QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}.
I0907 09:39:01.320332       1 controller_utils.go:240] Lowered expectations &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/csi-azuredisk-controller-6dbf65647f", timestamp:time.Time{wall:0xc0be378d50f6c871, ext:165657388694, loc:(*time.Location)(0x6f10040)}}
I0907 09:39:01.320407       1 pvc_protection_controller.go:380] "Enqueuing PVCs for Pod" pod="kube-system/csi-azuredisk-controller-6dbf65647f-nknjj" podUID=4c327efe-8223-4500-961f-db6de0540924
I0907 09:39:01.323303       1 controller_utils.go:581] Controller csi-azuredisk-controller-6dbf65647f created pod csi-azuredisk-controller-6dbf65647f-nknjj
I0907 09:39:01.323437       1 replica_set_utils.go:59] Updating status for : kube-system/csi-azuredisk-controller-6dbf65647f, replicas 0->0 (need 2), fullyLabeledReplicas 0->0, readyReplicas 0->0, availableReplicas 0->0, sequence No: 0->1
I0907 09:39:01.323838       1 event.go:294] "Event occurred" object="kube-system/csi-azuredisk-controller-6dbf65647f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: csi-azuredisk-controller-6dbf65647f-nknjj"
I0907 09:39:01.324515       1 taint_manager.go:431] "Noticed pod update" pod="kube-system/csi-azuredisk-controller-6dbf65647f-27pdv"
... skipping 4 lines ...
I0907 09:39:01.334643       1 taint_manager.go:431] "Noticed pod update" pod="kube-system/csi-azuredisk-controller-6dbf65647f-nknjj"
I0907 09:39:01.334679       1 disruption.go:494] updatePod called on pod "csi-azuredisk-controller-6dbf65647f-nknjj"
I0907 09:39:01.334913       1 disruption.go:570] No PodDisruptionBudgets found for pod csi-azuredisk-controller-6dbf65647f-nknjj, PodDisruptionBudget controller will avoid syncing.
I0907 09:39:01.334923       1 disruption.go:497] No matching pdb for pod "csi-azuredisk-controller-6dbf65647f-nknjj"
I0907 09:39:01.335022       1 replica_set.go:457] Pod csi-azuredisk-controller-6dbf65647f-nknjj updated, objectMeta {Name:csi-azuredisk-controller-6dbf65647f-nknjj GenerateName:csi-azuredisk-controller-6dbf65647f- Namespace:kube-system SelfLink: UID:4c327efe-8223-4500-961f-db6de0540924 ResourceVersion:963 Generation:0 CreationTimestamp:2022-09-07 09:39:01 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[app:csi-azuredisk-controller pod-template-hash:6dbf65647f] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:csi-azuredisk-controller-6dbf65647f UID:b62865fe-a944-4946-9f9b-48210c7e90e2 Controller:0xc000bb25e7 BlockOwnerDeletion:0xc000bb25e8}] Finalizers:[] ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-09-07 09:39:01 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b62865fe-a944-4946-9f9b-48210c7e90e2\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"azuredisk\"}":{".":{},"f:args":{},"f:env":{".":{},"k:{\"name\":\"AZURE_CREDENTIAL_FILE\"}":{".":{},"f:name":{},"f:valueFrom":{".":{},"f:configMapKeyRef":{}}},"k:{\"name\":\"CSI_ENDPOINT\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":29602,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:hostPort":{},"f:name":{},"f:protocol":{}},"k:{\"containerPort\":29604,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:hostPort":{},"f:name":{},"f:protocol":{}}},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:vo
lumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}},"k:{\"mountPath\":\"/etc/kubernetes/\"}":{".":{},"f:mountPath":{},"f:name":{}}}},"k:{\"name\":\"csi-attacher\"}":{".":{},"f:args":{},"f:env":{".":{},"k:{\"name\":\"ADDRESS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}}}},"k:{\"name\":\"csi-provisioner\"}":{".":{},"f:args":{},"f:env":{".":{},"k:{\"name\":\"ADDRESS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}}}},"k:{\"name\":\"csi-resizer\"}":{".":{},"f:args":{},"f:env":{".":{},"k:{\"name\":\"ADDRESS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}}}},"k:{\"name\":\"csi-snapshotter\"}":{".":{},"f:args":{},"f:env":{".":{},"k:{\"name\":\"ADDRESS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}}}},"k:{\"name\":\"liveness-probe\"}":{".":{},"f:args":{},"f:image":{},"f:image
PullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:hostNetwork":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{},"f:volumes":{".":{},"k:{\"name\":\"azure-cred\"}":{".":{},"f:hostPath":{".":{},"f:path":{},"f:type":{}},"f:name":{}},"k:{\"name\":\"socket-dir\"}":{".":{},"f:emptyDir":{},"f:name":{}}}}} Subresource:}]} -> {Name:csi-azuredisk-controller-6dbf65647f-nknjj GenerateName:csi-azuredisk-controller-6dbf65647f- Namespace:kube-system SelfLink: UID:4c327efe-8223-4500-961f-db6de0540924 ResourceVersion:966 Generation:0 CreationTimestamp:2022-09-07 09:39:01 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[app:csi-azuredisk-controller pod-template-hash:6dbf65647f] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:csi-azuredisk-controller-6dbf65647f UID:b62865fe-a944-4946-9f9b-48210c7e90e2 Controller:0xc001f3eb77 BlockOwnerDeletion:0xc001f3eb78}] Finalizers:[] ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-09-07 09:39:01 +0000 UTC FieldsType:FieldsV1 
FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b62865fe-a944-4946-9f9b-48210c7e90e2\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"azuredisk\"}":{".":{},"f:args":{},"f:env":{".":{},"k:{\"name\":\"AZURE_CREDENTIAL_FILE\"}":{".":{},"f:name":{},"f:valueFrom":{".":{},"f:configMapKeyRef":{}}},"k:{\"name\":\"CSI_ENDPOINT\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":29602,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:hostPort":{},"f:name":{},"f:protocol":{}},"k:{\"containerPort\":29604,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:hostPort":{},"f:name":{},"f:protocol":{}}},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}},"k:{\"mountPath\":\"/etc/kubernetes/\"}":{".":{},"f:mountPath":{},"f:name":{}}}},"k:{\"name\":\"csi-attacher\"}":{".":{},"f:args":{},"f:env":{".":{},"k:{\"name\":\"ADDRESS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}}}},"k:{\"name\":\"csi-provisioner\"}":{".":{},"f:args":{},"f:env":{".":{},"k:{\"name\":\"ADDRESS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests
":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}}}},"k:{\"name\":\"csi-resizer\"}":{".":{},"f:args":{},"f:env":{".":{},"k:{\"name\":\"ADDRESS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}}}},"k:{\"name\":\"csi-snapshotter\"}":{".":{},"f:args":{},"f:env":{".":{},"k:{\"name\":\"ADDRESS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}}}},"k:{\"name\":\"liveness-probe\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:hostNetwork":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{},"f:volumes":{".":{},"k:{\"name\":\"azure-cred\"}":{".":{},"f:hostPath":{".":{},"f:path":{},"f:type":{}},"f:name":{}},"k:{\"name\":\"socket-dir\"}":{".":{},"f:emptyDir":{},"f:name":{}}}}} Subresource:}]}.
I0907 09:39:01.337136       1 deployment_controller.go:585] "Finished syncing deployment" deployment="kube-system/csi-azuredisk-controller" duration="79.17638ms"
I0907 09:39:01.337289       1 deployment_controller.go:497] "Error syncing deployment" deployment="kube-system/csi-azuredisk-controller" err="Operation cannot be fulfilled on deployments.apps \"csi-azuredisk-controller\": the object has been modified; please apply your changes to the latest version and try again"
I0907 09:39:01.337364       1 deployment_controller.go:583] "Started syncing deployment" deployment="kube-system/csi-azuredisk-controller" startTime="2022-09-07 09:39:01.337345578 +0000 UTC m=+165.710125647"
I0907 09:39:01.338798       1 deployment_util.go:775] Deployment "csi-azuredisk-controller" timed out (false) [last progress check: 2022-09-07 09:39:01 +0000 UTC - now: 2022-09-07 09:39:01.338789896 +0000 UTC m=+165.711569965]
I0907 09:39:01.349389       1 deployment_controller.go:288] "ReplicaSet updated" replicaSet="kube-system/csi-azuredisk-controller-6dbf65647f"
I0907 09:39:01.352074       1 replica_set.go:667] Finished syncing ReplicaSet "kube-system/csi-azuredisk-controller-6dbf65647f" (67.533336ms)
I0907 09:39:01.352134       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/csi-azuredisk-controller-6dbf65647f", timestamp:time.Time{wall:0xc0be378d50f6c871, ext:165657388694, loc:(*time.Location)(0x6f10040)}}
I0907 09:39:01.352258       1 replica_set_utils.go:59] Updating status for : kube-system/csi-azuredisk-controller-6dbf65647f, replicas 0->2 (need 2), fullyLabeledReplicas 0->2, readyReplicas 0->0, availableReplicas 0->0, sequence No: 1->1
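The "Error syncing deployment ... the object has been modified; please apply your changes to the latest version and try again" lines above (and the identical errors later for PV protection finalizers) are the API server's optimistic-concurrency check rejecting a write made with a stale `resourceVersion`; controllers recover by re-reading the object and retrying, as client-go's `retry.RetryOnConflict` does. A minimal Python sketch of that behavior, under the assumption of a toy in-memory server (`FakeAPIServer` and `update_with_retry` are hypothetical names modeling the pattern, not real Kubernetes APIs):

```python
class Conflict(Exception):
    """Stands in for the HTTP 409 'object has been modified' error."""


class FakeAPIServer:
    """Toy model of the API server's resourceVersion check."""

    def __init__(self):
        self.obj = {"resourceVersion": 1, "replicas": 0}

    def get(self):
        return dict(self.obj)

    def update(self, obj):
        # Reject writes carrying a stale resourceVersion, mirroring
        # "Operation cannot be fulfilled ... the object has been modified".
        if obj["resourceVersion"] != self.obj["resourceVersion"]:
            raise Conflict("the object has been modified")
        obj["resourceVersion"] += 1
        self.obj = obj
        return dict(self.obj)


def update_with_retry(server, mutate, attempts=5):
    """Re-read and retry on conflict, like client-go's retry.RetryOnConflict."""
    for _ in range(attempts):
        obj = server.get()
        mutate(obj)
        try:
            return server.update(obj)
        except Conflict:
            continue  # someone else wrote first; fetch the latest and retry
    raise RuntimeError("exceeded retry budget")


server = FakeAPIServer()
stale = server.get()
# A concurrent writer wins the race, bumping resourceVersion to 2.
server.update({**server.get(), "replicas": 2})
try:
    server.update({**stale, "replicas": 1})  # stale write is rejected
except Conflict as err:
    print("conflict:", err)
result = update_with_retry(server, lambda o: o.update(replicas=3))
print(result["replicas"], result["resourceVersion"])  # 3 3
```

The deployment controller simply re-queues the key on conflict (visible above as an immediate "Started syncing deployment" right after the error), which has the same effect as the retry loop.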
... skipping 110 lines ...
I0907 09:39:04.569747       1 taint_manager.go:431] "Noticed pod update" pod="kube-system/csi-snapshot-controller-84ccd6c756-6fh47"
I0907 09:39:04.569776       1 disruption.go:494] updatePod called on pod "csi-snapshot-controller-84ccd6c756-6fh47"
I0907 09:39:04.569805       1 disruption.go:570] No PodDisruptionBudgets found for pod csi-snapshot-controller-84ccd6c756-6fh47, PodDisruptionBudget controller will avoid syncing.
I0907 09:39:04.569810       1 disruption.go:497] No matching pdb for pod "csi-snapshot-controller-84ccd6c756-6fh47"
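The disruption controller lines above ("No PodDisruptionBudgets found ... No matching pdb") come from matching the pod's labels against each PDB's label selector in the same namespace; with no match, the controller skips syncing for that pod. A rough sketch of that lookup, assuming equality-based selectors only (function names are illustrative, not the controller's real code):

```python
def selector_matches(selector, labels):
    # Equality-based matchLabels semantics: every selector key must
    # be present in the pod's labels with the same value.
    return all(labels.get(k) == v for k, v in selector.items())


def pdbs_for_pod(pdbs, pod_labels):
    """Return names of PDBs whose selector matches the pod's labels."""
    return [name for name, sel in pdbs.items() if selector_matches(sel, pod_labels)]


pdbs = {"snapshot-pdb": {"app": "csi-snapshot-controller"}}
print(pdbs_for_pod(pdbs, {"app": "csi-snapshot-controller"}))  # ['snapshot-pdb']
print(pdbs_for_pod({}, {"app": "csi-snapshot-controller"}))    # [] -> "No matching pdb"
```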
I0907 09:39:04.569854       1 replica_set.go:457] Pod csi-snapshot-controller-84ccd6c756-6fh47 updated, objectMeta {Name:csi-snapshot-controller-84ccd6c756-6fh47 GenerateName:csi-snapshot-controller-84ccd6c756- Namespace:kube-system SelfLink: UID:b3072c92-a4fb-4004-9b62-839201d78eec ResourceVersion:1018 Generation:0 CreationTimestamp:2022-09-07 09:39:04 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[app:csi-snapshot-controller pod-template-hash:84ccd6c756] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:csi-snapshot-controller-84ccd6c756 UID:66220013-7a61-4ffe-a406-99a8d1fd80a7 Controller:0xc0020ff8b7 BlockOwnerDeletion:0xc0020ff8b8}] Finalizers:[] ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-09-07 09:39:04 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"66220013-7a61-4ffe-a406-99a8d1fd80a7\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"csi-snapshot-controller\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}} Subresource:}]} -> {Name:csi-snapshot-controller-84ccd6c756-6fh47 GenerateName:csi-snapshot-controller-84ccd6c756- Namespace:kube-system SelfLink: UID:b3072c92-a4fb-4004-9b62-839201d78eec ResourceVersion:1019 Generation:0 CreationTimestamp:2022-09-07 09:39:04 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[app:csi-snapshot-controller pod-template-hash:84ccd6c756] 
Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:csi-snapshot-controller-84ccd6c756 UID:66220013-7a61-4ffe-a406-99a8d1fd80a7 Controller:0xc0023c9f57 BlockOwnerDeletion:0xc0023c9f58}] Finalizers:[] ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-09-07 09:39:04 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"66220013-7a61-4ffe-a406-99a8d1fd80a7\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"csi-snapshot-controller\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}} Subresource:}]}.
I0907 09:39:04.570266       1 deployment_controller.go:585] "Finished syncing deployment" deployment="kube-system/csi-snapshot-controller" duration="63.963589ms"
I0907 09:39:04.570290       1 deployment_controller.go:497] "Error syncing deployment" deployment="kube-system/csi-snapshot-controller" err="Operation cannot be fulfilled on deployments.apps \"csi-snapshot-controller\": the object has been modified; please apply your changes to the latest version and try again"
I0907 09:39:04.570336       1 deployment_controller.go:583] "Started syncing deployment" deployment="kube-system/csi-snapshot-controller" startTime="2022-09-07 09:39:04.570324098 +0000 UTC m=+168.943104067"
I0907 09:39:04.570743       1 deployment_util.go:775] Deployment "csi-snapshot-controller" timed out (false) [last progress check: 2022-09-07 09:39:04 +0000 UTC - now: 2022-09-07 09:39:04.570736003 +0000 UTC m=+168.943515972]
I0907 09:39:04.586721       1 deployment_controller.go:183] "Updating deployment" deployment="kube-system/csi-snapshot-controller"
I0907 09:39:04.586865       1 deployment_controller.go:585] "Finished syncing deployment" deployment="kube-system/csi-snapshot-controller" duration="16.531304ms"
I0907 09:39:04.586892       1 deployment_controller.go:583] "Started syncing deployment" deployment="kube-system/csi-snapshot-controller" startTime="2022-09-07 09:39:04.586878502 +0000 UTC m=+168.959658571"
I0907 09:39:04.587259       1 deployment_util.go:775] Deployment "csi-snapshot-controller" timed out (false) [last progress check: 2022-09-07 09:39:04 +0000 UTC - now: 2022-09-07 09:39:04.587251507 +0000 UTC m=+168.960031476]
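The `deployment_util.go:775` lines above report `timed out (false)`: the deployment has not yet exceeded its progress deadline, computed by comparing the last progress check against the current time. A simplified sketch of that comparison, assuming a plain deadline-in-seconds check (`timed_out` is a hypothetical helper, not the real `deployment_util` function):

```python
from datetime import datetime, timedelta


def timed_out(last_progress, now, deadline_seconds):
    """True once more than deadline_seconds have elapsed since the
    last observed progress, mirroring progressDeadlineSeconds."""
    return now > last_progress + timedelta(seconds=deadline_seconds)


last = datetime(2022, 9, 7, 9, 39, 4)
# Fractions of a second after the last progress check: not timed out.
print(timed_out(last, last + timedelta(milliseconds=500), 600))  # False
# Past the deadline: the controller would mark the deployment failed.
print(timed_out(last, last + timedelta(seconds=601), 600))       # True
```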
... skipping 537 lines ...
I0907 09:42:05.003285       1 pv_protection_controller.go:121] Processing PV pvc-ce11694d-09a7-45a6-9c01-a40a7aa726a0
I0907 09:42:05.007823       1 pv_controller_base.go:726] storeObjectUpdate updating volume "pvc-ce11694d-09a7-45a6-9c01-a40a7aa726a0" with version 1621
I0907 09:42:05.007861       1 pv_controller.go:551] synchronizing PersistentVolume[pvc-ce11694d-09a7-45a6-9c01-a40a7aa726a0]: phase: Released, bound to: "azuredisk-8081/pvc-zc4pz (uid: ce11694d-09a7-45a6-9c01-a40a7aa726a0)", boundByController: false
I0907 09:42:05.008054       1 pv_controller.go:585] synchronizing PersistentVolume[pvc-ce11694d-09a7-45a6-9c01-a40a7aa726a0]: volume is bound to claim azuredisk-8081/pvc-zc4pz
I0907 09:42:05.008072       1 pv_controller.go:619] synchronizing PersistentVolume[pvc-ce11694d-09a7-45a6-9c01-a40a7aa726a0]: claim azuredisk-8081/pvc-zc4pz not found
I0907 09:42:05.008010       1 pv_protection_controller.go:198] Got event on PV pvc-ce11694d-09a7-45a6-9c01-a40a7aa726a0
I0907 09:42:05.015758       1 pv_protection_controller.go:173] Error removing protection finalizer from PV pvc-ce11694d-09a7-45a6-9c01-a40a7aa726a0: Operation cannot be fulfilled on persistentvolumes "pvc-ce11694d-09a7-45a6-9c01-a40a7aa726a0": the object has been modified; please apply your changes to the latest version and try again
I0907 09:42:05.015776       1 pv_protection_controller.go:124] Finished processing PV pvc-ce11694d-09a7-45a6-9c01-a40a7aa726a0 (12.43545ms)
E0907 09:42:05.015787       1 pv_protection_controller.go:114] PV pvc-ce11694d-09a7-45a6-9c01-a40a7aa726a0 failed with : Operation cannot be fulfilled on persistentvolumes "pvc-ce11694d-09a7-45a6-9c01-a40a7aa726a0": the object has been modified; please apply your changes to the latest version and try again
I0907 09:42:05.015967       1 pv_protection_controller.go:121] Processing PV pvc-ce11694d-09a7-45a6-9c01-a40a7aa726a0
I0907 09:42:05.019491       1 pv_protection_controller.go:176] Removed protection finalizer from PV pvc-ce11694d-09a7-45a6-9c01-a40a7aa726a0
I0907 09:42:05.019666       1 pv_protection_controller.go:124] Finished processing PV pvc-ce11694d-09a7-45a6-9c01-a40a7aa726a0 (3.682144ms)
I0907 09:42:05.020232       1 pv_controller_base.go:238] volume "pvc-ce11694d-09a7-45a6-9c01-a40a7aa726a0" deleted
I0907 09:42:05.020337       1 pv_controller_base.go:589] deletion of claim "azuredisk-8081/pvc-zc4pz" was already processed
I0907 09:42:05.021510       1 pv_protection_controller.go:121] Processing PV pvc-ce11694d-09a7-45a6-9c01-a40a7aa726a0
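The `pv_controller` stanza above walks a dynamically provisioned volume through its end of life: the volume is still bound to claim `azuredisk-8081/pvc-zc4pz`, the claim is found to be deleted ("claim ... not found"), so the volume is Released and, since its reclaim policy is Delete, the backing disk is removed and the PV object deleted. A toy model of that sync decision, assuming a single claimRef string and ignoring boundByController details (`sync_volume` is an illustrative name, not the real `syncVolume`):

```python
def sync_volume(pv, existing_claims):
    """Decide what to do with a PV based on its claimRef and reclaim policy."""
    claim_ref = pv.get("claimRef")
    if claim_ref is None:
        return "Available"
    if claim_ref not in existing_claims:
        # "claim ... not found": the PVC was deleted, so the volume is
        # Released and the reclaim policy decides its fate.
        if pv["reclaimPolicy"] == "Delete":
            return "Released -> delete volume"
        return "Released"
    return "Bound"


claims = {"azuredisk-8081/pvc-zc4pz"}
pv = {"claimRef": "azuredisk-8081/pvc-zc4pz", "reclaimPolicy": "Delete"}
print(sync_volume(pv, claims))  # claim still exists -> Bound
print(sync_volume(pv, set()))   # claim deleted -> Released -> delete volume
```

The `pv_protection_controller` lines interleaved above show the other half of the teardown: the `kubernetes.io/pv-protection` finalizer is removed only once the PV is safe to delete, and that removal can itself hit the optimistic-concurrency conflict seen earlier, which is retried harmlessly.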
... skipping 542 lines ...
I0907 09:44:49.198120       1 pv_protection_controller.go:121] Processing PV pvc-1c0b0153-abd6-49b7-87f8-a7b5cdee4062
I0907 09:44:49.210128       1 pv_controller_base.go:726] storeObjectUpdate updating volume "pvc-1c0b0153-abd6-49b7-87f8-a7b5cdee4062" with version 2103
I0907 09:44:49.210162       1 pv_controller.go:551] synchronizing PersistentVolume[pvc-1c0b0153-abd6-49b7-87f8-a7b5cdee4062]: phase: Released, bound to: "azuredisk-5466/pvc-q88vh (uid: 1c0b0153-abd6-49b7-87f8-a7b5cdee4062)", boundByController: false
I0907 09:44:49.210186       1 pv_controller.go:585] synchronizing PersistentVolume[pvc-1c0b0153-abd6-49b7-87f8-a7b5cdee4062]: volume is bound to claim azuredisk-5466/pvc-q88vh
I0907 09:44:49.210196       1 pv_controller.go:619] synchronizing PersistentVolume[pvc-1c0b0153-abd6-49b7-87f8-a7b5cdee4062]: claim azuredisk-5466/pvc-q88vh not found
I0907 09:44:49.210209       1 pv_protection_controller.go:198] Got event on PV pvc-1c0b0153-abd6-49b7-87f8-a7b5cdee4062
I0907 09:44:49.212261       1 pv_protection_controller.go:173] Error removing protection finalizer from PV pvc-1c0b0153-abd6-49b7-87f8-a7b5cdee4062: Operation cannot be fulfilled on persistentvolumes "pvc-1c0b0153-abd6-49b7-87f8-a7b5cdee4062": the object has been modified; please apply your changes to the latest version and try again
I0907 09:44:49.212291       1 pv_protection_controller.go:124] Finished processing PV pvc-1c0b0153-abd6-49b7-87f8-a7b5cdee4062 (14.152571ms)
E0907 09:44:49.212305       1 pv_protection_controller.go:114] PV pvc-1c0b0153-abd6-49b7-87f8-a7b5cdee4062 failed with : Operation cannot be fulfilled on persistentvolumes "pvc-1c0b0153-abd6-49b7-87f8-a7b5cdee4062": the object has been modified; please apply your changes to the latest version and try again
I0907 09:44:49.212325       1 pv_protection_controller.go:121] Processing PV pvc-1c0b0153-abd6-49b7-87f8-a7b5cdee4062
I0907 09:44:49.218486       1 pv_controller_base.go:238] volume "pvc-1c0b0153-abd6-49b7-87f8-a7b5cdee4062" deleted
I0907 09:44:49.218525       1 pv_controller_base.go:589] deletion of claim "azuredisk-5466/pvc-q88vh" was already processed
I0907 09:44:49.218699       1 pv_protection_controller.go:176] Removed protection finalizer from PV pvc-1c0b0153-abd6-49b7-87f8-a7b5cdee4062
I0907 09:44:49.218710       1 pv_protection_controller.go:124] Finished processing PV pvc-1c0b0153-abd6-49b7-87f8-a7b5cdee4062 (6.374977ms)
I0907 09:44:49.218741       1 pv_protection_controller.go:121] Processing PV pvc-1c0b0153-abd6-49b7-87f8-a7b5cdee4062
... skipping 305 lines ...
I0907 09:45:38.962160       1 pv_controller.go:619] synchronizing PersistentVolume[pvc-76efa53c-a381-41a8-ba80-a1418523d4aa]: claim azuredisk-2888/pvc-js8k9 not found
I0907 09:45:38.966159       1 pv_controller_base.go:726] storeObjectUpdate updating volume "pvc-76efa53c-a381-41a8-ba80-a1418523d4aa" with version 2296
I0907 09:45:38.966386       1 pv_controller.go:551] synchronizing PersistentVolume[pvc-76efa53c-a381-41a8-ba80-a1418523d4aa]: phase: Released, bound to: "azuredisk-2888/pvc-js8k9 (uid: 76efa53c-a381-41a8-ba80-a1418523d4aa)", boundByController: false
I0907 09:45:38.966539       1 pv_controller.go:585] synchronizing PersistentVolume[pvc-76efa53c-a381-41a8-ba80-a1418523d4aa]: volume is bound to claim azuredisk-2888/pvc-js8k9
I0907 09:45:38.966556       1 pv_controller.go:619] synchronizing PersistentVolume[pvc-76efa53c-a381-41a8-ba80-a1418523d4aa]: claim azuredisk-2888/pvc-js8k9 not found
I0907 09:45:38.966700       1 pv_protection_controller.go:198] Got event on PV pvc-76efa53c-a381-41a8-ba80-a1418523d4aa
I0907 09:45:38.969086       1 pv_protection_controller.go:173] Error removing protection finalizer from PV pvc-76efa53c-a381-41a8-ba80-a1418523d4aa: Operation cannot be fulfilled on persistentvolumes "pvc-76efa53c-a381-41a8-ba80-a1418523d4aa": the object has been modified; please apply your changes to the latest version and try again
I0907 09:45:38.969104       1 pv_protection_controller.go:124] Finished processing PV pvc-76efa53c-a381-41a8-ba80-a1418523d4aa (7.194385ms)
E0907 09:45:38.969117       1 pv_protection_controller.go:114] PV pvc-76efa53c-a381-41a8-ba80-a1418523d4aa failed with : Operation cannot be fulfilled on persistentvolumes "pvc-76efa53c-a381-41a8-ba80-a1418523d4aa": the object has been modified; please apply your changes to the latest version and try again
I0907 09:45:38.969140       1 pv_protection_controller.go:121] Processing PV pvc-76efa53c-a381-41a8-ba80-a1418523d4aa
I0907 09:45:38.972545       1 pv_protection_controller.go:176] Removed protection finalizer from PV pvc-76efa53c-a381-41a8-ba80-a1418523d4aa
I0907 09:45:38.972570       1 pv_protection_controller.go:124] Finished processing PV pvc-76efa53c-a381-41a8-ba80-a1418523d4aa (3.42094ms)
I0907 09:45:38.973404       1 pv_controller_base.go:238] volume "pvc-76efa53c-a381-41a8-ba80-a1418523d4aa" deleted
I0907 09:45:38.973575       1 pv_controller_base.go:589] deletion of claim "azuredisk-2888/pvc-js8k9" was already processed
I0907 09:45:38.974622       1 pv_protection_controller.go:121] Processing PV pvc-76efa53c-a381-41a8-ba80-a1418523d4aa
... skipping 1851 lines ...
I0907 09:50:34.047820       1 pv_controller.go:1535] provisionClaim[azuredisk-6159/pvc-mkv2r]: started
I0907 09:50:34.047839       1 pv_controller.go:1851] scheduleOperation[provision-azuredisk-6159/pvc-mkv2r[645675ba-baac-44a5-8f71-9d9e82a49755]]
I0907 09:50:34.047864       1 pv_controller.go:1788] provisionClaimOperationExternal [azuredisk-6159/pvc-mkv2r] started, class: "azuredisk-6159-kubernetes.io-azure-disk-dynamic-sc-448v7"
I0907 09:50:34.048145       1 pvc_protection_controller.go:331] "Got event on PVC" pvc="azuredisk-6159/pvc-mkv2r"
I0907 09:50:34.049358       1 deployment_controller.go:288] "ReplicaSet updated" replicaSet="azuredisk-6159/azuredisk-volume-tester-p8nff-649557d8fb"
I0907 09:50:34.051206       1 deployment_controller.go:585] "Finished syncing deployment" deployment="azuredisk-6159/azuredisk-volume-tester-p8nff" duration="29.052552ms"
I0907 09:50:34.051388       1 deployment_controller.go:497] "Error syncing deployment" deployment="azuredisk-6159/azuredisk-volume-tester-p8nff" err="Operation cannot be fulfilled on deployments.apps \"azuredisk-volume-tester-p8nff\": the object has been modified; please apply your changes to the latest version and try again"
I0907 09:50:34.051723       1 deployment_controller.go:583] "Started syncing deployment" deployment="azuredisk-6159/azuredisk-volume-tester-p8nff" startTime="2022-09-07 09:50:34.051706968 +0000 UTC m=+858.424486937"
I0907 09:50:34.052308       1 pv_controller_base.go:726] storeObjectUpdate updating claim "azuredisk-6159/pvc-mkv2r" with version 3285
I0907 09:50:34.052610       1 pv_controller.go:255] synchronizing PersistentVolumeClaim[azuredisk-6159/pvc-mkv2r]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0907 09:50:34.052758       1 pv_controller.go:350] synchronizing unbound PersistentVolumeClaim[azuredisk-6159/pvc-mkv2r]: no volume found
I0907 09:50:34.052907       1 pv_controller.go:1535] provisionClaim[azuredisk-6159/pvc-mkv2r]: started
I0907 09:50:34.053077       1 pv_controller.go:1851] scheduleOperation[provision-azuredisk-6159/pvc-mkv2r[645675ba-baac-44a5-8f71-9d9e82a49755]]
... skipping 211 lines ...
I0907 09:50:56.986589       1 disruption.go:570] No PodDisruptionBudgets found for pod azuredisk-volume-tester-p8nff-649557d8fb-7qvkj, PodDisruptionBudget controller will avoid syncing.
I0907 09:50:56.986596       1 disruption.go:497] No matching pdb for pod "azuredisk-volume-tester-p8nff-649557d8fb-7qvkj"
I0907 09:50:56.986639       1 replica_set.go:457] Pod azuredisk-volume-tester-p8nff-649557d8fb-7qvkj updated, objectMeta {Name:azuredisk-volume-tester-p8nff-649557d8fb-7qvkj GenerateName:azuredisk-volume-tester-p8nff-649557d8fb- Namespace:azuredisk-6159 SelfLink: UID:d464cc52-5b49-4d8a-8615-c014c6e4f755 ResourceVersion:3407 Generation:0 CreationTimestamp:2022-09-07 09:50:56 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[app:azuredisk-volume-tester-685213522303989579 pod-template-hash:649557d8fb] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:azuredisk-volume-tester-p8nff-649557d8fb UID:5414577a-e766-40b5-af16-171a7f73d0b9 Controller:0xc002168367 BlockOwnerDeletion:0xc002168368}] Finalizers:[] ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-09-07 09:50:56 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5414577a-e766-40b5-af16-171a7f73d0b9\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"volume-tester\"}":{".":{},"f:args":{},"f:command":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/mnt/test-1\"}":{".":{},"f:mountPath":{},"f:name":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:volumes":{".":{},"k:{\"name\":\"test-volume-1\"}":{".":{},"f:name":{},"f:persistentVolumeClaim":{".":{},"f:claimName":{}}}}}} Subresource:}]} -> {Name:azuredisk-volume-tester-p8nff-649557d8fb-7qvkj GenerateName:azuredisk-volume-tester-p8nff-649557d8fb- Namespace:azuredisk-6159 SelfLink: UID:d464cc52-5b49-4d8a-8615-c014c6e4f755 ResourceVersion:3415 Generation:0 CreationTimestamp:2022-09-07 09:50:56 +0000 UTC 
DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[app:azuredisk-volume-tester-685213522303989579 pod-template-hash:649557d8fb] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:azuredisk-volume-tester-p8nff-649557d8fb UID:5414577a-e766-40b5-af16-171a7f73d0b9 Controller:0xc002169a9e BlockOwnerDeletion:0xc002169a9f}] Finalizers:[] ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-09-07 09:50:56 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5414577a-e766-40b5-af16-171a7f73d0b9\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"volume-tester\"}":{".":{},"f:args":{},"f:command":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/mnt/test-1\"}":{".":{},"f:mountPath":{},"f:name":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:volumes":{".":{},"k:{\"name\":\"test-volume-1\"}":{".":{},"f:name":{},"f:persistentVolumeClaim":{".":{},"f:claimName":{}}}}}} Subresource:} {Manager:kubelet Operation:Update APIVersion:v1 Time:2022-09-07 09:50:56 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} Subresource:status}]}.
I0907 09:50:56.986781       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"azuredisk-6159/azuredisk-volume-tester-p8nff-649557d8fb", timestamp:time.Time{wall:0xc0be384038943d5b, ext:881322019200, loc:(*time.Location)(0x6f10040)}}
I0907 09:50:56.986813       1 controller_utils.go:938] Ignoring inactive pod azuredisk-6159/azuredisk-volume-tester-p8nff-649557d8fb-h7prg in state Running, deletion time 2022-09-07 09:51:26 +0000 UTC
I0907 09:50:56.986843       1 replica_set.go:667] Finished syncing ReplicaSet "azuredisk-6159/azuredisk-volume-tester-p8nff-649557d8fb" (66.301µs)
I0907 09:50:56.998446       1 reconciler.go:420] "Multi-Attach error: volume is already used by pods" pods=[azuredisk-6159/azuredisk-volume-tester-p8nff-649557d8fb-h7prg] attachedTo=[capz-4ay9k6-md-0-b8ndn] volume={VolumeToAttach:{MultiAttachErrorReported:false VolumeName:kubernetes.io/csi/disk.csi.azure.com^/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4ay9k6/providers/Microsoft.Compute/disks/pvc-645675ba-baac-44a5-8f71-9d9e82a49755 VolumeSpec:0xc002359908 NodeName:capz-4ay9k6-md-0-dcnxq ScheduledPods:[&Pod{ObjectMeta:{azuredisk-volume-tester-p8nff-649557d8fb-7qvkj azuredisk-volume-tester-p8nff-649557d8fb- azuredisk-6159  d464cc52-5b49-4d8a-8615-c014c6e4f755 3407 0 2022-09-07 09:50:56 +0000 UTC <nil> <nil> map[app:azuredisk-volume-tester-685213522303989579 pod-template-hash:649557d8fb] map[] [{apps/v1 ReplicaSet azuredisk-volume-tester-p8nff-649557d8fb 5414577a-e766-40b5-af16-171a7f73d0b9 0xc002168367 0xc002168368}] [] [{kube-controller-manager Update v1 2022-09-07 09:50:56 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5414577a-e766-40b5-af16-171a7f73d0b9\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"volume-tester\"}":{".":{},"f:args":{},"f:command":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/mnt/test-1\"}":{".":{},"f:mountPath":{},"f:name":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:volumes":{".":{},"k:{\"name\":\"test-volume-1\"}":{".":{},"f:name":{},"f:persistentVolumeClaim":{".":{},"f:claimName":{}}}}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:test-volume-1,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:&PersistentVolumeClaimVolumeSource{ClaimName:pvc-mkv2r,ReadOnly:false,},RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},Volume{Name:kube-api-access-qbrd8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:volume-tester,Image:k8s.gcr.io/e2e-test-images/busybox:1.29-2,Command:[/bin/sh],Args:[-c echo 'hello world' >> /mnt/test-1/data && while true; do sleep 3600; 
done],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:test-volume-1,ReadOnly:false,MountPath:/mnt/test-1,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-qbrd8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{kubernetes.io/os: linux,},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:capz-4ay9k6-md-0-dcnxq,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetH
ostnameAsFQDN:nil,OS:nil,HostUsers:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 09:50:56 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}]}}
I0907 09:50:56.998507       1 event.go:294] "Event occurred" object="azuredisk-6159/azuredisk-volume-tester-p8nff-649557d8fb-7qvkj" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="FailedAttachVolume" message="Multi-Attach error for volume \"pvc-645675ba-baac-44a5-8f71-9d9e82a49755\" Volume is already used by pod(s) azuredisk-volume-tester-p8nff-649557d8fb-h7prg"
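The `FailedAttachVolume` / "Multi-Attach error" event above is expected during this test: the azure disk is a ReadWriteOnce volume, and the attach-detach controller will not attach it to the new node (`capz-4ay9k6-md-0-dcnxq`) while it is still attached to the old one (`capz-4ay9k6-md-0-b8ndn`) for the terminating pod. Once the old pod's volumes are unmounted and detached, the attach succeeds. A toy model of that single-node constraint (class and method names are illustrative, not the real controller code):

```python
class AttachDetach:
    """Toy attach-detach controller enforcing single-node attachment
    for a ReadWriteOnce volume."""

    def __init__(self):
        self.attached = {}  # volume name -> node name

    def attach(self, volume, node):
        current = self.attached.get(volume)
        if current is not None and current != node:
            # Mirrors the "Multi-Attach error" event: the disk stays on
            # the old node until the terminating pod releases it.
            return f"Multi-Attach error: {volume} already attached to {current}"
        self.attached[volume] = node
        return f"attached {volume} to {node}"

    def detach(self, volume):
        self.attached.pop(volume, None)


adc = AttachDetach()
print(adc.attach("pvc-645675ba", "md-0-b8ndn"))   # first pod's node
print(adc.attach("pvc-645675ba", "md-0-dcnxq"))   # old pod still holds it
adc.detach("pvc-645675ba")                         # old pod fully gone
print(adc.attach("pvc-645675ba", "md-0-dcnxq"))   # now succeeds
```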
I0907 09:50:59.721561       1 reflector.go:559] vendor/k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ClusterRole total 8 items received
I0907 09:51:01.008218       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-4ay9k6-md-0-dcnxq"
I0907 09:51:01.581842       1 httplog.go:131] "HTTP" verb="GET" URI="/healthz" latency="95.501µs" userAgent="kube-probe/1.26+" audit-ID="" srcIP="127.0.0.1:44808" resp=200
I0907 09:51:01.653005       1 reflector.go:281] vendor/k8s.io/client-go/informers/factory.go:134: forcing resync
I0907 09:51:01.697188       1 reflector.go:281] vendor/k8s.io/client-go/informers/factory.go:134: forcing resync
I0907 09:51:01.793124       1 pv_controller_base.go:612] resyncing PV controller
... skipping 1393 lines ...
I0907 09:54:30.119636       1 pv_protection_controller.go:121] Processing PV pvc-8ce4ac54-4721-4dcc-8dfd-343f0cfb9aed
I0907 09:54:30.125125       1 pv_controller_base.go:726] storeObjectUpdate updating volume "pvc-8ce4ac54-4721-4dcc-8dfd-343f0cfb9aed" with version 4161
I0907 09:54:30.125252       1 pv_controller.go:551] synchronizing PersistentVolume[pvc-8ce4ac54-4721-4dcc-8dfd-343f0cfb9aed]: phase: Released, bound to: "azuredisk-9241/pvc-t2tnm (uid: 8ce4ac54-4721-4dcc-8dfd-343f0cfb9aed)", boundByController: false
I0907 09:54:30.125415       1 pv_controller.go:585] synchronizing PersistentVolume[pvc-8ce4ac54-4721-4dcc-8dfd-343f0cfb9aed]: volume is bound to claim azuredisk-9241/pvc-t2tnm
I0907 09:54:30.125505       1 pv_controller.go:619] synchronizing PersistentVolume[pvc-8ce4ac54-4721-4dcc-8dfd-343f0cfb9aed]: claim azuredisk-9241/pvc-t2tnm not found
I0907 09:54:30.125360       1 pv_protection_controller.go:198] Got event on PV pvc-8ce4ac54-4721-4dcc-8dfd-343f0cfb9aed
I0907 09:54:30.128803       1 pv_protection_controller.go:173] Error removing protection finalizer from PV pvc-8ce4ac54-4721-4dcc-8dfd-343f0cfb9aed: Operation cannot be fulfilled on persistentvolumes "pvc-8ce4ac54-4721-4dcc-8dfd-343f0cfb9aed": the object has been modified; please apply your changes to the latest version and try again
I0907 09:54:30.128821       1 pv_protection_controller.go:124] Finished processing PV pvc-8ce4ac54-4721-4dcc-8dfd-343f0cfb9aed (9.100611ms)
E0907 09:54:30.128832       1 pv_protection_controller.go:114] PV pvc-8ce4ac54-4721-4dcc-8dfd-343f0cfb9aed failed with : Operation cannot be fulfilled on persistentvolumes "pvc-8ce4ac54-4721-4dcc-8dfd-343f0cfb9aed": the object has been modified; please apply your changes to the latest version and try again
I0907 09:54:30.128883       1 pv_protection_controller.go:121] Processing PV pvc-8ce4ac54-4721-4dcc-8dfd-343f0cfb9aed
I0907 09:54:30.132382       1 pv_protection_controller.go:176] Removed protection finalizer from PV pvc-8ce4ac54-4721-4dcc-8dfd-343f0cfb9aed
I0907 09:54:30.132409       1 pv_protection_controller.go:124] Finished processing PV pvc-8ce4ac54-4721-4dcc-8dfd-343f0cfb9aed (3.518043ms)
I0907 09:54:30.132382       1 pv_controller_base.go:238] volume "pvc-8ce4ac54-4721-4dcc-8dfd-343f0cfb9aed" deleted
I0907 09:54:30.132462       1 pv_controller_base.go:589] deletion of claim "azuredisk-9241/pvc-t2tnm" was already processed
I0907 09:54:30.134528       1 pv_protection_controller.go:121] Processing PV pvc-8ce4ac54-4721-4dcc-8dfd-343f0cfb9aed
... skipping 702 lines ...
I0907 09:55:48.521566       1 pv_protection_controller.go:121] Processing PV pvc-da65fc2c-ae82-49e8-b8a2-1c59530a284e
I0907 09:55:48.532351       1 pv_controller_base.go:726] storeObjectUpdate updating volume "pvc-da65fc2c-ae82-49e8-b8a2-1c59530a284e" with version 4475
I0907 09:55:48.535148       1 pv_controller.go:551] synchronizing PersistentVolume[pvc-da65fc2c-ae82-49e8-b8a2-1c59530a284e]: phase: Released, bound to: "azuredisk-9336/pvc-knt7x (uid: da65fc2c-ae82-49e8-b8a2-1c59530a284e)", boundByController: false
I0907 09:55:48.535219       1 pv_controller.go:585] synchronizing PersistentVolume[pvc-da65fc2c-ae82-49e8-b8a2-1c59530a284e]: volume is bound to claim azuredisk-9336/pvc-knt7x
I0907 09:55:48.535257       1 pv_controller.go:619] synchronizing PersistentVolume[pvc-da65fc2c-ae82-49e8-b8a2-1c59530a284e]: claim azuredisk-9336/pvc-knt7x not found
I0907 09:55:48.535328       1 pv_protection_controller.go:198] Got event on PV pvc-da65fc2c-ae82-49e8-b8a2-1c59530a284e
I0907 09:55:48.535547       1 pv_protection_controller.go:173] Error removing protection finalizer from PV pvc-da65fc2c-ae82-49e8-b8a2-1c59530a284e: Operation cannot be fulfilled on persistentvolumes "pvc-da65fc2c-ae82-49e8-b8a2-1c59530a284e": the object has been modified; please apply your changes to the latest version and try again
I0907 09:55:48.535562       1 pv_protection_controller.go:124] Finished processing PV pvc-da65fc2c-ae82-49e8-b8a2-1c59530a284e (13.987468ms)
E0907 09:55:48.535592       1 pv_protection_controller.go:114] PV pvc-da65fc2c-ae82-49e8-b8a2-1c59530a284e failed with : Operation cannot be fulfilled on persistentvolumes "pvc-da65fc2c-ae82-49e8-b8a2-1c59530a284e": the object has been modified; please apply your changes to the latest version and try again
I0907 09:55:48.535624       1 pv_protection_controller.go:121] Processing PV pvc-da65fc2c-ae82-49e8-b8a2-1c59530a284e
I0907 09:55:48.555689       1 pv_protection_controller.go:176] Removed protection finalizer from PV pvc-da65fc2c-ae82-49e8-b8a2-1c59530a284e
I0907 09:55:48.555895       1 pv_protection_controller.go:124] Finished processing PV pvc-da65fc2c-ae82-49e8-b8a2-1c59530a284e (20.255942ms)
I0907 09:55:48.556070       1 pv_protection_controller.go:121] Processing PV pvc-da65fc2c-ae82-49e8-b8a2-1c59530a284e
I0907 09:55:48.556380       1 pv_protection_controller.go:129] PV pvc-da65fc2c-ae82-49e8-b8a2-1c59530a284e not found, ignoring
I0907 09:55:48.556523       1 pv_protection_controller.go:124] Finished processing PV pvc-da65fc2c-ae82-49e8-b8a2-1c59530a284e (152.202µs)
... skipping 50 lines ...
I0907 09:55:52.929436       1 pv_controller.go:619] synchronizing PersistentVolume[pvc-9bf628b5-5340-45c5-9971-b20fc627ec6b]: claim azuredisk-9336/pvc-tpwxm not found
I0907 09:55:52.935252       1 pv_protection_controller.go:198] Got event on PV pvc-9bf628b5-5340-45c5-9971-b20fc627ec6b
I0907 09:55:52.935250       1 pv_controller_base.go:726] storeObjectUpdate updating volume "pvc-9bf628b5-5340-45c5-9971-b20fc627ec6b" with version 4493
I0907 09:55:52.935505       1 pv_controller.go:551] synchronizing PersistentVolume[pvc-9bf628b5-5340-45c5-9971-b20fc627ec6b]: phase: Released, bound to: "azuredisk-9336/pvc-tpwxm (uid: 9bf628b5-5340-45c5-9971-b20fc627ec6b)", boundByController: false
I0907 09:55:52.935534       1 pv_controller.go:585] synchronizing PersistentVolume[pvc-9bf628b5-5340-45c5-9971-b20fc627ec6b]: volume is bound to claim azuredisk-9336/pvc-tpwxm
I0907 09:55:52.935544       1 pv_controller.go:619] synchronizing PersistentVolume[pvc-9bf628b5-5340-45c5-9971-b20fc627ec6b]: claim azuredisk-9336/pvc-tpwxm not found
I0907 09:55:52.938432       1 pv_protection_controller.go:173] Error removing protection finalizer from PV pvc-9bf628b5-5340-45c5-9971-b20fc627ec6b: Operation cannot be fulfilled on persistentvolumes "pvc-9bf628b5-5340-45c5-9971-b20fc627ec6b": the object has been modified; please apply your changes to the latest version and try again
I0907 09:55:52.938451       1 pv_protection_controller.go:124] Finished processing PV pvc-9bf628b5-5340-45c5-9971-b20fc627ec6b (9.269911ms)
E0907 09:55:52.938464       1 pv_protection_controller.go:114] PV pvc-9bf628b5-5340-45c5-9971-b20fc627ec6b failed with : Operation cannot be fulfilled on persistentvolumes "pvc-9bf628b5-5340-45c5-9971-b20fc627ec6b": the object has been modified; please apply your changes to the latest version and try again
I0907 09:55:52.938490       1 pv_protection_controller.go:121] Processing PV pvc-9bf628b5-5340-45c5-9971-b20fc627ec6b
I0907 09:55:52.942421       1 pv_controller_base.go:238] volume "pvc-9bf628b5-5340-45c5-9971-b20fc627ec6b" deleted
I0907 09:55:52.942635       1 pv_controller_base.go:589] deletion of claim "azuredisk-9336/pvc-tpwxm" was already processed
I0907 09:55:52.943299       1 pv_protection_controller.go:176] Removed protection finalizer from PV pvc-9bf628b5-5340-45c5-9971-b20fc627ec6b
I0907 09:55:52.943342       1 pv_protection_controller.go:124] Finished processing PV pvc-9bf628b5-5340-45c5-9971-b20fc627ec6b (4.840158ms)
I0907 09:55:52.944376       1 pv_protection_controller.go:121] Processing PV pvc-9bf628b5-5340-45c5-9971-b20fc627ec6b
... skipping 1514 lines ...
I0907 09:59:16.752466       1 pv_controller.go:619] synchronizing PersistentVolume[pvc-33edae87-a766-4ede-85a1-489b64f0443c]: claim azuredisk-5786/pvc-8mr94 not found
I0907 09:59:16.752482       1 pv_protection_controller.go:198] Got event on PV pvc-33edae87-a766-4ede-85a1-489b64f0443c
I0907 09:59:16.752502       1 pv_protection_controller.go:121] Processing PV pvc-33edae87-a766-4ede-85a1-489b64f0443c
I0907 09:59:16.760782       1 pv_controller_base.go:726] storeObjectUpdate updating volume "pvc-33edae87-a766-4ede-85a1-489b64f0443c" with version 5254
I0907 09:59:16.760995       1 pv_controller.go:551] synchronizing PersistentVolume[pvc-33edae87-a766-4ede-85a1-489b64f0443c]: phase: Released, bound to: "azuredisk-5786/pvc-8mr94 (uid: 33edae87-a766-4ede-85a1-489b64f0443c)", boundByController: false
I0907 09:59:16.761026       1 pv_controller.go:585] synchronizing PersistentVolume[pvc-33edae87-a766-4ede-85a1-489b64f0443c]: volume is bound to claim azuredisk-5786/pvc-8mr94
I0907 09:59:16.760808       1 pv_protection_controller.go:173] Error removing protection finalizer from PV pvc-33edae87-a766-4ede-85a1-489b64f0443c: Operation cannot be fulfilled on persistentvolumes "pvc-33edae87-a766-4ede-85a1-489b64f0443c": the object has been modified; please apply your changes to the latest version and try again
I0907 09:59:16.761163       1 pv_protection_controller.go:124] Finished processing PV pvc-33edae87-a766-4ede-85a1-489b64f0443c (8.605604ms)
E0907 09:59:16.761186       1 pv_protection_controller.go:114] PV pvc-33edae87-a766-4ede-85a1-489b64f0443c failed with : Operation cannot be fulfilled on persistentvolumes "pvc-33edae87-a766-4ede-85a1-489b64f0443c": the object has been modified; please apply your changes to the latest version and try again
I0907 09:59:16.760862       1 pv_protection_controller.go:198] Got event on PV pvc-33edae87-a766-4ede-85a1-489b64f0443c
I0907 09:59:16.761304       1 pv_protection_controller.go:121] Processing PV pvc-33edae87-a766-4ede-85a1-489b64f0443c
I0907 09:59:16.761087       1 pv_controller.go:619] synchronizing PersistentVolume[pvc-33edae87-a766-4ede-85a1-489b64f0443c]: claim azuredisk-5786/pvc-8mr94 not found
I0907 09:59:16.768852       1 pv_controller_base.go:238] volume "pvc-33edae87-a766-4ede-85a1-489b64f0443c" deleted
I0907 09:59:16.769052       1 pv_controller_base.go:589] deletion of claim "azuredisk-5786/pvc-8mr94" was already processed
I0907 09:59:16.769715       1 pv_protection_controller.go:176] Removed protection finalizer from PV pvc-33edae87-a766-4ede-85a1-489b64f0443c
... skipping 609 lines ...
I0907 10:00:48.556384       1 namespace_controller.go:157] Content remaining in namespace azuredisk-2305, waiting 16 seconds
2022/09/07 10:00:49 ===================================================

JUnit report was created: /logs/artifacts/junit_01.xml

Ran 12 of 59 Specs in 1181.297 seconds
SUCCESS! -- 12 Passed | 0 Failed | 0 Pending | 47 Skipped

You're using deprecated Ginkgo functionality:
=============================================
Ginkgo 2.0 is under active development and will introduce several new features, improvements, and a small handful of breaking changes.
A release candidate for 2.0 is now available and 2.0 should GA in Fall 2021.  Please give the RC a try and send us feedback!
  - To learn more, view the migration guide at https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md
... skipping 45 lines ...
INFO: Creating log watcher for controller capz-system/capz-controller-manager, pod capz-controller-manager-858df9cd95-vkbmd, container manager
STEP: Dumping workload cluster default/capz-4ay9k6 logs
Sep  7 10:02:27.807: INFO: Collecting logs for Linux node capz-4ay9k6-control-plane-kk9g2 in cluster capz-4ay9k6 in namespace default

Sep  7 10:03:27.808: INFO: Collecting boot logs for AzureMachine capz-4ay9k6-control-plane-kk9g2

Failed to get logs for machine capz-4ay9k6-control-plane-sg6wm, cluster default/capz-4ay9k6: open /etc/azure-ssh/azure-ssh: no such file or directory
Sep  7 10:03:28.708: INFO: Collecting logs for Linux node capz-4ay9k6-md-0-b8ndn in cluster capz-4ay9k6 in namespace default

Sep  7 10:04:28.710: INFO: Collecting boot logs for AzureMachine capz-4ay9k6-md-0-b8ndn

Failed to get logs for machine capz-4ay9k6-md-0-587b4d9476-gxnmm, cluster default/capz-4ay9k6: open /etc/azure-ssh/azure-ssh: no such file or directory
Sep  7 10:04:29.042: INFO: Collecting logs for Linux node capz-4ay9k6-md-0-dcnxq in cluster capz-4ay9k6 in namespace default

Sep  7 10:05:29.045: INFO: Collecting boot logs for AzureMachine capz-4ay9k6-md-0-dcnxq

Failed to get logs for machine capz-4ay9k6-md-0-587b4d9476-w4jjt, cluster default/capz-4ay9k6: open /etc/azure-ssh/azure-ssh: no such file or directory
STEP: Dumping workload cluster default/capz-4ay9k6 kube-system pod logs
STEP: Creating log watcher for controller kube-system/calico-node-g9s22, container calico-node
STEP: Creating log watcher for controller kube-system/csi-azuredisk-controller-6dbf65647f-27pdv, container liveness-probe
STEP: Creating log watcher for controller kube-system/csi-azuredisk-controller-6dbf65647f-27pdv, container csi-snapshotter
STEP: Collecting events for Pod kube-system/metrics-server-76f7667fbf-hd56d
STEP: Collecting events for Pod kube-system/csi-azuredisk-node-2zzbc
... skipping 9 lines ...
STEP: Creating log watcher for controller kube-system/csi-snapshot-controller-84ccd6c756-6fh47, container csi-snapshot-controller
STEP: Collecting events for Pod kube-system/csi-snapshot-controller-84ccd6c756-6fh47
STEP: Creating log watcher for controller kube-system/csi-snapshot-controller-84ccd6c756-frgtr, container csi-snapshot-controller
STEP: Collecting events for Pod kube-system/csi-snapshot-controller-84ccd6c756-frgtr
STEP: Creating log watcher for controller kube-system/etcd-capz-4ay9k6-control-plane-kk9g2, container etcd
STEP: Collecting events for Pod kube-system/etcd-capz-4ay9k6-control-plane-kk9g2
STEP: failed to find events of Pod "etcd-capz-4ay9k6-control-plane-kk9g2"
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-4ay9k6-control-plane-kk9g2, container kube-apiserver
STEP: Collecting events for Pod kube-system/kube-apiserver-capz-4ay9k6-control-plane-kk9g2
STEP: failed to find events of Pod "kube-apiserver-capz-4ay9k6-control-plane-kk9g2"
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-4ay9k6-control-plane-kk9g2, container kube-controller-manager
STEP: Collecting events for Pod kube-system/kube-controller-manager-capz-4ay9k6-control-plane-kk9g2
STEP: Creating log watcher for controller kube-system/kube-proxy-4r4xq, container kube-proxy
STEP: failed to find events of Pod "kube-controller-manager-capz-4ay9k6-control-plane-kk9g2"
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-755ff8d7b5-kr5rt, container calico-kube-controllers
STEP: Collecting events for Pod kube-system/kube-proxy-4r4xq
STEP: Creating log watcher for controller kube-system/kube-proxy-6gjk6, container kube-proxy
STEP: Collecting events for Pod kube-system/coredns-84994b8c4-fsv6n
STEP: Collecting events for Pod kube-system/kube-proxy-6gjk6
STEP: Creating log watcher for controller kube-system/kube-proxy-g6qvq, container kube-proxy
... skipping 23 lines ...
STEP: Creating log watcher for controller kube-system/csi-azuredisk-node-2zzbc, container node-driver-registrar
STEP: Creating log watcher for controller kube-system/csi-azuredisk-controller-6dbf65647f-nknjj, container csi-provisioner
STEP: Fetching kube-system pod logs took 418.510374ms
STEP: Dumping workload cluster default/capz-4ay9k6 Azure activity log
STEP: Creating log watcher for controller kube-system/csi-azuredisk-controller-6dbf65647f-nknjj, container csi-attacher
STEP: Creating log watcher for controller kube-system/csi-azuredisk-controller-6dbf65647f-27pdv, container csi-resizer
STEP: failed to find events of Pod "kube-scheduler-capz-4ay9k6-control-plane-kk9g2"
STEP: Fetching activity logs took 3.756851352s
================ REDACTING LOGS ================
All sensitive variables are redacted
cluster.cluster.x-k8s.io "capz-4ay9k6" deleted
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/kind-v0.14.0 delete cluster --name=capz || true
Deleting cluster "capz" ...
... skipping 12 lines ...