Result: success
Tests: 0 failed / 12 succeeded
Started: 2022-09-02 23:43
Elapsed: 48m25s
Revision:
uploader: crier

No Test Failures!


12 Passed Tests

47 Skipped Tests

Error lines from build-log.txt

... skipping 700 lines ...
certificate.cert-manager.io "selfsigned-cert" deleted
# Create secret for AzureClusterIdentity
./hack/create-identity-secret.sh
make[2]: Entering directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make[2]: Nothing to be done for 'kubectl'.
make[2]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
Error from server (NotFound): secrets "cluster-identity-secret" not found
secret/cluster-identity-secret created
secret/cluster-identity-secret labeled
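Note: the "created"/"labeled" lines above are ordinary kubectl output. A minimal sketch of what a helper like create-identity-secret.sh typically does; the namespace, secret key, and label below are illustrative assumptions, not taken from this log:

  # illustrative only; the real flags/labels come from hack/create-identity-secret.sh
  kubectl create secret generic cluster-identity-secret \
    --namespace default \
    --from-literal=clientSecret="${AZURE_CLIENT_SECRET}"
  kubectl label secret cluster-identity-secret \
    --namespace default \
    clusterctl.cluster.x-k8s.io/move=true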
# Create customized cloud provider configs
./hack/create-custom-cloud-provider-config.sh
make[2]: Entering directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make[2]: Nothing to be done for 'kubectl'.
... skipping 134 lines ...
# Wait for the kubeconfig to become available.
timeout --foreground 300 bash -c "while ! /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/kubectl-v1.22.4 get secrets | grep capz-x56xig-kubeconfig; do sleep 1; done"
capz-x56xig-kubeconfig                 cluster.x-k8s.io/secret   1      0s
# Get kubeconfig and store it locally.
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/kubectl-v1.22.4 get secrets capz-x56xig-kubeconfig -o json | jq -r .data.value | base64 --decode > ./kubeconfig
timeout --foreground 600 bash -c "while ! /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/kubectl-v1.22.4 --kubeconfig=./kubeconfig get nodes | grep control-plane; do sleep 1; done"
error: the server doesn't have a resource type "nodes"
capz-x56xig-control-plane-k4sjk   NotReady   control-plane   12s   v1.26.0-alpha.0.370+bacd6029b3bac1
run "/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/kubectl-v1.22.4 --kubeconfig=./kubeconfig ..." to work with the new target cluster
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
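Note: the kubeconfig retrieval above follows the usual Cluster API pattern, where the workload cluster's kubeconfig is stored in a <cluster-name>-kubeconfig secret on the management cluster. A standalone sketch of the same extraction, assuming a generic kubectl on PATH and the cluster name from this run:

  # decode the workload cluster kubeconfig from the management cluster secret
  kubectl get secret capz-x56xig-kubeconfig -o jsonpath='{.data.value}' | base64 --decode > ./kubeconfig
  # then poll until the control plane node registers (simplified form of the loop above)
  timeout 600 bash -c 'until kubectl --kubeconfig=./kubeconfig get nodes | grep -q control-plane; do sleep 1; done'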
Waiting for 1 control plane machine(s), 2 worker machine(s), and  windows machine(s) to become Ready
node/capz-x56xig-control-plane-k4sjk condition met
node/capz-x56xig-mp-0000000 condition met
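Note: the "condition met" lines are the output format of kubectl wait. A roughly equivalent one-liner for these nodes, with the timeout value an illustrative assumption:

  kubectl --kubeconfig=./kubeconfig wait --for=condition=Ready \
    node/capz-x56xig-control-plane-k4sjk node/capz-x56xig-mp-0000000 --timeout=20m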
... skipping 62 lines ...
Pre-Provisioned [single-az] 
  should use a pre-provisioned volume and mount it as readOnly in a pod [disk.csi.azure.com][windows]
  /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/pre_provisioning_test.go:70
STEP: Creating a kubernetes client
Sep  3 00:00:57.781: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/kubeconfig
STEP: Building a namespace api object, basename azuredisk
Sep  3 00:00:58.238: INFO: Error listing PodSecurityPolicies; assuming PodSecurityPolicy is disabled: the server could not find the requested resource
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
I0903 00:00:58.477238   35502 azuredisk_driver.go:56] Using azure disk driver: kubernetes.io/azure-disk
Sep  3 00:00:58.477: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "azuredisk-8081" for this suite.

... skipping 55 lines ...

    test case is only available for CSI drivers

    /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/suite_test.go:304
------------------------------
Pre-Provisioned [single-az] 
  should fail when maxShares is invalid [disk.csi.azure.com][windows]
  /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/pre_provisioning_test.go:163
STEP: Creating a kubernetes client
Sep  3 00:00:59.838: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/kubeconfig
STEP: Building a namespace api object, basename azuredisk
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
... skipping 3 lines ...

S [SKIPPING] [0.561 seconds]
Pre-Provisioned
/home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/pre_provisioning_test.go:37
  [single-az]
  /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/pre_provisioning_test.go:69
    should fail when maxShares is invalid [disk.csi.azure.com][windows] [It]
    /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/pre_provisioning_test.go:163

    test case is only available for CSI drivers

    /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/suite_test.go:304
------------------------------
... skipping 85 lines ...
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
Sep  3 00:01:03.023: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-qh9n6" in namespace "azuredisk-1353" to be "Succeeded or Failed"
Sep  3 00:01:03.082: INFO: Pod "azuredisk-volume-tester-qh9n6": Phase="Pending", Reason="", readiness=false. Elapsed: 58.991473ms
Sep  3 00:01:05.144: INFO: Pod "azuredisk-volume-tester-qh9n6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.120276899s
Sep  3 00:01:07.204: INFO: Pod "azuredisk-volume-tester-qh9n6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.181019603s
Sep  3 00:01:09.265: INFO: Pod "azuredisk-volume-tester-qh9n6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.241118856s
Sep  3 00:01:11.326: INFO: Pod "azuredisk-volume-tester-qh9n6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.302454288s
Sep  3 00:01:13.386: INFO: Pod "azuredisk-volume-tester-qh9n6": Phase="Pending", Reason="", readiness=false. Elapsed: 10.362539494s
Sep  3 00:01:15.446: INFO: Pod "azuredisk-volume-tester-qh9n6": Phase="Pending", Reason="", readiness=false. Elapsed: 12.422793258s
Sep  3 00:01:17.506: INFO: Pod "azuredisk-volume-tester-qh9n6": Phase="Pending", Reason="", readiness=false. Elapsed: 14.482983587s
Sep  3 00:01:19.568: INFO: Pod "azuredisk-volume-tester-qh9n6": Phase="Pending", Reason="", readiness=false. Elapsed: 16.544678921s
Sep  3 00:01:21.632: INFO: Pod "azuredisk-volume-tester-qh9n6": Phase="Pending", Reason="", readiness=false. Elapsed: 18.608616032s
Sep  3 00:01:23.696: INFO: Pod "azuredisk-volume-tester-qh9n6": Phase="Pending", Reason="", readiness=false. Elapsed: 20.67210735s
Sep  3 00:01:25.759: INFO: Pod "azuredisk-volume-tester-qh9n6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.735350517s
STEP: Saw pod success
Sep  3 00:01:25.759: INFO: Pod "azuredisk-volume-tester-qh9n6" satisfied condition "Succeeded or Failed"
Sep  3 00:01:25.759: INFO: deleting Pod "azuredisk-1353"/"azuredisk-volume-tester-qh9n6"
Sep  3 00:01:25.837: INFO: Pod azuredisk-volume-tester-qh9n6 has the following logs: hello world

STEP: Deleting pod azuredisk-volume-tester-qh9n6 in namespace azuredisk-1353
STEP: validating provisioned PV
STEP: checking the PV
... skipping 97 lines ...
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod has 'FailedMount' event
Sep  3 00:02:16.708: INFO: deleting Pod "azuredisk-1563"/"azuredisk-volume-tester-nhbpr"
Sep  3 00:02:16.770: INFO: Error getting logs for pod azuredisk-volume-tester-nhbpr: the server rejected our request for an unknown reason (get pods azuredisk-volume-tester-nhbpr)
STEP: Deleting pod azuredisk-volume-tester-nhbpr in namespace azuredisk-1563
STEP: validating provisioned PV
STEP: checking the PV
Sep  3 00:02:16.951: INFO: deleting PVC "azuredisk-1563"/"pvc-2862x"
Sep  3 00:02:16.951: INFO: Deleting PersistentVolumeClaim "pvc-2862x"
STEP: waiting for claim's PV "pvc-aa219001-3934-40f0-aa20-653cd0658f9f" to be deleted
... skipping 59 lines ...
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
Sep  3 00:04:50.171: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-f8x6q" in namespace "azuredisk-7463" to be "Succeeded or Failed"
Sep  3 00:04:50.230: INFO: Pod "azuredisk-volume-tester-f8x6q": Phase="Pending", Reason="", readiness=false. Elapsed: 59.554373ms
Sep  3 00:04:52.291: INFO: Pod "azuredisk-volume-tester-f8x6q": Phase="Pending", Reason="", readiness=false. Elapsed: 2.120498959s
Sep  3 00:04:54.351: INFO: Pod "azuredisk-volume-tester-f8x6q": Phase="Pending", Reason="", readiness=false. Elapsed: 4.180116442s
Sep  3 00:04:56.412: INFO: Pod "azuredisk-volume-tester-f8x6q": Phase="Pending", Reason="", readiness=false. Elapsed: 6.240964148s
Sep  3 00:04:58.473: INFO: Pod "azuredisk-volume-tester-f8x6q": Phase="Pending", Reason="", readiness=false. Elapsed: 8.302054697s
Sep  3 00:05:00.534: INFO: Pod "azuredisk-volume-tester-f8x6q": Phase="Pending", Reason="", readiness=false. Elapsed: 10.36274074s
Sep  3 00:05:02.595: INFO: Pod "azuredisk-volume-tester-f8x6q": Phase="Pending", Reason="", readiness=false. Elapsed: 12.424165816s
Sep  3 00:05:04.656: INFO: Pod "azuredisk-volume-tester-f8x6q": Phase="Pending", Reason="", readiness=false. Elapsed: 14.485533417s
Sep  3 00:05:06.719: INFO: Pod "azuredisk-volume-tester-f8x6q": Phase="Running", Reason="", readiness=true. Elapsed: 16.548480494s
Sep  3 00:05:08.784: INFO: Pod "azuredisk-volume-tester-f8x6q": Phase="Running", Reason="", readiness=false. Elapsed: 18.61286545s
Sep  3 00:05:10.848: INFO: Pod "azuredisk-volume-tester-f8x6q": Phase="Succeeded", Reason="", readiness=false. Elapsed: 20.676902738s
STEP: Saw pod success
Sep  3 00:05:10.848: INFO: Pod "azuredisk-volume-tester-f8x6q" satisfied condition "Succeeded or Failed"
Sep  3 00:05:10.848: INFO: deleting Pod "azuredisk-7463"/"azuredisk-volume-tester-f8x6q"
Sep  3 00:05:10.921: INFO: Pod azuredisk-volume-tester-f8x6q has the following logs: e2e-test

STEP: Deleting pod azuredisk-volume-tester-f8x6q in namespace azuredisk-7463
STEP: validating provisioned PV
STEP: checking the PV
... skipping 39 lines ...
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod's command exits with an error
Sep  3 00:05:47.885: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-6749h" in namespace "azuredisk-9241" to be "Error status code"
Sep  3 00:05:47.945: INFO: Pod "azuredisk-volume-tester-6749h": Phase="Pending", Reason="", readiness=false. Elapsed: 59.672491ms
Sep  3 00:05:50.006: INFO: Pod "azuredisk-volume-tester-6749h": Phase="Pending", Reason="", readiness=false. Elapsed: 2.120632147s
Sep  3 00:05:52.066: INFO: Pod "azuredisk-volume-tester-6749h": Phase="Pending", Reason="", readiness=false. Elapsed: 4.180855228s
Sep  3 00:05:54.127: INFO: Pod "azuredisk-volume-tester-6749h": Phase="Pending", Reason="", readiness=false. Elapsed: 6.241793611s
Sep  3 00:05:56.188: INFO: Pod "azuredisk-volume-tester-6749h": Phase="Pending", Reason="", readiness=false. Elapsed: 8.303039479s
Sep  3 00:05:58.249: INFO: Pod "azuredisk-volume-tester-6749h": Phase="Pending", Reason="", readiness=false. Elapsed: 10.364427138s
Sep  3 00:06:00.310: INFO: Pod "azuredisk-volume-tester-6749h": Phase="Pending", Reason="", readiness=false. Elapsed: 12.425473144s
Sep  3 00:06:02.374: INFO: Pod "azuredisk-volume-tester-6749h": Phase="Running", Reason="", readiness=true. Elapsed: 14.489418184s
Sep  3 00:06:04.438: INFO: Pod "azuredisk-volume-tester-6749h": Phase="Running", Reason="", readiness=false. Elapsed: 16.553069888s
Sep  3 00:06:06.502: INFO: Pod "azuredisk-volume-tester-6749h": Phase="Failed", Reason="", readiness=false. Elapsed: 18.617018033s
STEP: Saw pod failure
Sep  3 00:06:06.502: INFO: Pod "azuredisk-volume-tester-6749h" satisfied condition "Error status code"
STEP: checking that pod logs contain expected message
Sep  3 00:06:06.573: INFO: deleting Pod "azuredisk-9241"/"azuredisk-volume-tester-6749h"
Sep  3 00:06:06.636: INFO: Pod azuredisk-volume-tester-6749h has the following logs: touch: /mnt/test-1/data: Read-only file system

STEP: Deleting pod azuredisk-volume-tester-6749h in namespace azuredisk-9241
STEP: validating provisioned PV
... skipping 375 lines ...
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
Sep  3 00:13:23.149: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-s2tl4" in namespace "azuredisk-5710" to be "Succeeded or Failed"
Sep  3 00:13:23.209: INFO: Pod "azuredisk-volume-tester-s2tl4": Phase="Pending", Reason="", readiness=false. Elapsed: 60.029917ms
Sep  3 00:13:25.269: INFO: Pod "azuredisk-volume-tester-s2tl4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.120605929s
Sep  3 00:13:27.333: INFO: Pod "azuredisk-volume-tester-s2tl4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.184382407s
Sep  3 00:13:29.399: INFO: Pod "azuredisk-volume-tester-s2tl4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.250232196s
Sep  3 00:13:31.463: INFO: Pod "azuredisk-volume-tester-s2tl4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.314088482s
Sep  3 00:13:33.526: INFO: Pod "azuredisk-volume-tester-s2tl4": Phase="Pending", Reason="", readiness=false. Elapsed: 10.377685036s
Sep  3 00:13:35.591: INFO: Pod "azuredisk-volume-tester-s2tl4": Phase="Pending", Reason="", readiness=false. Elapsed: 12.441765396s
Sep  3 00:13:37.655: INFO: Pod "azuredisk-volume-tester-s2tl4": Phase="Pending", Reason="", readiness=false. Elapsed: 14.506540418s
Sep  3 00:13:39.721: INFO: Pod "azuredisk-volume-tester-s2tl4": Phase="Pending", Reason="", readiness=false. Elapsed: 16.572642009s
Sep  3 00:13:41.785: INFO: Pod "azuredisk-volume-tester-s2tl4": Phase="Pending", Reason="", readiness=false. Elapsed: 18.635890566s
Sep  3 00:13:43.851: INFO: Pod "azuredisk-volume-tester-s2tl4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 20.701884607s
STEP: Saw pod success
Sep  3 00:13:43.851: INFO: Pod "azuredisk-volume-tester-s2tl4" satisfied condition "Succeeded or Failed"
Sep  3 00:13:43.851: INFO: deleting Pod "azuredisk-5710"/"azuredisk-volume-tester-s2tl4"
Sep  3 00:13:43.932: INFO: Pod azuredisk-volume-tester-s2tl4 has the following logs: hello world
hello world
hello world

STEP: Deleting pod azuredisk-volume-tester-s2tl4 in namespace azuredisk-5710
... skipping 71 lines ...
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
Sep  3 00:14:47.588: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-cntcc" in namespace "azuredisk-1224" to be "Succeeded or Failed"
Sep  3 00:14:47.654: INFO: Pod "azuredisk-volume-tester-cntcc": Phase="Pending", Reason="", readiness=false. Elapsed: 65.633471ms
Sep  3 00:14:49.721: INFO: Pod "azuredisk-volume-tester-cntcc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.133228979s
Sep  3 00:14:51.783: INFO: Pod "azuredisk-volume-tester-cntcc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.194766996s
Sep  3 00:14:53.844: INFO: Pod "azuredisk-volume-tester-cntcc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.256077802s
Sep  3 00:14:55.907: INFO: Pod "azuredisk-volume-tester-cntcc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.318697435s
Sep  3 00:14:57.968: INFO: Pod "azuredisk-volume-tester-cntcc": Phase="Pending", Reason="", readiness=false. Elapsed: 10.380182436s
Sep  3 00:15:00.030: INFO: Pod "azuredisk-volume-tester-cntcc": Phase="Pending", Reason="", readiness=false. Elapsed: 12.441572264s
Sep  3 00:15:02.091: INFO: Pod "azuredisk-volume-tester-cntcc": Phase="Pending", Reason="", readiness=false. Elapsed: 14.503198972s
Sep  3 00:15:04.153: INFO: Pod "azuredisk-volume-tester-cntcc": Phase="Pending", Reason="", readiness=false. Elapsed: 16.564468213s
Sep  3 00:15:06.217: INFO: Pod "azuredisk-volume-tester-cntcc": Phase="Pending", Reason="", readiness=false. Elapsed: 18.629300445s
Sep  3 00:15:08.281: INFO: Pod "azuredisk-volume-tester-cntcc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 20.693180414s
STEP: Saw pod success
Sep  3 00:15:08.281: INFO: Pod "azuredisk-volume-tester-cntcc" satisfied condition "Succeeded or Failed"
Sep  3 00:15:08.281: INFO: deleting Pod "azuredisk-1224"/"azuredisk-volume-tester-cntcc"
Sep  3 00:15:08.344: INFO: Pod azuredisk-volume-tester-cntcc has the following logs: 100+0 records in
100+0 records out
104857600 bytes (100.0MB) copied, 0.049186 seconds, 2.0GB/s
hello world

... skipping 122 lines ...
STEP: creating a PVC
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
Sep  3 00:16:23.187: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-sfrv6" in namespace "azuredisk-3231" to be "Succeeded or Failed"
Sep  3 00:16:23.246: INFO: Pod "azuredisk-volume-tester-sfrv6": Phase="Pending", Reason="", readiness=false. Elapsed: 59.414579ms
Sep  3 00:16:25.307: INFO: Pod "azuredisk-volume-tester-sfrv6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.12045231s
Sep  3 00:16:27.371: INFO: Pod "azuredisk-volume-tester-sfrv6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.184606827s
Sep  3 00:16:29.436: INFO: Pod "azuredisk-volume-tester-sfrv6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.249119711s
Sep  3 00:16:31.500: INFO: Pod "azuredisk-volume-tester-sfrv6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.313298455s
Sep  3 00:16:33.563: INFO: Pod "azuredisk-volume-tester-sfrv6": Phase="Pending", Reason="", readiness=false. Elapsed: 10.37621153s
... skipping 3 lines ...
Sep  3 00:16:41.819: INFO: Pod "azuredisk-volume-tester-sfrv6": Phase="Pending", Reason="", readiness=false. Elapsed: 18.632598239s
Sep  3 00:16:43.882: INFO: Pod "azuredisk-volume-tester-sfrv6": Phase="Pending", Reason="", readiness=false. Elapsed: 20.695459687s
Sep  3 00:16:45.946: INFO: Pod "azuredisk-volume-tester-sfrv6": Phase="Pending", Reason="", readiness=false. Elapsed: 22.759166889s
Sep  3 00:16:48.010: INFO: Pod "azuredisk-volume-tester-sfrv6": Phase="Pending", Reason="", readiness=false. Elapsed: 24.8228484s
Sep  3 00:16:50.074: INFO: Pod "azuredisk-volume-tester-sfrv6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.886829841s
STEP: Saw pod success
Sep  3 00:16:50.074: INFO: Pod "azuredisk-volume-tester-sfrv6" satisfied condition "Succeeded or Failed"
Sep  3 00:16:50.074: INFO: deleting Pod "azuredisk-3231"/"azuredisk-volume-tester-sfrv6"
Sep  3 00:16:50.148: INFO: Pod azuredisk-volume-tester-sfrv6 has the following logs: hello world

STEP: Deleting pod azuredisk-volume-tester-sfrv6 in namespace azuredisk-3231
STEP: validating provisioned PV
STEP: checking the PV
... skipping 522 lines ...
I0902 23:55:55.849807       1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1662162955\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1662162955\" (2022-09-02 22:55:54 +0000 UTC to 2023-09-02 22:55:54 +0000 UTC (now=2022-09-02 23:55:55.849784111 +0000 UTC))"
I0902 23:55:55.849844       1 secure_serving.go:210] Serving securely on 127.0.0.1:10257
I0902 23:55:55.850011       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/etc/kubernetes/pki/front-proxy-ca.crt"
I0902 23:55:55.850142       1 leaderelection.go:248] attempting to acquire leader lease kube-system/kube-controller-manager...
I0902 23:55:55.850620       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0902 23:55:55.850769       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
E0902 23:55:58.277846       1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: leases.coordination.k8s.io "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-system"
I0902 23:55:58.278533       1 leaderelection.go:253] failed to acquire lease kube-system/kube-controller-manager
I0902 23:56:01.507684       1 leaderelection.go:258] successfully acquired lease kube-system/kube-controller-manager
I0902 23:56:01.507949       1 event.go:294] "Event occurred" object="kube-system/kube-controller-manager" fieldPath="" kind="Lease" apiVersion="coordination.k8s.io/v1" type="Normal" reason="LeaderElection" message="capz-x56xig-control-plane-k4sjk_d3a9c389-cbcf-4a9a-b67c-00845bfd4031 became leader"
W0902 23:56:01.537200       1 plugins.go:131] WARNING: azure built-in cloud provider is now deprecated. The Azure provider is deprecated and will be removed in a future release. Please use https://github.com/kubernetes-sigs/cloud-provider-azure
I0902 23:56:01.538195       1 azure_auth.go:232] Using AzurePublicCloud environment
I0902 23:56:01.538248       1 azure_auth.go:117] azure: using client_id+client_secret to retrieve access token
I0902 23:56:01.538332       1 azure_interfaceclient.go:63] Azure InterfacesClient (read ops) using rate limit config: QPS=1, bucket=5
... skipping 29 lines ...
I0902 23:56:01.540676       1 reflector.go:257] Listing and watching *v1.Node from vendor/k8s.io/client-go/informers/factory.go:134
I0902 23:56:01.540809       1 reflector.go:221] Starting reflector *v1.ServiceAccount (15h39m1.44506714s) from vendor/k8s.io/client-go/informers/factory.go:134
I0902 23:56:01.540854       1 reflector.go:257] Listing and watching *v1.ServiceAccount from vendor/k8s.io/client-go/informers/factory.go:134
I0902 23:56:01.542794       1 reflector.go:221] Starting reflector *v1.Secret (15h39m1.44506714s) from vendor/k8s.io/client-go/informers/factory.go:134
I0902 23:56:01.553167       1 reflector.go:257] Listing and watching *v1.Secret from vendor/k8s.io/client-go/informers/factory.go:134
I0902 23:56:01.543144       1 shared_informer.go:255] Waiting for caches to sync for tokens
W0902 23:56:01.569158       1 azure_config.go:53] Failed to get cloud-config from secret: failed to get secret azure-cloud-provider: secrets "azure-cloud-provider" is forbidden: User "system:serviceaccount:kube-system:azure-cloud-provider" cannot get resource "secrets" in API group "" in the namespace "kube-system", skip initializing from secret
I0902 23:56:01.569188       1 controllermanager.go:573] Starting "cronjob"
I0902 23:56:01.574969       1 controllermanager.go:602] Started "cronjob"
I0902 23:56:01.574993       1 controllermanager.go:573] Starting "nodeipam"
W0902 23:56:01.575004       1 controllermanager.go:580] Skipping "nodeipam"
I0902 23:56:01.575029       1 controllermanager.go:573] Starting "clusterrole-aggregation"
I0902 23:56:01.575338       1 cronjob_controllerv2.go:135] "Starting cronjob controller v2"
... skipping 28 lines ...
I0902 23:56:01.622543       1 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/gce-pd"
I0902 23:56:01.622558       1 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/azure-disk"
I0902 23:56:01.622575       1 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/vsphere-volume"
I0902 23:56:01.622603       1 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
I0902 23:56:01.622618       1 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/fc"
I0902 23:56:01.622632       1 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
I0902 23:56:01.622847       1 csi_plugin.go:257] Cast from VolumeHost to KubeletVolumeHost failed. Skipping CSINode initialization, not running on kubelet
I0902 23:56:01.622985       1 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/csi"
I0902 23:56:01.623154       1 controllermanager.go:602] Started "attachdetach"
I0902 23:56:01.623257       1 controllermanager.go:573] Starting "service"
I0902 23:56:01.623329       1 attach_detach_controller.go:328] Starting attach detach controller
I0902 23:56:01.623568       1 shared_informer.go:255] Waiting for caches to sync for attach detach
I0902 23:56:01.654234       1 shared_informer.go:285] caches populated
... skipping 91 lines ...
I0902 23:56:03.613539       1 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/azure-disk"
I0902 23:56:03.613564       1 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/vsphere-volume"
I0902 23:56:03.613620       1 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
I0902 23:56:03.613662       1 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/rbd"
I0902 23:56:03.613683       1 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/azure-file"
I0902 23:56:03.613704       1 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
I0902 23:56:03.613775       1 csi_plugin.go:257] Cast from VolumeHost to KubeletVolumeHost failed. Skipping CSINode initialization, not running on kubelet
I0902 23:56:03.613795       1 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/csi"
I0902 23:56:03.613913       1 controllermanager.go:602] Started "persistentvolume-binder"
I0902 23:56:03.613938       1 controllermanager.go:573] Starting "pv-protection"
I0902 23:56:03.614088       1 pv_controller_base.go:318] Starting persistent volume controller
I0902 23:56:03.614108       1 shared_informer.go:255] Waiting for caches to sync for persistent volume
I0902 23:56:03.762800       1 controllermanager.go:602] Started "pv-protection"
... skipping 9 lines ...
I0902 23:56:04.262001       1 graph_builder.go:275] garbage controller monitor not synced: no monitors
I0902 23:56:04.262042       1 graph_builder.go:291] GraphBuilder running
I0902 23:56:04.262292       1 controllermanager.go:602] Started "garbagecollector"
I0902 23:56:04.262408       1 controllermanager.go:573] Starting "horizontalpodautoscaling"
I0902 23:56:04.360009       1 request.go:614] Waited for 97.49195ms due to client-side throttling, not priority and fairness, request: GET:https://10.0.0.4:6443/api/v1/namespaces/kube-system/serviceaccounts/horizontal-pod-autoscaler
I0902 23:56:04.366462       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-x56xig-control-plane-k4sjk"
W0902 23:56:04.366506       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="capz-x56xig-control-plane-k4sjk" does not exist
I0902 23:56:04.384376       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-x56xig-control-plane-k4sjk"
I0902 23:56:04.409116       1 request.go:614] Waited for 97.192455ms due to client-side throttling, not priority and fairness, request: POST:https://10.0.0.4:6443/api/v1/namespaces/kube-system/serviceaccounts/generic-garbage-collector/token
I0902 23:56:04.433844       1 garbagecollector.go:220] syncing garbage collector with updated resources from discovery (attempt 1): added: [/v1, Resource=configmaps /v1, Resource=endpoints /v1, Resource=events /v1, Resource=limitranges /v1, Resource=namespaces /v1, Resource=nodes /v1, Resource=persistentvolumeclaims /v1, Resource=persistentvolumes /v1, Resource=pods /v1, Resource=podtemplates /v1, Resource=replicationcontrollers /v1, Resource=resourcequotas /v1, Resource=secrets /v1, Resource=serviceaccounts /v1, Resource=services admissionregistration.k8s.io/v1, Resource=mutatingwebhookconfigurations admissionregistration.k8s.io/v1, Resource=validatingwebhookconfigurations apiextensions.k8s.io/v1, Resource=customresourcedefinitions apiregistration.k8s.io/v1, Resource=apiservices apps/v1, Resource=controllerrevisions apps/v1, Resource=daemonsets apps/v1, Resource=deployments apps/v1, Resource=replicasets apps/v1, Resource=statefulsets autoscaling/v2, Resource=horizontalpodautoscalers batch/v1, Resource=cronjobs batch/v1, Resource=jobs certificates.k8s.io/v1, Resource=certificatesigningrequests coordination.k8s.io/v1, Resource=leases discovery.k8s.io/v1, Resource=endpointslices events.k8s.io/v1, Resource=events flowcontrol.apiserver.k8s.io/v1beta2, Resource=flowschemas flowcontrol.apiserver.k8s.io/v1beta2, Resource=prioritylevelconfigurations networking.k8s.io/v1, Resource=ingressclasses networking.k8s.io/v1, Resource=ingresses networking.k8s.io/v1, Resource=networkpolicies node.k8s.io/v1, Resource=runtimeclasses policy/v1, Resource=poddisruptionbudgets rbac.authorization.k8s.io/v1, Resource=clusterrolebindings rbac.authorization.k8s.io/v1, Resource=clusterroles rbac.authorization.k8s.io/v1, Resource=rolebindings rbac.authorization.k8s.io/v1, Resource=roles scheduling.k8s.io/v1, Resource=priorityclasses storage.k8s.io/v1, Resource=csidrivers storage.k8s.io/v1, Resource=csinodes storage.k8s.io/v1, Resource=csistoragecapacities storage.k8s.io/v1, Resource=storageclasses storage.k8s.io/v1, Resource=volumeattachments], removed: []
I0902 23:56:04.434040       1 garbagecollector.go:226] reset restmapper
I0902 23:56:04.458375       1 request.go:614] Waited for 88.832906ms due to client-side throttling, not priority and fairness, request: GET:https://10.0.0.4:6443/api/v1/namespaces/kube-system
I0902 23:56:04.661905       1 controllermanager.go:602] Started "horizontalpodautoscaling"
... skipping 444 lines ...
I0902 23:56:06.937878       1 replica_set.go:577] "Too few replicas" replicaSet="kube-system/coredns-84994b8c4" need=2 creating=2
I0902 23:56:06.936400       1 deployment_controller.go:222] "ReplicaSet added" replicaSet="kube-system/coredns-84994b8c4"
I0902 23:56:06.937659       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-84994b8c4 to 2"
I0902 23:56:06.953392       1 deployment_util.go:775] Deployment "coredns" timed out (false) [last progress check: 2022-09-02 23:56:06.93706313 +0000 UTC m=+13.245256227 - now: 2022-09-02 23:56:06.953370807 +0000 UTC m=+13.261563904]
I0902 23:56:06.954290       1 deployment_controller.go:183] "Updating deployment" deployment="kube-system/coredns"
I0902 23:56:06.973201       1 deployment_controller.go:585] "Finished syncing deployment" deployment="kube-system/coredns" duration="768.35801ms"
I0902 23:56:06.973505       1 deployment_controller.go:497] "Error syncing deployment" deployment="kube-system/coredns" err="Operation cannot be fulfilled on deployments.apps \"coredns\": the object has been modified; please apply your changes to the latest version and try again"
I0902 23:56:06.973644       1 deployment_controller.go:583] "Started syncing deployment" deployment="kube-system/coredns" startTime="2022-09-02 23:56:06.973631156 +0000 UTC m=+13.281824253"
I0902 23:56:06.974377       1 deployment_util.go:775] Deployment "coredns" timed out (false) [last progress check: 2022-09-02 23:56:06 +0000 UTC - now: 2022-09-02 23:56:06.974369181 +0000 UTC m=+13.282562178]
I0902 23:56:06.981503       1 deployment_controller.go:183] "Updating deployment" deployment="kube-system/coredns"
I0902 23:56:06.981903       1 deployment_controller.go:585] "Finished syncing deployment" deployment="kube-system/coredns" duration="8.26015ms"
I0902 23:56:06.982047       1 deployment_controller.go:583] "Started syncing deployment" deployment="kube-system/coredns" startTime="2022-09-02 23:56:06.982033488 +0000 UTC m=+13.290226485"
I0902 23:56:06.982743       1 deployment_util.go:775] Deployment "coredns" timed out (false) [last progress check: 2022-09-02 23:56:06 +0000 UTC - now: 2022-09-02 23:56:06.982734291 +0000 UTC m=+13.290927388]
... skipping 239 lines ...
I0902 23:56:28.781474       1 disruption.go:570] No PodDisruptionBudgets found for pod calico-kube-controllers-755ff8d7b5-njwhs, PodDisruptionBudget controller will avoid syncing.
I0902 23:56:28.781609       1 disruption.go:482] No matching pdb for pod "calico-kube-controllers-755ff8d7b5-njwhs"
I0902 23:56:28.781925       1 controller_utils.go:581] Controller calico-kube-controllers-755ff8d7b5 created pod calico-kube-controllers-755ff8d7b5-njwhs
I0902 23:56:28.782312       1 event.go:294] "Event occurred" object="kube-system/calico-kube-controllers-755ff8d7b5" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: calico-kube-controllers-755ff8d7b5-njwhs"
I0902 23:56:28.782548       1 replica_set_utils.go:59] Updating status for : kube-system/calico-kube-controllers-755ff8d7b5, replicas 0->0 (need 1), fullyLabeledReplicas 0->0, readyReplicas 0->0, availableReplicas 0->0, sequence No: 0->1
I0902 23:56:28.787362       1 deployment_controller.go:585] "Finished syncing deployment" deployment="kube-system/calico-kube-controllers" duration="33.856472ms"
I0902 23:56:28.787573       1 deployment_controller.go:497] "Error syncing deployment" deployment="kube-system/calico-kube-controllers" err="Operation cannot be fulfilled on deployments.apps \"calico-kube-controllers\": the object has been modified; please apply your changes to the latest version and try again"
I0902 23:56:28.787747       1 deployment_controller.go:583] "Started syncing deployment" deployment="kube-system/calico-kube-controllers" startTime="2022-09-02 23:56:28.787727704 +0000 UTC m=+35.095920701"
I0902 23:56:28.788501       1 deployment_util.go:775] Deployment "calico-kube-controllers" timed out (false) [last progress check: 2022-09-02 23:56:28 +0000 UTC - now: 2022-09-02 23:56:28.788493825 +0000 UTC m=+35.096686922]
I0902 23:56:28.790696       1 deployment_controller.go:288] "ReplicaSet updated" replicaSet="kube-system/calico-kube-controllers-755ff8d7b5"
I0902 23:56:28.793180       1 replica_set.go:667] Finished syncing ReplicaSet "kube-system/calico-kube-controllers-755ff8d7b5" (21.68064ms)
I0902 23:56:28.793247       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-kube-controllers-755ff8d7b5", timestamp:time.Time{wall:0xc0bcc3eb2e024899, ext:35080094590, loc:(*time.Location)(0x6f10040)}}
I0902 23:56:28.793425       1 replica_set_utils.go:59] Updating status for : kube-system/calico-kube-controllers-755ff8d7b5, replicas 0->1 (need 1), fullyLabeledReplicas 0->1, readyReplicas 0->0, availableReplicas 0->0, sequence No: 1->1
... skipping 315 lines ...
I0902 23:56:47.949445       1 replica_set.go:457] Pod calico-kube-controllers-755ff8d7b5-njwhs updated, objectMeta {Name:calico-kube-controllers-755ff8d7b5-njwhs GenerateName:calico-kube-controllers-755ff8d7b5- Namespace:kube-system SelfLink: UID:1a93ed84-28f4-41d3-a16e-3d554c3443a5 ResourceVersion:526 Generation:0 CreationTimestamp:2022-09-02 23:56:28 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[k8s-app:calico-kube-controllers pod-template-hash:755ff8d7b5] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:calico-kube-controllers-755ff8d7b5 UID:f20cd729-f513-474e-87ce-9bca8294102d Controller:0xc001c7d4d7 BlockOwnerDeletion:0xc001c7d4d8}] Finalizers:[] ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-09-02 23:56:28 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f20cd729-f513-474e-87ce-9bca8294102d\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"calico-kube-controllers\"}":{".":{},"f:env":{".":{},"k:{\"name\":\"DATASTORE_TYPE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"ENABLED_CONTROLLERS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:exec":{".":{},"f:command":{}},"f:failureThreshold":{},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:readinessProbe":{".":{},"f:exec":{".":{},"f:command":{}},"f:failureThreshold":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}} Subresource:} {Manager:kube-scheduler Operation:Update APIVersion:v1 Time:2022-09-02 23:56:28 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{".":{},"k:{\"type\":\"PodScheduled\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} Subresource:status}]} -> {Name:calico-kube-controllers-755ff8d7b5-njwhs GenerateName:calico-kube-controllers-755ff8d7b5- Namespace:kube-system SelfLink: UID:1a93ed84-28f4-41d3-a16e-3d554c3443a5 ResourceVersion:533 Generation:0 CreationTimestamp:2022-09-02 23:56:28 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[k8s-app:calico-kube-controllers pod-template-hash:755ff8d7b5] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:calico-kube-controllers-755ff8d7b5 UID:f20cd729-f513-474e-87ce-9bca8294102d Controller:0xc001d381b7 BlockOwnerDeletion:0xc001d381b8}] Finalizers:[] ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-09-02 23:56:28 +0000 UTC FieldsType:FieldsV1 
FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f20cd729-f513-474e-87ce-9bca8294102d\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"calico-kube-controllers\"}":{".":{},"f:env":{".":{},"k:{\"name\":\"DATASTORE_TYPE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"ENABLED_CONTROLLERS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:exec":{".":{},"f:command":{}},"f:failureThreshold":{},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:readinessProbe":{".":{},"f:exec":{".":{},"f:command":{}},"f:failureThreshold":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}} Subresource:} {Manager:kube-scheduler Operation:Update APIVersion:v1 Time:2022-09-02 23:56:28 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{".":{},"k:{\"type\":\"PodScheduled\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} Subresource:status} {Manager:kubelet Operation:Update APIVersion:v1 Time:2022-09-02 23:56:47 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} Subresource:status}]}.
I0902 23:56:47.949644       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-kube-controllers-755ff8d7b5", timestamp:time.Time{wall:0xc0bcc3eb2e024899, ext:35080094590, loc:(*time.Location)(0x6f10040)}}
I0902 23:56:47.949718       1 replica_set.go:667] Finished syncing ReplicaSet "kube-system/calico-kube-controllers-755ff8d7b5" (83.299µs)
I0902 23:56:50.177396       1 httplog.go:131] "HTTP" verb="GET" URI="/healthz" latency="213.497µs" userAgent="kube-probe/1.26+" audit-ID="" srcIP="127.0.0.1:44356" resp=200
I0902 23:56:51.050095       1 reflector.go:281] vendor/k8s.io/client-go/informers/factory.go:134: forcing resync
I0902 23:56:51.120537       1 pv_controller_base.go:612] resyncing PV controller
I0902 23:56:51.169586       1 node_lifecycle_controller.go:1084] ReadyCondition for Node capz-x56xig-control-plane-k4sjk transitioned from &NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:2022-09-02 23:56:17 +0000 UTC,LastTransitionTime:2022-09-02 23:55:43 +0000 UTC,Reason:KubeletNotReady,Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized,} to &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-09-02 23:56:47 +0000 UTC,LastTransitionTime:2022-09-02 23:56:47 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,}
I0902 23:56:51.169737       1 node_lifecycle_controller.go:1092] Node capz-x56xig-control-plane-k4sjk ReadyCondition updated. Updating timestamp.
I0902 23:56:51.169773       1 node_lifecycle_controller.go:938] Node capz-x56xig-control-plane-k4sjk is healthy again, removing all taints
I0902 23:56:51.169803       1 node_lifecycle_controller.go:1236] Controller detected that some Nodes are Ready. Exiting master disruption mode.
I0902 23:56:52.410139       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-x56xig-control-plane-k4sjk"
I0902 23:56:52.441310       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-x56xig-control-plane-k4sjk"
I0902 23:56:52.559732       1 disruption.go:494] updatePod called on pod "calico-node-5nsmv"
... skipping 229 lines ...
I0902 23:57:36.121858       1 pv_controller_base.go:612] resyncing PV controller
I0902 23:57:36.312280       1 resource_quota_controller.go:432] no resource updates from discovery, skipping resource quota sync
I0902 23:57:38.476666       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-x56xig-control-plane-k4sjk"
I0902 23:57:40.176054       1 httplog.go:131] "HTTP" verb="GET" URI="/healthz" latency="131.906µs" userAgent="kube-probe/1.26+" audit-ID="" srcIP="127.0.0.1:56210" resp=200
I0902 23:57:41.176317       1 node_lifecycle_controller.go:1092] Node capz-x56xig-control-plane-k4sjk ReadyCondition updated. Updating timestamp.
I0902 23:57:41.811856       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-x56xig-mp-0000001"
W0902 23:57:41.813019       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="capz-x56xig-mp-0000001" does not exist
I0902 23:57:41.812123       1 controller.go:690] Syncing backends for all LB services.
I0902 23:57:41.813361       1 controller.go:728] Running updateLoadBalancerHosts(len(services)==0, workers==1)
I0902 23:57:41.813459       1 controller.go:753] Finished updateLoadBalancerHosts
I0902 23:57:41.813495       1 controller.go:694] Successfully updated 0 out of 0 load balancers to direct traffic to the updated set of nodes
I0902 23:57:41.813546       1 controller.go:686] It took 0.001428471 seconds to finish syncNodes
I0902 23:57:41.812212       1 taint_manager.go:466] "Noticed node update" node={nodeName:capz-x56xig-mp-0000001}
I0902 23:57:41.813731       1 taint_manager.go:471] "Updating known taints on node" node="capz-x56xig-mp-0000001" taints=[]
I0902 23:57:41.813862       1 topologycache.go:183] Ignoring node capz-x56xig-mp-0000001 because it is not ready: [{MemoryPressure False 2022-09-02 23:57:41 +0000 UTC 2022-09-02 23:57:41 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2022-09-02 23:57:41 +0000 UTC 2022-09-02 23:57:41 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2022-09-02 23:57:41 +0000 UTC 2022-09-02 23:57:41 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready False 2022-09-02 23:57:41 +0000 UTC 2022-09-02 23:57:41 +0000 UTC KubeletNotReady [container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized, failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "capz-x56xig-mp-0000001" not found]}]
I0902 23:57:41.814035       1 topologycache.go:179] Ignoring node capz-x56xig-control-plane-k4sjk because it has an excluded label
I0902 23:57:41.814129       1 topologycache.go:215] Insufficient node info for topology hints (0 zones, %!s(int64=0) CPU, true)
I0902 23:57:41.814561       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0bcc3e7293d3d55, ext:19000072350, loc:(*time.Location)(0x6f10040)}}
I0902 23:57:41.814846       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:1, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0bcc3fd70917696, ext:108123032543, loc:(*time.Location)(0x6f10040)}}
I0902 23:57:41.817282       1 daemon_controller.go:974] Nodes needing daemon pods for daemon set kube-proxy: [capz-x56xig-mp-0000001], creating 1
I0902 23:57:41.816831       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc0bcc3f2de9d7d21, ext:65821830662, loc:(*time.Location)(0x6f10040)}}
... skipping 84 lines ...
I0902 23:57:41.909186       1 daemon_controller.go:974] Nodes needing daemon pods for daemon set calico-node: [], creating 0
I0902 23:57:41.909264       1 daemon_controller.go:1036] Pods to delete for daemon set calico-node: [], deleting 0
I0902 23:57:41.909758       1 daemon_controller.go:1119] Updating daemon set status
I0902 23:57:41.909875       1 daemon_controller.go:1179] Finished syncing daemon set "kube-system/calico-node" (1.808889ms)
I0902 23:57:41.978447       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-x56xig-mp-0000001"
I0902 23:57:42.648332       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-x56xig-mp-0000000"
W0902 23:57:42.648373       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="capz-x56xig-mp-0000000" does not exist
I0902 23:57:42.648542       1 controller.go:690] Syncing backends for all LB services.
I0902 23:57:42.648562       1 controller.go:728] Running updateLoadBalancerHosts(len(services)==0, workers==1)
I0902 23:57:42.648698       1 controller.go:753] Finished updateLoadBalancerHosts
I0902 23:57:42.648718       1 controller.go:694] Successfully updated 0 out of 0 load balancers to direct traffic to the updated set of nodes
I0902 23:57:42.648828       1 controller.go:686] It took 0.000331816 seconds to finish syncNodes
I0902 23:57:42.650059       1 topologycache.go:179] Ignoring node capz-x56xig-control-plane-k4sjk because it has an excluded label
I0902 23:57:42.654311       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0bcc3fd75c0c92f, ext:108210019860, loc:(*time.Location)(0x6f10040)}}
I0902 23:57:42.654528       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:1, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0bcc3fda7032df9, ext:108962712898, loc:(*time.Location)(0x6f10040)}}
I0902 23:57:42.654561       1 daemon_controller.go:974] Nodes needing daemon pods for daemon set kube-proxy: [capz-x56xig-mp-0000000], creating 1
I0902 23:57:42.655050       1 taint_manager.go:466] "Noticed node update" node={nodeName:capz-x56xig-mp-0000000}
I0902 23:57:42.655081       1 taint_manager.go:471] "Updating known taints on node" node="capz-x56xig-mp-0000000" taints=[]
I0902 23:57:42.650085       1 topologycache.go:183] Ignoring node capz-x56xig-mp-0000001 because it is not ready: [{MemoryPressure False 2022-09-02 23:57:41 +0000 UTC 2022-09-02 23:57:41 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2022-09-02 23:57:41 +0000 UTC 2022-09-02 23:57:41 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2022-09-02 23:57:41 +0000 UTC 2022-09-02 23:57:41 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready False 2022-09-02 23:57:41 +0000 UTC 2022-09-02 23:57:41 +0000 UTC KubeletNotReady [container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized, failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "capz-x56xig-mp-0000001" not found]}]
I0902 23:57:42.655283       1 topologycache.go:183] Ignoring node capz-x56xig-mp-0000000 because it is not ready: [{MemoryPressure False 2022-09-02 23:57:42 +0000 UTC 2022-09-02 23:57:42 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2022-09-02 23:57:42 +0000 UTC 2022-09-02 23:57:42 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2022-09-02 23:57:42 +0000 UTC 2022-09-02 23:57:42 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready False 2022-09-02 23:57:42 +0000 UTC 2022-09-02 23:57:42 +0000 UTC KubeletNotReady [container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized, failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "capz-x56xig-mp-0000000" not found]}]
I0902 23:57:42.655319       1 topologycache.go:215] Insufficient node info for topology hints (0 zones, %!s(int64=0) CPU, true)
I0902 23:57:42.656422       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc0bcc3fd7630b7bb, ext:108217355424, loc:(*time.Location)(0x6f10040)}}
I0902 23:57:42.657207       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:1, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc0bcc3fda72c13a2, ext:108965393031, loc:(*time.Location)(0x6f10040)}}
I0902 23:57:42.657255       1 daemon_controller.go:974] Nodes needing daemon pods for daemon set calico-node: [capz-x56xig-mp-0000000], creating 1
I0902 23:57:42.666682       1 ttl_controller.go:275] "Changed ttl annotation" node="capz-x56xig-mp-0000000" new_ttl="0s"
I0902 23:57:42.668754       1 disruption.go:479] addPod called on pod "kube-proxy-hwb4z"
... skipping 366 lines ...
I0902 23:58:12.256218       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-x56xig-mp-0000001"
I0902 23:58:12.256369       1 controller.go:690] Syncing backends for all LB services.
I0902 23:58:12.256392       1 controller.go:728] Running updateLoadBalancerHosts(len(services)==0, workers==1)
I0902 23:58:12.256443       1 controller.go:753] Finished updateLoadBalancerHosts
I0902 23:58:12.256450       1 controller.go:694] Successfully updated 0 out of 0 load balancers to direct traffic to the updated set of nodes
I0902 23:58:12.256459       1 controller.go:686] It took 9.4702e-05 seconds to finish syncNodes
I0902 23:58:12.256950       1 topologycache.go:183] Ignoring node capz-x56xig-mp-0000000 because it is not ready: [{MemoryPressure False 2022-09-02 23:58:02 +0000 UTC 2022-09-02 23:57:42 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2022-09-02 23:58:02 +0000 UTC 2022-09-02 23:57:42 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2022-09-02 23:58:02 +0000 UTC 2022-09-02 23:57:42 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready False 2022-09-02 23:58:02 +0000 UTC 2022-09-02 23:57:42 +0000 UTC KubeletNotReady container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized}]
I0902 23:58:12.257109       1 topologycache.go:179] Ignoring node capz-x56xig-control-plane-k4sjk because it has an excluded label
I0902 23:58:12.257125       1 topologycache.go:215] Insufficient node info for topology hints (1 zones, %!s(int64=2000) CPU, true)
I0902 23:58:12.257255       1 controller_utils.go:205] "Added taint to node" taint=[] node="capz-x56xig-mp-0000001"
I0902 23:58:12.270282       1 controller_utils.go:217] "Made sure that node has no taint" node="capz-x56xig-mp-0000001" taint=[&Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoSchedule,TimeAdded:<nil>,}]
I0902 23:58:12.270688       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-x56xig-mp-0000001"
I0902 23:58:13.863905       1 azure_instances.go:240] InstanceShutdownByProviderID gets power status "running" for node "capz-x56xig-mp-0000000"
... skipping 12 lines ...
I0902 23:58:15.309331       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc0bcc405d26ff8aa, ext:141617521139, loc:(*time.Location)(0x6f10040)}}
I0902 23:58:15.309353       1 daemon_controller.go:974] Nodes needing daemon pods for daemon set calico-node: [], creating 0
I0902 23:58:15.309450       1 daemon_controller.go:1036] Pods to delete for daemon set calico-node: [], deleting 0
I0902 23:58:15.309610       1 daemon_controller.go:1119] Updating daemon set status
I0902 23:58:15.309780       1 daemon_controller.go:1179] Finished syncing daemon set "kube-system/calico-node" (2.770466ms)
I0902 23:58:15.359735       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-x56xig-mp-0000001"
I0902 23:58:16.183043       1 node_lifecycle_controller.go:1084] ReadyCondition for Node capz-x56xig-mp-0000001 transitioned from &NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:2022-09-02 23:58:02 +0000 UTC,LastTransitionTime:2022-09-02 23:57:41 +0000 UTC,Reason:KubeletNotReady,Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized,} to &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-09-02 23:58:12 +0000 UTC,LastTransitionTime:2022-09-02 23:58:12 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,}
I0902 23:58:16.183149       1 node_lifecycle_controller.go:1092] Node capz-x56xig-mp-0000001 ReadyCondition updated. Updating timestamp.
I0902 23:58:16.194145       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-x56xig-mp-0000001"
I0902 23:58:16.194470       1 taint_manager.go:466] "Noticed node update" node={nodeName:capz-x56xig-mp-0000001}
I0902 23:58:16.194638       1 taint_manager.go:471] "Updating known taints on node" node="capz-x56xig-mp-0000001" taints=[]
I0902 23:58:16.194778       1 taint_manager.go:492] "All taints were removed from the node. Cancelling all evictions..." node="capz-x56xig-mp-0000001"
I0902 23:58:16.195693       1 node_lifecycle_controller.go:938] Node capz-x56xig-mp-0000001 is healthy again, removing all taints
... skipping 79 lines ...
I0902 23:58:23.560223       1 controller_utils.go:205] "Added taint to node" taint=[] node="capz-x56xig-mp-0000000"
I0902 23:58:23.563153       1 topologycache.go:179] Ignoring node capz-x56xig-control-plane-k4sjk because it has an excluded label
I0902 23:58:23.587804       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-x56xig-mp-0000000"
I0902 23:58:23.589009       1 controller_utils.go:217] "Made sure that node has no taint" node="capz-x56xig-mp-0000000" taint=[&Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoSchedule,TimeAdded:<nil>,}]
I0902 23:58:26.120921       1 gc_controller.go:221] GC'ing orphaned
I0902 23:58:26.120965       1 gc_controller.go:290] GC'ing unscheduled pods which are terminating.
I0902 23:58:26.198418       1 node_lifecycle_controller.go:1084] ReadyCondition for Node capz-x56xig-mp-0000000 transitioned from &NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:2022-09-02 23:58:02 +0000 UTC,LastTransitionTime:2022-09-02 23:57:42 +0000 UTC,Reason:KubeletNotReady,Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized,} to &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-09-02 23:58:23 +0000 UTC,LastTransitionTime:2022-09-02 23:58:23 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,}
I0902 23:58:26.198610       1 node_lifecycle_controller.go:1092] Node capz-x56xig-mp-0000000 ReadyCondition updated. Updating timestamp.
I0902 23:58:26.208937       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-x56xig-mp-0000000"
I0902 23:58:26.211509       1 node_lifecycle_controller.go:938] Node capz-x56xig-mp-0000000 is healthy again, removing all taints
I0902 23:58:26.213063       1 node_lifecycle_controller.go:1259] Controller detected that zone westus3::0 is now in state Normal.
I0902 23:58:26.211675       1 taint_manager.go:466] "Noticed node update" node={nodeName:capz-x56xig-mp-0000000}
I0902 23:58:26.213478       1 taint_manager.go:471] "Updating known taints on node" node="capz-x56xig-mp-0000000" taints=[]
... skipping 139 lines ...
I0902 23:58:31.886508       1 controller_utils.go:581] Controller csi-azuredisk-controller-6dbf65647f created pod csi-azuredisk-controller-6dbf65647f-tndlw
I0902 23:58:31.886973       1 disruption.go:479] addPod called on pod "csi-azuredisk-controller-6dbf65647f-tndlw"
I0902 23:58:31.888633       1 disruption.go:570] No PodDisruptionBudgets found for pod csi-azuredisk-controller-6dbf65647f-tndlw, PodDisruptionBudget controller will avoid syncing.
I0902 23:58:31.888696       1 disruption.go:482] No matching pdb for pod "csi-azuredisk-controller-6dbf65647f-tndlw"
I0902 23:58:31.887033       1 taint_manager.go:431] "Noticed pod update" pod="kube-system/csi-azuredisk-controller-6dbf65647f-tndlw"
I0902 23:58:31.887063       1 pvc_protection_controller.go:380] "Enqueuing PVCs for Pod" pod="kube-system/csi-azuredisk-controller-6dbf65647f-tndlw" podUID=d7884257-55fe-4f55-ac1f-d86f03b06322
I0902 23:58:31.887100       1 replica_set.go:394] Pod csi-azuredisk-controller-6dbf65647f-tndlw created: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"csi-azuredisk-controller-6dbf65647f-tndlw", GenerateName:"csi-azuredisk-controller-6dbf65647f-", Namespace:"kube-system", SelfLink:"", UID:"d7884257-55fe-4f55-ac1f-d86f03b06322", ResourceVersion:"904", Generation:0, CreationTimestamp:time.Date(2022, time.September, 2, 23, 58, 31, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"csi-azuredisk-controller", "pod-template-hash":"6dbf65647f"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"apps/v1", Kind:"ReplicaSet", Name:"csi-azuredisk-controller-6dbf65647f", UID:"08788e0a-6a62-4605-abbd-c9d75f1890a8", Controller:(*bool)(0xc002635ab7), BlockOwnerDeletion:(*bool)(0xc002635ab8)}}, Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.September, 2, 23, 58, 31, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00214dcc8), Subresource:""}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"socket-dir", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(0xc00214dce0), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"azure-cred", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc00214dcf8), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), 
VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"kube-api-access-tg28h", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(0xc001158800), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"csi-provisioner", Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-provisioner:v3.2.0", Command:[]string(nil), Args:[]string{"--feature-gates=Topology=true", "--csi-address=$(ADDRESS)", "--v=2", "--timeout=15s", "--leader-election", "--leader-election-namespace=kube-system", "--worker-threads=40", "--extra-create-metadata=true", "--strict-topology=true"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:524288000, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-tg28h", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), 
TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"csi-attacher", Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-attacher:v3.5.0", Command:[]string(nil), Args:[]string{"-v=2", "-csi-address=$(ADDRESS)", "-timeout=600s", "-leader-election", "--leader-election-namespace=kube-system", "-worker-threads=500"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:524288000, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-tg28h", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"csi-snapshotter", Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-snapshotter:v5.0.1", Command:[]string(nil), Args:[]string{"-csi-address=$(ADDRESS)", "-leader-election", "--leader-election-namespace=kube-system", "--v=2"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:104857600, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-tg28h", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", 
SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"csi-resizer", Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-resizer:v1.5.0", Command:[]string(nil), Args:[]string{"-csi-address=$(ADDRESS)", "-v=2", "-leader-election", "--leader-election-namespace=kube-system", "-handle-volume-inuse-error=false", "-feature-gates=RecoverVolumeExpansionFailure=true", "-timeout=240s"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:524288000, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-tg28h", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"liveness-probe", Image:"mcr.microsoft.com/oss/kubernetes-csi/livenessprobe:v2.7.0", Command:[]string(nil), Args:[]string{"--csi-address=/csi/csi.sock", "--probe-timeout=3s", "--health-port=29602", "--v=2"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:104857600, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-tg28h", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"azuredisk", 
Image:"mcr.microsoft.com/k8s/csi/azuredisk-csi:latest", Command:[]string(nil), Args:[]string{"--v=5", "--endpoint=$(CSI_ENDPOINT)", "--metrics-address=0.0.0.0:29604", "--user-agent-suffix=OSS-kubectl", "--disable-avset-nodes=false", "--allow-empty-cloud-config=false"}, WorkingDir:"", Ports:[]v1.ContainerPort{v1.ContainerPort{Name:"healthz", HostPort:29602, ContainerPort:29602, Protocol:"TCP", HostIP:""}, v1.ContainerPort{Name:"metrics", HostPort:29604, ContainerPort:29604, Protocol:"TCP", HostIP:""}}, EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"AZURE_CREDENTIAL_FILE", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc001158920)}, v1.EnvVar{Name:"CSI_ENDPOINT", Value:"unix:///csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:524288000, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"azure-cred", ReadOnly:false, MountPath:"/etc/kubernetes/", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-tg28h", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(0xc00287e100), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002635e90), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"csi-azuredisk-controller-sa", DeprecatedServiceAccount:"csi-azuredisk-controller-sa", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000174700), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node-role.kubernetes.io/master", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node-role.kubernetes.io/controlplane", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node-role.kubernetes.io/control-plane", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002635f00)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", 
Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002635f20)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-cluster-critical", Priority:(*int32)(0xc002635f28), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc002635f2c), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc0028e56d0), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil), OS:(*v1.PodOS)(nil), HostUsers:(*bool)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition(nil), Message:"", Reason:"", NominatedNodeName:"", HostIP:"", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:<nil>, InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus(nil), QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}.
I0902 23:58:31.888906       1 controller_utils.go:240] Lowered expectations &controller.ControlleeExpectations{add:1, del:0, key:"kube-system/csi-azuredisk-controller-6dbf65647f", timestamp:time.Time{wall:0xc0bcc409f308cd70, ext:158164407893, loc:(*time.Location)(0x6f10040)}}
I0902 23:58:31.889573       1 event.go:294] "Event occurred" object="kube-system/csi-azuredisk-controller-6dbf65647f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: csi-azuredisk-controller-6dbf65647f-tndlw"
I0902 23:58:32.058293       1 controller_utils.go:581] Controller csi-azuredisk-controller-6dbf65647f created pod csi-azuredisk-controller-6dbf65647f-5gm65
I0902 23:58:32.058458       1 replica_set_utils.go:59] Updating status for : kube-system/csi-azuredisk-controller-6dbf65647f, replicas 0->0 (need 2), fullyLabeledReplicas 0->0, readyReplicas 0->0, availableReplicas 0->0, sequence No: 0->1
I0902 23:58:32.059002       1 event.go:294] "Event occurred" object="kube-system/csi-azuredisk-controller-6dbf65647f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: csi-azuredisk-controller-6dbf65647f-5gm65"
I0902 23:58:32.060152       1 deployment_util.go:775] Deployment "csi-azuredisk-controller" timed out (false) [last progress check: 2022-09-02 23:58:31.856918211 +0000 UTC m=+158.165111208 - now: 2022-09-02 23:58:32.059967929 +0000 UTC m=+158.368160926]
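(At this point the csi-azuredisk-controller-6dbf65647f ReplicaSet has created two pods and the deployment controller is updating progress. As an illustrative aside, the sketch below reads the same Deployment and pod status those controllers are reconciling; the helper and the kubeconfig path are assumptions, while the namespace, names, and labels come from the pod dump above.)

// deploycheck.go — hypothetical, standalone; reports the replica counts the
// replica_set/deployment controller lines above are updating (0->2 created, 0 ready yet).
package main

import (
	"context"
	"fmt"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()

	// Deployment name and namespace taken from the log lines above.
	deploy, err := client.AppsV1().Deployments("kube-system").Get(ctx, "csi-azuredisk-controller", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	desired := int32(0)
	if deploy.Spec.Replicas != nil {
		desired = *deploy.Spec.Replicas
	}
	fmt.Printf("desired=%d created=%d ready=%d available=%d\n",
		desired, deploy.Status.Replicas, deploy.Status.ReadyReplicas, deploy.Status.AvailableReplicas)

	// The ReplicaSet selects its pods by the labels shown in the pod spec dump above.
	pods, err := client.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{
		LabelSelector: "app=csi-azuredisk-controller,pod-template-hash=6dbf65647f",
	})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s phase=%s\n", p.Name, p.Status.Phase)
	}
}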
... skipping 5 lines ...
I0902 23:58:32.065903       1 replica_set.go:457] Pod csi-azuredisk-controller-6dbf65647f-tndlw updated, objectMeta {Name:csi-azuredisk-controller-6dbf65647f-tndlw GenerateName:csi-azuredisk-controller-6dbf65647f- Namespace:kube-system SelfLink: UID:d7884257-55fe-4f55-ac1f-d86f03b06322 ResourceVersion:904 Generation:0 CreationTimestamp:2022-09-02 23:58:31 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[app:csi-azuredisk-controller pod-template-hash:6dbf65647f] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:csi-azuredisk-controller-6dbf65647f UID:08788e0a-6a62-4605-abbd-c9d75f1890a8 Controller:0xc002635ab7 BlockOwnerDeletion:0xc002635ab8}] Finalizers:[] ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-09-02 23:58:31 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"08788e0a-6a62-4605-abbd-c9d75f1890a8\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"azuredisk\"}":{".":{},"f:args":{},"f:env":{".":{},"k:{\"name\":\"AZURE_CREDENTIAL_FILE\"}":{".":{},"f:name":{},"f:valueFrom":{".":{},"f:configMapKeyRef":{}}},"k:{\"name\":\"CSI_ENDPOINT\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":29602,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:hostPort":{},"f:name":{},"f:protocol":{}},"k:{\"containerPort\":29604,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:hostPort":{},"f:name":{},"f:protocol":{}}},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}},"k:{\"mountPath\":\"/etc/kubernetes/\"}":{".":{},"f:mountPath":{},"f:name":{}}}},"k:{\"name\":\"csi-attacher\"}":{".":{},"f:args":{},"f:env":{".":{},"k:{\"name\":\"ADDRESS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}}}},"k:{\"name\":\"csi-provisioner\"}":{".":{},"f:args":{},"f:env":{".":{},"k:{\"name\":\"ADDRESS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}}}},"k:{\"name\":\"csi-resizer\"}":{".":{},"f:args":{},"f:env":{".":{},"k:{\"name\":\"ADDRESS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}}}},"k:{\"name\":\"csi-snapshotter\"}":{".":{},"f:args":{},"f:env":{".":{},
"k:{\"name\":\"ADDRESS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}}}},"k:{\"name\":\"liveness-probe\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:hostNetwork":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{},"f:volumes":{".":{},"k:{\"name\":\"azure-cred\"}":{".":{},"f:hostPath":{".":{},"f:path":{},"f:type":{}},"f:name":{}},"k:{\"name\":\"socket-dir\"}":{".":{},"f:emptyDir":{},"f:name":{}}}}} Subresource:}]} -> {Name:csi-azuredisk-controller-6dbf65647f-tndlw GenerateName:csi-azuredisk-controller-6dbf65647f- Namespace:kube-system SelfLink: UID:d7884257-55fe-4f55-ac1f-d86f03b06322 ResourceVersion:908 Generation:0 CreationTimestamp:2022-09-02 23:58:31 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[app:csi-azuredisk-controller pod-template-hash:6dbf65647f] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:csi-azuredisk-controller-6dbf65647f UID:08788e0a-6a62-4605-abbd-c9d75f1890a8 Controller:0xc002561867 BlockOwnerDeletion:0xc002561868}] Finalizers:[] ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-09-02 23:58:31 +0000 UTC FieldsType:FieldsV1 
FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"08788e0a-6a62-4605-abbd-c9d75f1890a8\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"azuredisk\"}":{".":{},"f:args":{},"f:env":{".":{},"k:{\"name\":\"AZURE_CREDENTIAL_FILE\"}":{".":{},"f:name":{},"f:valueFrom":{".":{},"f:configMapKeyRef":{}}},"k:{\"name\":\"CSI_ENDPOINT\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":29602,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:hostPort":{},"f:name":{},"f:protocol":{}},"k:{\"containerPort\":29604,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:hostPort":{},"f:name":{},"f:protocol":{}}},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}},"k:{\"mountPath\":\"/etc/kubernetes/\"}":{".":{},"f:mountPath":{},"f:name":{}}}},"k:{\"name\":\"csi-attacher\"}":{".":{},"f:args":{},"f:env":{".":{},"k:{\"name\":\"ADDRESS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}}}},"k:{\"name\":\"csi-provisioner\"}":{".":{},"f:args":{},"f:env":{".":{},"k:{\"name\":\"ADDRESS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}}}},"k:{\"name\":\"csi-resizer\"}":{".":{},"f:args":{},"f:env":{".":{},"k:{\"name\":\"ADDRESS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}}}},"k:{\"name\":\"csi-snapshotter\"}":{".":{},"f:args":{},"f:env":{".":{},"k:{\"name\":\"ADDRESS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}}}},"k:{\"name\":\"liveness-probe\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:hostNetwork":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPoli
cy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{},"f:volumes":{".":{},"k:{\"name\":\"azure-cred\"}":{".":{},"f:hostPath":{".":{},"f:path":{},"f:type":{}},"f:name":{}},"k:{\"name\":\"socket-dir\"}":{".":{},"f:emptyDir":{},"f:name":{}}}}} Subresource:}]}.
I0902 23:58:32.066326       1 taint_manager.go:431] "Noticed pod update" pod="kube-system/csi-azuredisk-controller-6dbf65647f-5gm65"
I0902 23:58:32.066350       1 disruption.go:479] addPod called on pod "csi-azuredisk-controller-6dbf65647f-5gm65"
I0902 23:58:32.066388       1 disruption.go:570] No PodDisruptionBudgets found for pod csi-azuredisk-controller-6dbf65647f-5gm65, PodDisruptionBudget controller will avoid syncing.
I0902 23:58:32.066394       1 disruption.go:482] No matching pdb for pod "csi-azuredisk-controller-6dbf65647f-5gm65"
I0902 23:58:32.066424       1 pvc_protection_controller.go:380] "Enqueuing PVCs for Pod" pod="kube-system/csi-azuredisk-controller-6dbf65647f-5gm65" podUID=3af93cf4-760a-4bb1-9fa2-5974bb1677ab
I0902 23:58:32.066470       1 replica_set.go:394] Pod csi-azuredisk-controller-6dbf65647f-5gm65 created: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"csi-azuredisk-controller-6dbf65647f-5gm65", GenerateName:"csi-azuredisk-controller-6dbf65647f-", Namespace:"kube-system", SelfLink:"", UID:"3af93cf4-760a-4bb1-9fa2-5974bb1677ab", ResourceVersion:"910", Generation:0, CreationTimestamp:time.Date(2022, time.September, 2, 23, 58, 31, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"csi-azuredisk-controller", "pod-template-hash":"6dbf65647f"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"apps/v1", Kind:"ReplicaSet", Name:"csi-azuredisk-controller-6dbf65647f", UID:"08788e0a-6a62-4605-abbd-c9d75f1890a8", Controller:(*bool)(0xc0029fc2b7), BlockOwnerDeletion:(*bool)(0xc0029fc2b8)}}, Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.September, 2, 23, 58, 31, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0022faa98), Subresource:""}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"socket-dir", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(0xc0022faab0), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"azure-cred", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0022faae0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), 
VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"kube-api-access-dzsjw", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(0xc001158f20), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"csi-provisioner", Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-provisioner:v3.2.0", Command:[]string(nil), Args:[]string{"--feature-gates=Topology=true", "--csi-address=$(ADDRESS)", "--v=2", "--timeout=15s", "--leader-election", "--leader-election-namespace=kube-system", "--worker-threads=40", "--extra-create-metadata=true", "--strict-topology=true"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:524288000, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-dzsjw", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), 
TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"csi-attacher", Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-attacher:v3.5.0", Command:[]string(nil), Args:[]string{"-v=2", "-csi-address=$(ADDRESS)", "-timeout=600s", "-leader-election", "--leader-election-namespace=kube-system", "-worker-threads=500"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:524288000, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-dzsjw", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"csi-snapshotter", Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-snapshotter:v5.0.1", Command:[]string(nil), Args:[]string{"-csi-address=$(ADDRESS)", "-leader-election", "--leader-election-namespace=kube-system", "--v=2"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:104857600, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-dzsjw", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", 
SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"csi-resizer", Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-resizer:v1.5.0", Command:[]string(nil), Args:[]string{"-csi-address=$(ADDRESS)", "-v=2", "-leader-election", "--leader-election-namespace=kube-system", "-handle-volume-inuse-error=false", "-feature-gates=RecoverVolumeExpansionFailure=true", "-timeout=240s"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:524288000, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-dzsjw", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"liveness-probe", Image:"mcr.microsoft.com/oss/kubernetes-csi/livenessprobe:v2.7.0", Command:[]string(nil), Args:[]string{"--csi-address=/csi/csi.sock", "--probe-timeout=3s", "--health-port=29602", "--v=2"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:104857600, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-dzsjw", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"azuredisk", 
Image:"mcr.microsoft.com/k8s/csi/azuredisk-csi:latest", Command:[]string(nil), Args:[]string{"--v=5", "--endpoint=$(CSI_ENDPOINT)", "--metrics-address=0.0.0.0:29604", "--user-agent-suffix=OSS-kubectl", "--disable-avset-nodes=false", "--allow-empty-cloud-config=false"}, WorkingDir:"", Ports:[]v1.ContainerPort{v1.ContainerPort{Name:"healthz", HostPort:29602, ContainerPort:29602, Protocol:"TCP", HostIP:""}, v1.ContainerPort{Name:"metrics", HostPort:29604, ContainerPort:29604, Protocol:"TCP", HostIP:""}}, EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"AZURE_CREDENTIAL_FILE", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc001159040)}, v1.EnvVar{Name:"CSI_ENDPOINT", Value:"unix:///csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:524288000, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"azure-cred", ReadOnly:false, MountPath:"/etc/kubernetes/", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-dzsjw", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(0xc002afe200), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0029fc690), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"csi-azuredisk-controller-sa", DeprecatedServiceAccount:"csi-azuredisk-controller-sa", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000175110), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node-role.kubernetes.io/master", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node-role.kubernetes.io/controlplane", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node-role.kubernetes.io/control-plane", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0029fc700)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", 
Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0029fc720)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-cluster-critical", Priority:(*int32)(0xc0029fc728), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0029fc72c), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc002a12d20), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil), OS:(*v1.PodOS)(nil), HostUsers:(*bool)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition(nil), Message:"", Reason:"", NominatedNodeName:"", HostIP:"", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:<nil>, InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus(nil), QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}.
I0902 23:58:32.067110       1 controller_utils.go:240] Lowered expectations &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/csi-azuredisk-controller-6dbf65647f", timestamp:time.Time{wall:0xc0bcc409f308cd70, ext:158164407893, loc:(*time.Location)(0x6f10040)}}
I0902 23:58:32.098255       1 replica_set.go:667] Finished syncing ReplicaSet "kube-system/csi-azuredisk-controller-6dbf65647f" (242.215789ms)
I0902 23:58:32.098613       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/csi-azuredisk-controller-6dbf65647f", timestamp:time.Time{wall:0xc0bcc409f308cd70, ext:158164407893, loc:(*time.Location)(0x6f10040)}}
I0902 23:58:32.098810       1 replica_set_utils.go:59] Updating status for : kube-system/csi-azuredisk-controller-6dbf65647f, replicas 0->2 (need 2), fullyLabeledReplicas 0->2, readyReplicas 0->0, availableReplicas 0->0, sequence No: 0->1
I0902 23:58:32.100575       1 deployment_controller.go:288] "ReplicaSet updated" replicaSet="kube-system/csi-azuredisk-controller-6dbf65647f"
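(The controller_utils lines above — "Setting expectations", "Lowered expectations", "Controller expectations fulfilled" — track how many pod creations the ReplicaSet controller still expects to observe through its informer before it trusts its cache and syncs again. The sketch below is a simplified, illustrative model of that add/del bookkeeping, not the actual kube-controller-manager code.)

// expectations.go — a toy model of the expectation counters logged above.
package main

import (
	"fmt"
	"sync/atomic"
)

// controlleeExpectations tracks pending pod creations (add) and deletions (del)
// for one key, e.g. "kube-system/csi-azuredisk-controller-6dbf65647f".
type controlleeExpectations struct {
	add int64
	del int64
}

// expectCreations records that the controller just issued n pod creations.
func (e *controlleeExpectations) expectCreations(n int64) { atomic.AddInt64(&e.add, n) }

// creationObserved is called when the watch event for one of those pods arrives
// ("Lowered expectations" in the log).
func (e *controlleeExpectations) creationObserved() { atomic.AddInt64(&e.add, -1) }

// fulfilled reports whether the controller may sync again
// ("Controller expectations fulfilled" in the log).
func (e *controlleeExpectations) fulfilled() bool {
	return atomic.LoadInt64(&e.add) <= 0 && atomic.LoadInt64(&e.del) <= 0
}

func main() {
	exp := &controlleeExpectations{}
	exp.expectCreations(2) // the ReplicaSet created -tndlw and -5gm65
	fmt.Println("fulfilled after create:", exp.fulfilled())
	exp.creationObserved() // watch event for -tndlw
	exp.creationObserved() // watch event for -5gm65
	fmt.Println("fulfilled after both pods observed:", exp.fulfilled())
}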
I0902 23:58:32.110537       1 disruption.go:494] updatePod called on pod "csi-azuredisk-controller-6dbf65647f-5gm65"
I0902 23:58:32.111054       1 disruption.go:570] No PodDisruptionBudgets found for pod csi-azuredisk-controller-6dbf65647f-5gm65, PodDisruptionBudget controller will avoid syncing.
I0902 23:58:32.111069       1 disruption.go:497] No matching pdb for pod "csi-azuredisk-controller-6dbf65647f-5gm65"
I0902 23:58:32.112793       1 taint_manager.go:431] "Noticed pod update" pod="kube-system/csi-azuredisk-controller-6dbf65647f-5gm65"
I0902 23:58:32.112922       1 replica_set.go:457] Pod csi-azuredisk-controller-6dbf65647f-5gm65 updated, objectMeta {Name:csi-azuredisk-controller-6dbf65647f-5gm65 GenerateName:csi-azuredisk-controller-6dbf65647f- Namespace:kube-system SelfLink: UID:3af93cf4-760a-4bb1-9fa2-5974bb1677ab ResourceVersion:910 Generation:0 CreationTimestamp:2022-09-02 23:58:31 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[app:csi-azuredisk-controller pod-template-hash:6dbf65647f] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:csi-azuredisk-controller-6dbf65647f UID:08788e0a-6a62-4605-abbd-c9d75f1890a8 Controller:0xc0029fc2b7 BlockOwnerDeletion:0xc0029fc2b8}] Finalizers:[] ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-09-02 23:58:31 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"08788e0a-6a62-4605-abbd-c9d75f1890a8\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"azuredisk\"}":{".":{},"f:args":{},"f:env":{".":{},"k:{\"name\":\"AZURE_CREDENTIAL_FILE\"}":{".":{},"f:name":{},"f:valueFrom":{".":{},"f:configMapKeyRef":{}}},"k:{\"name\":\"CSI_ENDPOINT\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":29602,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:hostPort":{},"f:name":{},"f:protocol":{}},"k:{\"containerPort\":29604,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:hostPort":{},"f:name":{},"f:protocol":{}}},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}},"k:{\"mountPath\":\"/etc/kubernetes/\"}":{".":{},"f:mountPath":{},"f:name":{}}}},"k:{\"name\":\"csi-attacher\"}":{".":{},"f:args":{},"f:env":{".":{},"k:{\"name\":\"ADDRESS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}}}},"k:{\"name\":\"csi-provisioner\"}":{".":{},"f:args":{},"f:env":{".":{},"k:{\"name\":\"ADDRESS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}}}},"k:{\"name\":\"csi-resizer\"}":{".":{},"f:args":{},"f:env":{".":{},"k:{\"name\":\"ADDRESS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}}}},"k:{\"name\":\"csi-snapshotter\"}":{".":{},"f:args":{},"f:env":{".":{},
"k:{\"name\":\"ADDRESS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}}}},"k:{\"name\":\"liveness-probe\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:hostNetwork":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{},"f:volumes":{".":{},"k:{\"name\":\"azure-cred\"}":{".":{},"f:hostPath":{".":{},"f:path":{},"f:type":{}},"f:name":{}},"k:{\"name\":\"socket-dir\"}":{".":{},"f:emptyDir":{},"f:name":{}}}}} Subresource:}]} -> {Name:csi-azuredisk-controller-6dbf65647f-5gm65 GenerateName:csi-azuredisk-controller-6dbf65647f- Namespace:kube-system SelfLink: UID:3af93cf4-760a-4bb1-9fa2-5974bb1677ab ResourceVersion:915 Generation:0 CreationTimestamp:2022-09-02 23:58:31 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[app:csi-azuredisk-controller pod-template-hash:6dbf65647f] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:csi-azuredisk-controller-6dbf65647f UID:08788e0a-6a62-4605-abbd-c9d75f1890a8 Controller:0xc002a98d57 BlockOwnerDeletion:0xc002a98d58}] Finalizers:[] ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-09-02 23:58:31 +0000 UTC FieldsType:FieldsV1 
FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"08788e0a-6a62-4605-abbd-c9d75f1890a8\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"azuredisk\"}":{".":{},"f:args":{},"f:env":{".":{},"k:{\"name\":\"AZURE_CREDENTIAL_FILE\"}":{".":{},"f:name":{},"f:valueFrom":{".":{},"f:configMapKeyRef":{}}},"k:{\"name\":\"CSI_ENDPOINT\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":29602,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:hostPort":{},"f:name":{},"f:protocol":{}},"k:{\"containerPort\":29604,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:hostPort":{},"f:name":{},"f:protocol":{}}},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}},"k:{\"mountPath\":\"/etc/kubernetes/\"}":{".":{},"f:mountPath":{},"f:name":{}}}},"k:{\"name\":\"csi-attacher\"}":{".":{},"f:args":{},"f:env":{".":{},"k:{\"name\":\"ADDRESS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}}}},"k:{\"name\":\"csi-provisioner\"}":{".":{},"f:args":{},"f:env":{".":{},"k:{\"name\":\"ADDRESS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}}}},"k:{\"name\":\"csi-resizer\"}":{".":{},"f:args":{},"f:env":{".":{},"k:{\"name\":\"ADDRESS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}}}},"k:{\"name\":\"csi-snapshotter\"}":{".":{},"f:args":{},"f:env":{".":{},"k:{\"name\":\"ADDRESS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}}}},"k:{\"name\":\"liveness-probe\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:hostNetwork":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPoli
cy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{},"f:volumes":{".":{},"k:{\"name\":\"azure-cred\"}":{".":{},"f:hostPath":{".":{},"f:path":{},"f:type":{}},"f:name":{}},"k:{\"name\":\"socket-dir\"}":{".":{},"f:emptyDir":{},"f:name":{}}}}} Subresource:}]}.
I0902 23:58:32.120148       1 deployment_controller.go:585] "Finished syncing deployment" deployment="kube-system/csi-azuredisk-controller" duration="286.292169ms"
I0902 23:58:32.120201       1 deployment_controller.go:497] "Error syncing deployment" deployment="kube-system/csi-azuredisk-controller" err="Operation cannot be fulfilled on deployments.apps \"csi-azuredisk-controller\": the object has been modified; please apply your changes to the latest version and try again"
I0902 23:58:32.120262       1 deployment_controller.go:583] "Started syncing deployment" deployment="kube-system/csi-azuredisk-controller" startTime="2022-09-02 23:58:32.120227869 +0000 UTC m=+158.428420966"
I0902 23:58:32.122102       1 deployment_util.go:775] Deployment "csi-azuredisk-controller" timed out (false) [last progress check: 2022-09-02 23:58:31 +0000 UTC - now: 2022-09-02 23:58:32.12209091 +0000 UTC m=+158.430283907]
I0902 23:58:32.139537       1 deployment_controller.go:183] "Updating deployment" deployment="kube-system/csi-azuredisk-controller"
I0902 23:58:32.140761       1 deployment_controller.go:585] "Finished syncing deployment" deployment="kube-system/csi-azuredisk-controller" duration="20.513156ms"
I0902 23:58:32.140829       1 deployment_controller.go:583] "Started syncing deployment" deployment="kube-system/csi-azuredisk-controller" startTime="2022-09-02 23:58:32.140810426 +0000 UTC m=+158.449003423"
I0902 23:58:32.142070       1 deployment_util.go:775] Deployment "csi-azuredisk-controller" timed out (false) [last progress check: 2022-09-02 23:58:31 +0000 UTC - now: 2022-09-02 23:58:32.142058754 +0000 UTC m=+158.450251751]
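[Editor's note] The repeated "Operation cannot be fulfilled ... the object has been modified" messages above are ordinary optimistic-concurrency (409 Conflict) rejections: the controller simply re-reads the Deployment at the newer resourceVersion and retries the sync. Below is a minimal, hedged sketch of that retry pattern using client-go's retry.RetryOnConflict; the kubeconfig path, namespace, deployment name, and annotation mutation are illustrative only and are not taken from this job.

```go
// Minimal sketch of the optimistic-concurrency retry pattern behind the
// "object has been modified" messages above. Assumes a standard client-go
// setup; all names and the mutation are placeholders.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/retry"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "./kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// RetryOnConflict re-reads the object and reapplies the mutation whenever
	// the apiserver rejects the update with a conflict, which is the error
	// class logged by deployment_controller.go above.
	err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
		dep, getErr := cs.AppsV1().Deployments("kube-system").Get(
			context.TODO(), "csi-azuredisk-controller", metav1.GetOptions{})
		if getErr != nil {
			return getErr
		}
		if dep.Annotations == nil {
			dep.Annotations = map[string]string{}
		}
		dep.Annotations["example/touched"] = "true" // illustrative mutation only
		_, updateErr := cs.AppsV1().Deployments("kube-system").Update(
			context.TODO(), dep, metav1.UpdateOptions{})
		return updateErr
	})
	fmt.Println("update finished, err:", err)
}
```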
... skipping 60 lines ...
I0902 23:58:36.745982       1 controller_utils.go:240] Lowered expectations &controller.ControlleeExpectations{add:1, del:0, key:"kube-system/csi-snapshot-controller-84ccd6c756", timestamp:time.Time{wall:0xc0bcc40b2b8c576b, ext:163038810804, loc:(*time.Location)(0x6f10040)}}
I0902 23:58:36.746453       1 deployment_controller.go:183] "Updating deployment" deployment="kube-system/csi-snapshot-controller"
I0902 23:58:36.746763       1 controller_utils.go:581] Controller csi-snapshot-controller-84ccd6c756 created pod csi-snapshot-controller-84ccd6c756-8j2r5
I0902 23:58:36.746980       1 deployment_util.go:775] Deployment "csi-snapshot-controller" timed out (false) [last progress check: 2022-09-02 23:58:36.731293024 +0000 UTC m=+163.039486021 - now: 2022-09-02 23:58:36.746879623 +0000 UTC m=+163.055072720]
I0902 23:58:36.747492       1 event.go:294] "Event occurred" object="kube-system/csi-snapshot-controller-84ccd6c756" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: csi-snapshot-controller-84ccd6c756-8j2r5"
I0902 23:58:36.766483       1 deployment_controller.go:585] "Finished syncing deployment" deployment="kube-system/csi-snapshot-controller" duration="56.723952ms"
I0902 23:58:36.766537       1 deployment_controller.go:497] "Error syncing deployment" deployment="kube-system/csi-snapshot-controller" err="Operation cannot be fulfilled on deployments.apps \"csi-snapshot-controller\": the object has been modified; please apply your changes to the latest version and try again"
I0902 23:58:36.766593       1 deployment_controller.go:583] "Started syncing deployment" deployment="kube-system/csi-snapshot-controller" startTime="2022-09-02 23:58:36.766561726 +0000 UTC m=+163.074754723"
I0902 23:58:36.767152       1 deployment_util.go:775] Deployment "csi-snapshot-controller" timed out (false) [last progress check: 2022-09-02 23:58:36 +0000 UTC - now: 2022-09-02 23:58:36.767144541 +0000 UTC m=+163.075337538]
I0902 23:58:36.785734       1 controller_utils.go:581] Controller csi-snapshot-controller-84ccd6c756 created pod csi-snapshot-controller-84ccd6c756-rmsmn
I0902 23:58:36.786158       1 event.go:294] "Event occurred" object="kube-system/csi-snapshot-controller-84ccd6c756" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: csi-snapshot-controller-84ccd6c756-rmsmn"
I0902 23:58:36.786192       1 replica_set_utils.go:59] Updating status for : kube-system/csi-snapshot-controller-84ccd6c756, replicas 0->0 (need 2), fullyLabeledReplicas 0->0, readyReplicas 0->0, availableReplicas 0->0, sequence No: 0->1
I0902 23:58:36.789363       1 disruption.go:479] addPod called on pod "csi-snapshot-controller-84ccd6c756-rmsmn"
... skipping 670 lines ...
I0903 00:01:58.763531       1 pv_controller.go:619] synchronizing PersistentVolume[pvc-287676b7-50ae-49bc-acdd-301dade2281f]: claim azuredisk-1353/pvc-8p4nh not found
I0903 00:01:58.769761       1 pv_controller_base.go:726] storeObjectUpdate updating volume "pvc-287676b7-50ae-49bc-acdd-301dade2281f" with version 1673
I0903 00:01:58.770123       1 pv_controller.go:551] synchronizing PersistentVolume[pvc-287676b7-50ae-49bc-acdd-301dade2281f]: phase: Released, bound to: "azuredisk-1353/pvc-8p4nh (uid: 287676b7-50ae-49bc-acdd-301dade2281f)", boundByController: false
I0903 00:01:58.770305       1 pv_controller.go:585] synchronizing PersistentVolume[pvc-287676b7-50ae-49bc-acdd-301dade2281f]: volume is bound to claim azuredisk-1353/pvc-8p4nh
I0903 00:01:58.770441       1 pv_controller.go:619] synchronizing PersistentVolume[pvc-287676b7-50ae-49bc-acdd-301dade2281f]: claim azuredisk-1353/pvc-8p4nh not found
I0903 00:01:58.769879       1 pv_protection_controller.go:198] Got event on PV pvc-287676b7-50ae-49bc-acdd-301dade2281f
I0903 00:01:58.782021       1 pv_protection_controller.go:173] Error removing protection finalizer from PV pvc-287676b7-50ae-49bc-acdd-301dade2281f: Operation cannot be fulfilled on persistentvolumes "pvc-287676b7-50ae-49bc-acdd-301dade2281f": the object has been modified; please apply your changes to the latest version and try again
I0903 00:01:58.782056       1 pv_protection_controller.go:124] Finished processing PV pvc-287676b7-50ae-49bc-acdd-301dade2281f (19.02169ms)
E0903 00:01:58.782071       1 pv_protection_controller.go:114] PV pvc-287676b7-50ae-49bc-acdd-301dade2281f failed with : Operation cannot be fulfilled on persistentvolumes "pvc-287676b7-50ae-49bc-acdd-301dade2281f": the object has been modified; please apply your changes to the latest version and try again
I0903 00:01:58.782107       1 pv_protection_controller.go:121] Processing PV pvc-287676b7-50ae-49bc-acdd-301dade2281f
I0903 00:01:58.787097       1 pv_controller_base.go:238] volume "pvc-287676b7-50ae-49bc-acdd-301dade2281f" deleted
I0903 00:01:58.787427       1 pv_controller_base.go:589] deletion of claim "azuredisk-1353/pvc-8p4nh" was already processed
I0903 00:01:58.787938       1 pv_protection_controller.go:176] Removed protection finalizer from PV pvc-287676b7-50ae-49bc-acdd-301dade2281f
I0903 00:01:58.788152       1 pv_protection_controller.go:124] Finished processing PV pvc-287676b7-50ae-49bc-acdd-301dade2281f (6.033792ms)
I0903 00:01:58.788287       1 pv_protection_controller.go:121] Processing PV pvc-287676b7-50ae-49bc-acdd-301dade2281f
... skipping 571 lines ...
I0903 00:04:43.996693       1 pv_controller.go:619] synchronizing PersistentVolume[pvc-aa219001-3934-40f0-aa20-653cd0658f9f]: claim azuredisk-1563/pvc-2862x not found
I0903 00:04:44.002918       1 pv_protection_controller.go:198] Got event on PV pvc-aa219001-3934-40f0-aa20-653cd0658f9f
I0903 00:04:44.002918       1 pv_controller_base.go:726] storeObjectUpdate updating volume "pvc-aa219001-3934-40f0-aa20-653cd0658f9f" with version 2159
I0903 00:04:44.003136       1 pv_controller.go:551] synchronizing PersistentVolume[pvc-aa219001-3934-40f0-aa20-653cd0658f9f]: phase: Released, bound to: "azuredisk-1563/pvc-2862x (uid: aa219001-3934-40f0-aa20-653cd0658f9f)", boundByController: false
I0903 00:04:44.003177       1 pv_controller.go:585] synchronizing PersistentVolume[pvc-aa219001-3934-40f0-aa20-653cd0658f9f]: volume is bound to claim azuredisk-1563/pvc-2862x
I0903 00:04:44.003213       1 pv_controller.go:619] synchronizing PersistentVolume[pvc-aa219001-3934-40f0-aa20-653cd0658f9f]: claim azuredisk-1563/pvc-2862x not found
I0903 00:04:44.005539       1 pv_protection_controller.go:173] Error removing protection finalizer from PV pvc-aa219001-3934-40f0-aa20-653cd0658f9f: Operation cannot be fulfilled on persistentvolumes "pvc-aa219001-3934-40f0-aa20-653cd0658f9f": the object has been modified; please apply your changes to the latest version and try again
I0903 00:04:44.005563       1 pv_protection_controller.go:124] Finished processing PV pvc-aa219001-3934-40f0-aa20-653cd0658f9f (9.66871ms)
E0903 00:04:44.005577       1 pv_protection_controller.go:114] PV pvc-aa219001-3934-40f0-aa20-653cd0658f9f failed with : Operation cannot be fulfilled on persistentvolumes "pvc-aa219001-3934-40f0-aa20-653cd0658f9f": the object has been modified; please apply your changes to the latest version and try again
I0903 00:04:44.005777       1 pv_protection_controller.go:121] Processing PV pvc-aa219001-3934-40f0-aa20-653cd0658f9f
I0903 00:04:44.010133       1 pv_controller_base.go:238] volume "pvc-aa219001-3934-40f0-aa20-653cd0658f9f" deleted
I0903 00:04:44.010632       1 pv_controller_base.go:589] deletion of claim "azuredisk-1563/pvc-2862x" was already processed
I0903 00:04:44.010600       1 pv_protection_controller.go:176] Removed protection finalizer from PV pvc-aa219001-3934-40f0-aa20-653cd0658f9f
I0903 00:04:44.010694       1 pv_protection_controller.go:124] Finished processing PV pvc-aa219001-3934-40f0-aa20-653cd0658f9f (4.806754ms)
I0903 00:04:44.010868       1 pv_protection_controller.go:121] Processing PV pvc-aa219001-3934-40f0-aa20-653cd0658f9f
... skipping 1554 lines ...
I0903 00:08:35.710583       1 pv_protection_controller.go:121] Processing PV pvc-15f7b3f9-bc8a-4c64-a6dc-5c19824dc3d5
I0903 00:08:35.718361       1 pv_controller_base.go:726] storeObjectUpdate updating volume "pvc-15f7b3f9-bc8a-4c64-a6dc-5c19824dc3d5" with version 3004
I0903 00:08:35.718410       1 pv_controller.go:551] synchronizing PersistentVolume[pvc-15f7b3f9-bc8a-4c64-a6dc-5c19824dc3d5]: phase: Released, bound to: "azuredisk-9336/pvc-k2q4t (uid: 15f7b3f9-bc8a-4c64-a6dc-5c19824dc3d5)", boundByController: false
I0903 00:08:35.718441       1 pv_controller.go:585] synchronizing PersistentVolume[pvc-15f7b3f9-bc8a-4c64-a6dc-5c19824dc3d5]: volume is bound to claim azuredisk-9336/pvc-k2q4t
I0903 00:08:35.718451       1 pv_controller.go:619] synchronizing PersistentVolume[pvc-15f7b3f9-bc8a-4c64-a6dc-5c19824dc3d5]: claim azuredisk-9336/pvc-k2q4t not found
I0903 00:08:35.718467       1 pv_protection_controller.go:198] Got event on PV pvc-15f7b3f9-bc8a-4c64-a6dc-5c19824dc3d5
I0903 00:08:35.720930       1 pv_protection_controller.go:173] Error removing protection finalizer from PV pvc-15f7b3f9-bc8a-4c64-a6dc-5c19824dc3d5: Operation cannot be fulfilled on persistentvolumes "pvc-15f7b3f9-bc8a-4c64-a6dc-5c19824dc3d5": the object has been modified; please apply your changes to the latest version and try again
I0903 00:08:35.720954       1 pv_protection_controller.go:124] Finished processing PV pvc-15f7b3f9-bc8a-4c64-a6dc-5c19824dc3d5 (10.355486ms)
E0903 00:08:35.721114       1 pv_protection_controller.go:114] PV pvc-15f7b3f9-bc8a-4c64-a6dc-5c19824dc3d5 failed with : Operation cannot be fulfilled on persistentvolumes "pvc-15f7b3f9-bc8a-4c64-a6dc-5c19824dc3d5": the object has been modified; please apply your changes to the latest version and try again
I0903 00:08:35.721205       1 pv_protection_controller.go:121] Processing PV pvc-15f7b3f9-bc8a-4c64-a6dc-5c19824dc3d5
I0903 00:08:35.725603       1 pv_controller_base.go:238] volume "pvc-15f7b3f9-bc8a-4c64-a6dc-5c19824dc3d5" deleted
I0903 00:08:35.725785       1 pv_controller_base.go:589] deletion of claim "azuredisk-9336/pvc-k2q4t" was already processed
I0903 00:08:35.726655       1 pv_protection_controller.go:176] Removed protection finalizer from PV pvc-15f7b3f9-bc8a-4c64-a6dc-5c19824dc3d5
I0903 00:08:35.726676       1 pv_protection_controller.go:124] Finished processing PV pvc-15f7b3f9-bc8a-4c64-a6dc-5c19824dc3d5 (5.436345ms)
I0903 00:08:35.726711       1 pv_protection_controller.go:121] Processing PV pvc-15f7b3f9-bc8a-4c64-a6dc-5c19824dc3d5
... skipping 573 lines ...
I0903 00:10:35.228381       1 pv_controller.go:1788] provisionClaimOperationExternal [azuredisk-2205/pvc-n9wq5] started, class: "azuredisk-2205-kubernetes.io-azure-disk-dynamic-sc-fkgpx"
I0903 00:10:35.230062       1 deployment_controller.go:288] "ReplicaSet updated" replicaSet="azuredisk-2205/azuredisk-volume-tester-rwvss-5b8948d4fd"
I0903 00:10:35.232255       1 replica_set.go:667] Finished syncing ReplicaSet "azuredisk-2205/azuredisk-volume-tester-rwvss-5b8948d4fd" (22.407555ms)
I0903 00:10:35.232421       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"azuredisk-2205/azuredisk-volume-tester-rwvss-5b8948d4fd", timestamp:time.Time{wall:0xc0bcc4becc853680, ext:881518249929, loc:(*time.Location)(0x6f10040)}}
I0903 00:10:35.232650       1 replica_set_utils.go:59] Updating status for : azuredisk-2205/azuredisk-volume-tester-rwvss-5b8948d4fd, replicas 0->1 (need 1), fullyLabeledReplicas 0->1, readyReplicas 0->0, availableReplicas 0->0, sequence No: 1->1
I0903 00:10:35.233124       1 deployment_controller.go:585] "Finished syncing deployment" deployment="azuredisk-2205/azuredisk-volume-tester-rwvss" duration="27.726634ms"
I0903 00:10:35.233321       1 deployment_controller.go:497] "Error syncing deployment" deployment="azuredisk-2205/azuredisk-volume-tester-rwvss" err="Operation cannot be fulfilled on deployments.apps \"azuredisk-volume-tester-rwvss\": the object has been modified; please apply your changes to the latest version and try again"
I0903 00:10:35.233754       1 deployment_controller.go:583] "Started syncing deployment" deployment="azuredisk-2205/azuredisk-volume-tester-rwvss" startTime="2022-09-03 00:10:35.233727629 +0000 UTC m=+881.541920626"
I0903 00:10:35.234847       1 deployment_util.go:775] Deployment "azuredisk-volume-tester-rwvss" timed out (false) [last progress check: 2022-09-03 00:10:35 +0000 UTC - now: 2022-09-03 00:10:35.234839066 +0000 UTC m=+881.543032163]
I0903 00:10:35.235484       1 pv_controller_base.go:726] storeObjectUpdate updating claim "azuredisk-2205/pvc-n9wq5" with version 3362
I0903 00:10:35.235817       1 pv_controller.go:1814] provisionClaimOperationExternal provisioning claim "azuredisk-2205/pvc-n9wq5": waiting for a volume to be created, either by external provisioner "disk.csi.azure.com" or manually created by system administrator
I0903 00:10:35.235345       1 pvc_protection_controller.go:331] "Got event on PVC" pvc="azuredisk-2205/pvc-n9wq5"
I0903 00:10:35.235383       1 pv_controller_base.go:726] storeObjectUpdate updating claim "azuredisk-2205/pvc-n9wq5" with version 3362
... skipping 236 lines ...
I0903 00:10:54.650976       1 disruption.go:570] No PodDisruptionBudgets found for pod azuredisk-volume-tester-rwvss-5b8948d4fd-j6x5z, PodDisruptionBudget controller will avoid syncing.
I0903 00:10:54.650990       1 disruption.go:497] No matching pdb for pod "azuredisk-volume-tester-rwvss-5b8948d4fd-j6x5z"
I0903 00:10:54.651208       1 replica_set.go:457] Pod azuredisk-volume-tester-rwvss-5b8948d4fd-j6x5z updated, objectMeta {Name:azuredisk-volume-tester-rwvss-5b8948d4fd-j6x5z GenerateName:azuredisk-volume-tester-rwvss-5b8948d4fd- Namespace:azuredisk-2205 SelfLink: UID:4fdadca4-5626-4bf1-8781-72673972c39c ResourceVersion:3479 Generation:0 CreationTimestamp:2022-09-03 00:10:54 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[app:azuredisk-volume-tester-7660323324116104765 pod-template-hash:5b8948d4fd] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:azuredisk-volume-tester-rwvss-5b8948d4fd UID:89a5b88e-1e87-409b-af82-f6eb10c899f3 Controller:0xc001dab65e BlockOwnerDeletion:0xc001dab65f}] Finalizers:[] ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-09-03 00:10:54 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"89a5b88e-1e87-409b-af82-f6eb10c899f3\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"volume-tester\"}":{".":{},"f:args":{},"f:command":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/mnt/test-1\"}":{".":{},"f:mountPath":{},"f:name":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:volumes":{".":{},"k:{\"name\":\"test-volume-1\"}":{".":{},"f:name":{},"f:persistentVolumeClaim":{".":{},"f:claimName":{}}}}}} Subresource:}]} -> {Name:azuredisk-volume-tester-rwvss-5b8948d4fd-j6x5z GenerateName:azuredisk-volume-tester-rwvss-5b8948d4fd- Namespace:azuredisk-2205 SelfLink: UID:4fdadca4-5626-4bf1-8781-72673972c39c ResourceVersion:3486 Generation:0 CreationTimestamp:2022-09-03 00:10:54 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[app:azuredisk-volume-tester-7660323324116104765 pod-template-hash:5b8948d4fd] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:azuredisk-volume-tester-rwvss-5b8948d4fd UID:89a5b88e-1e87-409b-af82-f6eb10c899f3 Controller:0xc0005bd99e BlockOwnerDeletion:0xc0005bd99f}] Finalizers:[] ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-09-03 00:10:54 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"89a5b88e-1e87-409b-af82-f6eb10c899f3\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"volume-tester\"}":{".":{},"f:args":{},"f:command":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/mnt/test-1\"}":{".":{},"f:mountPath":{},"f:name":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:volumes":{".":{},"k:{\"name\":\"test-volume-1\"}":{".":{},"f:name":{},"f:persistentVolumeClaim":{".":{},"f:claimName":{}}}}}} Subresource:} {Manager:kubelet Operation:Update APIVersion:v1 Time:2022-09-03 00:10:54 +0000 UTC FieldsType:FieldsV1 
FieldsV1:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} Subresource:status}]}.
I0903 00:10:54.651715       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"azuredisk-2205/azuredisk-volume-tester-rwvss-5b8948d4fd", timestamp:time.Time{wall:0xc0bcc4c39f8bca09, ext:900837447918, loc:(*time.Location)(0x6f10040)}}
I0903 00:10:54.651901       1 controller_utils.go:938] Ignoring inactive pod azuredisk-2205/azuredisk-volume-tester-rwvss-5b8948d4fd-m7g45 in state Running, deletion time 2022-09-03 00:11:24 +0000 UTC
I0903 00:10:54.652085       1 replica_set.go:667] Finished syncing ReplicaSet "azuredisk-2205/azuredisk-volume-tester-rwvss-5b8948d4fd" (415.224µs)
I0903 00:10:54.711721       1 reconciler.go:420] "Multi-Attach error: volume is already used by pods" pods=[azuredisk-2205/azuredisk-volume-tester-rwvss-5b8948d4fd-m7g45] attachedTo=[capz-x56xig-mp-0000001] volume={VolumeToAttach:{MultiAttachErrorReported:false VolumeName:kubernetes.io/csi/disk.csi.azure.com^/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-x56xig/providers/Microsoft.Compute/disks/pvc-7e6a3dfb-f02a-495d-b7e0-83ddf3c722e0 VolumeSpec:0xc0012ad368 NodeName:capz-x56xig-mp-0000000 ScheduledPods:[&Pod{ObjectMeta:{azuredisk-volume-tester-rwvss-5b8948d4fd-j6x5z azuredisk-volume-tester-rwvss-5b8948d4fd- azuredisk-2205  4fdadca4-5626-4bf1-8781-72673972c39c 3479 0 2022-09-03 00:10:54 +0000 UTC <nil> <nil> map[app:azuredisk-volume-tester-7660323324116104765 pod-template-hash:5b8948d4fd] map[] [{apps/v1 ReplicaSet azuredisk-volume-tester-rwvss-5b8948d4fd 89a5b88e-1e87-409b-af82-f6eb10c899f3 0xc001dab65e 0xc001dab65f}] [] [{kube-controller-manager Update v1 2022-09-03 00:10:54 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"89a5b88e-1e87-409b-af82-f6eb10c899f3\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"volume-tester\"}":{".":{},"f:args":{},"f:command":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/mnt/test-1\"}":{".":{},"f:mountPath":{},"f:name":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:volumes":{".":{},"k:{\"name\":\"test-volume-1\"}":{".":{},"f:name":{},"f:persistentVolumeClaim":{".":{},"f:claimName":{}}}}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:test-volume-1,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:&PersistentVolumeClaimVolumeSource{ClaimName:pvc-n9wq5,ReadOnly:false,},RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},Volume{Name:kube-api-access-88n6l,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:volume-tester,Image:k8s.gcr.io/e2e-test-images/busybox:1.29-2,Command:[/bin/sh],Args:[-c echo 'hello world' >> /mnt/test-1/data && while true; do sleep 3600; done],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:test-volume-1,ReadOnly:false,MountPath:/mnt/test-1,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-88n6l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{kubernetes.io/os: 
linux,},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:capz-x56xig-mp-0000000,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-03 00:10:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}]}}
I0903 00:10:54.712119       1 event.go:294] "Event occurred" object="azuredisk-2205/azuredisk-volume-tester-rwvss-5b8948d4fd-j6x5z" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="FailedAttachVolume" message="Multi-Attach error for volume \"pvc-7e6a3dfb-f02a-495d-b7e0-83ddf3c722e0\" Volume is already used by pod(s) azuredisk-volume-tester-rwvss-5b8948d4fd-m7g45"
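[Editor's note] The FailedAttachVolume warning above is expected during this test step: the Azure Disk PV is ReadWriteOnce, the old pod on capz-x56xig-mp-0000001 still holds the attachment, and the attach-detach controller must wait for a detach before attaching to capz-x56xig-mp-0000000. A hedged way to observe which node currently holds the disk is to list VolumeAttachments, as in the sketch below; it assumes a standard client-go clientset, and the PV name is copied from the log purely as an example.

```go
// Hedged sketch: list VolumeAttachments to see which node still holds a
// ReadWriteOnce disk during a Multi-Attach wait like the one logged above.
// Assumes a standard client-go setup; the PV name is an example value.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "./kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	const pvName = "pvc-7e6a3dfb-f02a-495d-b7e0-83ddf3c722e0" // example from the log above

	attachments, err := cs.StorageV1().VolumeAttachments().List(
		context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, va := range attachments.Items {
		if va.Spec.Source.PersistentVolumeName != nil &&
			*va.Spec.Source.PersistentVolumeName == pvName {
			// Shows the node that still owns the attachment and whether it is attached.
			fmt.Printf("%s attached=%v node=%s\n",
				va.Name, va.Status.Attached, va.Spec.NodeName)
		}
	}
}
```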
I0903 00:10:59.506047       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-x56xig-mp-0000000"
I0903 00:11:00.176126       1 httplog.go:131] "HTTP" verb="GET" URI="/healthz" latency="152.303µs" userAgent="kube-probe/1.26+" audit-ID="" srcIP="127.0.0.1:48292" resp=200
I0903 00:11:01.063325       1 reflector.go:559] vendor/k8s.io/client-go/informers/factory.go:134: Watch close - *v1.PersistentVolumeClaim total 49 items received
I0903 00:11:01.342362       1 node_lifecycle_controller.go:1092] Node capz-x56xig-mp-0000000 ReadyCondition updated. Updating timestamp.
I0903 00:11:02.557012       1 csi_attacher.go:208] kubernetes.io/csi: probing attachment status for 1 volume(s) 
I0903 00:11:04.064599       1 reflector.go:559] vendor/k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ConfigMap total 18 items received
... skipping 3139 lines ...
I0903 00:18:02.663030       1 pv_protection_controller.go:121] Processing PV pvc-18bf6011-ec69-4b2f-9779-a029f420218d
I0903 00:18:02.669932       1 pv_controller_base.go:726] storeObjectUpdate updating volume "pvc-18bf6011-ec69-4b2f-9779-a029f420218d" with version 5041
I0903 00:18:02.670349       1 pv_controller.go:551] synchronizing PersistentVolume[pvc-18bf6011-ec69-4b2f-9779-a029f420218d]: phase: Released, bound to: "azuredisk-3231/pvc-sz8ff (uid: 18bf6011-ec69-4b2f-9779-a029f420218d)", boundByController: false
I0903 00:18:02.670584       1 pv_controller.go:585] synchronizing PersistentVolume[pvc-18bf6011-ec69-4b2f-9779-a029f420218d]: volume is bound to claim azuredisk-3231/pvc-sz8ff
I0903 00:18:02.670742       1 pv_controller.go:619] synchronizing PersistentVolume[pvc-18bf6011-ec69-4b2f-9779-a029f420218d]: claim azuredisk-3231/pvc-sz8ff not found
I0903 00:18:02.670925       1 pv_protection_controller.go:198] Got event on PV pvc-18bf6011-ec69-4b2f-9779-a029f420218d
I0903 00:18:02.673429       1 pv_protection_controller.go:173] Error removing protection finalizer from PV pvc-18bf6011-ec69-4b2f-9779-a029f420218d: Operation cannot be fulfilled on persistentvolumes "pvc-18bf6011-ec69-4b2f-9779-a029f420218d": the object has been modified; please apply your changes to the latest version and try again
I0903 00:18:02.673516       1 pv_protection_controller.go:124] Finished processing PV pvc-18bf6011-ec69-4b2f-9779-a029f420218d (10.476921ms)
E0903 00:18:02.673534       1 pv_protection_controller.go:114] PV pvc-18bf6011-ec69-4b2f-9779-a029f420218d failed with : Operation cannot be fulfilled on persistentvolumes "pvc-18bf6011-ec69-4b2f-9779-a029f420218d": the object has been modified; please apply your changes to the latest version and try again
I0903 00:18:02.673560       1 pv_protection_controller.go:121] Processing PV pvc-18bf6011-ec69-4b2f-9779-a029f420218d
I0903 00:18:02.677951       1 pv_controller_base.go:238] volume "pvc-18bf6011-ec69-4b2f-9779-a029f420218d" deleted
I0903 00:18:02.678115       1 pv_controller_base.go:589] deletion of claim "azuredisk-3231/pvc-sz8ff" was already processed
I0903 00:18:02.678703       1 pv_protection_controller.go:176] Removed protection finalizer from PV pvc-18bf6011-ec69-4b2f-9779-a029f420218d
I0903 00:18:02.678719       1 pv_protection_controller.go:124] Finished processing PV pvc-18bf6011-ec69-4b2f-9779-a029f420218d (5.115759ms)
I0903 00:18:02.678733       1 pv_protection_controller.go:121] Processing PV pvc-18bf6011-ec69-4b2f-9779-a029f420218d
... skipping 981 lines ...
I0903 00:20:53.451489       1 reflector.go:559] vendor/k8s.io/client-go/informers/factory.go:134: Watch close - *v1.RuntimeClass total 10 items received
2022/09/03 00:20:56 ===================================================

JUnit report was created: /logs/artifacts/junit_01.xml

Ran 12 of 59 Specs in 1198.264 seconds
SUCCESS! -- 12 Passed | 0 Failed | 0 Pending | 47 Skipped

You're using deprecated Ginkgo functionality:
=============================================
Ginkgo 2.0 is under active development and will introduce several new features, improvements, and a small handful of breaking changes.
A release candidate for 2.0 is now available and 2.0 should GA in Fall 2021.  Please give the RC a try and send us feedback!
  - To learn more, view the migration guide at https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md
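[Editor's note] The deprecation banner above is emitted by Ginkgo v1; the migration guide it links to describes moving suites to the v2 module path and entry points. The following is a minimal, hedged illustration of what a v2-style suite bootstrap looks like; the package, suite, and spec names are placeholders and do not reflect this repository's actual test code.

```go
// Illustrative Ginkgo v2 suite skeleton of the kind the migration guide
// above describes; names are placeholders, not this repo's suite.
package e2e_test

import (
	"testing"

	. "github.com/onsi/ginkgo/v2"
	. "github.com/onsi/gomega"
)

func TestE2E(t *testing.T) {
	RegisterFailHandler(Fail)
	RunSpecs(t, "AzureDisk CSI Driver E2E Suite")
}

var _ = Describe("Dynamic Provisioning", func() {
	It("provisions and mounts a volume", func() {
		// Real specs would exercise the driver; this only shows the v2 layout.
		Expect(1 + 1).To(Equal(2))
	})
})
```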
... skipping 44 lines ...
INFO: Creating log watcher for controller capz-system/capz-controller-manager, pod capz-controller-manager-858df9cd95-nfkfp, container manager
STEP: Dumping workload cluster default/capz-x56xig logs
Sep  3 00:22:30.998: INFO: Collecting logs for Linux node capz-x56xig-control-plane-k4sjk in cluster capz-x56xig in namespace default

Sep  3 00:23:30.999: INFO: Collecting boot logs for AzureMachine capz-x56xig-control-plane-k4sjk

Failed to get logs for machine capz-x56xig-control-plane-mzdd8, cluster default/capz-x56xig: open /etc/azure-ssh/azure-ssh: no such file or directory
Sep  3 00:23:32.092: INFO: Collecting logs for Linux node capz-x56xig-mp-0000000 in cluster capz-x56xig in namespace default

Sep  3 00:24:32.094: INFO: Collecting boot logs for VMSS instance 0 of scale set capz-x56xig-mp-0

Sep  3 00:24:32.476: INFO: Collecting logs for Linux node capz-x56xig-mp-0000001 in cluster capz-x56xig in namespace default

Sep  3 00:25:32.479: INFO: Collecting boot logs for VMSS instance 1 of scale set capz-x56xig-mp-0

Failed to get logs for machine pool capz-x56xig-mp-0, cluster default/capz-x56xig: open /etc/azure-ssh/azure-ssh: no such file or directory
STEP: Dumping workload cluster default/capz-x56xig kube-system pod logs
STEP: Collecting events for Pod kube-system/coredns-84994b8c4-24sbz
STEP: Creating log watcher for controller kube-system/calico-node-pk9sf, container calico-node
STEP: Creating log watcher for controller kube-system/csi-azuredisk-controller-6dbf65647f-5gm65, container csi-snapshotter
STEP: Collecting events for Pod kube-system/coredns-84994b8c4-8h7nq
STEP: Collecting events for Pod kube-system/calico-node-pk9sf
... skipping 18 lines ...
STEP: Creating log watcher for controller kube-system/csi-azuredisk-node-65ldf, container node-driver-registrar
STEP: Creating log watcher for controller kube-system/csi-azuredisk-node-lzklf, container node-driver-registrar
STEP: Creating log watcher for controller kube-system/csi-azuredisk-node-lzklf, container azuredisk
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-x56xig-control-plane-k4sjk, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/csi-azuredisk-controller-6dbf65647f-5gm65, container csi-resizer
STEP: Collecting events for Pod kube-system/kube-controller-manager-capz-x56xig-control-plane-k4sjk
STEP: failed to find events of Pod "kube-controller-manager-capz-x56xig-control-plane-k4sjk"
STEP: Creating log watcher for controller kube-system/kube-proxy-8zxrz, container kube-proxy
STEP: Creating log watcher for controller kube-system/csi-azuredisk-node-65ldf, container azuredisk
STEP: Creating log watcher for controller kube-system/csi-azuredisk-node-lzklf, container liveness-probe
STEP: Creating log watcher for controller kube-system/csi-azuredisk-controller-6dbf65647f-5gm65, container liveness-probe
STEP: Collecting events for Pod kube-system/csi-azuredisk-node-lzklf
STEP: Creating log watcher for controller kube-system/csi-azuredisk-controller-6dbf65647f-5gm65, container azuredisk
... skipping 4 lines ...
STEP: Creating log watcher for controller kube-system/etcd-capz-x56xig-control-plane-k4sjk, container etcd
STEP: Creating log watcher for controller kube-system/kube-proxy-g69gs, container kube-proxy
STEP: Creating log watcher for controller kube-system/csi-snapshot-controller-84ccd6c756-rmsmn, container csi-snapshot-controller
STEP: Creating log watcher for controller kube-system/csi-azuredisk-controller-6dbf65647f-tndlw, container csi-snapshotter
STEP: Collecting events for Pod kube-system/kube-scheduler-capz-x56xig-control-plane-k4sjk
STEP: Creating log watcher for controller kube-system/csi-azuredisk-controller-6dbf65647f-tndlw, container csi-provisioner
STEP: failed to find events of Pod "kube-scheduler-capz-x56xig-control-plane-k4sjk"
STEP: Collecting events for Pod kube-system/csi-snapshot-controller-84ccd6c756-rmsmn
STEP: Collecting events for Pod kube-system/csi-snapshot-controller-84ccd6c756-8j2r5
STEP: Collecting events for Pod kube-system/csi-azuredisk-controller-6dbf65647f-5gm65
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-x56xig-control-plane-k4sjk, container kube-scheduler
STEP: Collecting events for Pod kube-system/etcd-capz-x56xig-control-plane-k4sjk
STEP: failed to find events of Pod "etcd-capz-x56xig-control-plane-k4sjk"
STEP: Creating log watcher for controller kube-system/csi-snapshot-controller-84ccd6c756-8j2r5, container csi-snapshot-controller
STEP: Collecting events for Pod kube-system/kube-proxy-8zxrz
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-x56xig-control-plane-k4sjk, container kube-apiserver
STEP: Creating log watcher for controller kube-system/kube-proxy-hwb4z, container kube-proxy
STEP: Collecting events for Pod kube-system/csi-azuredisk-node-rz868
STEP: Collecting events for Pod kube-system/kube-proxy-hwb4z
STEP: Collecting events for Pod kube-system/kube-apiserver-capz-x56xig-control-plane-k4sjk
STEP: failed to find events of Pod "kube-apiserver-capz-x56xig-control-plane-k4sjk"
STEP: Collecting events for Pod kube-system/kube-proxy-g69gs
STEP: Fetching activity logs took 2.885562765s
================ REDACTING LOGS ================
All sensitive variables are redacted
cluster.cluster.x-k8s.io "capz-x56xig" deleted
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/kind-v0.14.0 delete cluster --name=capz || true
... skipping 13 lines ...