Result: FAILURE
Tests: 1 failed / 11 succeeded
Started: 2022-06-25 19:35
Elapsed: 55m12s
Revision: release-1.3

Test Failures


AzureDisk CSI Driver End-to-End Tests Dynamic Provisioning [single-az] should create a deployment object, write and read to it, delete the pod and write and read to it again [kubernetes.io/azure-disk] [disk.csi.azure.com] [Windows] 7m54s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=AzureDisk\sCSI\sDriver\sEnd\-to\-End\sTests\sDynamic\sProvisioning\s\[single\-az\]\sshould\screate\sa\sdeployment\sobject\,\swrite\sand\sread\sto\sit\,\sdelete\sthe\spod\sand\swrite\sand\sread\sto\sit\sagain\s\[kubernetes\.io\/azure\-disk\]\s\[disk\.csi\.azure\.com\]\s\[Windows\]$'
/home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/dynamic_provisioning_test.go:397
Unexpected error:
    <*errors.errorString | 0xc000118840>: {
        s: "error waiting for deployment \"azuredisk-volume-tester-lc2bk\" status to match expectation: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:\"Available\", Status:\"False\", LastUpdateTime:time.Date(2022, time.June, 25, 20, 2, 53, 0, time.Local), LastTransitionTime:time.Date(2022, time.June, 25, 20, 2, 53, 0, time.Local), Reason:\"MinimumReplicasUnavailable\", Message:\"Deployment does not have minimum availability.\"}, v1.DeploymentCondition{Type:\"Progressing\", Status:\"True\", LastUpdateTime:time.Date(2022, time.June, 25, 20, 2, 53, 0, time.Local), LastTransitionTime:time.Date(2022, time.June, 25, 20, 2, 53, 0, time.Local), Reason:\"ReplicaSetUpdated\", Message:\"ReplicaSet \\\"azuredisk-volume-tester-lc2bk-754c97cc\\\" is progressing.\"}}, CollisionCount:(*int32)(nil)}",
    }
    error waiting for deployment "azuredisk-volume-tester-lc2bk" status to match expectation: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.June, 25, 20, 2, 53, 0, time.Local), LastTransitionTime:time.Date(2022, time.June, 25, 20, 2, 53, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.June, 25, 20, 2, 53, 0, time.Local), LastTransitionTime:time.Date(2022, time.June, 25, 20, 2, 53, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"azuredisk-volume-tester-lc2bk-754c97cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
occurred
/home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/testsuites/testsuites.go:503



11 Passed Tests (collapsed)

47 Skipped Tests (collapsed)

Error lines from build-log.txt

... skipping 622 lines ...
certificate.cert-manager.io "selfsigned-cert" deleted
# Create secret for AzureClusterIdentity
./hack/create-identity-secret.sh
make[2]: Entering directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make[2]: Nothing to be done for 'kubectl'.
make[2]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
Error from server (NotFound): secrets "cluster-identity-secret" not found
secret/cluster-identity-secret created
secret/cluster-identity-secret labeled
# Deploy CAPI
curl --retry 3 -sSL https://github.com/kubernetes-sigs/cluster-api/releases/download/v1.1.4/cluster-api-components.yaml | /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/envsubst-v2.0.0-20210730161058-179042472c46 | kubectl apply -f -
namespace/capi-system created
customresourcedefinition.apiextensions.k8s.io/clusterclasses.cluster.x-k8s.io created
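The deploy step above pipes the released CAPI manifest through `envsubst` to fill in environment variables before `kubectl apply`. A minimal sketch of that templating step, using `sed` as a stand-in for `envsubst` (the variable name and value here are hypothetical, not from this job):

```shell
# Substitute ${CLUSTER_NAME} in a manifest fragment, as envsubst would.
export CLUSTER_NAME=capz-demo
printf 'name: ${CLUSTER_NAME}\n' | sed "s/\${CLUSTER_NAME}/$CLUSTER_NAME/"
# prints: name: capz-demo
```

In the real pipeline the substituted stream goes straight to `kubectl apply -f -`, so no templated file ever touches disk.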
... skipping 125 lines ...
# Wait for the kubeconfig to become available.
timeout --foreground 300 bash -c "while ! kubectl get secrets | grep capz-52trnh-kubeconfig; do sleep 1; done"
capz-52trnh-kubeconfig                 cluster.x-k8s.io/secret               1      1s
# Get kubeconfig and store it locally.
kubectl get secrets capz-52trnh-kubeconfig -o json | jq -r .data.value | base64 --decode > ./kubeconfig
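The command above extracts the kubeconfig from the secret's `.data.value` field, which Kubernetes stores base64-encoded. The decode step in isolation, on a stand-in payload (the encoded string below is illustrative, not the job's real kubeconfig):

```shell
# Secret data fields are base64-encoded; decoding recovers the plain text.
encoded="YXBpVmVyc2lvbjogdjE="
printf '%s' "$encoded" | base64 --decode
# prints: apiVersion: v1
```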
timeout --foreground 600 bash -c "while ! kubectl --kubeconfig=./kubeconfig get nodes | grep control-plane; do sleep 1; done"
error: the server doesn't have a resource type "nodes"
capz-52trnh-control-plane-42w5n   NotReady   control-plane,master   1s    v1.23.9-rc.0.3+d11725a28e7e6e
run "kubectl --kubeconfig=./kubeconfig ..." to work with the new target cluster
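The two waits above use the same `timeout --foreground N bash -c "while ! <check>; do sleep 1; done"` pattern: poll a check until it succeeds, or let `timeout` kill the loop (exit status 124). A self-contained sketch of the pattern with a file-existence check standing in for the `kubectl` probes:

```shell
# Poll-until-ready, bounded by timeout; the marker file plays the role
# of the secret/node the real loops grep for.
marker=$(mktemp -u)
(sleep 1; touch "$marker") &   # condition becomes true after ~1s
timeout --foreground 10 bash -c "while ! test -e '$marker'; do sleep 0.2; done" \
  && echo "condition met" || echo "timed out"
rm -f "$marker"
# prints: condition met
```

The transient `error: the server doesn't have a resource type "nodes"` line is expected with this pattern: early iterations fail while the API server is still coming up, and the loop simply retries.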
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
Waiting for 1 control plane machine(s), 2 worker machine(s), and  windows machine(s) to become Ready
node/capz-52trnh-control-plane-42w5n condition met
node/capz-52trnh-mp-0000000 condition met
... skipping 131 lines ...

    test case is only available for CSI drivers

    /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/suite_test.go:304
------------------------------
Pre-Provisioned [single-az] 
  should fail when maxShares is invalid [disk.csi.azure.com][windows]
  /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/pre_provisioning_test.go:163
STEP: Creating a kubernetes client
Jun 25 19:51:41.933: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/kubeconfig
STEP: Building a namespace api object, basename azuredisk
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
... skipping 3 lines ...

S [SKIPPING] [1.023 seconds]
Pre-Provisioned
/home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/pre_provisioning_test.go:37
  [single-az]
  /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/pre_provisioning_test.go:69
    should fail when maxShares is invalid [disk.csi.azure.com][windows] [It]
    /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/pre_provisioning_test.go:163

    test case is only available for CSI drivers

    /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/suite_test.go:304
------------------------------
... skipping 85 lines ...
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
Jun 25 19:51:47.650: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-mdh9k" in namespace "azuredisk-1353" to be "Succeeded or Failed"
Jun 25 19:51:47.759: INFO: Pod "azuredisk-volume-tester-mdh9k": Phase="Pending", Reason="", readiness=false. Elapsed: 108.384518ms
Jun 25 19:51:49.869: INFO: Pod "azuredisk-volume-tester-mdh9k": Phase="Pending", Reason="", readiness=false. Elapsed: 2.218580729s
Jun 25 19:51:51.978: INFO: Pod "azuredisk-volume-tester-mdh9k": Phase="Pending", Reason="", readiness=false. Elapsed: 4.327541669s
Jun 25 19:51:54.087: INFO: Pod "azuredisk-volume-tester-mdh9k": Phase="Pending", Reason="", readiness=false. Elapsed: 6.436441561s
Jun 25 19:51:56.196: INFO: Pod "azuredisk-volume-tester-mdh9k": Phase="Pending", Reason="", readiness=false. Elapsed: 8.545709683s
Jun 25 19:51:58.306: INFO: Pod "azuredisk-volume-tester-mdh9k": Phase="Pending", Reason="", readiness=false. Elapsed: 10.655418869s
... skipping 4 lines ...
Jun 25 19:52:08.859: INFO: Pod "azuredisk-volume-tester-mdh9k": Phase="Pending", Reason="", readiness=false. Elapsed: 21.208364392s
Jun 25 19:52:10.969: INFO: Pod "azuredisk-volume-tester-mdh9k": Phase="Pending", Reason="", readiness=false. Elapsed: 23.318495322s
Jun 25 19:52:13.084: INFO: Pod "azuredisk-volume-tester-mdh9k": Phase="Pending", Reason="", readiness=false. Elapsed: 25.433732624s
Jun 25 19:52:15.199: INFO: Pod "azuredisk-volume-tester-mdh9k": Phase="Running", Reason="", readiness=false. Elapsed: 27.548197786s
Jun 25 19:52:17.314: INFO: Pod "azuredisk-volume-tester-mdh9k": Phase="Succeeded", Reason="", readiness=false. Elapsed: 29.663156996s
STEP: Saw pod success
Jun 25 19:52:17.314: INFO: Pod "azuredisk-volume-tester-mdh9k" satisfied condition "Succeeded or Failed"
Jun 25 19:52:17.314: INFO: deleting Pod "azuredisk-1353"/"azuredisk-volume-tester-mdh9k"
Jun 25 19:52:17.438: INFO: Pod azuredisk-volume-tester-mdh9k has the following logs: hello world

STEP: Deleting pod azuredisk-volume-tester-mdh9k in namespace azuredisk-1353
STEP: validating provisioned PV
STEP: checking the PV
... skipping 98 lines ...
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod has 'FailedMount' event
Jun 25 19:53:22.226: INFO: deleting Pod "azuredisk-1563"/"azuredisk-volume-tester-lgxt7"
Jun 25 19:53:22.336: INFO: Error getting logs for pod azuredisk-volume-tester-lgxt7: the server rejected our request for an unknown reason (get pods azuredisk-volume-tester-lgxt7)
STEP: Deleting pod azuredisk-volume-tester-lgxt7 in namespace azuredisk-1563
STEP: validating provisioned PV
STEP: checking the PV
Jun 25 19:53:22.665: INFO: deleting PVC "azuredisk-1563"/"pvc-zkpcp"
Jun 25 19:53:22.665: INFO: Deleting PersistentVolumeClaim "pvc-zkpcp"
STEP: waiting for claim's PV "pvc-1dc3ae99-c173-43b4-bf3c-9bf538a80cc9" to be deleted
... skipping 58 lines ...
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
Jun 25 19:55:53.181: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-thxtc" in namespace "azuredisk-7463" to be "Succeeded or Failed"
Jun 25 19:55:53.289: INFO: Pod "azuredisk-volume-tester-thxtc": Phase="Pending", Reason="", readiness=false. Elapsed: 108.473094ms
Jun 25 19:55:55.398: INFO: Pod "azuredisk-volume-tester-thxtc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.217373319s
Jun 25 19:55:57.508: INFO: Pod "azuredisk-volume-tester-thxtc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.326759829s
Jun 25 19:55:59.618: INFO: Pod "azuredisk-volume-tester-thxtc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.437134587s
Jun 25 19:56:01.729: INFO: Pod "azuredisk-volume-tester-thxtc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.547883546s
Jun 25 19:56:03.838: INFO: Pod "azuredisk-volume-tester-thxtc": Phase="Pending", Reason="", readiness=false. Elapsed: 10.657190041s
... skipping 2 lines ...
Jun 25 19:56:10.168: INFO: Pod "azuredisk-volume-tester-thxtc": Phase="Pending", Reason="", readiness=false. Elapsed: 16.987545823s
Jun 25 19:56:12.283: INFO: Pod "azuredisk-volume-tester-thxtc": Phase="Pending", Reason="", readiness=false. Elapsed: 19.102317552s
Jun 25 19:56:14.398: INFO: Pod "azuredisk-volume-tester-thxtc": Phase="Running", Reason="", readiness=true. Elapsed: 21.21755722s
Jun 25 19:56:16.514: INFO: Pod "azuredisk-volume-tester-thxtc": Phase="Running", Reason="", readiness=false. Elapsed: 23.332809629s
Jun 25 19:56:18.628: INFO: Pod "azuredisk-volume-tester-thxtc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 25.447397599s
STEP: Saw pod success
Jun 25 19:56:18.628: INFO: Pod "azuredisk-volume-tester-thxtc" satisfied condition "Succeeded or Failed"
Jun 25 19:56:18.628: INFO: deleting Pod "azuredisk-7463"/"azuredisk-volume-tester-thxtc"
Jun 25 19:56:18.755: INFO: Pod azuredisk-volume-tester-thxtc has the following logs: e2e-test

STEP: Deleting pod azuredisk-volume-tester-thxtc in namespace azuredisk-7463
STEP: validating provisioned PV
STEP: checking the PV
... skipping 40 lines ...
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod's command exits with an error
Jun 25 19:57:02.258: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-hh2d2" in namespace "azuredisk-9241" to be "Error status code"
Jun 25 19:57:02.366: INFO: Pod "azuredisk-volume-tester-hh2d2": Phase="Pending", Reason="", readiness=false. Elapsed: 108.416729ms
Jun 25 19:57:04.477: INFO: Pod "azuredisk-volume-tester-hh2d2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.219090846s
Jun 25 19:57:06.588: INFO: Pod "azuredisk-volume-tester-hh2d2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.330172143s
Jun 25 19:57:08.697: INFO: Pod "azuredisk-volume-tester-hh2d2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.439776339s
Jun 25 19:57:10.806: INFO: Pod "azuredisk-volume-tester-hh2d2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.548900414s
Jun 25 19:57:12.916: INFO: Pod "azuredisk-volume-tester-hh2d2": Phase="Pending", Reason="", readiness=false. Elapsed: 10.658061418s
Jun 25 19:57:15.025: INFO: Pod "azuredisk-volume-tester-hh2d2": Phase="Pending", Reason="", readiness=false. Elapsed: 12.767864432s
Jun 25 19:57:17.134: INFO: Pod "azuredisk-volume-tester-hh2d2": Phase="Pending", Reason="", readiness=false. Elapsed: 14.876643237s
Jun 25 19:57:19.244: INFO: Pod "azuredisk-volume-tester-hh2d2": Phase="Pending", Reason="", readiness=false. Elapsed: 16.986191901s
Jun 25 19:57:21.360: INFO: Pod "azuredisk-volume-tester-hh2d2": Phase="Pending", Reason="", readiness=false. Elapsed: 19.101996427s
Jun 25 19:57:23.475: INFO: Pod "azuredisk-volume-tester-hh2d2": Phase="Failed", Reason="", readiness=false. Elapsed: 21.217261016s
STEP: Saw pod failure
Jun 25 19:57:23.475: INFO: Pod "azuredisk-volume-tester-hh2d2" satisfied condition "Error status code"
STEP: checking that pod logs contain expected message
Jun 25 19:57:23.586: INFO: deleting Pod "azuredisk-9241"/"azuredisk-volume-tester-hh2d2"
Jun 25 19:57:23.697: INFO: Pod azuredisk-volume-tester-hh2d2 has the following logs: touch: /mnt/test-1/data: Read-only file system

STEP: Deleting pod azuredisk-volume-tester-hh2d2 in namespace azuredisk-9241
STEP: validating provisioned PV
... skipping 372 lines ...
Jun 25 20:10:43.681: INFO: At 2022-06-25 20:02:53 +0000 UTC - event for azuredisk-volume-tester-lc2bk-754c97cc: {replicaset-controller } SuccessfulCreate: Created pod: azuredisk-volume-tester-lc2bk-754c97cc-98rxv
Jun 25 20:10:43.681: INFO: At 2022-06-25 20:02:53 +0000 UTC - event for pvc-x6gwz: {disk.csi.azure.com_capz-52trnh-mp-0000000_f8e37a38-c853-4714-920c-0bd39ee8c7da } Provisioning: External provisioner is provisioning volume for claim "azuredisk-2205/pvc-x6gwz"
Jun 25 20:10:43.681: INFO: At 2022-06-25 20:02:53 +0000 UTC - event for pvc-x6gwz: {persistentvolume-controller } ExternalProvisioning: waiting for a volume to be created, either by external provisioner "disk.csi.azure.com" or manually created by system administrator
Jun 25 20:10:43.681: INFO: At 2022-06-25 20:02:53 +0000 UTC - event for pvc-x6gwz: {persistentvolume-controller } WaitForFirstConsumer: waiting for first consumer to be created before binding
Jun 25 20:10:43.681: INFO: At 2022-06-25 20:02:56 +0000 UTC - event for azuredisk-volume-tester-lc2bk-754c97cc-98rxv: {default-scheduler } Scheduled: Successfully assigned azuredisk-2205/azuredisk-volume-tester-lc2bk-754c97cc-98rxv to capz-52trnh-mp-0000001
Jun 25 20:10:43.681: INFO: At 2022-06-25 20:02:56 +0000 UTC - event for pvc-x6gwz: {disk.csi.azure.com_capz-52trnh-mp-0000000_f8e37a38-c853-4714-920c-0bd39ee8c7da } ProvisioningSucceeded: Successfully provisioned volume pvc-91f2c987-b4b8-451e-9a41-044dfe1dc81c
Jun 25 20:10:43.681: INFO: At 2022-06-25 20:04:56 +0000 UTC - event for azuredisk-volume-tester-lc2bk-754c97cc-98rxv: {attachdetach-controller } FailedAttachVolume: AttachVolume.Attach failed for volume "pvc-91f2c987-b4b8-451e-9a41-044dfe1dc81c" : Attach timeout for volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-52trnh/providers/Microsoft.Compute/disks/pvc-91f2c987-b4b8-451e-9a41-044dfe1dc81c
Jun 25 20:10:43.681: INFO: At 2022-06-25 20:04:59 +0000 UTC - event for azuredisk-volume-tester-lc2bk-754c97cc-98rxv: {kubelet capz-52trnh-mp-0000001} FailedMount: Unable to attach or mount volumes: unmounted volumes=[test-volume-1], unattached volumes=[test-volume-1 kube-api-access-crdwx]: timed out waiting for the condition
Jun 25 20:10:43.789: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
Jun 25 20:10:43.789: INFO: 
Jun 25 20:10:43.943: INFO: 
Logging node info for node capz-52trnh-control-plane-42w5n
Jun 25 20:10:44.062: INFO: Node Info: &Node{ObjectMeta:{capz-52trnh-control-plane-42w5n    b6062010-d952-459a-a2e9-ce9d99c09094 4211 0 2022-06-25 19:46:48 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_D2s_v3 beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:westeurope failure-domain.beta.kubernetes.io/zone:westeurope-3 kubernetes.io/arch:amd64 kubernetes.io/hostname:capz-52trnh-control-plane-42w5n kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers: node.kubernetes.io/instance-type:Standard_D2s_v3 topology.disk.csi.azure.com/zone:westeurope-3 topology.kubernetes.io/region:westeurope topology.kubernetes.io/zone:westeurope-3] map[cluster.x-k8s.io/cluster-name:capz-52trnh cluster.x-k8s.io/cluster-namespace:default cluster.x-k8s.io/machine:capz-52trnh-control-plane-tgwcg cluster.x-k8s.io/owner-kind:KubeadmControlPlane cluster.x-k8s.io/owner-name:capz-52trnh-control-plane csi.volume.kubernetes.io/nodeid:{"disk.csi.azure.com":"capz-52trnh-control-plane-42w5n"} kubeadm.alpha.kubernetes.io/cri-socket:/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.0.0.4/16 projectcalico.org/IPv4VXLANTunnelAddr:192.168.60.64 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2022-06-25 19:46:48 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {kubelet Update v1 2022-06-25 19:46:48 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {manager Update v1 2022-06-25 19:47:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cluster.x-k8s.io/cluster-name":{},"f:cluster.x-k8s.io/cluster-namespace":{},"f:cluster.x-k8s.io/machine":{},"f:cluster.x-k8s.io/owner-kind":{},"f:cluster.x-k8s.io/owner-name":{}}}} } {kube-controller-manager Update v1 2022-06-25 19:47:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:taints":{}}} } {Go-http-client Update v1 2022-06-25 19:47:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:projectcalico.org/IPv4Address":{},"f:projectcalico.org/IPv4VXLANTunnelAddr":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-06-25 19:49:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.disk.csi.azure.com/zone":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} 
status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:azure:///subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-52trnh/providers/Microsoft.Compute/virtualMachines/capz-52trnh-control-plane-42w5n,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{4 0} {<nil>} 4 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{133018140672 0} {<nil>}  BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{8344723456 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{4 0} {<nil>} 4 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{119716326407 0} {<nil>} 119716326407 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{8239865856 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-25 19:47:45 +0000 UTC,LastTransitionTime:2022-06-25 19:47:45 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-25 20:05:48 +0000 UTC,LastTransitionTime:2022-06-25 19:46:35 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-25 20:05:48 +0000 UTC,LastTransitionTime:2022-06-25 19:46:35 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-25 20:05:48 +0000 UTC,LastTransitionTime:2022-06-25 19:46:35 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID 
available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-25 20:05:48 +0000 UTC,LastTransitionTime:2022-06-25 19:47:36 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:capz-52trnh-control-plane-42w5n,},NodeAddress{Type:InternalIP,Address:10.0.0.4,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5627b227d837499582b30577e2292dde,SystemUUID:e82bcb69-c770-6b45-828e-3242a84d2493,BootID:d4333410-d9e2-4b0e-9376-c71d1509c6f5,KernelVersion:5.4.0-1085-azure,OSImage:Ubuntu 18.04.6 LTS,ContainerRuntimeVersion:containerd://1.6.2,KubeletVersion:v1.23.9-rc.0.3+d11725a28e7e6e,KubeProxyVersion:v1.23.9-rc.0.3+d11725a28e7e6e,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-apiserver:v1.23.9-rc.0.3_d11725a28e7e6e k8s.gcr.io/kube-apiserver-amd64:v1.23.9-rc.0.3_d11725a28e7e6e k8s.gcr.io/kube-apiserver:v1.23.9-rc.0.3_d11725a28e7e6e],SizeBytes:133592668,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-controller-manager:v1.23.9-rc.0.3_d11725a28e7e6e k8s.gcr.io/kube-controller-manager-amd64:v1.23.9-rc.0.3_d11725a28e7e6e k8s.gcr.io/kube-controller-manager:v1.23.9-rc.0.3_d11725a28e7e6e],SizeBytes:123414466,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-proxy:v1.23.9-rc.0.3_d11725a28e7e6e k8s.gcr.io/kube-proxy-amd64:v1.23.9-rc.0.3_d11725a28e7e6e k8s.gcr.io/kube-proxy:v1.23.9-rc.0.3_d11725a28e7e6e],SizeBytes:114210871,},ContainerImage{Names:[docker.io/calico/cni@sha256:914823d144204288f881e49b93b6852febfe669074cd4e2a782860981615f521 docker.io/calico/cni:v3.23.0],SizeBytes:110494683,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:13f53ed1d91e2e11aac476ee9a0269fdda6cc4874eba903efd40daf50c55eee5 
k8s.gcr.io/etcd:3.5.3-0],SizeBytes:102143581,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:64b9ea357325d5db9f8a723dcf503b5a449177b17ac87d69481e126bb724c263 k8s.gcr.io/etcd:3.5.1-0],SizeBytes:98888614,},ContainerImage{Names:[mcr.microsoft.com/k8s/csi/azuredisk-csi@sha256:9c81e3704964693f18dede0cd04dd651b6b995eac8b0d85ab8199c92a03b3b56 mcr.microsoft.com/k8s/csi/azuredisk-csi:latest],SizeBytes:88161049,},ContainerImage{Names:[docker.io/calico/node@sha256:4763820ecb4d8e82483a2ffabfec7fcded9603318692df210a778d223a4d7474 docker.io/calico/node:v3.23.0],SizeBytes:71573794,},ContainerImage{Names:[docker.io/calico/kube-controllers@sha256:78bc199299f966b0694dc4044501aee2d7ebd6862b2b0a00bca3ee8d3813c82f docker.io/calico/kube-controllers:v3.23.0],SizeBytes:56343954,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-scheduler:v1.23.9-rc.0.3_d11725a28e7e6e k8s.gcr.io/kube-scheduler-amd64:v1.23.9-rc.0.3_d11725a28e7e6e k8s.gcr.io/kube-scheduler:v1.23.9-rc.0.3_d11725a28e7e6e],SizeBytes:51931073,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:7e75c20c0fb0a334fa364546ece4c11a61a7595ce2e27de265cacb4e7ccc7f9f k8s.gcr.io/kube-proxy:v1.24.2],SizeBytes:39515830,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:433696d8a90870c405fc2d42020aff0966fb3f1c59bdd1f5077f41335b327c9a k8s.gcr.io/kube-apiserver:v1.24.2],SizeBytes:33795763,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:d255427f14c9236088c22cd94eb434d7c6a05f615636eac0b9681566cd142753 k8s.gcr.io/kube-controller-manager:v1.24.2],SizeBytes:31035052,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:b5bc69ac1e173a58a2b3af11ba65057ff2b71de25d0f93ab947e16714a896a1f k8s.gcr.io/kube-scheduler:v1.24.2],SizeBytes:15488980,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e 
k8s.gcr.io/coredns/coredns:v1.8.6],SizeBytes:13585107,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes-csi/csi-node-driver-registrar@sha256:2fbd1e1a0538a06f2061afd45975df70c942654aa7f86e920720169ee439c2d6 mcr.microsoft.com/oss/kubernetes-csi/csi-node-driver-registrar:v2.5.1],SizeBytes:9578961,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes-csi/livenessprobe@sha256:31547791294872570393470991481c2477a311031d3a03e0ae54eb164347dc34 mcr.microsoft.com/oss/kubernetes-csi/livenessprobe:v2.7.0],SizeBytes:8689744,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c k8s.gcr.io/pause:3.7],SizeBytes:311278,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
... skipping 91 lines ...
/home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/dynamic_provisioning_test.go:40
  [single-az]
  /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/dynamic_provisioning_test.go:43
    should create a deployment object, write and read to it, delete the pod and write and read to it again [kubernetes.io/azure-disk] [disk.csi.azure.com] [Windows] [It]
    /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/dynamic_provisioning_test.go:397

    Unexpected error:
        <*errors.errorString | 0xc000118840>: {
            s: "error waiting for deployment \"azuredisk-volume-tester-lc2bk\" status to match expectation: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:\"Available\", Status:\"False\", LastUpdateTime:time.Date(2022, time.June, 25, 20, 2, 53, 0, time.Local), LastTransitionTime:time.Date(2022, time.June, 25, 20, 2, 53, 0, time.Local), Reason:\"MinimumReplicasUnavailable\", Message:\"Deployment does not have minimum availability.\"}, v1.DeploymentCondition{Type:\"Progressing\", Status:\"True\", LastUpdateTime:time.Date(2022, time.June, 25, 20, 2, 53, 0, time.Local), LastTransitionTime:time.Date(2022, time.June, 25, 20, 2, 53, 0, time.Local), Reason:\"ReplicaSetUpdated\", Message:\"ReplicaSet \\\"azuredisk-volume-tester-lc2bk-754c97cc\\\" is progressing.\"}}, CollisionCount:(*int32)(nil)}",
        }
        error waiting for deployment "azuredisk-volume-tester-lc2bk" status to match expectation: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.June, 25, 20, 2, 53, 0, time.Local), LastTransitionTime:time.Date(2022, time.June, 25, 20, 2, 53, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.June, 25, 20, 2, 53, 0, time.Local), LastTransitionTime:time.Date(2022, time.June, 25, 20, 2, 53, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"azuredisk-volume-tester-lc2bk-754c97cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
    occurred

    /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/testsuites/testsuites.go:503
------------------------------
Dynamic Provisioning [single-az] 
  should delete PV with reclaimPolicy "Delete" [kubernetes.io/azure-disk] [disk.csi.azure.com] [Windows]
... skipping 149 lines ...
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
Jun 25 20:11:04.688: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-kxs6v" in namespace "azuredisk-1387" to be "Succeeded or Failed"
Jun 25 20:11:04.797: INFO: Pod "azuredisk-volume-tester-kxs6v": Phase="Pending", Reason="", readiness=false. Elapsed: 108.252894ms
Jun 25 20:11:06.907: INFO: Pod "azuredisk-volume-tester-kxs6v": Phase="Pending", Reason="", readiness=false. Elapsed: 2.218056881s
Jun 25 20:11:09.018: INFO: Pod "azuredisk-volume-tester-kxs6v": Phase="Pending", Reason="", readiness=false. Elapsed: 4.329930852s
Jun 25 20:11:11.131: INFO: Pod "azuredisk-volume-tester-kxs6v": Phase="Pending", Reason="", readiness=false. Elapsed: 6.442490645s
Jun 25 20:11:13.243: INFO: Pod "azuredisk-volume-tester-kxs6v": Phase="Pending", Reason="", readiness=false. Elapsed: 8.554399078s
Jun 25 20:11:15.355: INFO: Pod "azuredisk-volume-tester-kxs6v": Phase="Pending", Reason="", readiness=false. Elapsed: 10.666331999s
... skipping 5 lines ...
Jun 25 20:11:28.024: INFO: Pod "azuredisk-volume-tester-kxs6v": Phase="Pending", Reason="", readiness=false. Elapsed: 23.335411373s
Jun 25 20:11:30.135: INFO: Pod "azuredisk-volume-tester-kxs6v": Phase="Pending", Reason="", readiness=false. Elapsed: 25.446214182s
Jun 25 20:11:32.247: INFO: Pod "azuredisk-volume-tester-kxs6v": Phase="Pending", Reason="", readiness=false. Elapsed: 27.55871592s
Jun 25 20:11:34.359: INFO: Pod "azuredisk-volume-tester-kxs6v": Phase="Pending", Reason="", readiness=false. Elapsed: 29.670276952s
Jun 25 20:11:36.470: INFO: Pod "azuredisk-volume-tester-kxs6v": Phase="Succeeded", Reason="", readiness=false. Elapsed: 31.781239295s
STEP: Saw pod success
Jun 25 20:11:36.470: INFO: Pod "azuredisk-volume-tester-kxs6v" satisfied condition "Succeeded or Failed"
Jun 25 20:11:36.470: INFO: deleting Pod "azuredisk-1387"/"azuredisk-volume-tester-kxs6v"
Jun 25 20:11:36.581: INFO: Pod azuredisk-volume-tester-kxs6v has the following logs: hello world
hello world
hello world

STEP: Deleting pod azuredisk-volume-tester-kxs6v in namespace azuredisk-1387
... skipping 69 lines ...
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
Jun 25 20:12:31.805: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-77dq4" in namespace "azuredisk-4547" to be "Succeeded or Failed"
Jun 25 20:12:31.914: INFO: Pod "azuredisk-volume-tester-77dq4": Phase="Pending", Reason="", readiness=false. Elapsed: 108.478867ms
Jun 25 20:12:34.024: INFO: Pod "azuredisk-volume-tester-77dq4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.218705972s
Jun 25 20:12:36.134: INFO: Pod "azuredisk-volume-tester-77dq4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.328737328s
Jun 25 20:12:38.244: INFO: Pod "azuredisk-volume-tester-77dq4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.438593647s
Jun 25 20:12:40.354: INFO: Pod "azuredisk-volume-tester-77dq4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.548962197s
Jun 25 20:12:42.464: INFO: Pod "azuredisk-volume-tester-77dq4": Phase="Pending", Reason="", readiness=false. Elapsed: 10.659227619s
... skipping 4 lines ...
Jun 25 20:12:53.014: INFO: Pod "azuredisk-volume-tester-77dq4": Phase="Pending", Reason="", readiness=false. Elapsed: 21.208455253s
Jun 25 20:12:55.123: INFO: Pod "azuredisk-volume-tester-77dq4": Phase="Pending", Reason="", readiness=false. Elapsed: 23.318063095s
Jun 25 20:12:57.232: INFO: Pod "azuredisk-volume-tester-77dq4": Phase="Pending", Reason="", readiness=false. Elapsed: 25.427110161s
Jun 25 20:12:59.344: INFO: Pod "azuredisk-volume-tester-77dq4": Phase="Pending", Reason="", readiness=false. Elapsed: 27.53911683s
Jun 25 20:13:01.456: INFO: Pod "azuredisk-volume-tester-77dq4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 29.650495879s
STEP: Saw pod success
Jun 25 20:13:01.456: INFO: Pod "azuredisk-volume-tester-77dq4" satisfied condition "Succeeded or Failed"
Jun 25 20:13:01.456: INFO: deleting Pod "azuredisk-4547"/"azuredisk-volume-tester-77dq4"
Jun 25 20:13:01.587: INFO: Pod azuredisk-volume-tester-77dq4 has the following logs: 100+0 records in
100+0 records out
104857600 bytes (100.0MB) copied, 0.056975 seconds, 1.7GB/s
hello world

... skipping 118 lines ...
STEP: creating a PVC
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
Jun 25 20:13:59.296: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-rm66g" in namespace "azuredisk-7578" to be "Succeeded or Failed"
Jun 25 20:13:59.405: INFO: Pod "azuredisk-volume-tester-rm66g": Phase="Pending", Reason="", readiness=false. Elapsed: 108.441629ms
Jun 25 20:14:01.516: INFO: Pod "azuredisk-volume-tester-rm66g": Phase="Pending", Reason="", readiness=false. Elapsed: 2.21989621s
Jun 25 20:14:03.627: INFO: Pod "azuredisk-volume-tester-rm66g": Phase="Pending", Reason="", readiness=false. Elapsed: 4.330785424s
Jun 25 20:14:05.739: INFO: Pod "azuredisk-volume-tester-rm66g": Phase="Pending", Reason="", readiness=false. Elapsed: 6.442386314s
Jun 25 20:14:07.850: INFO: Pod "azuredisk-volume-tester-rm66g": Phase="Pending", Reason="", readiness=false. Elapsed: 8.553807791s
Jun 25 20:14:09.962: INFO: Pod "azuredisk-volume-tester-rm66g": Phase="Pending", Reason="", readiness=false. Elapsed: 10.665864535s
... skipping 6 lines ...
Jun 25 20:14:24.742: INFO: Pod "azuredisk-volume-tester-rm66g": Phase="Pending", Reason="", readiness=false. Elapsed: 25.445980626s
Jun 25 20:14:26.854: INFO: Pod "azuredisk-volume-tester-rm66g": Phase="Pending", Reason="", readiness=false. Elapsed: 27.557393503s
Jun 25 20:14:28.966: INFO: Pod "azuredisk-volume-tester-rm66g": Phase="Pending", Reason="", readiness=false. Elapsed: 29.669711389s
Jun 25 20:14:31.081: INFO: Pod "azuredisk-volume-tester-rm66g": Phase="Pending", Reason="", readiness=false. Elapsed: 31.784638279s
Jun 25 20:14:33.193: INFO: Pod "azuredisk-volume-tester-rm66g": Phase="Succeeded", Reason="", readiness=false. Elapsed: 33.896120292s
STEP: Saw pod success
Jun 25 20:14:33.193: INFO: Pod "azuredisk-volume-tester-rm66g" satisfied condition "Succeeded or Failed"
Jun 25 20:14:33.193: INFO: deleting Pod "azuredisk-7578"/"azuredisk-volume-tester-rm66g"
Jun 25 20:14:33.311: INFO: Pod azuredisk-volume-tester-rm66g has the following logs: hello world

STEP: Deleting pod azuredisk-volume-tester-rm66g in namespace azuredisk-7578
STEP: validating provisioned PV
STEP: checking the PV
... skipping 530 lines ...
I0625 19:46:40.484359       1 tlsconfig.go:200] "Loaded serving cert" certName="Generated self signed cert" certDetail="\"localhost@1656186399\" [serving] validServingFor=[127.0.0.1,127.0.0.1,localhost] issuer=\"localhost-ca@1656186399\" (2022-06-25 18:46:38 +0000 UTC to 2023-06-25 18:46:38 +0000 UTC (now=2022-06-25 19:46:40.484329155 +0000 UTC))"
I0625 19:46:40.484619       1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1656186400\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1656186399\" (2022-06-25 18:46:39 +0000 UTC to 2023-06-25 18:46:39 +0000 UTC (now=2022-06-25 19:46:40.484590156 +0000 UTC))"
I0625 19:46:40.484654       1 secure_serving.go:200] Serving securely on 127.0.0.1:10257
I0625 19:46:40.485086       1 leaderelection.go:248] attempting to acquire leader lease kube-system/kube-controller-manager...
I0625 19:46:40.485837       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0625 19:46:40.486106       1 dynamic_cafile_content.go:156] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
E0625 19:46:42.445304       1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: leases.coordination.k8s.io "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-system"
I0625 19:46:42.445348       1 leaderelection.go:253] failed to acquire lease kube-system/kube-controller-manager
I0625 19:46:46.094110       1 leaderelection.go:258] successfully acquired lease kube-system/kube-controller-manager
I0625 19:46:46.094478       1 event.go:294] "Event occurred" object="kube-system/kube-controller-manager" kind="Lease" apiVersion="coordination.k8s.io/v1" type="Normal" reason="LeaderElection" message="capz-52trnh-control-plane-42w5n_5c668aac-bac1-4c78-98fd-d92fec489fbb became leader"
I0625 19:46:46.278990       1 request.go:597] Waited for 81.869324ms due to client-side throttling, not priority and fairness, request: GET:https://10.0.0.4:6443/apis/certificates.k8s.io/v1
W0625 19:46:46.280507       1 plugins.go:132] WARNING: azure built-in cloud provider is now deprecated. The Azure provider is deprecated and will be removed in a future release. Please use https://github.com/kubernetes-sigs/cloud-provider-azure
I0625 19:46:46.281224       1 azure_auth.go:232] Using AzurePublicCloud environment
I0625 19:46:46.281373       1 azure_auth.go:117] azure: using client_id+client_secret to retrieve access token
... skipping 30 lines ...
I0625 19:46:46.283973       1 reflector.go:219] Starting reflector *v1.ServiceAccount (15h6m49.75675111s) from k8s.io/client-go/informers/factory.go:134
I0625 19:46:46.284121       1 reflector.go:255] Listing and watching *v1.ServiceAccount from k8s.io/client-go/informers/factory.go:134
I0625 19:46:46.283973       1 reflector.go:219] Starting reflector *v1.Node (15h6m49.75675111s) from k8s.io/client-go/informers/factory.go:134
I0625 19:46:46.285682       1 reflector.go:255] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:134
I0625 19:46:46.285427       1 reflector.go:219] Starting reflector *v1.Secret (15h6m49.75675111s) from k8s.io/client-go/informers/factory.go:134
I0625 19:46:46.287229       1 reflector.go:255] Listing and watching *v1.Secret from k8s.io/client-go/informers/factory.go:134
W0625 19:46:46.307738       1 azure_config.go:53] Failed to get cloud-config from secret: failed to get secret azure-cloud-provider: secrets "azure-cloud-provider" is forbidden: User "system:serviceaccount:kube-system:azure-cloud-provider" cannot get resource "secrets" in API group "" in the namespace "kube-system", skip initializing from secret
I0625 19:46:46.307774       1 controllermanager.go:576] Starting "endpoint"
I0625 19:46:46.314364       1 controllermanager.go:605] Started "endpoint"
I0625 19:46:46.314479       1 controllermanager.go:576] Starting "endpointslicemirroring"
I0625 19:46:46.314383       1 endpoints_controller.go:193] Starting endpoint controller
I0625 19:46:46.314662       1 shared_informer.go:240] Waiting for caches to sync for endpoint
I0625 19:46:46.320938       1 controllermanager.go:605] Started "endpointslicemirroring"
... skipping 10 lines ...
I0625 19:46:46.334569       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/azure-disk"
I0625 19:46:46.334649       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/vsphere-volume"
I0625 19:46:46.334736       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
I0625 19:46:46.334842       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/storageos"
I0625 19:46:46.334864       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/fc"
I0625 19:46:46.334877       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
I0625 19:46:46.334901       1 csi_plugin.go:262] Cast from VolumeHost to KubeletVolumeHost failed. Skipping CSINode initialization, not running on kubelet
I0625 19:46:46.334968       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/csi"
I0625 19:46:46.335212       1 controllermanager.go:605] Started "attachdetach"
I0625 19:46:46.335231       1 controllermanager.go:576] Starting "job"
I0625 19:46:46.335405       1 attach_detach_controller.go:328] Starting attach detach controller
I0625 19:46:46.335422       1 shared_informer.go:240] Waiting for caches to sync for attach detach
I0625 19:46:46.341331       1 controllermanager.go:605] Started "job"
... skipping 40 lines ...
I0625 19:46:46.540869       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/gce-pd"
I0625 19:46:46.541036       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/cinder"
I0625 19:46:46.541214       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/azure-file"
I0625 19:46:46.541371       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/flocker"
I0625 19:46:46.541931       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
I0625 19:46:46.542124       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/storageos"
I0625 19:46:46.542319       1 csi_plugin.go:262] Cast from VolumeHost to KubeletVolumeHost failed. Skipping CSINode initialization, not running on kubelet
I0625 19:46:46.542485       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/csi"
I0625 19:46:46.542717       1 controllermanager.go:605] Started "persistentvolume-binder"
I0625 19:46:46.542867       1 controllermanager.go:576] Starting "persistentvolume-expander"
I0625 19:46:46.543155       1 pv_controller_base.go:310] Starting persistent volume controller
I0625 19:46:46.543509       1 shared_informer.go:240] Waiting for caches to sync for persistent volume
I0625 19:46:46.687523       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/rbd"
... skipping 56 lines ...
I0625 19:46:48.188651       1 shared_informer.go:240] Waiting for caches to sync for crt configmap
I0625 19:46:48.354111       1 controllermanager.go:605] Started "replicaset"
I0625 19:46:48.354159       1 controllermanager.go:576] Starting "statefulset"
I0625 19:46:48.354232       1 replica_set.go:186] Starting replicaset controller
I0625 19:46:48.354242       1 shared_informer.go:240] Waiting for caches to sync for ReplicaSet
I0625 19:46:48.359021       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-52trnh-control-plane-42w5n"
W0625 19:46:48.359120       1 actual_state_of_world.go:539] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="capz-52trnh-control-plane-42w5n" does not exist
I0625 19:46:48.396870       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-52trnh-control-plane-42w5n"
I0625 19:46:48.424302       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-52trnh-control-plane-42w5n"
I0625 19:46:48.499979       1 controllermanager.go:605] Started "statefulset"
I0625 19:46:48.500031       1 controllermanager.go:576] Starting "bootstrapsigner"
I0625 19:46:48.500089       1 stateful_set.go:147] Starting stateful set controller
I0625 19:46:48.500118       1 shared_informer.go:240] Waiting for caches to sync for stateful set
... skipping 519 lines ...
I0625 19:46:59.619175       1 garbagecollector.go:468] "Processing object" object="capz-52trnh-control-plane-42w5n" objectUID=9a99faa8-b4ed-4b7c-b4fc-cfa0b5a73860 kind="CSINode" virtual=false
I0625 19:46:59.625417       1 deployment_controller.go:176] "Updating deployment" deployment="kube-system/coredns"
I0625 19:46:59.639408       1 deployment_util.go:774] Deployment "coredns" timed out (false) [last progress check: 2022-06-25 19:46:59.608350448 +0000 UTC m=+20.984492761 - now: 2022-06-25 19:46:59.639398553 +0000 UTC m=+21.015540866]
I0625 19:46:59.682627       1 publisher.go:186] Finished syncing namespace "kube-system" (8.693844425s)
I0625 19:46:59.683182       1 endpointslice_controller.go:319] Finished syncing service "kube-system/kube-dns" endpoint slices. (8.70549727s)
I0625 19:46:59.705386       1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/coredns" duration="8.750444493s"
I0625 19:46:59.705414       1 deployment_controller.go:490] "Error syncing deployment" deployment="kube-system/coredns" err="Operation cannot be fulfilled on deployments.apps \"coredns\": the object has been modified; please apply your changes to the latest version and try again"
I0625 19:46:59.705470       1 deployment_controller.go:576] "Started syncing deployment" deployment="kube-system/coredns" startTime="2022-06-25 19:46:59.705442253 +0000 UTC m=+21.081584966"
I0625 19:46:59.706196       1 deployment_util.go:774] Deployment "coredns" timed out (false) [last progress check: 2022-06-25 19:46:59 +0000 UTC - now: 2022-06-25 19:46:59.706188051 +0000 UTC m=+21.082330764]
I0625 19:46:59.706496       1 serviceaccounts_controller.go:188] Finished syncing namespace "kube-public" (96.931806ms)
I0625 19:46:59.730034       1 publisher.go:186] Finished syncing namespace "kube-public" (47.370856ms)
I0625 19:46:59.730332       1 garbagecollector.go:519] object garbagecollector.objectReference{OwnerReference:v1.OwnerReference{APIVersion:"storage.k8s.io/v1", Kind:"CSINode", Name:"capz-52trnh-control-plane-42w5n", UID:"9a99faa8-b4ed-4b7c-b4fc-cfa0b5a73860", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}, Namespace:""} has at least one existing owner: []v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Node", Name:"capz-52trnh-control-plane-42w5n", UID:"b6062010-d952-459a-a2e9-ce9d99c09094", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}}, will not garbage collect
I0625 19:46:59.730407       1 garbagecollector.go:519] object garbagecollector.objectReference{OwnerReference:v1.OwnerReference{APIVersion:"coordination.k8s.io/v1", Kind:"Lease", Name:"capz-52trnh-control-plane-42w5n", UID:"1c5465e7-1e30-4007-9e74-666a4994d4a0", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}, Namespace:"kube-node-lease"} has at least one existing owner: []v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Node", Name:"capz-52trnh-control-plane-42w5n", UID:"b6062010-d952-459a-a2e9-ce9d99c09094", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}}, will not garbage collect
... skipping 156 lines ...
I0625 19:47:17.393350       1 disruption.go:490] No PodDisruptionBudgets found for pod calico-kube-controllers-85f479877b-zwbcq, PodDisruptionBudget controller will avoid syncing.
I0625 19:47:17.393485       1 disruption.go:418] No matching pdb for pod "calico-kube-controllers-85f479877b-zwbcq"
I0625 19:47:17.393116       1 taint_manager.go:401] "Noticed pod update" pod="kube-system/calico-kube-controllers-85f479877b-zwbcq"
I0625 19:47:17.392829       1 replica_set.go:380] Pod calico-kube-controllers-85f479877b-zwbcq created: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"calico-kube-controllers-85f479877b-zwbcq", GenerateName:"calico-kube-controllers-85f479877b-", Namespace:"kube-system", SelfLink:"", UID:"d024d789-4e0c-411b-b1af-3366d3515b4d", ResourceVersion:"561", Generation:0, CreationTimestamp:time.Date(2022, time.June, 25, 19, 47, 17, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"calico-kube-controllers", "pod-template-hash":"85f479877b"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"apps/v1", Kind:"ReplicaSet", Name:"calico-kube-controllers-85f479877b", UID:"d8eaa70c-565b-4c32-a1ba-80fc5ac65afe", Controller:(*bool)(0xc001db3c3e), BlockOwnerDeletion:(*bool)(0xc001db3c3f)}}, Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.June, 25, 19, 47, 17, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001689c98), Subresource:""}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-api-access-mzk6g", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), 
Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(0xc0016b4560), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"calico-kube-controllers", Image:"docker.io/calico/kube-controllers:v3.23.0", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ENABLED_CONTROLLERS", Value:"node", ValueFrom:(*v1.EnvVarSource)(nil)}, v1.EnvVar{Name:"DATASTORE_TYPE", Value:"kubernetes", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-mzk6g", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(0xc001d94100), ReadinessProbe:(*v1.Probe)(0xc001d94140), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001db3cf0), 
ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"calico-kube-controllers", DeprecatedServiceAccount:"calico-kube-controllers", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000a44380), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node-role.kubernetes.io/master", Operator:"", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node-role.kubernetes.io/control-plane", Operator:"", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001db3d50)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001db3d70)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-cluster-critical", Priority:(*int32)(0xc001db3d78), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc001db3d7c), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc0013ccef0), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil), OS:(*v1.PodOS)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition(nil), Message:"", Reason:"", NominatedNodeName:"", HostIP:"", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:<nil>, InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus(nil), 
QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}.
I0625 19:47:17.393931       1 controller_utils.go:240] Lowered expectations &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-kube-controllers-85f479877b", timestamp:time.Time{wall:0xc0a5f771567d93f3, ext:38753471640, loc:(*time.Location)(0x77933c0)}}
I0625 19:47:17.396520       1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/calico-kube-controllers" duration="27.45066ms"
I0625 19:47:17.396692       1 deployment_controller.go:490] "Error syncing deployment" deployment="kube-system/calico-kube-controllers" err="Operation cannot be fulfilled on deployments.apps \"calico-kube-controllers\": the object has been modified; please apply your changes to the latest version and try again"
I0625 19:47:17.396844       1 deployment_controller.go:576] "Started syncing deployment" deployment="kube-system/calico-kube-controllers" startTime="2022-06-25 19:47:17.39683067 +0000 UTC m=+38.772972883"
I0625 19:47:17.397330       1 deployment_util.go:774] Deployment "calico-kube-controllers" timed out (false) [last progress check: 2022-06-25 19:47:17 +0000 UTC - now: 2022-06-25 19:47:17.397321871 +0000 UTC m=+38.773464384]
I0625 19:47:17.397646       1 controller_utils.go:581] Controller calico-kube-controllers-85f479877b created pod calico-kube-controllers-85f479877b-zwbcq
I0625 19:47:17.398475       1 event.go:294] "Event occurred" object="kube-system/calico-kube-controllers-85f479877b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: calico-kube-controllers-85f479877b-zwbcq"
I0625 19:47:17.398251       1 replica_set_utils.go:59] Updating status for : kube-system/calico-kube-controllers-85f479877b, replicas 0->0 (need 1), fullyLabeledReplicas 0->0, readyReplicas 0->0, availableReplicas 0->0, sequence No: 0->1
I0625 19:47:17.414191       1 pvc_protection_controller.go:380] "Enqueuing PVCs for Pod" pod="kube-system/calico-kube-controllers-85f479877b-zwbcq" podUID=d024d789-4e0c-411b-b1af-3366d3515b4d
... skipping 326 lines ...
I0625 19:47:36.985637       1 disruption.go:427] updatePod called on pod "calico-kube-controllers-85f479877b-zwbcq"
I0625 19:47:36.985670       1 disruption.go:433] updatePod "calico-kube-controllers-85f479877b-zwbcq" -> PDB "calico-kube-controllers"
I0625 19:47:36.985748       1 disruption.go:558] Finished syncing PodDisruptionBudget "kube-system/calico-kube-controllers" (47.4µs)
I0625 19:47:36.985737       1 replica_set.go:443] Pod calico-kube-controllers-85f479877b-zwbcq updated, objectMeta {Name:calico-kube-controllers-85f479877b-zwbcq GenerateName:calico-kube-controllers-85f479877b- Namespace:kube-system SelfLink: UID:d024d789-4e0c-411b-b1af-3366d3515b4d ResourceVersion:627 Generation:0 CreationTimestamp:2022-06-25 19:47:17 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[k8s-app:calico-kube-controllers pod-template-hash:85f479877b] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:calico-kube-controllers-85f479877b UID:d8eaa70c-565b-4c32-a1ba-80fc5ac65afe Controller:0xc00242e99e BlockOwnerDeletion:0xc00242e99f}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-06-25 19:47:17 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d8eaa70c-565b-4c32-a1ba-80fc5ac65afe\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"calico-kube-controllers\"}":{".":{},"f:env":{".":{},"k:{\"name\":\"DATASTORE_TYPE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"ENABLED_CONTROLLERS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:exec":{".":{},"f:command":{}},"f:failureThreshold":{},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:readinessProbe":{".":{},"f:exec":{".":{},"f:command":{}},"f:failureThreshold":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerat
ions":{}}} Subresource:} {Manager:kube-scheduler Operation:Update APIVersion:v1 Time:2022-06-25 19:47:17 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{".":{},"k:{\"type\":\"PodScheduled\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} Subresource:status}]} -> {Name:calico-kube-controllers-85f479877b-zwbcq GenerateName:calico-kube-controllers-85f479877b- Namespace:kube-system SelfLink: UID:d024d789-4e0c-411b-b1af-3366d3515b4d ResourceVersion:633 Generation:0 CreationTimestamp:2022-06-25 19:47:17 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[k8s-app:calico-kube-controllers pod-template-hash:85f479877b] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:calico-kube-controllers-85f479877b UID:d8eaa70c-565b-4c32-a1ba-80fc5ac65afe Controller:0xc00242ff1e BlockOwnerDeletion:0xc00242ff1f}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-06-25 19:47:17 +0000 UTC FieldsType:FieldsV1 
FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d8eaa70c-565b-4c32-a1ba-80fc5ac65afe\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"calico-kube-controllers\"}":{".":{},"f:env":{".":{},"k:{\"name\":\"DATASTORE_TYPE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"ENABLED_CONTROLLERS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:exec":{".":{},"f:command":{}},"f:failureThreshold":{},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:readinessProbe":{".":{},"f:exec":{".":{},"f:command":{}},"f:failureThreshold":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}} Subresource:} {Manager:kube-scheduler Operation:Update APIVersion:v1 Time:2022-06-25 19:47:17 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{".":{},"k:{\"type\":\"PodScheduled\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} Subresource:status} {Manager:kubelet Operation:Update APIVersion:v1 Time:2022-06-25 19:47:36 +0000 UTC FieldsType:FieldsV1 
FieldsV1:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} Subresource:status}]}.
I0625 19:47:36.985899       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-kube-controllers-85f479877b", timestamp:time.Time{wall:0xc0a5f771567d93f3, ext:38753471640, loc:(*time.Location)(0x77933c0)}}
I0625 19:47:36.986004       1 replica_set.go:653] Finished syncing ReplicaSet "kube-system/calico-kube-controllers-85f479877b" (107.799µs)
I0625 19:47:41.006219       1 node_lifecycle_controller.go:1038] ReadyCondition for Node capz-52trnh-control-plane-42w5n transitioned from &NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:2022-06-25 19:47:01 +0000 UTC,LastTransitionTime:2022-06-25 19:46:35 +0000 UTC,Reason:KubeletNotReady,Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized,} to &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-25 19:47:36 +0000 UTC,LastTransitionTime:2022-06-25 19:47:36 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,}
I0625 19:47:41.006328       1 node_lifecycle_controller.go:1046] Node capz-52trnh-control-plane-42w5n ReadyCondition updated. Updating timestamp.
I0625 19:47:41.006358       1 node_lifecycle_controller.go:892] Node capz-52trnh-control-plane-42w5n is healthy again, removing all taints
I0625 19:47:41.006379       1 node_lifecycle_controller.go:1190] Controller detected that some Nodes are Ready. Exiting master disruption mode.
I0625 19:47:42.178312       1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="136µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:44026" resp=200
I0625 19:47:45.128153       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-52trnh-control-plane-42w5n"
I0625 19:47:45.134524       1 daemon_controller.go:570] Pod calico-node-vl289 updated.
... skipping 254 lines ...
I0625 19:48:35.516900       1 disruption.go:427] updatePod called on pod "kube-scheduler-capz-52trnh-control-plane-42w5n"
I0625 19:48:35.516973       1 disruption.go:490] No PodDisruptionBudgets found for pod kube-scheduler-capz-52trnh-control-plane-42w5n, PodDisruptionBudget controller will avoid syncing.
I0625 19:48:35.516985       1 disruption.go:430] No matching pdb for pod "kube-scheduler-capz-52trnh-control-plane-42w5n"
I0625 19:48:36.005440       1 reflector.go:382] k8s.io/client-go/informers/factory.go:134: forcing resync
I0625 19:48:36.059669       1 pv_controller_base.go:556] resyncing PV controller
I0625 19:48:36.892822       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-52trnh-mp-0000000"
W0625 19:48:36.892852       1 actual_state_of_world.go:539] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="capz-52trnh-mp-0000000" does not exist
I0625 19:48:36.892877       1 controller.go:697] Ignoring node capz-52trnh-mp-0000000 with Ready condition status False
I0625 19:48:36.892903       1 controller.go:272] Triggering nodeSync
I0625 19:48:36.893676       1 controller.go:291] nodeSync has been triggered
I0625 19:48:36.893846       1 controller.go:780] Running updateLoadBalancerHosts(len(services)==0, workers==1)
I0625 19:48:36.893990       1 controller.go:794] Finished updateLoadBalancerHosts
I0625 19:48:36.894132       1 controller.go:735] It took 0.0002896 seconds to finish nodeSyncInternal
... skipping 107 lines ...
I0625 19:48:40.084145       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0a5f7860503e12f, ext:121460282848, loc:(*time.Location)(0x77933c0)}}
I0625 19:48:40.084229       1 daemon_controller.go:967] Nodes needing daemon pods for daemon set kube-proxy: [], creating 0
I0625 19:48:40.084310       1 daemon_controller.go:1029] Pods to delete for daemon set kube-proxy: [], deleting 0
I0625 19:48:40.084363       1 daemon_controller.go:1112] Updating daemon set status
I0625 19:48:40.084471       1 daemon_controller.go:1172] Finished syncing daemon set "kube-system/kube-proxy" (1.444005ms)
I0625 19:48:40.441292       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-52trnh-mp-0000001"
W0625 19:48:40.446569       1 actual_state_of_world.go:539] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="capz-52trnh-mp-0000001" does not exist
I0625 19:48:40.442809       1 controller.go:697] Ignoring node capz-52trnh-mp-0000000 with Ready condition status False
I0625 19:48:40.443392       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0a5f7860503e12f, ext:121460282848, loc:(*time.Location)(0x77933c0)}}
I0625 19:48:40.444477       1 taint_manager.go:436] "Noticed node update" node={nodeName:capz-52trnh-mp-0000001}
I0625 19:48:40.447212       1 taint_manager.go:441] "Updating known taints on node" node="capz-52trnh-mp-0000001" taints=[]
I0625 19:48:40.447373       1 controller.go:697] Ignoring node capz-52trnh-mp-0000001 with Ready condition status False
I0625 19:48:40.447528       1 controller.go:272] Triggering nodeSync
... skipping 433 lines ...
I0625 19:49:28.871704       1 daemon_controller.go:967] Nodes needing daemon pods for daemon set calico-node: [], creating 0
I0625 19:49:28.871751       1 daemon_controller.go:1029] Pods to delete for daemon set calico-node: [], deleting 0
I0625 19:49:28.871777       1 daemon_controller.go:1112] Updating daemon set status
I0625 19:49:28.871844       1 daemon_controller.go:1172] Finished syncing daemon set "kube-system/calico-node" (1.551205ms)
I0625 19:49:30.960969       1 gc_controller.go:161] GC'ing orphaned
I0625 19:49:30.961003       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0625 19:49:31.028158       1 node_lifecycle_controller.go:1038] ReadyCondition for Node capz-52trnh-mp-0000000 transitioned from &NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:2022-06-25 19:49:07 +0000 UTC,LastTransitionTime:2022-06-25 19:48:36 +0000 UTC,Reason:KubeletNotReady,Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized,} to &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-25 19:49:27 +0000 UTC,LastTransitionTime:2022-06-25 19:49:27 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,}
I0625 19:49:31.028280       1 node_lifecycle_controller.go:1046] Node capz-52trnh-mp-0000000 ReadyCondition updated. Updating timestamp.
I0625 19:49:31.050004       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-52trnh-mp-0000000"
I0625 19:49:31.050600       1 taint_manager.go:436] "Noticed node update" node={nodeName:capz-52trnh-mp-0000000}
I0625 19:49:31.050630       1 taint_manager.go:441] "Updating known taints on node" node="capz-52trnh-mp-0000000" taints=[]
I0625 19:49:31.050651       1 taint_manager.go:462] "All taints were removed from the node. Cancelling all evictions..." node="capz-52trnh-mp-0000000"
I0625 19:49:31.050996       1 node_lifecycle_controller.go:892] Node capz-52trnh-mp-0000000 is healthy again, removing all taints
... skipping 10 lines ...
I0625 19:49:31.239153       1 controller.go:735] It took 0.000502601 seconds to finish nodeSyncInternal
I0625 19:49:31.259016       1 controller_utils.go:221] "Made sure that node has no taint" node="capz-52trnh-mp-0000001" taint=[&Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoSchedule,TimeAdded:<nil>,}]
I0625 19:49:31.260023       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-52trnh-mp-0000001"
I0625 19:49:32.176662       1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="95.3µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:45132" resp=200
I0625 19:49:36.008213       1 reflector.go:382] k8s.io/client-go/informers/factory.go:134: forcing resync
I0625 19:49:36.027125       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-52trnh-mp-0000001"
I0625 19:49:36.051361       1 node_lifecycle_controller.go:1038] ReadyCondition for Node capz-52trnh-mp-0000001 transitioned from &NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:2022-06-25 19:49:10 +0000 UTC,LastTransitionTime:2022-06-25 19:48:40 +0000 UTC,Reason:KubeletNotReady,Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized,} to &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-25 19:49:31 +0000 UTC,LastTransitionTime:2022-06-25 19:49:31 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,}
I0625 19:49:36.052585       1 node_lifecycle_controller.go:1046] Node capz-52trnh-mp-0000001 ReadyCondition updated. Updating timestamp.
I0625 19:49:36.062155       1 pv_controller_base.go:556] resyncing PV controller
I0625 19:49:36.062542       1 node_lifecycle_controller.go:892] Node capz-52trnh-mp-0000001 is healthy again, removing all taints
I0625 19:49:36.062586       1 node_lifecycle_controller.go:1213] Controller detected that zone westeurope::1 is now in state Normal.
I0625 19:49:36.064578       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-52trnh-mp-0000001"
I0625 19:49:36.064844       1 taint_manager.go:436] "Noticed node update" node={nodeName:capz-52trnh-mp-0000001}
... skipping 138 lines ...
I0625 19:49:43.008778       1 replica_set.go:563] "Too few replicas" replicaSet="kube-system/csi-azuredisk-controller-78b647c5f4" need=2 creating=2
I0625 19:49:43.009240       1 deployment_controller.go:215] "ReplicaSet added" replicaSet="kube-system/csi-azuredisk-controller-78b647c5f4"
I0625 19:49:43.014712       1 event.go:294] "Event occurred" object="kube-system/csi-azuredisk-controller" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set csi-azuredisk-controller-78b647c5f4 to 2"
I0625 19:49:43.039753       1 controller_utils.go:581] Controller csi-azuredisk-controller-78b647c5f4 created pod csi-azuredisk-controller-78b647c5f4-tp48g
I0625 19:49:43.040233       1 event.go:294] "Event occurred" object="kube-system/csi-azuredisk-controller-78b647c5f4" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: csi-azuredisk-controller-78b647c5f4-tp48g"
I0625 19:49:43.042520       1 pvc_protection_controller.go:380] "Enqueuing PVCs for Pod" pod="kube-system/csi-azuredisk-controller-78b647c5f4-tp48g" podUID=5dcf74f1-0294-4575-9412-10319c577fc8
I0625 19:49:43.042550       1 replica_set.go:380] Pod csi-azuredisk-controller-78b647c5f4-tp48g created: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"csi-azuredisk-controller-78b647c5f4-tp48g", GenerateName:"csi-azuredisk-controller-78b647c5f4-", Namespace:"kube-system", SelfLink:"", UID:"5dcf74f1-0294-4575-9412-10319c577fc8", ResourceVersion:"1066", Generation:0, CreationTimestamp:time.Date(2022, time.June, 25, 19, 49, 43, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"csi-azuredisk-controller", "pod-template-hash":"78b647c5f4"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"apps/v1", Kind:"ReplicaSet", Name:"csi-azuredisk-controller-78b647c5f4", UID:"bbf70c65-8e57-41e6-8b2c-ea32da0aa2fb", Controller:(*bool)(0xc0019a9797), BlockOwnerDeletion:(*bool)(0xc0019a9798)}}, Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.June, 25, 19, 49, 43, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc000285968), Subresource:""}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"socket-dir", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(0xc000285980), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), 
Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"azure-cred", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc000285998), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), 
Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"kube-api-access-45zxd", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(0xc00268b3c0), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"csi-provisioner", Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-provisioner:v3.1.1", Command:[]string(nil), Args:[]string{"--feature-gates=Topology=true", "--csi-address=$(ADDRESS)", "--v=2", "--timeout=15s", "--leader-election", "--leader-election-namespace=kube-system", "--worker-threads=40", "--extra-create-metadata=true", "--strict-topology=true"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", 
ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:524288000, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-45zxd", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"csi-attacher", Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-attacher:v3.5.0", Command:[]string(nil), Args:[]string{"-v=2", "-csi-address=$(ADDRESS)", "-timeout=600s", "-leader-election", "--leader-election-namespace=kube-system", "-worker-threads=500"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:524288000, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500Mi", Format:"BinarySI"}}, 
Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-45zxd", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"csi-snapshotter", Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-snapshotter:v5.0.1", Command:[]string(nil), Args:[]string{"-csi-address=$(ADDRESS)", "-leader-election", "--leader-election-namespace=kube-system", "--v=2"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:104857600, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", 
Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-45zxd", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"csi-resizer", Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-resizer:v1.5.0", Command:[]string(nil), Args:[]string{"-csi-address=$(ADDRESS)", "-v=2", "-leader-election", "--leader-election-namespace=kube-system", "-handle-volume-inuse-error=false", "-feature-gates=RecoverVolumeExpansionFailure=true", "-timeout=240s"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:524288000, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, 
v1.VolumeMount{Name:"kube-api-access-45zxd", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"liveness-probe", Image:"mcr.microsoft.com/oss/kubernetes-csi/livenessprobe:v2.7.0", Command:[]string(nil), Args:[]string{"--csi-address=/csi/csi.sock", "--probe-timeout=3s", "--health-port=29602", "--v=2"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:104857600, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-45zxd", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), 
TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"azuredisk", Image:"mcr.microsoft.com/k8s/csi/azuredisk-csi:latest", Command:[]string(nil), Args:[]string{"--v=5", "--endpoint=$(CSI_ENDPOINT)", "--metrics-address=0.0.0.0:29604", "--user-agent-suffix=OSS-kubectl", "--disable-avset-nodes=false", "--allow-empty-cloud-config=false"}, WorkingDir:"", Ports:[]v1.ContainerPort{v1.ContainerPort{Name:"healthz", HostPort:29602, ContainerPort:29602, Protocol:"TCP", HostIP:""}, v1.ContainerPort{Name:"metrics", HostPort:29604, ContainerPort:29604, Protocol:"TCP", HostIP:""}}, EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"AZURE_CREDENTIAL_FILE", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc00268b4e0)}, v1.EnvVar{Name:"CSI_ENDPOINT", Value:"unix:///csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:524288000, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"azure-cred", ReadOnly:false, MountPath:"/etc/kubernetes/", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-45zxd", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", 
MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(0xc00189a800), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0019a9bb0), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"csi-azuredisk-controller-sa", DeprecatedServiceAccount:"csi-azuredisk-controller-sa", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc00065dc70), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node-role.kubernetes.io/master", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node-role.kubernetes.io/controlplane", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node-role.kubernetes.io/control-plane", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0019a9c20)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0019a9c40)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-cluster-critical", Priority:(*int32)(0xc0019a9c48), 
DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0019a9c4c), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc001857bc0), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil), OS:(*v1.PodOS)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition(nil), Message:"", Reason:"", NominatedNodeName:"", HostIP:"", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:<nil>, InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus(nil), QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}.
I0625 19:49:43.043071       1 controller_utils.go:240] Lowered expectations &controller.ControlleeExpectations{add:1, del:0, key:"kube-system/csi-azuredisk-controller-78b647c5f4", timestamp:time.Time{wall:0xc0a5f795c084de22, ext:184384850231, loc:(*time.Location)(0x77933c0)}}
I0625 19:49:43.043126       1 disruption.go:415] addPod called on pod "csi-azuredisk-controller-78b647c5f4-tp48g"
I0625 19:49:43.043158       1 disruption.go:490] No PodDisruptionBudgets found for pod csi-azuredisk-controller-78b647c5f4-tp48g, PodDisruptionBudget controller will avoid syncing.
I0625 19:49:43.043174       1 disruption.go:418] No matching pdb for pod "csi-azuredisk-controller-78b647c5f4-tp48g"
I0625 19:49:43.043223       1 taint_manager.go:401] "Noticed pod update" pod="kube-system/csi-azuredisk-controller-78b647c5f4-tp48g"
I0625 19:49:43.054040       1 controller_utils.go:581] Controller csi-azuredisk-controller-78b647c5f4 created pod csi-azuredisk-controller-78b647c5f4-vdc5n
I0625 19:49:43.054163       1 replica_set_utils.go:59] Updating status for : kube-system/csi-azuredisk-controller-78b647c5f4, replicas 0->0 (need 2), fullyLabeledReplicas 0->0, readyReplicas 0->0, availableReplicas 0->0, sequence No: 0->1
I0625 19:49:43.055103       1 pvc_protection_controller.go:380] "Enqueuing PVCs for Pod" pod="kube-system/csi-azuredisk-controller-78b647c5f4-vdc5n" podUID=1db5c891-a725-4d89-ae93-d6dab2f0507d
I0625 19:49:43.055128       1 replica_set.go:380] Pod csi-azuredisk-controller-78b647c5f4-vdc5n created: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"csi-azuredisk-controller-78b647c5f4-vdc5n", GenerateName:"csi-azuredisk-controller-78b647c5f4-", Namespace:"kube-system", SelfLink:"", UID:"1db5c891-a725-4d89-ae93-d6dab2f0507d", ResourceVersion:"1070", Generation:0, CreationTimestamp:time.Date(2022, time.June, 25, 19, 49, 43, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"csi-azuredisk-controller", "pod-template-hash":"78b647c5f4"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"apps/v1", Kind:"ReplicaSet", Name:"csi-azuredisk-controller-78b647c5f4", UID:"bbf70c65-8e57-41e6-8b2c-ea32da0aa2fb", Controller:(*bool)(0xc001bccc37), BlockOwnerDeletion:(*bool)(0xc001bccc38)}}, Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.June, 25, 19, 49, 43, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0024cbe00), Subresource:""}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"socket-dir", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(0xc0024cbe18), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), 
Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"azure-cred", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0024cbe30), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), 
Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"kube-api-access-hnbk8", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(0xc0005c1ac0), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"csi-provisioner", Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-provisioner:v3.1.1", Command:[]string(nil), Args:[]string{"--feature-gates=Topology=true", "--csi-address=$(ADDRESS)", "--v=2", "--timeout=15s", "--leader-election", "--leader-election-namespace=kube-system", "--worker-threads=40", "--extra-create-metadata=true", "--strict-topology=true"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", 
ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:524288000, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-hnbk8", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"csi-attacher", Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-attacher:v3.5.0", Command:[]string(nil), Args:[]string{"-v=2", "-csi-address=$(ADDRESS)", "-timeout=600s", "-leader-election", "--leader-election-namespace=kube-system", "-worker-threads=500"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:524288000, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500Mi", Format:"BinarySI"}}, 
Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-hnbk8", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"csi-snapshotter", Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-snapshotter:v5.0.1", Command:[]string(nil), Args:[]string{"-csi-address=$(ADDRESS)", "-leader-election", "--leader-election-namespace=kube-system", "--v=2"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:104857600, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", 
Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-hnbk8", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"csi-resizer", Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-resizer:v1.5.0", Command:[]string(nil), Args:[]string{"-csi-address=$(ADDRESS)", "-v=2", "-leader-election", "--leader-election-namespace=kube-system", "-handle-volume-inuse-error=false", "-feature-gates=RecoverVolumeExpansionFailure=true", "-timeout=240s"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:524288000, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, 
v1.VolumeMount{Name:"kube-api-access-hnbk8", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"liveness-probe", Image:"mcr.microsoft.com/oss/kubernetes-csi/livenessprobe:v2.7.0", Command:[]string(nil), Args:[]string{"--csi-address=/csi/csi.sock", "--probe-timeout=3s", "--health-port=29602", "--v=2"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:104857600, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-hnbk8", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), 
TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"azuredisk", Image:"mcr.microsoft.com/k8s/csi/azuredisk-csi:latest", Command:[]string(nil), Args:[]string{"--v=5", "--endpoint=$(CSI_ENDPOINT)", "--metrics-address=0.0.0.0:29604", "--user-agent-suffix=OSS-kubectl", "--disable-avset-nodes=false", "--allow-empty-cloud-config=false"}, WorkingDir:"", Ports:[]v1.ContainerPort{v1.ContainerPort{Name:"healthz", HostPort:29602, ContainerPort:29602, Protocol:"TCP", HostIP:""}, v1.ContainerPort{Name:"metrics", HostPort:29604, ContainerPort:29604, Protocol:"TCP", HostIP:""}}, EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"AZURE_CREDENTIAL_FILE", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc0005c1e80)}, v1.EnvVar{Name:"CSI_ENDPOINT", Value:"unix:///csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:524288000, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"azure-cred", ReadOnly:false, MountPath:"/etc/kubernetes/", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-hnbk8", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", 
MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(0xc001b74ec0), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001bcd470), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"csi-azuredisk-controller-sa", DeprecatedServiceAccount:"csi-azuredisk-controller-sa", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0001805b0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node-role.kubernetes.io/master", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node-role.kubernetes.io/controlplane", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node-role.kubernetes.io/control-plane", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001bcd5e0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001bcd600)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-cluster-critical", Priority:(*int32)(0xc001bcd608), 
DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc001bcd60c), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc001979890), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil), OS:(*v1.PodOS)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition(nil), Message:"", Reason:"", NominatedNodeName:"", HostIP:"", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:<nil>, InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus(nil), QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}.
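Note: the container resources in the pod dump above appear in their internal `resource.Quantity` encoding, e.g. `int64Amount{value:10, scale:-3}` for the `"10m"` CPU request and `value:20971520, scale:0` for `"20Mi"` of memory, i.e. the numeric value is `value * 10^scale`. A dependency-free sketch of that encoding (the names `quantity` and `scaledValue` are illustrative, not the real `k8s.io/apimachinery` API, and the real `ScaledValue` rounds up where this sketch truncates):

```go
package main

import "fmt"

// quantity loosely mirrors the int64Amount encoding seen in the log:
// the represented number is value * 10^scale.
type quantity struct {
	value int64
	scale int
}

// scaledValue returns the quantity expressed at targetScale using only
// integer arithmetic; e.g. a CPU quantity at scale -3 is a millicore count.
func (q quantity) scaledValue(targetScale int) int64 {
	v := q.value
	for s := q.scale; s > targetScale; s-- {
		v *= 10 // shifting to a finer scale multiplies the value
	}
	for s := q.scale; s < targetScale; s++ {
		v /= 10 // shifting to a coarser scale truncates toward zero here
	}
	return v
}

func main() {
	cpu := quantity{value: 10, scale: -3}      // "10m" from the log
	mem := quantity{value: 20971520, scale: 0} // "20Mi" = 20*1024*1024 bytes
	fmt.Println(cpu.scaledValue(-3)) // 10 (millicores)
	fmt.Println(cpu.scaledValue(-6)) // 10000 (microcores)
	fmt.Println(mem.scaledValue(0))  // 20971520 (bytes)
}
```

This is why the dump shows both the `(value, scale)` pair and the cached string form (`s:"10m"`, `Format:"DecimalSI"` vs `s:"20Mi"`, `Format:"BinarySI"`): the string is just a formatted view of the same amount.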
I0625 19:49:43.055876       1 controller_utils.go:240] Lowered expectations &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/csi-azuredisk-controller-78b647c5f4", timestamp:time.Time{wall:0xc0a5f795c084de22, ext:184384850231, loc:(*time.Location)(0x77933c0)}}
I0625 19:49:43.055949       1 disruption.go:415] addPod called on pod "csi-azuredisk-controller-78b647c5f4-vdc5n"
I0625 19:49:43.056005       1 disruption.go:490] No PodDisruptionBudgets found for pod csi-azuredisk-controller-78b647c5f4-vdc5n, PodDisruptionBudget controller will avoid syncing.
I0625 19:49:43.056015       1 disruption.go:418] No matching pdb for pod "csi-azuredisk-controller-78b647c5f4-vdc5n"
I0625 19:49:43.056088       1 taint_manager.go:401] "Noticed pod update" pod="kube-system/csi-azuredisk-controller-78b647c5f4-vdc5n"
I0625 19:49:43.056221       1 event.go:294] "Event occurred" object="kube-system/csi-azuredisk-controller-78b647c5f4" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: csi-azuredisk-controller-78b647c5f4-vdc5n"
... skipping 11 lines ...
I0625 19:49:43.086851       1 disruption.go:427] updatePod called on pod "csi-azuredisk-controller-78b647c5f4-vdc5n"
I0625 19:49:43.086888       1 disruption.go:490] No PodDisruptionBudgets found for pod csi-azuredisk-controller-78b647c5f4-vdc5n, PodDisruptionBudget controller will avoid syncing.
I0625 19:49:43.086896       1 disruption.go:430] No matching pdb for pod "csi-azuredisk-controller-78b647c5f4-vdc5n"
I0625 19:49:43.086965       1 taint_manager.go:401] "Noticed pod update" pod="kube-system/csi-azuredisk-controller-78b647c5f4-vdc5n"
I0625 19:49:43.086963       1 replica_set.go:443] Pod csi-azuredisk-controller-78b647c5f4-vdc5n updated, objectMeta {Name:csi-azuredisk-controller-78b647c5f4-vdc5n GenerateName:csi-azuredisk-controller-78b647c5f4- Namespace:kube-system SelfLink: UID:1db5c891-a725-4d89-ae93-d6dab2f0507d ResourceVersion:1070 Generation:0 CreationTimestamp:2022-06-25 19:49:43 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[app:csi-azuredisk-controller pod-template-hash:78b647c5f4] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:csi-azuredisk-controller-78b647c5f4 UID:bbf70c65-8e57-41e6-8b2c-ea32da0aa2fb Controller:0xc001bccc37 BlockOwnerDeletion:0xc001bccc38}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-06-25 19:49:43 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bbf70c65-8e57-41e6-8b2c-ea32da0aa2fb\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"azuredisk\"}":{".":{},"f:args":{},"f:env":{".":{},"k:{\"name\":\"AZURE_CREDENTIAL_FILE\"}":{".":{},"f:name":{},"f:valueFrom":{".":{},"f:configMapKeyRef":{}}},"k:{\"name\":\"CSI_ENDPOINT\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":29602,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:hostPort":{},"f:name":{},"f:protocol":{}},"k:{\"containerPort\":29604,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:hostPort":{},"f:name":{},"f:protocol":{}}},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePo
licy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}},"k:{\"mountPath\":\"/etc/kubernetes/\"}":{".":{},"f:mountPath":{},"f:name":{}}}},"k:{\"name\":\"csi-attacher\"}":{".":{},"f:args":{},"f:env":{".":{},"k:{\"name\":\"ADDRESS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}}}},"k:{\"name\":\"csi-provisioner\"}":{".":{},"f:args":{},"f:env":{".":{},"k:{\"name\":\"ADDRESS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}}}},"k:{\"name\":\"csi-resizer\"}":{".":{},"f:args":{},"f:env":{".":{},"k:{\"name\":\"ADDRESS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}}}},"k:{\"name\":\"csi-snapshotter\"}":{".":{},"f:args":{},"f:env":{".":{},"k:{\"name\":\"ADDRESS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}}}},"k:{\"name\":\"liveness-probe\"}":{".":{},"f:args":{},"f:imag
e":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:hostNetwork":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{},"f:volumes":{".":{},"k:{\"name\":\"azure-cred\"}":{".":{},"f:hostPath":{".":{},"f:path":{},"f:type":{}},"f:name":{}},"k:{\"name\":\"socket-dir\"}":{".":{},"f:emptyDir":{},"f:name":{}}}}} Subresource:}]} -> {Name:csi-azuredisk-controller-78b647c5f4-vdc5n GenerateName:csi-azuredisk-controller-78b647c5f4- Namespace:kube-system SelfLink: UID:1db5c891-a725-4d89-ae93-d6dab2f0507d ResourceVersion:1072 Generation:0 CreationTimestamp:2022-06-25 19:49:43 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[app:csi-azuredisk-controller pod-template-hash:78b647c5f4] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:csi-azuredisk-controller-78b647c5f4 UID:bbf70c65-8e57-41e6-8b2c-ea32da0aa2fb Controller:0xc0017e73a7 BlockOwnerDeletion:0xc0017e73a8}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-06-25 19:49:43 +0000 UTC FieldsType:FieldsV1 
FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bbf70c65-8e57-41e6-8b2c-ea32da0aa2fb\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"azuredisk\"}":{".":{},"f:args":{},"f:env":{".":{},"k:{\"name\":\"AZURE_CREDENTIAL_FILE\"}":{".":{},"f:name":{},"f:valueFrom":{".":{},"f:configMapKeyRef":{}}},"k:{\"name\":\"CSI_ENDPOINT\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":29602,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:hostPort":{},"f:name":{},"f:protocol":{}},"k:{\"containerPort\":29604,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:hostPort":{},"f:name":{},"f:protocol":{}}},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}},"k:{\"mountPath\":\"/etc/kubernetes/\"}":{".":{},"f:mountPath":{},"f:name":{}}}},"k:{\"name\":\"csi-attacher\"}":{".":{},"f:args":{},"f:env":{".":{},"k:{\"name\":\"ADDRESS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}}}},"k:{\"name\":\"csi-provisioner\"}":{".":{},"f:args":{},"f:env":{".":{},"k:{\"name\":\"ADDRESS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests
":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}}}},"k:{\"name\":\"csi-resizer\"}":{".":{},"f:args":{},"f:env":{".":{},"k:{\"name\":\"ADDRESS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}}}},"k:{\"name\":\"csi-snapshotter\"}":{".":{},"f:args":{},"f:env":{".":{},"k:{\"name\":\"ADDRESS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}}}},"k:{\"name\":\"liveness-probe\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:hostNetwork":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{},"f:volumes":{".":{},"k:{\"name\":\"azure-cred\"}":{".":{},"f:hostPath":{".":{},"f:path":{},"f:type":{}},"f:name":{}},"k:{\"name\":\"socket-dir\"}":{".":{},"f:emptyDir":{},"f:name":{}}}}} Subresource:}]}.
I0625 19:49:43.093625       1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/csi-azuredisk-controller" duration="100.573383ms"
I0625 19:49:43.093660       1 deployment_controller.go:490] "Error syncing deployment" deployment="kube-system/csi-azuredisk-controller" err="Operation cannot be fulfilled on deployments.apps \"csi-azuredisk-controller\": the object has been modified; please apply your changes to the latest version and try again"
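Note: the "Operation cannot be fulfilled ... the object has been modified" line above is an ordinary optimistic-concurrency conflict, not a fault: the controller wrote an update against a stale `resourceVersion`, the API server rejected it with 409 Conflict, and the deployment is simply re-synced (the very next "Started syncing deployment" line). Client code handles the same situation with a re-read-and-retry loop; client-go ships this as `retry.RetryOnConflict`. A dependency-free sketch of the pattern, with a simulated conflict error standing in for the API server's 409:

```go
package main

import (
	"errors"
	"fmt"
)

// errConflict stands in for the API server's 409 Conflict
// ("the object has been modified") error.
var errConflict = errors.New("conflict: object has been modified")

// retryOnConflict re-runs fn up to maxRetries times while it keeps
// failing with a conflict; fn is expected to re-read the latest object
// on each attempt. This mirrors the shape of client-go's
// retry.RetryOnConflict (which additionally applies backoff).
func retryOnConflict(maxRetries int, fn func() error) error {
	var err error
	for i := 0; i < maxRetries; i++ {
		if err = fn(); !errors.Is(err, errConflict) {
			return err // success, or a non-conflict error we won't retry
		}
	}
	return err
}

func main() {
	attempts := 0
	err := retryOnConflict(5, func() error {
		attempts++
		if attempts < 3 {
			return errConflict // simulate two writes against a stale version
		}
		return nil // third attempt sees the latest resourceVersion
	})
	fmt.Println(attempts, err) // 3 <nil>
}
```

The same conflict-and-resync pattern repeats later in this log for `csi-snapshot-controller`; in both cases the subsequent sync succeeds.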
I0625 19:49:43.093727       1 deployment_controller.go:576] "Started syncing deployment" deployment="kube-system/csi-azuredisk-controller" startTime="2022-06-25 19:49:43.093711257 +0000 UTC m=+184.469853670"
I0625 19:49:43.095548       1 deployment_util.go:774] Deployment "csi-azuredisk-controller" timed out (false) [last progress check: 2022-06-25 19:49:43 +0000 UTC - now: 2022-06-25 19:49:43.095539363 +0000 UTC m=+184.471681976]
I0625 19:49:43.107066       1 deployment_controller.go:281] "ReplicaSet updated" replicaSet="kube-system/csi-azuredisk-controller-78b647c5f4"
I0625 19:49:43.108680       1 replica_set.go:653] Finished syncing ReplicaSet "kube-system/csi-azuredisk-controller-78b647c5f4" (24.854469ms)
I0625 19:49:43.108746       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/csi-azuredisk-controller-78b647c5f4", timestamp:time.Time{wall:0xc0a5f795c084de22, ext:184384850231, loc:(*time.Location)(0x77933c0)}}
I0625 19:49:43.108851       1 replica_set.go:653] Finished syncing ReplicaSet "kube-system/csi-azuredisk-controller-78b647c5f4" (113µs)
... skipping 62 lines ...
I0625 19:49:49.805454       1 disruption.go:490] No PodDisruptionBudgets found for pod csi-snapshot-controller-667c64999f-q728s, PodDisruptionBudget controller will avoid syncing.
I0625 19:49:49.805463       1 disruption.go:418] No matching pdb for pod "csi-snapshot-controller-667c64999f-q728s"
I0625 19:49:49.805514       1 taint_manager.go:401] "Noticed pod update" pod="kube-system/csi-snapshot-controller-667c64999f-q728s"
I0625 19:49:49.805718       1 controller_utils.go:581] Controller csi-snapshot-controller-667c64999f created pod csi-snapshot-controller-667c64999f-q728s
I0625 19:49:49.806142       1 event.go:294] "Event occurred" object="kube-system/csi-snapshot-controller-667c64999f" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: csi-snapshot-controller-667c64999f-q728s"
I0625 19:49:49.813341       1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/csi-snapshot-controller" duration="40.773217ms"
I0625 19:49:49.813375       1 deployment_controller.go:490] "Error syncing deployment" deployment="kube-system/csi-snapshot-controller" err="Operation cannot be fulfilled on deployments.apps \"csi-snapshot-controller\": the object has been modified; please apply your changes to the latest version and try again"
I0625 19:49:49.813462       1 deployment_controller.go:576] "Started syncing deployment" deployment="kube-system/csi-snapshot-controller" startTime="2022-06-25 19:49:49.813443096 +0000 UTC m=+191.189585609"
I0625 19:49:49.814083       1 deployment_util.go:774] Deployment "csi-snapshot-controller" timed out (false) [last progress check: 2022-06-25 19:49:49 +0000 UTC - now: 2022-06-25 19:49:49.814074097 +0000 UTC m=+191.190216510]
I0625 19:49:49.822163       1 controller_utils.go:581] Controller csi-snapshot-controller-667c64999f created pod csi-snapshot-controller-667c64999f-cb5nf
I0625 19:49:49.822215       1 replica_set_utils.go:59] Updating status for : kube-system/csi-snapshot-controller-667c64999f, replicas 0->0 (need 2), fullyLabeledReplicas 0->0, readyReplicas 0->0, availableReplicas 0->0, sequence No: 0->1
I0625 19:49:49.822590       1 event.go:294] "Event occurred" object="kube-system/csi-snapshot-controller-667c64999f" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: csi-snapshot-controller-667c64999f-cb5nf"
I0625 19:49:49.825460       1 pvc_protection_controller.go:380] "Enqueuing PVCs for Pod" pod="kube-system/csi-snapshot-controller-667c64999f-cb5nf" podUID=1166e7c5-98c8-4c9e-ae07-2a2568349eba
... skipping 23 lines ...
I0625 19:49:49.850991       1 taint_manager.go:401] "Noticed pod update" pod="kube-system/csi-snapshot-controller-667c64999f-cb5nf"
I0625 19:49:49.861038       1 replica_set.go:653] Finished syncing ReplicaSet "kube-system/csi-snapshot-controller-667c64999f" (17.348949ms)
I0625 19:49:49.861074       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/csi-snapshot-controller-667c64999f", timestamp:time.Time{wall:0xc0a5f7976f20485e, ext:191166788243, loc:(*time.Location)(0x77933c0)}}
I0625 19:49:49.861209       1 replica_set_utils.go:59] Updating status for : kube-system/csi-snapshot-controller-667c64999f, replicas 0->2 (need 2), fullyLabeledReplicas 0->2, readyReplicas 0->0, availableReplicas 0->0, sequence No: 1->1
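Note: replica-count transitions like the one above ("replicas 0->2 (need 2), readyReplicas 0->0, availableReplicas 0->0") are the inputs to the Deployment's `Available` condition, which is what the test in the header is waiting on: the failing deployment reports `Available=False` with reason `MinimumReplicasUnavailable` because `AvailableReplicas:0` never reaches the required minimum. A simplified sketch of that check (the real controller resolves `maxUnavailable` from the rollout strategy, where it may be an integer or a percentage; the function names here are illustrative):

```go
package main

import "fmt"

// minAvailable is the threshold the deployment controller compares
// availableReplicas against when setting the Available condition.
func minAvailable(desired, maxUnavailable int32) int32 {
	if maxUnavailable > desired {
		return 0
	}
	return desired - maxUnavailable
}

// available reports whether the deployment meets minimum availability.
func available(availableReplicas, desired, maxUnavailable int32) bool {
	return availableReplicas >= minAvailable(desired, maxUnavailable)
}

func main() {
	// The failing test deployment: 1 desired replica, 0 available,
	// effective maxUnavailable of 0.
	fmt.Println(available(0, 1, 0)) // false -> MinimumReplicasUnavailable
	fmt.Println(available(1, 1, 0)) // true  -> Available=True
}
```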
I0625 19:49:49.861244       1 deployment_controller.go:281] "ReplicaSet updated" replicaSet="kube-system/csi-snapshot-controller-667c64999f"
I0625 19:49:49.870437       1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/csi-snapshot-controller" duration="29.738485ms"
I0625 19:49:49.870471       1 deployment_controller.go:490] "Error syncing deployment" deployment="kube-system/csi-snapshot-controller" err="Operation cannot be fulfilled on deployments.apps \"csi-snapshot-controller\": the object has been modified; please apply your changes to the latest version and try again"
I0625 19:49:49.870518       1 deployment_controller.go:576] "Started syncing deployment" deployment="kube-system/csi-snapshot-controller" startTime="2022-06-25 19:49:49.87050126 +0000 UTC m=+191.246643773"
I0625 19:49:49.876570       1 replica_set.go:443] Pod csi-snapshot-controller-667c64999f-cb5nf updated, objectMeta {Name:csi-snapshot-controller-667c64999f-cb5nf GenerateName:csi-snapshot-controller-667c64999f- Namespace:kube-system SelfLink: UID:1166e7c5-98c8-4c9e-ae07-2a2568349eba ResourceVersion:1148 Generation:0 CreationTimestamp:2022-06-25 19:49:49 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[app:csi-snapshot-controller pod-template-hash:667c64999f] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:csi-snapshot-controller-667c64999f UID:6f2eb147-da94-4554-930c-344a292bc950 Controller:0xc002013457 BlockOwnerDeletion:0xc002013458}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-06-25 19:49:49 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6f2eb147-da94-4554-930c-344a292bc950\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"csi-snapshot-controller\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}} Subresource:}]} -> {Name:csi-snapshot-controller-667c64999f-cb5nf GenerateName:csi-snapshot-controller-667c64999f- Namespace:kube-system SelfLink: UID:1166e7c5-98c8-4c9e-ae07-2a2568349eba ResourceVersion:1153 Generation:0 CreationTimestamp:2022-06-25 19:49:49 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[app:csi-snapshot-controller pod-template-hash:667c64999f] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:csi-snapshot-controller-667c64999f UID:6f2eb147-da94-4554-930c-344a292bc950 Controller:0xc002200c4e BlockOwnerDeletion:0xc002200c4f}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-06-25 19:49:49 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6f2eb147-da94-4554-930c-344a292bc950\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"csi-snapshot-controller\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}} Subresource:} {Manager:kubelet Operation:Update APIVersion:v1 Time:2022-06-25 19:49:49 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} Subresource:status}]}.
I0625 19:49:49.885561       1 replica_set.go:443] Pod csi-snapshot-controller-667c64999f-q728s updated, objectMeta {Name:csi-snapshot-controller-667c64999f-q728s GenerateName:csi-snapshot-controller-667c64999f- Namespace:kube-system SelfLink: UID:78851c71-fbe7-4772-9a6e-3313b96dd6e6 ResourceVersion:1144 Generation:0 CreationTimestamp:2022-06-25 19:49:49 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[app:csi-snapshot-controller pod-template-hash:667c64999f] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:csi-snapshot-controller-667c64999f UID:6f2eb147-da94-4554-930c-344a292bc950 Controller:0xc001f973ae BlockOwnerDeletion:0xc001f973af}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-06-25 19:49:49 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6f2eb147-da94-4554-930c-344a292bc950\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"csi-snapshot-controller\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}} Subresource:}]} -> {Name:csi-snapshot-controller-667c64999f-q728s GenerateName:csi-snapshot-controller-667c64999f- Namespace:kube-system SelfLink: UID:78851c71-fbe7-4772-9a6e-3313b96dd6e6 ResourceVersion:1154 Generation:0 CreationTimestamp:2022-06-25 19:49:49 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[app:csi-snapshot-controller pod-template-hash:667c64999f] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:csi-snapshot-controller-667c64999f UID:6f2eb147-da94-4554-930c-344a292bc950 Controller:0xc002013e8e BlockOwnerDeletion:0xc002013e8f}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-06-25 19:49:49 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6f2eb147-da94-4554-930c-344a292bc950\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"csi-snapshot-controller\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}} Subresource:} {Manager:kubelet Operation:Update APIVersion:v1 Time:2022-06-25 19:49:49 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} Subresource:status}]}.
I0625 19:49:49.885882       1 disruption.go:427] updatePod called on pod "csi-snapshot-controller-667c64999f-cb5nf"
I0625 19:49:49.885916       1 disruption.go:490] No PodDisruptionBudgets found for pod csi-snapshot-controller-667c64999f-cb5nf, PodDisruptionBudget controller will avoid syncing.
I0625 19:49:49.885924       1 disruption.go:430] No matching pdb for pod "csi-snapshot-controller-667c64999f-cb5nf"
... skipping 313 lines ...
I0625 19:51:46.058450       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-2540" (4µs)
I0625 19:51:46.067482       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-2540" (217.921157ms)
I0625 19:51:46.097092       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-1353" (4.276613ms)
I0625 19:51:46.102126       1 publisher.go:186] Finished syncing namespace "azuredisk-1353" (8.938527ms)
I0625 19:51:46.888761       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-4728
I0625 19:51:46.954559       1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-4728, name default-token-thlpj, uid 22e8ba4e-da47-4807-9105-9baa710fc7de, event type delete
E0625 19:51:46.977610       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-4728/default: secrets "default-token-55phc" is forbidden: unable to create new content in namespace azuredisk-4728 because it is being terminated
I0625 19:51:46.983268       1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-4728, name kube-root-ca.crt, uid 10bf8067-899a-420c-a574-2d4414dedee7, event type delete
I0625 19:51:46.985122       1 publisher.go:186] Finished syncing namespace "azuredisk-4728" (2.098006ms)
I0625 19:51:46.991730       1 tokens_controller.go:252] syncServiceAccount(azuredisk-4728/default), service account deleted, removing tokens
I0625 19:51:46.991799       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-4728" (3µs)
I0625 19:51:46.991840       1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-4728, name default, uid b7278f90-dcbc-47c8-90a8-78f25fe5d66d, event type delete
I0625 19:51:47.085256       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-4728, estimate: 0, errors: <nil>
... skipping 33 lines ...
I0625 19:51:47.910944       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-5466
I0625 19:51:47.935114       1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-5466, name kube-root-ca.crt, uid d5534251-14b5-475b-b3d4-5a6a96c391b5, event type delete
I0625 19:51:47.938304       1 publisher.go:186] Finished syncing namespace "azuredisk-5466" (3.42821ms)
I0625 19:51:47.990838       1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-5466, name default-token-jn7t6, uid 54c7e903-9f96-43e5-a2de-205eed68b349, event type delete
I0625 19:51:48.004787       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-5466" (4.2µs)
I0625 19:51:48.005194       1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-5466, name default, uid 1bab7fb0-4df8-4914-a8e9-81efc729f239, event type delete
E0625 19:51:48.008816       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-5466/default: secrets "default-token-wgkcd" is forbidden: unable to create new content in namespace azuredisk-5466 because it is being terminated
I0625 19:51:48.008865       1 tokens_controller.go:252] syncServiceAccount(azuredisk-5466/default), service account deleted, removing tokens
I0625 19:51:48.088277       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-5466, estimate: 0, errors: <nil>
I0625 19:51:48.089163       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-5466" (4.1µs)
I0625 19:51:48.107570       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-5466" (200.849507ms)
I0625 19:51:48.934371       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-2790
I0625 19:51:49.040209       1 tokens_controller.go:252] syncServiceAccount(azuredisk-2790/default), service account deleted, removing tokens
... skipping 6 lines ...
I0625 19:51:49.092063       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-2790, estimate: 0, errors: <nil>
I0625 19:51:49.101798       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-2790" (172.10102ms)
I0625 19:51:49.962640       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-5356
I0625 19:51:49.976343       1 namespace_controller.go:185] Namespace has been deleted azuredisk-8081
I0625 19:51:49.976588       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-8081" (269.501µs)
I0625 19:51:49.984972       1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-5356, name default-token-hjm2r, uid e6b7beee-e3ad-459e-88cf-0028c807d885, event type delete
E0625 19:51:49.998221       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-5356/default: secrets "default-token-mzf2q" is forbidden: unable to create new content in namespace azuredisk-5356 because it is being terminated
I0625 19:51:50.003805       1 tokens_controller.go:252] syncServiceAccount(azuredisk-5356/default), service account deleted, removing tokens
I0625 19:51:50.003923       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-5356" (3.3µs)
I0625 19:51:50.004073       1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-5356, name default, uid cb89374a-0606-4be0-8b66-8b3b38dd91fb, event type delete
I0625 19:51:50.032373       1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-5356, name kube-root-ca.crt, uid ee81a8df-9594-4280-87c5-eacab99bab87, event type delete
I0625 19:51:50.034799       1 publisher.go:186] Finished syncing namespace "azuredisk-5356" (2.434907ms)
I0625 19:51:50.131820       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-5356" (4.1µs)
... skipping 110 lines ...
I0625 19:51:51.072259       1 pv_controller.go:759] updating PersistentVolumeClaim[azuredisk-1353/pvc-hrcml] status: set phase Bound
I0625 19:51:51.072279       1 pv_controller.go:817] updating PersistentVolumeClaim[azuredisk-1353/pvc-hrcml] status: phase Bound already set
I0625 19:51:51.072290       1 pv_controller.go:1046] volume "pvc-e7052a95-8e23-4150-a038-c7b1a8bcf0f0" bound to claim "azuredisk-1353/pvc-hrcml"
I0625 19:51:51.072306       1 pv_controller.go:1047] volume "pvc-e7052a95-8e23-4150-a038-c7b1a8bcf0f0" status after binding: phase: Bound, bound to: "azuredisk-1353/pvc-hrcml (uid: e7052a95-8e23-4150-a038-c7b1a8bcf0f0)", boundByController: false
I0625 19:51:51.072321       1 pv_controller.go:1048] claim "azuredisk-1353/pvc-hrcml" status after binding: phase: Bound, bound to: "pvc-e7052a95-8e23-4150-a038-c7b1a8bcf0f0", bindCompleted: true, boundByController: true
I0625 19:51:51.133538       1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-5194, name default-token-fgzlw, uid 17691930-1dbd-4a36-9c53-0f43a4c346ba, event type delete
E0625 19:51:51.165232       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-5194/default: secrets "default-token-wnshs" is forbidden: unable to create new content in namespace azuredisk-5194 because it is being terminated
I0625 19:51:51.186682       1 tokens_controller.go:252] syncServiceAccount(azuredisk-5194/default), service account deleted, removing tokens
I0625 19:51:51.186756       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-5194" (4.5µs)
I0625 19:51:51.186798       1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-5194, name default, uid 1c5ae676-f711-464d-a261-43802feb8986, event type delete
I0625 19:51:51.225458       1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-5194, name kube-root-ca.crt, uid 716c08a6-41af-4513-bb0f-d0ed0a18d8ac, event type delete
I0625 19:51:51.231697       1 publisher.go:186] Finished syncing namespace "azuredisk-5194" (6.485618ms)
I0625 19:51:51.282604       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-5194" (3.9µs)
... skipping 193 lines ...
I0625 19:52:54.187624       1 pv_protection_controller.go:121] Processing PV pvc-e7052a95-8e23-4150-a038-c7b1a8bcf0f0
I0625 19:52:54.196652       1 pv_controller_base.go:670] storeObjectUpdate updating volume "pvc-e7052a95-8e23-4150-a038-c7b1a8bcf0f0" with version 1821
I0625 19:52:54.196964       1 pv_controller.go:539] synchronizing PersistentVolume[pvc-e7052a95-8e23-4150-a038-c7b1a8bcf0f0]: phase: Released, bound to: "azuredisk-1353/pvc-hrcml (uid: e7052a95-8e23-4150-a038-c7b1a8bcf0f0)", boundByController: false
I0625 19:52:54.197290       1 pv_controller.go:573] synchronizing PersistentVolume[pvc-e7052a95-8e23-4150-a038-c7b1a8bcf0f0]: volume is bound to claim azuredisk-1353/pvc-hrcml
I0625 19:52:54.197469       1 pv_controller.go:607] synchronizing PersistentVolume[pvc-e7052a95-8e23-4150-a038-c7b1a8bcf0f0]: claim azuredisk-1353/pvc-hrcml not found
I0625 19:52:54.197323       1 pv_protection_controller.go:198] Got event on PV pvc-e7052a95-8e23-4150-a038-c7b1a8bcf0f0
I0625 19:52:54.208040       1 pv_protection_controller.go:173] Error removing protection finalizer from PV pvc-e7052a95-8e23-4150-a038-c7b1a8bcf0f0: Operation cannot be fulfilled on persistentvolumes "pvc-e7052a95-8e23-4150-a038-c7b1a8bcf0f0": the object has been modified; please apply your changes to the latest version and try again
I0625 19:52:54.208245       1 pv_protection_controller.go:124] Finished processing PV pvc-e7052a95-8e23-4150-a038-c7b1a8bcf0f0 (20.603273ms)
E0625 19:52:54.208271       1 pv_protection_controller.go:114] PV pvc-e7052a95-8e23-4150-a038-c7b1a8bcf0f0 failed with : Operation cannot be fulfilled on persistentvolumes "pvc-e7052a95-8e23-4150-a038-c7b1a8bcf0f0": the object has been modified; please apply your changes to the latest version and try again
I0625 19:52:54.208374       1 pv_protection_controller.go:121] Processing PV pvc-e7052a95-8e23-4150-a038-c7b1a8bcf0f0
I0625 19:52:54.213287       1 pv_controller_base.go:237] volume "pvc-e7052a95-8e23-4150-a038-c7b1a8bcf0f0" deleted
I0625 19:52:54.213405       1 pv_controller_base.go:533] deletion of claim "azuredisk-1353/pvc-hrcml" was already processed
I0625 19:52:54.213293       1 pv_protection_controller.go:176] Removed protection finalizer from PV pvc-e7052a95-8e23-4150-a038-c7b1a8bcf0f0
I0625 19:52:54.213502       1 pv_protection_controller.go:124] Finished processing PV pvc-e7052a95-8e23-4150-a038-c7b1a8bcf0f0 (5.001117ms)
I0625 19:52:54.213562       1 pv_protection_controller.go:121] Processing PV pvc-e7052a95-8e23-4150-a038-c7b1a8bcf0f0
... skipping 1234 lines ...
I0625 19:58:00.386174       1 pv_controller.go:607] synchronizing PersistentVolume[pvc-486e1f30-959f-4d0c-9ea1-eb759256f695]: claim azuredisk-9241/pvc-j595t not found
I0625 19:58:00.393085       1 pv_controller_base.go:670] storeObjectUpdate updating volume "pvc-486e1f30-959f-4d0c-9ea1-eb759256f695" with version 2812
I0625 19:58:00.393256       1 pv_controller.go:539] synchronizing PersistentVolume[pvc-486e1f30-959f-4d0c-9ea1-eb759256f695]: phase: Released, bound to: "azuredisk-9241/pvc-j595t (uid: 486e1f30-959f-4d0c-9ea1-eb759256f695)", boundByController: false
I0625 19:58:00.393356       1 pv_controller.go:573] synchronizing PersistentVolume[pvc-486e1f30-959f-4d0c-9ea1-eb759256f695]: volume is bound to claim azuredisk-9241/pvc-j595t
I0625 19:58:00.393418       1 pv_controller.go:607] synchronizing PersistentVolume[pvc-486e1f30-959f-4d0c-9ea1-eb759256f695]: claim azuredisk-9241/pvc-j595t not found
I0625 19:58:00.393476       1 pv_protection_controller.go:198] Got event on PV pvc-486e1f30-959f-4d0c-9ea1-eb759256f695
I0625 19:58:00.396051       1 pv_protection_controller.go:173] Error removing protection finalizer from PV pvc-486e1f30-959f-4d0c-9ea1-eb759256f695: Operation cannot be fulfilled on persistentvolumes "pvc-486e1f30-959f-4d0c-9ea1-eb759256f695": the object has been modified; please apply your changes to the latest version and try again
I0625 19:58:00.396072       1 pv_protection_controller.go:124] Finished processing PV pvc-486e1f30-959f-4d0c-9ea1-eb759256f695 (10.483533ms)
E0625 19:58:00.396088       1 pv_protection_controller.go:114] PV pvc-486e1f30-959f-4d0c-9ea1-eb759256f695 failed with : Operation cannot be fulfilled on persistentvolumes "pvc-486e1f30-959f-4d0c-9ea1-eb759256f695": the object has been modified; please apply your changes to the latest version and try again
I0625 19:58:00.396160       1 pv_protection_controller.go:121] Processing PV pvc-486e1f30-959f-4d0c-9ea1-eb759256f695
I0625 19:58:00.401124       1 pv_controller_base.go:237] volume "pvc-486e1f30-959f-4d0c-9ea1-eb759256f695" deleted
I0625 19:58:00.401606       1 pv_protection_controller.go:176] Removed protection finalizer from PV pvc-486e1f30-959f-4d0c-9ea1-eb759256f695
I0625 19:58:00.401628       1 pv_protection_controller.go:124] Finished processing PV pvc-486e1f30-959f-4d0c-9ea1-eb759256f695 (5.345516ms)
I0625 19:58:00.401695       1 pv_controller_base.go:533] deletion of claim "azuredisk-9241/pvc-j595t" was already processed
I0625 19:58:00.401802       1 pv_protection_controller.go:121] Processing PV pvc-486e1f30-959f-4d0c-9ea1-eb759256f695
... skipping 121 lines ...
I0625 19:58:10.672532       1 resource_quota_monitor.go:359] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-9241, name azuredisk-volume-tester-hh2d2.16fbf55690988171, uid 16281a82-462c-4861-bf6f-e7a71adb3cbe, event type delete
I0625 19:58:10.676464       1 resource_quota_monitor.go:359] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-9241, name pvc-j595t.16fbf5525d7c1573, uid b26d69e4-a388-42fa-96de-ec03b63d5610, event type delete
I0625 19:58:10.679566       1 resource_quota_monitor.go:359] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-9241, name pvc-j595t.16fbf552653c1827, uid 8bf02d4d-8a47-4962-8e80-09bc68d28493, event type delete
I0625 19:58:10.683733       1 resource_quota_monitor.go:359] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-9241, name pvc-j595t.16fbf552654d3449, uid eb416bc3-724c-44b0-be1a-1a2817f203f1, event type delete
I0625 19:58:10.687178       1 resource_quota_monitor.go:359] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-9241, name pvc-j595t.16fbf552ff01edc2, uid f15687c9-ace4-42ef-bd7f-629886a37383, event type delete
I0625 19:58:10.733712       1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-9241, name default-token-wbz68, uid c8f21b45-c959-45c8-80de-3f893811032c, event type delete
E0625 19:58:10.746709       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-9241/default: secrets "default-token-khfkp" is forbidden: unable to create new content in namespace azuredisk-9241 because it is being terminated
I0625 19:58:10.760644       1 tokens_controller.go:252] syncServiceAccount(azuredisk-9241/default), service account deleted, removing tokens
I0625 19:58:10.760857       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-9241" (3.9µs)
I0625 19:58:10.760891       1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-9241, name default, uid ddc57eda-c1e7-4fe0-b9b0-bb693395d3ea, event type delete
I0625 19:58:10.769674       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-9241, estimate: 0, errors: <nil>
I0625 19:58:10.770209       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-9241" (2.7µs)
I0625 19:58:10.779907       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-9241" (196.004611ms)
... skipping 913 lines ...
I0625 20:00:22.482416       1 pv_protection_controller.go:121] Processing PV pvc-abf077b7-5263-4e7b-a2e9-1f95578002eb
I0625 20:00:22.487995       1 pv_controller_base.go:670] storeObjectUpdate updating volume "pvc-abf077b7-5263-4e7b-a2e9-1f95578002eb" with version 3297
I0625 20:00:22.488219       1 pv_controller.go:539] synchronizing PersistentVolume[pvc-abf077b7-5263-4e7b-a2e9-1f95578002eb]: phase: Released, bound to: "azuredisk-9336/pvc-nmksx (uid: abf077b7-5263-4e7b-a2e9-1f95578002eb)", boundByController: false
I0625 20:00:22.488324       1 pv_controller.go:573] synchronizing PersistentVolume[pvc-abf077b7-5263-4e7b-a2e9-1f95578002eb]: volume is bound to claim azuredisk-9336/pvc-nmksx
I0625 20:00:22.488407       1 pv_controller.go:607] synchronizing PersistentVolume[pvc-abf077b7-5263-4e7b-a2e9-1f95578002eb]: claim azuredisk-9336/pvc-nmksx not found
I0625 20:00:22.488501       1 pv_protection_controller.go:198] Got event on PV pvc-abf077b7-5263-4e7b-a2e9-1f95578002eb
I0625 20:00:22.490761       1 pv_protection_controller.go:173] Error removing protection finalizer from PV pvc-abf077b7-5263-4e7b-a2e9-1f95578002eb: Operation cannot be fulfilled on persistentvolumes "pvc-abf077b7-5263-4e7b-a2e9-1f95578002eb": the object has been modified; please apply your changes to the latest version and try again
I0625 20:00:22.490822       1 pv_protection_controller.go:124] Finished processing PV pvc-abf077b7-5263-4e7b-a2e9-1f95578002eb (8.306619ms)
E0625 20:00:22.490836       1 pv_protection_controller.go:114] PV pvc-abf077b7-5263-4e7b-a2e9-1f95578002eb failed with : Operation cannot be fulfilled on persistentvolumes "pvc-abf077b7-5263-4e7b-a2e9-1f95578002eb": the object has been modified; please apply your changes to the latest version and try again
I0625 20:00:22.490926       1 pv_protection_controller.go:121] Processing PV pvc-abf077b7-5263-4e7b-a2e9-1f95578002eb
I0625 20:00:22.495952       1 pv_controller_base.go:237] volume "pvc-abf077b7-5263-4e7b-a2e9-1f95578002eb" deleted
I0625 20:00:22.496000       1 pv_controller_base.go:533] deletion of claim "azuredisk-9336/pvc-nmksx" was already processed
I0625 20:00:22.496187       1 pv_protection_controller.go:176] Removed protection finalizer from PV pvc-abf077b7-5263-4e7b-a2e9-1f95578002eb
I0625 20:00:22.496267       1 pv_protection_controller.go:124] Finished processing PV pvc-abf077b7-5263-4e7b-a2e9-1f95578002eb (5.079412ms)
I0625 20:00:22.496288       1 pv_protection_controller.go:121] Processing PV pvc-abf077b7-5263-4e7b-a2e9-1f95578002eb
... skipping 531 lines ...
I0625 20:02:47.602971       1 pv_protection_controller.go:121] Processing PV pvc-b9dccb09-1a30-4c61-85ed-5bd95aab7237
I0625 20:02:47.610638       1 pv_controller_base.go:670] storeObjectUpdate updating volume "pvc-b9dccb09-1a30-4c61-85ed-5bd95aab7237" with version 3699
I0625 20:02:47.610669       1 pv_controller.go:539] synchronizing PersistentVolume[pvc-b9dccb09-1a30-4c61-85ed-5bd95aab7237]: phase: Released, bound to: "azuredisk-9336/pvc-djn9f (uid: b9dccb09-1a30-4c61-85ed-5bd95aab7237)", boundByController: false
I0625 20:02:47.610692       1 pv_controller.go:573] synchronizing PersistentVolume[pvc-b9dccb09-1a30-4c61-85ed-5bd95aab7237]: volume is bound to claim azuredisk-9336/pvc-djn9f
I0625 20:02:47.610703       1 pv_controller.go:607] synchronizing PersistentVolume[pvc-b9dccb09-1a30-4c61-85ed-5bd95aab7237]: claim azuredisk-9336/pvc-djn9f not found
I0625 20:02:47.610722       1 pv_protection_controller.go:198] Got event on PV pvc-b9dccb09-1a30-4c61-85ed-5bd95aab7237
I0625 20:02:47.614440       1 pv_protection_controller.go:173] Error removing protection finalizer from PV pvc-b9dccb09-1a30-4c61-85ed-5bd95aab7237: Operation cannot be fulfilled on persistentvolumes "pvc-b9dccb09-1a30-4c61-85ed-5bd95aab7237": the object has been modified; please apply your changes to the latest version and try again
I0625 20:02:47.614625       1 pv_protection_controller.go:124] Finished processing PV pvc-b9dccb09-1a30-4c61-85ed-5bd95aab7237 (11.625934ms)
E0625 20:02:47.614647       1 pv_protection_controller.go:114] PV pvc-b9dccb09-1a30-4c61-85ed-5bd95aab7237 failed with : Operation cannot be fulfilled on persistentvolumes "pvc-b9dccb09-1a30-4c61-85ed-5bd95aab7237": the object has been modified; please apply your changes to the latest version and try again
I0625 20:02:47.614757       1 pv_protection_controller.go:121] Processing PV pvc-b9dccb09-1a30-4c61-85ed-5bd95aab7237
I0625 20:02:47.618878       1 pv_protection_controller.go:176] Removed protection finalizer from PV pvc-b9dccb09-1a30-4c61-85ed-5bd95aab7237
I0625 20:02:47.619095       1 pv_protection_controller.go:124] Finished processing PV pvc-b9dccb09-1a30-4c61-85ed-5bd95aab7237 (4.318913ms)
I0625 20:02:47.620021       1 pv_protection_controller.go:121] Processing PV pvc-b9dccb09-1a30-4c61-85ed-5bd95aab7237
I0625 20:02:47.620509       1 pv_controller_base.go:237] volume "pvc-b9dccb09-1a30-4c61-85ed-5bd95aab7237" deleted
I0625 20:02:47.620870       1 pv_controller_base.go:533] deletion of claim "azuredisk-9336/pvc-djn9f" was already processed
I0625 20:02:47.622232       1 pv_protection_controller.go:173] Error removing protection finalizer from PV pvc-b9dccb09-1a30-4c61-85ed-5bd95aab7237: Operation cannot be fulfilled on persistentvolumes "pvc-b9dccb09-1a30-4c61-85ed-5bd95aab7237": StorageError: invalid object, Code: 4, Key: /registry/persistentvolumes/pvc-b9dccb09-1a30-4c61-85ed-5bd95aab7237, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 9ae7d59e-531f-4af9-bb6e-d6c18144e088, UID in object meta: 
I0625 20:02:47.622262       1 pv_protection_controller.go:124] Finished processing PV pvc-b9dccb09-1a30-4c61-85ed-5bd95aab7237 (2.218607ms)
E0625 20:02:47.622274       1 pv_protection_controller.go:114] PV pvc-b9dccb09-1a30-4c61-85ed-5bd95aab7237 failed with : Operation cannot be fulfilled on persistentvolumes "pvc-b9dccb09-1a30-4c61-85ed-5bd95aab7237": StorageError: invalid object, Code: 4, Key: /registry/persistentvolumes/pvc-b9dccb09-1a30-4c61-85ed-5bd95aab7237, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 9ae7d59e-531f-4af9-bb6e-d6c18144e088, UID in object meta: 
I0625 20:02:47.627751       1 pv_protection_controller.go:121] Processing PV pvc-b9dccb09-1a30-4c61-85ed-5bd95aab7237
I0625 20:02:47.627784       1 pv_protection_controller.go:129] PV pvc-b9dccb09-1a30-4c61-85ed-5bd95aab7237 not found, ignoring
I0625 20:02:47.627793       1 pv_protection_controller.go:124] Finished processing PV pvc-b9dccb09-1a30-4c61-85ed-5bd95aab7237 (18.6µs)
I0625 20:02:50.883964       1 reflector.go:382] k8s.io/client-go/informers/factory.go:134: forcing resync
I0625 20:02:50.989950       1 gc_controller.go:161] GC'ing orphaned
I0625 20:02:50.989981       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
... skipping 47 lines ...
I0625 20:02:53.772432       1 pvc_protection_controller.go:331] "Got event on PVC" pvc="azuredisk-2205/pvc-x6gwz"
I0625 20:02:53.774917       1 replica_set.go:653] Finished syncing ReplicaSet "azuredisk-2205/azuredisk-volume-tester-lc2bk-754c97cc" (7.443322ms)
I0625 20:02:53.775207       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"azuredisk-2205/azuredisk-volume-tester-lc2bk-754c97cc", timestamp:time.Time{wall:0xc0a5f85b6c8beba0, ext:975123510241, loc:(*time.Location)(0x77933c0)}}
I0625 20:02:53.775309       1 replica_set.go:653] Finished syncing ReplicaSet "azuredisk-2205/azuredisk-volume-tester-lc2bk-754c97cc" (106.1µs)
I0625 20:02:53.775363       1 deployment_controller.go:281] "ReplicaSet updated" replicaSet="azuredisk-2205/azuredisk-volume-tester-lc2bk-754c97cc"
I0625 20:02:53.778506       1 deployment_controller.go:578] "Finished syncing deployment" deployment="azuredisk-2205/azuredisk-volume-tester-lc2bk" duration="35.480805ms"
I0625 20:02:53.778715       1 deployment_controller.go:490] "Error syncing deployment" deployment="azuredisk-2205/azuredisk-volume-tester-lc2bk" err="Operation cannot be fulfilled on deployments.apps \"azuredisk-volume-tester-lc2bk\": the object has been modified; please apply your changes to the latest version and try again"
I0625 20:02:53.778893       1 deployment_controller.go:576] "Started syncing deployment" deployment="azuredisk-2205/azuredisk-volume-tester-lc2bk" startTime="2022-06-25 20:02:53.778869021 +0000 UTC m=+975.155011434"
I0625 20:02:53.779921       1 pv_controller_base.go:670] storeObjectUpdate updating claim "azuredisk-2205/pvc-x6gwz" with version 3735
I0625 20:02:53.780287       1 pv_controller.go:251] synchronizing PersistentVolumeClaim[azuredisk-2205/pvc-x6gwz]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0625 20:02:53.780496       1 pv_controller.go:346] synchronizing unbound PersistentVolumeClaim[azuredisk-2205/pvc-x6gwz]: no volume found
I0625 20:02:53.780651       1 pv_controller.go:1453] provisionClaim[azuredisk-2205/pvc-x6gwz]: started
I0625 20:02:53.780793       1 pv_controller.go:1762] scheduleOperation[provision-azuredisk-2205/pvc-x6gwz[91f2c987-b4b8-451e-9a41-044dfe1dc81c]]
... skipping 393 lines ...
I0625 20:04:52.175898       1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="104.201µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:54458" resp=200
I0625 20:04:56.000819       1 secrets.go:73] Expired bootstrap token in kube-system/bootstrap-token-41rac5 Secret: 2022-06-25T20:04:56Z
I0625 20:04:56.000848       1 tokencleaner.go:194] Deleting expired secret kube-system/bootstrap-token-41rac5
I0625 20:04:56.007036       1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=secrets, namespace kube-system, name bootstrap-token-41rac5, uid 70dabbff-a38d-4a5c-a38d-71a3a7931822, event type delete
I0625 20:04:56.007190       1 tokencleaner.go:166] Finished syncing secret "kube-system/bootstrap-token-41rac5" (6.396818ms)
E0625 20:04:56.848643       1 csi_attacher.go:511] kubernetes.io/csi: Attach timeout after 2m0s [volume=/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-52trnh/providers/Microsoft.Compute/disks/pvc-91f2c987-b4b8-451e-9a41-044dfe1dc81c; attachment.ID=csi-f64ae13e8afa84039e8233432cb7dd78663514ee1ca4b73af8862ed413ffff60]
I0625 20:04:56.849227       1 event.go:294] "Event occurred" object="azuredisk-2205/azuredisk-volume-tester-lc2bk-754c97cc-98rxv" kind="Pod" apiVersion="v1" type="Warning" reason="FailedAttachVolume" message="AttachVolume.Attach failed for volume \"pvc-91f2c987-b4b8-451e-9a41-044dfe1dc81c\" : Attach timeout for volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-52trnh/providers/Microsoft.Compute/disks/pvc-91f2c987-b4b8-451e-9a41-044dfe1dc81c"
E0625 20:04:56.849333       1 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/csi/disk.csi.azure.com^/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-52trnh/providers/Microsoft.Compute/disks/pvc-91f2c987-b4b8-451e-9a41-044dfe1dc81c podName: nodeName:}" failed. No retries permitted until 2022-06-25 20:04:57.349260415 +0000 UTC m=+1098.725403028 (durationBeforeRetry 500ms). Error: AttachVolume.Attach failed for volume "pvc-91f2c987-b4b8-451e-9a41-044dfe1dc81c" (UniqueName: "kubernetes.io/csi/disk.csi.azure.com^/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-52trnh/providers/Microsoft.Compute/disks/pvc-91f2c987-b4b8-451e-9a41-044dfe1dc81c") from node "capz-52trnh-mp-0000001" : Attach timeout for volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-52trnh/providers/Microsoft.Compute/disks/pvc-91f2c987-b4b8-451e-9a41-044dfe1dc81c
I0625 20:04:57.355858       1 reconciler.go:304] attacherDetacher.AttachVolume started for volume "pvc-91f2c987-b4b8-451e-9a41-044dfe1dc81c" (UniqueName: "kubernetes.io/csi/disk.csi.azure.com^/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-52trnh/providers/Microsoft.Compute/disks/pvc-91f2c987-b4b8-451e-9a41-044dfe1dc81c") from node "capz-52trnh-mp-0000001" 
I0625 20:04:57.355945       1 csi_attacher.go:178] kubernetes.io/csi: probing VolumeAttachment [id=csi-f64ae13e8afa84039e8233432cb7dd78663514ee1ca4b73af8862ed413ffff60]
I0625 20:04:57.860131       1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.CertificateSigningRequest total 0 items received
I0625 20:05:02.176679       1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="99.901µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:54554" resp=200
I0625 20:05:06.065395       1 reflector.go:382] k8s.io/client-go/informers/factory.go:134: forcing resync
I0625 20:05:06.117139       1 pv_controller_base.go:556] resyncing PV controller
... skipping 250 lines ...
I0625 20:06:51.122291       1 pv_controller.go:1048] claim "azuredisk-2205/pvc-x6gwz" status after binding: phase: Bound, bound to: "pvc-91f2c987-b4b8-451e-9a41-044dfe1dc81c", bindCompleted: true, boundByController: true
I0625 20:06:52.117105       1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0625 20:06:52.176696       1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="85.4µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:55660" resp=200
I0625 20:06:54.295162       1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ServiceAccount total 8 items received
E0625 20:06:57.356546       1 csi_attacher.go:511] kubernetes.io/csi: Attach timeout after 2m0s [volume=/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-52trnh/providers/Microsoft.Compute/disks/pvc-91f2c987-b4b8-451e-9a41-044dfe1dc81c; attachment.ID=csi-f64ae13e8afa84039e8233432cb7dd78663514ee1ca4b73af8862ed413ffff60]
I0625 20:06:57.356575       1 actual_state_of_world.go:355] Volume "kubernetes.io/csi/disk.csi.azure.com^/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-52trnh/providers/Microsoft.Compute/disks/pvc-91f2c987-b4b8-451e-9a41-044dfe1dc81c" is already added to attachedVolume list to node "capz-52trnh-mp-0000001", update device path ""
E0625 20:06:57.356682       1 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/csi/disk.csi.azure.com^/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-52trnh/providers/Microsoft.Compute/disks/pvc-91f2c987-b4b8-451e-9a41-044dfe1dc81c podName: nodeName:}" failed. No retries permitted until 2022-06-25 20:06:58.356651777 +0000 UTC m=+1219.732794390 (durationBeforeRetry 1s). Error: AttachVolume.Attach failed for volume "pvc-91f2c987-b4b8-451e-9a41-044dfe1dc81c" (UniqueName: "kubernetes.io/csi/disk.csi.azure.com^/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-52trnh/providers/Microsoft.Compute/disks/pvc-91f2c987-b4b8-451e-9a41-044dfe1dc81c") from node "capz-52trnh-mp-0000001" : Attach timeout for volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-52trnh/providers/Microsoft.Compute/disks/pvc-91f2c987-b4b8-451e-9a41-044dfe1dc81c
I0625 20:06:57.356751       1 event.go:294] "Event occurred" object="azuredisk-2205/azuredisk-volume-tester-lc2bk-754c97cc-98rxv" kind="Pod" apiVersion="v1" type="Warning" reason="FailedAttachVolume" message="AttachVolume.Attach failed for volume \"pvc-91f2c987-b4b8-451e-9a41-044dfe1dc81c\" : Attach timeout for volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-52trnh/providers/Microsoft.Compute/disks/pvc-91f2c987-b4b8-451e-9a41-044dfe1dc81c"
I0625 20:06:58.075187       1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 10 items received
I0625 20:06:58.380470       1 reconciler.go:304] attacherDetacher.AttachVolume started for volume "pvc-91f2c987-b4b8-451e-9a41-044dfe1dc81c" (UniqueName: "kubernetes.io/csi/disk.csi.azure.com^/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-52trnh/providers/Microsoft.Compute/disks/pvc-91f2c987-b4b8-451e-9a41-044dfe1dc81c") from node "capz-52trnh-mp-0000001" 
I0625 20:06:58.380576       1 csi_attacher.go:178] kubernetes.io/csi: probing VolumeAttachment [id=csi-f64ae13e8afa84039e8233432cb7dd78663514ee1ca4b73af8862ed413ffff60]
I0625 20:07:02.175807       1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="116.6µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:55756" resp=200
I0625 20:07:06.071865       1 reflector.go:382] k8s.io/client-go/informers/factory.go:134: forcing resync
I0625 20:07:06.121943       1 pv_controller_base.go:556] resyncing PV controller
... skipping 298 lines ...
I0625 20:08:51.129181       1 pv_controller.go:866] updating PersistentVolume[pvc-91f2c987-b4b8-451e-9a41-044dfe1dc81c]: set phase Bound
I0625 20:08:51.129191       1 pv_controller.go:869] updating PersistentVolume[pvc-91f2c987-b4b8-451e-9a41-044dfe1dc81c]: phase Bound already set
I0625 20:08:52.182509       1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0625 20:08:52.183276       1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="84.7µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:56860" resp=200
E0625 20:08:58.380751       1 csi_attacher.go:511] kubernetes.io/csi: Attach timeout after 2m0s [volume=/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-52trnh/providers/Microsoft.Compute/disks/pvc-91f2c987-b4b8-451e-9a41-044dfe1dc81c; attachment.ID=csi-f64ae13e8afa84039e8233432cb7dd78663514ee1ca4b73af8862ed413ffff60]
I0625 20:08:58.380789       1 actual_state_of_world.go:355] Volume "kubernetes.io/csi/disk.csi.azure.com^/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-52trnh/providers/Microsoft.Compute/disks/pvc-91f2c987-b4b8-451e-9a41-044dfe1dc81c" is already added to attachedVolume list to node "capz-52trnh-mp-0000001", update device path ""
E0625 20:08:58.380932       1 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/csi/disk.csi.azure.com^/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-52trnh/providers/Microsoft.Compute/disks/pvc-91f2c987-b4b8-451e-9a41-044dfe1dc81c podName: nodeName:}" failed. No retries permitted until 2022-06-25 20:09:00.380870192 +0000 UTC m=+1341.757012805 (durationBeforeRetry 2s). Error: AttachVolume.Attach failed for volume "pvc-91f2c987-b4b8-451e-9a41-044dfe1dc81c" (UniqueName: "kubernetes.io/csi/disk.csi.azure.com^/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-52trnh/providers/Microsoft.Compute/disks/pvc-91f2c987-b4b8-451e-9a41-044dfe1dc81c") from node "capz-52trnh-mp-0000001" : Attach timeout for volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-52trnh/providers/Microsoft.Compute/disks/pvc-91f2c987-b4b8-451e-9a41-044dfe1dc81c
I0625 20:08:58.381318       1 event.go:294] "Event occurred" object="azuredisk-2205/azuredisk-volume-tester-lc2bk-754c97cc-98rxv" kind="Pod" apiVersion="v1" type="Warning" reason="FailedAttachVolume" message="AttachVolume.Attach failed for volume \"pvc-91f2c987-b4b8-451e-9a41-044dfe1dc81c\" : Attach timeout for volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-52trnh/providers/Microsoft.Compute/disks/pvc-91f2c987-b4b8-451e-9a41-044dfe1dc81c"
I0625 20:08:58.406712       1 actual_state_of_world.go:432] Set detach request time to current time for volume kubernetes.io/csi/disk.csi.azure.com^/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-52trnh/providers/Microsoft.Compute/disks/pvc-91f2c987-b4b8-451e-9a41-044dfe1dc81c on node "capz-52trnh-mp-0000001"
I0625 20:08:59.872926       1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ClusterRole total 2 items received
I0625 20:09:00.126280       1 reflector.go:536] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 7 items received
I0625 20:09:02.176023       1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="94.801µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:56954" resp=200
I0625 20:09:06.075492       1 reflector.go:382] k8s.io/client-go/informers/factory.go:134: forcing resync
I0625 20:09:06.128850       1 pv_controller_base.go:556] resyncing PV controller
... skipping 381 lines ...
I0625 20:11:02.176410       1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="104.601µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:58170" resp=200
I0625 20:11:02.635781       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-3086" (4.6µs)
I0625 20:11:02.768139       1 publisher.go:186] Finished syncing namespace "azuredisk-1387" (12.348367ms)
I0625 20:11:02.782495       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-1387" (27.065847ms)
I0625 20:11:03.278325       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-3410
I0625 20:11:03.301591       1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-3410, name default-token-cbmw6, uid f5534762-8959-4b2c-9169-be0eefbe4d2a, event type delete
E0625 20:11:03.318195       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-3410/default: secrets "default-token-wngl6" is forbidden: unable to create new content in namespace azuredisk-3410 because it is being terminated
I0625 20:11:03.346681       1 resource_quota_monitor.go:359] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-3410, name pvc-m9tqm.16fbf612ad04ecca, uid 3bed75b6-b352-4d43-bf97-92dd6a315e4c, event type delete
I0625 20:11:03.350741       1 resource_quota_monitor.go:359] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-3410, name pvc-m9tqm.16fbf612ad14b2b3, uid 92221233-d46f-468d-8c75-6b87a02f6511, event type delete
I0625 20:11:03.354880       1 resource_quota_monitor.go:359] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-3410, name pvc-m9tqm.16fbf61345f22796, uid a6c9a95a-13d5-4294-8880-67ea940f222a, event type delete
I0625 20:11:03.386092       1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-3410, name kube-root-ca.crt, uid 7df1a490-d1a6-4220-86e7-10bbb3f45d33, event type delete
I0625 20:11:03.387355       1 publisher.go:186] Finished syncing namespace "azuredisk-3410" (1.517508ms)
I0625 20:11:03.392269       1 tokens_controller.go:252] syncServiceAccount(azuredisk-3410/default), service account deleted, removing tokens
... skipping 345 lines ...
I0625 20:11:07.722046       1 disruption.go:430] No matching pdb for pod "azuredisk-volume-tester-kxs6v"
I0625 20:11:07.754240       1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-3086, name kube-root-ca.crt, uid 2ac742d1-aa09-4821-a339-95f980789e5b, event type delete
I0625 20:11:07.756467       1 publisher.go:186] Finished syncing namespace "azuredisk-3086" (2.507917ms)
I0625 20:11:07.810568       1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-3086, name default-token-kzq9b, uid db55ace6-158c-4469-94bd-d41d2723de2a, event type delete
I0625 20:11:07.821129       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-3086" (4.2µs)
I0625 20:11:07.821232       1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-3086, name default, uid 11fe036b-084f-4310-94e3-1d32da829eed, event type delete
E0625 20:11:07.826052       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-3086/default: secrets "default-token-qnwjn" is forbidden: unable to create new content in namespace azuredisk-3086 because it is being terminated
I0625 20:11:07.826173       1 tokens_controller.go:252] syncServiceAccount(azuredisk-3086/default), service account deleted, removing tokens
I0625 20:11:07.846953       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-3086" (2.7µs)
I0625 20:11:07.847758       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-3086, estimate: 0, errors: <nil>
I0625 20:11:07.856860       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-3086" (220.874312ms)
I0625 20:11:08.453731       1 namespace_controller.go:185] Namespace has been deleted azuredisk-3410
I0625 20:11:08.453760       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-3410" (52.2µs)
... skipping 382 lines ...
I0625 20:11:57.383575       1 pv_controller.go:607] synchronizing PersistentVolume[pvc-26cf613b-b961-46dc-9e51-95834e77d7d7]: claim azuredisk-1387/pvc-q4frw not found
I0625 20:11:57.390786       1 pv_controller_base.go:670] storeObjectUpdate updating volume "pvc-26cf613b-b961-46dc-9e51-95834e77d7d7" with version 5306
I0625 20:11:57.390898       1 pv_controller.go:539] synchronizing PersistentVolume[pvc-26cf613b-b961-46dc-9e51-95834e77d7d7]: phase: Released, bound to: "azuredisk-1387/pvc-q4frw (uid: 26cf613b-b961-46dc-9e51-95834e77d7d7)", boundByController: false
I0625 20:11:57.390963       1 pv_controller.go:573] synchronizing PersistentVolume[pvc-26cf613b-b961-46dc-9e51-95834e77d7d7]: volume is bound to claim azuredisk-1387/pvc-q4frw
I0625 20:11:57.391002       1 pv_controller.go:607] synchronizing PersistentVolume[pvc-26cf613b-b961-46dc-9e51-95834e77d7d7]: claim azuredisk-1387/pvc-q4frw not found
I0625 20:11:57.391044       1 pv_protection_controller.go:198] Got event on PV pvc-26cf613b-b961-46dc-9e51-95834e77d7d7
I0625 20:11:57.393604       1 pv_protection_controller.go:173] Error removing protection finalizer from PV pvc-26cf613b-b961-46dc-9e51-95834e77d7d7: Operation cannot be fulfilled on persistentvolumes "pvc-26cf613b-b961-46dc-9e51-95834e77d7d7": the object has been modified; please apply your changes to the latest version and try again
I0625 20:11:57.393627       1 pv_protection_controller.go:124] Finished processing PV pvc-26cf613b-b961-46dc-9e51-95834e77d7d7 (10.074734ms)
E0625 20:11:57.393641       1 pv_protection_controller.go:114] PV pvc-26cf613b-b961-46dc-9e51-95834e77d7d7 failed with : Operation cannot be fulfilled on persistentvolumes "pvc-26cf613b-b961-46dc-9e51-95834e77d7d7": the object has been modified; please apply your changes to the latest version and try again
I0625 20:11:57.393812       1 pv_protection_controller.go:121] Processing PV pvc-26cf613b-b961-46dc-9e51-95834e77d7d7
I0625 20:11:57.398599       1 pv_controller_base.go:237] volume "pvc-26cf613b-b961-46dc-9e51-95834e77d7d7" deleted
I0625 20:11:57.398668       1 pv_controller_base.go:533] deletion of claim "azuredisk-1387/pvc-q4frw" was already processed
I0625 20:11:57.398997       1 pv_protection_controller.go:176] Removed protection finalizer from PV pvc-26cf613b-b961-46dc-9e51-95834e77d7d7
I0625 20:11:57.399014       1 pv_protection_controller.go:124] Finished processing PV pvc-26cf613b-b961-46dc-9e51-95834e77d7d7 (5.182017ms)
I0625 20:11:57.399033       1 pv_protection_controller.go:121] Processing PV pvc-26cf613b-b961-46dc-9e51-95834e77d7d7
... skipping 86 lines ...
I0625 20:12:18.404694       1 pv_protection_controller.go:121] Processing PV pvc-a6c764ec-b18f-4efd-b0d2-0db3c22f2ec6
I0625 20:12:18.410592       1 pv_controller_base.go:670] storeObjectUpdate updating volume "pvc-a6c764ec-b18f-4efd-b0d2-0db3c22f2ec6" with version 5371
I0625 20:12:18.410623       1 pv_controller.go:539] synchronizing PersistentVolume[pvc-a6c764ec-b18f-4efd-b0d2-0db3c22f2ec6]: phase: Released, bound to: "azuredisk-1387/pvc-2dbmq (uid: a6c764ec-b18f-4efd-b0d2-0db3c22f2ec6)", boundByController: false
I0625 20:12:18.410586       1 pv_protection_controller.go:198] Got event on PV pvc-a6c764ec-b18f-4efd-b0d2-0db3c22f2ec6
I0625 20:12:18.410864       1 pv_controller.go:573] synchronizing PersistentVolume[pvc-a6c764ec-b18f-4efd-b0d2-0db3c22f2ec6]: volume is bound to claim azuredisk-1387/pvc-2dbmq
I0625 20:12:18.410882       1 pv_controller.go:607] synchronizing PersistentVolume[pvc-a6c764ec-b18f-4efd-b0d2-0db3c22f2ec6]: claim azuredisk-1387/pvc-2dbmq not found
I0625 20:12:18.415163       1 pv_protection_controller.go:173] Error removing protection finalizer from PV pvc-a6c764ec-b18f-4efd-b0d2-0db3c22f2ec6: Operation cannot be fulfilled on persistentvolumes "pvc-a6c764ec-b18f-4efd-b0d2-0db3c22f2ec6": the object has been modified; please apply your changes to the latest version and try again
I0625 20:12:18.415184       1 pv_protection_controller.go:124] Finished processing PV pvc-a6c764ec-b18f-4efd-b0d2-0db3c22f2ec6 (10.427439ms)
E0625 20:12:18.415199       1 pv_protection_controller.go:114] PV pvc-a6c764ec-b18f-4efd-b0d2-0db3c22f2ec6 failed with : Operation cannot be fulfilled on persistentvolumes "pvc-a6c764ec-b18f-4efd-b0d2-0db3c22f2ec6": the object has been modified; please apply your changes to the latest version and try again
I0625 20:12:18.415284       1 pv_protection_controller.go:121] Processing PV pvc-a6c764ec-b18f-4efd-b0d2-0db3c22f2ec6
I0625 20:12:18.433620       1 pv_protection_controller.go:176] Removed protection finalizer from PV pvc-a6c764ec-b18f-4efd-b0d2-0db3c22f2ec6
I0625 20:12:18.433876       1 pv_protection_controller.go:124] Finished processing PV pvc-a6c764ec-b18f-4efd-b0d2-0db3c22f2ec6 (18.535569ms)
I0625 20:12:18.434028       1 pv_protection_controller.go:121] Processing PV pvc-a6c764ec-b18f-4efd-b0d2-0db3c22f2ec6
I0625 20:12:18.434162       1 pv_protection_controller.go:129] PV pvc-a6c764ec-b18f-4efd-b0d2-0db3c22f2ec6 not found, ignoring
I0625 20:12:18.434279       1 pv_protection_controller.go:124] Finished processing PV pvc-a6c764ec-b18f-4efd-b0d2-0db3c22f2ec6 (126.401µs)
... skipping 55 lines ...
I0625 20:12:24.565503       1 pv_protection_controller.go:121] Processing PV pvc-bb6d173f-8656-4e6d-8738-ce8c376568ea
I0625 20:12:24.571996       1 pv_controller_base.go:670] storeObjectUpdate updating volume "pvc-bb6d173f-8656-4e6d-8738-ce8c376568ea" with version 5398
I0625 20:12:24.572237       1 pv_controller.go:539] synchronizing PersistentVolume[pvc-bb6d173f-8656-4e6d-8738-ce8c376568ea]: phase: Released, bound to: "azuredisk-1387/pvc-mtzz7 (uid: bb6d173f-8656-4e6d-8738-ce8c376568ea)", boundByController: false
I0625 20:12:24.572373       1 pv_controller.go:573] synchronizing PersistentVolume[pvc-bb6d173f-8656-4e6d-8738-ce8c376568ea]: volume is bound to claim azuredisk-1387/pvc-mtzz7
I0625 20:12:24.572424       1 pv_controller.go:607] synchronizing PersistentVolume[pvc-bb6d173f-8656-4e6d-8738-ce8c376568ea]: claim azuredisk-1387/pvc-mtzz7 not found
I0625 20:12:24.572201       1 pv_protection_controller.go:198] Got event on PV pvc-bb6d173f-8656-4e6d-8738-ce8c376568ea
I0625 20:12:24.574535       1 pv_protection_controller.go:173] Error removing protection finalizer from PV pvc-bb6d173f-8656-4e6d-8738-ce8c376568ea: Operation cannot be fulfilled on persistentvolumes "pvc-bb6d173f-8656-4e6d-8738-ce8c376568ea": the object has been modified; please apply your changes to the latest version and try again
I0625 20:12:24.574557       1 pv_protection_controller.go:124] Finished processing PV pvc-bb6d173f-8656-4e6d-8738-ce8c376568ea (9.035333ms)
E0625 20:12:24.574571       1 pv_protection_controller.go:114] PV pvc-bb6d173f-8656-4e6d-8738-ce8c376568ea failed with : Operation cannot be fulfilled on persistentvolumes "pvc-bb6d173f-8656-4e6d-8738-ce8c376568ea": the object has been modified; please apply your changes to the latest version and try again
I0625 20:12:24.574617       1 pv_protection_controller.go:121] Processing PV pvc-bb6d173f-8656-4e6d-8738-ce8c376568ea
I0625 20:12:24.579212       1 pv_protection_controller.go:176] Removed protection finalizer from PV pvc-bb6d173f-8656-4e6d-8738-ce8c376568ea
I0625 20:12:24.579408       1 pv_protection_controller.go:124] Finished processing PV pvc-bb6d173f-8656-4e6d-8738-ce8c376568ea (4.771817ms)
I0625 20:12:24.579258       1 pv_controller_base.go:237] volume "pvc-bb6d173f-8656-4e6d-8738-ce8c376568ea" deleted
I0625 20:12:24.579474       1 pv_controller_base.go:533] deletion of claim "azuredisk-1387/pvc-mtzz7" was already processed
I0625 20:12:24.580359       1 pv_protection_controller.go:121] Processing PV pvc-bb6d173f-8656-4e6d-8738-ce8c376568ea
... skipping 230 lines ...
I0625 20:12:35.124997       1 resource_quota_monitor.go:359] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-1387, name pvc-mtzz7.16fbf61720c511ae, uid 8e156ff7-63e5-484b-99e6-9feaf25f5451, event type delete
I0625 20:12:35.128816       1 resource_quota_monitor.go:359] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-1387, name pvc-q4frw.16fbf6168264e01c, uid 43844a65-5727-4093-b6b4-98d55b6b4179, event type delete
I0625 20:12:35.133509       1 resource_quota_monitor.go:359] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-1387, name pvc-q4frw.16fbf6168b75c9f3, uid a7c28853-5953-4a20-825c-c8cae07055b4, event type delete
I0625 20:12:35.136092       1 resource_quota_monitor.go:359] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-1387, name pvc-q4frw.16fbf6168b77c023, uid c0dec8e7-d6bd-438c-93c3-a03b4bd1cb06, event type delete
I0625 20:12:35.139955       1 resource_quota_monitor.go:359] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-1387, name pvc-q4frw.16fbf61722f07e05, uid 53237ed6-c699-4895-b662-d5acdd1ea649, event type delete
I0625 20:12:35.150915       1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-1387, name default-token-4wszt, uid 4a56d248-1fad-46e8-b476-f59eb79e4a4a, event type delete
E0625 20:12:35.165990       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-1387/default: secrets "default-token-bbdmd" is forbidden: unable to create new content in namespace azuredisk-1387 because it is being terminated
I0625 20:12:35.166313       1 tokens_controller.go:252] syncServiceAccount(azuredisk-1387/default), service account deleted, removing tokens
I0625 20:12:35.166223       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-1387" (4µs)
I0625 20:12:35.166255       1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-1387, name default, uid 364de7fc-427f-459f-831b-41a4540c4e1a, event type delete
I0625 20:12:35.227362       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-1387" (5.3µs)
I0625 20:12:35.229568       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-1387, estimate: 0, errors: <nil>
I0625 20:12:35.239154       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-1387" (264.473723ms)
... skipping 1501 lines ...
I0625 20:15:44.625547       1 pvc_protection_controller.go:331] "Got event on PVC" pvc="azuredisk-8666/pvc-6spjn"
I0625 20:15:44.625906       1 pv_controller_base.go:670] storeObjectUpdate updating claim "azuredisk-8666/pvc-6spjn" with version 6229
I0625 20:15:44.626059       1 pv_controller.go:1725] provisionClaimOperationExternal provisioning claim "azuredisk-8666/pvc-6spjn": waiting for a volume to be created, either by external provisioner "disk.csi.azure.com" or manually created by system administrator
I0625 20:15:44.626201       1 event.go:294] "Event occurred" object="azuredisk-8666/pvc-6spjn" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"disk.csi.azure.com\" or manually created by system administrator"
I0625 20:15:45.115828       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-4657
I0625 20:15:45.167432       1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-4657, name default-token-b5v5f, uid 341f58fe-ae26-42af-8b70-6b377c41b110, event type delete
E0625 20:15:45.180568       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-4657/default: secrets "default-token-2j2c7" is forbidden: unable to create new content in namespace azuredisk-4657 because it is being terminated
I0625 20:15:45.191612       1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-4657, name kube-root-ca.crt, uid 13dacabe-3993-4092-a608-3c9530157cfb, event type delete
I0625 20:15:45.194036       1 publisher.go:186] Finished syncing namespace "azuredisk-4657" (2.51971ms)
I0625 20:15:45.246824       1 tokens_controller.go:252] syncServiceAccount(azuredisk-4657/default), service account deleted, removing tokens
I0625 20:15:45.247916       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-4657" (3.4µs)
I0625 20:15:45.247983       1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-4657, name default, uid fcfd5b03-2b78-46f6-96f2-ae529c8ee8e5, event type delete
I0625 20:15:45.265551       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-4657" (3.9µs)
... skipping 87 lines ...
I0625 20:15:47.730931       1 pv_controller.go:866] updating PersistentVolume[pvc-652c29f0-9ab0-4c76-9ec0-0687ac5d6253]: set phase Bound
I0625 20:15:47.731049       1 pv_controller.go:869] updating PersistentVolume[pvc-652c29f0-9ab0-4c76-9ec0-0687ac5d6253]: phase Bound already set
I0625 20:15:48.046386       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-565
I0625 20:15:48.071275       1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-565, name kube-root-ca.crt, uid fad213c4-8847-4474-95fd-03ce6d128877, event type delete
I0625 20:15:48.073920       1 publisher.go:186] Finished syncing namespace "azuredisk-565" (2.75701ms)
I0625 20:15:48.086820       1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-565, name default-token-lq9mp, uid 355ef4ba-2ff8-42ac-ba6e-604f4633a91d, event type delete
E0625 20:15:48.100953       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-565/default: secrets "default-token-kk5rt" is forbidden: unable to create new content in namespace azuredisk-565 because it is being terminated
I0625 20:15:48.124074       1 tokens_controller.go:252] syncServiceAccount(azuredisk-565/default), service account deleted, removing tokens
I0625 20:15:48.124405       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-565" (4.2µs)
I0625 20:15:48.124710       1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-565, name default, uid 0e63783d-a83a-4231-ae70-e3a6f9437176, event type delete
I0625 20:15:48.198582       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-565" (3.9µs)
I0625 20:15:48.200642       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-565, estimate: 0, errors: <nil>
I0625 20:15:48.210747       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-565" (167.854045ms)
... skipping 470 lines ...
I0625 20:17:22.458433       1 pv_controller.go:869] updating PersistentVolume[pvc-3a0f8b11-070d-4a72-a6a4-e7b05ac31b22]: phase Bound already set
I0625 20:17:22.458457       1 pv_protection_controller.go:198] Got event on PV pvc-3a0f8b11-070d-4a72-a6a4-e7b05ac31b22
I0625 20:17:22.506399       1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0625 20:17:22.866826       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-8666
I0625 20:17:22.893264       1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-8666, name default-token-76wrz, uid c271c396-6c78-4850-918e-e7447361e420, event type delete
I0625 20:17:22.905074       1 resource_quota_monitor.go:359] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-8666, name azuredisk-volume-tester-5xp2b.16fbf6586c62fba3, uid 0868368d-af50-4fc2-ad55-3d23e3eec47c, event type delete
E0625 20:17:22.907665       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-8666/default: secrets "default-token-sp9ts" is forbidden: unable to create new content in namespace azuredisk-8666 because it is being terminated
I0625 20:17:22.910003       1 resource_quota_monitor.go:359] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-8666, name azuredisk-volume-tester-5xp2b.16fbf65b1a810a69, uid 800c7493-0873-4455-9498-4c6d180970f6, event type delete
I0625 20:17:22.913336       1 resource_quota_monitor.go:359] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-8666, name azuredisk-volume-tester-5xp2b.16fbf65befbfc627, uid 52dad7f6-c5b2-4d6a-af06-065734993b6a, event type delete
I0625 20:17:22.916845       1 resource_quota_monitor.go:359] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-8666, name azuredisk-volume-tester-5xp2b.16fbf65bf3801bbb, uid a94dda77-8dca-483f-bcf6-df688b64ec55, event type delete
I0625 20:17:22.921971       1 resource_quota_monitor.go:359] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-8666, name azuredisk-volume-tester-5xp2b.16fbf65bfa7ec40f, uid ecdfd25d-d0c1-46da-bbfa-df0c9df4e9a1, event type delete
I0625 20:17:22.925455       1 resource_quota_monitor.go:359] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-8666, name azuredisk-volume-tester-5xp2b.16fbf65cbb9ae046, uid e994dcf9-8f36-4548-a380-1aa79e15134d, event type delete
I0625 20:17:22.928970       1 resource_quota_monitor.go:359] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-8666, name pvc-6spjn.16fbf657b1dc118f, uid 4e1bb9be-6fc3-429b-b04c-621274d18c77, event type delete
... skipping 239 lines ...
I0625 20:18:12.454557       1 taint_manager.go:401] "Noticed pod update" pod="azuredisk-7886/azuredisk-volume-tester-4mkgm-0"
I0625 20:18:12.456394       1 stateful_set.go:223] Pod azuredisk-volume-tester-4mkgm-0 updated, objectMeta {Name:azuredisk-volume-tester-4mkgm-0 GenerateName:azuredisk-volume-tester-4mkgm- Namespace:azuredisk-7886 SelfLink: UID:3285ca68-7a60-434d-a3db-4a4931457911 ResourceVersion:6723 Generation:0 CreationTimestamp:2022-06-25 20:18:12 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[app:azuredisk-volume-tester-5956761487052003905 controller-revision-hash:azuredisk-volume-tester-4mkgm-5ccc4f5bfd statefulset.kubernetes.io/pod-name:azuredisk-volume-tester-4mkgm-0] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:StatefulSet Name:azuredisk-volume-tester-4mkgm UID:ff5cf5eb-7bbb-47f2-a05f-2f6e53d2a807 Controller:0xc002cdb26e BlockOwnerDeletion:0xc002cdb26f}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-06-25 20:18:12 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:app":{},"f:controller-revision-hash":{},"f:statefulset.kubernetes.io/pod-name":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ff5cf5eb-7bbb-47f2-a05f-2f6e53d2a807\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"volume-tester\"}":{".":{},"f:args":{},"f:command":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/mnt/test-1\"}":{".":{},"f:mountPath":{},"f:name":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:hostname":{},"f:nodeSelector":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:volumes":{".":{},"k:{\"name\":\"pvc\"}":{".":{},"f:name":{},"f:persistentVolumeClaim":{".":{},"f:claimName":{}}}}}} Subresource:}]} -> {Name:azuredisk-volume-tester-4mkgm-0 GenerateName:azuredisk-volume-tester-4mkgm- Namespace:azuredisk-7886 SelfLink: 
UID:3285ca68-7a60-434d-a3db-4a4931457911 ResourceVersion:6725 Generation:0 CreationTimestamp:2022-06-25 20:18:12 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[app:azuredisk-volume-tester-5956761487052003905 controller-revision-hash:azuredisk-volume-tester-4mkgm-5ccc4f5bfd statefulset.kubernetes.io/pod-name:azuredisk-volume-tester-4mkgm-0] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:StatefulSet Name:azuredisk-volume-tester-4mkgm UID:ff5cf5eb-7bbb-47f2-a05f-2f6e53d2a807 Controller:0xc002cdb8ce BlockOwnerDeletion:0xc002cdb8cf}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-06-25 20:18:12 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:app":{},"f:controller-revision-hash":{},"f:statefulset.kubernetes.io/pod-name":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ff5cf5eb-7bbb-47f2-a05f-2f6e53d2a807\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"volume-tester\"}":{".":{},"f:args":{},"f:command":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/mnt/test-1\"}":{".":{},"f:mountPath":{},"f:name":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:hostname":{},"f:nodeSelector":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:volumes":{".":{},"k:{\"name\":\"pvc\"}":{".":{},"f:name":{},"f:persistentVolumeClaim":{".":{},"f:claimName":{}}}}}} Subresource:} {Manager:kubelet Operation:Update APIVersion:v1 Time:2022-06-25 20:18:12 +0000 UTC FieldsType:FieldsV1 
FieldsV1:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} Subresource:status}]}.
I0625 20:18:12.456662       1 disruption.go:427] updatePod called on pod "azuredisk-volume-tester-4mkgm-0"
I0625 20:18:12.456686       1 disruption.go:490] No PodDisruptionBudgets found for pod azuredisk-volume-tester-4mkgm-0, PodDisruptionBudget controller will avoid syncing.
I0625 20:18:12.456694       1 disruption.go:430] No matching pdb for pod "azuredisk-volume-tester-4mkgm-0"
I0625 20:18:12.480069       1 actual_state_of_world.go:432] Set detach request time to current time for volume kubernetes.io/csi/disk.csi.azure.com^/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-52trnh/providers/Microsoft.Compute/disks/pvc-3a0f8b11-070d-4a72-a6a4-e7b05ac31b22 on node "capz-52trnh-mp-0000001"
W0625 20:18:12.480164       1 reconciler.go:344] Multi-Attach error for volume "pvc-3a0f8b11-070d-4a72-a6a4-e7b05ac31b22" (UniqueName: "kubernetes.io/csi/disk.csi.azure.com^/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-52trnh/providers/Microsoft.Compute/disks/pvc-3a0f8b11-070d-4a72-a6a4-e7b05ac31b22") from node "capz-52trnh-mp-0000000" Volume is already exclusively attached to node capz-52trnh-mp-0000001 and can't be attached to another
I0625 20:18:12.480456       1 event.go:294] "Event occurred" object="azuredisk-7886/azuredisk-volume-tester-4mkgm-0" kind="Pod" apiVersion="v1" type="Warning" reason="FailedAttachVolume" message="Multi-Attach error for volume \"pvc-3a0f8b11-070d-4a72-a6a4-e7b05ac31b22\" Volume is already exclusively attached to one node and can't be attached to another"
I0625 20:18:12.496476       1 stateful_set_control.go:115] StatefulSet azuredisk-7886/azuredisk-volume-tester-4mkgm pod status replicas=1 ready=0 current=1 updated=1
I0625 20:18:12.496501       1 stateful_set_control.go:123] StatefulSet azuredisk-7886/azuredisk-volume-tester-4mkgm revisions current=azuredisk-volume-tester-4mkgm-5ccc4f5bfd update=azuredisk-volume-tester-4mkgm-5ccc4f5bfd
I0625 20:18:12.496513       1 stateful_set.go:479] Successfully synced StatefulSet azuredisk-7886/azuredisk-volume-tester-4mkgm
I0625 20:18:12.496523       1 stateful_set.go:434] Finished syncing statefulset "azuredisk-7886/azuredisk-volume-tester-4mkgm" (68.374709ms)
I0625 20:18:12.496603       1 stateful_set.go:472] Syncing StatefulSet azuredisk-7886/azuredisk-volume-tester-4mkgm with 1 pods
I0625 20:18:12.500089       1 stateful_set_control.go:379] StatefulSet azuredisk-7886/azuredisk-volume-tester-4mkgm has 1 unhealthy Pods starting with azuredisk-volume-tester-4mkgm-0
... skipping 179 lines ...
I0625 20:19:00.458167       1 publisher.go:186] Finished syncing namespace "azuredisk-8470" (8.083426ms)
I0625 20:19:00.461962       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-8470" (12.182739ms)
I0625 20:19:01.783133       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-8470" (5µs)
I0625 20:19:02.178951       1 httplog.go:129] "HTTP" verb="GET" URI="/healthz" latency="112.3µs" userAgent="kube-probe/1.23+" audit-ID="" srcIP="127.0.0.1:34766" resp=200
I0625 20:19:02.394488       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-7886
I0625 20:19:02.422256       1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-7886, name default-token-s9q5k, uid 03d4a440-1920-46bb-9dae-916bda608448, event type delete
E0625 20:19:02.438090       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-7886/default: secrets "default-token-l87td" is forbidden: unable to create new content in namespace azuredisk-7886 because it is being terminated
I0625 20:19:02.479645       1 tokens_controller.go:252] syncServiceAccount(azuredisk-7886/default), service account deleted, removing tokens
I0625 20:19:02.480105       1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-7886, name default, uid 6b87a405-c15b-4018-9898-2dbbb837f86f, event type delete
I0625 20:19:02.480323       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-7886" (4.1µs)
I0625 20:19:02.487495       1 resource_quota_monitor.go:359] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-7886, name azuredisk-volume-tester-4mkgm-0.16fbf66e7a8d1636, uid 6e16e9c2-f612-4375-a193-26ed9284a818, event type delete
I0625 20:19:02.490775       1 resource_quota_monitor.go:359] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-7886, name azuredisk-volume-tester-4mkgm-0.16fbf67125e14327, uid 8a272e0f-8806-4a68-9b1d-53d325a18c4d, event type delete
I0625 20:19:02.494387       1 resource_quota_monitor.go:359] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-7886, name azuredisk-volume-tester-4mkgm-0.16fbf671ec0fc00a, uid e858c817-2131-4ebb-956f-89453769a763, event type delete
... skipping 44 lines ...
I0625 20:19:02.849254       1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.RoleBinding total 4 items received
I0625 20:19:02.859976       1 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.PodTemplate total 3 items received
I0625 20:19:03.838796       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-9103
I0625 20:19:03.887051       1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-9103, name default-token-fk5bp, uid dcbca24a-fa31-47b1-ada6-ebe64129c7ef, event type delete
I0625 20:19:03.902107       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-9103" (3.9µs)
I0625 20:19:03.904529       1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-9103, name default, uid e7d32761-aa34-4fc3-9444-b0163e2720e8, event type delete
E0625 20:19:03.907855       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-9103/default: secrets "default-token-l866x" is forbidden: unable to create new content in namespace azuredisk-9103 because it is being terminated
I0625 20:19:03.907909       1 tokens_controller.go:252] syncServiceAccount(azuredisk-9103/default), service account deleted, removing tokens
I0625 20:19:03.954746       1 resource_quota_monitor.go:359] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-9103, name kube-root-ca.crt, uid 8c7aca6d-125c-49a4-bbfd-933d6fd7833b, event type delete
I0625 20:19:03.956664       1 publisher.go:186] Finished syncing namespace "azuredisk-9103" (2.163707ms)
I0625 20:19:04.004669       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-9103" (3.7µs)
I0625 20:19:04.005744       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-9103, estimate: 0, errors: <nil>
I0625 20:19:04.015900       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-9103" (180.415285ms)
2022/06/25 20:19:05 ===================================================

JUnit report was created: /logs/artifacts/junit_01.xml


Summarizing 1 Failure:

[Fail] Dynamic Provisioning [single-az] [It] should create a deployment object, write and read to it, delete the pod and write and read to it again [kubernetes.io/azure-disk] [disk.csi.azure.com] [Windows] 
/home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/testsuites/testsuites.go:503

Ran 12 of 59 Specs in 1647.464 seconds
FAIL! -- 11 Passed | 1 Failed | 0 Pending | 47 Skipped

You're using deprecated Ginkgo functionality:
=============================================
Ginkgo 2.0 is under active development and will introduce several new features, improvements, and a small handful of breaking changes.
A release candidate for 2.0 is now available and 2.0 should GA in Fall 2021.  Please give the RC a try and send us feedback!
  - To learn more, view the migration guide at https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md
... skipping 5 lines ...
  If this change will be impactful to you please leave a comment on https://github.com/onsi/ginkgo/issues/711
  Learn more at: https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md#removed-custom-reporters

To silence deprecations that can be silenced set the following environment variable:
  ACK_GINKGO_DEPRECATIONS=1.16.5

--- FAIL: TestE2E (1647.49s)
FAIL
FAIL	sigs.k8s.io/azuredisk-csi-driver/test/e2e	1647.544s
FAIL
make: *** [Makefile:260: e2e-test] Error 1
NAME                              STATUS   ROLES                  AGE   VERSION                         INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
capz-52trnh-control-plane-42w5n   Ready    control-plane,master   32m   v1.23.9-rc.0.3+d11725a28e7e6e   10.0.0.4      <none>        Ubuntu 18.04.6 LTS   5.4.0-1085-azure   containerd://1.6.2
capz-52trnh-mp-0000000            Ready    <none>                 30m   v1.23.9-rc.0.3+d11725a28e7e6e   10.1.0.4      <none>        Ubuntu 18.04.6 LTS   5.4.0-1085-azure   containerd://1.6.2
capz-52trnh-mp-0000001            Ready    <none>                 30m   v1.23.9-rc.0.3+d11725a28e7e6e   10.1.0.5      <none>        Ubuntu 18.04.6 LTS   5.4.0-1085-azure   containerd://1.6.2
NAMESPACE        NAME                                                      READY   STATUS        RESTARTS   AGE   IP                NODE                              NOMINATED NODE   READINESS GATES
azuredisk-7886   azuredisk-volume-tester-4mkgm-0                           1/1     Terminating   0          55s   192.168.171.200   capz-52trnh-mp-0000000            <none>           <none>
... skipping 24 lines ...
INFO: Creating log watcher for controller capz-system/capz-controller-manager, pod capz-controller-manager-67454cb5-pg57p, container manager
STEP: Dumping workload cluster default/capz-52trnh logs
Jun 25 20:20:35.689: INFO: Collecting logs for Linux node capz-52trnh-control-plane-42w5n in cluster capz-52trnh in namespace default

Jun 25 20:21:35.691: INFO: Collecting boot logs for AzureMachine capz-52trnh-control-plane-42w5n

Failed to get logs for machine capz-52trnh-control-plane-tgwcg, cluster default/capz-52trnh: open /etc/azure-ssh/azure-ssh: no such file or directory
Jun 25 20:21:37.830: INFO: Collecting logs for Linux node capz-52trnh-mp-0000000 in cluster capz-52trnh in namespace default

Jun 25 20:22:37.832: INFO: Collecting boot logs for VMSS instance 0 of scale set capz-52trnh-mp-0

Jun 25 20:22:38.488: INFO: Collecting logs for Linux node capz-52trnh-mp-0000001 in cluster capz-52trnh in namespace default

Jun 25 20:23:38.490: INFO: Collecting boot logs for VMSS instance 1 of scale set capz-52trnh-mp-0

Failed to get logs for machine pool capz-52trnh-mp-0, cluster default/capz-52trnh: open /etc/azure-ssh/azure-ssh: no such file or directory
STEP: Dumping workload cluster default/capz-52trnh kube-system pod logs
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-85f479877b-zwbcq, container calico-kube-controllers
STEP: Collecting events for Pod kube-system/calico-node-vl289
STEP: Creating log watcher for controller kube-system/coredns-64897985d-zmtzr, container coredns
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-52trnh-control-plane-42w5n, container kube-scheduler
STEP: Collecting events for Pod kube-system/kube-controller-manager-capz-52trnh-control-plane-42w5n
STEP: Fetching kube-system pod logs took 1.106563376s
STEP: Dumping workload cluster default/capz-52trnh Azure activity log
STEP: Collecting events for Pod kube-system/kube-scheduler-capz-52trnh-control-plane-42w5n
STEP: Creating log watcher for controller kube-system/csi-azuredisk-controller-78b647c5f4-tp48g, container csi-attacher
STEP: failed to find events of Pod "kube-scheduler-capz-52trnh-control-plane-42w5n"
STEP: Creating log watcher for controller kube-system/csi-azuredisk-controller-78b647c5f4-tp48g, container csi-resizer
STEP: Creating log watcher for controller kube-system/csi-azuredisk-controller-78b647c5f4-tp48g, container liveness-probe
STEP: Creating log watcher for controller kube-system/csi-azuredisk-controller-78b647c5f4-tp48g, container azuredisk
STEP: Collecting events for Pod kube-system/csi-azuredisk-controller-78b647c5f4-tp48g
STEP: Creating log watcher for controller kube-system/csi-azuredisk-controller-78b647c5f4-vdc5n, container csi-provisioner
STEP: failed to find events of Pod "kube-controller-manager-capz-52trnh-control-plane-42w5n"
STEP: Creating log watcher for controller kube-system/csi-azuredisk-controller-78b647c5f4-vdc5n, container csi-attacher
STEP: Collecting events for Pod kube-system/calico-kube-controllers-85f479877b-zwbcq
STEP: Creating log watcher for controller kube-system/csi-azuredisk-controller-78b647c5f4-vdc5n, container csi-snapshotter
STEP: Creating log watcher for controller kube-system/csi-azuredisk-controller-78b647c5f4-vdc5n, container csi-resizer
STEP: Creating log watcher for controller kube-system/csi-azuredisk-controller-78b647c5f4-vdc5n, container liveness-probe
STEP: Creating log watcher for controller kube-system/csi-azuredisk-controller-78b647c5f4-vdc5n, container azuredisk
... skipping 12 lines ...
STEP: Creating log watcher for controller kube-system/coredns-64897985d-8jkks, container coredns
STEP: Creating log watcher for controller kube-system/calico-node-r2pf6, container calico-node
STEP: Creating log watcher for controller kube-system/csi-azuredisk-controller-78b647c5f4-tp48g, container csi-provisioner
STEP: Collecting events for Pod kube-system/coredns-64897985d-zmtzr
STEP: Collecting events for Pod kube-system/coredns-64897985d-8jkks
STEP: Collecting events for Pod kube-system/etcd-capz-52trnh-control-plane-42w5n
STEP: failed to find events of Pod "etcd-capz-52trnh-control-plane-42w5n"
STEP: Creating log watcher for controller kube-system/csi-snapshot-controller-667c64999f-cb5nf, container csi-snapshot-controller
STEP: Collecting events for Pod kube-system/csi-snapshot-controller-667c64999f-cb5nf
STEP: Creating log watcher for controller kube-system/csi-azuredisk-node-qvk4t, container azuredisk
STEP: Creating log watcher for controller kube-system/csi-snapshot-controller-667c64999f-q728s, container csi-snapshot-controller
STEP: Collecting events for Pod kube-system/csi-snapshot-controller-667c64999f-q728s
STEP: Creating log watcher for controller kube-system/csi-azuredisk-node-qvk4t, container node-driver-registrar
... skipping 8 lines ...
STEP: Collecting events for Pod kube-system/kube-apiserver-capz-52trnh-control-plane-42w5n
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-52trnh-control-plane-42w5n, container kube-controller-manager
STEP: Collecting events for Pod kube-system/kube-proxy-cqpwr
STEP: Collecting events for Pod kube-system/kube-proxy-lmvr9
STEP: Creating log watcher for controller kube-system/kube-proxy-fmkf9, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-proxy-lmvr9, container kube-proxy
STEP: failed to find events of Pod "kube-apiserver-capz-52trnh-control-plane-42w5n"
STEP: Fetching activity logs took 1.141267988s
================ REDACTING LOGS ================
All sensitive variables are redacted
cluster.cluster.x-k8s.io "capz-52trnh" deleted
kind delete cluster --name=capz || true
Deleting cluster "capz" ...
... skipping 12 lines ...