PR: thockin: Rename some type names that had "Type" in them
Result: ABORTED
Tests: 0 failed / 9 succeeded
Started: 2021-11-23 16:41
Elapsed: 57m49s
Revision: bb6234ce8777a786998ed2ab13b4bae513b46a4e
Refs: 106300

No Test Failures!


Passed tests: 9

Skipped tests: 44

Error lines from build-log.txt

... skipping 203 lines ...
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
Nov 23 17:10:59.240: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-mn7x9" in namespace "azuredisk-8081" to be "Succeeded or Failed"
Nov 23 17:10:59.277: INFO: Pod "azuredisk-volume-tester-mn7x9": Phase="Pending", Reason="", readiness=false. Elapsed: 37.060233ms
Nov 23 17:11:01.314: INFO: Pod "azuredisk-volume-tester-mn7x9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.074066584s
Nov 23 17:11:03.351: INFO: Pod "azuredisk-volume-tester-mn7x9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.111507963s
Nov 23 17:11:05.388: INFO: Pod "azuredisk-volume-tester-mn7x9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.148501421s
Nov 23 17:11:07.426: INFO: Pod "azuredisk-volume-tester-mn7x9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.186618243s
Nov 23 17:11:09.463: INFO: Pod "azuredisk-volume-tester-mn7x9": Phase="Pending", Reason="", readiness=false. Elapsed: 10.223735136s
... skipping 59 lines ...
Nov 23 17:13:11.738: INFO: Pod "azuredisk-volume-tester-mn7x9": Phase="Running", Reason="", readiness=true. Elapsed: 2m12.498665722s
Nov 23 17:13:13.775: INFO: Pod "azuredisk-volume-tester-mn7x9": Phase="Running", Reason="", readiness=true. Elapsed: 2m14.535450292s
Nov 23 17:13:15.815: INFO: Pod "azuredisk-volume-tester-mn7x9": Phase="Running", Reason="", readiness=true. Elapsed: 2m16.574944283s
Nov 23 17:13:17.853: INFO: Pod "azuredisk-volume-tester-mn7x9": Phase="Running", Reason="", readiness=true. Elapsed: 2m18.613062077s
Nov 23 17:13:19.891: INFO: Pod "azuredisk-volume-tester-mn7x9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2m20.651125716s
STEP: Saw pod success
Nov 23 17:13:19.891: INFO: Pod "azuredisk-volume-tester-mn7x9" satisfied condition "Succeeded or Failed"
Nov 23 17:13:19.891: INFO: deleting Pod "azuredisk-8081"/"azuredisk-volume-tester-mn7x9"
Nov 23 17:13:19.984: INFO: Pod azuredisk-volume-tester-mn7x9 has the following logs: hello world

STEP: Deleting pod azuredisk-volume-tester-mn7x9 in namespace azuredisk-8081
STEP: validating provisioned PV
STEP: checking the PV
... skipping 163 lines ...
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod's command exits with an error
Nov 23 17:14:36.642: INFO: Waiting up to 10m0s for pod "azuredisk-volume-tester-cfqzw" in namespace "azuredisk-9947" to be "Error status code"
Nov 23 17:14:36.678: INFO: Pod "azuredisk-volume-tester-cfqzw": Phase="Pending", Reason="", readiness=false. Elapsed: 36.759305ms
Nov 23 17:14:38.715: INFO: Pod "azuredisk-volume-tester-cfqzw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.073884831s
Nov 23 17:14:40.753: INFO: Pod "azuredisk-volume-tester-cfqzw": Phase="Pending", Reason="", readiness=false. Elapsed: 4.11131245s
Nov 23 17:14:42.791: INFO: Pod "azuredisk-volume-tester-cfqzw": Phase="Pending", Reason="", readiness=false. Elapsed: 6.149299271s
Nov 23 17:14:44.828: INFO: Pod "azuredisk-volume-tester-cfqzw": Phase="Pending", Reason="", readiness=false. Elapsed: 8.186809884s
Nov 23 17:14:46.866: INFO: Pod "azuredisk-volume-tester-cfqzw": Phase="Pending", Reason="", readiness=false. Elapsed: 10.224606698s
... skipping 18 lines ...
Nov 23 17:15:25.595: INFO: Pod "azuredisk-volume-tester-cfqzw": Phase="Pending", Reason="", readiness=false. Elapsed: 48.953801974s
Nov 23 17:15:27.633: INFO: Pod "azuredisk-volume-tester-cfqzw": Phase="Pending", Reason="", readiness=false. Elapsed: 50.991320431s
Nov 23 17:15:29.669: INFO: Pod "azuredisk-volume-tester-cfqzw": Phase="Pending", Reason="", readiness=false. Elapsed: 53.027816888s
Nov 23 17:15:31.707: INFO: Pod "azuredisk-volume-tester-cfqzw": Phase="Running", Reason="", readiness=true. Elapsed: 55.065700624s
Nov 23 17:15:33.744: INFO: Pod "azuredisk-volume-tester-cfqzw": Phase="Running", Reason="", readiness=true. Elapsed: 57.102461751s
Nov 23 17:15:35.782: INFO: Pod "azuredisk-volume-tester-cfqzw": Phase="Running", Reason="", readiness=true. Elapsed: 59.140052028s
Nov 23 17:15:37.822: INFO: Pod "azuredisk-volume-tester-cfqzw": Phase="Failed", Reason="", readiness=false. Elapsed: 1m1.180035856s
STEP: Saw pod failure
Nov 23 17:15:37.822: INFO: Pod "azuredisk-volume-tester-cfqzw" satisfied condition "Error status code"
STEP: checking that pod logs contain expected message
Nov 23 17:15:37.901: INFO: deleting Pod "azuredisk-9947"/"azuredisk-volume-tester-cfqzw"
Nov 23 17:15:37.950: INFO: Pod azuredisk-volume-tester-cfqzw has the following logs: out-file : Access to the path 'C:\mnt\test-1\data' is denied.
At line:1 char:1
+ echo $null >> C:\mnt\test-1\data
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
... skipping 406 lines ...
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
Nov 23 17:26:11.217: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-5g6sl" in namespace "azuredisk-8510" to be "Succeeded or Failed"
Nov 23 17:26:11.269: INFO: Pod "azuredisk-volume-tester-5g6sl": Phase="Pending", Reason="", readiness=false. Elapsed: 51.893226ms
Nov 23 17:26:13.306: INFO: Pod "azuredisk-volume-tester-5g6sl": Phase="Pending", Reason="", readiness=false. Elapsed: 2.088512762s
Nov 23 17:26:15.344: INFO: Pod "azuredisk-volume-tester-5g6sl": Phase="Pending", Reason="", readiness=false. Elapsed: 4.126545129s
Nov 23 17:26:17.381: INFO: Pod "azuredisk-volume-tester-5g6sl": Phase="Pending", Reason="", readiness=false. Elapsed: 6.164399869s
Nov 23 17:26:19.418: INFO: Pod "azuredisk-volume-tester-5g6sl": Phase="Pending", Reason="", readiness=false. Elapsed: 8.201308351s
Nov 23 17:26:21.456: INFO: Pod "azuredisk-volume-tester-5g6sl": Phase="Pending", Reason="", readiness=false. Elapsed: 10.238871392s
... skipping 33 lines ...
Nov 23 17:27:30.783: INFO: Pod "azuredisk-volume-tester-5g6sl": Phase="Pending", Reason="", readiness=false. Elapsed: 1m19.565635008s
Nov 23 17:27:32.820: INFO: Pod "azuredisk-volume-tester-5g6sl": Phase="Pending", Reason="", readiness=false. Elapsed: 1m21.603394324s
Nov 23 17:27:34.858: INFO: Pod "azuredisk-volume-tester-5g6sl": Phase="Pending", Reason="", readiness=false. Elapsed: 1m23.641028391s
Nov 23 17:27:36.895: INFO: Pod "azuredisk-volume-tester-5g6sl": Phase="Running", Reason="", readiness=true. Elapsed: 1m25.678299814s
Nov 23 17:27:38.933: INFO: Pod "azuredisk-volume-tester-5g6sl": Phase="Succeeded", Reason="", readiness=false. Elapsed: 1m27.716221011s
STEP: Saw pod success
Nov 23 17:27:38.933: INFO: Pod "azuredisk-volume-tester-5g6sl" satisfied condition "Succeeded or Failed"
Nov 23 17:27:38.933: INFO: deleting Pod "azuredisk-8510"/"azuredisk-volume-tester-5g6sl"
Nov 23 17:27:39.021: INFO: Pod azuredisk-volume-tester-5g6sl has the following logs: hello world
hello world
hello world

STEP: Deleting pod azuredisk-volume-tester-5g6sl in namespace azuredisk-8510
... skipping 167 lines ...
STEP: creating a PVC
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
Nov 23 17:29:15.240: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-xkm8z" in namespace "azuredisk-953" to be "Succeeded or Failed"
Nov 23 17:29:15.282: INFO: Pod "azuredisk-volume-tester-xkm8z": Phase="Pending", Reason="", readiness=false. Elapsed: 41.265917ms
Nov 23 17:29:17.319: INFO: Pod "azuredisk-volume-tester-xkm8z": Phase="Pending", Reason="", readiness=false. Elapsed: 2.078445084s
Nov 23 17:29:19.357: INFO: Pod "azuredisk-volume-tester-xkm8z": Phase="Pending", Reason="", readiness=false. Elapsed: 4.11652711s
Nov 23 17:29:21.399: INFO: Pod "azuredisk-volume-tester-xkm8z": Phase="Pending", Reason="", readiness=false. Elapsed: 6.158811083s
Nov 23 17:29:23.440: INFO: Pod "azuredisk-volume-tester-xkm8z": Phase="Pending", Reason="", readiness=false. Elapsed: 8.199408215s
Nov 23 17:29:25.477: INFO: Pod "azuredisk-volume-tester-xkm8z": Phase="Pending", Reason="", readiness=false. Elapsed: 10.236168347s
... skipping 34 lines ...
Nov 23 17:30:36.832: INFO: Pod "azuredisk-volume-tester-xkm8z": Phase="Pending", Reason="", readiness=false. Elapsed: 1m21.591119917s
Nov 23 17:30:38.870: INFO: Pod "azuredisk-volume-tester-xkm8z": Phase="Pending", Reason="", readiness=false. Elapsed: 1m23.629968908s
Nov 23 17:30:40.907: INFO: Pod "azuredisk-volume-tester-xkm8z": Phase="Running", Reason="", readiness=true. Elapsed: 1m25.666736973s
Nov 23 17:30:42.944: INFO: Pod "azuredisk-volume-tester-xkm8z": Phase="Running", Reason="", readiness=true. Elapsed: 1m27.703446261s
Nov 23 17:30:44.982: INFO: Pod "azuredisk-volume-tester-xkm8z": Phase="Succeeded", Reason="", readiness=false. Elapsed: 1m29.741769093s
STEP: Saw pod success
Nov 23 17:30:44.982: INFO: Pod "azuredisk-volume-tester-xkm8z" satisfied condition "Succeeded or Failed"
Nov 23 17:30:44.982: INFO: deleting Pod "azuredisk-953"/"azuredisk-volume-tester-xkm8z"
Nov 23 17:30:45.090: INFO: Pod azuredisk-volume-tester-xkm8z has the following logs: hello world

STEP: Deleting pod azuredisk-volume-tester-xkm8z in namespace azuredisk-953
STEP: validating provisioned PV
STEP: checking the PV
... skipping 369 lines ...

    test case is only available for CSI drivers

    /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/suite_test.go:264
------------------------------
Pre-Provisioned [single-az] 
  should fail when maxShares is invalid [disk.csi.azure.com][windows]
  /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/pre_provisioning_test.go:160
STEP: Creating a kubernetes client
Nov 23 17:36:46.686: INFO: >>> kubeConfig: /root/tmp371259084/kubeconfig/kubeconfig.southcentralus.json
STEP: Building a namespace api object, basename azuredisk
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in azuredisk-4885
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 3 lines ...

S [SKIPPING] [0.441 seconds]
Pre-Provisioned
/home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/pre_provisioning_test.go:37
  [single-az]
  /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/pre_provisioning_test.go:69
    should fail when maxShares is invalid [disk.csi.azure.com][windows] [It]
    /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/pre_provisioning_test.go:160

    test case is only available for CSI drivers

    /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/suite_test.go:264
------------------------------
... skipping 281 lines ...
I1123 17:06:57.441751       1 azure_securitygroupclient.go:64] Azure SecurityGroupsClient (read ops) using rate limit config: QPS=100, bucket=1000
I1123 17:06:57.441760       1 azure_securitygroupclient.go:67] Azure SecurityGroupsClient (write ops) using rate limit config: QPS=100, bucket=1000
I1123 17:06:57.441773       1 azure_publicipclient.go:64] Azure PublicIPAddressesClient (read ops) using rate limit config: QPS=100, bucket=1000
I1123 17:06:57.441818       1 azure_publicipclient.go:67] Azure PublicIPAddressesClient (write ops) using rate limit config: QPS=100, bucket=1000
I1123 17:06:57.441939       1 azure.go:742] Setting up informers for Azure cloud provider
I1123 17:06:57.442895       1 shared_informer.go:240] Waiting for caches to sync for tokens
W1123 17:06:57.516170       1 azure_config.go:53] Failed to get cloud-config from secret: failed to get secret azure-cloud-provider: secrets "azure-cloud-provider" is forbidden: User "system:serviceaccount:kube-system:azure-cloud-provider" cannot get resource "secrets" in API group "" in the namespace "kube-system", skip initializing from secret
I1123 17:06:57.516204       1 controllermanager.go:576] Starting "cronjob"
I1123 17:06:57.528248       1 controllermanager.go:605] Started "cronjob"
I1123 17:06:57.528462       1 controllermanager.go:576] Starting "clusterrole-aggregation"
I1123 17:06:57.528302       1 cronjob_controllerv2.go:132] "Starting cronjob controller v2"
I1123 17:06:57.529677       1 shared_informer.go:240] Waiting for caches to sync for cronjob
I1123 17:06:57.537583       1 controllermanager.go:605] Started "clusterrole-aggregation"
... skipping 203 lines ...
I1123 17:07:01.399250       1 plugins.go:641] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
I1123 17:07:01.399310       1 plugins.go:641] "Loaded volume plugin" pluginName="kubernetes.io/csi"
I1123 17:07:01.399545       1 controllermanager.go:605] Started "attachdetach"
I1123 17:07:01.399574       1 controllermanager.go:576] Starting "pvc-protection"
I1123 17:07:01.399646       1 attach_detach_controller.go:328] Starting attach detach controller
I1123 17:07:01.399660       1 shared_informer.go:240] Waiting for caches to sync for attach detach
W1123 17:07:01.399754       1 actual_state_of_world.go:534] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="k8s-master-22071685-0" does not exist
I1123 17:07:01.552511       1 controllermanager.go:605] Started "pvc-protection"
I1123 17:07:01.552549       1 controllermanager.go:576] Starting "replicaset"
I1123 17:07:01.552652       1 pvc_protection_controller.go:103] "Starting PVC protection controller"
I1123 17:07:01.552732       1 shared_informer.go:240] Waiting for caches to sync for PVC protection
I1123 17:07:01.698036       1 controllermanager.go:605] Started "replicaset"
I1123 17:07:01.698099       1 replica_set.go:186] Starting replicaset controller
... skipping 67 lines ...
I1123 17:07:52.034377       1 node_lifecycle_controller.go:1397] Initializing eviction metric for zone: southcentralus::0
I1123 17:07:57.035539       1 node_lifecycle_controller.go:1397] Initializing eviction metric for zone: southcentralus::0
I1123 17:07:58.029294       1 replica_set.go:563] "Too few replicas" replicaSet="kube-system/coredns-7b8cbf4b7f" need=1 creating=1
I1123 17:07:58.029997       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-7b8cbf4b7f to 1"
I1123 17:07:58.052004       1 replica_set.go:563] "Too few replicas" replicaSet="kube-system/coredns-autoscaler-77f47bb58d" need=1 creating=1
I1123 17:07:58.073672       1 event.go:294] "Event occurred" object="kube-system/coredns-autoscaler" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-autoscaler-77f47bb58d to 1"
I1123 17:07:58.076786       1 deployment_controller.go:490] "Error syncing deployment" deployment="kube-system/coredns" err="Operation cannot be fulfilled on deployments.apps \"coredns\": the object has been modified; please apply your changes to the latest version and try again"
I1123 17:07:58.143206       1 deployment_controller.go:490] "Error syncing deployment" deployment="kube-system/coredns-autoscaler" err="Operation cannot be fulfilled on deployments.apps \"coredns-autoscaler\": the object has been modified; please apply your changes to the latest version and try again"
I1123 17:07:58.144347       1 event.go:294] "Event occurred" object="kube-system/coredns-autoscaler-77f47bb58d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-autoscaler-77f47bb58d-n6mfs"
I1123 17:07:58.148259       1 event.go:294] "Event occurred" object="kube-system/coredns-7b8cbf4b7f" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-7b8cbf4b7f-cldgk"
I1123 17:08:00.876368       1 event.go:294] "Event occurred" object="kube-system/azure-ip-masq-agent" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: azure-ip-masq-agent-xnx6l"
I1123 17:08:00.876405       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-rrrjb"
I1123 17:08:02.036454       1 node_lifecycle_controller.go:1397] Initializing eviction metric for zone: southcentralus::0
I1123 17:08:03.721643       1 replica_set.go:563] "Too few replicas" replicaSet="kube-system/csi-azuredisk-controller-795fd5f745" need=1 creating=1
I1123 17:08:03.738247       1 event.go:294] "Event occurred" object="kube-system/csi-azuredisk-controller" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set csi-azuredisk-controller-795fd5f745 to 1"
I1123 17:08:03.762591       1 event.go:294] "Event occurred" object="kube-system/csi-azuredisk-controller-795fd5f745" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: csi-azuredisk-controller-795fd5f745-4qhl9"
I1123 17:08:03.856361       1 deployment_controller.go:490] "Error syncing deployment" deployment="kube-system/csi-azuredisk-controller" err="Operation cannot be fulfilled on deployments.apps \"csi-azuredisk-controller\": the object has been modified; please apply your changes to the latest version and try again"
I1123 17:08:03.928261       1 deployment_controller.go:490] "Error syncing deployment" deployment="kube-system/csi-azuredisk-controller" err="Operation cannot be fulfilled on deployments.apps \"csi-azuredisk-controller\": the object has been modified; please apply your changes to the latest version and try again"
I1123 17:08:04.257989       1 replica_set.go:563] "Too few replicas" replicaSet="kube-system/csi-azurefile-controller-58d6d6cfd9" need=1 creating=1
I1123 17:08:04.260872       1 event.go:294] "Event occurred" object="kube-system/csi-azurefile-controller" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set csi-azurefile-controller-58d6d6cfd9 to 1"
I1123 17:08:04.308515       1 deployment_controller.go:490] "Error syncing deployment" deployment="kube-system/csi-azurefile-controller" err="Operation cannot be fulfilled on deployments.apps \"csi-azurefile-controller\": the object has been modified; please apply your changes to the latest version and try again"
I1123 17:08:04.311981       1 event.go:294] "Event occurred" object="kube-system/csi-azurefile-controller-58d6d6cfd9" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: csi-azurefile-controller-58d6d6cfd9-znvqz"
I1123 17:08:04.773005       1 replica_set.go:563] "Too few replicas" replicaSet="kube-system/metrics-server-554946c849" need=1 creating=1
I1123 17:08:04.789722       1 event.go:294] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-554946c849 to 1"
I1123 17:08:04.809849       1 event.go:294] "Event occurred" object="kube-system/metrics-server-554946c849" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-554946c849-bjrkk"
I1123 17:08:04.880339       1 deployment_controller.go:490] "Error syncing deployment" deployment="kube-system/metrics-server" err="Operation cannot be fulfilled on deployments.apps \"metrics-server\": the object has been modified; please apply your changes to the latest version and try again"
I1123 17:08:07.036720       1 node_lifecycle_controller.go:1397] Initializing eviction metric for zone: southcentralus::0
I1123 17:08:12.039269       1 node_lifecycle_controller.go:1397] Initializing eviction metric for zone: southcentralus::0
I1123 17:08:17.039913       1 node_lifecycle_controller.go:1397] Initializing eviction metric for zone: southcentralus::0
I1123 17:08:22.040493       1 node_lifecycle_controller.go:1397] Initializing eviction metric for zone: southcentralus::0
I1123 17:08:27.041589       1 node_lifecycle_controller.go:1397] Initializing eviction metric for zone: southcentralus::0
I1123 17:08:32.042273       1 node_lifecycle_controller.go:1397] Initializing eviction metric for zone: southcentralus::0
E1123 17:08:32.328395       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
W1123 17:08:32.764448       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
I1123 17:08:37.043255       1 node_lifecycle_controller.go:1397] Initializing eviction metric for zone: southcentralus::0
I1123 17:08:42.044132       1 node_lifecycle_controller.go:1397] Initializing eviction metric for zone: southcentralus::0
I1123 17:08:47.045100       1 node_lifecycle_controller.go:1397] Initializing eviction metric for zone: southcentralus::0
I1123 17:08:52.046058       1 node_lifecycle_controller.go:1397] Initializing eviction metric for zone: southcentralus::0
W1123 17:08:55.118322       1 actual_state_of_world.go:534] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="2207k8s001" does not exist
I1123 17:08:55.147603       1 ttl_controller.go:276] "Changed ttl annotation" node="2207k8s001" new_ttl="0s"
I1123 17:08:57.046946       1 node_lifecycle_controller.go:1397] Initializing eviction metric for zone: southcentralus::0
I1123 17:08:57.046987       1 node_lifecycle_controller.go:769] Controller observed a new Node: "2207k8s001"
I1123 17:08:57.047021       1 controller_utils.go:172] "Recording event message for node" event="Registered Node 2207k8s001 in Controller" node="2207k8s001"
W1123 17:08:57.047149       1 node_lifecycle_controller.go:1012] Missing timestamp for Node 2207k8s001. Assuming now as a timestamp.
I1123 17:08:57.047176       1 node_lifecycle_controller.go:868] Node 2207k8s001 is NotReady as of 2021-11-23 17:08:57.047167168 +0000 UTC m=+124.590019193. Adding it to the Taint queue.
... skipping 6 lines ...
I1123 17:09:05.485837       1 event.go:294] "Event occurred" object="kube-system/csi-azuredisk-node-windows" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: csi-azuredisk-node-windows-22kx9"
I1123 17:09:05.499998       1 event.go:294] "Event occurred" object="kube-system/csi-azurefile-node-windows" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: csi-azurefile-node-windows-vszv8"
I1123 17:09:05.511705       1 controller_utils.go:122] "Update ready status of pods on node" node="2207k8s001"
I1123 17:09:05.573350       1 controller_utils.go:122] "Update ready status of pods on node" node="2207k8s001"
I1123 17:09:07.048178       1 node_lifecycle_controller.go:892] Node 2207k8s001 is healthy again, removing all taints
I1123 17:09:07.048221       1 node_lifecycle_controller.go:1190] Controller detected that some Nodes are Ready. Exiting master disruption mode.
W1123 17:09:13.169245       1 actual_state_of_world.go:534] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="2207k8s000" does not exist
I1123 17:09:13.182783       1 ttl_controller.go:276] "Changed ttl annotation" node="2207k8s000" new_ttl="0s"
I1123 17:09:17.050411       1 node_lifecycle_controller.go:769] Controller observed a new Node: "2207k8s000"
I1123 17:09:17.050454       1 controller_utils.go:172] "Recording event message for node" event="Registered Node 2207k8s000 in Controller" node="2207k8s000"
I1123 17:09:17.050527       1 node_lifecycle_controller.go:1397] Initializing eviction metric for zone: southcentralus::1
I1123 17:09:17.051498       1 event.go:294] "Event occurred" object="2207k8s000" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node 2207k8s000 event: Registered Node 2207k8s000 in Controller"
W1123 17:09:17.051568       1 node_lifecycle_controller.go:1012] Missing timestamp for Node 2207k8s000. Assuming now as a timestamp.
... skipping 24 lines ...
I1123 17:13:20.166240       1 pvc_protection_controller.go:269] "PVC is unused" PVC="azuredisk-8081/pvc-mb5tr"
I1123 17:13:20.180770       1 pv_controller.go:648] volume "pvc-43abe9fd-3d9f-4388-8c14-4a8f57104f10" is released and reclaim policy "Delete" will be executed
I1123 17:13:20.188639       1 pv_controller.go:887] volume "pvc-43abe9fd-3d9f-4388-8c14-4a8f57104f10" entered phase "Released"
I1123 17:13:31.527119       1 reconciler.go:221] attacherDetacher.DetachVolume started for volume "pvc-43abe9fd-3d9f-4388-8c14-4a8f57104f10" (UniqueName: "kubernetes.io/csi/disk.csi.azure.com^/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-lfkkxkks/providers/Microsoft.Compute/disks/pvc-43abe9fd-3d9f-4388-8c14-4a8f57104f10") on node "2207k8s001" 
I1123 17:13:31.532717       1 operation_generator.go:1620] Verified volume is safe to detach for volume "pvc-43abe9fd-3d9f-4388-8c14-4a8f57104f10" (UniqueName: "kubernetes.io/csi/disk.csi.azure.com^/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-lfkkxkks/providers/Microsoft.Compute/disks/pvc-43abe9fd-3d9f-4388-8c14-4a8f57104f10") on node "2207k8s001" 
I1123 17:13:47.545551       1 operation_generator.go:527] DetachVolume.Detach succeeded for volume "pvc-43abe9fd-3d9f-4388-8c14-4a8f57104f10" (UniqueName: "kubernetes.io/csi/disk.csi.azure.com^/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-lfkkxkks/providers/Microsoft.Compute/disks/pvc-43abe9fd-3d9f-4388-8c14-4a8f57104f10") on node "2207k8s001" 
E1123 17:14:28.600323       1 pv_protection_controller.go:114] PV pvc-43abe9fd-3d9f-4388-8c14-4a8f57104f10 failed with : Operation cannot be fulfilled on persistentvolumes "pvc-43abe9fd-3d9f-4388-8c14-4a8f57104f10": the object has been modified; please apply your changes to the latest version and try again
I1123 17:14:28.613531       1 pv_controller_base.go:533] deletion of claim "azuredisk-8081/pvc-mb5tr" was already processed
E1123 17:14:28.618053       1 pv_protection_controller.go:114] PV pvc-43abe9fd-3d9f-4388-8c14-4a8f57104f10 failed with : Operation cannot be fulfilled on persistentvolumes "pvc-43abe9fd-3d9f-4388-8c14-4a8f57104f10": StorageError: invalid object, Code: 4, Key: /registry/persistentvolumes/pvc-43abe9fd-3d9f-4388-8c14-4a8f57104f10, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: d9e47806-b2d8-4e98-9ada-aff39222fb68, UID in object meta: 
I1123 17:14:36.593282       1 event.go:294] "Event occurred" object="azuredisk-9947/pvc-2fztg" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
I1123 17:14:36.654609       1 event.go:294] "Event occurred" object="azuredisk-9947/pvc-2fztg" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"disk.csi.azure.com\" or manually created by system administrator"
E1123 17:14:37.140860       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-1318/default: secrets "default-token-vwtwx" is forbidden: unable to create new content in namespace azuredisk-1318 because it is being terminated
I1123 17:14:39.070509       1 pv_controller.go:887] volume "pvc-824a61d8-d962-4f2c-a63b-d32d933a50e8" entered phase "Bound"
I1123 17:14:39.070565       1 pv_controller.go:990] volume "pvc-824a61d8-d962-4f2c-a63b-d32d933a50e8" bound to claim "azuredisk-9947/pvc-2fztg"
I1123 17:14:39.090858       1 pv_controller.go:831] claim "azuredisk-9947/pvc-2fztg" entered phase "Bound"
E1123 17:14:39.489958       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-3274/default: secrets "default-token-dpfmj" is forbidden: unable to create new content in namespace azuredisk-3274 because it is being terminated
I1123 17:14:39.725623       1 reconciler.go:304] attacherDetacher.AttachVolume started for volume "pvc-824a61d8-d962-4f2c-a63b-d32d933a50e8" (UniqueName: "kubernetes.io/csi/disk.csi.azure.com^/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-lfkkxkks/providers/Microsoft.Compute/disks/pvc-824a61d8-d962-4f2c-a63b-d32d933a50e8") from node "2207k8s001" 
I1123 17:14:41.141301       1 namespace_controller.go:185] Namespace has been deleted azuredisk-8081
I1123 17:14:42.270910       1 namespace_controller.go:185] Namespace has been deleted azuredisk-1318
I1123 17:14:43.420804       1 namespace_controller.go:185] Namespace has been deleted azuredisk-694
I1123 17:14:44.611625       1 namespace_controller.go:185] Namespace has been deleted azuredisk-3274
I1123 17:14:45.726446       1 namespace_controller.go:185] Namespace has been deleted azuredisk-495
... skipping 2 lines ...
I1123 17:15:38.116504       1 pvc_protection_controller.go:269] "PVC is unused" PVC="azuredisk-9947/pvc-2fztg"
I1123 17:15:38.130601       1 pv_controller.go:648] volume "pvc-824a61d8-d962-4f2c-a63b-d32d933a50e8" is released and reclaim policy "Delete" will be executed
I1123 17:15:38.152140       1 pv_controller.go:887] volume "pvc-824a61d8-d962-4f2c-a63b-d32d933a50e8" entered phase "Released"
I1123 17:15:43.866629       1 reconciler.go:221] attacherDetacher.DetachVolume started for volume "pvc-824a61d8-d962-4f2c-a63b-d32d933a50e8" (UniqueName: "kubernetes.io/csi/disk.csi.azure.com^/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-lfkkxkks/providers/Microsoft.Compute/disks/pvc-824a61d8-d962-4f2c-a63b-d32d933a50e8") on node "2207k8s001" 
I1123 17:15:43.873332       1 operation_generator.go:1620] Verified volume is safe to detach for volume "pvc-824a61d8-d962-4f2c-a63b-d32d933a50e8" (UniqueName: "kubernetes.io/csi/disk.csi.azure.com^/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-lfkkxkks/providers/Microsoft.Compute/disks/pvc-824a61d8-d962-4f2c-a63b-d32d933a50e8") on node "2207k8s001" 
I1123 17:15:59.846893       1 operation_generator.go:527] DetachVolume.Detach succeeded for volume "pvc-824a61d8-d962-4f2c-a63b-d32d933a50e8" (UniqueName: "kubernetes.io/csi/disk.csi.azure.com^/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-lfkkxkks/providers/Microsoft.Compute/disks/pvc-824a61d8-d962-4f2c-a63b-d32d933a50e8") on node "2207k8s001" 
E1123 17:16:14.385367       1 pv_protection_controller.go:114] PV pvc-824a61d8-d962-4f2c-a63b-d32d933a50e8 failed with : Operation cannot be fulfilled on persistentvolumes "pvc-824a61d8-d962-4f2c-a63b-d32d933a50e8": the object has been modified; please apply your changes to the latest version and try again
I1123 17:16:14.392697       1 pv_controller_base.go:533] deletion of claim "azuredisk-9947/pvc-2fztg" was already processed
I1123 17:16:19.808721       1 event.go:294] "Event occurred" object="azuredisk-5541/pvc-zl2ps" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
I1123 17:16:19.867182       1 event.go:294] "Event occurred" object="azuredisk-5541/pvc-zl2ps" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"disk.csi.azure.com\" or manually created by system administrator"
I1123 17:16:22.245253       1 pv_controller.go:887] volume "pvc-b12b61f0-1e59-4749-b1a9-2bc43621bc63" entered phase "Bound"
I1123 17:16:22.245340       1 pv_controller.go:990] volume "pvc-b12b61f0-1e59-4749-b1a9-2bc43621bc63" bound to claim "azuredisk-5541/pvc-zl2ps"
I1123 17:16:22.264932       1 pv_controller.go:831] claim "azuredisk-5541/pvc-zl2ps" entered phase "Bound"
... skipping 28 lines ...
I1123 17:19:36.960592       1 pvc_protection_controller.go:269] "PVC is unused" PVC="azuredisk-5541/pvc-vfcww"
I1123 17:19:36.973602       1 pv_controller.go:648] volume "pvc-b179e57a-a056-464e-b587-74a8395af286" is released and reclaim policy "Delete" will be executed
I1123 17:19:36.977755       1 pv_controller.go:887] volume "pvc-b179e57a-a056-464e-b587-74a8395af286" entered phase "Released"
I1123 17:19:47.156591       1 reconciler.go:221] attacherDetacher.DetachVolume started for volume "pvc-b179e57a-a056-464e-b587-74a8395af286" (UniqueName: "kubernetes.io/csi/disk.csi.azure.com^/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-lfkkxkks/providers/Microsoft.Compute/disks/pvc-b179e57a-a056-464e-b587-74a8395af286") on node "2207k8s000" 
I1123 17:19:47.160916       1 operation_generator.go:1620] Verified volume is safe to detach for volume "pvc-b179e57a-a056-464e-b587-74a8395af286" (UniqueName: "kubernetes.io/csi/disk.csi.azure.com^/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-lfkkxkks/providers/Microsoft.Compute/disks/pvc-b179e57a-a056-464e-b587-74a8395af286") on node "2207k8s000" 
I1123 17:20:02.987247       1 operation_generator.go:527] DetachVolume.Detach succeeded for volume "pvc-b179e57a-a056-464e-b587-74a8395af286" (UniqueName: "kubernetes.io/csi/disk.csi.azure.com^/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-lfkkxkks/providers/Microsoft.Compute/disks/pvc-b179e57a-a056-464e-b587-74a8395af286") on node "2207k8s000" 
E1123 17:20:45.310109       1 pv_protection_controller.go:114] PV pvc-b179e57a-a056-464e-b587-74a8395af286 failed with : Operation cannot be fulfilled on persistentvolumes "pvc-b179e57a-a056-464e-b587-74a8395af286": the object has been modified; please apply your changes to the latest version and try again
I1123 17:20:45.325991       1 pv_controller_base.go:533] deletion of claim "azuredisk-5541/pvc-vfcww" was already processed
E1123 17:20:45.329747       1 pv_protection_controller.go:114] PV pvc-b179e57a-a056-464e-b587-74a8395af286 failed with : Operation cannot be fulfilled on persistentvolumes "pvc-b179e57a-a056-464e-b587-74a8395af286": StorageError: invalid object, Code: 4, Key: /registry/persistentvolumes/pvc-b179e57a-a056-464e-b587-74a8395af286, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: a6f18598-f43e-44cc-9a72-9aad2f9587c3, UID in object meta: 
I1123 17:20:46.480844       1 pvc_protection_controller.go:281] "Pod uses PVC" pod="azuredisk-5541/azuredisk-volume-tester-mwpsx" PVC="azuredisk-5541/pvc-44lkc"
I1123 17:20:46.480886       1 pvc_protection_controller.go:174] "Keeping PVC because it is being used" PVC="azuredisk-5541/pvc-44lkc"
I1123 17:20:48.941521       1 pvc_protection_controller.go:281] "Pod uses PVC" pod="azuredisk-5541/azuredisk-volume-tester-mwpsx" PVC="azuredisk-5541/pvc-44lkc"
I1123 17:20:48.941557       1 pvc_protection_controller.go:174] "Keeping PVC because it is being used" PVC="azuredisk-5541/pvc-44lkc"
I1123 17:20:52.839801       1 pvc_protection_controller.go:281] "Pod uses PVC" pod="azuredisk-5541/azuredisk-volume-tester-mwpsx" PVC="azuredisk-5541/pvc-44lkc"
I1123 17:20:52.839865       1 pvc_protection_controller.go:174] "Keeping PVC because it is being used" PVC="azuredisk-5541/pvc-44lkc"
I1123 17:20:52.868446       1 pvc_protection_controller.go:269] "PVC is unused" PVC="azuredisk-5541/pvc-44lkc"
I1123 17:20:52.886396       1 pv_controller.go:648] volume "pvc-cf7970d2-4958-43fd-94ff-8f1912802c0f" is released and reclaim policy "Delete" will be executed
I1123 17:20:52.893751       1 pv_controller.go:887] volume "pvc-cf7970d2-4958-43fd-94ff-8f1912802c0f" entered phase "Released"
I1123 17:20:58.086271       1 reconciler.go:221] attacherDetacher.DetachVolume started for volume "pvc-cf7970d2-4958-43fd-94ff-8f1912802c0f" (UniqueName: "kubernetes.io/csi/disk.csi.azure.com^/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-lfkkxkks/providers/Microsoft.Compute/disks/pvc-cf7970d2-4958-43fd-94ff-8f1912802c0f") on node "2207k8s000" 
I1123 17:20:58.090755       1 operation_generator.go:1620] Verified volume is safe to detach for volume "pvc-cf7970d2-4958-43fd-94ff-8f1912802c0f" (UniqueName: "kubernetes.io/csi/disk.csi.azure.com^/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-lfkkxkks/providers/Microsoft.Compute/disks/pvc-cf7970d2-4958-43fd-94ff-8f1912802c0f") on node "2207k8s000" 
I1123 17:21:14.279310       1 operation_generator.go:527] DetachVolume.Detach succeeded for volume "pvc-cf7970d2-4958-43fd-94ff-8f1912802c0f" (UniqueName: "kubernetes.io/csi/disk.csi.azure.com^/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-lfkkxkks/providers/Microsoft.Compute/disks/pvc-cf7970d2-4958-43fd-94ff-8f1912802c0f") on node "2207k8s000" 
E1123 17:21:29.229601       1 pv_protection_controller.go:114] PV pvc-cf7970d2-4958-43fd-94ff-8f1912802c0f failed with : Operation cannot be fulfilled on persistentvolumes "pvc-cf7970d2-4958-43fd-94ff-8f1912802c0f": the object has been modified; please apply your changes to the latest version and try again
I1123 17:21:29.238149       1 pv_controller_base.go:533] deletion of claim "azuredisk-5541/pvc-44lkc" was already processed
E1123 17:21:29.249221       1 pv_protection_controller.go:114] PV pvc-cf7970d2-4958-43fd-94ff-8f1912802c0f failed with : Operation cannot be fulfilled on persistentvolumes "pvc-cf7970d2-4958-43fd-94ff-8f1912802c0f": StorageError: invalid object, Code: 4, Key: /registry/persistentvolumes/pvc-cf7970d2-4958-43fd-94ff-8f1912802c0f, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 0459f98e-e183-4924-af62-06c166c1c94d, UID in object meta: 
I1123 17:21:32.178586       1 pvc_protection_controller.go:281] "Pod uses PVC" pod="azuredisk-5541/azuredisk-volume-tester-l8dw2" PVC="azuredisk-5541/pvc-zl2ps"
I1123 17:21:32.178633       1 pvc_protection_controller.go:174] "Keeping PVC because it is being used" PVC="azuredisk-5541/pvc-zl2ps"
I1123 17:21:35.186446       1 pvc_protection_controller.go:281] "Pod uses PVC" pod="azuredisk-5541/azuredisk-volume-tester-l8dw2" PVC="azuredisk-5541/pvc-zl2ps"
I1123 17:21:35.186481       1 pvc_protection_controller.go:174] "Keeping PVC because it is being used" PVC="azuredisk-5541/pvc-zl2ps"
I1123 17:21:37.592371       1 pvc_protection_controller.go:281] "Pod uses PVC" pod="azuredisk-5541/azuredisk-volume-tester-l8dw2" PVC="azuredisk-5541/pvc-zl2ps"
I1123 17:21:37.592405       1 pvc_protection_controller.go:174] "Keeping PVC because it is being used" PVC="azuredisk-5541/pvc-zl2ps"
... skipping 2 lines ...
I1123 17:21:37.612345       1 pvc_protection_controller.go:269] "PVC is unused" PVC="azuredisk-5541/pvc-zl2ps"
I1123 17:21:37.629939       1 pv_controller.go:648] volume "pvc-b12b61f0-1e59-4749-b1a9-2bc43621bc63" is released and reclaim policy "Delete" will be executed
I1123 17:21:37.643078       1 pv_controller.go:887] volume "pvc-b12b61f0-1e59-4749-b1a9-2bc43621bc63" entered phase "Released"
I1123 17:21:41.232689       1 reconciler.go:221] attacherDetacher.DetachVolume started for volume "pvc-b12b61f0-1e59-4749-b1a9-2bc43621bc63" (UniqueName: "kubernetes.io/csi/disk.csi.azure.com^/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-lfkkxkks/providers/Microsoft.Compute/disks/pvc-b12b61f0-1e59-4749-b1a9-2bc43621bc63") on node "2207k8s001" 
I1123 17:21:41.236169       1 operation_generator.go:1620] Verified volume is safe to detach for volume "pvc-b12b61f0-1e59-4749-b1a9-2bc43621bc63" (UniqueName: "kubernetes.io/csi/disk.csi.azure.com^/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-lfkkxkks/providers/Microsoft.Compute/disks/pvc-b12b61f0-1e59-4749-b1a9-2bc43621bc63") on node "2207k8s001" 
I1123 17:21:57.380254       1 operation_generator.go:527] DetachVolume.Detach succeeded for volume "pvc-b12b61f0-1e59-4749-b1a9-2bc43621bc63" (UniqueName: "kubernetes.io/csi/disk.csi.azure.com^/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-lfkkxkks/providers/Microsoft.Compute/disks/pvc-b12b61f0-1e59-4749-b1a9-2bc43621bc63") on node "2207k8s001" 
E1123 17:22:14.000615       1 pv_protection_controller.go:114] PV pvc-b12b61f0-1e59-4749-b1a9-2bc43621bc63 failed with : Operation cannot be fulfilled on persistentvolumes "pvc-b12b61f0-1e59-4749-b1a9-2bc43621bc63": the object has been modified; please apply your changes to the latest version and try again
I1123 17:22:14.012157       1 pv_controller_base.go:533] deletion of claim "azuredisk-5541/pvc-zl2ps" was already processed
I1123 17:22:19.162001       1 event.go:294] "Event occurred" object="azuredisk-5356/pvc-tkrnh" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
I1123 17:22:19.215981       1 replica_set.go:563] "Too few replicas" replicaSet="azuredisk-5356/azuredisk-volume-tester-bghq8-5bbf87cf44" need=1 creating=1
I1123 17:22:19.217368       1 event.go:294] "Event occurred" object="azuredisk-5356/azuredisk-volume-tester-bghq8" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set azuredisk-volume-tester-bghq8-5bbf87cf44 to 1"
I1123 17:22:19.234705       1 event.go:294] "Event occurred" object="azuredisk-5356/azuredisk-volume-tester-bghq8-5bbf87cf44" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: azuredisk-volume-tester-bghq8-5bbf87cf44-zw8wv"
I1123 17:22:19.239063       1 deployment_controller.go:490] "Error syncing deployment" deployment="azuredisk-5356/azuredisk-volume-tester-bghq8" err="Operation cannot be fulfilled on deployments.apps \"azuredisk-volume-tester-bghq8\": the object has been modified; please apply your changes to the latest version and try again"
I1123 17:22:19.275248       1 deployment_controller.go:490] "Error syncing deployment" deployment="azuredisk-5356/azuredisk-volume-tester-bghq8" err="Operation cannot be fulfilled on deployments.apps \"azuredisk-volume-tester-bghq8\": the object has been modified; please apply your changes to the latest version and try again"
I1123 17:22:19.276105       1 event.go:294] "Event occurred" object="azuredisk-5356/pvc-tkrnh" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"disk.csi.azure.com\" or manually created by system administrator"
I1123 17:22:19.276313       1 event.go:294] "Event occurred" object="azuredisk-5356/pvc-tkrnh" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"disk.csi.azure.com\" or manually created by system administrator"
I1123 17:22:21.621190       1 pv_controller.go:887] volume "pvc-f4a8bacc-0249-4f6a-b9dc-e9016a241b80" entered phase "Bound"
I1123 17:22:21.621283       1 pv_controller.go:990] volume "pvc-f4a8bacc-0249-4f6a-b9dc-e9016a241b80" bound to claim "azuredisk-5356/pvc-tkrnh"
I1123 17:22:21.635538       1 pv_controller.go:831] claim "azuredisk-5356/pvc-tkrnh" entered phase "Bound"
I1123 17:22:22.328716       1 reconciler.go:304] attacherDetacher.AttachVolume started for volume "pvc-f4a8bacc-0249-4f6a-b9dc-e9016a241b80" (UniqueName: "kubernetes.io/csi/disk.csi.azure.com^/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-lfkkxkks/providers/Microsoft.Compute/disks/pvc-f4a8bacc-0249-4f6a-b9dc-e9016a241b80") from node "2207k8s001" 
I1123 17:22:28.117770       1 namespace_controller.go:185] Namespace has been deleted azuredisk-5541
I1123 17:22:33.714276       1 operation_generator.go:412] AttachVolume.Attach succeeded for volume "pvc-f4a8bacc-0249-4f6a-b9dc-e9016a241b80" (UniqueName: "kubernetes.io/csi/disk.csi.azure.com^/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-lfkkxkks/providers/Microsoft.Compute/disks/pvc-f4a8bacc-0249-4f6a-b9dc-e9016a241b80") from node "2207k8s001" 
I1123 17:22:33.714372       1 event.go:294] "Event occurred" object="azuredisk-5356/azuredisk-volume-tester-bghq8-5bbf87cf44-zw8wv" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"pvc-f4a8bacc-0249-4f6a-b9dc-e9016a241b80\" "
I1123 17:23:21.339718       1 replica_set.go:563] "Too few replicas" replicaSet="azuredisk-5356/azuredisk-volume-tester-bghq8-5bbf87cf44" need=1 creating=1
I1123 17:23:21.422662       1 event.go:294] "Event occurred" object="azuredisk-5356/azuredisk-volume-tester-bghq8-5bbf87cf44" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: azuredisk-volume-tester-bghq8-5bbf87cf44-ctw7v"
W1123 17:23:21.515582       1 reconciler.go:385] Multi-Attach error for volume "pvc-f4a8bacc-0249-4f6a-b9dc-e9016a241b80" (UniqueName: "kubernetes.io/csi/disk.csi.azure.com^/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-lfkkxkks/providers/Microsoft.Compute/disks/pvc-f4a8bacc-0249-4f6a-b9dc-e9016a241b80") from node "2207k8s000" Volume is already used by pods azuredisk-5356/azuredisk-volume-tester-bghq8-5bbf87cf44-zw8wv on node 2207k8s001
I1123 17:23:21.515803       1 event.go:294] "Event occurred" object="azuredisk-5356/azuredisk-volume-tester-bghq8-5bbf87cf44-ctw7v" kind="Pod" apiVersion="v1" type="Warning" reason="FailedAttachVolume" message="Multi-Attach error for volume \"pvc-f4a8bacc-0249-4f6a-b9dc-e9016a241b80\" Volume is already used by pod(s) azuredisk-volume-tester-bghq8-5bbf87cf44-zw8wv"
I1123 17:23:34.320247       1 reconciler.go:221] attacherDetacher.DetachVolume started for volume "pvc-f4a8bacc-0249-4f6a-b9dc-e9016a241b80" (UniqueName: "kubernetes.io/csi/disk.csi.azure.com^/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-lfkkxkks/providers/Microsoft.Compute/disks/pvc-f4a8bacc-0249-4f6a-b9dc-e9016a241b80") on node "2207k8s001" 
I1123 17:23:34.325945       1 operation_generator.go:1620] Verified volume is safe to detach for volume "pvc-f4a8bacc-0249-4f6a-b9dc-e9016a241b80" (UniqueName: "kubernetes.io/csi/disk.csi.azure.com^/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-lfkkxkks/providers/Microsoft.Compute/disks/pvc-f4a8bacc-0249-4f6a-b9dc-e9016a241b80") on node "2207k8s001" 
I1123 17:23:50.547148       1 operation_generator.go:527] DetachVolume.Detach succeeded for volume "pvc-f4a8bacc-0249-4f6a-b9dc-e9016a241b80" (UniqueName: "kubernetes.io/csi/disk.csi.azure.com^/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-lfkkxkks/providers/Microsoft.Compute/disks/pvc-f4a8bacc-0249-4f6a-b9dc-e9016a241b80") on node "2207k8s001" 
I1123 17:23:50.629371       1 reconciler.go:304] attacherDetacher.AttachVolume started for volume "pvc-f4a8bacc-0249-4f6a-b9dc-e9016a241b80" (UniqueName: "kubernetes.io/csi/disk.csi.azure.com^/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-lfkkxkks/providers/Microsoft.Compute/disks/pvc-f4a8bacc-0249-4f6a-b9dc-e9016a241b80") from node "2207k8s000" 
I1123 17:24:02.046549       1 operation_generator.go:412] AttachVolume.Attach succeeded for volume "pvc-f4a8bacc-0249-4f6a-b9dc-e9016a241b80" (UniqueName: "kubernetes.io/csi/disk.csi.azure.com^/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-lfkkxkks/providers/Microsoft.Compute/disks/pvc-f4a8bacc-0249-4f6a-b9dc-e9016a241b80") from node "2207k8s000" 
I1123 17:24:02.046716       1 event.go:294] "Event occurred" object="azuredisk-5356/azuredisk-volume-tester-bghq8-5bbf87cf44-ctw7v" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"pvc-f4a8bacc-0249-4f6a-b9dc-e9016a241b80\" "
... skipping 11 lines ...
I1123 17:25:06.269355       1 pvc_protection_controller.go:269] "PVC is unused" PVC="azuredisk-5356/pvc-tkrnh"
I1123 17:25:06.283462       1 pv_controller.go:648] volume "pvc-f4a8bacc-0249-4f6a-b9dc-e9016a241b80" is released and reclaim policy "Delete" will be executed
I1123 17:25:06.292982       1 pv_controller.go:887] volume "pvc-f4a8bacc-0249-4f6a-b9dc-e9016a241b80" entered phase "Released"
I1123 17:25:13.744004       1 reconciler.go:221] attacherDetacher.DetachVolume started for volume "pvc-f4a8bacc-0249-4f6a-b9dc-e9016a241b80" (UniqueName: "kubernetes.io/csi/disk.csi.azure.com^/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-lfkkxkks/providers/Microsoft.Compute/disks/pvc-f4a8bacc-0249-4f6a-b9dc-e9016a241b80") on node "2207k8s000" 
I1123 17:25:13.752003       1 operation_generator.go:1620] Verified volume is safe to detach for volume "pvc-f4a8bacc-0249-4f6a-b9dc-e9016a241b80" (UniqueName: "kubernetes.io/csi/disk.csi.azure.com^/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-lfkkxkks/providers/Microsoft.Compute/disks/pvc-f4a8bacc-0249-4f6a-b9dc-e9016a241b80") on node "2207k8s000" 
I1123 17:25:29.851662       1 operation_generator.go:527] DetachVolume.Detach succeeded for volume "pvc-f4a8bacc-0249-4f6a-b9dc-e9016a241b80" (UniqueName: "kubernetes.io/csi/disk.csi.azure.com^/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-lfkkxkks/providers/Microsoft.Compute/disks/pvc-f4a8bacc-0249-4f6a-b9dc-e9016a241b80") on node "2207k8s000" 
E1123 17:25:42.744510       1 pv_protection_controller.go:114] PV pvc-f4a8bacc-0249-4f6a-b9dc-e9016a241b80 failed with : Operation cannot be fulfilled on persistentvolumes "pvc-f4a8bacc-0249-4f6a-b9dc-e9016a241b80": the object has been modified; please apply your changes to the latest version and try again
I1123 17:25:42.761412       1 pv_controller_base.go:533] deletion of claim "azuredisk-5356/pvc-tkrnh" was already processed
I1123 17:25:46.389674       1 event.go:294] "Event occurred" object="azuredisk-3090/pvc-bdc4w" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"disk.csi.azure.com\" or manually created by system administrator"
I1123 17:25:47.113420       1 event.go:294] "Event occurred" object="azuredisk-3090/pvc-bdc4w" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"disk.csi.azure.com\" or manually created by system administrator"
I1123 17:25:48.742027       1 pv_controller.go:887] volume "pvc-ffb282dd-a3af-43e4-b993-3f332e7d4454" entered phase "Bound"
I1123 17:25:48.742072       1 pv_controller.go:990] volume "pvc-ffb282dd-a3af-43e4-b993-3f332e7d4454" bound to claim "azuredisk-3090/pvc-bdc4w"
I1123 17:25:48.759503       1 pv_controller.go:831] claim "azuredisk-3090/pvc-bdc4w" entered phase "Bound"
E1123 17:25:50.602148       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-5356/default: secrets "default-token-wgvfv" is forbidden: unable to create new content in namespace azuredisk-5356 because it is being terminated
I1123 17:25:50.616837       1 pvc_protection_controller.go:269] "PVC is unused" PVC="azuredisk-3090/pvc-bdc4w"
I1123 17:25:50.631739       1 pv_controller.go:648] volume "pvc-ffb282dd-a3af-43e4-b993-3f332e7d4454" is released and reclaim policy "Delete" will be executed
I1123 17:25:50.650496       1 pv_controller.go:887] volume "pvc-ffb282dd-a3af-43e4-b993-3f332e7d4454" entered phase "Released"
I1123 17:25:55.672167       1 namespace_controller.go:185] Namespace has been deleted azuredisk-5356
I1123 17:25:55.861590       1 pv_controller_base.go:533] deletion of claim "azuredisk-3090/pvc-bdc4w" was already processed
E1123 17:26:05.957409       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-3090/default: secrets "default-token-n5ld9" is forbidden: unable to create new content in namespace azuredisk-3090 because it is being terminated
E1123 17:26:07.185476       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-4078/default: secrets "default-token-nt4qd" is forbidden: unable to create new content in namespace azuredisk-4078 because it is being terminated
E1123 17:26:10.627182       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-2888/default: secrets "default-token-xxqs2" is forbidden: unable to create new content in namespace azuredisk-2888 because it is being terminated
I1123 17:26:11.012200       1 event.go:294] "Event occurred" object="azuredisk-8510/pvc-lb7pl" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
I1123 17:26:11.096103       1 event.go:294] "Event occurred" object="azuredisk-8510/pvc-bt4jg" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
I1123 17:26:11.096166       1 namespace_controller.go:185] Namespace has been deleted azuredisk-3090
I1123 17:26:11.170450       1 event.go:294] "Event occurred" object="azuredisk-8510/pvc-b8klb" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
I1123 17:26:11.225462       1 event.go:294] "Event occurred" object="azuredisk-8510/pvc-lb7pl" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"disk.csi.azure.com\" or manually created by system administrator"
I1123 17:26:11.227664       1 event.go:294] "Event occurred" object="azuredisk-8510/pvc-lb7pl" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"disk.csi.azure.com\" or manually created by system administrator"
... skipping 39 lines ...
I1123 17:28:49.957250       1 pv_controller.go:648] volume "pvc-c24db49d-9d73-41f7-8480-e24bd907588d" is released and reclaim policy "Delete" will be executed
I1123 17:28:49.962325       1 pv_controller.go:887] volume "pvc-c24db49d-9d73-41f7-8480-e24bd907588d" entered phase "Released"
I1123 17:28:55.246208       1 pv_controller_base.go:533] deletion of claim "azuredisk-8510/pvc-bt4jg" was already processed
I1123 17:29:00.256547       1 pvc_protection_controller.go:269] "PVC is unused" PVC="azuredisk-8510/pvc-lb7pl"
I1123 17:29:00.273298       1 pv_controller.go:648] volume "pvc-2625990d-d8a1-466e-875d-cdda80cd99af" is released and reclaim policy "Delete" will be executed
I1123 17:29:00.286444       1 pv_controller.go:887] volume "pvc-2625990d-d8a1-466e-875d-cdda80cd99af" entered phase "Released"
E1123 17:29:05.481012       1 pv_protection_controller.go:114] PV pvc-2625990d-d8a1-466e-875d-cdda80cd99af failed with : Operation cannot be fulfilled on persistentvolumes "pvc-2625990d-d8a1-466e-875d-cdda80cd99af": the object has been modified; please apply your changes to the latest version and try again
I1123 17:29:05.488382       1 pv_controller_base.go:533] deletion of claim "azuredisk-8510/pvc-lb7pl" was already processed
I1123 17:29:15.028350       1 event.go:294] "Event occurred" object="azuredisk-953/pvc-s79p9" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
I1123 17:29:15.106195       1 event.go:294] "Event occurred" object="azuredisk-953/pvc-ps4j8" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
I1123 17:29:15.190282       1 event.go:294] "Event occurred" object="azuredisk-953/pvc-lwkhh" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
I1123 17:29:15.248672       1 event.go:294] "Event occurred" object="azuredisk-953/pvc-s79p9" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"disk.csi.azure.com\" or manually created by system administrator"
I1123 17:29:15.298894       1 event.go:294] "Event occurred" object="azuredisk-953/pvc-ps4j8" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"disk.csi.azure.com\" or manually created by system administrator"
I1123 17:29:15.299501       1 event.go:294] "Event occurred" object="azuredisk-953/pvc-lwkhh" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"disk.csi.azure.com\" or manually created by system administrator"
I1123 17:29:15.299981       1 event.go:294] "Event occurred" object="azuredisk-953/pvc-lwkhh" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"disk.csi.azure.com\" or manually created by system administrator"
E1123 17:29:16.841649       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-5561/default: secrets "default-token-zd68k" is forbidden: unable to create new content in namespace azuredisk-5561 because it is being terminated
I1123 17:29:17.123039       1 event.go:294] "Event occurred" object="azuredisk-953/pvc-s79p9" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"disk.csi.azure.com\" or manually created by system administrator"
I1123 17:29:17.123094       1 event.go:294] "Event occurred" object="azuredisk-953/pvc-ps4j8" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"disk.csi.azure.com\" or manually created by system administrator"
I1123 17:29:17.123121       1 event.go:294] "Event occurred" object="azuredisk-953/pvc-lwkhh" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"disk.csi.azure.com\" or manually created by system administrator"
I1123 17:29:17.788803       1 pv_controller.go:887] volume "pvc-c0f59af6-2e1e-4af0-ac8a-73b30406e65b" entered phase "Bound"
I1123 17:29:17.789265       1 pv_controller.go:990] volume "pvc-c0f59af6-2e1e-4af0-ac8a-73b30406e65b" bound to claim "azuredisk-953/pvc-ps4j8"
I1123 17:29:17.817550       1 pv_controller.go:831] claim "azuredisk-953/pvc-ps4j8" entered phase "Bound"
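The "Bound" transitions above are the PV controller pairing a freshly provisioned volume with its claim. A minimal sketch of the kind of claim these tests create — names, namespace, and size are placeholders, not values taken from this job — could be built with the client-go API of this era as follows:

package example

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// CreateClaim is an illustrative sketch; the namespace, class name, and size
// are assumptions, not the suite's actual values.
func CreateClaim(cs kubernetes.Interface, namespace, scName string) (*corev1.PersistentVolumeClaim, error) {
	pvc := &corev1.PersistentVolumeClaim{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "pvc-", Namespace: namespace},
		Spec: corev1.PersistentVolumeClaimSpec{
			// ReadWriteOnce means the backing disk can be attached to only one
			// node at a time, which is what the Multi-Attach warning later in
			// this log is about.
			AccessModes:      []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
			StorageClassName: &scName,
			Resources: corev1.ResourceRequirements{
				Requests: corev1.ResourceList{
					corev1.ResourceStorage: resource.MustParse("10Gi"),
				},
			},
		},
	}
	return cs.CoreV1().PersistentVolumeClaims(namespace).Create(context.TODO(), pvc, metav1.CreateOptions{})
}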
... skipping 29 lines ...
I1123 17:31:31.296955       1 operation_generator.go:527] DetachVolume.Detach succeeded for volume "pvc-ddefeb2b-eeca-41ec-a559-c2376c84d6f2" (UniqueName: "kubernetes.io/csi/disk.csi.azure.com^/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-lfkkxkks/providers/Microsoft.Compute/disks/pvc-ddefeb2b-eeca-41ec-a559-c2376c84d6f2") on node "2207k8s001" 
I1123 17:31:31.339698       1 operation_generator.go:527] DetachVolume.Detach succeeded for volume "pvc-2fafbd80-f5da-4ac5-bfdb-21c3cb508ab9" (UniqueName: "kubernetes.io/csi/disk.csi.azure.com^/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-lfkkxkks/providers/Microsoft.Compute/disks/pvc-2fafbd80-f5da-4ac5-bfdb-21c3cb508ab9") on node "2207k8s001" 
I1123 17:31:53.538739       1 pv_controller_base.go:533] deletion of claim "azuredisk-953/pvc-lwkhh" was already processed
I1123 17:31:56.014764       1 pvc_protection_controller.go:269] "PVC is unused" PVC="azuredisk-953/pvc-ps4j8"
I1123 17:31:56.030923       1 pv_controller.go:648] volume "pvc-c0f59af6-2e1e-4af0-ac8a-73b30406e65b" is released and reclaim policy "Delete" will be executed
I1123 17:31:56.036631       1 pv_controller.go:887] volume "pvc-c0f59af6-2e1e-4af0-ac8a-73b30406e65b" entered phase "Released"
E1123 17:32:01.259106       1 pv_protection_controller.go:114] PV pvc-c0f59af6-2e1e-4af0-ac8a-73b30406e65b failed with : Operation cannot be fulfilled on persistentvolumes "pvc-c0f59af6-2e1e-4af0-ac8a-73b30406e65b": the object has been modified; please apply your changes to the latest version and try again
I1123 17:32:01.279423       1 pv_controller_base.go:533] deletion of claim "azuredisk-953/pvc-ps4j8" was already processed
I1123 17:32:06.327184       1 pvc_protection_controller.go:269] "PVC is unused" PVC="azuredisk-953/pvc-s79p9"
I1123 17:32:06.339621       1 pv_controller.go:648] volume "pvc-ddefeb2b-eeca-41ec-a559-c2376c84d6f2" is released and reclaim policy "Delete" will be executed
I1123 17:32:06.349843       1 pv_controller.go:887] volume "pvc-ddefeb2b-eeca-41ec-a559-c2376c84d6f2" entered phase "Released"
E1123 17:32:11.562134       1 pv_protection_controller.go:114] PV pvc-ddefeb2b-eeca-41ec-a559-c2376c84d6f2 failed with : Operation cannot be fulfilled on persistentvolumes "pvc-ddefeb2b-eeca-41ec-a559-c2376c84d6f2": the object has been modified; please apply your changes to the latest version and try again
I1123 17:32:11.572978       1 pv_controller_base.go:533] deletion of claim "azuredisk-953/pvc-s79p9" was already processed
I1123 17:32:20.000710       1 deployment_controller.go:583] "Deployment has been deleted" deployment="azuredisk-5356/azuredisk-volume-tester-bghq8"
I1123 17:32:20.274158       1 event.go:294] "Event occurred" object="azuredisk-552/pvc-b4r6b" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
I1123 17:32:20.336143       1 event.go:294] "Event occurred" object="azuredisk-552/pvc-b4r6b" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"disk.csi.azure.com\" or manually created by system administrator"
I1123 17:32:20.341898       1 event.go:294] "Event occurred" object="azuredisk-552/pvc-b4r6b" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"disk.csi.azure.com\" or manually created by system administrator"
E1123 17:32:21.909828       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-953/default: secrets "default-token-c89sj" is forbidden: unable to create new content in namespace azuredisk-953 because it is being terminated
I1123 17:32:22.703081       1 pv_controller.go:887] volume "pvc-ee5771a1-ec3e-48f2-875b-c47d30716ace" entered phase "Bound"
I1123 17:32:22.703131       1 pv_controller.go:990] volume "pvc-ee5771a1-ec3e-48f2-875b-c47d30716ace" bound to claim "azuredisk-552/pvc-b4r6b"
I1123 17:32:22.717634       1 pv_controller.go:831] claim "azuredisk-552/pvc-b4r6b" entered phase "Bound"
E1123 17:32:22.936686       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-3033/default: secrets "default-token-qpqgs" is forbidden: unable to create new content in namespace azuredisk-3033 because it is being terminated
I1123 17:32:23.413512       1 reconciler.go:304] attacherDetacher.AttachVolume started for volume "pvc-ee5771a1-ec3e-48f2-875b-c47d30716ace" (UniqueName: "kubernetes.io/csi/disk.csi.azure.com^/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-lfkkxkks/providers/Microsoft.Compute/disks/pvc-ee5771a1-ec3e-48f2-875b-c47d30716ace") from node "2207k8s001" 
I1123 17:32:26.985018       1 namespace_controller.go:185] Namespace has been deleted azuredisk-953
I1123 17:32:28.023792       1 namespace_controller.go:185] Namespace has been deleted azuredisk-3033
I1123 17:32:29.425476       1 namespace_controller.go:185] Namespace has been deleted azuredisk-9336
I1123 17:32:34.691391       1 operation_generator.go:412] AttachVolume.Attach succeeded for volume "pvc-ee5771a1-ec3e-48f2-875b-c47d30716ace" (UniqueName: "kubernetes.io/csi/disk.csi.azure.com^/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-lfkkxkks/providers/Microsoft.Compute/disks/pvc-ee5771a1-ec3e-48f2-875b-c47d30716ace") from node "2207k8s001" 
I1123 17:32:34.691498       1 event.go:294] "Event occurred" object="azuredisk-552/azuredisk-volume-tester-xclwk" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"pvc-ee5771a1-ec3e-48f2-875b-c47d30716ace\" "
I1123 17:33:26.126796       1 reconciler.go:221] attacherDetacher.DetachVolume started for volume "pvc-ee5771a1-ec3e-48f2-875b-c47d30716ace" (UniqueName: "kubernetes.io/csi/disk.csi.azure.com^/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-lfkkxkks/providers/Microsoft.Compute/disks/pvc-ee5771a1-ec3e-48f2-875b-c47d30716ace") on node "2207k8s001" 
I1123 17:33:26.133355       1 operation_generator.go:1620] Verified volume is safe to detach for volume "pvc-ee5771a1-ec3e-48f2-875b-c47d30716ace" (UniqueName: "kubernetes.io/csi/disk.csi.azure.com^/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-lfkkxkks/providers/Microsoft.Compute/disks/pvc-ee5771a1-ec3e-48f2-875b-c47d30716ace") on node "2207k8s001" 
I1123 17:33:42.212759       1 operation_generator.go:527] DetachVolume.Detach succeeded for volume "pvc-ee5771a1-ec3e-48f2-875b-c47d30716ace" (UniqueName: "kubernetes.io/csi/disk.csi.azure.com^/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-lfkkxkks/providers/Microsoft.Compute/disks/pvc-ee5771a1-ec3e-48f2-875b-c47d30716ace") on node "2207k8s001" 
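The attach/detach sequence above (AttachVolume started, Attach succeeded, DetachVolume started, Verified safe to detach, Detach succeeded) is driven by the attach/detach controller; for CSI drivers the attach is expressed as a VolumeAttachment object that the external attacher acts on. A hedged sketch of such an object follows — the controller generates the real names deterministically, so the names here are placeholders:

package example

import (
	"context"

	storagev1 "k8s.io/api/storage/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// AttachPV sketches the VolumeAttachment shape used for a CSI volume; the
// object name prefix and parameters are illustrative assumptions.
func AttachPV(cs kubernetes.Interface, pvName, nodeName string) (*storagev1.VolumeAttachment, error) {
	va := &storagev1.VolumeAttachment{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "csi-"},
		Spec: storagev1.VolumeAttachmentSpec{
			Attacher: "disk.csi.azure.com", // the CSI driver named throughout this log
			NodeName: nodeName,
			Source: storagev1.VolumeAttachmentSource{
				PersistentVolumeName: &pvName,
			},
		},
	}
	return cs.StorageV1().VolumeAttachments().Create(context.TODO(), va, metav1.CreateOptions{})
}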
I1123 17:33:47.570128       1 pvc_protection_controller.go:269] "PVC is unused" PVC="azuredisk-552/pvc-b4r6b"
I1123 17:33:47.583551       1 pv_controller.go:648] volume "pvc-ee5771a1-ec3e-48f2-875b-c47d30716ace" is released and reclaim policy "Delete" will be executed
I1123 17:33:47.593271       1 pv_controller.go:887] volume "pvc-ee5771a1-ec3e-48f2-875b-c47d30716ace" entered phase "Released"
E1123 17:33:52.832284       1 pv_protection_controller.go:114] PV pvc-ee5771a1-ec3e-48f2-875b-c47d30716ace failed with : Operation cannot be fulfilled on persistentvolumes "pvc-ee5771a1-ec3e-48f2-875b-c47d30716ace": the object has been modified; please apply your changes to the latest version and try again
I1123 17:33:52.843780       1 pv_controller_base.go:533] deletion of claim "azuredisk-552/pvc-b4r6b" was already processed
E1123 17:33:52.846694       1 pv_protection_controller.go:114] PV pvc-ee5771a1-ec3e-48f2-875b-c47d30716ace failed with : Operation cannot be fulfilled on persistentvolumes "pvc-ee5771a1-ec3e-48f2-875b-c47d30716ace": StorageError: invalid object, Code: 4, Key: /registry/persistentvolumes/pvc-ee5771a1-ec3e-48f2-875b-c47d30716ace, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: eca5a7e1-5761-4cda-b84a-b2a4084ea442, UID in object meta: 
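The "Operation cannot be fulfilled ... the object has been modified" errors from pv_protection_controller are ordinary optimistic-concurrency conflicts: two controllers updated the same PersistentVolume, the writer with the stale resourceVersion lost, and the work is retried, which is why the claims are still reported as processed afterwards. The standard client-go pattern for such updates is retry-on-conflict; a generic sketch (the mutation is only an illustration, not the controller's actual code):

package example

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/util/retry"
)

// UpdatePVWithRetry re-reads the PersistentVolume and reapplies mutate on every
// conflict, the usual way to avoid "the object has been modified" errors.
func UpdatePVWithRetry(cs kubernetes.Interface, name string, mutate func(pv *corev1.PersistentVolume)) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		pv, err := cs.CoreV1().PersistentVolumes().Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		mutate(pv) // e.g. drop a finalizer; the concrete mutation is illustrative
		_, err = cs.CoreV1().PersistentVolumes().Update(context.TODO(), pv, metav1.UpdateOptions{})
		return err
	})
}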
I1123 17:33:59.086642       1 event.go:294] "Event occurred" object="azuredisk-1351/azuredisk-volume-tester-mvwk8" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Claim pvc-azuredisk-volume-tester-mvwk8-0 Pod azuredisk-volume-tester-mvwk8-0 in StatefulSet azuredisk-volume-tester-mvwk8 success"
I1123 17:33:59.092925       1 event.go:294] "Event occurred" object="azuredisk-1351/pvc-azuredisk-volume-tester-mvwk8-0" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
I1123 17:33:59.099302       1 event.go:294] "Event occurred" object="azuredisk-1351/azuredisk-volume-tester-mvwk8" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod azuredisk-volume-tester-mvwk8-0 in StatefulSet azuredisk-volume-tester-mvwk8 successful"
I1123 17:33:59.119679       1 event.go:294] "Event occurred" object="azuredisk-1351/pvc-azuredisk-volume-tester-mvwk8-0" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"disk.csi.azure.com\" or manually created by system administrator"
I1123 17:33:59.123293       1 event.go:294] "Event occurred" object="azuredisk-1351/pvc-azuredisk-volume-tester-mvwk8-0" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"disk.csi.azure.com\" or manually created by system administrator"
I1123 17:34:01.618984       1 pv_controller.go:887] volume "pvc-cdb57246-3a07-4f89-9b29-16cec3710393" entered phase "Bound"
I1123 17:34:01.619227       1 pv_controller.go:990] volume "pvc-cdb57246-3a07-4f89-9b29-16cec3710393" bound to claim "azuredisk-1351/pvc-azuredisk-volume-tester-mvwk8-0"
I1123 17:34:01.636263       1 pv_controller.go:831] claim "azuredisk-1351/pvc-azuredisk-volume-tester-mvwk8-0" entered phase "Bound"
I1123 17:34:02.189762       1 reconciler.go:304] attacherDetacher.AttachVolume started for volume "pvc-cdb57246-3a07-4f89-9b29-16cec3710393" (UniqueName: "kubernetes.io/csi/disk.csi.azure.com^/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-lfkkxkks/providers/Microsoft.Compute/disks/pvc-cdb57246-3a07-4f89-9b29-16cec3710393") from node "2207k8s001" 
I1123 17:34:08.107306       1 namespace_controller.go:185] Namespace has been deleted azuredisk-552
I1123 17:34:13.450803       1 operation_generator.go:412] AttachVolume.Attach succeeded for volume "pvc-cdb57246-3a07-4f89-9b29-16cec3710393" (UniqueName: "kubernetes.io/csi/disk.csi.azure.com^/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-lfkkxkks/providers/Microsoft.Compute/disks/pvc-cdb57246-3a07-4f89-9b29-16cec3710393") from node "2207k8s001" 
I1123 17:34:13.451178       1 event.go:294] "Event occurred" object="azuredisk-1351/azuredisk-volume-tester-mvwk8-0" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"pvc-cdb57246-3a07-4f89-9b29-16cec3710393\" "
I1123 17:35:03.942414       1 event.go:294] "Event occurred" object="azuredisk-1351/azuredisk-volume-tester-mvwk8" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod azuredisk-volume-tester-mvwk8-0 in StatefulSet azuredisk-volume-tester-mvwk8 successful"
W1123 17:35:04.050353       1 reconciler.go:344] Multi-Attach error for volume "pvc-cdb57246-3a07-4f89-9b29-16cec3710393" (UniqueName: "kubernetes.io/csi/disk.csi.azure.com^/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-lfkkxkks/providers/Microsoft.Compute/disks/pvc-cdb57246-3a07-4f89-9b29-16cec3710393") from node "2207k8s000" Volume is already exclusively attached to node 2207k8s001 and can't be attached to another
I1123 17:35:04.050491       1 event.go:294] "Event occurred" object="azuredisk-1351/azuredisk-volume-tester-mvwk8-0" kind="Pod" apiVersion="v1" type="Warning" reason="FailedAttachVolume" message="Multi-Attach error for volume \"pvc-cdb57246-3a07-4f89-9b29-16cec3710393\" Volume is already exclusively attached to one node and can't be attached to another"
I1123 17:35:07.891923       1 reconciler.go:221] attacherDetacher.DetachVolume started for volume "pvc-cdb57246-3a07-4f89-9b29-16cec3710393" (UniqueName: "kubernetes.io/csi/disk.csi.azure.com^/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-lfkkxkks/providers/Microsoft.Compute/disks/pvc-cdb57246-3a07-4f89-9b29-16cec3710393") on node "2207k8s001" 
I1123 17:35:07.898728       1 operation_generator.go:1620] Verified volume is safe to detach for volume "pvc-cdb57246-3a07-4f89-9b29-16cec3710393" (UniqueName: "kubernetes.io/csi/disk.csi.azure.com^/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-lfkkxkks/providers/Microsoft.Compute/disks/pvc-cdb57246-3a07-4f89-9b29-16cec3710393") on node "2207k8s001" 
I1123 17:35:24.035526       1 operation_generator.go:527] DetachVolume.Detach succeeded for volume "pvc-cdb57246-3a07-4f89-9b29-16cec3710393" (UniqueName: "kubernetes.io/csi/disk.csi.azure.com^/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-lfkkxkks/providers/Microsoft.Compute/disks/pvc-cdb57246-3a07-4f89-9b29-16cec3710393") on node "2207k8s001" 
I1123 17:35:24.110160       1 reconciler.go:304] attacherDetacher.AttachVolume started for volume "pvc-cdb57246-3a07-4f89-9b29-16cec3710393" (UniqueName: "kubernetes.io/csi/disk.csi.azure.com^/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-lfkkxkks/providers/Microsoft.Compute/disks/pvc-cdb57246-3a07-4f89-9b29-16cec3710393") from node "2207k8s000" 
I1123 17:35:40.331677       1 operation_generator.go:412] AttachVolume.Attach succeeded for volume "pvc-cdb57246-3a07-4f89-9b29-16cec3710393" (UniqueName: "kubernetes.io/csi/disk.csi.azure.com^/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-lfkkxkks/providers/Microsoft.Compute/disks/pvc-cdb57246-3a07-4f89-9b29-16cec3710393") from node "2207k8s000" 
I1123 17:35:40.331998       1 event.go:294] "Event occurred" object="azuredisk-1351/azuredisk-volume-tester-mvwk8-0" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"pvc-cdb57246-3a07-4f89-9b29-16cec3710393\" "
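The Multi-Attach warning above is expected for a ReadWriteOnce Azure disk: the StatefulSet pod was recreated on node 2207k8s000 while the disk was still attached to 2207k8s001, so the new attach had to wait for the detach to complete. The pattern that produces this is a StatefulSet with a volumeClaimTemplate; a hedged sketch follows, with names, image, command, and size as placeholders rather than the suite's actual values:

package example

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// CreateStatefulSetWithDisk sketches a one-replica StatefulSet whose pod mounts
// a per-pod disk via a volumeClaimTemplate; all names here are assumptions.
func CreateStatefulSetWithDisk(cs kubernetes.Interface, namespace, scName string) error {
	replicas := int32(1)
	labels := map[string]string{"app": "volume-tester"}

	sts := &appsv1.StatefulSet{
		ObjectMeta: metav1.ObjectMeta{Name: "volume-tester", Namespace: namespace},
		Spec: appsv1.StatefulSetSpec{
			Replicas:    &replicas,
			ServiceName: "volume-tester",
			Selector:    &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:    "writer",
						Image:   "busybox",
						Command: []string{"sh", "-c", "echo hello world > /mnt/disk/out.txt && sleep 3600"},
						VolumeMounts: []corev1.VolumeMount{{
							Name:      "data",
							MountPath: "/mnt/disk",
						}},
					}},
				},
			},
			// One ReadWriteOnce claim per pod; when the pod moves nodes the disk
			// must detach from the old node before it can attach to the new one,
			// which is the Multi-Attach wait seen in this log.
			VolumeClaimTemplates: []corev1.PersistentVolumeClaim{{
				ObjectMeta: metav1.ObjectMeta{Name: "data"},
				Spec: corev1.PersistentVolumeClaimSpec{
					AccessModes:      []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
					StorageClassName: &scName,
					Resources: corev1.ResourceRequirements{
						Requests: corev1.ResourceList{
							corev1.ResourceStorage: resource.MustParse("10Gi"),
						},
					},
				},
			}},
		},
	}

	_, err := cs.AppsV1().StatefulSets(namespace).Create(context.TODO(), sts, metav1.CreateOptions{})
	return err
}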
I1123 17:36:42.916809       1 garbagecollector.go:468] "Processing object" object="azuredisk-1351/azuredisk-volume-tester-mvwk8-68c687c98f" objectUID=7a3ca2a6-115e-4d66-9667-71f7b654e31e kind="ControllerRevision" virtual=false
I1123 17:36:42.917151       1 stateful_set.go:443] StatefulSet has been deleted azuredisk-1351/azuredisk-volume-tester-mvwk8
I1123 17:36:42.917215       1 garbagecollector.go:468] "Processing object" object="azuredisk-1351/azuredisk-volume-tester-mvwk8-0" objectUID=cc0d1ca3-e6b2-4dd5-a4d3-3553f348dd81 kind="Pod" virtual=false
I1123 17:36:42.944847       1 garbagecollector.go:580] "Deleting object" object="azuredisk-1351/azuredisk-volume-tester-mvwk8-0" objectUID=cc0d1ca3-e6b2-4dd5-a4d3-3553f348dd81 kind="Pod" propagationPolicy=Background
I1123 17:36:42.945616       1 garbagecollector.go:580] "Deleting object" object="azuredisk-1351/azuredisk-volume-tester-mvwk8-68c687c98f" objectUID=7a3ca2a6-115e-4d66-9667-71f7b654e31e kind="ControllerRevision" propagationPolicy=Background
E1123 17:36:48.224771       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-1351/default: secrets "default-token-w8x5w" is forbidden: unable to create new content in namespace azuredisk-1351 because it is being terminated
I1123 17:36:48.459610       1 pvc_protection_controller.go:269] "PVC is unused" PVC="azuredisk-1351/pvc-azuredisk-volume-tester-mvwk8-0"
I1123 17:36:48.480610       1 pv_controller.go:648] volume "pvc-cdb57246-3a07-4f89-9b29-16cec3710393" is released and reclaim policy "Delete" will be executed
I1123 17:36:48.492708       1 pv_controller.go:887] volume "pvc-cdb57246-3a07-4f89-9b29-16cec3710393" entered phase "Released"
E1123 17:36:49.379553       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-5285/default: secrets "default-token-lnk29" is forbidden: unable to create new content in namespace azuredisk-5285 because it is being terminated
2021/11/23 17:36:50 ===================================================

JUnit report was created: /logs/artifacts/junit_01.xml

Ran 9 of 53 Specs in 1552.797 seconds
SUCCESS! -- 9 Passed | 0 Failed | 0 Pending | 44 Skipped

You're using deprecated Ginkgo functionality:
=============================================
Ginkgo 2.0 is under active development and will introduce several new features, improvements, and a small handful of breaking changes.
A release candidate for 2.0 is now available and 2.0 should GA in Fall 2021.  Please give the RC a try and send us feedback!
  - To learn more, view the migration guide at https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md
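The notice above comes from the Ginkgo 1.x release vendored by this e2e suite; moving to 2.0 is mostly a matter of switching to the /v2 import path and regenerating the suite bootstrap, per the linked migration guide. A minimal sketch only — not this suite's actual bootstrap file:

package example_test

import (
	"testing"

	. "github.com/onsi/ginkgo/v2" // Ginkgo 2.0 moves to the /v2 import path
	. "github.com/onsi/gomega"
)

func TestExample(t *testing.T) {
	RegisterFailHandler(Fail)
	RunSpecs(t, "Example Suite")
}

var _ = Describe("placeholder spec", func() {
	It("still uses the familiar DSL", func() {
		Expect(1 + 1).To(Equal(2))
	})
})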
... skipping 125 lines ...
     

   Flags

     -d

	   Detailed option, when used with list, dumps the json of the object

Write-Progress : Win32 internal error "Access is denied" 0x5 occurred while reading the console output buffer. Contact Microsoft Customer Support Services.
At C:\windows\system32\WindowsPowerShell\v1.0\Modules\Microsoft.PowerShell.Archive\Microsoft.PowerShell.Archive.psm1:1132 char:9
+         Write-Progress -Activity $cmdletName -Status $status -Percent ...
+         ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
... skipping 19 lines ...