PR andyzhangx: feat: turn on CSIMigrationAzureFile by default on 1.24
Result ABORTED
Tests 0 failed / 22 succeeded
Started 2021-11-17 02:35
Elapsed 1h4m
Revision 6b42f4ca40261baebff87a30e3c9a11db8562ec1
Refs 105070

No Test Failures!


22 Passed Tests

44 Skipped Tests

Error lines from build-log.txt

... skipping 257 lines ...

    test case is only available for CSI drivers

    /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/suite_test.go:264
------------------------------
Pre-Provisioned [single-az] 
  should fail when maxShares is invalid [disk.csi.azure.com][windows]
  /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/pre_provisioning_test.go:160
STEP: Creating a kubernetes client
Nov 17 03:06:29.254: INFO: >>> kubeConfig: /root/tmp1002526589/kubeconfig/kubeconfig.canadacentral.json
STEP: Building a namespace api object, basename azuredisk
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in azuredisk-3274
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 3 lines ...

S [SKIPPING] [0.432 seconds]
Pre-Provisioned
/home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/pre_provisioning_test.go:37
  [single-az]
  /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/pre_provisioning_test.go:69
    should fail when maxShares is invalid [disk.csi.azure.com][windows] [It]
    /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/pre_provisioning_test.go:160

    test case is only available for CSI drivers

    /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/suite_test.go:264
------------------------------
... skipping 86 lines ...
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
Nov 17 03:06:32.099: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-vsgqs" in namespace "azuredisk-5356" to be "Succeeded or Failed"
Nov 17 03:06:32.144: INFO: Pod "azuredisk-volume-tester-vsgqs": Phase="Pending", Reason="", readiness=false. Elapsed: 44.225209ms
Nov 17 03:06:34.178: INFO: Pod "azuredisk-volume-tester-vsgqs": Phase="Pending", Reason="", readiness=false. Elapsed: 2.078664986s
Nov 17 03:06:36.212: INFO: Pod "azuredisk-volume-tester-vsgqs": Phase="Pending", Reason="", readiness=false. Elapsed: 4.112330028s
Nov 17 03:06:38.247: INFO: Pod "azuredisk-volume-tester-vsgqs": Phase="Pending", Reason="", readiness=false. Elapsed: 6.14736969s
Nov 17 03:06:40.281: INFO: Pod "azuredisk-volume-tester-vsgqs": Phase="Pending", Reason="", readiness=false. Elapsed: 8.18198954s
Nov 17 03:06:42.316: INFO: Pod "azuredisk-volume-tester-vsgqs": Phase="Pending", Reason="", readiness=false. Elapsed: 10.216894823s
... skipping 31 lines ...
Nov 17 03:07:47.432: INFO: Pod "azuredisk-volume-tester-vsgqs": Phase="Running", Reason="", readiness=true. Elapsed: 1m15.332187083s
Nov 17 03:07:49.466: INFO: Pod "azuredisk-volume-tester-vsgqs": Phase="Running", Reason="", readiness=true. Elapsed: 1m17.366642366s
Nov 17 03:07:51.502: INFO: Pod "azuredisk-volume-tester-vsgqs": Phase="Running", Reason="", readiness=true. Elapsed: 1m19.402298927s
Nov 17 03:07:53.537: INFO: Pod "azuredisk-volume-tester-vsgqs": Phase="Running", Reason="", readiness=true. Elapsed: 1m21.437456409s
Nov 17 03:07:55.571: INFO: Pod "azuredisk-volume-tester-vsgqs": Phase="Succeeded", Reason="", readiness=false. Elapsed: 1m23.47123083s
STEP: Saw pod success
Nov 17 03:07:55.571: INFO: Pod "azuredisk-volume-tester-vsgqs" satisfied condition "Succeeded or Failed"
Nov 17 03:07:55.571: INFO: deleting Pod "azuredisk-5356"/"azuredisk-volume-tester-vsgqs"
Nov 17 03:07:55.657: INFO: Pod azuredisk-volume-tester-vsgqs has the following logs: hello world

STEP: Deleting pod azuredisk-volume-tester-vsgqs in namespace azuredisk-5356
STEP: validating provisioned PV
STEP: checking the PV
... skipping 162 lines ...
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod's command exits with an error
Nov 17 03:08:41.921: INFO: Waiting up to 10m0s for pod "azuredisk-volume-tester-drpht" in namespace "azuredisk-2451" to be "Error status code"
Nov 17 03:08:41.962: INFO: Pod "azuredisk-volume-tester-drpht": Phase="Pending", Reason="", readiness=false. Elapsed: 40.660251ms
Nov 17 03:08:43.996: INFO: Pod "azuredisk-volume-tester-drpht": Phase="Pending", Reason="", readiness=false. Elapsed: 2.074839314s
Nov 17 03:08:46.032: INFO: Pod "azuredisk-volume-tester-drpht": Phase="Pending", Reason="", readiness=false. Elapsed: 4.110871052s
Nov 17 03:08:48.065: INFO: Pod "azuredisk-volume-tester-drpht": Phase="Pending", Reason="", readiness=false. Elapsed: 6.144370128s
Nov 17 03:08:50.100: INFO: Pod "azuredisk-volume-tester-drpht": Phase="Pending", Reason="", readiness=false. Elapsed: 8.17861714s
Nov 17 03:08:52.133: INFO: Pod "azuredisk-volume-tester-drpht": Phase="Pending", Reason="", readiness=false. Elapsed: 10.212371715s
... skipping 87 lines ...
Nov 17 03:11:51.227: INFO: Pod "azuredisk-volume-tester-drpht": Phase="Running", Reason="", readiness=true. Elapsed: 3m9.306477765s
Nov 17 03:11:53.262: INFO: Pod "azuredisk-volume-tester-drpht": Phase="Running", Reason="", readiness=true. Elapsed: 3m11.341054194s
Nov 17 03:11:55.297: INFO: Pod "azuredisk-volume-tester-drpht": Phase="Running", Reason="", readiness=true. Elapsed: 3m13.375672986s
Nov 17 03:11:57.331: INFO: Pod "azuredisk-volume-tester-drpht": Phase="Running", Reason="", readiness=true. Elapsed: 3m15.410455641s
Nov 17 03:11:59.366: INFO: Pod "azuredisk-volume-tester-drpht": Phase="Running", Reason="", readiness=true. Elapsed: 3m17.444824904s
Nov 17 03:12:01.400: INFO: Pod "azuredisk-volume-tester-drpht": Phase="Running", Reason="", readiness=true. Elapsed: 3m19.479465962s
Nov 17 03:12:03.443: INFO: Pod "azuredisk-volume-tester-drpht": Phase="Failed", Reason="", readiness=false. Elapsed: 3m21.522430593s
STEP: Saw pod failure
Nov 17 03:12:03.443: INFO: Pod "azuredisk-volume-tester-drpht" satisfied condition "Error status code"
STEP: checking that pod logs contain expected message
Nov 17 03:12:03.522: INFO: deleting Pod "azuredisk-2451"/"azuredisk-volume-tester-drpht"
Nov 17 03:12:03.590: INFO: Pod azuredisk-volume-tester-drpht has the following logs: out-file : Access to the path 'C:\mnt\test-1\data' is denied.
At line:1 char:1
+ echo $null >> C:\mnt\test-1\data
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
... skipping 419 lines ...
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
Nov 17 03:21:53.386: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-ch696" in namespace "azuredisk-552" to be "Succeeded or Failed"
Nov 17 03:21:53.426: INFO: Pod "azuredisk-volume-tester-ch696": Phase="Pending", Reason="", readiness=false. Elapsed: 40.693659ms
Nov 17 03:21:55.461: INFO: Pod "azuredisk-volume-tester-ch696": Phase="Pending", Reason="", readiness=false. Elapsed: 2.075374604s
Nov 17 03:21:57.496: INFO: Pod "azuredisk-volume-tester-ch696": Phase="Pending", Reason="", readiness=false. Elapsed: 4.110230187s
Nov 17 03:21:59.539: INFO: Pod "azuredisk-volume-tester-ch696": Phase="Pending", Reason="", readiness=false. Elapsed: 6.153271785s
Nov 17 03:22:01.573: INFO: Pod "azuredisk-volume-tester-ch696": Phase="Pending", Reason="", readiness=false. Elapsed: 8.187570387s
Nov 17 03:22:03.607: INFO: Pod "azuredisk-volume-tester-ch696": Phase="Pending", Reason="", readiness=false. Elapsed: 10.221523372s
... skipping 30 lines ...
Nov 17 03:23:06.696: INFO: Pod "azuredisk-volume-tester-ch696": Phase="Pending", Reason="", readiness=false. Elapsed: 1m13.310673765s
Nov 17 03:23:08.731: INFO: Pod "azuredisk-volume-tester-ch696": Phase="Pending", Reason="", readiness=false. Elapsed: 1m15.345735955s
Nov 17 03:23:10.766: INFO: Pod "azuredisk-volume-tester-ch696": Phase="Pending", Reason="", readiness=false. Elapsed: 1m17.379955287s
Nov 17 03:23:12.800: INFO: Pod "azuredisk-volume-tester-ch696": Phase="Running", Reason="", readiness=true. Elapsed: 1m19.414654824s
Nov 17 03:23:14.835: INFO: Pod "azuredisk-volume-tester-ch696": Phase="Succeeded", Reason="", readiness=false. Elapsed: 1m21.449725247s
STEP: Saw pod success
Nov 17 03:23:14.835: INFO: Pod "azuredisk-volume-tester-ch696" satisfied condition "Succeeded or Failed"
Nov 17 03:23:14.835: INFO: deleting Pod "azuredisk-552"/"azuredisk-volume-tester-ch696"
Nov 17 03:23:14.911: INFO: Pod azuredisk-volume-tester-ch696 has the following logs: hello world
hello world
hello world

STEP: Deleting pod azuredisk-volume-tester-ch696 in namespace azuredisk-552
... skipping 171 lines ...
STEP: creating a PVC
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
Nov 17 03:24:50.949: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-shnjq" in namespace "azuredisk-9267" to be "Succeeded or Failed"
Nov 17 03:24:50.984: INFO: Pod "azuredisk-volume-tester-shnjq": Phase="Pending", Reason="", readiness=false. Elapsed: 35.497092ms
Nov 17 03:24:53.019: INFO: Pod "azuredisk-volume-tester-shnjq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.070514059s
Nov 17 03:24:55.056: INFO: Pod "azuredisk-volume-tester-shnjq": Phase="Pending", Reason="", readiness=false. Elapsed: 4.106751961s
Nov 17 03:24:57.090: INFO: Pod "azuredisk-volume-tester-shnjq": Phase="Pending", Reason="", readiness=false. Elapsed: 6.140933938s
Nov 17 03:24:59.125: INFO: Pod "azuredisk-volume-tester-shnjq": Phase="Pending", Reason="", readiness=false. Elapsed: 8.176337066s
Nov 17 03:25:01.160: INFO: Pod "azuredisk-volume-tester-shnjq": Phase="Pending", Reason="", readiness=false. Elapsed: 10.210658283s
... skipping 30 lines ...
Nov 17 03:26:04.244: INFO: Pod "azuredisk-volume-tester-shnjq": Phase="Pending", Reason="", readiness=false. Elapsed: 1m13.295084941s
Nov 17 03:26:06.278: INFO: Pod "azuredisk-volume-tester-shnjq": Phase="Pending", Reason="", readiness=false. Elapsed: 1m15.329542836s
Nov 17 03:26:08.313: INFO: Pod "azuredisk-volume-tester-shnjq": Phase="Pending", Reason="", readiness=false. Elapsed: 1m17.364032374s
Nov 17 03:26:10.347: INFO: Pod "azuredisk-volume-tester-shnjq": Phase="Pending", Reason="", readiness=false. Elapsed: 1m19.398032737s
Nov 17 03:26:12.382: INFO: Pod "azuredisk-volume-tester-shnjq": Phase="Succeeded", Reason="", readiness=false. Elapsed: 1m21.433081857s
STEP: Saw pod success
Nov 17 03:26:12.382: INFO: Pod "azuredisk-volume-tester-shnjq" satisfied condition "Succeeded or Failed"
Nov 17 03:26:12.382: INFO: deleting Pod "azuredisk-9267"/"azuredisk-volume-tester-shnjq"
Nov 17 03:26:12.459: INFO: Pod azuredisk-volume-tester-shnjq has the following logs: hello world

STEP: Deleting pod azuredisk-volume-tester-shnjq in namespace azuredisk-9267
STEP: validating provisioned PV
STEP: checking the PV
... skipping 521 lines ...
I1117 03:02:08.172549       1 azure_securitygroupclient.go:64] Azure SecurityGroupsClient (read ops) using rate limit config: QPS=100, bucket=1000
I1117 03:02:08.172559       1 azure_securitygroupclient.go:67] Azure SecurityGroupsClient (write ops) using rate limit config: QPS=100, bucket=1000
I1117 03:02:08.172571       1 azure_publicipclient.go:64] Azure PublicIPAddressesClient (read ops) using rate limit config: QPS=100, bucket=1000
I1117 03:02:08.172578       1 azure_publicipclient.go:67] Azure PublicIPAddressesClient (write ops) using rate limit config: QPS=100, bucket=1000
I1117 03:02:08.172637       1 azure.go:742] Setting up informers for Azure cloud provider
I1117 03:02:08.173889       1 shared_informer.go:240] Waiting for caches to sync for tokens
W1117 03:02:08.239404       1 azure_config.go:53] Failed to get cloud-config from secret: failed to get secret azure-cloud-provider: secrets "azure-cloud-provider" is forbidden: User "system:serviceaccount:kube-system:azure-cloud-provider" cannot get resource "secrets" in API group "" in the namespace "kube-system", skip initializing from secret
I1117 03:02:08.239430       1 controllermanager.go:576] Starting "garbagecollector"
I1117 03:02:08.261438       1 garbagecollector.go:146] Starting garbage collector controller
I1117 03:02:08.261458       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
I1117 03:02:08.261483       1 controllermanager.go:605] Started "garbagecollector"
I1117 03:02:08.261496       1 controllermanager.go:576] Starting "job"
I1117 03:02:08.261486       1 graph_builder.go:289] GraphBuilder running
... skipping 92 lines ...
I1117 03:02:09.802778       1 plugins.go:641] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
I1117 03:02:09.802809       1 plugins.go:641] "Loaded volume plugin" pluginName="kubernetes.io/csi"
I1117 03:02:09.802952       1 controllermanager.go:605] Started "attachdetach"
I1117 03:02:09.802967       1 controllermanager.go:576] Starting "daemonset"
I1117 03:02:09.803014       1 attach_detach_controller.go:328] Starting attach detach controller
I1117 03:02:09.803023       1 shared_informer.go:240] Waiting for caches to sync for attach detach
W1117 03:02:09.803804       1 actual_state_of_world.go:534] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="k8s-master-19631844-0" does not exist
I1117 03:02:09.929913       1 controllermanager.go:605] Started "daemonset"
I1117 03:02:09.929938       1 controllermanager.go:576] Starting "csrsigning"
I1117 03:02:09.929980       1 daemon_controller.go:284] Starting daemon sets controller
I1117 03:02:09.929988       1 shared_informer.go:240] Waiting for caches to sync for daemon sets
I1117 03:02:10.082439       1 dynamic_serving_content.go:112] "Loaded a new cert/key pair" name="csr-controller::/etc/kubernetes/certs/ca.crt::/etc/kubernetes/certs/ca.key"
I1117 03:02:10.094023       1 certificate_controller.go:118] Starting certificate controller "csrsigning-kubelet-serving"
... skipping 177 lines ...
I1117 03:02:52.827395       1 node_lifecycle_controller.go:1397] Initializing eviction metric for zone: canadacentral::0
I1117 03:02:57.828124       1 node_lifecycle_controller.go:1397] Initializing eviction metric for zone: canadacentral::0
I1117 03:03:01.790218       1 replica_set.go:563] "Too few replicas" replicaSet="kube-system/coredns-6886cb4867" need=1 creating=1
I1117 03:03:01.790869       1 replica_set.go:563] "Too few replicas" replicaSet="kube-system/coredns-autoscaler-77f47bb58d" need=1 creating=1
I1117 03:03:01.793336       1 event.go:294] "Event occurred" object="kube-system/coredns-autoscaler" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-autoscaler-77f47bb58d to 1"
I1117 03:03:01.793364       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6886cb4867 to 1"
I1117 03:03:01.840355       1 deployment_controller.go:490] "Error syncing deployment" deployment="kube-system/coredns" err="Operation cannot be fulfilled on deployments.apps \"coredns\": the object has been modified; please apply your changes to the latest version and try again"
I1117 03:03:01.841621       1 deployment_controller.go:490] "Error syncing deployment" deployment="kube-system/coredns-autoscaler" err="Operation cannot be fulfilled on deployments.apps \"coredns-autoscaler\": the object has been modified; please apply your changes to the latest version and try again"
I1117 03:03:01.911206       1 deployment_controller.go:490] "Error syncing deployment" deployment="kube-system/coredns" err="Operation cannot be fulfilled on deployments.apps \"coredns\": the object has been modified; please apply your changes to the latest version and try again"
I1117 03:03:01.930351       1 event.go:294] "Event occurred" object="kube-system/coredns-autoscaler-77f47bb58d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-autoscaler-77f47bb58d-wmps8"
I1117 03:03:01.987314       1 event.go:294] "Event occurred" object="kube-system/coredns-6886cb4867" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6886cb4867-42w54"
I1117 03:03:02.829242       1 node_lifecycle_controller.go:1397] Initializing eviction metric for zone: canadacentral::0
I1117 03:03:06.264810       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-2rspj"
I1117 03:03:06.334054       1 event.go:294] "Event occurred" object="kube-system/azure-ip-masq-agent" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: azure-ip-masq-agent-c8mkz"
I1117 03:03:07.829996       1 node_lifecycle_controller.go:1397] Initializing eviction metric for zone: canadacentral::0
I1117 03:03:11.639487       1 replica_set.go:563] "Too few replicas" replicaSet="kube-system/csi-azuredisk-controller-587dbdf9c4" need=1 creating=1
I1117 03:03:11.640820       1 event.go:294] "Event occurred" object="kube-system/csi-azuredisk-controller" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set csi-azuredisk-controller-587dbdf9c4 to 1"
I1117 03:03:11.670321       1 event.go:294] "Event occurred" object="kube-system/csi-azuredisk-controller-587dbdf9c4" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: csi-azuredisk-controller-587dbdf9c4-ph4dl"
I1117 03:03:11.715272       1 deployment_controller.go:490] "Error syncing deployment" deployment="kube-system/csi-azuredisk-controller" err="Operation cannot be fulfilled on deployments.apps \"csi-azuredisk-controller\": the object has been modified; please apply your changes to the latest version and try again"
I1117 03:03:12.177148       1 replica_set.go:563] "Too few replicas" replicaSet="kube-system/csi-azurefile-controller-7749fb4bbc" need=1 creating=1
I1117 03:03:12.190485       1 event.go:294] "Event occurred" object="kube-system/csi-azurefile-controller" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set csi-azurefile-controller-7749fb4bbc to 1"
I1117 03:03:12.198081       1 event.go:294] "Event occurred" object="kube-system/csi-azurefile-controller-7749fb4bbc" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: csi-azurefile-controller-7749fb4bbc-22bhr"
I1117 03:03:12.279675       1 deployment_controller.go:490] "Error syncing deployment" deployment="kube-system/csi-azurefile-controller" err="Operation cannot be fulfilled on deployments.apps \"csi-azurefile-controller\": the object has been modified; please apply your changes to the latest version and try again"
I1117 03:03:12.691875       1 event.go:294] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-79f4ccbf89 to 1"
I1117 03:03:12.692312       1 replica_set.go:563] "Too few replicas" replicaSet="kube-system/metrics-server-79f4ccbf89" need=1 creating=1
I1117 03:03:12.721822       1 event.go:294] "Event occurred" object="kube-system/metrics-server-79f4ccbf89" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-79f4ccbf89-m7wnb"
I1117 03:03:12.740233       1 deployment_controller.go:490] "Error syncing deployment" deployment="kube-system/metrics-server" err="Operation cannot be fulfilled on deployments.apps \"metrics-server\": the object has been modified; please apply your changes to the latest version and try again"
I1117 03:03:12.831213       1 node_lifecycle_controller.go:1397] Initializing eviction metric for zone: canadacentral::0
E1117 03:03:12.878172       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
W1117 03:03:13.412143       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
I1117 03:03:17.831471       1 node_lifecycle_controller.go:1397] Initializing eviction metric for zone: canadacentral::0
I1117 03:03:22.831915       1 node_lifecycle_controller.go:1397] Initializing eviction metric for zone: canadacentral::0
I1117 03:03:27.832292       1 node_lifecycle_controller.go:1397] Initializing eviction metric for zone: canadacentral::0
I1117 03:03:32.832844       1 node_lifecycle_controller.go:1397] Initializing eviction metric for zone: canadacentral::0
I1117 03:03:37.833845       1 node_lifecycle_controller.go:1397] Initializing eviction metric for zone: canadacentral::0
I1117 03:03:42.835070       1 node_lifecycle_controller.go:1397] Initializing eviction metric for zone: canadacentral::0
E1117 03:03:42.923447       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
W1117 03:03:43.444125       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
I1117 03:03:47.835912       1 node_lifecycle_controller.go:1397] Initializing eviction metric for zone: canadacentral::0
I1117 03:03:52.836748       1 node_lifecycle_controller.go:1397] Initializing eviction metric for zone: canadacentral::0
I1117 03:03:57.837958       1 node_lifecycle_controller.go:1397] Initializing eviction metric for zone: canadacentral::0
W1117 03:04:00.922230       1 actual_state_of_world.go:534] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="1963k8s000" does not exist
I1117 03:04:00.937953       1 ttl_controller.go:276] "Changed ttl annotation" node="1963k8s000" new_ttl="0s"
I1117 03:04:02.842580       1 node_lifecycle_controller.go:1397] Initializing eviction metric for zone: canadacentral::0
I1117 03:04:02.842614       1 node_lifecycle_controller.go:769] Controller observed a new Node: "1963k8s000"
I1117 03:04:02.842633       1 controller_utils.go:172] "Recording event message for node" event="Registered Node 1963k8s000 in Controller" node="1963k8s000"
I1117 03:04:02.842959       1 node_lifecycle_controller.go:1397] Initializing eviction metric for zone: canadacentral::1
I1117 03:04:02.843343       1 event.go:294] "Event occurred" object="1963k8s000" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node 1963k8s000 event: Registered Node 1963k8s000 in Controller"
... skipping 17 lines ...
I1117 03:04:27.848396       1 node_lifecycle_controller.go:1397] Initializing eviction metric for zone: canadacentral::0
I1117 03:04:32.849062       1 node_lifecycle_controller.go:1397] Initializing eviction metric for zone: canadacentral::0
I1117 03:04:32.849225       1 node_lifecycle_controller.go:1213] Controller detected that zone canadacentral::0 is now in state .
I1117 03:04:42.851297       1 node_lifecycle_controller.go:1397] Initializing eviction metric for zone: canadacentral::0
I1117 03:04:47.851705       1 node_lifecycle_controller.go:1397] Initializing eviction metric for zone: canadacentral::0
I1117 03:04:52.852605       1 node_lifecycle_controller.go:1397] Initializing eviction metric for zone: canadacentral::0
W1117 03:04:55.411400       1 actual_state_of_world.go:534] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="1963k8s001" does not exist
I1117 03:04:55.426977       1 ttl_controller.go:276] "Changed ttl annotation" node="1963k8s001" new_ttl="0s"
I1117 03:04:57.853153       1 node_lifecycle_controller.go:1397] Initializing eviction metric for zone: canadacentral::0
I1117 03:04:57.853186       1 node_lifecycle_controller.go:769] Controller observed a new Node: "1963k8s001"
I1117 03:04:57.853206       1 controller_utils.go:172] "Recording event message for node" event="Registered Node 1963k8s001 in Controller" node="1963k8s001"
W1117 03:04:57.853362       1 node_lifecycle_controller.go:1012] Missing timestamp for Node 1963k8s001. Assuming now as a timestamp.
I1117 03:04:57.853397       1 node_lifecycle_controller.go:868] Node 1963k8s001 is NotReady as of 2021-11-17 03:04:57.853388219 +0000 UTC m=+170.654738403. Adding it to the Taint queue.
... skipping 7 lines ...
I1117 03:05:05.828229       1 controller_utils.go:122] "Update ready status of pods on node" node="1963k8s001"
I1117 03:05:05.830070       1 controller_utils.go:122] "Update ready status of pods on node" node="1963k8s001"
I1117 03:05:07.869760       1 node_lifecycle_controller.go:892] Node 1963k8s001 is healthy again, removing all taints
I1117 03:05:07.870873       1 node_lifecycle_controller.go:1213] Controller detected that zone canadacentral::0 is now in state Normal.
I1117 03:06:32.051275       1 event.go:294] "Event occurred" object="azuredisk-5356/pvc-9hmkf" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
I1117 03:06:32.099507       1 event.go:294] "Event occurred" object="azuredisk-5356/pvc-9hmkf" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"disk.csi.azure.com\" or manually created by system administrator"
E1117 03:06:33.925223       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-1318/default: secrets "default-token-j7r9r" is forbidden: unable to create new content in namespace azuredisk-1318 because it is being terminated
E1117 03:06:34.297813       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-694/default: secrets "default-token-km2q8" is forbidden: unable to create new content in namespace azuredisk-694 because it is being terminated
E1117 03:06:34.731357       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-3274/default: secrets "default-token-dls28" is forbidden: unable to create new content in namespace azuredisk-3274 because it is being terminated
E1117 03:06:35.981126       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-5541/default: secrets "default-token-frzdm" is forbidden: unable to create new content in namespace azuredisk-5541 because it is being terminated
I1117 03:06:38.628155       1 namespace_controller.go:185] Namespace has been deleted azuredisk-8081
I1117 03:06:39.006423       1 namespace_controller.go:185] Namespace has been deleted azuredisk-1318
I1117 03:06:39.403612       1 namespace_controller.go:185] Namespace has been deleted azuredisk-694
I1117 03:06:39.847527       1 namespace_controller.go:185] Namespace has been deleted azuredisk-3274
I1117 03:06:40.265565       1 namespace_controller.go:185] Namespace has been deleted azuredisk-495
I1117 03:06:40.687019       1 namespace_controller.go:185] Namespace has been deleted azuredisk-9947
... skipping 8 lines ...
I1117 03:07:55.849295       1 pvc_protection_controller.go:269] "PVC is unused" PVC="azuredisk-5356/pvc-9hmkf"
I1117 03:07:55.860134       1 pv_controller.go:648] volume "pvc-953c65b6-c42d-4a17-9c2e-a7892815b6c5" is released and reclaim policy "Delete" will be executed
I1117 03:07:55.870998       1 pv_controller.go:887] volume "pvc-953c65b6-c42d-4a17-9c2e-a7892815b6c5" entered phase "Released"
I1117 03:08:07.031636       1 reconciler.go:221] attacherDetacher.DetachVolume started for volume "pvc-953c65b6-c42d-4a17-9c2e-a7892815b6c5" (UniqueName: "kubernetes.io/csi/disk.csi.azure.com^/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-r1vxcgbl/providers/Microsoft.Compute/disks/pvc-953c65b6-c42d-4a17-9c2e-a7892815b6c5") on node "1963k8s000" 
I1117 03:08:07.063206       1 operation_generator.go:1620] Verified volume is safe to detach for volume "pvc-953c65b6-c42d-4a17-9c2e-a7892815b6c5" (UniqueName: "kubernetes.io/csi/disk.csi.azure.com^/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-r1vxcgbl/providers/Microsoft.Compute/disks/pvc-953c65b6-c42d-4a17-9c2e-a7892815b6c5") on node "1963k8s000" 
I1117 03:08:23.122073       1 operation_generator.go:527] DetachVolume.Detach succeeded for volume "pvc-953c65b6-c42d-4a17-9c2e-a7892815b6c5" (UniqueName: "kubernetes.io/csi/disk.csi.azure.com^/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-r1vxcgbl/providers/Microsoft.Compute/disks/pvc-953c65b6-c42d-4a17-9c2e-a7892815b6c5") on node "1963k8s000" 
E1117 03:08:32.261906       1 pv_protection_controller.go:114] PV pvc-953c65b6-c42d-4a17-9c2e-a7892815b6c5 failed with : Operation cannot be fulfilled on persistentvolumes "pvc-953c65b6-c42d-4a17-9c2e-a7892815b6c5": the object has been modified; please apply your changes to the latest version and try again
I1117 03:08:32.269195       1 pv_controller_base.go:533] deletion of claim "azuredisk-5356/pvc-9hmkf" was already processed
E1117 03:08:32.270980       1 pv_protection_controller.go:114] PV pvc-953c65b6-c42d-4a17-9c2e-a7892815b6c5 failed with : Operation cannot be fulfilled on persistentvolumes "pvc-953c65b6-c42d-4a17-9c2e-a7892815b6c5": StorageError: invalid object, Code: 4, Key: /registry/persistentvolumes/pvc-953c65b6-c42d-4a17-9c2e-a7892815b6c5, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 91a023c0-e772-45cf-91f4-91e8d5dd2527, UID in object meta: 
I1117 03:08:41.862364       1 event.go:294] "Event occurred" object="azuredisk-2451/pvc-8zjmg" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
I1117 03:08:41.913212       1 event.go:294] "Event occurred" object="azuredisk-2451/pvc-8zjmg" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"disk.csi.azure.com\" or manually created by system administrator"
I1117 03:08:41.913375       1 event.go:294] "Event occurred" object="azuredisk-2451/pvc-8zjmg" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"disk.csi.azure.com\" or manually created by system administrator"
E1117 03:08:42.545793       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-6413/default: secrets "default-token-xkmjz" is forbidden: unable to create new content in namespace azuredisk-6413 because it is being terminated
I1117 03:08:42.769023       1 event.go:294] "Event occurred" object="azuredisk-2451/pvc-8zjmg" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"disk.csi.azure.com\" or manually created by system administrator"
E1117 03:08:43.703680       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-4147/default: secrets "default-token-gcb7k" is forbidden: unable to create new content in namespace azuredisk-4147 because it is being terminated
E1117 03:08:45.899136       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-8705/default: secrets "default-token-dnc2z" is forbidden: unable to create new content in namespace azuredisk-8705 because it is being terminated
I1117 03:08:46.554593       1 namespace_controller.go:185] Namespace has been deleted azuredisk-5356
I1117 03:08:47.630100       1 namespace_controller.go:185] Namespace has been deleted azuredisk-6413
I1117 03:08:48.785479       1 namespace_controller.go:185] Namespace has been deleted azuredisk-4147
I1117 03:08:49.910572       1 namespace_controller.go:185] Namespace has been deleted azuredisk-1957
I1117 03:08:51.111383       1 namespace_controller.go:185] Namespace has been deleted azuredisk-8705
I1117 03:08:57.769124       1 event.go:294] "Event occurred" object="azuredisk-2451/pvc-8zjmg" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"disk.csi.azure.com\" or manually created by system administrator"
... skipping 55 lines ...
I1117 03:15:23.131536       1 pvc_protection_controller.go:269] "PVC is unused" PVC="azuredisk-9828/pvc-52zhj"
I1117 03:15:23.153389       1 pv_controller.go:648] volume "pvc-85adceac-b3b9-4aa7-9c0f-de305f11fc95" is released and reclaim policy "Delete" will be executed
I1117 03:15:23.162160       1 pv_controller.go:887] volume "pvc-85adceac-b3b9-4aa7-9c0f-de305f11fc95" entered phase "Released"
I1117 03:15:25.923662       1 reconciler.go:221] attacherDetacher.DetachVolume started for volume "pvc-85adceac-b3b9-4aa7-9c0f-de305f11fc95" (UniqueName: "kubernetes.io/csi/disk.csi.azure.com^/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-r1vxcgbl/providers/Microsoft.Compute/disks/pvc-85adceac-b3b9-4aa7-9c0f-de305f11fc95") on node "1963k8s000" 
I1117 03:15:25.938282       1 operation_generator.go:1620] Verified volume is safe to detach for volume "pvc-85adceac-b3b9-4aa7-9c0f-de305f11fc95" (UniqueName: "kubernetes.io/csi/disk.csi.azure.com^/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-r1vxcgbl/providers/Microsoft.Compute/disks/pvc-85adceac-b3b9-4aa7-9c0f-de305f11fc95") on node "1963k8s000" 
I1117 03:15:47.530047       1 operation_generator.go:527] DetachVolume.Detach succeeded for volume "pvc-85adceac-b3b9-4aa7-9c0f-de305f11fc95" (UniqueName: "kubernetes.io/csi/disk.csi.azure.com^/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-r1vxcgbl/providers/Microsoft.Compute/disks/pvc-85adceac-b3b9-4aa7-9c0f-de305f11fc95") on node "1963k8s000" 
E1117 03:15:59.383105       1 pv_protection_controller.go:114] PV pvc-85adceac-b3b9-4aa7-9c0f-de305f11fc95 failed with : Operation cannot be fulfilled on persistentvolumes "pvc-85adceac-b3b9-4aa7-9c0f-de305f11fc95": the object has been modified; please apply your changes to the latest version and try again
I1117 03:15:59.394565       1 pv_controller_base.go:533] deletion of claim "azuredisk-9828/pvc-52zhj" was already processed
E1117 03:15:59.396961       1 pv_protection_controller.go:114] PV pvc-85adceac-b3b9-4aa7-9c0f-de305f11fc95 failed with : Operation cannot be fulfilled on persistentvolumes "pvc-85adceac-b3b9-4aa7-9c0f-de305f11fc95": StorageError: invalid object, Code: 4, Key: /registry/persistentvolumes/pvc-85adceac-b3b9-4aa7-9c0f-de305f11fc95, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: f4d59de6-9751-48cf-92c6-c95129fd1fa7, UID in object meta: 
I1117 03:16:03.602603       1 pvc_protection_controller.go:281] "Pod uses PVC" pod="azuredisk-9828/azuredisk-volume-tester-mjgch" PVC="azuredisk-9828/pvc-gzjkc"
I1117 03:16:03.602797       1 pvc_protection_controller.go:174] "Keeping PVC because it is being used" PVC="azuredisk-9828/pvc-gzjkc"
I1117 03:16:05.138625       1 pvc_protection_controller.go:281] "Pod uses PVC" pod="azuredisk-9828/azuredisk-volume-tester-mjgch" PVC="azuredisk-9828/pvc-gzjkc"
I1117 03:16:05.138699       1 pvc_protection_controller.go:174] "Keeping PVC because it is being used" PVC="azuredisk-9828/pvc-gzjkc"
I1117 03:16:07.547563       1 pvc_protection_controller.go:281] "Pod uses PVC" pod="azuredisk-9828/azuredisk-volume-tester-mjgch" PVC="azuredisk-9828/pvc-gzjkc"
I1117 03:16:07.547596       1 pvc_protection_controller.go:174] "Keeping PVC because it is being used" PVC="azuredisk-9828/pvc-gzjkc"
... skipping 15 lines ...
I1117 03:16:48.959590       1 pvc_protection_controller.go:269] "PVC is unused" PVC="azuredisk-9828/pvc-prbpm"
I1117 03:16:48.969616       1 pv_controller.go:648] volume "pvc-30733f02-94ec-4bb9-9b1d-3955c0f7ddf3" is released and reclaim policy "Delete" will be executed
I1117 03:16:48.973909       1 pv_controller.go:887] volume "pvc-30733f02-94ec-4bb9-9b1d-3955c0f7ddf3" entered phase "Released"
I1117 03:16:59.886585       1 reconciler.go:221] attacherDetacher.DetachVolume started for volume "pvc-30733f02-94ec-4bb9-9b1d-3955c0f7ddf3" (UniqueName: "kubernetes.io/csi/disk.csi.azure.com^/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-r1vxcgbl/providers/Microsoft.Compute/disks/pvc-30733f02-94ec-4bb9-9b1d-3955c0f7ddf3") on node "1963k8s001" 
I1117 03:16:59.894459       1 operation_generator.go:1620] Verified volume is safe to detach for volume "pvc-30733f02-94ec-4bb9-9b1d-3955c0f7ddf3" (UniqueName: "kubernetes.io/csi/disk.csi.azure.com^/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-r1vxcgbl/providers/Microsoft.Compute/disks/pvc-30733f02-94ec-4bb9-9b1d-3955c0f7ddf3") on node "1963k8s001" 
I1117 03:17:16.037491       1 operation_generator.go:527] DetachVolume.Detach succeeded for volume "pvc-30733f02-94ec-4bb9-9b1d-3955c0f7ddf3" (UniqueName: "kubernetes.io/csi/disk.csi.azure.com^/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-r1vxcgbl/providers/Microsoft.Compute/disks/pvc-30733f02-94ec-4bb9-9b1d-3955c0f7ddf3") on node "1963k8s001" 
E1117 03:17:25.307531       1 pv_protection_controller.go:114] PV pvc-30733f02-94ec-4bb9-9b1d-3955c0f7ddf3 failed with : Operation cannot be fulfilled on persistentvolumes "pvc-30733f02-94ec-4bb9-9b1d-3955c0f7ddf3": the object has been modified; please apply your changes to the latest version and try again
I1117 03:17:25.312435       1 pv_controller_base.go:533] deletion of claim "azuredisk-9828/pvc-prbpm" was already processed
I1117 03:17:31.192783       1 event.go:294] "Event occurred" object="azuredisk-1563/pvc-t69pv" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
I1117 03:17:31.231768       1 event.go:294] "Event occurred" object="azuredisk-1563/azuredisk-volume-tester-z9zjl" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set azuredisk-volume-tester-z9zjl-5577cb75fb to 1"
I1117 03:17:31.232162       1 replica_set.go:563] "Too few replicas" replicaSet="azuredisk-1563/azuredisk-volume-tester-z9zjl-5577cb75fb" need=1 creating=1
I1117 03:17:31.246268       1 event.go:294] "Event occurred" object="azuredisk-1563/azuredisk-volume-tester-z9zjl-5577cb75fb" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: azuredisk-volume-tester-z9zjl-5577cb75fb-gqvv9"
I1117 03:17:31.260262       1 deployment_controller.go:490] "Error syncing deployment" deployment="azuredisk-1563/azuredisk-volume-tester-z9zjl" err="Operation cannot be fulfilled on deployments.apps \"azuredisk-volume-tester-z9zjl\": the object has been modified; please apply your changes to the latest version and try again"
I1117 03:17:31.270898       1 event.go:294] "Event occurred" object="azuredisk-1563/pvc-t69pv" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"disk.csi.azure.com\" or manually created by system administrator"
I1117 03:17:33.621217       1 pv_controller.go:887] volume "pvc-8ac6d6d5-32b6-4292-87aa-472b04661e2f" entered phase "Bound"
I1117 03:17:33.621272       1 pv_controller.go:990] volume "pvc-8ac6d6d5-32b6-4292-87aa-472b04661e2f" bound to claim "azuredisk-1563/pvc-t69pv"
I1117 03:17:33.638343       1 pv_controller.go:831] claim "azuredisk-1563/pvc-t69pv" entered phase "Bound"
I1117 03:17:34.316591       1 reconciler.go:304] attacherDetacher.AttachVolume started for volume "pvc-8ac6d6d5-32b6-4292-87aa-472b04661e2f" (UniqueName: "kubernetes.io/csi/disk.csi.azure.com^/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-r1vxcgbl/providers/Microsoft.Compute/disks/pvc-8ac6d6d5-32b6-4292-87aa-472b04661e2f") from node "1963k8s001" 
I1117 03:17:40.362346       1 namespace_controller.go:185] Namespace has been deleted azuredisk-9828
I1117 03:17:50.498674       1 operation_generator.go:412] AttachVolume.Attach succeeded for volume "pvc-8ac6d6d5-32b6-4292-87aa-472b04661e2f" (UniqueName: "kubernetes.io/csi/disk.csi.azure.com^/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-r1vxcgbl/providers/Microsoft.Compute/disks/pvc-8ac6d6d5-32b6-4292-87aa-472b04661e2f") from node "1963k8s001" 
I1117 03:17:50.498989       1 event.go:294] "Event occurred" object="azuredisk-1563/azuredisk-volume-tester-z9zjl-5577cb75fb-gqvv9" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"pvc-8ac6d6d5-32b6-4292-87aa-472b04661e2f\" "
I1117 03:18:45.182473       1 replica_set.go:563] "Too few replicas" replicaSet="azuredisk-1563/azuredisk-volume-tester-z9zjl-5577cb75fb" need=1 creating=1
I1117 03:18:45.200385       1 event.go:294] "Event occurred" object="azuredisk-1563/azuredisk-volume-tester-z9zjl-5577cb75fb" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: azuredisk-volume-tester-z9zjl-5577cb75fb-5z5b2"
W1117 03:18:45.296375       1 reconciler.go:385] Multi-Attach error for volume "pvc-8ac6d6d5-32b6-4292-87aa-472b04661e2f" (UniqueName: "kubernetes.io/csi/disk.csi.azure.com^/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-r1vxcgbl/providers/Microsoft.Compute/disks/pvc-8ac6d6d5-32b6-4292-87aa-472b04661e2f") from node "1963k8s000" Volume is already used by pods azuredisk-1563/azuredisk-volume-tester-z9zjl-5577cb75fb-gqvv9 on node 1963k8s001
I1117 03:18:45.296515       1 event.go:294] "Event occurred" object="azuredisk-1563/azuredisk-volume-tester-z9zjl-5577cb75fb-5z5b2" kind="Pod" apiVersion="v1" type="Warning" reason="FailedAttachVolume" message="Multi-Attach error for volume \"pvc-8ac6d6d5-32b6-4292-87aa-472b04661e2f\" Volume is already used by pod(s) azuredisk-volume-tester-z9zjl-5577cb75fb-gqvv9"
I1117 03:19:01.782604       1 reconciler.go:221] attacherDetacher.DetachVolume started for volume "pvc-8ac6d6d5-32b6-4292-87aa-472b04661e2f" (UniqueName: "kubernetes.io/csi/disk.csi.azure.com^/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-r1vxcgbl/providers/Microsoft.Compute/disks/pvc-8ac6d6d5-32b6-4292-87aa-472b04661e2f") on node "1963k8s001" 
I1117 03:19:01.801422       1 operation_generator.go:1620] Verified volume is safe to detach for volume "pvc-8ac6d6d5-32b6-4292-87aa-472b04661e2f" (UniqueName: "kubernetes.io/csi/disk.csi.azure.com^/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-r1vxcgbl/providers/Microsoft.Compute/disks/pvc-8ac6d6d5-32b6-4292-87aa-472b04661e2f") on node "1963k8s001" 
I1117 03:19:17.805961       1 operation_generator.go:527] DetachVolume.Detach succeeded for volume "pvc-8ac6d6d5-32b6-4292-87aa-472b04661e2f" (UniqueName: "kubernetes.io/csi/disk.csi.azure.com^/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-r1vxcgbl/providers/Microsoft.Compute/disks/pvc-8ac6d6d5-32b6-4292-87aa-472b04661e2f") on node "1963k8s001" 
I1117 03:19:17.888162       1 reconciler.go:304] attacherDetacher.AttachVolume started for volume "pvc-8ac6d6d5-32b6-4292-87aa-472b04661e2f" (UniqueName: "kubernetes.io/csi/disk.csi.azure.com^/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-r1vxcgbl/providers/Microsoft.Compute/disks/pvc-8ac6d6d5-32b6-4292-87aa-472b04661e2f") from node "1963k8s000" 
I1117 03:19:29.145246       1 operation_generator.go:412] AttachVolume.Attach succeeded for volume "pvc-8ac6d6d5-32b6-4292-87aa-472b04661e2f" (UniqueName: "kubernetes.io/csi/disk.csi.azure.com^/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-r1vxcgbl/providers/Microsoft.Compute/disks/pvc-8ac6d6d5-32b6-4292-87aa-472b04661e2f") from node "1963k8s000" 
I1117 03:19:29.145431       1 event.go:294] "Event occurred" object="azuredisk-1563/azuredisk-volume-tester-z9zjl-5577cb75fb-5z5b2" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"pvc-8ac6d6d5-32b6-4292-87aa-472b04661e2f\" "
... skipping 13 lines ...
I1117 03:20:21.599658       1 pvc_protection_controller.go:269] "PVC is unused" PVC="azuredisk-1563/pvc-t69pv"
I1117 03:20:21.609362       1 pv_controller.go:648] volume "pvc-8ac6d6d5-32b6-4292-87aa-472b04661e2f" is released and reclaim policy "Delete" will be executed
I1117 03:20:21.618975       1 pv_controller.go:887] volume "pvc-8ac6d6d5-32b6-4292-87aa-472b04661e2f" entered phase "Released"
I1117 03:20:33.129604       1 reconciler.go:221] attacherDetacher.DetachVolume started for volume "pvc-8ac6d6d5-32b6-4292-87aa-472b04661e2f" (UniqueName: "kubernetes.io/csi/disk.csi.azure.com^/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-r1vxcgbl/providers/Microsoft.Compute/disks/pvc-8ac6d6d5-32b6-4292-87aa-472b04661e2f") on node "1963k8s000" 
I1117 03:20:33.153063       1 operation_generator.go:1620] Verified volume is safe to detach for volume "pvc-8ac6d6d5-32b6-4292-87aa-472b04661e2f" (UniqueName: "kubernetes.io/csi/disk.csi.azure.com^/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-r1vxcgbl/providers/Microsoft.Compute/disks/pvc-8ac6d6d5-32b6-4292-87aa-472b04661e2f") on node "1963k8s000" 
I1117 03:20:55.030179       1 operation_generator.go:527] DetachVolume.Detach succeeded for volume "pvc-8ac6d6d5-32b6-4292-87aa-472b04661e2f" (UniqueName: "kubernetes.io/csi/disk.csi.azure.com^/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-r1vxcgbl/providers/Microsoft.Compute/disks/pvc-8ac6d6d5-32b6-4292-87aa-472b04661e2f") on node "1963k8s000" 
E1117 03:21:29.835239       1 pv_protection_controller.go:114] PV pvc-8ac6d6d5-32b6-4292-87aa-472b04661e2f failed with : Operation cannot be fulfilled on persistentvolumes "pvc-8ac6d6d5-32b6-4292-87aa-472b04661e2f": the object has been modified; please apply your changes to the latest version and try again
I1117 03:21:29.841299       1 pv_controller_base.go:533] deletion of claim "azuredisk-1563/pvc-t69pv" was already processed
I1117 03:21:34.212418       1 event.go:294] "Event occurred" object="azuredisk-1577/pvc-5rljg" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"disk.csi.azure.com\" or manually created by system administrator"
I1117 03:21:36.593608       1 pv_controller.go:887] volume "pvc-5e1c6f5d-791f-4afe-82c0-3e2323af6002" entered phase "Bound"
I1117 03:21:36.593643       1 pv_controller.go:990] volume "pvc-5e1c6f5d-791f-4afe-82c0-3e2323af6002" bound to claim "azuredisk-1577/pvc-5rljg"
I1117 03:21:36.618545       1 pv_controller.go:831] claim "azuredisk-1577/pvc-5rljg" entered phase "Bound"
I1117 03:21:38.424899       1 pvc_protection_controller.go:269] "PVC is unused" PVC="azuredisk-1577/pvc-5rljg"
... skipping 11 lines ...
I1117 03:21:55.787774       1 pv_controller.go:887] volume "pvc-a877a7c2-34b3-4e67-9ec6-2344595de7cc" entered phase "Bound"
I1117 03:21:55.787815       1 pv_controller.go:990] volume "pvc-a877a7c2-34b3-4e67-9ec6-2344595de7cc" bound to claim "azuredisk-552/pvc-pl2mf"
I1117 03:21:55.813968       1 pv_controller.go:831] claim "azuredisk-552/pvc-pl2mf" entered phase "Bound"
I1117 03:21:55.868057       1 pv_controller.go:887] volume "pvc-b3c57d26-6cfe-4677-a4f4-d708ff240e08" entered phase "Bound"
I1117 03:21:55.868098       1 pv_controller.go:990] volume "pvc-b3c57d26-6cfe-4677-a4f4-d708ff240e08" bound to claim "azuredisk-552/pvc-l8qdt"
I1117 03:21:55.884583       1 pv_controller.go:831] claim "azuredisk-552/pvc-l8qdt" entered phase "Bound"
E1117 03:21:56.036863       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-3033/default: secrets "default-token-mqtcr" is forbidden: unable to create new content in namespace azuredisk-3033 because it is being terminated
I1117 03:21:56.151811       1 pv_controller.go:887] volume "pvc-4e780eff-ef63-42ec-a647-5c2d34814a4f" entered phase "Bound"
I1117 03:21:56.152050       1 pv_controller.go:990] volume "pvc-4e780eff-ef63-42ec-a647-5c2d34814a4f" bound to claim "azuredisk-552/pvc-5tgqm"
I1117 03:21:56.167956       1 pv_controller.go:831] claim "azuredisk-552/pvc-5tgqm" entered phase "Bound"
I1117 03:21:56.449602       1 reconciler.go:304] attacherDetacher.AttachVolume started for volume "pvc-a877a7c2-34b3-4e67-9ec6-2344595de7cc" (UniqueName: "kubernetes.io/csi/disk.csi.azure.com^/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-r1vxcgbl/providers/Microsoft.Compute/disks/pvc-a877a7c2-34b3-4e67-9ec6-2344595de7cc") from node "1963k8s001" 
I1117 03:21:56.449653       1 reconciler.go:304] attacherDetacher.AttachVolume started for volume "pvc-b3c57d26-6cfe-4677-a4f4-d708ff240e08" (UniqueName: "kubernetes.io/csi/disk.csi.azure.com^/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-r1vxcgbl/providers/Microsoft.Compute/disks/pvc-b3c57d26-6cfe-4677-a4f4-d708ff240e08") from node "1963k8s001" 
I1117 03:21:56.449677       1 reconciler.go:304] attacherDetacher.AttachVolume started for volume "pvc-4e780eff-ef63-42ec-a647-5c2d34814a4f" (UniqueName: "kubernetes.io/csi/disk.csi.azure.com^/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-r1vxcgbl/providers/Microsoft.Compute/disks/pvc-4e780eff-ef63-42ec-a647-5c2d34814a4f") from node "1963k8s001" 
E1117 03:21:57.162198       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-9336/default: secrets "default-token-h28hb" is forbidden: unable to create new content in namespace azuredisk-9336 because it is being terminated
I1117 03:21:57.539730       1 operation_generator.go:412] AttachVolume.Attach succeeded for volume "pvc-a877a7c2-34b3-4e67-9ec6-2344595de7cc" (UniqueName: "kubernetes.io/csi/disk.csi.azure.com^/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-r1vxcgbl/providers/Microsoft.Compute/disks/pvc-a877a7c2-34b3-4e67-9ec6-2344595de7cc") from node "1963k8s001" 
I1117 03:21:57.539933       1 event.go:294] "Event occurred" object="azuredisk-552/azuredisk-volume-tester-ch696" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"pvc-a877a7c2-34b3-4e67-9ec6-2344595de7cc\" "
I1117 03:21:58.848077       1 namespace_controller.go:185] Namespace has been deleted azuredisk-1577
I1117 03:21:59.950196       1 namespace_controller.go:185] Namespace has been deleted azuredisk-953
I1117 03:22:01.103144       1 namespace_controller.go:185] Namespace has been deleted azuredisk-3033
I1117 03:22:02.283124       1 namespace_controller.go:185] Namespace has been deleted azuredisk-9336
... skipping 18 lines ...
I1117 03:24:25.772639       1 pv_controller.go:648] volume "pvc-b3c57d26-6cfe-4677-a4f4-d708ff240e08" is released and reclaim policy "Delete" will be executed
I1117 03:24:25.781811       1 pv_controller.go:887] volume "pvc-b3c57d26-6cfe-4677-a4f4-d708ff240e08" entered phase "Released"
I1117 03:24:31.073640       1 pv_controller_base.go:533] deletion of claim "azuredisk-552/pvc-l8qdt" was already processed
I1117 03:24:36.032911       1 pvc_protection_controller.go:269] "PVC is unused" PVC="azuredisk-552/pvc-pl2mf"
I1117 03:24:36.043424       1 pv_controller.go:648] volume "pvc-a877a7c2-34b3-4e67-9ec6-2344595de7cc" is released and reclaim policy "Delete" will be executed
I1117 03:24:36.050740       1 pv_controller.go:887] volume "pvc-a877a7c2-34b3-4e67-9ec6-2344595de7cc" entered phase "Released"
E1117 03:24:41.339939       1 pv_protection_controller.go:114] PV pvc-a877a7c2-34b3-4e67-9ec6-2344595de7cc failed with : Operation cannot be fulfilled on persistentvolumes "pvc-a877a7c2-34b3-4e67-9ec6-2344595de7cc": the object has been modified; please apply your changes to the latest version and try again
I1117 03:24:41.347796       1 pv_controller_base.go:533] deletion of claim "azuredisk-552/pvc-pl2mf" was already processed
I1117 03:24:50.757737       1 event.go:294] "Event occurred" object="azuredisk-9267/pvc-lhswn" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
I1117 03:24:50.826834       1 event.go:294] "Event occurred" object="azuredisk-9267/pvc-6qb2l" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
I1117 03:24:50.894936       1 event.go:294] "Event occurred" object="azuredisk-9267/pvc-rlxtc" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
I1117 03:24:50.994087       1 event.go:294] "Event occurred" object="azuredisk-9267/pvc-6qb2l" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"disk.csi.azure.com\" or manually created by system administrator"
I1117 03:24:50.994350       1 event.go:294] "Event occurred" object="azuredisk-9267/pvc-6qb2l" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"disk.csi.azure.com\" or manually created by system administrator"
I1117 03:24:50.994782       1 event.go:294] "Event occurred" object="azuredisk-9267/pvc-lhswn" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"disk.csi.azure.com\" or manually created by system administrator"
I1117 03:24:51.003640       1 event.go:294] "Event occurred" object="azuredisk-9267/pvc-rlxtc" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"disk.csi.azure.com\" or manually created by system administrator"
I1117 03:24:51.006081       1 event.go:294] "Event occurred" object="azuredisk-9267/pvc-rlxtc" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"disk.csi.azure.com\" or manually created by system administrator"
E1117 03:24:51.359731       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-552/default: secrets "default-token-b9cjv" is forbidden: unable to create new content in namespace azuredisk-552 because it is being terminated
E1117 03:24:52.541795       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-1351/default: secrets "default-token-rpvlk" is forbidden: unable to create new content in namespace azuredisk-1351 because it is being terminated
I1117 03:24:53.370437       1 pv_controller.go:887] volume "pvc-c61a2f18-99d3-4719-ba20-2287959102cb" entered phase "Bound"
I1117 03:24:53.372029       1 pv_controller.go:990] volume "pvc-c61a2f18-99d3-4719-ba20-2287959102cb" bound to claim "azuredisk-9267/pvc-lhswn"
I1117 03:24:53.400920       1 pv_controller.go:831] claim "azuredisk-9267/pvc-lhswn" entered phase "Bound"
I1117 03:24:53.408694       1 pv_controller.go:887] volume "pvc-f3f3ec8f-1194-4807-8a80-01135fd729c6" entered phase "Bound"
I1117 03:24:53.410135       1 pv_controller.go:990] volume "pvc-f3f3ec8f-1194-4807-8a80-01135fd729c6" bound to claim "azuredisk-9267/pvc-rlxtc"
I1117 03:24:53.426195       1 pv_controller.go:831] claim "azuredisk-9267/pvc-rlxtc" entered phase "Bound"
I1117 03:24:53.436360       1 pv_controller.go:887] volume "pvc-d4571db7-cfde-492a-9f06-79032cb6351d" entered phase "Bound"
I1117 03:24:53.436753       1 pv_controller.go:990] volume "pvc-d4571db7-cfde-492a-9f06-79032cb6351d" bound to claim "azuredisk-9267/pvc-6qb2l"
I1117 03:24:53.448071       1 pv_controller.go:831] claim "azuredisk-9267/pvc-6qb2l" entered phase "Bound"
E1117 03:24:53.633219       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-3410/default: secrets "default-token-57r7p" is forbidden: unable to create new content in namespace azuredisk-3410 because it is being terminated
I1117 03:24:54.033563       1 reconciler.go:304] attacherDetacher.AttachVolume started for volume "pvc-d4571db7-cfde-492a-9f06-79032cb6351d" (UniqueName: "kubernetes.io/csi/disk.csi.azure.com^/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-r1vxcgbl/providers/Microsoft.Compute/disks/pvc-d4571db7-cfde-492a-9f06-79032cb6351d") from node "1963k8s001" 
I1117 03:24:54.033606       1 reconciler.go:304] attacherDetacher.AttachVolume started for volume "pvc-f3f3ec8f-1194-4807-8a80-01135fd729c6" (UniqueName: "kubernetes.io/csi/disk.csi.azure.com^/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-r1vxcgbl/providers/Microsoft.Compute/disks/pvc-f3f3ec8f-1194-4807-8a80-01135fd729c6") from node "1963k8s001" 
I1117 03:24:54.033665       1 reconciler.go:304] attacherDetacher.AttachVolume started for volume "pvc-c61a2f18-99d3-4719-ba20-2287959102cb" (UniqueName: "kubernetes.io/csi/disk.csi.azure.com^/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-r1vxcgbl/providers/Microsoft.Compute/disks/pvc-c61a2f18-99d3-4719-ba20-2287959102cb") from node "1963k8s001" 
E1117 03:24:54.759387       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-8553/default: secrets "default-token-jqp64" is forbidden: unable to create new content in namespace azuredisk-8553 because it is being terminated
I1117 03:24:55.115023       1 operation_generator.go:412] AttachVolume.Attach succeeded for volume "pvc-d4571db7-cfde-492a-9f06-79032cb6351d" (UniqueName: "kubernetes.io/csi/disk.csi.azure.com^/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-r1vxcgbl/providers/Microsoft.Compute/disks/pvc-d4571db7-cfde-492a-9f06-79032cb6351d") from node "1963k8s001" 
I1117 03:24:55.115366       1 event.go:294] "Event occurred" object="azuredisk-9267/azuredisk-volume-tester-shnjq" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"pvc-d4571db7-cfde-492a-9f06-79032cb6351d\" "
I1117 03:24:56.644942       1 namespace_controller.go:185] Namespace has been deleted azuredisk-552
I1117 03:24:57.616082       1 namespace_controller.go:185] Namespace has been deleted azuredisk-1351
I1117 03:24:58.699875       1 namespace_controller.go:185] Namespace has been deleted azuredisk-3410
I1117 03:24:59.839820       1 namespace_controller.go:185] Namespace has been deleted azuredisk-8553
... skipping 14 lines ...
I1117 03:27:03.524094       1 operation_generator.go:527] DetachVolume.Detach succeeded for volume "pvc-d4571db7-cfde-492a-9f06-79032cb6351d" (UniqueName: "kubernetes.io/csi/disk.csi.azure.com^/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-r1vxcgbl/providers/Microsoft.Compute/disks/pvc-d4571db7-cfde-492a-9f06-79032cb6351d") on node "1963k8s001" 
I1117 03:27:03.709176       1 operation_generator.go:527] DetachVolume.Detach succeeded for volume "pvc-f3f3ec8f-1194-4807-8a80-01135fd729c6" (UniqueName: "kubernetes.io/csi/disk.csi.azure.com^/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-r1vxcgbl/providers/Microsoft.Compute/disks/pvc-f3f3ec8f-1194-4807-8a80-01135fd729c6") on node "1963k8s001" 
I1117 03:27:20.917905       1 pv_controller_base.go:533] deletion of claim "azuredisk-9267/pvc-rlxtc" was already processed
I1117 03:27:23.296859       1 pvc_protection_controller.go:269] "PVC is unused" PVC="azuredisk-9267/pvc-6qb2l"
I1117 03:27:23.307709       1 pv_controller.go:648] volume "pvc-d4571db7-cfde-492a-9f06-79032cb6351d" is released and reclaim policy "Delete" will be executed
I1117 03:27:23.312074       1 pv_controller.go:887] volume "pvc-d4571db7-cfde-492a-9f06-79032cb6351d" entered phase "Released"
E1117 03:27:28.543167       1 pv_protection_controller.go:114] PV pvc-d4571db7-cfde-492a-9f06-79032cb6351d failed with : Operation cannot be fulfilled on persistentvolumes "pvc-d4571db7-cfde-492a-9f06-79032cb6351d": the object has been modified; please apply your changes to the latest version and try again
I1117 03:27:28.561234       1 pv_controller_base.go:533] deletion of claim "azuredisk-9267/pvc-6qb2l" was already processed
I1117 03:27:32.001023       1 deployment_controller.go:583] "Deployment has been deleted" deployment="azuredisk-1563/azuredisk-volume-tester-z9zjl"
I1117 03:27:33.567225       1 pvc_protection_controller.go:269] "PVC is unused" PVC="azuredisk-9267/pvc-lhswn"
I1117 03:27:33.576182       1 pv_controller.go:648] volume "pvc-c61a2f18-99d3-4719-ba20-2287959102cb" is released and reclaim policy "Delete" will be executed
I1117 03:27:33.581206       1 pv_controller.go:887] volume "pvc-c61a2f18-99d3-4719-ba20-2287959102cb" entered phase "Released"
E1117 03:27:38.868178       1 pv_protection_controller.go:114] PV pvc-c61a2f18-99d3-4719-ba20-2287959102cb failed with : Operation cannot be fulfilled on persistentvolumes "pvc-c61a2f18-99d3-4719-ba20-2287959102cb": the object has been modified; please apply your changes to the latest version and try again
I1117 03:27:38.873243       1 pv_controller_base.go:533] deletion of claim "azuredisk-9267/pvc-lhswn" was already processed
I1117 03:27:47.391091       1 event.go:294] "Event occurred" object="azuredisk-7175/pvc-gqxkt" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
I1117 03:27:47.445146       1 event.go:294] "Event occurred" object="azuredisk-7175/pvc-gqxkt" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"disk.csi.azure.com\" or manually created by system administrator"
E1117 03:27:49.084377       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-9267/default: secrets "default-token-vvxp9" is forbidden: unable to create new content in namespace azuredisk-9267 because it is being terminated
I1117 03:27:49.783715       1 pv_controller.go:887] volume "pvc-950c1c64-4442-45c1-8563-64d66d6cf80f" entered phase "Bound"
I1117 03:27:49.783801       1 pv_controller.go:990] volume "pvc-950c1c64-4442-45c1-8563-64d66d6cf80f" bound to claim "azuredisk-7175/pvc-gqxkt"
I1117 03:27:49.800504       1 pv_controller.go:831] claim "azuredisk-7175/pvc-gqxkt" entered phase "Bound"
E1117 03:27:50.222409       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-5802/default: secrets "default-token-cw97j" is forbidden: unable to create new content in namespace azuredisk-5802 because it is being terminated
I1117 03:27:50.550659       1 reconciler.go:304] attacherDetacher.AttachVolume started for volume "pvc-950c1c64-4442-45c1-8563-64d66d6cf80f" (UniqueName: "kubernetes.io/csi/disk.csi.azure.com^/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-r1vxcgbl/providers/Microsoft.Compute/disks/pvc-950c1c64-4442-45c1-8563-64d66d6cf80f") from node "1963k8s001" 
E1117 03:27:51.442824       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-493/default: secrets "default-token-979gv" is forbidden: unable to create new content in namespace azuredisk-493 because it is being terminated
I1117 03:27:54.151229       1 namespace_controller.go:185] Namespace has been deleted azuredisk-9267
I1117 03:27:55.358279       1 namespace_controller.go:185] Namespace has been deleted azuredisk-5802
I1117 03:27:56.474192       1 namespace_controller.go:185] Namespace has been deleted azuredisk-493
I1117 03:28:01.999426       1 operation_generator.go:412] AttachVolume.Attach succeeded for volume "pvc-950c1c64-4442-45c1-8563-64d66d6cf80f" (UniqueName: "kubernetes.io/csi/disk.csi.azure.com^/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-r1vxcgbl/providers/Microsoft.Compute/disks/pvc-950c1c64-4442-45c1-8563-64d66d6cf80f") from node "1963k8s001" 
I1117 03:28:01.999939       1 event.go:294] "Event occurred" object="azuredisk-7175/azuredisk-volume-tester-d985b" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"pvc-950c1c64-4442-45c1-8563-64d66d6cf80f\" "
I1117 03:28:54.023897       1 reconciler.go:221] attacherDetacher.DetachVolume started for volume "pvc-950c1c64-4442-45c1-8563-64d66d6cf80f" (UniqueName: "kubernetes.io/csi/disk.csi.azure.com^/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-r1vxcgbl/providers/Microsoft.Compute/disks/pvc-950c1c64-4442-45c1-8563-64d66d6cf80f") on node "1963k8s001" 
I1117 03:28:54.037878       1 operation_generator.go:1620] Verified volume is safe to detach for volume "pvc-950c1c64-4442-45c1-8563-64d66d6cf80f" (UniqueName: "kubernetes.io/csi/disk.csi.azure.com^/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-r1vxcgbl/providers/Microsoft.Compute/disks/pvc-950c1c64-4442-45c1-8563-64d66d6cf80f") on node "1963k8s001" 
I1117 03:29:10.205114       1 operation_generator.go:527] DetachVolume.Detach succeeded for volume "pvc-950c1c64-4442-45c1-8563-64d66d6cf80f" (UniqueName: "kubernetes.io/csi/disk.csi.azure.com^/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-r1vxcgbl/providers/Microsoft.Compute/disks/pvc-950c1c64-4442-45c1-8563-64d66d6cf80f") on node "1963k8s001" 
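The attach/detach sequence above (AttachVolume started → Attach succeeded → DetachVolume started → Detach succeeded) is driven by the attach/detach controller's reconciler, which repeatedly diffs the desired set of volume attachments against the actual set and issues attach or detach operations for the difference. A minimal, stdlib-only sketch of that diff-and-act pattern — illustrative only, not the controller's real code; the `attachment` type and `reconcile` function here are hypothetical:

```go
package main

import "fmt"

// attachment identifies a volume that should be (or is) attached to a node.
// Hypothetical type for illustration; the real controller tracks richer state.
type attachment struct {
	volume string
	node   string
}

// reconcile compares the desired attachments with the actual ones and returns
// the operations needed: attaches for desired pairs that are not yet attached,
// detaches for pairs that are attached but no longer desired.
func reconcile(desired, actual map[attachment]bool) (attaches, detaches []attachment) {
	for a := range desired {
		if !actual[a] {
			attaches = append(attaches, a)
		}
	}
	for a := range actual {
		if !desired[a] {
			detaches = append(detaches, a)
		}
	}
	return attaches, detaches
}

func main() {
	pvc := attachment{volume: "pvc-950c1c64-4442-45c1-8563-64d66d6cf80f", node: "1963k8s001"}
	// After the test pod is deleted, the volume is still attached but no
	// longer desired, so the reconciler emits one detach (as in the log).
	desired := map[attachment]bool{}
	actual := map[attachment]bool{pvc: true}
	attaches, detaches := reconcile(desired, actual)
	fmt.Printf("attach=%d detach=%d\n", len(attaches), len(detaches))
}
```

Running the loop with an empty desired set and one actual attachment prints `attach=0 detach=1`, matching the single detach seen in the log.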
I1117 03:29:12.566329       1 pvc_protection_controller.go:269] "PVC is unused" PVC="azuredisk-7175/pvc-gqxkt"
I1117 03:29:12.577803       1 pv_controller.go:648] volume "pvc-950c1c64-4442-45c1-8563-64d66d6cf80f" is released and reclaim policy "Delete" will be executed
I1117 03:29:12.585769       1 pv_controller.go:887] volume "pvc-950c1c64-4442-45c1-8563-64d66d6cf80f" entered phase "Released"
E1117 03:29:17.804444       1 pv_protection_controller.go:114] PV pvc-950c1c64-4442-45c1-8563-64d66d6cf80f failed with : Operation cannot be fulfilled on persistentvolumes "pvc-950c1c64-4442-45c1-8563-64d66d6cf80f": the object has been modified; please apply your changes to the latest version and try again
I1117 03:29:17.814979       1 pv_controller_base.go:533] deletion of claim "azuredisk-7175/pvc-gqxkt" was already processed
I1117 03:29:23.947124       1 event.go:294] "Event occurred" object="azuredisk-1528/pvc-azuredisk-volume-tester-k9jjv-0" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
I1117 03:29:23.947817       1 event.go:294] "Event occurred" object="azuredisk-1528/azuredisk-volume-tester-k9jjv" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Claim pvc-azuredisk-volume-tester-k9jjv-0 Pod azuredisk-volume-tester-k9jjv-0 in StatefulSet azuredisk-volume-tester-k9jjv success"
I1117 03:29:23.956482       1 event.go:294] "Event occurred" object="azuredisk-1528/azuredisk-volume-tester-k9jjv" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod azuredisk-volume-tester-k9jjv-0 in StatefulSet azuredisk-volume-tester-k9jjv successful"
I1117 03:29:23.982732       1 event.go:294] "Event occurred" object="azuredisk-1528/pvc-azuredisk-volume-tester-k9jjv-0" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"disk.csi.azure.com\" or manually created by system administrator"
I1117 03:29:23.982772       1 event.go:294] "Event occurred" object="azuredisk-1528/pvc-azuredisk-volume-tester-k9jjv-0" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"disk.csi.azure.com\" or manually created by system administrator"
I1117 03:29:26.373521       1 pv_controller.go:887] volume "pvc-a9ae2c8a-7174-447c-a1ea-2ecf0bb6d1cd" entered phase "Bound"
I1117 03:29:26.374676       1 pv_controller.go:990] volume "pvc-a9ae2c8a-7174-447c-a1ea-2ecf0bb6d1cd" bound to claim "azuredisk-1528/pvc-azuredisk-volume-tester-k9jjv-0"
I1117 03:29:26.400794       1 pv_controller.go:831] claim "azuredisk-1528/pvc-azuredisk-volume-tester-k9jjv-0" entered phase "Bound"
I1117 03:29:27.042602       1 reconciler.go:304] attacherDetacher.AttachVolume started for volume "pvc-a9ae2c8a-7174-447c-a1ea-2ecf0bb6d1cd" (UniqueName: "kubernetes.io/csi/disk.csi.azure.com^/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-r1vxcgbl/providers/Microsoft.Compute/disks/pvc-a9ae2c8a-7174-447c-a1ea-2ecf0bb6d1cd") from node "1963k8s000" 
E1117 03:29:27.969471       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-7175/default: secrets "default-token-gn8wn" is forbidden: unable to create new content in namespace azuredisk-7175 because it is being terminated
I1117 03:29:33.017027       1 namespace_controller.go:185] Namespace has been deleted azuredisk-7175
I1117 03:29:43.144758       1 operation_generator.go:412] AttachVolume.Attach succeeded for volume "pvc-a9ae2c8a-7174-447c-a1ea-2ecf0bb6d1cd" (UniqueName: "kubernetes.io/csi/disk.csi.azure.com^/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-r1vxcgbl/providers/Microsoft.Compute/disks/pvc-a9ae2c8a-7174-447c-a1ea-2ecf0bb6d1cd") from node "1963k8s000" 
I1117 03:29:43.144912       1 event.go:294] "Event occurred" object="azuredisk-1528/azuredisk-volume-tester-k9jjv-0" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"pvc-a9ae2c8a-7174-447c-a1ea-2ecf0bb6d1cd\" "
I1117 03:30:38.005493       1 event.go:294] "Event occurred" object="azuredisk-1528/azuredisk-volume-tester-k9jjv" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod azuredisk-volume-tester-k9jjv-0 in StatefulSet azuredisk-volume-tester-k9jjv successful"
I1117 03:31:37.589479       1 garbagecollector.go:468] "Processing object" object="azuredisk-1528/azuredisk-volume-tester-k9jjv-d4f8f94cf" objectUID=a0e20168-2eb8-44d1-9fa4-3075763328a8 kind="ControllerRevision" virtual=false
I1117 03:31:37.589445       1 stateful_set.go:440] StatefulSet has been deleted azuredisk-1528/azuredisk-volume-tester-k9jjv
... skipping 2 lines ...
I1117 03:31:37.620664       1 garbagecollector.go:580] "Deleting object" object="azuredisk-1528/azuredisk-volume-tester-k9jjv-d4f8f94cf" objectUID=a0e20168-2eb8-44d1-9fa4-3075763328a8 kind="ControllerRevision" propagationPolicy=Background
2021/11/17 03:31:41 ===================================================

JUnit report was created: /logs/artifacts/junit_01.xml

Ran 9 of 53 Specs in 1514.261 seconds
SUCCESS! -- 9 Passed | 0 Failed | 0 Pending | 44 Skipped

You're using deprecated Ginkgo functionality:
=============================================
Ginkgo 2.0 is under active development and will introduce several new features, improvements, and a small handful of breaking changes.
A release candidate for 2.0 is now available and 2.0 should GA in Fall 2021.  Please give the RC a try and send us feedback!
  - To learn more, view the migration guide at https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md
... skipping 125 lines ...
     

   Flags

     -d

	   Detailed option, when used with list, dumps the json of the object





Write-Progress : Win32 internal error "Access is denied" 0x5 occurred while reading the console output buffer. Contact Microsoft Customer Support Services.
At C:\windows\system32\WindowsPowerShell\v1.0\Modules\Microsoft.PowerShell.Archive\Microsoft.PowerShell.Archive.psm1:1132 char:9
+         Write-Progress -Activity $cmdletName -Status $status -Percent ...
+         ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
... skipping 46 lines ...
     

   Flags

     -d

	   Detailed option, when used with list, dumps the json of the object





Write-Progress : Win32 internal error "Access is denied" 0x5 occurred while reading the console output buffer. Contact Microsoft Customer Support Services.
At C:\windows\system32\WindowsPowerShell\v1.0\Modules\Microsoft.PowerShell.Archive\Microsoft.PowerShell.Archive.psm1:1132 char:9
+         Write-Progress -Activity $cmdletName -Status $status -Percent ...
+         ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
... skipping 8 lines ...
Warning: Permanently added '1963k8s001' (ECDSA) to the list of known hosts.
Collecting logs for vm 1963k8s002
Testing connection to host 1963k8s002.
Warning: Permanently added 'kubetest-r1vxcgbl.canadacentral.cloudapp.azure.com,20.151.243.152' (ECDSA) to the list of known hosts.

Authorized uses only. All activity may be monitored and reported.
channel 0: open failed: connect failed: Name or service not known

stdio forwarding failed

ssh_exchange_identification: Connection closed by remote host
2021/11/17 03:33:12 process.go:155: Step 'bash -c /root/tmp1002526589/win-ci-logs-collector.sh kubetest-r1vxcgbl.canadacentral.cloudapp.azure.com /root/tmp1002526589 /etc/ssh-key-secret/ssh-private' finished in 1m2.07928333s
2021/11/17 03:33:12 aksengine.go:1141: Deleting resource group: kubetest-r1vxcgbl.
2021/11/17 03:39:18 process.go:96: Saved XML output to /logs/artifacts/junit_runner.xml.
2021/11/17 03:39:18 process.go:153: Running: bash -c . hack/lib/version.sh && KUBE_ROOT=. kube::version::get_version_vars && echo "${KUBE_GIT_VERSION-}"