PR: nixpanic: Fix a panic for in-tree drivers that partially support Block volume metrics
Result: ABORTED
Tests: 0 failed / 19 succeeded
Started: 2021-05-20 14:39
Elapsed: 36m6s
Revision: 62fc2b6db1a301277c6e11da06cce1e03ba4c350
Refs: 101587
job-version: v1.22.0-alpha.1.526+b8277ea7559b32
kubetest-version:
revision: v1.22.0-alpha.1.526+b8277ea7559b32

No Test Failures!


Error lines from build-log.txt

... skipping 346 lines ...
May 20 15:03:14.108: INFO: PersistentVolumeClaim pvc-rd4fr found but phase is Pending instead of Bound.
May 20 15:03:16.131: INFO: PersistentVolumeClaim pvc-rd4fr found and phase=Bound (30.363389072s)
STEP: checking the PVC
STEP: validating provisioned PV
STEP: checking the PV
STEP: deploying the pod
STEP: checking that the pods command exits with no error
May 20 15:03:16.208: INFO: Waiting up to 15m0s for pod "azurefile-volume-tester-2h7tb" in namespace "azurefile-5541" to be "Succeeded or Failed"
May 20 15:03:16.237: INFO: Pod "azurefile-volume-tester-2h7tb": Phase="Pending", Reason="", readiness=false. Elapsed: 28.903477ms
May 20 15:03:18.263: INFO: Pod "azurefile-volume-tester-2h7tb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055333911s
May 20 15:03:20.289: INFO: Pod "azurefile-volume-tester-2h7tb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.080606982s
May 20 15:03:22.315: INFO: Pod "azurefile-volume-tester-2h7tb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.106417005s
STEP: Saw pod success
May 20 15:03:22.315: INFO: Pod "azurefile-volume-tester-2h7tb" satisfied condition "Succeeded or Failed"
May 20 15:03:22.315: INFO: deleting Pod "azurefile-5541"/"azurefile-volume-tester-2h7tb"
May 20 15:03:22.402: INFO: Pod azurefile-volume-tester-2h7tb has the following logs: hello world

STEP: Deleting pod azurefile-volume-tester-2h7tb in namespace azurefile-5541
May 20 15:03:22.429: INFO: deleting PVC "azurefile-5541"/"pvc-rd4fr"
May 20 15:03:22.429: INFO: Deleting PersistentVolumeClaim "pvc-rd4fr"
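The "found but phase is Pending instead of Bound" and "Succeeded or Failed" lines above come from simple poll loops against the API server. A minimal client-go sketch of that pattern follows; the function names and the 2-second interval are illustrative, not the test framework's actual helpers:

```go
package e2eutil

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPVCBound re-reads the claim every 2s until status.phase == Bound,
// mirroring the "PersistentVolumeClaim ... found but phase is Pending" lines.
func waitForPVCBound(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pvc, err := cs.CoreV1().PersistentVolumeClaims(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		fmt.Printf("PersistentVolumeClaim %s phase=%s\n", name, pvc.Status.Phase)
		return pvc.Status.Phase == corev1.ClaimBound, nil
	})
}

// waitForPodTerminal polls the same way until the pod reaches Succeeded or
// Failed — the condition logged above as "Succeeded or Failed".
func waitForPodTerminal(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		return pod.Status.Phase == corev1.PodSucceeded || pod.Status.Phase == corev1.PodFailed, nil
	})
}
```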
... skipping 156 lines ...
May 20 15:05:24.609: INFO: PersistentVolumeClaim pvc-2l556 found but phase is Pending instead of Bound.
May 20 15:05:26.634: INFO: PersistentVolumeClaim pvc-2l556 found and phase=Bound (28.327712508s)
STEP: checking the PVC
STEP: validating provisioned PV
STEP: checking the PV
STEP: deploying the pod
STEP: checking that the pods command exits with an error
May 20 15:05:26.710: INFO: Waiting up to 15m0s for pod "azurefile-volume-tester-m7tgj" in namespace "azurefile-4147" to be "Error status code"
May 20 15:05:26.736: INFO: Pod "azurefile-volume-tester-m7tgj": Phase="Pending", Reason="", readiness=false. Elapsed: 26.571496ms
May 20 15:05:28.761: INFO: Pod "azurefile-volume-tester-m7tgj": Phase="Failed", Reason="", readiness=false. Elapsed: 2.051651791s
STEP: Saw pod failure
May 20 15:05:28.761: INFO: Pod "azurefile-volume-tester-m7tgj" satisfied condition "Error status code"
STEP: checking that pod logs contain expected message
May 20 15:05:28.840: INFO: deleting Pod "azurefile-4147"/"azurefile-volume-tester-m7tgj"
May 20 15:05:28.865: INFO: Pod azurefile-volume-tester-m7tgj has the following logs: touch: /mnt/test-1/data: Read-only file system

STEP: Deleting pod azurefile-volume-tester-m7tgj in namespace azurefile-4147
May 20 15:05:28.896: INFO: deleting PVC "azurefile-4147"/"pvc-2l556"
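In this test the "Read-only file system" failure is the expected outcome: the tester pod mounts the provisioned claim read-only and then tries to write to it. A hedged sketch of such a pod spec — the image, command, and names are assumptions, not the driver test's exact manifest:

```go
package e2eutil

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// readOnlyTesterPod builds a pod whose single write must fail with
// "Read-only file system", matching the expected-failure check above.
func readOnlyTesterPod(claimName string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "azurefile-volume-tester-"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever, // let the pod end in phase Failed
			Containers: []corev1.Container{{
				Name:    "volume-tester",
				Image:   "busybox", // assumed image
				Command: []string{"/bin/sh", "-c", "touch /mnt/test-1/data"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "test-volume-1",
					MountPath: "/mnt/test-1",
					ReadOnly:  true, // the mount option that makes the touch fail
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "test-volume-1",
				VolumeSource: corev1.VolumeSource{
					PersistentVolumeClaim: &corev1.PersistentVolumeClaimVolumeSource{
						ClaimName: claimName, // e.g. the pvc-2l556 bound above
					},
				},
			}},
		},
	}
}
```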
... skipping 177 lines ...
May 20 15:07:37.143: INFO: PersistentVolumeClaim pvc-dj5rh found but phase is Pending instead of Bound.
May 20 15:07:39.168: INFO: PersistentVolumeClaim pvc-dj5rh found and phase=Bound (2.048699541s)
STEP: checking the PVC
STEP: validating provisioned PV
STEP: checking the PV
STEP: deploying the pod
STEP: checking that the pods command exits with no error
May 20 15:07:39.241: INFO: Waiting up to 15m0s for pod "azurefile-volume-tester-q28zj" in namespace "azurefile-5561" to be "Succeeded or Failed"
May 20 15:07:39.267: INFO: Pod "azurefile-volume-tester-q28zj": Phase="Pending", Reason="", readiness=false. Elapsed: 26.000181ms
May 20 15:07:41.298: INFO: Pod "azurefile-volume-tester-q28zj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.056556107s
STEP: Saw pod success
May 20 15:07:41.298: INFO: Pod "azurefile-volume-tester-q28zj" satisfied condition "Succeeded or Failed"
STEP: resizing the pvc
STEP: sleep 30s waiting for resize complete
STEP: checking the resizing result
STEP: checking the resizing PV result
STEP: checking the resizing azurefile result
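The resize STEPs above boil down to raising spec.resources.requests.storage on the bound claim and letting the provisioner grow the backing Azure file share asynchronously, then re-checking the PVC, the PV, and the share after a grace period. A rough sketch, with the target size assumed:

```go
package e2eutil

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// expandPVC bumps the claim's storage request; the sleep mirrors the
// "sleep 30s waiting for resize complete" step logged above.
func expandPVC(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	pvc, err := cs.CoreV1().PersistentVolumeClaims(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	pvc.Spec.Resources.Requests[corev1.ResourceStorage] = resource.MustParse("20Gi") // assumed new size
	if _, err := cs.CoreV1().PersistentVolumeClaims(ns).Update(ctx, pvc, metav1.UpdateOptions{}); err != nil {
		return err
	}
	time.Sleep(30 * time.Second)
	return nil
}
```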
2021/05/20 15:08:11 Running in Prow, converting AZURE_CREDENTIALS to AZURE_CREDENTIAL_FILE
... skipping 659 lines ...
I0520 14:59:54.734811       1 azure_publicipclient.go:66] Azure PublicIPAddressesClient (write ops) using rate limit config: QPS=10, bucket=100
I0520 14:59:54.734950       1 azure.go:738] Setting up informers for Azure cloud provider
I0520 14:59:54.737396       1 reflector.go:219] Starting reflector *v1.Node (20h29m28.689056087s) from k8s.io/client-go/informers/factory.go:134
I0520 14:59:54.737645       1 shared_informer.go:240] Waiting for caches to sync for tokens
I0520 14:59:54.737863       1 reflector.go:219] Starting reflector *v1.ServiceAccount (20h29m28.689056087s) from k8s.io/client-go/informers/factory.go:134
I0520 14:59:54.738197       1 reflector.go:219] Starting reflector *v1.Secret (20h29m28.689056087s) from k8s.io/client-go/informers/factory.go:134
W0520 14:59:54.796338       1 azure_config.go:52] Failed to get cloud-config from secret: failed to get secret azure-cloud-provider: secrets "azure-cloud-provider" is forbidden: User "system:serviceaccount:kube-system:azure-cloud-provider" cannot get resource "secrets" in API group "" in the namespace "kube-system", skip initializing from secret
I0520 14:59:54.796484       1 controllermanager.go:562] Starting "serviceaccount"
I0520 14:59:54.806638       1 controllermanager.go:577] Started "serviceaccount"
I0520 14:59:54.806840       1 controllermanager.go:562] Starting "daemonset"
I0520 14:59:54.806815       1 serviceaccounts_controller.go:117] Starting service account controller
I0520 14:59:54.807043       1 shared_informer.go:240] Waiting for caches to sync for service account
I0520 14:59:54.814384       1 controllermanager.go:577] Started "daemonset"
... skipping 216 lines ...
I0520 14:59:58.945841       1 plugins.go:641] Loaded volume plugin "kubernetes.io/rbd"
I0520 14:59:58.945864       1 plugins.go:641] Loaded volume plugin "kubernetes.io/csi"
I0520 14:59:58.946073       1 controllermanager.go:577] Started "attachdetach"
I0520 14:59:58.946155       1 controllermanager.go:562] Starting "namespace"
I0520 14:59:58.946249       1 attach_detach_controller.go:328] Starting attach detach controller
I0520 14:59:58.946314       1 shared_informer.go:240] Waiting for caches to sync for attach detach
W0520 14:59:58.946410       1 actual_state_of_world.go:534] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="k8s-master-66514468-0" does not exist
W0520 14:59:58.946476       1 actual_state_of_world.go:534] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="k8s-agentpool1-66514468-0" does not exist
W0520 14:59:58.946532       1 actual_state_of_world.go:534] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="k8s-agentpool1-66514468-1" does not exist
I0520 14:59:59.224761       1 controllermanager.go:577] Started "namespace"
I0520 14:59:59.224980       1 reflector.go:219] Starting reflector *v1.StatefulSet (16h57m39.335458427s) from k8s.io/client-go/informers/factory.go:134
I0520 14:59:59.225677       1 reflector.go:219] Starting reflector *v1.Deployment (12h44m6.422821376s) from k8s.io/client-go/informers/factory.go:134
I0520 14:59:59.225813       1 namespace_controller.go:200] Starting namespace controller
I0520 14:59:59.225822       1 shared_informer.go:240] Waiting for caches to sync for namespace
I0520 14:59:59.225931       1 reflector.go:219] Starting reflector *v1.ResourceQuota (5m0s) from k8s.io/client-go/informers/factory.go:134
... skipping 110 lines ...
I0520 15:00:01.639428       1 controller.go:737] Successfully updated 0 out of 0 load balancers to direct traffic to the updated set of nodes
I0520 15:00:04.444109       1 node_lifecycle_controller.go:1191] Controller detected that some Nodes are Ready. Exiting master disruption mode.
I0520 15:00:48.234745       1 replica_set.go:558] "Too few replicas" replicaSet="kube-system/coredns-787d7f6757" need=1 creating=1
I0520 15:00:48.235921       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-787d7f6757 to 1"
I0520 15:00:48.297898       1 event.go:291] "Event occurred" object="kube-system/coredns-autoscaler" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-autoscaler-87b67c5fd to 1"
I0520 15:00:48.311010       1 replica_set.go:558] "Too few replicas" replicaSet="kube-system/coredns-autoscaler-87b67c5fd" need=1 creating=1
I0520 15:00:48.315408       1 deployment_controller.go:490] "Error syncing deployment" deployment="kube-system/coredns" err="Operation cannot be fulfilled on deployments.apps \"coredns\": the object has been modified; please apply your changes to the latest version and try again"
I0520 15:00:48.327079       1 deployment_controller.go:490] "Error syncing deployment" deployment="kube-system/coredns-autoscaler" err="Operation cannot be fulfilled on deployments.apps \"coredns-autoscaler\": the object has been modified; please apply your changes to the latest version and try again"
I0520 15:00:48.328539       1 event.go:291] "Event occurred" object="kube-system/coredns-787d7f6757" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-787d7f6757-dh4s6"
I0520 15:00:48.331096       1 event.go:291] "Event occurred" object="kube-system/coredns-autoscaler-87b67c5fd" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-autoscaler-87b67c5fd-kk4mn"
I0520 15:00:49.269248       1 event.go:291] "Event occurred" object="kube-system/azure-ip-masq-agent" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: azure-ip-masq-agent-kzbw4"
I0520 15:00:49.280332       1 event.go:291] "Event occurred" object="kube-system/blobfuse-flexvol-installer" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: blobfuse-flexvol-installer-djct7"
I0520 15:00:49.292422       1 event.go:291] "Event occurred" object="kube-system/azure-ip-masq-agent" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: azure-ip-masq-agent-thpsq"
I0520 15:00:49.294864       1 event.go:291] "Event occurred" object="kube-system/azure-ip-masq-agent" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: azure-ip-masq-agent-cw27r"
I0520 15:00:49.295161       1 event.go:291] "Event occurred" object="kube-system/blobfuse-flexvol-installer" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: blobfuse-flexvol-installer-48lv8"
I0520 15:00:49.378083       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-mjrvm"
I0520 15:00:49.396936       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-xx9p6"
I0520 15:00:49.409337       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-gw8qp"
I0520 15:00:50.260896       1 replica_set.go:558] "Too few replicas" replicaSet="kube-system/metrics-server-57cb856598" need=1 creating=1
I0520 15:00:50.261665       1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-57cb856598 to 1"
I0520 15:00:50.268391       1 event.go:291] "Event occurred" object="kube-system/metrics-server-57cb856598" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-57cb856598-djhlp"
I0520 15:00:50.310436       1 deployment_controller.go:490] "Error syncing deployment" deployment="kube-system/metrics-server" err="Operation cannot be fulfilled on deployments.apps \"metrics-server\": the object has been modified; please apply your changes to the latest version and try again"
I0520 15:00:50.337032       1 deployment_controller.go:490] "Error syncing deployment" deployment="kube-system/metrics-server" err="Operation cannot be fulfilled on deployments.apps \"metrics-server\": the object has been modified; please apply your changes to the latest version and try again"
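These repeated "Operation cannot be fulfilled ... the object has been modified" lines (here for metrics-server, earlier for coredns) are routine optimistic-concurrency conflicts, not failures: a writer raced the deployment controller with a stale resourceVersion, and the losing update is simply retried against the latest object. The standard client-side pattern looks roughly like this; the function and object names are illustrative:

```go
package e2eutil

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/util/retry"
)

// scaleDeployment updates replicas under retry.RetryOnConflict, so a
// "the object has been modified" error triggers a fresh read-modify-write
// instead of surfacing to the caller.
func scaleDeployment(ctx context.Context, cs kubernetes.Interface, ns, name string, replicas int32) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		// Re-read on every attempt so we mutate the latest resourceVersion.
		d, err := cs.AppsV1().Deployments(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		d.Spec.Replicas = &replicas
		_, err = cs.AppsV1().Deployments(ns).Update(ctx, d, metav1.UpdateOptions{})
		return err // a Conflict here schedules another attempt
	})
}
```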
I0520 15:00:50.639943       1 event.go:291] "Event occurred" object="kube-system/csi-secrets-store" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: csi-secrets-store-h8gvb"
I0520 15:00:50.655106       1 event.go:291] "Event occurred" object="kube-system/csi-secrets-store" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: csi-secrets-store-pd7bx"
I0520 15:00:50.723245       1 event.go:291] "Event occurred" object="kube-system/csi-secrets-store-provider-azure" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: csi-secrets-store-provider-azure-gmrfl"
I0520 15:00:50.730915       1 event.go:291] "Event occurred" object="kube-system/csi-secrets-store-provider-azure" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: csi-secrets-store-provider-azure-5nscl"
E0520 15:00:59.482056       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0520 15:00:59.482123       1 resource_quota_controller.go:435] syncing resource quota controller with updated resources from discovery: added: [secrets-store.csi.x-k8s.io/v1alpha1, Resource=secretproviderclasses secrets-store.csi.x-k8s.io/v1alpha1, Resource=secretproviderclasspodstatuses], removed: []
I0520 15:00:59.482314       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for secretproviderclasses.secrets-store.csi.x-k8s.io
I0520 15:00:59.482354       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for secretproviderclasspodstatuses.secrets-store.csi.x-k8s.io
I0520 15:00:59.482410       1 shared_informer.go:240] Waiting for caches to sync for resource quota
I0520 15:00:59.482578       1 reflector.go:219] Starting reflector *v1.PartialObjectMetadata (13h51m0.678839798s) from k8s.io/client-go/metadata/metadatainformer/informer.go:90
I0520 15:00:59.482928       1 reflector.go:219] Starting reflector *v1.PartialObjectMetadata (13h45m47.878306937s) from k8s.io/client-go/metadata/metadatainformer/informer.go:90
I0520 15:00:59.583439       1 shared_informer.go:247] Caches are synced for resource quota 
I0520 15:00:59.583465       1 resource_quota_controller.go:454] synced quota controller
W0520 15:00:59.919360       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
I0520 15:00:59.919509       1 garbagecollector.go:213] syncing garbage collector with updated resources from discovery (attempt 1): added: [secrets-store.csi.x-k8s.io/v1alpha1, Resource=secretproviderclasses secrets-store.csi.x-k8s.io/v1alpha1, Resource=secretproviderclasspodstatuses], removed: []
E0520 15:00:59.923022       1 memcache.go:196] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0520 15:00:59.937599       1 memcache.go:101] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0520 15:00:59.938490       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
I0520 15:00:59.938530       1 shared_informer.go:247] Caches are synced for garbage collector 
I0520 15:00:59.938543       1 garbagecollector.go:254] synced garbage collector
I0520 15:02:45.947403       1 azure_storageaccount.go:184] azure - no matching account found, begin to create a new account f904a13b19e21497bac2052 in resource group kubetest-nhjgubxi, location: centralus, accountType: Standard_LRS, accountKind: StorageV2, tags: map[created-by:azure]
E0520 15:02:48.563541       1 tokens_controller.go:262] error synchronizing serviceaccount azurefile-3274/default: secrets "default-token-tbctt" is forbidden: unable to create new content in namespace azurefile-3274 because it is being terminated
E0520 15:02:49.363943       1 tokens_controller.go:262] error synchronizing serviceaccount azurefile-495/default: secrets "default-token-6vzrv" is forbidden: unable to create new content in namespace azurefile-495 because it is being terminated
E0520 15:02:50.073680       1 tokens_controller.go:262] error synchronizing serviceaccount azurefile-9947/default: secrets "default-token-ndjng" is forbidden: unable to create new content in namespace azurefile-9947 because it is being terminated
I0520 15:02:51.317465       1 namespace_controller.go:185] Namespace has been deleted azurefile-8081
I0520 15:02:52.062425       1 namespace_controller.go:185] Namespace has been deleted azurefile-1318
I0520 15:02:52.955277       1 namespace_controller.go:185] Namespace has been deleted azurefile-694
I0520 15:02:53.712833       1 namespace_controller.go:185] Namespace has been deleted azurefile-3274
I0520 15:02:54.415460       1 namespace_controller.go:185] Namespace has been deleted azurefile-495
I0520 15:02:55.175069       1 namespace_controller.go:185] Namespace has been deleted azurefile-9947
... skipping 42 lines ...
I0520 15:04:53.639550       1 pv_controller.go:640] volume "pvc-a9940887-668c-4629-b9c1-174b2f46731e" is released and reclaim policy "Delete" will be executed
I0520 15:04:53.642533       1 pv_controller.go:879] volume "pvc-a9940887-668c-4629-b9c1-174b2f46731e" entered phase "Released"
I0520 15:04:53.646449       1 pv_controller.go:1341] isVolumeReleased[pvc-a9940887-668c-4629-b9c1-174b2f46731e]: volume is released
I0520 15:04:53.747770       1 pv_controller.go:1436] volume "pvc-a9940887-668c-4629-b9c1-174b2f46731e" deleted
I0520 15:04:53.755795       1 pv_controller_base.go:505] deletion of claim "azurefile-6413/pvc-4dqlq" was already processed
I0520 15:04:58.342136       1 azure_storageaccount.go:184] azure - no matching account found, begin to create a new account fb0fd561510144f469c7bf4 in resource group kubetest-nhjgubxi, location: centralus, accountType: Standard_GRS, accountKind: StorageV2, tags: map[created-by:azure]
E0520 15:05:02.636926       1 tokens_controller.go:262] error synchronizing serviceaccount azurefile-6413/default: secrets "default-token-mv4bt" is forbidden: unable to create new content in namespace azurefile-6413 because it is being terminated
I0520 15:05:07.677729       1 namespace_controller.go:185] Namespace has been deleted azurefile-6413
I0520 15:05:25.982599       1 pv_controller.go:1677] volume "pvc-c86a6924-1017-4bb5-adc0-926c863c3bf0" provisioned for claim "azurefile-4147/pvc-2l556"
I0520 15:05:25.983056       1 event.go:291] "Event occurred" object="azurefile-4147/pvc-2l556" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ProvisioningSucceeded" message="Successfully provisioned volume pvc-c86a6924-1017-4bb5-adc0-926c863c3bf0 using kubernetes.io/azure-file"
I0520 15:05:25.985177       1 pv_controller.go:879] volume "pvc-c86a6924-1017-4bb5-adc0-926c863c3bf0" entered phase "Bound"
I0520 15:05:25.985201       1 pv_controller.go:982] volume "pvc-c86a6924-1017-4bb5-adc0-926c863c3bf0" bound to claim "azurefile-4147/pvc-2l556"
I0520 15:05:25.995378       1 pv_controller.go:823] claim "azurefile-4147/pvc-2l556" entered phase "Bound"
... skipping 8 lines ...
I0520 15:05:35.126930       1 pv_controller.go:879] volume "pvc-ecf60fd2-ecc7-4e61-9ba9-0d38fe9fd082" entered phase "Bound"
I0520 15:05:35.126957       1 pv_controller.go:982] volume "pvc-ecf60fd2-ecc7-4e61-9ba9-0d38fe9fd082" bound to claim "azurefile-1957/pvc-4ssxh"
I0520 15:05:35.134358       1 pv_controller.go:823] claim "azurefile-1957/pvc-4ssxh" entered phase "Bound"
I0520 15:05:36.984340       1 replica_set.go:558] "Too few replicas" replicaSet="azurefile-1957/azurefile-volume-tester-4p6j7-6d649958b5" need=1 creating=1
I0520 15:05:36.987228       1 event.go:291] "Event occurred" object="azurefile-1957/azurefile-volume-tester-4p6j7" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set azurefile-volume-tester-4p6j7-6d649958b5 to 1"
I0520 15:05:36.991783       1 event.go:291] "Event occurred" object="azurefile-1957/azurefile-volume-tester-4p6j7-6d649958b5" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: azurefile-volume-tester-4p6j7-6d649958b5-q6tsp"
I0520 15:05:37.002286       1 deployment_controller.go:490] "Error syncing deployment" deployment="azurefile-1957/azurefile-volume-tester-4p6j7" err="Operation cannot be fulfilled on deployments.apps \"azurefile-volume-tester-4p6j7\": the object has been modified; please apply your changes to the latest version and try again"
E0520 15:05:39.132818       1 tokens_controller.go:262] error synchronizing serviceaccount azurefile-4147/default: secrets "default-token-dbf2x" is forbidden: unable to create new content in namespace azurefile-4147 because it is being terminated
I0520 15:05:40.807238       1 replica_set.go:558] "Too few replicas" replicaSet="azurefile-1957/azurefile-volume-tester-4p6j7-6d649958b5" need=1 creating=1
I0520 15:05:40.816062       1 event.go:291] "Event occurred" object="azurefile-1957/azurefile-volume-tester-4p6j7-6d649958b5" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: azurefile-volume-tester-4p6j7-6d649958b5-hhh4w"
I0520 15:05:44.248667       1 namespace_controller.go:185] Namespace has been deleted azurefile-4147
I0520 15:06:24.667464       1 deployment_controller.go:583] "Deployment has been deleted" deployment="azurefile-1957/azurefile-volume-tester-4p6j7"
I0520 15:06:24.667820       1 garbagecollector.go:471] "Processing object" object="azurefile-1957/azurefile-volume-tester-4p6j7-6d649958b5" objectUID=0dc8a7c1-805c-45a3-8a0a-20e2a5ea6632 kind="ReplicaSet" virtual=false
I0520 15:06:24.688980       1 garbagecollector.go:580] "Deleting object" object="azurefile-1957/azurefile-volume-tester-4p6j7-6d649958b5" objectUID=0dc8a7c1-805c-45a3-8a0a-20e2a5ea6632 kind="ReplicaSet" propagationPolicy=Background
... skipping 37 lines ...
I0520 15:08:12.122596       1 pvc_protection_controller.go:291] "PVC is unused" PVC="azurefile-5561/pvc-dj5rh"
I0520 15:08:12.130749       1 pv_controller.go:640] volume "pvc-492184e8-ae78-4831-8a37-6be170fc6d62" is released and reclaim policy "Delete" will be executed
I0520 15:08:12.133994       1 pv_controller.go:879] volume "pvc-492184e8-ae78-4831-8a37-6be170fc6d62" entered phase "Released"
I0520 15:08:12.135913       1 pv_controller.go:1341] isVolumeReleased[pvc-492184e8-ae78-4831-8a37-6be170fc6d62]: volume is released
I0520 15:08:12.236934       1 pv_controller.go:1436] volume "pvc-492184e8-ae78-4831-8a37-6be170fc6d62" deleted
I0520 15:08:12.245862       1 pv_controller_base.go:505] deletion of claim "azurefile-5561/pvc-dj5rh" was already processed
E0520 15:08:24.825020       1 tokens_controller.go:262] error synchronizing serviceaccount azurefile-953/default: secrets "default-token-r8wft" is forbidden: unable to create new content in namespace azurefile-953 because it is being terminated
E0520 15:08:25.546609       1 tokens_controller.go:262] error synchronizing serviceaccount azurefile-3033/default: secrets "default-token-lsn6p" is forbidden: unable to create new content in namespace azurefile-3033 because it is being terminated
E0520 15:08:26.404019       1 tokens_controller.go:262] error synchronizing serviceaccount azurefile-9336/default: secrets "default-token-jlbz8" is forbidden: unable to create new content in namespace azurefile-9336 because it is being terminated
I0520 15:08:27.451580       1 namespace_controller.go:185] Namespace has been deleted azurefile-5561
E0520 15:08:27.943008       1 tokens_controller.go:262] error synchronizing serviceaccount azurefile-1351/default: secrets "default-token-l8qcs" is forbidden: unable to create new content in namespace azurefile-1351 because it is being terminated
I0520 15:08:28.290768       1 namespace_controller.go:185] Namespace has been deleted azurefile-4376
E0520 15:08:28.758939       1 tokens_controller.go:262] error synchronizing serviceaccount azurefile-3410/default: secrets "default-token-fnpqz" is forbidden: unable to create new content in namespace azurefile-3410 because it is being terminated
I0520 15:08:29.086039       1 namespace_controller.go:185] Namespace has been deleted azurefile-1577
E0520 15:08:29.477529       1 tokens_controller.go:262] error synchronizing serviceaccount azurefile-8553/default: secrets "default-token-n6q9f" is forbidden: unable to create new content in namespace azurefile-8553 because it is being terminated
I0520 15:08:29.892706       1 namespace_controller.go:185] Namespace has been deleted azurefile-953
I0520 15:08:30.642144       1 namespace_controller.go:185] Namespace has been deleted azurefile-3033
I0520 15:08:31.467426       1 namespace_controller.go:185] Namespace has been deleted azurefile-9336
E0520 15:08:31.743982       1 tokens_controller.go:262] error synchronizing serviceaccount azurefile-493/default: secrets "default-token-2fcrv" is forbidden: unable to create new content in namespace azurefile-493 because it is being terminated
2021/05/20 15:08:31 ===================================================

JUnit report was created: /logs/artifacts/junit_01.xml

Ran 6 of 31 Specs in 351.851 seconds
SUCCESS! -- 6 Passed | 0 Failed | 0 Pending | 25 Skipped
--- PASS: TestE2E (351.85s)
PASS
ok  	sigs.k8s.io/azurefile-csi-driver/test/e2e	351.908s
2021/05/20 15:08:32 process.go:155: Step 'make e2e-test' finished in 7m32.90189425s
2021/05/20 15:08:32 aksengine_helpers.go:424: downloading /root/tmp220733510/log-dump.sh from https://raw.githubusercontent.com/kubernetes-sigs/cloud-provider-azure/master/hack/log-dump/log-dump.sh
2021/05/20 15:08:32 util.go:68: curl https://raw.githubusercontent.com/kubernetes-sigs/cloud-provider-azure/master/hack/log-dump/log-dump.sh
... skipping 43 lines ...
2021/05/20 15:09:06 process.go:153: Running: chmod +x /root/tmp220733510/win-ci-logs-collector.sh
2021/05/20 15:09:06 process.go:155: Step 'chmod +x /root/tmp220733510/win-ci-logs-collector.sh' finished in 1.326774ms
2021/05/20 15:09:06 process.go:153: Running: bash -c /root/tmp220733510/win-ci-logs-collector.sh kubetest-nhjgubxi.centralus.cloudapp.azure.com /root/tmp220733510 /root/.ssh/id_rsa
ssh key file /root/.ssh/id_rsa does not exist. Exiting.
2021/05/20 15:09:06 process.go:155: Step 'bash -c /root/tmp220733510/win-ci-logs-collector.sh kubetest-nhjgubxi.centralus.cloudapp.azure.com /root/tmp220733510 /root/.ssh/id_rsa' finished in 5.286697ms
2021/05/20 15:09:06 aksengine.go:1141: Deleting resource group: kubetest-nhjgubxi.
{"component":"entrypoint","file":"prow/entrypoint/run.go:169","func":"k8s.io/test-infra/prow/entrypoint.Options.ExecuteProcess","level":"error","msg":"Entrypoint received interrupt: terminated","severity":"error","time":"2021-05-20T15:10:46Z"}