PR AllenZMC: fix defer in loop, maybe resource leak
Result: ABORTED
Tests: 0 failed / 80 succeeded
Started: 2022-05-13 14:54
Elapsed: 1h2m
Revision: bedd0839a16a9b423324d91d4750ecb8a7a1ce94
Refs: 109830
job-version: v1.25.0-alpha.0.484+73ecbb1f764192
kubetest-version:
revision: v1.25.0-alpha.0.484+73ecbb1f764192
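The PR under test targets the classic defer-in-a-loop pitfall: a `defer` inside a loop body does not run until the enclosing function returns, so every resource opened in the loop stays held until then. A minimal Go sketch of the general fix, wrapping each iteration in a closure (the `processAll` helper and its file-based workload are hypothetical, not taken from the actual PR):

```go
package main

import (
	"fmt"
	"os"
)

// processAll opens and handles each file in turn. A `defer f.Close()`
// placed directly in the loop body would not run until processAll
// returns, holding every file open at once; wrapping each iteration
// in a closure scopes the defer to a single file.
func processAll(paths []string) error {
	for _, p := range paths {
		if err := func() error {
			f, err := os.Open(p)
			if err != nil {
				return err
			}
			defer f.Close() // runs at the end of this iteration's closure
			// ... work with f ...
			return nil
		}(); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	// Exercise the helper on a real temporary file.
	tmp, err := os.CreateTemp("", "defer-demo")
	if err != nil {
		panic(err)
	}
	tmp.Close()
	defer os.Remove(tmp.Name())
	fmt.Println(processAll([]string{tmp.Name()})) // prints "<nil>"
}
```

An equivalent fix is to factor the loop body into a named helper function; the closure form shown here keeps the change local to the loop.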

No Test Failures!



Error lines from build-log.txt

... skipping 718 lines ...
Looking for address 'e2e-109830-95a39-master-ip'
Looking for address 'e2e-109830-95a39-master-internal-ip'
Using master: e2e-109830-95a39-master (external IP: 34.148.173.87; internal IP: 10.40.0.2)
Waiting up to 300 seconds for cluster initialization.

  This will continually check to see if the API for kubernetes is reachable.
  This may time out if there was some uncaught error during start up.

Kubernetes cluster created.
Cluster "k8s-infra-e2e-boskos-scale-25_e2e-109830-95a39" set.
User "k8s-infra-e2e-boskos-scale-25_e2e-109830-95a39" set.
Context "k8s-infra-e2e-boskos-scale-25_e2e-109830-95a39" created.
Switched to context "k8s-infra-e2e-boskos-scale-25_e2e-109830-95a39".
... skipping 227 lines ...
e2e-109830-95a39-minion-group-zsc9   Ready                         <none>   62s   v1.25.0-alpha.0.484+73ecbb1f764192
e2e-109830-95a39-minion-group-zx5l   Ready                         <none>   62s   v1.25.0-alpha.0.484+73ecbb1f764192
e2e-109830-95a39-minion-heapster     Ready                         <none>   74s   v1.25.0-alpha.0.484+73ecbb1f764192
Warning: v1 ComponentStatus is deprecated in v1.19+
Validate output:
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE                         ERROR
etcd-1               Healthy   {"health":"true","reason":""}   
etcd-0               Healthy   {"health":"true","reason":""}   
controller-manager   Healthy   ok                              
scheduler            Healthy   ok                              
Cluster validation encountered some problems, but cluster should be in working order
...ignoring non-fatal errors in validate-cluster
Done, listing cluster services:

Kubernetes control plane is running at https://34.148.173.87
GLBCDefaultBackend is running at https://34.148.173.87/api/v1/namespaces/kube-system/services/default-http-backend:http/proxy
CoreDNS is running at https://34.148.173.87/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://34.148.173.87/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy
... skipping 62 lines ...
42f561e280bbd6e6a80bf5220914979b298ff5b1 - Wed May 11 08:58:51 2022 (Merge pull request #2063 from yangjunmyfm192085/metrics-server-add-provider)
dbf7065c59ba23649cf3d32270f876d22c87f9a9 - Wed May 11 05:01:05 2022 (metrics-server support kind and local provider)
d51ecd9eae447b647a6b2292ebecb11a01289647 - Tue May 10 16:01:43 2022 (Merge pull request #2061 from oprinmarius/patch/aks-provider)
1746a9fc3d7ca818884af7d2991c0e2e4a781c04 - Tue May 10 14:55:05 2022 (Enable Prometehus server for AKS provider)
45295fd212a7a4ed72860e4740ca5a48702e20e5 - Mon May 9 19:32:34 2022 (Merge pull request #2057 from deads2k/removed-api)
746a3ca8eff9cae30b587ed46150ae585d7973a0 - Mon May 9 15:27:19 2022 (Merge pull request #2060 from yangjunmyfm192085/modifyprint)
6c19fcc93296bce8cbecee734b1295f24ad22ec3 - Mon May 9 14:31:35 2022 (clusterloader2: error while calling prometheus api, part of the content shows wrong format)
7d0a48be78ed5d7e63798006eaeb8582ec8050e9 - Thu May 5 16:58:45 2022 (stop using PSP API which is removed in 1.25)
02574acce8edda0c6c490d9404972820ceb697cc - Thu May 5 15:42:20 2022 (Merge pull request #2050 from tosi3k/container-problems-huge-service)
1f00a7787f9f47bad251d4e791a0117ad6fe7e1d - Tue Apr 26 07:00:52 2022 (Merge pull request #2052 from wojtek-t/unify_pod_sizes_5)
COMMAND: /home/prow/go/src/k8s.io/perf-tests/clusterloader2 && ./run-e2e.sh --nodes=100 --provider=gce --experimental-gcp-snapshot-prometheus-disk=true --experimental-prometheus-disk-snapshot-name=pull-kubernetes-e2e-gce-100-performance-1525126171174375424 --experimental-prometheus-snapshot-to-report-dir=true --prometheus-scrape-kubelets=true --prometheus-scrape-node-exporter --report-dir=/logs/artifacts --testconfig=testing/load/config.yaml --testconfig=testing/huge-service/config.yaml --testoverrides=./testing/experiments/enable_restart_count_check.yaml --testoverrides=./testing/experiments/use_simple_latency_query.yaml --testoverrides=./testing/overrides/load_throughput.yaml
namespace/gce-pd-csi-driver created
serviceaccount/csi-gce-pd-controller-sa created
... skipping 2940 lines ...
Specify --start=60661 in the next get-serial-port-output invocation to get only the new output starting from here.
scp: /var/log/cluster-autoscaler.log*: No such file or directory
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/kubelet.cov*: No such file or directory
scp: /var/log/cl2-**: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
Dumping logs from nodes to GCS directly at 'gs://sig-scalability-logs/pull-kubernetes-e2e-gce-100-performance/1525126171174375424' using logexporter
namespace/logexporter created
secret/google-service-account created
daemonset.apps/logexporter created
Listing marker files (gs://sig-scalability-logs/pull-kubernetes-e2e-gce-100-performance/1525126171174375424/logexported-nodes-registry) for successful nodes...
CommandException: One or more URLs matched no objects.
... skipping 385 lines ...