Result: FAILURE
Tests: 1 failed / 5 succeeded
Started: 2020-05-18 17:02
Elapsed: 1h33m
Revision: v1.19.0-alpha.3.391+a3d532a3f73ceb
Builder: gke-scalability-build-cpu16-disk1000-d2de3eff-w3sm
links: {u'resultstore': {u'url': u'https://source.cloud.google.com/results/invocations/49a51f22-189b-4138-8efc-a1d0223e0da7/targets/test'}}
pod: 47f4ed69-9929-11ea-aa9b-c69591b579a5
resultstore: https://source.cloud.google.com/results/invocations/49a51f22-189b-4138-8efc-a1d0223e0da7/targets/test
infra-commit: e41d9b64e
job-version: v1.19.0-alpha.3.391+a3d532a3f73ceb
repo: k8s.io/kubernetes
repo-commit: a3d532a3f73cebd23e7037f6688a93f98b22bf70
repos: {u'k8s.io/kubernetes': u'master', u'k8s.io/perf-tests': u'master'}
revision: v1.19.0-alpha.3.391+a3d532a3f73ceb

Test Failures


Up (40m57s)

error during ./hack/e2e-internal/e2e-up.sh: exit status 1
				from junit_runner.xml




Error lines from build-log.txt

... skipping 607 lines ...
W0518 17:16:26.218] Trying to find master named 'gce-scale-cluster-master'
W0518 17:16:26.218] Looking for address 'gce-scale-cluster-master-ip'
W0518 17:16:27.028] Looking for address 'gce-scale-cluster-master-internal-ip'
I0518 17:16:27.868] Waiting up to 300 seconds for cluster initialization.
I0518 17:16:27.868] 
I0518 17:16:27.869]   This will continually check to see if the API for kubernetes is reachable.
I0518 17:16:27.869]   This may time out if there was some uncaught error during start up.
I0518 17:16:27.869] 
W0518 17:16:27.969] Using master: gce-scale-cluster-master (external IP: 34.73.65.188; internal IP: 10.40.0.2)
I0518 17:16:28.202] Kubernetes cluster created.
W0518 17:16:28.303] Using user provided NODE_IP_RANGE: 10.40.0.0/19
W0518 17:16:28.390] Using user provided NODE_IP_RANGE: 10.40.0.0/19
I0518 17:16:28.491] Cluster "kubernetes-scale_gce-scale-cluster" set.
... skipping 1836 lines ...
W0518 17:46:52.320] scp: /var/log/cluster-autoscaler.log*: No such file or directory
W0518 17:46:52.389] scp: /var/log/konnectivity-server.log*: No such file or directory
W0518 17:46:52.390] scp: /var/log/fluentd.log*: No such file or directory
W0518 17:46:52.390] scp: /var/log/kubelet.cov*: No such file or directory
W0518 17:46:52.390] scp: /var/log/cl2-**: No such file or directory
W0518 17:46:52.390] scp: /var/log/startupscript.log*: No such file or directory
W0518 17:46:52.394] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
I0518 17:46:52.495] Dumping logs from nodes to GCS directly at 'gs://kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-scale-performance/1262428312236462085/artifacts' using logexporter
I0518 17:46:52.495] Detecting nodes in the cluster
W0518 17:47:05.425] Using user provided NODE_IP_RANGE: 10.40.0.0/19
I0518 17:47:05.894] namespace/logexporter created
I0518 17:47:05.932] secret/google-service-account created
I0518 17:47:05.969] daemonset.apps/logexporter created
... skipping 4554 lines ...
I0518 18:18:15.769] Logexporter didn't succeed on node gce-scale-cluster-minion-group-1-0jx5. Queuing it for logdump through SSH.
I0518 18:18:15.769] Logexporter didn't succeed on node gce-scale-cluster-minion-group-1-0m8x. Queuing it for logdump through SSH.
I0518 18:18:15.769] Logexporter didn't succeed on node gce-scale-cluster-minion-group-1-0pl5. Queuing it for logdump through SSH.
I0518 18:18:15.769] Logexporter didn't succeed on node gce-scale-cluster-minion-group-1-0s7h. Queuing it for logdump through SSH.
I0518 18:18:15.770] Logexporter didn't succeed on node gce-scale-cluster-minion-group-1-0v76. Queuing it for logdump through SSH.
I0518 18:18:15.770] Shutting down test cluster in background.
W0518 18:18:15.870] ./cluster/log-dump/log-dump.sh: line 583: echo: write error: Resource temporarily unavailable
W0518 18:18:15.871] 2020/05/18 18:18:15 process.go:155: Step './cluster/log-dump/log-dump.sh /workspace/_artifacts gs://kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-scale-performance/1262428312236462085/artifacts' finished in 32m25.456045766s
W0518 18:18:15.871] 2020/05/18 18:18:15 process.go:153: Running: ./hack/e2e-internal/e2e-down.sh
W0518 18:18:15.871] Project: kubernetes-scale
W0518 18:18:15.871] Network Project: kubernetes-scale
W0518 18:18:15.871] Zone: us-east1-b
W0518 18:18:17.744] Using user provided NODE_IP_RANGE: 10.40.0.0/19
... skipping 57 lines ...
I0518 18:35:03.843] Property "users.kubernetes-scale_gce-scale-cluster-basic-auth" unset.
I0518 18:35:03.857] Property "contexts.kubernetes-scale_gce-scale-cluster" unset.
I0518 18:35:03.861] Cleared config for kubernetes-scale_gce-scale-cluster from /workspace/.kube/config
I0518 18:35:03.861] Done
W0518 18:35:03.879] 2020/05/18 18:35:03 process.go:155: Step './hack/e2e-internal/e2e-down.sh' finished in 16m48.16912803s
W0518 18:35:03.879] 2020/05/18 18:35:03 process.go:96: Saved XML output to /workspace/_artifacts/junit_runner.xml.
W0518 18:35:03.879] 2020/05/18 18:35:03 main.go:312: Something went wrong: starting e2e cluster: error during ./hack/e2e-internal/e2e-up.sh: exit status 1
W0518 18:35:03.879] Traceback (most recent call last):
W0518 18:35:03.880]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 720, in <module>
W0518 18:35:03.880]     main(parse_args())
W0518 18:35:03.880]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 570, in main
W0518 18:35:03.880]     mode.start(runner_args)
W0518 18:35:03.880]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 228, in start
W0518 18:35:03.880]     check_env(env, self.command, *args)
W0518 18:35:03.880]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 111, in check_env
W0518 18:35:03.880]     subprocess.check_call(cmd, env=env)
W0518 18:35:03.880]   File "/usr/lib/python2.7/subprocess.py", line 190, in check_call
W0518 18:35:03.881]     raise CalledProcessError(retcode, cmd)
W0518 18:35:03.882] subprocess.CalledProcessError: Command '('kubetest', '--dump=/workspace/_artifacts', '--gcp-service-account=/etc/service-account/service-account.json', '--up', '--down', '--provider=gce', '--cluster=gce-scale-cluster', '--gcp-network=gce-scale-cluster', '--extract=ci/latest', '--gcp-nodes=5000', '--gcp-project=kubernetes-scale', '--gcp-zone=us-east1-b', '--test-cmd=/go/src/k8s.io/perf-tests/run-e2e.sh', '--test-cmd-args=cluster-loader2', '--test-cmd-args=--experimental-gcp-snapshot-prometheus-disk=true', '--test-cmd-args=--experimental-prometheus-disk-snapshot-name=ci-kubernetes-e2e-gce-scale-performance-1262428312236462085', '--test-cmd-args=--nodes=5000', '--test-cmd-args=--provider=gce', '--test-cmd-args=--report-dir=/workspace/_artifacts', '--test-cmd-args=--testconfig=testing/density/config.yaml', '--test-cmd-args=--testconfig=testing/load/config.yaml', '--test-cmd-args=--testconfig=testing/access-tokens/config.yaml', '--test-cmd-args=--testoverrides=./testing/density/5000_nodes/override.yaml', '--test-cmd-args=--testoverrides=./testing/experiments/enable_prometheus_api_responsiveness.yaml', '--test-cmd-args=--testoverrides=./testing/experiments/enable_restart_count_check.yaml', '--test-cmd-args=--testoverrides=./testing/experiments/ignore_known_gce_container_restarts.yaml', '--test-cmd-name=ClusterLoaderV2', '--timeout=1050m', '--logexporter-gcs-path=gs://kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-scale-performance/1262428312236462085/artifacts')' returned non-zero exit status 1
E0518 18:35:03.882] Command failed
I0518 18:35:03.882] process 339 exited with code 1 after 91.2m
E0518 18:35:03.882] FAIL: ci-kubernetes-e2e-gce-scale-performance
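
For readability, the command tuple in the CalledProcessError above is the kubetest invocation that scenarios/kubernetes_e2e.py hands to subprocess.check_call. The sketch below is reconstructed only from that traceback, not from the actual source: the environment handling is simplified and the flag list is abbreviated (the full argument list, including every --test-cmd-args flag, is the one shown in the log).

    # Minimal sketch of the call path shown in the traceback above.
    # Not the real scenarios/kubernetes_e2e.py; env handling is simplified.
    import os
    import subprocess

    def check_env(env, *cmd):
        """Run cmd with the given environment; raises CalledProcessError on failure."""
        # Mirrors line 111 of the traceback: subprocess.check_call(cmd, env=env)
        subprocess.check_call(cmd, env=env)

    # Abbreviated form of the failing kubetest invocation. Running it requires
    # kubetest on PATH and GCP credentials, so treat it as illustrative only;
    # the omitted flags are listed in the CalledProcessError message above.
    check_env(
        os.environ.copy(),
        'kubetest',
        '--up', '--down',
        '--provider=gce',
        '--cluster=gce-scale-cluster',
        '--gcp-network=gce-scale-cluster',
        '--gcp-project=kubernetes-scale',
        '--gcp-zone=us-east1-b',
        '--gcp-nodes=5000',
        '--extract=ci/latest',
        '--test-cmd=/go/src/k8s.io/perf-tests/run-e2e.sh',
        '--test-cmd-args=cluster-loader2',
        '--dump=/workspace/_artifacts',
        '--timeout=1050m',
    )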
I0518 18:35:03.883] Call:  gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json
W0518 18:35:04.442] Activated service account credentials for: [pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com]
I0518 18:35:04.505] process 132510 exited with code 0 after 0.0m
I0518 18:35:04.505] Call:  gcloud config get-value account
I0518 18:35:04.975] process 132524 exited with code 0 after 0.0m
I0518 18:35:04.975] Will upload results to gs://kubernetes-jenkins/logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
... skipping 21 lines ...
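
The master and node logs referenced above were exported to the GCS artifacts path shown in the log (gs://kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-scale-performance/1262428312236462085/artifacts). A minimal sketch of listing those artifacts with the google-cloud-storage Python client, assuming the bucket permits anonymous reads; the bucket and prefix come from the log, while the client usage is not part of this job:

    # Sketch only: list the artifacts this run dumped to GCS.
    # Anonymous access is an assumption; authenticate if the bucket requires it.
    # Requires: pip install google-cloud-storage
    from google.cloud import storage

    client = storage.Client.create_anonymous_client()
    prefix = "logs/ci-kubernetes-e2e-gce-scale-performance/1262428312236462085/artifacts/"
    for blob in client.list_blobs("kubernetes-jenkins", prefix=prefix):
        print(blob.name, blob.size)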