PR: alculquicondor: Move Snapshot to internal/cache
Result: FAILURE
Tests: 1 failed / 1 succeeded
Started: 2020-01-13 21:14
Elapsed: 5m10s
Revision:
Builder: gke-prow-default-pool-cf4891d4-p8rj
Refs: master:e265afa2, 87165:70691c87
pod: 93e32cac-3649-11ea-92f2-067c2728f436
infra-commit: 9146e32b5
job-version: v1.18.0-alpha.1.643+a2ea280df5ad9b
repo: k8s.io/kubernetes
repo-commit: a2ea280df5ad9bef9a6748c028500b69fbb5fda8
repos: k8s.io/kubernetes: master:e265afa2cdfb2b08c05aa3aeddaacdd26f22746e,87165:70691c871472767a7fc6a03c897585dfe911452d; k8s.io/perf-tests: master; k8s.io/release: master
revision: v1.18.0-alpha.1.643+a2ea280df5ad9b

Test Failures


Build 55s

error during make -C /go/src/k8s.io/kubernetes bazel-release: exit status 2
				from junit_runner.xml
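
The failed step can be re-run locally with the same make target kubetest invoked; the /go/src path below is this job's workspace layout, so substitute your own checkout path:

    make -C /go/src/k8s.io/kubernetes bazel-release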




Error lines from build-log.txt

... skipping 286 lines ...
W0113 21:18:11.988] Analyzing: target //build/release-tars:release-tars (1050 packages loaded, 4983 targets configured)
W0113 21:18:14.948] Analyzing: target //build/release-tars:release-tars (1657 packages loaded, 11255 targets configured)
W0113 21:18:18.382] Analyzing: target //build/release-tars:release-tars (1959 packages loaded, 15123 targets configured)
W0113 21:18:22.408] Analyzing: target //build/release-tars:release-tars (2781 packages loaded, 22856 targets configured)
W0113 21:18:28.733] Analyzing: target //build/release-tars:release-tars (3095 packages loaded, 27095 targets configured)
W0113 21:18:34.237] Analyzing: target //build/release-tars:release-tars (3096 packages loaded, 27095 targets configured)
W0113 21:18:35.855] ERROR: /go/src/k8s.io/kubernetes/pkg/scheduler/BUILD:3:1: no such package 'vendor/k8s.io/kubernetes/pkg/scheduler/nodeinfo/snapshot': BUILD file not found on package path and referenced by '//pkg/scheduler:go_default_library'
W0113 21:18:35.914] ERROR: Analysis of target '//build/release-tars:release-tars' failed; build aborted: no such package 'vendor/k8s.io/kubernetes/pkg/scheduler/nodeinfo/snapshot': BUILD file not found on package path
W0113 21:18:35.925] INFO: Elapsed time: 55.336s
W0113 21:18:35.926] INFO: 0 processes.
W0113 21:18:35.934] FAILED: Build did NOT complete successfully (3118 packages loaded, 27900 targets configured)
W0113 21:18:35.938] FAILED: Build did NOT complete successfully (3118 packages loaded, 27900 targets configured)
W0113 21:18:35.951] make: *** [Makefile:625: bazel-release] Error 1
W0113 21:18:35.953] 2020/01/13 21:18:35 process.go:155: Step 'make -C /go/src/k8s.io/kubernetes bazel-release' finished in 55.501973268s
W0113 21:18:35.954] 2020/01/13 21:18:35 util.go:265: Flushing memory.
I0113 21:18:36.054] make: Leaving directory '/go/src/k8s.io/kubernetes'
W0113 21:19:02.053] 2020/01/13 21:19:02 process.go:96: Saved XML output to /workspace/_artifacts/junit_runner.xml.
W0113 21:19:02.058] 2020/01/13 21:19:02 process.go:153: Running: bash -c . hack/lib/version.sh && KUBE_ROOT=. kube::version::get_version_vars && echo "${KUBE_GIT_VERSION-}"
W0113 21:19:09.370] 2020/01/13 21:19:09 process.go:155: Step 'bash -c . hack/lib/version.sh && KUBE_ROOT=. kube::version::get_version_vars && echo "${KUBE_GIT_VERSION-}"' finished in 7.316356438s
W0113 21:19:16.373] 2020/01/13 21:19:16 main.go:316: Something went wrong: failed to acquire k8s binaries: error during make -C /go/src/k8s.io/kubernetes bazel-release: exit status 2
W0113 21:19:16.379] Traceback (most recent call last):
W0113 21:19:16.379]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 778, in <module>
W0113 21:19:16.381]     main(parse_args())
W0113 21:19:16.381]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 626, in main
W0113 21:19:16.381]     mode.start(runner_args)
W0113 21:19:16.381]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 262, in start
W0113 21:19:16.382]     check_env(env, self.command, *args)
W0113 21:19:16.382]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 111, in check_env
W0113 21:19:16.382]     subprocess.check_call(cmd, env=env)
W0113 21:19:16.382]   File "/usr/lib/python2.7/subprocess.py", line 190, in check_call
W0113 21:19:16.386]     raise CalledProcessError(retcode, cmd)
W0113 21:19:16.389] subprocess.CalledProcessError: Command '('kubetest', '--dump=/workspace/_artifacts', '--gcp-service-account=/etc/service-account/service-account.json', '--build=bazel', '--stage=gs://kubernetes-release-pull/ci/pull-kubernetes-kubemark-e2e-gce-big', '--up', '--down', '--provider=gce', '--cluster=e2e-87165-ac87c', '--gcp-network=e2e-87165-ac87c', '--extract=local', '--gcp-master-size=n1-standard-4', '--gcp-node-size=n1-standard-8', '--gcp-nodes=7', '--gcp-project-type=scalability-presubmit-project', '--gcp-zone=us-east1-b', '--kubemark', '--kubemark-nodes=500', '--test_args=--ginkgo.focus=xxxx', '--test-cmd=/go/src/k8s.io/perf-tests/run-e2e.sh', '--test-cmd-args=cluster-loader2', '--test-cmd-args=--nodes=500', '--test-cmd-args=--provider=kubemark', '--test-cmd-args=--report-dir=/workspace/_artifacts', '--test-cmd-args=--testconfig=testing/density/config.yaml', '--test-cmd-args=--testconfig=testing/load/config.yaml', '--test-cmd-args=--testoverrides=./testing/experiments/enable_prometheus_api_responsiveness.yaml', '--test-cmd-args=--testoverrides=./testing/experiments/enable_restart_count_check.yaml', '--test-cmd-args=--testoverrides=./testing/experiments/use_simple_latency_query.yaml', '--test-cmd-args=--testoverrides=./testing/load/experimental/overrides/enable_configmaps.yaml', '--test-cmd-args=--testoverrides=./testing/load/experimental/overrides/enable_daemonsets.yaml', '--test-cmd-args=--testoverrides=./testing/load/experimental/overrides/enable_jobs.yaml', '--test-cmd-args=--testoverrides=./testing/load/experimental/overrides/enable_secrets.yaml', '--test-cmd-args=--testoverrides=./testing/load/experimental/overrides/enable_statefulsets.yaml', '--test-cmd-args=--testoverrides=./testing/load/kubemark/500_nodes/override.yaml', '--test-cmd-name=ClusterLoaderV2', '--timeout=100m', '--logexporter-gcs-path=gs://kubernetes-jenkins/pr-logs/pull/87165/pull-kubernetes-kubemark-e2e-gce-big/1216830764398678016/artifacts')' returned non-zero exit status 1
E0113 21:19:16.407] Command failed
I0113 21:19:16.407] process 739 exited with code 1 after 1.8m
E0113 21:19:16.408] FAIL: pull-kubernetes-kubemark-e2e-gce-big
I0113 21:19:16.408] Call:  gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json
W0113 21:19:18.834] Activated service account credentials for: [pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com]
I0113 21:19:18.985] process 1085 exited with code 0 after 0.0m
I0113 21:19:18.986] Call:  gcloud config get-value account
I0113 21:19:19.806] process 1098 exited with code 0 after 0.0m
I0113 21:19:19.806] Will upload results to gs://kubernetes-jenkins/pr-logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
I0113 21:19:19.807] Upload result and artifacts...
I0113 21:19:19.807] Gubernator results at https://gubernator.k8s.io/build/kubernetes-jenkins/pr-logs/pull/87165/pull-kubernetes-kubemark-e2e-gce-big/1216830764398678016
I0113 21:19:19.807] Call:  gsutil ls gs://kubernetes-jenkins/pr-logs/pull/87165/pull-kubernetes-kubemark-e2e-gce-big/1216830764398678016/artifacts
W0113 21:19:23.338] CommandException: One or more URLs matched no objects.
E0113 21:19:23.724] Command failed
I0113 21:19:23.724] process 1111 exited with code 1 after 0.1m
W0113 21:19:23.725] Remote dir gs://kubernetes-jenkins/pr-logs/pull/87165/pull-kubernetes-kubemark-e2e-gce-big/1216830764398678016/artifacts not exist yet
I0113 21:19:23.725] Call:  gsutil -m -q -o GSUtil:use_magicfile=True cp -r -c -z log,txt,xml /workspace/_artifacts gs://kubernetes-jenkins/pr-logs/pull/87165/pull-kubernetes-kubemark-e2e-gce-big/1216830764398678016/artifacts
I0113 21:19:28.116] process 1256 exited with code 0 after 0.1m
I0113 21:19:28.118] Call:  git rev-parse HEAD
I0113 21:19:28.130] process 1783 exited with code 0 after 0.0m
... skipping 21 lines ...
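
The Bazel analysis error above shows '//pkg/scheduler:go_default_library' still depending on 'vendor/k8s.io/kubernetes/pkg/scheduler/nodeinfo/snapshot', whose BUILD file no longer exists, which is consistent with this PR moving Snapshot out of pkg/scheduler/nodeinfo. A minimal sketch of the usual remediation for a moved Go package in this repository, assuming the hack/update-bazel.sh script present on master at the time (the script name is an assumption, not taken from this log):

    # After updating Go imports to the package's new location,
    # regenerate BUILD files so //pkg/scheduler references the new path
    ./hack/update-bazel.sh
    # Re-run the failing step locally to confirm Bazel analysis succeeds
    make -C /go/src/k8s.io/kubernetes bazel-release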