PR mauriciopoppe: Multi linux arch and multi windows distro builds
Result FAILURE
Tests 1 failed / 10 succeeded
Started 2021-10-20 19:02
Elapsed 16m44s
Revision d667eee4b4846469452eede4e8097e26cff582c4
Refs 273
job-version v1.23.0-alpha.3.434+18104ecf1f5736
kubetest-version
revision v1.23.0-alpha.3.434+18104ecf1f5736

Test Failures


kubetest 16s

error during bash /home/prow/go/src/sigs.k8s.io/sig-storage-local-static-provisioner/hack/run-e2e.sh: exit status 2
				from junit_runner.xml
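
This failure is what kubetest serialized into junit_runner.xml (saved at the end of the run, see the log below). A minimal sketch of the entry, assuming the standard JUnit XML schema; attribute values here are illustrative, not copied from the artifact:

    <testsuite tests="11" failures="1">
      <testcase classname="e2e.go" name="kubetest" time="16">
        <failure>error during bash /home/prow/go/src/sigs.k8s.io/sig-storage-local-static-provisioner/hack/run-e2e.sh: exit status 2</failure>
      </testcase>
      <!-- plus one <testcase> per passing step -->
    </testsuite>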




Error lines from build-log.txt

... skipping 348 lines ...
Trying to find master named 'e2e-test-prow-master'
Looking for address 'e2e-test-prow-master-ip'
Using master: e2e-test-prow-master (external IP: 35.202.250.40; internal IP: (not set))
Waiting up to 300 seconds for cluster initialization.

  This will continually check to see if the API for kubernetes is reachable.
  This may time out if there was some uncaught error during start up.

.................Kubernetes cluster created.
Cluster "gce-up-g1-4-glat-up-mas_e2e-test-prow" set.
User "gce-up-g1-4-glat-up-mas_e2e-test-prow" set.
Context "gce-up-g1-4-glat-up-mas_e2e-test-prow" created.
Switched to context "gce-up-g1-4-glat-up-mas_e2e-test-prow".
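
The four messages above are the standard output of kubectl config subcommands; kube-up.sh effectively runs something like the following (flags illustrative, not taken from the log):

    kubectl config set-cluster gce-up-g1-4-glat-up-mas_e2e-test-prow --server=https://35.202.250.40    # Cluster "..." set.
    kubectl config set-credentials gce-up-g1-4-glat-up-mas_e2e-test-prow                               # User "..." set.
    kubectl config set-context gce-up-g1-4-glat-up-mas_e2e-test-prow \
      --cluster=gce-up-g1-4-glat-up-mas_e2e-test-prow \
      --user=gce-up-g1-4-glat-up-mas_e2e-test-prow                                                     # Context "..." created.
    kubectl config use-context gce-up-g1-4-glat-up-mas_e2e-test-prow                                   # Switched to context "...".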
... skipping 25 lines ...
e2e-test-prow-minion-group-0wf7   Ready                      <none>   20s   v1.23.0-alpha.3.434+18104ecf1f5736
e2e-test-prow-minion-group-hjmt   Ready                      <none>   21s   v1.23.0-alpha.3.434+18104ecf1f5736
e2e-test-prow-minion-group-wn3d   Ready                      <none>   20s   v1.23.0-alpha.3.434+18104ecf1f5736
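
The three Ready rows above (the header row fell in the skipped lines) are ordinary node-listing output; the same check can be reproduced against the cluster with:

    kubectl get nodes    # columns: NAME, STATUS, ROLES, AGE, VERSION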
Warning: v1 ComponentStatus is deprecated in v1.19+
Validate output:
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE                         ERROR
etcd-0               Healthy   {"health":"true","reason":""}   
etcd-1               Healthy   {"health":"true","reason":""}   
controller-manager   Healthy   ok                              
scheduler            Healthy   ok                              
Cluster validation succeeded
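
The health table and the repeated deprecation warning come from the ComponentStatus API, deprecated since v1.19; the validation step boils down to:

    kubectl get componentstatuses    # a.k.a. `kubectl get cs`; emits the v1.19+ deprecation warning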
Done, listing cluster services:
... skipping 212 lines ...
#6 10.24 Reading package lists...
#6 11.16 Reading package lists...
#6 CANCELED

#12 [builder 5/5] RUN make build-linux OS="linux" ARCH="amd64"
#12 0.517 /bin/sh: make: not found
#12 ERROR: executor failed running [/bin/sh -c make build-linux OS="${OS}" ARCH="${ARCH}"]: exit code: 127
------
 > [builder 5/5] RUN make build-linux OS="linux" ARCH="amd64":
#12 0.517 /bin/sh: make: not found
------
error: failed to solve: executor failed running [/bin/sh -c make build-linux OS="${OS}" ARCH="${ARCH}"]: exit code: 127
make[1]: *** [Makefile:69: build-container-linux-amd64] Error 1
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/sig-storage-local-static-provisioner'
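
Exit code 127 from /bin/sh means "command not found": the builder stage's base image ships no make binary, so the RUN line can never succeed. A minimal sketch of a fix, assuming an Alpine-based Go builder image (an assumption; the actual Dockerfile is not shown in this log):

    # Hypothetical builder stage; base image and version are assumptions.
    FROM golang:1.17-alpine AS builder
    ARG OS=linux
    ARG ARCH=amd64
    RUN apk add --no-cache make bash    # install make before the build step needs it
    WORKDIR /src
    COPY . .
    RUN make build-linux OS="${OS}" ARCH="${ARCH}"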
2021/10/20 19:10:46 process.go:155: Step 'bash /home/prow/go/src/sigs.k8s.io/sig-storage-local-static-provisioner/hack/run-e2e.sh' finished in 16.497774435s
2021/10/20 19:10:46 e2e.go:565: Dumping logs locally to: /logs/artifacts
2021/10/20 19:10:46 process.go:153: Running: ./cluster/log-dump/log-dump.sh /logs/artifacts
Checking for custom logdump instances, if any
----------------------------------------------------------------------------------------------------
... skipping 15 lines ...

Specify --start=53021 in the next get-serial-port-output invocation to get only the new output starting from here.
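
The --start hint refers to gcloud's serial-console dump, which the log-dump step uses to capture boot output; resuming from the given offset looks like this (the zone flag is an assumption, it is not shown in the log):

    gcloud compute instances get-serial-port-output e2e-test-prow-master \
      --start=53021 --zone "${ZONE}"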
scp: /var/log/cluster-autoscaler.log*: No such file or directory
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/kubelet.cov*: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
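
These scp failures are benign: log-dump.sh asks for a fixed list of optional logs (cluster autoscaler, fluentd, kubelet coverage, startup script) and gcloud exits non-zero whenever a glob matches nothing on the node; the same pattern repeats for each minion below. The invocation is of this shape (paths illustrative):

    gcloud compute scp --zone "${ZONE}" \
      "e2e-test-prow-master:/var/log/cluster-autoscaler.log*" /logs/artifacts/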
Dumping logs from nodes locally to '/logs/artifacts'
Detecting nodes in the cluster
Changing logfiles to be world-readable for download
Changing logfiles to be world-readable for download
Changing logfiles to be world-readable for download
Copying 'kube-proxy.log containers/konnectivity-agent-*.log fluentd.log node-problem-detector.log kubelet.cov startupscript.log' from e2e-test-prow-minion-group-0wf7
... skipping 6 lines ...

Specify --start=103490 in the next get-serial-port-output invocation to get only the new output starting from here.
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/node-problem-detector.log*: No such file or directory
scp: /var/log/kubelet.cov*: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/node-problem-detector.log*: No such file or directory
scp: /var/log/kubelet.cov*: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/node-problem-detector.log*: No such file or directory
scp: /var/log/kubelet.cov*: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
INSTANCE_GROUPS=e2e-test-prow-minion-group
NODE_NAMES=e2e-test-prow-minion-group-0wf7 e2e-test-prow-minion-group-hjmt e2e-test-prow-minion-group-wn3d
Failures for e2e-test-prow-minion-group (if any):
2021/10/20 19:12:22 process.go:155: Step './cluster/log-dump/log-dump.sh /logs/artifacts' finished in 1m35.786872452s
2021/10/20 19:12:22 process.go:153: Running: ./hack/e2e-internal/e2e-down.sh
Project: gce-up-g1-4-glat-up-mas
... skipping 43 lines ...
Property "users.gce-up-g1-4-glat-up-mas_e2e-test-prow-basic-auth" unset.
Property "contexts.gce-up-g1-4-glat-up-mas_e2e-test-prow" unset.
Cleared config for gce-up-g1-4-glat-up-mas_e2e-test-prow from /root/.kube/config
Done
2021/10/20 19:18:44 process.go:155: Step './hack/e2e-internal/e2e-down.sh' finished in 6m21.978055041s
2021/10/20 19:18:44 process.go:96: Saved XML output to /logs/artifacts/junit_runner.xml.
2021/10/20 19:18:44 main.go:331: Something went wrong: encountered 1 errors: [error during bash /home/prow/go/src/sigs.k8s.io/sig-storage-local-static-provisioner/hack/run-e2e.sh: exit status 2]
2021/10/20 19:18:44 e2e.go:82: err: exit status 1
exit status 1
make: *** [Makefile:56: e2e] Error 1
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
Cleaning up after docker
Stopping Docker: dockerProgram process in pidfile '/var/run/docker-ssd.pid', 1 process(es), refused to die.
... skipping 3 lines ...