PR: mauriciopoppe: Multi linux arch and multi windows distro builds
Result: FAILURE
Tests: 1 failed / 10 succeeded
Started: 2021-10-19 22:58
Elapsed: 14m19s
Revision: 5b5eb81c6edcc925ceac97831d73e8849f3e5c7b
Refs: 273
job-version: v1.23.0-alpha.3.401+421cdec3a5d1a1
kubetest-version:
revision: v1.23.0-alpha.3.401+421cdec3a5d1a1

Test Failures


kubetest 1.18s

error during bash /home/prow/go/src/sigs.k8s.io/sig-storage-local-static-provisioner/hack/run-e2e.sh: exit status 2
    from junit_runner.xml
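
The failing step is the provisioner's e2e wrapper script itself rather than an individual Kubernetes test case. For local debugging it can be re-run directly; the sketch below is an assumption on my part (it presumes a fresh checkout and an already-running cluster reachable through KUBECONFIG, plus whatever environment run-e2e.sh expects), not the exact invocation this job used.

  # Hypothetical local re-run of the failing step; not the exact CI invocation.
  # Assumes KUBECONFIG already points at a working test cluster.
  git clone https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner.git
  cd sig-storage-local-static-provisioner
  bash hack/run-e2e.sh
  echo "run-e2e.sh exit status: $?"   # this job saw exit status 2 here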

Error lines from build-log.txt

... skipping 349 lines ...
Trying to find master named 'e2e-test-prow-master'
Looking for address 'e2e-test-prow-master-ip'
Using master: e2e-test-prow-master (external IP: 34.123.174.195; internal IP: (not set))
Waiting up to 300 seconds for cluster initialization.

  This will continually check to see if the API for kubernetes is reachable.
  This may time out if there was some uncaught error during start up.

................Kubernetes cluster created.
Cluster "k8s-jkns-gci-gce-alpha_e2e-test-prow" set.
User "k8s-jkns-gci-gce-alpha_e2e-test-prow" set.
Context "k8s-jkns-gci-gce-alpha_e2e-test-prow" created.
Switched to context "k8s-jkns-gci-gce-alpha_e2e-test-prow".
... skipping 25 lines ...
e2e-test-prow-minion-group-kbqx   Ready                      <none>   19s   v1.23.0-alpha.3.401+421cdec3a5d1a1
e2e-test-prow-minion-group-nz9s   Ready                      <none>   19s   v1.23.0-alpha.3.401+421cdec3a5d1a1
e2e-test-prow-minion-group-r2n0   Ready                      <none>   19s   v1.23.0-alpha.3.401+421cdec3a5d1a1
Warning: v1 ComponentStatus is deprecated in v1.19+
Validate output:
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE                         ERROR
etcd-1               Healthy   {"health":"true","reason":""}   
etcd-0               Healthy   {"health":"true","reason":""}   
controller-manager   Healthy   ok                              
scheduler            Healthy   ok                              
Cluster validation succeeded
Done, listing cluster services:
... skipping 104 lines ...

Specify --start=53105 in the next get-serial-port-output invocation to get only the new output starting from here.
scp: /var/log/cluster-autoscaler.log*: No such file or directory
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/kubelet.cov*: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
Dumping logs from nodes locally to '/logs/artifacts'
Detecting nodes in the cluster
Changing logfiles to be world-readable for download
Changing logfiles to be world-readable for download
Changing logfiles to be world-readable for download
Copying 'kube-proxy.log containers/konnectivity-agent-*.log fluentd.log node-problem-detector.log kubelet.cov startupscript.log' from e2e-test-prow-minion-group-kbqx
... skipping 6 lines ...

Specify --start=102963 in the next get-serial-port-output invocation to get only the new output starting from here.
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/node-problem-detector.log*: No such file or directory
scp: /var/log/kubelet.cov*: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/node-problem-detector.log*: No such file or directory
scp: /var/log/kubelet.cov*: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/node-problem-detector.log*: No such file or directory
scp: /var/log/kubelet.cov*: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
INSTANCE_GROUPS=e2e-test-prow-minion-group
NODE_NAMES=e2e-test-prow-minion-group-kbqx e2e-test-prow-minion-group-nz9s e2e-test-prow-minion-group-r2n0
Failures for e2e-test-prow-minion-group (if any):
2021/10/19 23:06:50 process.go:155: Step './cluster/log-dump/log-dump.sh /logs/artifacts' finished in 1m28.125693146s
2021/10/19 23:06:50 process.go:153: Running: ./hack/e2e-internal/e2e-down.sh
Project: k8s-jkns-gci-gce-alpha
... skipping 43 lines ...
Property "users.k8s-jkns-gci-gce-alpha_e2e-test-prow-basic-auth" unset.
Property "contexts.k8s-jkns-gci-gce-alpha_e2e-test-prow" unset.
Cleared config for k8s-jkns-gci-gce-alpha_e2e-test-prow from /root/.kube/config
Done
2021/10/19 23:12:54 process.go:155: Step './hack/e2e-internal/e2e-down.sh' finished in 6m4.862364345s
2021/10/19 23:12:54 process.go:96: Saved XML output to /logs/artifacts/junit_runner.xml.
2021/10/19 23:12:54 main.go:331: Something went wrong: encountered 1 errors: [error during bash /home/prow/go/src/sigs.k8s.io/sig-storage-local-static-provisioner/hack/run-e2e.sh: exit status 2]
2021/10/19 23:12:54 e2e.go:82: err: exit status 1
exit status 1
make: *** [Makefile:57: e2e] Error 1
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
Cleaning up after docker
Stopping Docker: dockerProgram process in pidfile '/var/run/docker-ssd.pid', 1 process(es), refused to die.
... skipping 3 lines ...
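
Reading the tail of the log bottom-up: run-e2e.sh returned 2, kubetest counted that as its one error and exited 1, make reported the failed e2e recipe as "Error 1" (GNU make itself exits with status 2 when a recipe fails), and the outer wrapper captured that as EXIT_VALUE=2 before cleaning up docker-in-docker. A minimal sketch of that propagation pattern, assuming a generic wrapper script rather than the actual Prow entrypoint:

  #!/usr/bin/env bash
  # Hypothetical wrapper illustrating the exit-status chain above; the cleanup
  # function body is a placeholder, not the real docker-in-docker teardown.
  cleanup_dind() { echo "Cleaning up after docker in docker."; }
  set -o xtrace
  make e2e            # runs kubetest, which runs hack/run-e2e.sh
  EXIT_VALUE=$?       # 2: GNU make's exit status for a failed recipe
  set +o xtrace       # produces the '+ set +o xtrace' trace line seen above
  cleanup_dind
  exit "${EXIT_VALUE}"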