PR mauriciopoppe: Multi linux arch and multi windows distro builds
Result FAILURE
Tests 1 failed / 10 succeeded
Started 2021-10-19 19:55
Elapsed 16m14s
Revision 5ad88276f9d9944621d74ef1033c47a54c1883fd
Refs 273
job-version v1.23.0-alpha.3.387+2dbdd9461d0e55
kubetest-version
revision v1.23.0-alpha.3.387+2dbdd9461d0e55

Test Failures


kubetest 1.30s

error during bash /home/prow/go/src/sigs.k8s.io/sig-storage-local-static-provisioner/hack/run-e2e.sh: exit status 2
				from junit_runner.xml
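
The failing step is kubetest invoking the repository's e2e script, which exited with status 2. A minimal local reproduction sketch is shown below; it is hypothetical and assumes a checkout of the sig-storage-local-static-provisioner repo and an already-provisioned test cluster reachable through your current kubectl context (this job builds that cluster on GCE before the script runs):

  # Hypothetical local reproduction of the failing step (not the job's exact environment).
  git clone https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner
  cd sig-storage-local-static-provisioner
  # The job runs the script via bash; a non-zero exit here mirrors the "exit status 2" above.
  bash hack/run-e2e.sh
  echo "run-e2e.sh exited with status $?"

Any cluster credentials, provider settings, or environment variables the script expects are assumptions here; consult the script and the job configuration for the exact invocation.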


Error lines from build-log.txt

... skipping 349 lines ...
Trying to find master named 'e2e-test-prow-master'
Looking for address 'e2e-test-prow-master-ip'
Using master: e2e-test-prow-master (external IP: 35.238.255.130; internal IP: (not set))
Waiting up to 300 seconds for cluster initialization.

  This will continually check to see if the API for kubernetes is reachable.
  This may time out if there was some uncaught error during start up.

.................Kubernetes cluster created.
Cluster "kube-gce-upg-1-4-1-5-upg-mas_e2e-test-prow" set.
User "kube-gce-upg-1-4-1-5-upg-mas_e2e-test-prow" set.
Context "kube-gce-upg-1-4-1-5-upg-mas_e2e-test-prow" created.
Switched to context "kube-gce-upg-1-4-1-5-upg-mas_e2e-test-prow".
... skipping 26 lines ...
e2e-test-prow-minion-group-03lb   Ready                      <none>   32s   v1.23.0-alpha.3.387+2dbdd9461d0e55
e2e-test-prow-minion-group-8p0h   Ready                      <none>   33s   v1.23.0-alpha.3.387+2dbdd9461d0e55
e2e-test-prow-minion-group-t39t   Ready                      <none>   32s   v1.23.0-alpha.3.387+2dbdd9461d0e55
Warning: v1 ComponentStatus is deprecated in v1.19+
Validate output:
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE                         ERROR
etcd-1               Healthy   {"health":"true","reason":""}   
etcd-0               Healthy   {"health":"true","reason":""}   
controller-manager   Healthy   ok                              
scheduler            Healthy   ok                              
Cluster validation succeeded
Done, listing cluster services:
... skipping 104 lines ...

Specify --start=53241 in the next get-serial-port-output invocation to get only the new output starting from here.
scp: /var/log/cluster-autoscaler.log*: No such file or directory
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/kubelet.cov*: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
Dumping logs from nodes locally to '/logs/artifacts'
Detecting nodes in the cluster
Changing logfiles to be world-readable for download
Changing logfiles to be world-readable for download
Changing logfiles to be world-readable for download
Copying 'kube-proxy.log containers/konnectivity-agent-*.log fluentd.log node-problem-detector.log kubelet.cov startupscript.log' from e2e-test-prow-minion-group-03lb
... skipping 6 lines ...

Specify --start=103305 in the next get-serial-port-output invocation to get only the new output starting from here.
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/node-problem-detector.log*: No such file or directory
scp: /var/log/kubelet.cov*: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/node-problem-detector.log*: No such file or directory
scp: /var/log/kubelet.cov*: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/node-problem-detector.log*: No such file or directory
scp: /var/log/kubelet.cov*: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
INSTANCE_GROUPS=e2e-test-prow-minion-group
NODE_NAMES=e2e-test-prow-minion-group-03lb e2e-test-prow-minion-group-8p0h e2e-test-prow-minion-group-t39t
Failures for e2e-test-prow-minion-group (if any):
2021/10/19 20:04:32 process.go:155: Step './cluster/log-dump/log-dump.sh /logs/artifacts' finished in 1m50.154297972s
2021/10/19 20:04:32 process.go:153: Running: ./hack/e2e-internal/e2e-down.sh
Project: kube-gce-upg-1-4-1-5-upg-mas
... skipping 43 lines ...
Property "users.kube-gce-upg-1-4-1-5-upg-mas_e2e-test-prow-basic-auth" unset.
Property "contexts.kube-gce-upg-1-4-1-5-upg-mas_e2e-test-prow" unset.
Cleared config for kube-gce-upg-1-4-1-5-upg-mas_e2e-test-prow from /root/.kube/config
Done
2021/10/19 20:11:25 process.go:155: Step './hack/e2e-internal/e2e-down.sh' finished in 6m52.682140409s
2021/10/19 20:11:25 process.go:96: Saved XML output to /logs/artifacts/junit_runner.xml.
2021/10/19 20:11:25 main.go:331: Something went wrong: encountered 1 errors: [error during bash /home/prow/go/src/sigs.k8s.io/sig-storage-local-static-provisioner/hack/run-e2e.sh: exit status 2]
2021/10/19 20:11:25 e2e.go:82: err: exit status 1
exit status 1
make: *** [Makefile:59: e2e] Error 1
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
Cleaning up after docker
Stopping Docker: dockerProgram process in pidfile '/var/run/docker-ssd.pid', 1 process(es), refused to die.
... skipping 3 lines ...