PR (n0rad): Support default storage class in helm chart
Result: FAILURE
Tests: 1 failed / 5 succeeded
Started: 2019-07-13 22:40
Elapsed: 20m25s
Revision: 97be9e7ddd9fe230afeedbee81adccd369fe9dab
Refs: 125
job-version: v1.16.0-alpha.0.2241+87b744715ec695
revision: v1.16.0-alpha.0.2241+87b744715ec695

Test Failures


Up (8m53s)

error during ./hack/e2e-internal/e2e-up.sh: exit status 2
    from junit_runner.xml
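
The failing "test" here is the cluster bring-up step itself, not a test case. To re-run just that step locally, something along these lines should work with kubetest (a sketch: it assumes a built kubernetes tree and configured GCP credentials, and uses kubetest's common flag set rather than this job's exact configuration):

    # Hypothetical local repro of the failing "Up" step via kubetest;
    # --provider/--up/--down/--dump are standard kubetest flags, the values are assumed.
    kubetest --provider=gce --up --down --dump=/tmp/artifacts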


Error lines from build-log.txt

... skipping 269 lines ...
Trying to find master named 'e2e-test-prow-master'
Looking for address 'e2e-test-prow-master-ip'
Using master: e2e-test-prow-master (external IP: 35.239.248.253)
Waiting up to 300 seconds for cluster initialization.

  This will continually check to see if the API for kubernetes is reachable.
  This may time out if there was some uncaught error during start up.

.....................................................................................................................................................Cluster failed to initialize within 300 seconds.
Last output from querying API server follows:
-----------------------------------------------------
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed

  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0curl: (7) Failed to connect to 35.239.248.253 port 443: Connection refused
-----------------------------------------------------
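For context, the run of dots above is printed by the bring-up script while it polls the master's API endpoint, until the endpoint answers or the 300-second budget runs out. A minimal sketch of that loop, assuming a curl health check against the external IP (illustrative, not the literal e2e-up.sh code; TLS/cert flags omitted):

    # Sketch of the readiness poll; MASTER_IP comes from the address lookup above.
    elapsed=0
    until curl --silent --fail --max-time 5 "https://${MASTER_IP}/healthz" >/dev/null 2>&1; do
      if [ "${elapsed}" -ge 300 ]; then
        echo "Cluster failed to initialize within 300 seconds." >&2
        exit 2   # surfaces as "exit status 2" in the kubetest step summary
      fi
      printf '.'          # one dot per attempt, no newline -- hence the long dotted line
      sleep 2
      elapsed=$((elapsed + 2))
    done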
2019/07/13 22:51:34 process.go:155: Step './hack/e2e-internal/e2e-up.sh' finished in 8m53.96941052s
2019/07/13 22:51:34 e2e.go:522: Dumping logs locally to: /logs/artifacts
2019/07/13 22:51:34 process.go:153: Running: ./cluster/log-dump/log-dump.sh /logs/artifacts
Checking for custom logdump instances, if any
Sourcing kube-util.sh
... skipping 12 lines ...
scp: /var/log/glbc.log*: No such file or directory
scp: /var/log/cluster-autoscaler.log*: No such file or directory
scp: /var/log/kube-addon-manager.log*: No such file or directory
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/kubelet.cov*: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
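These scp errors are expected noise rather than part of the failure: the dump script requests a fixed list of log globs from every machine, and when a glob matches nothing on the remote side scp exits 1, which gcloud then reports. The script tolerates this and moves on; the same pattern repeats for each node below. Roughly (a sketch, not the literal log-dump.sh):

    # Illustrative per-file pull; missing files are non-fatal by design.
    for f in kube-proxy.log fluentd.log node-problem-detector.log kubelet.cov startupscript.log; do
      gcloud compute scp --zone "${ZONE}" "${node}:/var/log/${f}*" "/logs/artifacts/${node}/" \
        || echo "(${f} not present on ${node}; continuing)"
    done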
Dumping logs from nodes locally to '/logs/artifacts'
Detecting nodes in the cluster
Changing logfiles to be world-readable for download
Changing logfiles to be world-readable for download
Changing logfiles to be world-readable for download
Copying 'kube-proxy.log fluentd.log node-problem-detector.log kubelet.cov startupscript.log' from e2e-test-prow-minion-group-fk91
... skipping 6 lines ...

Specify --start=42094 in the next get-serial-port-output invocation to get only the new output starting from here.
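The --start hint is gcloud's offset mechanism for serial console output: pass the byte offset it reported and the next call returns only output produced since then. For example (illustrative invocation; the instance name is guessed from the nearby copy line and the zone from the resource URLs later in this log):

    gcloud compute instances get-serial-port-output e2e-test-prow-minion-group-fk91 \
      --zone us-central1-b --start 42094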
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/node-problem-detector.log*: No such file or directory
scp: /var/log/kubelet.cov*: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/node-problem-detector.log*: No such file or directory
scp: /var/log/kubelet.cov*: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/node-problem-detector.log*: No such file or directory
scp: /var/log/kubelet.cov*: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
INSTANCE_GROUPS=e2e-test-prow-minion-group
NODE_NAMES=e2e-test-prow-minion-group-07db e2e-test-prow-minion-group-3pk4 e2e-test-prow-minion-group-fk91
Failures for e2e-test-prow-minion-group
2019/07/13 22:52:40 process.go:155: Step './cluster/log-dump/log-dump.sh /logs/artifacts' finished in 1m6.173828161s
2019/07/13 22:52:40 process.go:153: Running: ./hack/e2e-internal/e2e-down.sh
Project: k8s-jkns-e2e-gce-1-6
... skipping 12 lines ...
Bringing down cluster
Deleting Managed Instance Group...
...........................Deleted [https://www.googleapis.com/compute/v1/projects/k8s-jkns-e2e-gce-1-6/zones/us-central1-b/instanceGroupManagers/e2e-test-prow-minion-group].
done.
Deleted [https://www.googleapis.com/compute/v1/projects/k8s-jkns-e2e-gce-1-6/global/instanceTemplates/e2e-test-prow-minion-template].
Deleted [https://www.googleapis.com/compute/v1/projects/k8s-jkns-e2e-gce-1-6/global/instanceTemplates/e2e-test-prow-windows-node-template].
{"message":"Internal Server Error"}Removing etcd replica, name: e2e-test-prow-master, port: 2379, result: 0
{"message":"Internal Server Error"}Removing etcd replica, name: e2e-test-prow-master, port: 4002, result: 0
Updated [https://www.googleapis.com/compute/v1/projects/k8s-jkns-e2e-gce-1-6/zones/us-central1-b/instances/e2e-test-prow-master].
Deleted [https://www.googleapis.com/compute/v1/projects/k8s-jkns-e2e-gce-1-6/zones/us-central1-b/instances/e2e-test-prow-master].
Deleted [https://www.googleapis.com/compute/v1/projects/k8s-jkns-e2e-gce-1-6/global/firewalls/e2e-test-prow-master-https].
Deleted [https://www.googleapis.com/compute/v1/projects/k8s-jkns-e2e-gce-1-6/global/firewalls/e2e-test-prow-master-etcd].
Deleted [https://www.googleapis.com/compute/v1/projects/k8s-jkns-e2e-gce-1-6/global/firewalls/e2e-test-prow-minion-all].
Deleted [https://www.googleapis.com/compute/v1/projects/k8s-jkns-e2e-gce-1-6/regions/us-central1/addresses/e2e-test-prow-master-ip].
... skipping 16 lines ...
W0713 23:01:14.813450   12445 loader.go:223] Config not found: /root/.kube/config
Property "contexts.k8s-jkns-e2e-gce-1-6_e2e-test-prow" unset.
Cleared config for k8s-jkns-e2e-gce-1-6_e2e-test-prow from /root/.kube/config
Done
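The final teardown step scrubs the local kubeconfig so the deleted cluster's context doesn't linger. The "Property ... unset" / "Cleared config" lines above correspond roughly to (a sketch, not the exact script):

    # Illustrative kubeconfig cleanup for the e2e context.
    kubectl config unset "contexts.k8s-jkns-e2e-gce-1-6_e2e-test-prow"
    kubectl config unset "clusters.k8s-jkns-e2e-gce-1-6_e2e-test-prow"
    kubectl config unset "users.k8s-jkns-e2e-gce-1-6_e2e-test-prow"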
2019/07/13 23:01:14 process.go:155: Step './hack/e2e-internal/e2e-down.sh' finished in 8m34.176643494s
2019/07/13 23:01:14 process.go:96: Saved XML output to /logs/artifacts/junit_runner.xml.
2019/07/13 23:01:14 main.go:316: Something went wrong: starting e2e cluster: error during ./hack/e2e-internal/e2e-up.sh: exit status 2
2019/07/13 23:01:14 e2e.go:83: err: exit status 1
exit status 1
Makefile:54: recipe for target 'e2e' failed
make: *** [e2e] Error 1
+ EXIT_VALUE=2
+ set +o xtrace
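The trailing trace lines show the CI wrapper recording the failure before cleanup: make returned 1, the wrapper maps any failure to EXIT_VALUE=2, turns off xtrace, and runs its docker-in-docker cleanup regardless of the outcome. A rough reconstruction of that tail end (assumed shape of the runner script, not its literal contents):

    # Hypothetical tail of the CI runner script:
    set -o xtrace
    make e2e && EXIT_VALUE=0 || EXIT_VALUE=2
    set +o xtrace
    cleanup_dind            # prints "Cleaning up after docker in docker."
    exit "${EXIT_VALUE}"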
Cleaning up after docker in docker.
================================================================================
[Barnacle] 2019/07/13 23:01:14 Cleaning up Docker data root...
[Barnacle] 2019/07/13 23:01:14 Removing all containers.
... skipping 21 lines ...