PR: mcshooter: Add coverage.out to Makefile
Result: FAILURE
Tests: 1 failed / 5 succeeded
Started: 2021-07-26 17:03
Elapsed: 41m25s
Revision: 49526abf276db7526adedd1dc48603cd4dc8b548
Refs: 595
job-version: v1.23.0-alpha.0.31+ee5df7cbcfffad
kubetest-version:
revision: v1.23.0-alpha.0.31+ee5df7cbcfffad

Test Failures


kubetest Up 29m0s

error during ./hack/e2e-internal/e2e-up.sh: exit status 1
				from junit_runner.xml




Error lines from build-log.txt

... skipping 371 lines ...
Setting up libtinfo6:amd64 (6.1+20181013-2+deb10u2) ...
Selecting previously unselected package bash.
(Reading database ... 3900 files and directories currently installed.)
Preparing to unpack .../archives/bash_5.0-4_amd64.deb ...
Unpacking bash (5.0-4) ...
Setting up bash (5.0-4) ...
update-alternatives: error: alternative path /usr/share/man/man7/bash-builtins.7.gz doesn't exist

Selecting previously unselected package libuuid1:amd64.
(Reading database ... 3972 files and directories currently installed.)
Preparing to unpack .../libuuid1_2.33.1-0.1_amd64.deb ...
Unpacking libuuid1:amd64 (2.33.1-0.1) ...
Setting up libuuid1:amd64 (2.33.1-0.1) ...
Selecting previously unselected package libblkid1:amd64.
... skipping 621 lines ...
Trying to find master named 'e2e-91d951a2df-429e8-master'
Looking for address 'e2e-91d951a2df-429e8-master-ip'
Using master: e2e-91d951a2df-429e8-master (external IP: 104.196.238.245; internal IP: (not set))
Waiting up to 300 seconds for cluster initialization.

  This will continually check to see if the API for kubernetes is reachable.
  This may time out if there was some uncaught error during start up.

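The indented note above describes a simple readiness poll: the bring-up script keeps probing the Kubernetes API until it answers or the 300-second budget expires. A minimal sketch of that pattern in Python (the real check lives in the bash cluster scripts; the endpoint, interval, and function name here are illustrative assumptions):

import ssl
import time
import urllib.request

def wait_for_apiserver(base_url, timeout=300, interval=10):
    """Poll the API server's /healthz endpoint until it responds or the timeout expires."""
    ctx = ssl.create_default_context()
    ctx.check_hostname = False        # e2e clusters typically present self-signed certs
    ctx.verify_mode = ssl.CERT_NONE
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            with urllib.request.urlopen(base_url + "/healthz", context=ctx, timeout=5) as resp:
                if resp.status == 200:
                    return True
        except OSError:
            pass                      # API not reachable yet; keep polling
        time.sleep(interval)
    return False

# e.g. wait_for_apiserver("https://104.196.238.245")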
.................Kubernetes cluster created.
Cluster "kube-gce-upg-lat-cluster_e2e-91d951a2df-429e8" set.
User "kube-gce-upg-lat-cluster_e2e-91d951a2df-429e8" set.
Context "kube-gce-upg-lat-cluster_e2e-91d951a2df-429e8" created.
Switched to context "kube-gce-upg-lat-cluster_e2e-91d951a2df-429e8".
... skipping 141 lines ...

Specify --start=53413 in the next get-serial-port-output invocation to get only the new output starting from here.
scp: /var/log/cluster-autoscaler.log*: No such file or directory
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/kubelet.cov*: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
Dumping logs from nodes locally to '/logs/artifacts'
Detecting nodes in the cluster
Changing logfiles to be world-readable for download
Changing logfiles to be world-readable for download
Changing logfiles to be world-readable for download
Copying 'kube-proxy.log containers/konnectivity-agent-*.log fluentd.log node-problem-detector.log kubelet.cov startupscript.log' from e2e-91d951a2df-429e8-minion-group-2ng1
... skipping 8 lines ...
scp: /var/log/kube-proxy.log*: No such file or directory
scp: /var/log/containers/konnectivity-agent-*.log*: No such file or directory
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/node-problem-detector.log*: No such file or directory
scp: /var/log/kubelet.cov*: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
scp: /var/log/kube-proxy.log*: No such file or directory
scp: /var/log/containers/konnectivity-agent-*.log*: No such file or directory
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/node-problem-detector.log*: No such file or directory
scp: /var/log/kubelet.cov*: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
scp: /var/log/kube-proxy.log*: No such file or directory
scp: /var/log/containers/konnectivity-agent-*.log*: No such file or directory
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/node-problem-detector.log*: No such file or directory
scp: /var/log/kubelet.cov*: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
INSTANCE_GROUPS=e2e-91d951a2df-429e8-minion-group
NODE_NAMES=e2e-91d951a2df-429e8-minion-group-2ng1 e2e-91d951a2df-429e8-minion-group-bp6m e2e-91d951a2df-429e8-minion-group-s1gk
Failures for e2e-91d951a2df-429e8-minion-group (if any):
2021/07/26 17:39:40 process.go:155: Step './cluster/log-dump/log-dump.sh /logs/artifacts' finished in 2m25.015618362s
2021/07/26 17:39:40 process.go:153: Running: ./hack/e2e-internal/e2e-down.sh
Project: kube-gce-upg-lat-cluster
... skipping 41 lines ...
Property "users.kube-gce-upg-lat-cluster_e2e-91d951a2df-429e8-basic-auth" unset.
Property "contexts.kube-gce-upg-lat-cluster_e2e-91d951a2df-429e8" unset.
Cleared config for kube-gce-upg-lat-cluster_e2e-91d951a2df-429e8 from /workspace/.kube/config
Done
2021/07/26 17:44:37 process.go:155: Step './hack/e2e-internal/e2e-down.sh' finished in 4m57.145447148s
2021/07/26 17:44:37 process.go:96: Saved XML output to /logs/artifacts/junit_runner.xml.
2021/07/26 17:44:37 main.go:327: Something went wrong: starting e2e cluster: error during ./hack/e2e-internal/e2e-up.sh: exit status 1
Traceback (most recent call last):
  File "/workspace/scenarios/kubernetes_e2e.py", line 723, in <module>
    main(parse_args())
  File "/workspace/scenarios/kubernetes_e2e.py", line 569, in main
    mode.start(runner_args)
  File "/workspace/scenarios/kubernetes_e2e.py", line 228, in start
... skipping 15 lines ...
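For context on the traceback above: the Python scenario wrapper runs the test runner as a child process, and a non-zero exit status (here, exit status 1 from ./hack/e2e-internal/e2e-up.sh) is surfaced as an exception, which produces a traceback like the one shown. A minimal sketch of that pattern (the helper name and call are illustrative assumptions, not a copy of kubernetes_e2e.py):

import subprocess

def check(*cmd):
    """Run a command, echoing it first, and raise if it exits non-zero."""
    print('Running: %s' % ' '.join(cmd))
    subprocess.check_call(cmd)  # raises CalledProcessError on failure, ending the run

# e.g. check('./hack/e2e-internal/e2e-up.sh') would raise here because the script exited 1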