PR: uthark: Check kube-proxy health on linux
Result: FAILURE
Tests: 1 failed / 5 succeeded
Started: 2021-07-16 17:47
Elapsed: 41m35s
Revision: c8629cea5d9d3133a34e7bd8a473c76fe2e2b0f6
Refs: 575
job-version: v1.22.0-beta.2.36+8cda0d7f9c826d
kubetest-version:
revision: v1.22.0-beta.2.36+8cda0d7f9c826d

Test Failures


kubetest Up 29m51s

error during ./hack/e2e-internal/e2e-up.sh: exit status 1
				from junit_runner.xml




Error lines from build-log.txt

... skipping 364 lines ...
Setting up libtinfo6:amd64 (6.1+20181013-2+deb10u2) ...
Selecting previously unselected package bash.
(Reading database ... 3900 files and directories currently installed.)
Preparing to unpack .../archives/bash_5.0-4_amd64.deb ...
Unpacking bash (5.0-4) ...
Setting up bash (5.0-4) ...
update-alternatives: error: alternative path /usr/share/man/man7/bash-builtins.7.gz doesn't exist

Selecting previously unselected package libuuid1:amd64.
(Reading database ... 3972 files and directories currently installed.)
Preparing to unpack .../libuuid1_2.33.1-0.1_amd64.deb ...
Unpacking libuuid1:amd64 (2.33.1-0.1) ...
Setting up libuuid1:amd64 (2.33.1-0.1) ...
Selecting previously unselected package libblkid1:amd64.
... skipping 623 lines ...
Trying to find master named 'e2e-72c39f79e0-429e8-master'
Looking for address 'e2e-72c39f79e0-429e8-master-ip'
Using master: e2e-72c39f79e0-429e8-master (external IP: 34.83.141.31; internal IP: (not set))
Waiting up to 300 seconds for cluster initialization.

  This will continually check to see if the API for kubernetes is reachable.
  This may time out if there was some uncaught error during start up.
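The "continually check" step described above is a poll-until-ready loop: re-run a reachability probe until it succeeds or the 300-second budget is spent. A minimal sketch of that pattern in shell (`wait_for` is an illustrative helper, not part of the actual kube-up scripts, and the probe command is hypothetical):

```shell
# Hypothetical sketch of a poll-until-ready loop like the one the log
# describes: retry a check command until it exits 0 or a timeout elapses.
wait_for() {
  local timeout=$1; shift             # seconds to wait before giving up
  local deadline=$((SECONDS + timeout))
  until "$@"; do                      # re-run the check until it succeeds
    (( SECONDS >= deadline )) && return 1
    printf '.'                        # progress dots, as in the log above
    sleep 1
  done
}

# Example probe. In the real job this would target the API server, e.g.
# something like: curl -ks --max-time 5 https://34.83.141.31/healthz
wait_for 5 true && echo "Kubernetes cluster created."
```

When the probe never succeeds within the budget, `wait_for` returns nonzero, which corresponds to the "may time out if there was some uncaught error during start up" case.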

.............Kubernetes cluster created.
Cluster "kube-gce-upg-1-4-1-5-upg-mas_e2e-72c39f79e0-429e8" set.
User "kube-gce-upg-1-4-1-5-upg-mas_e2e-72c39f79e0-429e8" set.
Context "kube-gce-upg-1-4-1-5-upg-mas_e2e-72c39f79e0-429e8" created.
Switched to context "kube-gce-upg-1-4-1-5-upg-mas_e2e-72c39f79e0-429e8".
... skipping 140 lines ...

Specify --start=54602 in the next get-serial-port-output invocation to get only the new output starting from here.
scp: /var/log/cluster-autoscaler.log*: No such file or directory
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/kubelet.cov*: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
Dumping logs from nodes locally to '/logs/artifacts'
Detecting nodes in the cluster
Changing logfiles to be world-readable for download
Changing logfiles to be world-readable for download
Changing logfiles to be world-readable for download
Copying 'kube-proxy.log fluentd.log node-problem-detector.log kubelet.cov startupscript.log' from e2e-72c39f79e0-429e8-minion-group-jf2h
... skipping 7 lines ...
Specify --start=6084663 in the next get-serial-port-output invocation to get only the new output starting from here.
scp: /var/log/kube-proxy.log*: No such file or directory
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/node-problem-detector.log*: No such file or directory
scp: /var/log/kubelet.cov*: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
scp: /var/log/kube-proxy.log*: No such file or directory
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/node-problem-detector.log*: No such file or directory
scp: /var/log/kubelet.cov*: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
scp: /var/log/kube-proxy.log*: No such file or directory
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/node-problem-detector.log*: No such file or directory
scp: /var/log/kubelet.cov*: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
INSTANCE_GROUPS=e2e-72c39f79e0-429e8-minion-group
NODE_NAMES=e2e-72c39f79e0-429e8-minion-group-jf2h e2e-72c39f79e0-429e8-minion-group-jfj2 e2e-72c39f79e0-429e8-minion-group-ntpw
Failures for e2e-72c39f79e0-429e8-minion-group (if any):
2021/07/16 18:24:36 process.go:155: Step './cluster/log-dump/log-dump.sh /logs/artifacts' finished in 2m20.040503396s
2021/07/16 18:24:36 process.go:153: Running: ./hack/e2e-internal/e2e-down.sh
Project: kube-gce-upg-1-4-1-5-upg-mas
... skipping 41 lines ...
Property "users.kube-gce-upg-1-4-1-5-upg-mas_e2e-72c39f79e0-429e8-basic-auth" unset.
Property "contexts.kube-gce-upg-1-4-1-5-upg-mas_e2e-72c39f79e0-429e8" unset.
Cleared config for kube-gce-upg-1-4-1-5-upg-mas_e2e-72c39f79e0-429e8 from /workspace/.kube/config
Done
2021/07/16 18:29:12 process.go:155: Step './hack/e2e-internal/e2e-down.sh' finished in 4m36.291914713s
2021/07/16 18:29:12 process.go:96: Saved XML output to /logs/artifacts/junit_runner.xml.
2021/07/16 18:29:12 main.go:327: Something went wrong: starting e2e cluster: error during ./hack/e2e-internal/e2e-up.sh: exit status 1
Traceback (most recent call last):
  File "/workspace/scenarios/kubernetes_e2e.py", line 723, in <module>
    main(parse_args())
  File "/workspace/scenarios/kubernetes_e2e.py", line 569, in main
    mode.start(runner_args)
  File "/workspace/scenarios/kubernetes_e2e.py", line 228, in start
... skipping 15 lines ...