PR       uthark: Check kube-proxy health on linux
Result   FAILURE
Tests    1 failed / 5 succeeded
Started  2021-06-30 22:05
Elapsed  39m25s
Revision c8629cea5d9d3133a34e7bd8a473c76fe2e2b0f6
Refs     575
job-version      v1.22.0-beta.0.298+9c360b6185eb4b
kubetest-version
revision         v1.22.0-beta.0.298+9c360b6185eb4b

Test Failures


kubetest Up 28m48s

error during ./hack/e2e-internal/e2e-up.sh: exit status 1
				from junit_runner.xml
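
The failing step runs ./hack/e2e-internal/e2e-up.sh from a kubernetes checkout. A minimal sketch for re-running that phase by hand, assuming a local kubernetes/kubernetes clone and an authenticated gcloud SDK (the project name is taken from the log below; it is not guaranteed to be accessible to you):

    # Sketch only: reproduce the "Up" phase outside of kubetest.
    export KUBERNETES_PROVIDER=gce          # provider used by this job
    export PROJECT=k8s-jkns-gci-gce-alpha   # project seen in the log; substitute your own
    ./hack/e2e-internal/e2e-up.sh           # exits non-zero when cluster bring-up fails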




Error lines from build-log.txt

... skipping 362 lines ...
Setting up libtinfo6:amd64 (6.1+20181013-2+deb10u2) ...
Selecting previously unselected package bash.
(Reading database ... 3900 files and directories currently installed.)
Preparing to unpack .../archives/bash_5.0-4_amd64.deb ...
Unpacking bash (5.0-4) ...
Setting up bash (5.0-4) ...
update-alternatives: error: alternative path /usr/share/man/man7/bash-builtins.7.gz doesn't exist

Selecting previously unselected package libuuid1:amd64.
(Reading database ... 3972 files and directories currently installed.)
Preparing to unpack .../libuuid1_2.33.1-0.1_amd64.deb ...
Unpacking libuuid1:amd64 (2.33.1-0.1) ...
Setting up libuuid1:amd64 (2.33.1-0.1) ...
Selecting previously unselected package libblkid1:amd64.
... skipping 623 lines ...
Trying to find master named 'e2e-220b322bce-429e8-master'
Looking for address 'e2e-220b322bce-429e8-master-ip'
Using master: e2e-220b322bce-429e8-master (external IP: 34.82.7.139; internal IP: (not set))
Waiting up to 300 seconds for cluster initialization.

  This will continually check to see if the API for kubernetes is reachable.
  This may time out if there was some uncaught error during start up.

.................Kubernetes cluster created.
Cluster "k8s-jkns-gci-gce-alpha_e2e-220b322bce-429e8" set.
User "k8s-jkns-gci-gce-alpha_e2e-220b322bce-429e8" set.
Context "k8s-jkns-gci-gce-alpha_e2e-220b322bce-429e8" created.
Switched to context "k8s-jkns-gci-gce-alpha_e2e-220b322bce-429e8".
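
The dotted line before "Kubernetes cluster created." is the 300-second readiness wait described above. A minimal sketch of that kind of poll (an assumption, not the actual kube-up.sh code; MASTER_IP is the external IP logged above):

    # Retry the apiserver until it answers or the 300-second budget runs out.
    MASTER_IP=34.82.7.139
    for _ in $(seq 1 300); do
      curl -ks "https://${MASTER_IP}/healthz" >/dev/null && { echo "API reachable"; break; }
      printf '.'
      sleep 1
    done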
... skipping 141 lines ...

Specify --start=54575 in the next get-serial-port-output invocation to get only the new output starting from here.
scp: /var/log/cluster-autoscaler.log*: No such file or directory
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/kubelet.cov*: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
Dumping logs from nodes locally to '/logs/artifacts'
Detecting nodes in the cluster
Changing logfiles to be world-readable for download
Changing logfiles to be world-readable for download
Changing logfiles to be world-readable for download
Copying 'kube-proxy.log fluentd.log node-problem-detector.log kubelet.cov startupscript.log' from e2e-220b322bce-429e8-minion-group-mz59
... skipping 7 lines ...
Specify --start=4151602 in the next get-serial-port-output invocation to get only the new output starting from here.
scp: /var/log/kube-proxy.log*: No such file or directory
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/node-problem-detector.log*: No such file or directory
scp: /var/log/kubelet.cov*: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
scp: /var/log/kube-proxy.log*: No such file or directory
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/node-problem-detector.log*: No such file or directory
scp: /var/log/kubelet.cov*: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
scp: /var/log/kube-proxy.log*: No such file or directory
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/node-problem-detector.log*: No such file or directory
scp: /var/log/kubelet.cov*: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
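The repeated scp errors above come from the log-dump step globbing for per-node log files that were never written; gcloud compute scp exits 1 when none of the remote paths match. A hedged illustration of the failing pattern (not the verbatim log-dump.sh command; the zone is a placeholder):

    # Illustration only: copy node logs the way log-dump does.
    # The command fails when the remote glob matches nothing.
    gcloud compute scp --zone=us-west1-b \
      "e2e-220b322bce-429e8-minion-group-mz59:/var/log/kube-proxy.log*" \
      /logs/artifacts/ \
      || echo "scp failed: log file missing on the node"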
INSTANCE_GROUPS=e2e-220b322bce-429e8-minion-group
NODE_NAMES=e2e-220b322bce-429e8-minion-group-0fj3 e2e-220b322bce-429e8-minion-group-g6m8 e2e-220b322bce-429e8-minion-group-mz59
Failures for e2e-220b322bce-429e8-minion-group (if any):
2021/06/30 22:40:34 process.go:155: Step './cluster/log-dump/log-dump.sh /logs/artifacts' finished in 1m41.979073829s
2021/06/30 22:40:34 process.go:153: Running: ./hack/e2e-internal/e2e-down.sh
Project: k8s-jkns-gci-gce-alpha
... skipping 41 lines ...
Property "users.k8s-jkns-gci-gce-alpha_e2e-220b322bce-429e8-basic-auth" unset.
Property "contexts.k8s-jkns-gci-gce-alpha_e2e-220b322bce-429e8" unset.
Cleared config for k8s-jkns-gci-gce-alpha_e2e-220b322bce-429e8 from /workspace/.kube/config
Done
2021/06/30 22:44:56 process.go:155: Step './hack/e2e-internal/e2e-down.sh' finished in 4m21.190322752s
2021/06/30 22:44:56 process.go:96: Saved XML output to /logs/artifacts/junit_runner.xml.
2021/06/30 22:44:56 main.go:327: Something went wrong: starting e2e cluster: error during ./hack/e2e-internal/e2e-up.sh: exit status 1
Traceback (most recent call last):
  File "/workspace/scenarios/kubernetes_e2e.py", line 723, in <module>
    main(parse_args())
  File "/workspace/scenarios/kubernetes_e2e.py", line 569, in main
    mode.start(runner_args)
  File "/workspace/scenarios/kubernetes_e2e.py", line 228, in start
... skipping 15 lines ...