PR uthark: Check kube-proxy health on linux
Result: FAILURE
Tests: 1 failed / 5 succeeded
Started: 2021-07-09 20:57
Elapsed: 39m17s
Revision: c8629cea5d9d3133a34e7bd8a473c76fe2e2b0f6
Refs: 575
job-version: v1.22.0-beta.1.111+2423813207b885
kubetest-version:
revision: v1.22.0-beta.1.111+2423813207b885

Test Failures


kubetest Up 28m51s

error during ./hack/e2e-internal/e2e-up.sh: exit status 1
(from junit_runner.xml)
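The "Up" step failed because kubetest shelled out to `./hack/e2e-internal/e2e-up.sh` and got a non-zero exit. A minimal sketch of how such a step runner turns an exit code into the error string recorded in junit_runner.xml (the function name and shape are assumptions for illustration, not kubetest's actual code):

```python
import subprocess

def run_step(cmd):
    # Hypothetical kubetest-style step runner: run a shell command and
    # surface a non-zero exit as an error string like the one in the log.
    try:
        subprocess.run(cmd, shell=True, check=True)
        return None
    except subprocess.CalledProcessError as e:
        return f"error during {cmd}: exit status {e.returncode}"

print(run_step("exit 1"))  # → error during exit 1: exit status 1
```

In the real job, the failing command was `./hack/e2e-internal/e2e-up.sh`, so the whole run was marked FAILURE even though teardown later completed cleanly.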




Error lines from build-log.txt

... skipping 363 lines ...
Setting up libtinfo6:amd64 (6.1+20181013-2+deb10u2) ...
Selecting previously unselected package bash.
(Reading database ... 3900 files and directories currently installed.)
Preparing to unpack .../archives/bash_5.0-4_amd64.deb ...
Unpacking bash (5.0-4) ...
Setting up bash (5.0-4) ...
update-alternatives: error: alternative path /usr/share/man/man7/bash-builtins.7.gz doesn't exist

Selecting previously unselected package libuuid1:amd64.
(Reading database ... 3972 files and directories currently installed.)
Preparing to unpack .../libuuid1_2.33.1-0.1_amd64.deb ...
Unpacking libuuid1:amd64 (2.33.1-0.1) ...
Setting up libuuid1:amd64 (2.33.1-0.1) ...
Selecting previously unselected package libblkid1:amd64.
... skipping 622 lines ...
Trying to find master named 'e2e-1de9f1a358-429e8-master'
Looking for address 'e2e-1de9f1a358-429e8-master-ip'
Using master: e2e-1de9f1a358-429e8-master (external IP: 35.247.57.100; internal IP: (not set))
Waiting up to 300 seconds for cluster initialization.

  This will continually check to see if the API for kubernetes is reachable.
  This may time out if there was some uncaught error during start up.

Kubernetes cluster created.
Cluster "k8s-jkns-e2e-gci-gce-1-6_e2e-1de9f1a358-429e8" set.
User "k8s-jkns-e2e-gci-gce-1-6_e2e-1de9f1a358-429e8" set.
Context "k8s-jkns-e2e-gci-gce-1-6_e2e-1de9f1a358-429e8" created.
Switched to context "k8s-jkns-e2e-gci-gce-1-6_e2e-1de9f1a358-429e8".
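The four lines above are the typical output of `kubectl config set-cluster`, `set-credentials`, `set-context`, and `use-context`. A hedged dry-run sketch that assembles those commands for this cluster — the exact flags e2e-up.sh passes are assumptions; the context name and external IP are taken from the log:

```python
# Dry run: build the kubectl config command sequence that would produce
# the "set"/"created"/"Switched" lines above. Flags are illustrative.
CTX = "k8s-jkns-e2e-gci-gce-1-6_e2e-1de9f1a358-429e8"

def kubeconfig_commands(context, server):
    return [
        f"kubectl config set-cluster {context} --server={server}",
        f"kubectl config set-credentials {context}",
        f"kubectl config set-context {context} --cluster={context} --user={context}",
        f"kubectl config use-context {context}",
    ]

for cmd in kubeconfig_commands(CTX, "https://35.247.57.100"):
    print(cmd)
```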
... skipping 140 lines ...

Specify --start=54615 in the next get-serial-port-output invocation to get only the new output starting from here.
scp: /var/log/cluster-autoscaler.log*: No such file or directory
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/kubelet.cov*: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
Dumping logs from nodes locally to '/logs/artifacts'
Detecting nodes in the cluster
Changing logfiles to be world-readable for download
Changing logfiles to be world-readable for download
Changing logfiles to be world-readable for download
Copying 'kube-proxy.log fluentd.log node-problem-detector.log kubelet.cov startupscript.log' from e2e-1de9f1a358-429e8-minion-group-62st
... skipping 7 lines ...
Specify --start=6056657 in the next get-serial-port-output invocation to get only the new output starting from here.
scp: /var/log/kube-proxy.log*: No such file or directory
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/node-problem-detector.log*: No such file or directory
scp: /var/log/kubelet.cov*: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
scp: /var/log/kube-proxy.log*: No such file or directory
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/node-problem-detector.log*: No such file or directory
scp: /var/log/kubelet.cov*: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
scp: /var/log/kube-proxy.log*: No such file or directory
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/node-problem-detector.log*: No such file or directory
scp: /var/log/kubelet.cov*: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
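The repeated scp errors above come from log-dump copying a fixed list of optional log globs from every node: any pattern with no match (for example `kubelet.cov*` on non-coverage builds) makes `gcloud compute scp` exit 1, but the dump proceeds file by file rather than aborting. A hedged Python simulation of that per-pattern behavior, with `shutil` standing in for the actual scp copy:

```python
import glob
import os
import shutil

# Log patterns taken from the output above; all are treated as optional.
OPTIONAL_LOGS = [
    "/var/log/kube-proxy.log*",
    "/var/log/fluentd.log*",
    "/var/log/node-problem-detector.log*",
    "/var/log/kubelet.cov*",
    "/var/log/startupscript.log*",
]

def dump_logs(patterns, dest):
    # Copy each matching file; record an scp-style error for any pattern
    # with no matches instead of aborting the whole dump.
    os.makedirs(dest, exist_ok=True)
    errors = []
    for pat in patterns:
        matches = glob.glob(pat)
        if not matches:
            errors.append(f"scp: {pat}: No such file or directory")
            continue
        for m in matches:
            shutil.copy(m, dest)
    return errors

for err in dump_logs(OPTIONAL_LOGS, "/tmp/artifacts"):
    print(err)
```

This is why the job still produced artifacts and a junit file: the missing-log errors are noisy but non-fatal, and the real failure remains the earlier e2e-up.sh exit.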
INSTANCE_GROUPS=e2e-1de9f1a358-429e8-minion-group
NODE_NAMES=e2e-1de9f1a358-429e8-minion-group-62st e2e-1de9f1a358-429e8-minion-group-rf7d e2e-1de9f1a358-429e8-minion-group-sflf
Failures for e2e-1de9f1a358-429e8-minion-group (if any):
2021/07/09 21:32:03 process.go:155: Step './cluster/log-dump/log-dump.sh /logs/artifacts' finished in 2m0.227249728s
2021/07/09 21:32:03 process.go:153: Running: ./hack/e2e-internal/e2e-down.sh
Project: k8s-jkns-e2e-gci-gce-1-6
... skipping 41 lines ...
Property "users.k8s-jkns-e2e-gci-gce-1-6_e2e-1de9f1a358-429e8-basic-auth" unset.
Property "contexts.k8s-jkns-e2e-gci-gce-1-6_e2e-1de9f1a358-429e8" unset.
Cleared config for k8s-jkns-e2e-gci-gce-1-6_e2e-1de9f1a358-429e8 from /workspace/.kube/config
Done
2021/07/09 21:36:22 process.go:155: Step './hack/e2e-internal/e2e-down.sh' finished in 4m19.396855809s
2021/07/09 21:36:22 process.go:96: Saved XML output to /logs/artifacts/junit_runner.xml.
2021/07/09 21:36:22 main.go:327: Something went wrong: starting e2e cluster: error during ./hack/e2e-internal/e2e-up.sh: exit status 1
Traceback (most recent call last):
  File "/workspace/scenarios/kubernetes_e2e.py", line 723, in <module>
    main(parse_args())
  File "/workspace/scenarios/kubernetes_e2e.py", line 569, in main
    mode.start(runner_args)
  File "/workspace/scenarios/kubernetes_e2e.py", line 228, in start
... skipping 15 lines ...