Result: FAILURE
Tests: 1 failed / 8 succeeded
Started: 2022-07-14 18:00
Elapsed: 37m25s
Revision:
Builder: e4c4a679-039e-11ed-a16d-969e568a80d4
infra-commit: 6a828fde5
job-version: v1.25.0-alpha.2.278+e5f4f8d71b4847
kubetest-version:
revision: v1.25.0-alpha.2.278+e5f4f8d71b4847

Test Failures


kubetest Up 28m33s

error during ./hack/e2e-internal/e2e-up.sh: exit status 1
				from junit_runner.xml



8 passed tests (collapsed)

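The failing step is cluster bring-up (`./hack/e2e-internal/e2e-up.sh`). For local debugging, the kubetest invocation recorded in the Python traceback near the end of this log can be restated as a shell command. This is a sketch only: the `/workspace` layout and service-account key path exist inside the Prow build container and must be substituted with your own.

```shell
# Reconstruction of the kubetest command from the CalledProcessError in this
# log. Paths below are specific to the Prow build container; substitute your
# own artifacts directory and GCP service-account key.
kubetest \
  --dump=/workspace/_artifacts \
  --gcp-service-account=/etc/service-account/service-account.json \
  --up --down --test \
  --provider=gce \
  --cluster=bootstrap-e2e \
  --gcp-network=bootstrap-e2e \
  --check-leaked-resources \
  --extract=ci/latest-fast \
  --extract-ci-bucket=k8s-release-dev \
  --gcp-master-image=gci \
  --gcp-node-image=gci \
  --gcp-nodes=4 \
  --gcp-zone=us-west1-b \
  --ginkgo-parallel=30 \
  --test_args='--ginkgo.skip=\[Driver:.gcepd\]|\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\] --minStartupPods=8' \
  --timeout=80m
```

Since the test phase never ran, re-running with only `--up` (dropping `--test` and `--down`) would narrow the investigation to cluster creation.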
Error lines from build-log.txt

... skipping 15 lines ...
I0714 18:00:57.860] process 198 exited with code 0 after 0.0m
I0714 18:00:57.861] Will upload results to gs://kubernetes-jenkins/logs using prow-build@k8s-infra-prow-build.iam.gserviceaccount.com
I0714 18:00:57.861] Root: /workspace
I0714 18:00:57.861] cd to /workspace
I0714 18:00:57.861] Configure environment...
I0714 18:00:57.862] Call:  git show -s --format=format:%ct HEAD
W0714 18:00:57.865] fatal: not a git repository (or any of the parent directories): .git
I0714 18:00:57.866] process 212 exited with code 128 after 0.0m
W0714 18:00:57.866] Unable to print commit date for HEAD
I0714 18:00:57.867] Call:  gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json
W0714 18:00:58.889] Activated service account credentials for: [prow-build@k8s-infra-prow-build.iam.gserviceaccount.com]
I0714 18:00:59.162] process 213 exited with code 0 after 0.0m
I0714 18:00:59.162] Call:  gcloud config get-value account
... skipping 368 lines ...
W0714 18:04:17.732] Trying to find master named 'bootstrap-e2e-master'
W0714 18:04:17.733] Looking for address 'bootstrap-e2e-master-ip'
W0714 18:04:19.019] Using master: bootstrap-e2e-master (external IP: 34.168.152.251; internal IP: (not set))
I0714 18:04:19.119] Waiting up to 300 seconds for cluster initialization.
I0714 18:04:19.120] 
I0714 18:04:19.120]   This will continually check to see if the API for kubernetes is reachable.
I0714 18:04:19.120]   This may time out if there was some uncaught error during start up.
I0714 18:04:19.120] 
I0714 18:05:22.055] ................Kubernetes cluster created.
I0714 18:05:22.195] Cluster "k8s-infra-e2e-boskos-052_bootstrap-e2e" set.
I0714 18:05:22.335] User "k8s-infra-e2e-boskos-052_bootstrap-e2e" set.
I0714 18:05:22.479] Context "k8s-infra-e2e-boskos-052_bootstrap-e2e" created.
I0714 18:05:22.619] Switched to context "k8s-infra-e2e-boskos-052_bootstrap-e2e".
... skipping 239 lines ...
W0714 18:32:00.701] 
W0714 18:32:00.701] Specify --start=53754 in the next get-serial-port-output invocation to get only the new output starting from here.
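The hint above comes from gcloud's serial-port dump during log collection. A sketch of the follow-up invocation it suggests (the instance name and zone are taken from elsewhere in this job's configuration and are assumptions for any other run):

```shell
# Fetch only serial-port output produced after byte offset 53754, as the
# log's hint suggests. Instance name and zone taken from this job's config.
gcloud compute instances get-serial-port-output bootstrap-e2e-master \
  --zone=us-west1-b \
  --start=53754
```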
W0714 18:32:08.222] scp: /var/log/cluster-autoscaler.log*: No such file or directory
W0714 18:32:08.383] scp: /var/log/fluentd.log*: No such file or directory
W0714 18:32:08.383] scp: /var/log/kubelet.cov*: No such file or directory
W0714 18:32:08.383] scp: /var/log/startupscript.log*: No such file or directory
W0714 18:32:08.389] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
I0714 18:32:08.662] Dumping logs from nodes locally to '/workspace/_artifacts'
I0714 18:32:08.662] Detecting nodes in the cluster
W0714 18:33:30.682] Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
W0714 18:33:31.409] Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
W0714 18:33:32.793] Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
W0714 18:33:35.514] Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
... skipping 13 lines ...
I0714 18:33:49.427] Copying 'kube-proxy.log containers/konnectivity-agent-*.log fluentd.log node-problem-detector.log kubelet.cov startupscript.log' from bootstrap-e2e-minion-group-3xlq
W0714 18:33:50.010] scp: /var/log/containers/konnectivity-agent-*.log*: No such file or directory
W0714 18:33:50.010] scp: /var/log/fluentd.log*: No such file or directory
W0714 18:33:50.010] scp: /var/log/node-problem-detector.log*: No such file or directory
W0714 18:33:50.010] scp: /var/log/kubelet.cov*: No such file or directory
W0714 18:33:50.011] scp: /var/log/startupscript.log*: No such file or directory
W0714 18:33:50.014] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
W0714 18:33:50.914] scp: /var/log/containers/konnectivity-agent-*.log*: No such file or directory
W0714 18:33:50.914] scp: /var/log/fluentd.log*: No such file or directory
W0714 18:33:50.915] scp: /var/log/node-problem-detector.log*: No such file or directory
W0714 18:33:50.915] scp: /var/log/kubelet.cov*: No such file or directory
W0714 18:33:50.915] scp: /var/log/startupscript.log*: No such file or directory
W0714 18:33:50.918] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
W0714 18:33:51.339] 
W0714 18:33:51.339] Specify --start=101862 in the next get-serial-port-output invocation to get only the new output starting from here.
W0714 18:33:52.118] scp: /var/log/containers/konnectivity-agent-*.log*: No such file or directory
W0714 18:33:52.118] scp: /var/log/fluentd.log*: No such file or directory
W0714 18:33:52.118] scp: /var/log/node-problem-detector.log*: No such file or directory
W0714 18:33:52.118] scp: /var/log/kubelet.cov*: No such file or directory
W0714 18:33:52.119] scp: /var/log/startupscript.log*: No such file or directory
W0714 18:33:52.121] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
W0714 18:33:53.725] scp: /var/log/containers/konnectivity-agent-*.log*: No such file or directory
W0714 18:33:53.726] scp: /var/log/fluentd.log*: No such file or directory
W0714 18:33:53.726] scp: /var/log/node-problem-detector.log*: No such file or directory
W0714 18:33:53.726] scp: /var/log/kubelet.cov*: No such file or directory
W0714 18:33:53.726] scp: /var/log/startupscript.log*: No such file or directory
W0714 18:33:53.730] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
W0714 18:33:59.291] INSTANCE_GROUPS=bootstrap-e2e-minion-group
W0714 18:33:59.291] NODE_NAMES=bootstrap-e2e-minion-group-3xlq bootstrap-e2e-minion-group-fttm bootstrap-e2e-minion-group-rnxq bootstrap-e2e-minion-group-wz5f
I0714 18:34:00.913] Failures for bootstrap-e2e-minion-group (if any):
W0714 18:34:02.451] 2022/07/14 18:34:02 process.go:155: Step './cluster/log-dump/log-dump.sh /workspace/_artifacts' finished in 3m3.332549463s
W0714 18:34:02.452] 2022/07/14 18:34:02 process.go:153: Running: ./hack/e2e-internal/e2e-down.sh
W0714 18:34:02.541] Project: k8s-infra-e2e-boskos-052
... skipping 43 lines ...
I0714 18:37:58.675] Property "users.k8s-infra-e2e-boskos-052_bootstrap-e2e-basic-auth" unset.
I0714 18:37:58.871] Property "contexts.k8s-infra-e2e-boskos-052_bootstrap-e2e" unset.
I0714 18:37:58.875] Cleared config for k8s-infra-e2e-boskos-052_bootstrap-e2e from /workspace/.kube/config
I0714 18:37:58.875] Done
W0714 18:37:58.909] 2022/07/14 18:37:58 process.go:155: Step './hack/e2e-internal/e2e-down.sh' finished in 3m56.425199335s
W0714 18:37:58.910] 2022/07/14 18:37:58 process.go:96: Saved XML output to /workspace/_artifacts/junit_runner.xml.
W0714 18:37:58.910] 2022/07/14 18:37:58 main.go:331: Something went wrong: starting e2e cluster: error during ./hack/e2e-internal/e2e-up.sh: exit status 1
W0714 18:37:58.910] Traceback (most recent call last):
W0714 18:37:58.910]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 723, in <module>
W0714 18:37:58.910]     main(parse_args())
W0714 18:37:58.910]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 569, in main
W0714 18:37:58.911]     mode.start(runner_args)
W0714 18:37:58.911]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 228, in start
W0714 18:37:58.911]     check_env(env, self.command, *args)
W0714 18:37:58.911]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 111, in check_env
W0714 18:37:58.911]     subprocess.check_call(cmd, env=env)
W0714 18:37:58.911]   File "/usr/lib/python2.7/subprocess.py", line 190, in check_call
W0714 18:37:58.911]     raise CalledProcessError(retcode, cmd)
W0714 18:37:58.912] subprocess.CalledProcessError: Command '('kubetest', '--dump=/workspace/_artifacts', '--gcp-service-account=/etc/service-account/service-account.json', '--up', '--down', '--test', '--provider=gce', '--cluster=bootstrap-e2e', '--gcp-network=bootstrap-e2e', '--check-leaked-resources', '--extract=ci/latest-fast', '--extract-ci-bucket=k8s-release-dev', '--gcp-master-image=gci', '--gcp-node-image=gci', '--gcp-nodes=4', '--gcp-zone=us-west1-b', '--ginkgo-parallel=30', '--test_args=--ginkgo.skip=\\[Driver:.gcepd\\]|\\[Slow\\]|\\[Serial\\]|\\[Disruptive\\]|\\[Flaky\\]|\\[Feature:.+\\] --minStartupPods=8', '--timeout=80m')' returned non-zero exit status 1
E0714 18:37:58.912] Command failed
I0714 18:37:58.912] process 413 exited with code 1 after 36.9m
E0714 18:37:58.912] FAIL: canary-e2e-gce-cloud-provider-disabled
I0714 18:37:58.913] Call:  gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json
W0714 18:37:59.868] Activated service account credentials for: [prow-build@k8s-infra-prow-build.iam.gserviceaccount.com]
I0714 18:38:00.075] process 18349 exited with code 0 after 0.0m
I0714 18:38:00.076] Call:  gcloud config get-value account
I0714 18:38:01.023] process 18363 exited with code 0 after 0.0m
I0714 18:38:01.023] Will upload results to gs://kubernetes-jenkins/logs using prow-build@k8s-infra-prow-build.iam.gserviceaccount.com
I0714 18:38:01.023] Upload result and artifacts...
I0714 18:38:01.023] Gubernator results at https://gubernator.k8s.io/build/kubernetes-jenkins/logs/canary-e2e-gce-cloud-provider-disabled/1547642286769180672
I0714 18:38:01.024] Call:  gsutil ls gs://kubernetes-jenkins/logs/canary-e2e-gce-cloud-provider-disabled/1547642286769180672/artifacts
W0714 18:38:02.579] CommandException: One or more URLs matched no objects.
E0714 18:38:03.003] Command failed
I0714 18:38:03.003] process 18377 exited with code 1 after 0.0m
W0714 18:38:03.003] Remote dir gs://kubernetes-jenkins/logs/canary-e2e-gce-cloud-provider-disabled/1547642286769180672/artifacts not exist yet
I0714 18:38:03.003] Call:  gsutil -m -q -o GSUtil:use_magicfile=True cp -r -c -z log,txt,xml /workspace/_artifacts gs://kubernetes-jenkins/logs/canary-e2e-gce-cloud-provider-disabled/1547642286769180672/artifacts
I0714 18:38:14.072] process 18517 exited with code 0 after 0.2m
I0714 18:38:14.073] Call:  git rev-parse HEAD
W0714 18:38:14.077] fatal: not a git repository (or any of the parent directories): .git
E0714 18:38:14.078] Command failed
I0714 18:38:14.078] process 19172 exited with code 128 after 0.0m
I0714 18:38:14.078] Call:  git rev-parse HEAD
I0714 18:38:14.083] process 19173 exited with code 0 after 0.0m
I0714 18:38:14.083] Call:  gsutil stat gs://kubernetes-jenkins/logs/canary-e2e-gce-cloud-provider-disabled/jobResultsCache.json
I0714 18:38:16.084] process 19174 exited with code 0 after 0.0m
I0714 18:38:16.085] Call:  gsutil -q cat 'gs://kubernetes-jenkins/logs/canary-e2e-gce-cloud-provider-disabled/jobResultsCache.json#1657219078438230'
... skipping 8 lines ...