Result: FAILURE
Tests: 1 failed / 17 succeeded
Started: 2022-09-30 14:40
Elapsed: 15m59s
Revision:
Builder: cbd960a7-40cd-11ed-80f0-de992a81f90e
infra-commit: 6837fbe45
job-version: v1.26.0-alpha.1.203+42458952616406
kubetest-version: v20220928-cd48f52a16
repo: github.com/containerd/containerd
repo-commit: 1cc38f8df752d765eb0c0ca21784e55be726e94f
repos: {u'github.com/containerd/containerd': u'main'}
revision: v1.26.0-alpha.1.203+42458952616406

Test Failures


kubetest IsUp 0.25s

error during ./hack/e2e-internal/e2e-status.sh: exit status 1
				from junit_runner.xml
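
The failing step reports `couldn't read version from server: Get "https://34.83.14.188/version?timeout=32s": dial tcp 34.83.14.188:443: connect: connection refused` (see the build log below), i.e. the status check probes the API server's `/version` endpoint and nothing is listening. A minimal sketch of an equivalent probe, assuming the endpoint and timeout from that error line (the exact contents of `e2e-status.sh` are not shown here):

```python
import ssl
import urllib.request
import urllib.error

def api_server_up(base_url, timeout=2):
    """Return True if the API server answers /version, else False.

    Approximates the check behind ./hack/e2e-internal/e2e-status.sh:
    the /version URL comes from the error line in the log; the script's
    actual implementation is an assumption.
    """
    # e2e masters typically serve a self-signed cert, so skip verification
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    try:
        with urllib.request.urlopen(f"{base_url}/version?timeout={timeout}s",
                                    timeout=timeout, context=ctx) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        # "connection refused", DNS failures, and timeouts all land here,
        # matching the exit-status-1 failure mode in this run
        return False

# Probing an address with nothing listening fails fast, like this run did:
print(api_server_up("https://127.0.0.1:59999"))  # → False
```

Here a `False` result corresponds to the `exit status 1` that kubetest recorded for the `IsUp` step.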



(17 passed tests collapsed)

Error lines from build-log.txt

... skipping 406 lines ...
W0930 14:44:26.847] NODE_NAMES=bootstrap-e2e-minion-group-mh3x bootstrap-e2e-minion-group-n5c6 bootstrap-e2e-minion-group-rqf8
W0930 14:44:26.847] Trying to find master named 'bootstrap-e2e-master'
W0930 14:44:26.847] Looking for address 'bootstrap-e2e-master-ip'
I0930 14:44:28.154] Waiting up to 300 seconds for cluster initialization.
I0930 14:44:28.154] 
I0930 14:44:28.154]   This will continually check to see if the API for kubernetes is reachable.
I0930 14:44:28.155]   This may time out if there was some uncaught error during start up.
I0930 14:44:28.155] 
W0930 14:44:28.255] Using master: bootstrap-e2e-master (external IP: 34.83.14.188; internal IP: (not set))
I0930 14:45:02.751] .........Kubernetes cluster created.
I0930 14:45:02.901] Cluster "gce-cvm-upg-1-4-1-5-ctl-skew_bootstrap-e2e" set.
I0930 14:45:03.050] User "gce-cvm-upg-1-4-1-5-ctl-skew_bootstrap-e2e" set.
I0930 14:45:03.195] Context "gce-cvm-upg-1-4-1-5-ctl-skew_bootstrap-e2e" created.
... skipping 25 lines ...
I0930 14:45:41.545] bootstrap-e2e-minion-group-rqf8   Ready                      <none>   14s   v1.26.0-alpha.1.203+42458952616406
W0930 14:45:41.833] Warning: v1 ComponentStatus is deprecated in v1.19+
I0930 14:45:41.934] Validate output:
W0930 14:45:42.176] Warning: v1 ComponentStatus is deprecated in v1.19+
W0930 14:45:42.181] Done, listing cluster services:
W0930 14:45:42.182] 
I0930 14:45:42.282] NAME                 STATUS    MESSAGE                         ERROR
I0930 14:45:42.283] etcd-1               Healthy   {"health":"true","reason":""}   
I0930 14:45:42.283] etcd-0               Healthy   {"health":"true","reason":""}   
I0930 14:45:42.283] controller-manager   Healthy   ok                              
I0930 14:45:42.284] scheduler            Healthy   ok                              
I0930 14:45:42.284] Cluster validation succeeded
I0930 14:45:42.458] Kubernetes control plane is running at https://34.83.14.188
... skipping 69 lines ...
W0930 14:46:11.195] Listed 0 items.
W0930 14:46:12.347] 2022/09/30 14:46:12 process.go:155: Step './cluster/gce/list-resources.sh' finished in 13.804842556s
W0930 14:46:12.348] 2022/09/30 14:46:12 process.go:153: Running: ./hack/e2e-internal/e2e-status.sh
W0930 14:46:12.409] Project: gce-cvm-upg-1-4-1-5-ctl-skew
W0930 14:46:12.409] Network Project: gce-cvm-upg-1-4-1-5-ctl-skew
W0930 14:46:12.409] Zone: us-west1-b
W0930 14:46:12.594] error: couldn't read version from server: Get "https://34.83.14.188/version?timeout=32s": dial tcp 34.83.14.188:443: connect: connection refused
W0930 14:46:12.598] 2022/09/30 14:46:12 process.go:155: Step './hack/e2e-internal/e2e-status.sh' finished in 250.835282ms
W0930 14:46:12.599] 2022/09/30 14:46:12 e2e.go:568: Dumping logs locally to: /workspace/_artifacts
W0930 14:46:12.599] 2022/09/30 14:46:12 process.go:153: Running: ./cluster/log-dump/log-dump.sh /workspace/_artifacts
W0930 14:46:12.675] Trying to find master named 'bootstrap-e2e-master'
W0930 14:46:12.676] Looking for address 'bootstrap-e2e-master-ip'
I0930 14:46:12.776] Checking for custom logdump instances, if any
... skipping 14 lines ...
W0930 14:47:13.298] 
W0930 14:47:13.298] Specify --start=68856 in the next get-serial-port-output invocation to get only the new output starting from here.
W0930 14:47:16.527] scp: /var/log/cluster-autoscaler.log*: No such file or directory
W0930 14:47:16.685] scp: /var/log/fluentd.log*: No such file or directory
W0930 14:47:16.685] scp: /var/log/kubelet.cov*: No such file or directory
W0930 14:47:16.685] scp: /var/log/startupscript.log*: No such file or directory
W0930 14:47:16.689] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
I0930 14:47:16.963] Dumping logs from nodes locally to '/workspace/_artifacts'
I0930 14:47:16.964] Detecting nodes in the cluster
I0930 14:48:20.494] Changing logfiles to be world-readable for download
I0930 14:48:20.521] Changing logfiles to be world-readable for download
I0930 14:48:20.532] Changing logfiles to be world-readable for download
I0930 14:48:24.741] Copying 'kube-proxy.log containers/konnectivity-agent-*.log fluentd.log node-problem-detector.log kubelet.cov startupscript.log' from bootstrap-e2e-minion-group-rqf8
... skipping 6 lines ...
W0930 14:48:26.069] 
W0930 14:48:26.070] Specify --start=72048 in the next get-serial-port-output invocation to get only the new output starting from here.
W0930 14:48:28.124] scp: /var/log/fluentd.log*: No such file or directory
W0930 14:48:28.124] scp: /var/log/node-problem-detector.log*: No such file or directory
W0930 14:48:28.124] scp: /var/log/kubelet.cov*: No such file or directory
W0930 14:48:28.124] scp: /var/log/startupscript.log*: No such file or directory
W0930 14:48:28.129] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
W0930 14:48:28.237] scp: /var/log/fluentd.log*: No such file or directory
W0930 14:48:28.237] scp: /var/log/node-problem-detector.log*: No such file or directory
W0930 14:48:28.237] scp: /var/log/kubelet.cov*: No such file or directory
W0930 14:48:28.238] scp: /var/log/startupscript.log*: No such file or directory
W0930 14:48:28.241] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
W0930 14:48:28.393] scp: /var/log/fluentd.log*: No such file or directory
W0930 14:48:28.394] scp: /var/log/node-problem-detector.log*: No such file or directory
W0930 14:48:28.394] scp: /var/log/kubelet.cov*: No such file or directory
W0930 14:48:28.394] scp: /var/log/startupscript.log*: No such file or directory
W0930 14:48:28.399] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
W0930 14:48:33.068] INSTANCE_GROUPS=bootstrap-e2e-minion-group
W0930 14:48:33.068] NODE_NAMES=bootstrap-e2e-minion-group-mh3x bootstrap-e2e-minion-group-n5c6 bootstrap-e2e-minion-group-rqf8
I0930 14:48:34.443] Failures for bootstrap-e2e-minion-group (if any):
W0930 14:48:35.700] 2022/09/30 14:48:35 process.go:155: Step './cluster/log-dump/log-dump.sh /workspace/_artifacts' finished in 2m23.10159644s
W0930 14:48:35.700] 2022/09/30 14:48:35 e2e.go:472: Listing resources...
W0930 14:48:35.700] 2022/09/30 14:48:35 process.go:153: Running: ./cluster/gce/list-resources.sh
... skipping 78 lines ...
W0930 14:56:05.267] Listed 0 items.
W0930 14:56:06.636] Listed 0 items.
W0930 14:56:07.931] 2022/09/30 14:56:07 process.go:155: Step './cluster/gce/list-resources.sh' finished in 13.281158954s
W0930 14:56:07.931] 2022/09/30 14:56:07 process.go:153: Running: diff -sw -U0 -F^\[.*\]$ /workspace/_artifacts/gcp-resources-before.txt /workspace/_artifacts/gcp-resources-after.txt
W0930 14:56:07.933] 2022/09/30 14:56:07 process.go:155: Step 'diff -sw -U0 -F^\[.*\]$ /workspace/_artifacts/gcp-resources-before.txt /workspace/_artifacts/gcp-resources-after.txt' finished in 1.854466ms
W0930 14:56:07.934] 2022/09/30 14:56:07 process.go:96: Saved XML output to /workspace/_artifacts/junit_runner.xml.
W0930 14:56:07.939] 2022/09/30 14:56:07 main.go:331: Something went wrong: encountered 1 errors: [error during ./hack/e2e-internal/e2e-status.sh: exit status 1]
W0930 14:56:07.944] Traceback (most recent call last):
W0930 14:56:07.944]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 723, in <module>
W0930 14:56:07.944]     main(parse_args())
W0930 14:56:07.944]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 569, in main
W0930 14:56:07.944]     mode.start(runner_args)
W0930 14:56:07.945]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 228, in start
W0930 14:56:07.945]     check_env(env, self.command, *args)
W0930 14:56:07.945]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 111, in check_env
W0930 14:56:07.945]     subprocess.check_call(cmd, env=env)
W0930 14:56:07.945]   File "/usr/lib/python2.7/subprocess.py", line 190, in check_call
W0930 14:56:07.945]     raise CalledProcessError(retcode, cmd)
W0930 14:56:07.945] subprocess.CalledProcessError: Command '('kubetest', '--dump=/workspace/_artifacts', '--gcp-service-account=/etc/service-account/service-account.json', '--up', '--down', '--test', '--provider=gce', '--cluster=bootstrap-e2e', '--gcp-network=bootstrap-e2e', '--check-leaked-resources', '--extract=ci/latest', '--gcp-node-image=gci', '--gcp-zone=us-west1-b', '--test_args=--ginkgo.focus=\\[Feature:Reboot\\] --minStartupPods=8', '--timeout=180m')' returned non-zero exit status 1
E0930 14:56:07.951] Command failed
I0930 14:56:07.951] process 288 exited with code 1 after 15.6m
E0930 14:56:07.951] FAIL: ci-cri-containerd-e2e-cos-gce-reboot
I0930 14:56:07.951] Call:  gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json
W0930 14:56:08.727] Activated service account credentials for: [pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com]
I0930 14:56:08.904] process 13075 exited with code 0 after 0.0m
I0930 14:56:08.905] Call:  gcloud config get-value account
I0930 14:56:09.709] process 13089 exited with code 0 after 0.0m
I0930 14:56:09.709] Will upload results to gs://kubernetes-jenkins/logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
I0930 14:56:09.709] Upload result and artifacts...
I0930 14:56:09.709] Gubernator results at https://gubernator.k8s.io/build/kubernetes-jenkins/logs/ci-cri-containerd-e2e-cos-gce-reboot/1575858063808663552
I0930 14:56:09.710] Call:  gsutil ls gs://kubernetes-jenkins/logs/ci-cri-containerd-e2e-cos-gce-reboot/1575858063808663552/artifacts
W0930 14:56:11.001] CommandException: One or more URLs matched no objects.
E0930 14:56:11.311] Command failed
I0930 14:56:11.311] process 13103 exited with code 1 after 0.0m
W0930 14:56:11.311] Remote dir gs://kubernetes-jenkins/logs/ci-cri-containerd-e2e-cos-gce-reboot/1575858063808663552/artifacts not exist yet
I0930 14:56:11.311] Call:  gsutil -m -q -o GSUtil:use_magicfile=True cp -r -c -z log,txt,xml /workspace/_artifacts gs://kubernetes-jenkins/logs/ci-cri-containerd-e2e-cos-gce-reboot/1575858063808663552/artifacts
I0930 14:56:14.241] process 13243 exited with code 0 after 0.0m
I0930 14:56:14.242] Call:  git rev-parse HEAD
I0930 14:56:14.245] process 13898 exited with code 0 after 0.0m
... skipping 13 lines ...