Result: FAILURE
Tests: 1 failed / 6 succeeded
Started: 2022-09-28 01:50
Elapsed: 6m11s

Revision
Builder: f052df19-3ecf-11ed-9d09-ee6fcf89e9cd
infra-commit: 753ed0abb
job-version: v1.26.0-alpha.1.103+79d6053e6d9846
kubetest-version: v20220922-dcf27e1579
repo: github.com/containerd/containerd
repo-commit: 34d078e99fbdb8c28feec359634335a2c684e703
repos: {u'github.com/containerd/containerd': u'main'}
revision: v1.26.0-alpha.1.103+79d6053e6d9846

Test Failures


kubetest IsUp 0.27s

error during ./hack/e2e-internal/e2e-status.sh: exit status 1
				from junit_runner.xml
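
The IsUp step failed because the status check could not reach the cluster's API server; the build log below shows the underlying "connection refused" when ./hack/e2e-internal/e2e-status.sh asked https://34.168.183.1/version for the server version. As a rough illustration only (not the harness's actual code), the following sketch performs the same kind of probe in Python; the endpoint and 32s timeout are taken from the failure line in the log, while the function name and structure are assumptions made for this example.

# Hypothetical sketch of an IsUp-style probe: ask the API server for /version
# and treat connection errors as "cluster not up". The IP and 32s timeout are
# copied from the failure line in build-log.txt; everything else is assumed.
import json
import ssl
import urllib.request

APISERVER = "https://34.168.183.1"  # master address from the log
TIMEOUT_S = 32                      # matches the ?timeout=32s in the log

def cluster_is_up(apiserver=APISERVER, timeout=TIMEOUT_S):
    """Return True if /version answers, False on connection errors."""
    ctx = ssl.create_default_context()
    ctx.check_hostname = False      # CI clusters often use self-signed certs
    ctx.verify_mode = ssl.CERT_NONE
    try:
        url = "%s/version?timeout=%ds" % (apiserver, timeout)
        with urllib.request.urlopen(url, timeout=timeout, context=ctx) as resp:
            version = json.load(resp)
            print("API server is up:", version.get("gitVersion", "<unknown>"))
            return True
    except OSError as err:  # URLError and ConnectionRefusedError are OSErrors
        print("API server unreachable:", err)
        return False

if __name__ == "__main__":
    raise SystemExit(0 if cluster_is_up() else 1)

In this run such a probe would report "connection refused", matching the 01:51:38 error in the log: the master at 34.168.183.1 was not serving on port 443 when the soak job checked it.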




Error lines from build-log.txt

... skipping 217 lines ...
I0928 01:51:28.488] Extracting /workspace/github.com/containerd/containerd/kubernetes/test/kubernetes-test-linux-amd64.tar.gz into /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64
W0928 01:51:38.184] 2022/09/28 01:51:38 process.go:155: Step '/workspace/get-kube.sh' finished in 32.507318854s
W0928 01:51:38.185] 2022/09/28 01:51:38 process.go:153: Running: ./hack/e2e-internal/e2e-status.sh
W0928 01:51:38.253] Project: k8s-jkns-e2e-gci-gce-soak-1-4
W0928 01:51:38.253] Network Project: k8s-jkns-e2e-gci-gce-soak-1-4
W0928 01:51:38.254] Zone: us-west1-b
W0928 01:51:38.454] error: couldn't read version from server: Get "https://34.168.183.1/version?timeout=32s": dial tcp 34.168.183.1:443: connect: connection refused
W0928 01:51:38.459] 2022/09/28 01:51:38 process.go:155: Step './hack/e2e-internal/e2e-status.sh' finished in 274.366494ms
W0928 01:51:38.459] 2022/09/28 01:51:38 e2e.go:568: Dumping logs locally to: /workspace/_artifacts
W0928 01:51:38.459] 2022/09/28 01:51:38 process.go:153: Running: ./cluster/log-dump/log-dump.sh /workspace/_artifacts
W0928 01:51:38.541] Trying to find master named 'bootstrap-e2e-master'
W0928 01:51:38.541] Looking for address 'bootstrap-e2e-master-ip'
I0928 01:51:38.642] Checking for custom logdump instances, if any
... skipping 14 lines ...
W0928 01:53:06.394] 
W0928 01:53:06.395] Specify --start=1202684 in the next get-serial-port-output invocation to get only the new output starting from here.
W0928 01:54:13.527] scp: /var/log/cluster-autoscaler.log*: No such file or directory
W0928 01:54:16.274] scp: /var/log/fluentd.log*: No such file or directory
W0928 01:54:16.274] scp: /var/log/kubelet.cov*: No such file or directory
W0928 01:54:16.274] scp: /var/log/startupscript.log*: No such file or directory
W0928 01:54:16.291] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
I0928 01:54:16.607] Dumping logs from nodes locally to '/workspace/_artifacts'
I0928 01:54:16.607] Detecting nodes in the cluster
I0928 01:55:46.725] Changing logfiles to be world-readable for download
I0928 01:55:48.376] Changing logfiles to be world-readable for download
I0928 01:55:49.727] Changing logfiles to be world-readable for download
I0928 01:55:51.370] Copying 'kube-proxy.log containers/konnectivity-agent-*.log fluentd.log node-problem-detector.log kubelet.cov startupscript.log' from bootstrap-e2e-minion-group-tvz5
... skipping 6 lines ...
W0928 01:55:55.439] 
W0928 01:55:55.440] Specify --start=2489643 in the next get-serial-port-output invocation to get only the new output starting from here.
W0928 01:55:55.508] scp: /var/log/fluentd.log*: No such file or directory
W0928 01:55:55.508] scp: /var/log/node-problem-detector.log*: No such file or directory
W0928 01:55:55.508] scp: /var/log/kubelet.cov*: No such file or directory
W0928 01:55:55.509] scp: /var/log/startupscript.log*: No such file or directory
W0928 01:55:55.514] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
W0928 01:55:56.955] scp: /var/log/fluentd.log*: No such file or directory
W0928 01:55:56.956] scp: /var/log/node-problem-detector.log*: No such file or directory
W0928 01:55:56.956] scp: /var/log/kubelet.cov*: No such file or directory
W0928 01:55:56.956] scp: /var/log/startupscript.log*: No such file or directory
W0928 01:55:56.963] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
W0928 01:55:58.209] scp: /var/log/fluentd.log*: No such file or directory
W0928 01:55:58.209] scp: /var/log/node-problem-detector.log*: No such file or directory
W0928 01:55:58.210] scp: /var/log/kubelet.cov*: No such file or directory
W0928 01:55:58.210] scp: /var/log/startupscript.log*: No such file or directory
W0928 01:55:58.214] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
W0928 01:56:03.646] INSTANCE_GROUPS=bootstrap-e2e-minion-group
W0928 01:56:03.646] NODE_NAMES=bootstrap-e2e-minion-group-dq6x bootstrap-e2e-minion-group-nf57 bootstrap-e2e-minion-group-tvz5
I0928 01:56:05.053] Failures for bootstrap-e2e-minion-group (if any):
W0928 01:56:07.264] 2022/09/28 01:56:07 process.go:155: Step './cluster/log-dump/log-dump.sh /workspace/_artifacts' finished in 4m28.804965495s
W0928 01:56:07.265] 2022/09/28 01:56:07 process.go:96: Saved XML output to /workspace/_artifacts/junit_runner.xml.
W0928 01:56:07.265] 2022/09/28 01:56:07 main.go:331: Something went wrong: encountered 1 errors: [error during ./hack/e2e-internal/e2e-status.sh: exit status 1]
W0928 01:56:07.271] Traceback (most recent call last):
W0928 01:56:07.272]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 723, in <module>
W0928 01:56:07.272]     main(parse_args())
W0928 01:56:07.272]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 569, in main
W0928 01:56:07.272]     mode.start(runner_args)
W0928 01:56:07.272]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 228, in start
W0928 01:56:07.272]     check_env(env, self.command, *args)
W0928 01:56:07.273]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 111, in check_env
W0928 01:56:07.273]     subprocess.check_call(cmd, env=env)
W0928 01:56:07.273]   File "/usr/lib/python2.7/subprocess.py", line 190, in check_call
W0928 01:56:07.273]     raise CalledProcessError(retcode, cmd)
W0928 01:56:07.274] subprocess.CalledProcessError: Command '('kubetest', '--dump=/workspace/_artifacts', '--gcp-service-account=/etc/service-account/service-account.json', '--test', '--provider=gce', '--cluster=bootstrap-e2e', '--gcp-network=bootstrap-e2e', '--extract=ci/latest', '--gcp-master-image=gci', '--gcp-node-image=gci', '--gcp-project=k8s-jkns-e2e-gci-gce-soak-1-4', '--gcp-zone=us-west1-b', '--save=gs://kubernetes-e2e-soak-configs/ci-containerd-soak-gci-gce', '--soak', '--test_args=--ginkgo.skip=\\[Driver:.gcepd\\]|\\[Disruptive\\]|\\[Flaky\\]|\\[Feature:.+\\] --clean-start=true --minStartupPods=8', '--timeout=1200m')' returned non-zero exit status 1
E0928 01:56:07.276] Command failed
I0928 01:56:07.276] process 289 exited with code 1 after 5.2m
E0928 01:56:07.276] FAIL: ci-containerd-soak-cos-gce
I0928 01:56:07.277] Call:  gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json
W0928 01:56:08.076] Activated service account credentials for: [pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com]
I0928 01:56:08.245] process 3765 exited with code 0 after 0.0m
I0928 01:56:08.245] Call:  gcloud config get-value account
I0928 01:56:08.998] process 3779 exited with code 0 after 0.0m
I0928 01:56:08.998] Will upload results to gs://kubernetes-jenkins/logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
I0928 01:56:08.998] Upload result and artifacts...
I0928 01:56:08.998] Gubernator results at https://gubernator.k8s.io/build/kubernetes-jenkins/logs/ci-containerd-soak-cos-gce/1574939586138738688
I0928 01:56:08.999] Call:  gsutil ls gs://kubernetes-jenkins/logs/ci-containerd-soak-cos-gce/1574939586138738688/artifacts
W0928 01:56:10.272] CommandException: One or more URLs matched no objects.
E0928 01:56:10.594] Command failed
I0928 01:56:10.594] process 3793 exited with code 1 after 0.0m
W0928 01:56:10.594] Remote dir gs://kubernetes-jenkins/logs/ci-containerd-soak-cos-gce/1574939586138738688/artifacts not exist yet
I0928 01:56:10.595] Call:  gsutil -m -q -o GSUtil:use_magicfile=True cp -r -c -z log,txt,xml /workspace/_artifacts gs://kubernetes-jenkins/logs/ci-containerd-soak-cos-gce/1574939586138738688/artifacts
I0928 01:56:46.108] process 3933 exited with code 0 after 0.6m
I0928 01:56:46.109] Call:  git rev-parse HEAD
I0928 01:56:46.113] process 4600 exited with code 0 after 0.0m
... skipping 13 lines ...
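
The traceback above shows the overall control flow: scenarios/kubernetes_e2e.py builds a kubetest command line, runs it through subprocess.check_call, and a non-zero exit surfaces as CalledProcessError, which is what turns the single failed IsUp check into the job-level "Command failed" / FAIL result. The sketch below is a simplified, hypothetical reconstruction of that wrapper pattern, using a few of the flags visible in the logged command line; it is not the real kubernetes_e2e.py.

# Hypothetical wrapper sketch: run kubetest and propagate its exit status,
# mirroring the check_env -> subprocess.check_call pattern in the traceback.
import os
import subprocess
import sys

def check_env(env, *cmd):
    """Run cmd with extra environment variables; raise on non-zero exit."""
    full_env = dict(os.environ, **env)
    print("Running:", " ".join(cmd))
    subprocess.check_call(cmd, env=full_env)  # raises CalledProcessError on failure

def main():
    runner_args = (
        "kubetest",
        "--dump=/workspace/_artifacts",
        "--test",
        "--provider=gce",
        "--cluster=bootstrap-e2e",
        "--extract=ci/latest",
        "--gcp-zone=us-west1-b",
        "--soak",
        "--timeout=1200m",
        # ...remaining flags from the logged command line...
    )
    try:
        check_env({}, *runner_args)
    except subprocess.CalledProcessError as err:
        print("Command failed:", err)
        return err.returncode
    return 0

if __name__ == "__main__":
    sys.exit(main())

With this pattern, kubetest's exit status 1 propagates to the wrapper, which is why the log ends with "process 289 exited with code 1" and "FAIL: ci-containerd-soak-cos-gce" even though only the IsUp check failed.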