error during ./hack/e2e-internal/e2e-status.sh: exit status 1
from junit_runner.xml
kubetest Check APIReachability
kubetest Deferred TearDown
kubetest DumpClusterLogs
kubetest Extract
kubetest GetDeployer
kubetest Prepare
kubetest TearDown
kubetest TearDown Previous
kubetest Timeout
kubetest Up
kubetest diffResources
kubetest list nodes
kubetest listResources After
kubetest listResources Before
kubetest listResources Down
kubetest listResources Up
kubetest test setup
... skipping 407 lines ...
W0129 11:54:42.994] Looking for address 'bootstrap-e2e-master-ip'
W0129 11:54:42.994] Using master: bootstrap-e2e-master (external IP: 104.196.242.53; internal IP: (not set))
I0129 11:54:43.094] Group is stable
I0129 11:54:43.094] Waiting up to 300 seconds for cluster initialization.
I0129 11:54:48.021]
I0129 11:54:48.021] This will continually check to see if the API for kubernetes is reachable.
I0129 11:54:48.021] This may time out if there was some uncaught error during start up.
I0129 11:54:48.021]
I0129 11:55:24.363] ..........Kubernetes cluster created.
I0129 11:55:24.504] Cluster "k8s-jkns-gce-sd-log_bootstrap-e2e" set.
I0129 11:55:24.640] User "k8s-jkns-gce-sd-log_bootstrap-e2e" set.
I0129 11:55:24.776] Context "k8s-jkns-gce-sd-log_bootstrap-e2e" created.
I0129 11:55:24.912] Switched to context "k8s-jkns-gce-sd-log_bootstrap-e2e".
... skipping 22 lines ...
I0129 11:56:04.133] bootstrap-e2e-minion-group-0114   Ready   <none>   12s   v1.27.0-alpha.1.73+8e642d3d0deab2
I0129 11:56:04.133] bootstrap-e2e-minion-group-kb2c   Ready   <none>   16s   v1.27.0-alpha.1.73+8e642d3d0deab2
I0129 11:56:04.133] bootstrap-e2e-minion-group-w39g   Ready   <none>   13s   v1.27.0-alpha.1.73+8e642d3d0deab2
I0129 11:56:04.133] Validate output:
W0129 11:56:04.330] Warning: v1 ComponentStatus is deprecated in v1.19+
W0129 11:56:04.336] Done, listing cluster services:
I0129 11:56:04.436] NAME                 STATUS    MESSAGE                         ERROR
I0129 11:56:04.605] etcd-1               Healthy   {"health":"true","reason":""}
I0129 11:56:04.605] controller-manager   Healthy   ok
I0129 11:56:04.605] etcd-0               Healthy   {"health":"true","reason":""}
I0129 11:56:04.605] scheduler            Healthy   ok
I0129 11:56:04.605] Cluster validation succeeded
I0129 11:56:04.606] Kubernetes control plane is running at https://104.196.242.53
... skipping 48 lines ...
W0129 11:56:38.050] Listed 0 items.
W0129 11:56:39.540] 2023/01/29 11:56:39 process.go:155: Step './cluster/gce/list-resources.sh' finished in 18.64452404s
W0129 11:56:39.593] 2023/01/29 11:56:39 process.go:153: Running: ./hack/e2e-internal/e2e-status.sh
W0129 11:56:39.593] Project: k8s-jkns-gce-sd-log
W0129 11:56:39.593] Network Project: k8s-jkns-gce-sd-log
W0129 11:56:39.594] Zone: us-west1-b
W0129 11:56:39.772] error: couldn't read version from server: Get "https://104.196.242.53/version?timeout=32s": dial tcp 104.196.242.53:443: connect: connection refused
W0129 11:56:39.777] 2023/01/29 11:56:39 process.go:155: Step './hack/e2e-internal/e2e-status.sh' finished in 236.74047ms
W0129 11:56:39.844] 2023/01/29 11:56:39 e2e.go:572: Dumping logs locally to: /workspace/_artifacts
W0129 11:56:39.844] 2023/01/29 11:56:39 process.go:153: Running: ./cluster/log-dump/log-dump.sh /workspace/_artifacts
W0129 11:56:39.844] Trying to find master named 'bootstrap-e2e-master'
W0129 11:56:39.844] Looking for address 'bootstrap-e2e-master-ip'
I0129 11:56:39.945] - IPProtocol: tcp
... skipping 37 lines ...
W0129 11:57:57.519] Specify --start=68821 in the next get-serial-port-output invocation to get only the new output starting from here.
W0129 11:57:57.519] scp: /var/log/cloud-controller-manager.log*: No such file or directory
W0129 11:57:57.863] scp: /var/log/cluster-autoscaler.log*: No such file or directory
W0129 11:57:58.021] scp: /var/log/fluentd.log*: No such file or directory
W0129 11:57:58.026] scp: /var/log/kubelet.cov*: No such file or directory
W0129 11:57:58.026] scp: /var/log/startupscript.log*: No such file or directory
W0129 11:57:58.027] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
I0129 11:57:58.351] Dumping logs from nodes locally to '/workspace/_artifacts'
I0129 11:59:14.440] Detecting nodes in the cluster
I0129 11:59:14.440] Changing logfiles to be world-readable for download
I0129 11:59:14.694] Changing logfiles to be world-readable for download
I0129 11:59:14.736] Changing logfiles to be world-readable for download
I0129 11:59:19.336] Copying 'kube-proxy.log containers/konnectivity-agent-*.log fluentd.log node-problem-detector.log kubelet.cov startupscript.log' from bootstrap-e2e-minion-group-0114
... skipping 6 lines ...
W0129 11:59:21.004]
W0129 11:59:23.639] Specify --start=74775 in the next get-serial-port-output invocation to get only the new output starting from here.
W0129 11:59:23.639] scp: /var/log/fluentd.log*: No such file or directory
W0129 11:59:23.646] scp: /var/log/node-problem-detector.log*: No such file or directory
W0129 11:59:23.646] scp: /var/log/kubelet.cov*: No such file or directory
W0129 11:59:23.646] scp: /var/log/startupscript.log*: No such file or directory
W0129 11:59:23.646] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
W0129 11:59:23.970] scp: /var/log/fluentd.log*: No such file or directory
W0129 11:59:23.971] scp: /var/log/node-problem-detector.log*: No such file or directory
W0129 11:59:23.971] scp: /var/log/kubelet.cov*: No such file or directory
W0129 11:59:23.971] scp: /var/log/startupscript.log*: No such file or directory
W0129 11:59:23.976] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
W0129 11:59:23.992] scp: /var/log/fluentd.log*: No such file or directory
W0129 11:59:23.997] scp: /var/log/node-problem-detector.log*: No such file or directory
W0129 11:59:23.997] scp: /var/log/kubelet.cov*: No such file or directory
W0129 11:59:23.998] scp: /var/log/startupscript.log*: No such file or directory
W0129 11:59:23.998] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
W0129 11:59:30.201] INSTANCE_GROUPS=bootstrap-e2e-minion-group
I0129 11:59:31.994] Failures for bootstrap-e2e-minion-group (if any):
W0129 11:59:33.907] NODE_NAMES=bootstrap-e2e-minion-group-0114 bootstrap-e2e-minion-group-kb2c bootstrap-e2e-minion-group-w39g
W0129 11:59:33.908] 2023/01/29 11:59:33 process.go:155: Step './cluster/log-dump/log-dump.sh /workspace/_artifacts' finished in 2m54.130043141s
W0129 11:59:48.372] 2023/01/29 11:59:33 e2e.go:476: Listing resources...
W0129 11:59:48.372] 2023/01/29 11:59:33 process.go:153: Running: ./cluster/gce/list-resources.sh
... skipping 77 lines ...
W0129 12:07:08.174] Listed 0 items.
W0129 12:07:10.096] Listed 0 items.
W0129 12:07:11.732] 2023/01/29 12:07:11 process.go:155: Step './cluster/gce/list-resources.sh' finished in 18.410585553s
W0129 12:07:11.733] 2023/01/29 12:07:11 process.go:153: Running: diff -sw -U0 -F^\[.*\]$ /workspace/_artifacts/gcp-resources-before.txt /workspace/_artifacts/gcp-resources-after.txt
W0129 12:07:11.734] 2023/01/29 12:07:11 process.go:155: Step 'diff -sw -U0 -F^\[.*\]$ /workspace/_artifacts/gcp-resources-before.txt /workspace/_artifacts/gcp-resources-after.txt' finished in 1.418976ms
W0129 12:07:11.735] 2023/01/29 12:07:11 process.go:96: Saved XML output to /workspace/_artifacts/junit_runner.xml.
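The leaked-resource check above snapshots GCP resources before and after the run, then compares the two listings with `diff -sw -U0`: any `+` line in the output is a resource that exists after teardown but not before. A minimal sketch of that comparison, using Python's `difflib` in place of the `diff` binary and hypothetical listings standing in for the real `gcp-resources-before.txt` / `gcp-resources-after.txt` artifacts:

```python
import difflib

# Hypothetical before/after resource listings (the real files are the
# gcp-resources-*.txt artifacts written by list-resources.sh).
before = ["[ instances ]", "vm-a"]
after = ["[ instances ]", "vm-a", "vm-b"]

# Zero-context unified diff, like `diff -U0`; keep only added lines,
# skipping the '+++' file header. Anything left is a leaked resource.
leaked = [line[1:] for line in difflib.unified_diff(before, after, n=0)
          if line.startswith("+") and not line.startswith("+++")]
print(leaked)  # -> ['vm-b']
```

In this job the diff came back empty (the step finished in under 2 ms and no leak was reported), so the `--check-leaked-resources` gate passed even though the run itself failed.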
W0129 12:07:11.744] 2023/01/29 12:07:11 main.go:328: Something went wrong: encountered 1 errors: [error during ./hack/e2e-internal/e2e-status.sh: exit status 1]
W0129 12:07:11.748] Traceback (most recent call last):
W0129 12:07:11.748]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 723, in <module>
W0129 12:07:11.748]     main(parse_args())
W0129 12:07:11.748]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 569, in main
W0129 12:07:11.748]     mode.start(runner_args)
W0129 12:07:11.748]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 228, in start
W0129 12:07:11.748]     check_env(env, self.command, *args)
W0129 12:07:11.749]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 111, in check_env
W0129 12:07:11.749]     subprocess.check_call(cmd, env=env)
W0129 12:07:11.749]   File "/usr/lib/python3.9/subprocess.py", line 373, in check_call
W0129 12:07:11.749]     raise CalledProcessError(retcode, cmd)
W0129 12:07:11.761] subprocess.CalledProcessError: Command '('kubetest', '--dump=/workspace/_artifacts', '--gcp-service-account=/etc/service-account/service-account.json', '--up', '--down', '--test', '--provider=gce', '--cluster=bootstrap-e2e', '--gcp-network=bootstrap-e2e', '--check-leaked-resources', '--extract=ci/latest', '--gcp-node-image=gci', '--gcp-zone=us-west1-b', '--test_args=--ginkgo.focus=\\[Feature:Reboot\\] --minStartupPods=8', '--timeout=180m')' returned non-zero exit status 1.
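The traceback is the expected propagation path, not a bug in the scenario wrapper: `check_env` runs the `kubetest` command through `subprocess.check_call`, which raises `CalledProcessError` whenever the child exits non-zero. A minimal sketch of that mechanism (`run_checked` is a hypothetical stand-in for `check_env`, not the actual kubernetes_e2e.py code):

```python
import subprocess

def run_checked(cmd, env=None):
    """Run cmd; subprocess.check_call raises CalledProcessError on a
    non-zero exit status, just as the traceback above shows."""
    subprocess.check_call(cmd, env=env)

try:
    # A child process that exits 1 stands in for the failing kubetest run.
    run_checked(["/bin/sh", "-c", "exit 1"])
except subprocess.CalledProcessError as err:
    # err.returncode carries the child's exit status, matching the
    # "returned non-zero exit status 1" message in the log.
    print(err.returncode)  # -> 1
```

So the Python layer here is only relaying kubetest's exit status; the real failure is the earlier `connection refused` from `./hack/e2e-internal/e2e-status.sh` against the apiserver.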
I0129 12:07:11.762] Done
E0129 12:07:11.762] Command failed
I0129 12:07:11.762] process 265 exited with code 1 after 16.5m
E0129 12:07:11.762] FAIL: ci-cri-containerd-e2e-cos-gce-reboot
I0129 12:07:11.763] Call: gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json
W0129 12:07:12.705] Activated service account credentials for: [pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com]
I0129 12:07:12.920] process 10631 exited with code 0 after 0.0m
I0129 12:07:12.920] Call: gcloud config get-value account
I0129 12:07:13.921] process 10641 exited with code 0 after 0.0m
I0129 12:07:13.921] Will upload results to gs://kubernetes-jenkins/logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
I0129 12:07:13.921] Upload result and artifacts...
I0129 12:07:13.921] Gubernator results at https://gubernator.k8s.io/build/kubernetes-jenkins/logs/ci-cri-containerd-e2e-cos-gce-reboot/1619664241797632000
I0129 12:07:13.922] Call: gsutil ls gs://kubernetes-jenkins/logs/ci-cri-containerd-e2e-cos-gce-reboot/1619664241797632000/artifacts
W0129 12:07:15.480] CommandException: One or more URLs matched no objects.
E0129 12:07:15.837] Command failed
I0129 12:07:15.837] process 10651 exited with code 1 after 0.0m
W0129 12:07:15.837] Remote dir gs://kubernetes-jenkins/logs/ci-cri-containerd-e2e-cos-gce-reboot/1619664241797632000/artifacts not exist yet
I0129 12:07:15.837] Call: gsutil -m -q -o GSUtil:use_magicfile=True cp -r -c -z log,txt,xml /workspace/_artifacts gs://kubernetes-jenkins/logs/ci-cri-containerd-e2e-cos-gce-reboot/1619664241797632000/artifacts
I0129 12:07:19.550] process 10785 exited with code 0 after 0.1m
I0129 12:07:19.551] Call: git rev-parse HEAD
I0129 12:07:19.556] process 11434 exited with code 0 after 0.0m
... skipping 13 lines ...