error during ./hack/e2e-internal/e2e-up.sh: exit status 2
from junit_runner.xml
kubetest Deferred TearDown
kubetest DumpClusterLogs (--up failed)
kubetest Extract
kubetest GetDeployer
kubetest Prepare
kubetest TearDown Previous
kubetest Timeout
... skipping 15 lines ...
I0323 10:27:23.845] process 51 exited with code 0 after 0.0m
I0323 10:27:23.845] Will upload results to gs://kubernetes-jenkins/logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
I0323 10:27:23.846] Root: /workspace
I0323 10:27:23.846] cd to /workspace
I0323 10:27:23.846] Configure environment...
I0323 10:27:23.846] Call: git show -s --format=format:%ct HEAD
W0323 10:27:23.850] fatal: not a git repository (or any of the parent directories): .git
I0323 10:27:23.850] process 61 exited with code 128 after 0.0m
W0323 10:27:23.850] Unable to print commit date for HEAD
I0323 10:27:23.851] Call: gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json
W0323 10:27:24.644] Activated service account credentials for: [pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com]
I0323 10:27:24.856] process 62 exited with code 0 after 0.0m
I0323 10:27:24.857] Call: gcloud config get-value account
... skipping 333 lines ...
W0323 10:30:53.685] Looking for address 'ca-master-ip'
W0323 10:30:53.685] Using master: ca-master (external IP: 34.105.34.62; internal IP: (not set))
I0323 10:30:53.785] Group is stable
I0323 10:30:53.786] Waiting up to 300 seconds for cluster initialization.
I0323 10:30:58.709]
I0323 10:30:58.710] This will continually check to see if the API for kubernetes is reachable.
I0323 10:30:58.710] This may time out if there was some uncaught error during start up.
I0323 10:30:58.710]
I0323 10:35:55.030] ............................................................................................................................................Checking for custom logdump instances, if any
I0323 10:35:55.045] ----------------------------------------------------------------------------------------------------
I0323 10:35:55.057] k/k version of the log-dump.sh script is deprecated!
I0323 10:35:55.058] Please migrate your test job to use test-infra's repo version of log-dump.sh!
I0323 10:35:55.058] Migration steps can be found in the readme file.
I0323 10:35:55.058] ----------------------------------------------------------------------------------------------------
I0323 10:35:55.058] Sourcing kube-util.sh
W0323 10:35:55.162] Cluster failed to initialize within 300 seconds.
W0323 10:35:55.202] Last output from querying API server follows:
W0323 10:35:55.202] -----------------------------------------------------
W0323 10:35:55.203] * Trying 34.105.34.62:443...
W0323 10:35:55.203] * connect to 34.105.34.62 port 443 failed: Connection refused
W0323 10:35:55.203] * Failed to connect to 34.105.34.62 port 443: Connection refused
W0323 10:35:55.209] * Closing connection 0
W0323 10:35:55.210] curl: (7) Failed to connect to 34.105.34.62 port 443: Connection refused
W0323 10:35:55.210] -----------------------------------------------------
W0323 10:35:55.210] 2023/03/23 10:35:55 process.go:155: Step './hack/e2e-internal/e2e-up.sh' finished in 7m21.65602929s
W0323 10:35:55.211] 2023/03/23 10:35:55 e2e.go:572: Dumping logs locally to: /workspace/_artifacts
W0323 10:35:55.211] 2023/03/23 10:35:55 process.go:153: Running: ./cluster/log-dump/log-dump.sh /workspace/_artifacts
W0323 10:35:55.211] Trying to find master named 'ca-master'
W0323 10:35:55.212] Looking for address 'ca-master-ip'
... skipping 11 lines ...
W0323 10:37:02.279] scp: /var/log/glbc.log*: No such file or directory
W0323 10:37:02.279] scp: /var/log/cluster-autoscaler.log*: No such file or directory
W0323 10:37:02.357] scp: /var/log/kube-addon-manager.log*: No such file or directory
W0323 10:37:02.357] scp: /var/log/fluentd.log*: No such file or directory
W0323 10:37:02.362] scp: /var/log/kubelet.cov*: No such file or directory
W0323 10:37:02.362] scp: /var/log/startupscript.log*: No such file or directory
W0323 10:37:02.363] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
I0323 10:37:02.579] Dumping logs from nodes locally to '/workspace/_artifacts'
I0323 10:38:03.157] Detecting nodes in the cluster
I0323 10:38:03.157] Changing logfiles to be world-readable for download
I0323 10:38:07.589] Copying 'kube-proxy.log containers/konnectivity-agent-*.log fluentd.log node-problem-detector.log kubelet.cov startupscript.log' from ca-minion-group-1-tq84
W0323 10:38:08.820]
W0323 10:38:11.172] Specify --start=111270 in the next get-serial-port-output invocation to get only the new output starting from here.
W0323 10:38:11.173] scp: /var/log/containers/konnectivity-agent-*.log*: No such file or directory
W0323 10:38:11.173] scp: /var/log/fluentd.log*: No such file or directory
W0323 10:38:11.173] scp: /var/log/node-problem-detector.log*: No such file or directory
W0323 10:38:11.173] scp: /var/log/kubelet.cov*: No such file or directory
W0323 10:38:11.173] scp: /var/log/startupscript.log*: No such file or directory
W0323 10:38:11.178] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
W0323 10:38:17.978] INSTANCE_GROUPS=ca-minion-group ca-minion-group-1
I0323 10:38:19.453] Failures for ca-minion-group (if any):
I0323 10:38:22.351] Failures for ca-minion-group-1 (if any):
W0323 10:38:23.838] NODE_NAMES=ca-minion-group-1-tq84
W0323 10:38:23.838] 2023/03/23 10:38:23 process.go:155: Step './cluster/log-dump/log-dump.sh /workspace/_artifacts' finished in 2m28.824198928s
W0323 10:38:23.886] 2023/03/23 10:38:23 process.go:153: Running: ./hack/e2e-internal/e2e-down.sh
... skipping 51 lines ...
I0323 10:41:31.002] Cleared config for k8s-jkns-gci-autoscaling-migs_ca from /workspace/.kube/config
I0323 10:41:31.002] Done
W0323 10:41:31.017] W0323 10:41:30.998685 8381 loader.go:222] Config not found: /workspace/.kube/config
W0323 10:41:31.018] W0323 10:41:30.998843 8381 loader.go:222] Config not found: /workspace/.kube/config
W0323 10:41:31.018] 2023/03/23 10:41:31 process.go:155: Step './hack/e2e-internal/e2e-down.sh' finished in 3m7.164749049s
W0323 10:41:31.018] 2023/03/23 10:41:31 process.go:96: Saved XML output to /workspace/_artifacts/junit_runner.xml.
W0323 10:41:31.018] 2023/03/23 10:41:31 main.go:328: Something went wrong: starting e2e cluster: error during ./hack/e2e-internal/e2e-up.sh: exit status 2
W0323 10:41:31.018] Traceback (most recent call last):
W0323 10:41:31.018]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 723, in <module>
W0323 10:41:31.018]     main(parse_args())
W0323 10:41:31.018]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 569, in main
W0323 10:41:31.018]     mode.start(runner_args)
W0323 10:41:31.018]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 228, in start
W0323 10:41:31.018]     check_env(env, self.command, *args)
W0323 10:41:31.019]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 111, in check_env
W0323 10:41:31.019]     subprocess.check_call(cmd, env=env)
W0323 10:41:31.019]   File "/usr/lib/python3.9/subprocess.py", line 373, in check_call
W0323 10:41:31.019]     raise CalledProcessError(retcode, cmd)
W0323 10:41:31.019] subprocess.CalledProcessError: Command '('kubetest', '--dump=/workspace/_artifacts', '--gcp-service-account=/etc/service-account/service-account.json', '--up', '--down', '--test', '--provider=gce', '--cluster=ca', '--gcp-network=ca', '--extract=ci/latest', '--gcp-node-image=gci', '--gcp-nodes=3', '--gcp-project=k8s-jkns-gci-autoscaling-migs', '--gcp-zone=us-west1-b', '--runtime-config=scheduling.k8s.io/v1alpha1=true', '--test_args=--ginkgo.focus=\\[Feature:ClusterSizeAutoscalingScaleUp\\]|\\[Feature:ClusterSizeAutoscalingScaleDown\\] --ginkgo.skip=\\[Flaky\\] --minStartupPods=8', '--timeout=300m')' returned non-zero exit status 1.
E0323 10:41:31.019] Command failed
I0323 10:41:31.019] process 248 exited with code 1 after 14.1m
E0323 10:41:31.019] FAIL: ci-kubernetes-e2e-gci-gce-autoscaling-migs
I0323 10:41:31.020] Call: gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json
W0323 10:41:31.810] Activated service account credentials for: [pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com]
I0323 10:41:31.923] process 8390 exited with code 0 after 0.0m
I0323 10:41:31.923] Call: gcloud config get-value account
I0323 10:41:32.787] process 8400 exited with code 0 after 0.0m
I0323 10:41:32.788] Will upload results to gs://kubernetes-jenkins/logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
I0323 10:41:32.788] Upload result and artifacts...
I0323 10:41:32.788] Gubernator results at https://gubernator.k8s.io/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gce-autoscaling-migs/1638849895924240384
I0323 10:41:32.789] Call: gsutil ls gs://kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gce-autoscaling-migs/1638849895924240384/artifacts
W0323 10:41:34.204] CommandException: One or more URLs matched no objects.
E0323 10:41:34.411] Command failed
I0323 10:41:34.411] process 8410 exited with code 1 after 0.0m
W0323 10:41:34.411] Remote dir gs://kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gce-autoscaling-migs/1638849895924240384/artifacts not exist yet
I0323 10:41:34.412] Call: gsutil -m -q -o GSUtil:use_magicfile=True cp -r -c -z log,txt,xml /workspace/_artifacts gs://kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gce-autoscaling-migs/1638849895924240384/artifacts
I0323 10:41:36.916] process 8544 exited with code 0 after 0.0m
I0323 10:41:36.916] Call: git rev-parse HEAD
W0323 10:41:36.920] fatal: not a git repository (or any of the parent directories): .git
E0323 10:41:36.921] Command failed
I0323 10:41:36.921] process 9135 exited with code 128 after 0.0m
I0323 10:41:36.921] Call: git rev-parse HEAD
I0323 10:41:36.925] process 9136 exited with code 0 after 0.0m
I0323 10:41:36.925] Call: gsutil stat gs://kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gce-autoscaling-migs/jobResultsCache.json
I0323 10:41:38.507] process 9137 exited with code 0 after 0.0m
I0323 10:41:38.508] Call: gsutil -q cat 'gs://kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gce-autoscaling-migs/jobResultsCache.json#1679566584503279'
... skipping 8 lines ...
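The failure above comes from the "wait for cluster initialization" phase: the job prints dots while repeatedly probing the API server, and gives up after the 300-second budget when curl still gets "Connection refused". A minimal sketch of such a poll-until-deadline loop, assuming a hypothetical `wait_for_api` helper (this is illustrative shell, not the actual kube-up/validate-cluster code; the function name and arguments are invented):

```shell
#!/bin/sh
# Hypothetical sketch of the readiness loop seen in the log: run a check
# command until it succeeds or a time budget expires.
wait_for_api() {
  check_cmd=$1           # command that exits 0 once the API server answers
  timeout=${2:-300}      # seconds to wait; 300 mirrors the budget in the log
  interval=${3:-1}       # seconds between polls
  deadline=$(( $(date +%s) + timeout ))
  while [ "$(date +%s)" -lt "$deadline" ]; do
    if $check_cmd >/dev/null 2>&1; then
      echo "API reachable"
      return 0
    fi
    printf '.'           # the dots visible in the log output
    sleep "$interval"
  done
  echo "Cluster failed to initialize within ${timeout} seconds."
  return 1
}
```

In the job itself the check is effectively a curl against `https://34.105.34.62:443`; here any command that exits 0 on success can stand in for it.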