PR | mhmxs: Use the same way to create tmp directories at hacks |
Result | FAILURE |
Tests | 1 failed / 7 succeeded |
Started | |
Elapsed | 33m46s |
Revision | |
Builder | 80de1514-b694-11ed-b119-ea623b75dbec |
Refs | master:015e2fa2 116044:245094b0 |
infra-commit | e6e6a8aa3 |
job-version | v1.27.0-alpha.2.288+62caddc01e4f34-dirty |
kubetest-version | v20230207-192d5afee3 |
repo | k8s.io/kubernetes |
repo-commit | 62caddc01e4f34a89cd7bbbc09b5d4bb56fb4df1 |
repos | {u'k8s.io/kubernetes': u'master:015e2fa20c2e08d78e5772c2d0eea00500843e4b,116044:245094b0b00b44beee0becde87614420a8cef368'} |
revision | v1.27.0-alpha.2.288+62caddc01e4f34-dirty |
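The PR under test, per its title, standardizes how the hack/ scripts create temporary directories. A minimal sketch of the usual mktemp-plus-trap pattern such scripts rely on (the variable name and cleanup trap here are illustrative assumptions, not code taken from the PR):

```shell
#!/usr/bin/env bash
# Illustrative sketch only: KUBE_TEMP and the EXIT trap are assumptions,
# not code copied from the PR.
set -o errexit -o nounset -o pipefail

# mktemp creates a fresh, race-free directory; the trap removes it
# no matter how the script exits.
KUBE_TEMP=$(mktemp -d "${TMPDIR:-/tmp}/kube-hack.XXXXXX")
trap 'rm -rf "${KUBE_TEMP}"' EXIT

echo "working in ${KUBE_TEMP}"
```

Using a single pattern like this across all hack/ scripts avoids each script inventing its own (possibly unsafe) temp-dir handling.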
error during /go/src/k8s.io/kubernetes/hack/local-up-cluster.sh: exit status 1
from junit_runner.xml
kubetest Build
kubetest Deferred TearDown
kubetest DumpClusterLogs (--up failed)
kubetest GetDeployer
kubetest Prepare
kubetest TearDown Previous
kubetest Timeout
... skipping 250 lines ...
W0227 12:07:09.881] 2023/02/27 12:07:09 process.go:153: Running: sh -c docker ps -aq | xargs docker rm -fv
I0227 12:07:15.919] 2200499feb98
W0227 12:07:16.020] 2023/02/27 12:07:15 process.go:155: Step 'sh -c docker ps -aq | xargs docker rm -fv' finished in 6.043182762s
I0227 12:07:16.120] make: Entering directory '/go/src/k8s.io/kubernetes'
W0227 12:07:20.996] 2023/02/27 12:07:15 process.go:153: Running: pkill -f cloud-controller-manager
W0227 12:07:20.996] 2023/02/27 12:07:15 process.go:155: Step 'pkill -f cloud-controller-manager' finished in 23.445172ms
W0227 12:07:20.996] 2023/02/27 12:07:15 local.go:189: unable to kill kubernetes process "cloud-controller-manager": error during pkill -f cloud-controller-manager: exit status 1
W0227 12:07:20.996] 2023/02/27 12:07:15 process.go:153: Running: pkill -f kube-controller-manager
W0227 12:07:20.996] 2023/02/27 12:07:15 process.go:155: Step 'pkill -f kube-controller-manager' finished in 3.163838ms
W0227 12:07:20.996] 2023/02/27 12:07:15 local.go:189: unable to kill kubernetes process "kube-controller-manager": error during pkill -f kube-controller-manager: exit status 1
W0227 12:07:20.996] 2023/02/27 12:07:15 process.go:153: Running: pkill -f kube-proxy
W0227 12:07:20.996] 2023/02/27 12:07:15 process.go:155: Step 'pkill -f kube-proxy' finished in 2.961493ms
W0227 12:07:20.996] 2023/02/27 12:07:15 local.go:189: unable to kill kubernetes process "kube-proxy": error during pkill -f kube-proxy: exit status 1
W0227 12:07:20.996] 2023/02/27 12:07:15 process.go:153: Running: pkill -f kube-scheduler
W0227 12:07:20.997] 2023/02/27 12:07:15 process.go:155: Step 'pkill -f kube-scheduler' finished in 2.890525ms
W0227 12:07:20.997] 2023/02/27 12:07:15 local.go:189: unable to kill kubernetes process "kube-scheduler": error during pkill -f kube-scheduler: exit status 1
W0227 12:07:20.997] 2023/02/27 12:07:15 process.go:153: Running: pkill -f kube-apiserver
W0227 12:07:20.997] 2023/02/27 12:07:15 process.go:155: Step 'pkill -f kube-apiserver' finished in 2.783075ms
W0227 12:07:20.997] 2023/02/27 12:07:15 local.go:189: unable to kill kubernetes process "kube-apiserver": error during pkill -f kube-apiserver: exit status 1
W0227 12:07:20.997] 2023/02/27 12:07:15 process.go:153: Running: pkill -f kubelet
W0227 12:07:20.997] 2023/02/27 12:07:15 process.go:155: Step 'pkill -f kubelet' finished in 2.736534ms
W0227 12:07:20.997] 2023/02/27 12:07:15 local.go:189: unable to kill kubernetes process "kubelet": error during pkill -f kubelet: exit status 1
W0227 12:07:20.997] 2023/02/27 12:07:15 process.go:153: Running: pkill etcd
W0227 12:07:20.997] 2023/02/27 12:07:15 process.go:155: Step 'pkill etcd' finished in 2.907831ms
W0227 12:07:20.997] 2023/02/27 12:07:15 local.go:193: unable to kill etcd: error during pkill etcd: exit status 1
W0227 12:07:20.998] 2023/02/27 12:07:15 local.go:107: using 172.17.0.1 for API_HOST_IP, HOSTNAME_OVERRIDE, KUBELET_HOST
W0227 12:07:20.998] 2023/02/27 12:07:15 process.go:153: Running: /go/src/k8s.io/kubernetes/hack/local-up-cluster.sh
W0227 12:07:20.998] go version go1.20.1 linux/amd64
I0227 12:07:22.726] +++ [0227 12:07:22] Building go targets for linux/amd64
I0227 12:07:22.756]     k8s.io/kubernetes/cmd/kubectl (static)
I0227 12:07:22.764]     k8s.io/kubernetes/cmd/kube-apiserver (static)
... skipping 231 lines ...
W0227 12:20:48.586] 2023/02/27 12:20:44 [INFO] encoded CSR
W0227 12:20:48.586] 2023/02/27 12:20:44 [INFO] signed certificate with serial number 191510510904626786689863370300551217329516490720
W0227 12:20:48.586] 2023/02/27 12:20:44 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
W0227 12:20:48.586] websites. For more information see the Baseline Requirements for the Issuance and Management
W0227 12:20:48.586] of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
W0227 12:20:48.587] specifically, section 10.2.3 ("Information Requirements").
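The repeated "unable to kill kubernetes process ...: exit status 1" warnings above come from kubetest's pre-run cleanup: pkill exits 0 when it signalled at least one process and 1 when nothing matched, so on a freshly cleaned builder these warnings are expected noise rather than real failures. A small demonstration (the pattern is built at runtime so this script's own command line can never match it):

```shell
#!/usr/bin/env bash
# pkill exits 1 when no process matches the pattern. kubetest logs
# that as "unable to kill ...", even though nothing went wrong.
pattern="no-such-process-$(date +%s)"   # runtime-built, so nothing matches it
pkill -f "$pattern"
echo "pkill exit status: $?"            # 1: no process matched
```

This is why the same warnings recur in the teardown phase later in the log: most of the components were already dead by the time cleanup ran.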
W0227 12:20:48.587] error: current-context must exist in order to minify
W0227 12:20:48.587] error: current-context must exist in order to minify
W0227 12:20:48.681] chown: invalid user: ‘prow’
W0227 12:20:48.681] error: current-context must exist in order to minify
W0227 12:20:48.782] error: current-context must exist in order to minify
W0227 12:20:48.881] error: couldn't read version from server: Get "http://localhost:8080/version?timeout=32s": dial tcp [::1]:8080: connect: connection refused
W0227 12:20:48.973] error: couldn't read version from server: Get "http://localhost:8080/version?timeout=32s": dial tcp [::1]:8080: connect: connection refused
I0227 12:20:49.073] Cluster "local-up-cluster" set.
I0227 12:20:49.074] use 'kubectl --kubeconfig=/var/run/kubernetes/admin-kube-aggregator.kubeconfig' to use the aggregated API server
W0227 12:20:49.182] error: no openapi getter
W0227 12:20:49.187] 2023/02/27 12:20:49 process.go:155: Step '/go/src/k8s.io/kubernetes/hack/local-up-cluster.sh' finished in 13m33.216818496s
W0227 12:20:49.188] 2023/02/27 12:20:49 process.go:153: Running: cp -r /tmp/kubetest-local4012626909 /workspace/_artifacts
W0227 12:20:49.191] 2023/02/27 12:20:49 process.go:155: Step 'cp -r /tmp/kubetest-local4012626909 /workspace/_artifacts' finished in 3.736589ms
W0227 12:20:49.192] 2023/02/27 12:20:49 process.go:153: Running: sh -c docker ps -aq | xargs docker rm -fv
I0227 12:20:49.292] Something is wrong with your DNS input
I0227 12:20:49.292] # Warning: This is a file generated from the base underscore template file: coredns.yaml.base
... skipping 174 lines ...
W0227 12:20:49.636] See 'docker rm --help'.
W0227 12:20:49.637]
W0227 12:20:49.637] Usage:  docker rm [OPTIONS] CONTAINER [CONTAINER...]
W0227 12:20:49.637]
W0227 12:20:49.637] Remove one or more containers
W0227 12:20:49.637] 2023/02/27 12:20:49 process.go:155: Step 'sh -c docker ps -aq | xargs docker rm -fv' finished in 131.034271ms
W0227 12:20:49.637] 2023/02/27 12:20:49 local.go:181: unable to cleanup containers in docker: error during sh -c docker ps -aq | xargs docker rm -fv: exit status 123
W0227 12:20:49.637] 2023/02/27 12:20:49 process.go:153: Running: pkill -f cloud-controller-manager
W0227 12:20:49.637] 2023/02/27 12:20:49 process.go:155: Step 'pkill -f cloud-controller-manager' finished in 4.018051ms
W0227 12:20:49.637] 2023/02/27 12:20:49 local.go:189: unable to kill kubernetes process "cloud-controller-manager": error during pkill -f cloud-controller-manager: exit status 1
W0227 12:20:49.637] 2023/02/27 12:20:49 process.go:153: Running: pkill -f kube-controller-manager
W0227 12:20:49.637] 2023/02/27 12:20:49 process.go:155: Step 'pkill -f kube-controller-manager' finished in 4.199905ms
W0227 12:20:49.637] 2023/02/27 12:20:49 process.go:153: Running: pkill -f kube-proxy
W0227 12:20:49.637] 2023/02/27 12:20:49 process.go:155: Step 'pkill -f kube-proxy' finished in 4.056823ms
W0227 12:20:49.638] 2023/02/27 12:20:49 local.go:189: unable to kill kubernetes process "kube-proxy": error during pkill -f kube-proxy: exit status 1
W0227 12:20:49.638] 2023/02/27 12:20:49 process.go:153: Running: pkill -f kube-scheduler
W0227 12:20:49.638] 2023/02/27 12:20:49 process.go:155: Step 'pkill -f kube-scheduler' finished in 3.628793ms
W0227 12:20:49.638] 2023/02/27 12:20:49 process.go:153: Running: pkill -f kube-apiserver
W0227 12:20:49.638] 2023/02/27 12:20:49 process.go:155: Step 'pkill -f kube-apiserver' finished in 5.401045ms
W0227 12:20:49.638] 2023/02/27 12:20:49 process.go:153: Running: pkill -f kubelet
W0227 12:20:49.638] 2023/02/27 12:20:49 process.go:155: Step 'pkill -f kubelet' finished in 4.474291ms
W0227 12:20:49.638] 2023/02/27 12:20:49 process.go:153: Running: pkill etcd
W0227 12:20:49.638] 2023/02/27 12:20:49 process.go:155: Step 'pkill etcd' finished in 3.527429ms
W0227 12:20:49.638] 2023/02/27 12:20:49 process.go:96: Saved XML output to /workspace/_artifacts/junit_runner.xml.
W0227 12:20:49.639] 2023/02/27 12:20:49 process.go:153: Running: bash -c . hack/lib/version.sh && KUBE_ROOT=. kube::version::get_version_vars && echo "${KUBE_GIT_VERSION-}"
W0227 12:20:49.639] 2023/02/27 12:20:49 process.go:155: Step 'bash -c . hack/lib/version.sh && KUBE_ROOT=. kube::version::get_version_vars && echo "${KUBE_GIT_VERSION-}"' finished in 273.598953ms
W0227 12:20:49.641] 2023/02/27 12:20:49 main.go:328: Something went wrong: starting e2e cluster: error during /go/src/k8s.io/kubernetes/hack/local-up-cluster.sh: exit status 1
W0227 12:20:49.641] Traceback (most recent call last):
W0227 12:20:49.644]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 723, in <module>
W0227 12:20:49.644]     main(parse_args())
W0227 12:20:49.645]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 569, in main
W0227 12:20:49.646]     mode.start(runner_args)
W0227 12:20:49.646]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 228, in start
... skipping 48 lines ...
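The "exit status 123" from `sh -c docker ps -aq | xargs docker rm -fv` above is a side effect of an already-empty container list: with no input, plain GNU xargs still runs `docker rm` once with no arguments, `docker rm` fails because it requires at least one container, and xargs reports 123 whenever an invocation exits with status 1-125. GNU xargs' `-r` (`--no-run-if-empty`) flag avoids this by skipping the command entirely; a minimal reproduction using `false` as a stand-in for the failing command:

```shell
#!/usr/bin/env bash
# With empty input, GNU xargs still runs the command once; `false`
# fails, so xargs exits 123 (an invocation exited with status 1-125).
printf '' | xargs false
echo "without -r: $?"    # 123

# With -r, GNU xargs skips the command when stdin is empty.
printf '' | xargs -r false
echo "with -r: $?"       # 0
```

So the teardown warning is cosmetic here: there were simply no containers left to remove, and `xargs -r` (on GNU systems) would silence it.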
I0227 12:20:49.668]   - name: dns-tcp
I0227 12:20:49.668]     port: 53
I0227 12:20:49.668]     protocol: TCP
I0227 12:20:49.668]   - name: metrics
I0227 12:20:49.668]     port: 9153
I0227 12:20:49.668]     protocol: TCP
E0227 12:20:49.668] Command failed
I0227 12:20:49.668] process 669 exited with code 1 after 31.9m
E0227 12:20:49.668] FAIL: pull-kubernetes-local-e2e
I0227 12:20:49.669] Call:  gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json
W0227 12:20:51.648] Activated service account credentials for: [pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com]
I0227 12:20:51.860] process 149379 exited with code 0 after 0.0m
I0227 12:20:51.861] Call:  gcloud config get-value account
I0227 12:20:52.880] process 149389 exited with code 0 after 0.0m
I0227 12:20:52.880] Will upload results to gs://kubernetes-jenkins/pr-logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
I0227 12:20:52.880] Upload result and artifacts...
I0227 12:20:52.880] Gubernator results at https://gubernator.k8s.io/build/kubernetes-jenkins/pr-logs/pull/116044/pull-kubernetes-local-e2e/1630172749123031040
I0227 12:20:52.881] Call:  gsutil ls gs://kubernetes-jenkins/pr-logs/pull/116044/pull-kubernetes-local-e2e/1630172749123031040/artifacts
W0227 12:20:55.052] CommandException: One or more URLs matched no objects.
E0227 12:20:55.429] Command failed
I0227 12:20:55.429] process 149399 exited with code 1 after 0.0m
W0227 12:20:55.430] Remote dir gs://kubernetes-jenkins/pr-logs/pull/116044/pull-kubernetes-local-e2e/1630172749123031040/artifacts not exist yet
I0227 12:20:55.430] Call:  gsutil -m -q -o GSUtil:use_magicfile=True cp -r -c -z log,txt,xml /workspace/_artifacts gs://kubernetes-jenkins/pr-logs/pull/116044/pull-kubernetes-local-e2e/1630172749123031040/artifacts
I0227 12:20:58.690] process 149533 exited with code 0 after 0.1m
I0227 12:20:58.691] Call:  git rev-parse HEAD
I0227 12:20:58.696] process 150052 exited with code 0 after 0.0m
... skipping 21 lines ...