PR: aojea: cluster: fix metrics-server deployment on CI jobs
Result: ABORTED
Tests: 0 failed / 0 succeeded
Started: 2021-07-21 21:51
Elapsed: 43m34s
Revision: fb2f0d29d026e5cc47372313ab5840050bafde54
Refs: 103713

No Test Failures!


Error lines from build-log.txt

... skipping 681 lines ...
Looking for address 'e2e-103713-95a39-master-ip'
Looking for address 'e2e-103713-95a39-master-internal-ip'
Using master: e2e-103713-95a39-master (external IP: 34.73.223.171; internal IP: 10.40.0.2)
Waiting up to 300 seconds for cluster initialization.

  This will continually check to see if the API for kubernetes is reachable.
  This may time out if there was some uncaught error during start up.

Kubernetes cluster created.
Cluster "k8s-infra-e2e-boskos-scale-26_e2e-103713-95a39" set.
User "k8s-infra-e2e-boskos-scale-26_e2e-103713-95a39" set.
Context "k8s-infra-e2e-boskos-scale-26_e2e-103713-95a39" created.
Switched to context "k8s-infra-e2e-boskos-scale-26_e2e-103713-95a39".
... skipping 228 lines ...
e2e-103713-95a39-minion-group-xqxb   Ready                         <none>   55s   v1.23.0-alpha.0.5+b9a647106916be
e2e-103713-95a39-minion-group-z348   Ready                         <none>   51s   v1.23.0-alpha.0.5+b9a647106916be
e2e-103713-95a39-minion-group-znsb   Ready                         <none>   54s   v1.23.0-alpha.0.5+b9a647106916be
Warning: v1 ComponentStatus is deprecated in v1.19+
Validate output:
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE                         ERROR
etcd-1               Healthy   {"health":"true","reason":""}   
scheduler            Healthy   ok                              
etcd-0               Healthy   {"health":"true","reason":""}   
controller-manager   Healthy   ok                              
Cluster validation encountered some problems, but cluster should be in working order
...ignoring non-fatal errors in validate-cluster
Done, listing cluster services:

Kubernetes control plane is running at https://34.73.223.171
GLBCDefaultBackend is running at https://34.73.223.171/api/v1/namespaces/kube-system/services/default-http-backend:http/proxy
CoreDNS is running at https://34.73.223.171/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://34.73.223.171/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy
... skipping 292 lines ...
I0721 22:19:14.266138  105311 prometheus.go:274] Exposing kube-apiserver metrics in the cluster
I0721 22:19:14.409136  105311 framework.go:239] Applying /home/prow/go/src/k8s.io/perf-tests/clusterloader2/pkg/prometheus/manifests/master-ip/master-endpoints.yaml
I0721 22:19:14.447468  105311 framework.go:239] Applying /home/prow/go/src/k8s.io/perf-tests/clusterloader2/pkg/prometheus/manifests/master-ip/master-service.yaml
I0721 22:19:14.484871  105311 framework.go:239] Applying /home/prow/go/src/k8s.io/perf-tests/clusterloader2/pkg/prometheus/manifests/master-ip/master-serviceMonitor.yaml
I0721 22:19:14.523304  105311 prometheus.go:353] Waiting for Prometheus stack to become healthy...
I0721 22:19:44.582282  105311 util.go:93] 4/7 targets are ready, example not ready target: {map[container:prometheus endpoint:web instance:10.64.27.3:9090 job:prometheus-k8s namespace:monitoring pod:prometheus-k8s-0 service:prometheus-k8s] unknown}
{"component":"entrypoint","file":"prow/entrypoint/run.go:169","func":"k8s.io/test-infra/prow/entrypoint.Options.ExecuteProcess","level":"error","msg":"Entrypoint received interrupt: terminated","severity":"error","time":"2021-07-21T22:19:56Z"}
++ early_exit_handler
++ '[' -n 186 ']'
++ kill -TERM 186
++ cleanup_dind
++ [[ true == \t\r\u\e ]]
++ echo 'Cleaning up after docker'
... skipping 4 lines ...