Result: ABORTED
Tests: 0 failed / 0 succeeded
Started: 2021-06-09 22:44
Elapsed: 38m27s
Revision: master

No Test Failures!


Error lines from build-log.txt

... skipping 346 lines ...
NAME                 ZONE           MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP     STATUS
kubemark-500-master  us-central1-f  n1-standard-4               10.40.0.3    35.202.234.251  RUNNING
Setting kubemark-500-master's aliases to 'pods-default:10.64.0.0/24;10.40.0.2/32' (added 10.40.0.2)
Updating network interface [nic0] of instance [kubemark-500-master]...
..................done.
Updated [https://www.googleapis.com/compute/v1/projects/k8s-jenkins-blocking-kubemark/zones/us-central1-f/instances/kubemark-500-master].
Failed to execute 'sudo /bin/bash /home/kubernetes/bin/kube-master-internal-route.sh' on kubemark-500-master despite 5 attempts
Last attempt failed with: /bin/bash: /home/kubernetes/bin/kube-master-internal-route.sh: No such file or directory
Creating nodes.
Using subnet kubemark-500-custom-subnet
Attempt 1 to create kubemark-500-minion-template
WARNING: You have selected a disk size of under [200GB]. This may result in poor I/O performance. For more information, see: https://developers.google.com/compute/docs/disks#performance.
Created [https://www.googleapis.com/compute/v1/projects/k8s-jenkins-blocking-kubemark/global/instanceTemplates/kubemark-500-minion-template].
NAME                          MACHINE_TYPE   PREEMPTIBLE  CREATION_TIMESTAMP
... skipping 16 lines ...
Looking for address 'kubemark-500-master-ip'
Looking for address 'kubemark-500-master-internal-ip'
Using master: kubemark-500-master (external IP: 35.202.234.251; internal IP: 10.40.0.2)
Waiting up to 300 seconds for cluster initialization.

  This will continually check to see if the API for kubernetes is reachable.
  This may time out if there was some uncaught error during start up.

Kubernetes cluster created.
Cluster "k8s-jenkins-blocking-kubemark_kubemark-500" set.
User "k8s-jenkins-blocking-kubemark_kubemark-500" set.
Context "k8s-jenkins-blocking-kubemark_kubemark-500" created.
Switched to context "k8s-jenkins-blocking-kubemark_kubemark-500".
... skipping 44 lines ...
kubemark-500-minion-group-m074   Ready                         <none>   43s   v1.22.0-alpha.3.49+90132378f082d8
kubemark-500-minion-group-tf50   Ready                         <none>   45s   v1.22.0-alpha.3.49+90132378f082d8
kubemark-500-minion-group-zmwb   Ready                         <none>   43s   v1.22.0-alpha.3.49+90132378f082d8
Warning: v1 ComponentStatus is deprecated in v1.19+
Validate output:
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE             ERROR
etcd-1               Healthy   {"health":"true"}   
scheduler            Healthy   ok                  
controller-manager   Healthy   ok                  
etcd-0               Healthy   {"health":"true"}   
Cluster validation encountered some problems, but cluster should be in working order
...ignoring non-fatal errors in validate-cluster
Done, listing cluster services:

Kubernetes control plane is running at https://35.202.234.251
GLBCDefaultBackend is running at https://35.202.234.251/api/v1/namespaces/kube-system/services/default-http-backend:http/proxy
CoreDNS is running at https://35.202.234.251/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://35.202.234.251/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy
... skipping 191 lines ...
NAME                          ZONE           MACHINE_TYPE    PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP  STATUS
kubemark-500-kubemark-master  us-central1-f  n1-standard-16               10.40.0.13   34.69.89.56  RUNNING
Setting kubemark-500-kubemark-master's aliases to '10.40.0.12/32' (added 10.40.0.12)
Updating network interface [nic0] of instance [kubemark-500-kubemark-master]...
....................done.
Updated [https://www.googleapis.com/compute/v1/projects/k8s-jenkins-blocking-kubemark/zones/us-central1-f/instances/kubemark-500-kubemark-master].
Failed to execute 'sudo /bin/bash /home/kubernetes/bin/kube-master-internal-route.sh' on kubemark-500-kubemark-master despite 5 attempts
Last attempt failed with: /bin/bash: /home/kubernetes/bin/kube-master-internal-route.sh: No such file or directory
Creating firewall...
..Created [https://www.googleapis.com/compute/v1/projects/k8s-jenkins-blocking-kubemark/global/firewalls/kubemark-500-kubemark-minion-all].
NAME                              NETWORK       DIRECTION  PRIORITY  ALLOW                     DENY  DISABLED
kubemark-500-kubemark-minion-all  kubemark-500  INGRESS    1000      tcp,udp,icmp,esp,ah,sctp        False
done.
Creating nodes.
... skipping 15 lines ...
Looking for address 'kubemark-500-kubemark-master-ip'
Looking for address 'kubemark-500-kubemark-master-internal-ip'
Using master: kubemark-500-kubemark-master (external IP: 34.69.89.56; internal IP: 10.40.0.12)
Waiting up to 300 seconds for cluster initialization.

  This will continually check to see if the API for kubernetes is reachable.
  This may time out if there was some uncaught error during start up.

.....Kubernetes cluster created.
Cluster "k8s-jenkins-blocking-kubemark_kubemark-500-kubemark" set.
User "k8s-jenkins-blocking-kubemark_kubemark-500-kubemark" set.
Context "k8s-jenkins-blocking-kubemark_kubemark-500-kubemark" created.
Switched to context "k8s-jenkins-blocking-kubemark_kubemark-500-kubemark".
... skipping 24 lines ...
No resources found
Found 0 node(s).
No resources found
Warning: v1 ComponentStatus is deprecated in v1.19+
Validate output:
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok                  
etcd-1               Healthy   {"health":"true"}   
etcd-0               Healthy   {"health":"true"}   
controller-manager   Healthy   ok                  
Cluster validation encountered some problems, but cluster should be in working order
...ignoring non-fatal errors in validate-cluster
Done, listing cluster services:

Kubernetes control plane is running at https://34.69.89.56

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

... skipping 731 lines ...
I0609 23:02:14.028526   47421 framework.go:239] Applying /home/prow/go/src/k8s.io/perf-tests/clusterloader2/pkg/prometheus/manifests/prometheus-windows-scrape-configs.yaml
I0609 23:02:14.035177   47421 prometheus.go:264] Exposing kube-apiserver metrics in the cluster
I0609 23:02:14.049261   47421 framework.go:239] Applying /home/prow/go/src/k8s.io/perf-tests/clusterloader2/pkg/prometheus/manifests/master-ip/master-endpoints.yaml
I0609 23:02:14.054923   47421 framework.go:239] Applying /home/prow/go/src/k8s.io/perf-tests/clusterloader2/pkg/prometheus/manifests/master-ip/master-service.yaml
I0609 23:02:14.059150   47421 framework.go:239] Applying /home/prow/go/src/k8s.io/perf-tests/clusterloader2/pkg/prometheus/manifests/master-ip/master-serviceMonitor.yaml
I0609 23:02:14.065562   47421 prometheus.go:343] Waiting for Prometheus stack to become healthy...
W0609 23:02:44.071948   47421 util.go:64] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090), response: "k8s\x00\n\f\n\x02v1\x12\x06Status\x12g\n\x06\n\x00\x12\x00\x1a\x00\x12\aFailure\x1a=no endpoints available for service \"http:prometheus-k8s:9090\"\"\x12ServiceUnavailable0\xf7\x03\x1a\x00\"\x00"
I0609 23:03:14.084713   47421 util.go:96] All 7 expected targets are ready
I0609 23:03:14.098383   47421 util.go:96] All 1 expected targets are ready
I0609 23:03:14.098427   47421 prometheus.go:218] Prometheus stack set up successfully
I0609 23:03:14.098456   47421 exec_service.go:62] Exec service: setting up service!
I0609 23:03:14.113405   47421 framework.go:239] Applying pkg/execservice/manifest/exec_deployment.yaml
I0609 23:03:14.128900   47421 reflector.go:175] Starting reflector *v1.Pod (0s) from *v1.PodStore: namespace(cluster-loader), labelSelector(feature = exec)
... skipping 28789 lines ...
I0609 23:21:43.103351   47421 wait_for_pods.go:94] WaitForControlledPodsRunning: namespace(test-yr17ca-1), labelSelector(group=access-tokens,name=account-77): Pods: 1 out of 1 created, 1 running (1 updated), 0 pending scheduled, 0 not scheduled, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0609 23:21:43.103468   47421 reflector.go:181] Stopping reflector *v1.Pod (0s) from *v1.PodStore: namespace(test-yr17ca-1), labelSelector(group=access-tokens,name=account-77)
I0609 23:21:43.109595   47421 wait_for_pods.go:94] WaitForControlledPodsRunning: namespace(test-yr17ca-1), labelSelector(group=access-tokens,name=account-78): Pods: 1 out of 1 created, 1 running (1 updated), 0 pending scheduled, 0 not scheduled, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0609 23:21:43.109692   47421 reflector.go:181] Stopping reflector *v1.Pod (0s) from *v1.PodStore: namespace(test-yr17ca-1), labelSelector(group=access-tokens,name=account-78)
I0609 23:21:43.117695   47421 wait_for_pods.go:94] WaitForControlledPodsRunning: namespace(test-yr17ca-1), labelSelector(group=access-tokens,name=account-79): Pods: 1 out of 1 created, 1 running (1 updated), 0 pending scheduled, 0 not scheduled, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0609 23:21:43.117842   47421 reflector.go:181] Stopping reflector *v1.Pod (0s) from *v1.PodStore: namespace(test-yr17ca-1), labelSelector(group=access-tokens,name=account-79)
{"component":"entrypoint","file":"prow/entrypoint/run.go:169","func":"k8s.io/test-infra/prow/entrypoint.Options.ExecuteProcess","level":"error","msg":"Entrypoint received interrupt: terminated","severity":"error","time":"2021-06-09T23:21:44Z"}
+++ early_exit_handler
+++ cleanup_dind
+++ [[ true == \t\r\u\e ]]
+++ echo 'Cleaning up after docker'
Cleaning up after docker
+++ docker ps -aq
... skipping 2 lines ...