PR #99597 (adtac): scheduler CC: add v1beta2 API, deprecate plugins
Result           ABORTED
Tests            0 failed / 41 succeeded
Started          2021-06-02 05:14
Elapsed          53m11s
Revision         d88a5ec06abb88a3faca25f810e039cbc92ff8c6
Refs             99597
job-version      v1.22.0-alpha.2.317+3cd151f993f033
kubetest-version
revision         v1.22.0-alpha.2.317+3cd151f993f033

No Test Failures!



Error lines from build-log.txt

... skipping 641 lines ...
NAME                    ZONE        MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP    STATUS
e2e-99597-95a39-master  us-east1-b  n1-standard-8               10.40.0.3    34.74.192.210  RUNNING
Setting e2e-99597-95a39-master's aliases to 'pods-default:10.64.0.0/24;10.40.0.2/32' (added 10.40.0.2)
Updating network interface [nic0] of instance [e2e-99597-95a39-master]...
................done.
Updated [https://www.googleapis.com/compute/v1/projects/k8s-infra-e2e-boskos-scale-09/zones/us-east1-b/instances/e2e-99597-95a39-master].
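The alias update above corresponds to a gcloud call roughly like the following; this is a sketch reconstructed from the log output, not the exact invocation kube-up uses (instance, zone, project, and alias values are taken from the log):

  # Set the pod CIDR alias range on the master's primary network interface.
  gcloud compute instances network-interfaces update e2e-99597-95a39-master \
    --zone us-east1-b \
    --project k8s-infra-e2e-boskos-scale-09 \
    --aliases "pods-default:10.64.0.0/24;10.40.0.2/32"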
Failed to execute 'sudo /bin/bash /home/kubernetes/bin/kube-master-internal-route.sh' on e2e-99597-95a39-master despite 5 attempts
Last attempt failed with: load pubkey "/workspace/.ssh/google_compute_engine": invalid format

/bin/bash: /home/kubernetes/bin/kube-master-internal-route.sh: No such file or directory
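Note that the 'load pubkey ... invalid format' line is an OpenSSH client warning; the actual failure is the missing script, as the 'No such file or directory' line shows. One hedged way to confirm that by hand, assuming SSH access to the boskos project (names taken from the log above):

  gcloud compute ssh e2e-99597-95a39-master \
    --zone us-east1-b \
    --project k8s-infra-e2e-boskos-scale-09 \
    --command 'ls -l /home/kubernetes/bin/kube-master-internal-route.sh'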
Creating firewall...
..Created [https://www.googleapis.com/compute/v1/projects/k8s-infra-e2e-boskos-scale-09/global/firewalls/e2e-99597-95a39-minion-all].
NAME                        NETWORK          DIRECTION  PRIORITY  ALLOW                     DENY  DISABLED
e2e-99597-95a39-minion-all  e2e-99597-95a39  INGRESS    1000      tcp,udp,icmp,esp,ah,sctp        False
done.
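The firewall rule above could be reproduced with a gcloud command along these lines; a sketch based on the table the log prints (source ranges are not shown in the log and are omitted here):

  gcloud compute firewall-rules create e2e-99597-95a39-minion-all \
    --project k8s-infra-e2e-boskos-scale-09 \
    --network e2e-99597-95a39 \
    --allow tcp,udp,icmp,esp,ah,sctp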
... skipping 22 lines ...
Looking for address 'e2e-99597-95a39-master-ip'
Looking for address 'e2e-99597-95a39-master-internal-ip'
Using master: e2e-99597-95a39-master (external IP: 34.74.192.210; internal IP: 10.40.0.2)
Waiting up to 300 seconds for cluster initialization.

  This will continually check to see if the API for kubernetes is reachable.
  This may time out if there was some uncaught error during start up.
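A minimal sketch of this kind of reachability probe against the master's external IP, assuming the apiserver's /healthz endpoint; the actual check in the kube-up script may differ:

  # Poll until the Kubernetes API answers (-k: the bootstrap cert is self-signed).
  until curl -ks --max-time 5 https://34.74.192.210/healthz >/dev/null; do
    sleep 2
  done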

Kubernetes cluster created.
Cluster "k8s-infra-e2e-boskos-scale-09_e2e-99597-95a39" set.
User "k8s-infra-e2e-boskos-scale-09_e2e-99597-95a39" set.
Context "k8s-infra-e2e-boskos-scale-09_e2e-99597-95a39" created.
Switched to context "k8s-infra-e2e-boskos-scale-09_e2e-99597-95a39".
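The four lines above match the output of the standard kubectl config commands; a hedged reconstruction (credential and certificate flags omitted, since the log does not show them):

  kubectl config set-cluster k8s-infra-e2e-boskos-scale-09_e2e-99597-95a39 \
    --server=https://34.74.192.210
  kubectl config set-credentials k8s-infra-e2e-boskos-scale-09_e2e-99597-95a39
  kubectl config set-context k8s-infra-e2e-boskos-scale-09_e2e-99597-95a39 \
    --cluster=k8s-infra-e2e-boskos-scale-09_e2e-99597-95a39 \
    --user=k8s-infra-e2e-boskos-scale-09_e2e-99597-95a39
  kubectl config use-context k8s-infra-e2e-boskos-scale-09_e2e-99597-95a39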
... skipping 228 lines ...
e2e-99597-95a39-minion-group-xzb9   Ready                         <none>   50s   v1.22.0-alpha.2.317+3cd151f993f033
e2e-99597-95a39-minion-group-z6np   Ready                         <none>   50s   v1.22.0-alpha.2.317+3cd151f993f033
e2e-99597-95a39-minion-group-z8kp   Ready                         <none>   53s   v1.22.0-alpha.2.317+3cd151f993f033
Warning: v1 ComponentStatus is deprecated in v1.19+
Validate output:
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE             ERROR
etcd-0               Healthy   {"health":"true"}   
scheduler            Healthy   ok                  
etcd-1               Healthy   {"health":"true"}   
controller-manager   Healthy   ok                  
Cluster validation encountered some problems, but cluster should be in working order
...ignoring non-fatal errors in validate-cluster
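As the warnings note, the v1 ComponentStatus API is deprecated since v1.19. An alternative health probe against the apiserver's own endpoints, shown purely as an illustration (this is not what validate-cluster runs):

  kubectl get --raw='/readyz?verbose'
  kubectl get --raw='/livez?verbose'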
Done, listing cluster services:

Kubernetes control plane is running at https://34.74.192.210
GLBCDefaultBackend is running at https://34.74.192.210/api/v1/namespaces/kube-system/services/default-http-backend:http/proxy
CoreDNS is running at https://34.74.192.210/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://34.74.192.210/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy
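This service listing has the same shape as kubectl cluster-info output; to reproduce it against this cluster, one could run (context name from the log above):

  kubectl cluster-info --context k8s-infra-e2e-boskos-scale-09_e2e-99597-95a39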
... skipping 8060 lines ...
scp: /var/log/cluster-autoscaler.log*: No such file or directory
scp: /var/log/konnectivity-server.log*: No such file or directory
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/kubelet.cov*: No such file or directory
scp: /var/log/cl2-**: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
Dumping logs from nodes to GCS directly at 'gs://sig-scalability-logs/pull-kubernetes-e2e-gce-100-performance/1399957236930842624' using logexporter
Detecting nodes in the cluster
namespace/logexporter created
secret/google-service-account created
daemonset.apps/logexporter created
Listing marker files (gs://sig-scalability-logs/pull-kubernetes-e2e-gce-100-performance/1399957236930842624/logexported-nodes-registry) for successful nodes...
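The marker files could also be listed by hand with gsutil, e.g. (bucket path taken from the log; assumes read access to the sig-scalability-logs bucket):

  gsutil ls gs://sig-scalability-logs/pull-kubernetes-e2e-gce-100-performance/1399957236930842624/logexported-nodes-registry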
... skipping 4 lines ...