PR: gjkim42: kubelet: check taint/toleration before accepting pods
Result: ABORTED
Tests: 0 failed / 41 succeeded
Started: 2021-06-06 13:45
Elapsed: 50m50s
Revision: 71bf3ba2e4a6691235a896beebc90974eb9aac43
Refs: 101218
job-version: v1.22.0-alpha.2.454+9b6be0bf4b5b2c
kubetest-version:
revision: v1.22.0-alpha.2.454+9b6be0bf4b5b2c
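Per its title, the PR under test makes the kubelet check a node's taints against a pod's tolerations before accepting the pod. Below is a minimal Go sketch of that kind of matching, not the PR's actual implementation: podToleratesNodeTaints is a hypothetical helper written for illustration, while Toleration.ToleratesTaint is the real public helper from k8s.io/api/core/v1.

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

// podToleratesNodeTaints is a hypothetical helper illustrating the shape of
// the check: the pod passes only if every taint on the node is tolerated by
// at least one of the pod's tolerations. (The real kubelet admission logic
// may filter taints by effect; this sketch checks all of them.)
func podToleratesNodeTaints(pod *v1.Pod, taints []v1.Taint) bool {
	for i := range taints {
		taint := &taints[i]
		tolerated := false
		for j := range pod.Spec.Tolerations {
			// ToleratesTaint is part of the public k8s.io/api/core/v1 API.
			if pod.Spec.Tolerations[j].ToleratesTaint(taint) {
				tolerated = true
				break
			}
		}
		if !tolerated {
			return false
		}
	}
	return true
}

func main() {
	taint := v1.Taint{Key: "dedicated", Value: "gpu", Effect: v1.TaintEffectNoSchedule}
	pod := &v1.Pod{Spec: v1.PodSpec{Tolerations: []v1.Toleration{{
		Key:      "dedicated",
		Operator: v1.TolerationOpEqual,
		Value:    "gpu",
		Effect:   v1.TaintEffectNoSchedule,
	}}}}
	fmt.Println(podToleratesNodeTaints(pod, []v1.Taint{taint})) // prints: true
}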

No Test Failures!



Error lines from build-log.txt

... skipping 675 lines ...
Looking for address 'e2e-101218-95a39-master-ip'
Looking for address 'e2e-101218-95a39-master-internal-ip'
Using master: e2e-101218-95a39-master (external IP: 35.237.12.50; internal IP: 10.40.0.2)
Waiting up to 300 seconds for cluster initialization.

  This will continually check to see if the API for kubernetes is reachable.
  This may time out if there was some uncaught error during start up.

Kubernetes cluster created.
Cluster "k8s-infra-e2e-boskos-scale-23_e2e-101218-95a39" set.
User "k8s-infra-e2e-boskos-scale-23_e2e-101218-95a39" set.
Context "k8s-infra-e2e-boskos-scale-23_e2e-101218-95a39" created.
Switched to context "k8s-infra-e2e-boskos-scale-23_e2e-101218-95a39".
... skipping 227 lines ...
e2e-101218-95a39-minion-group-zx2f   Ready                         <none>   52s    v1.22.0-alpha.2.454+9b6be0bf4b5b2c
e2e-101218-95a39-minion-group-zz2z   Ready                         <none>   52s    v1.22.0-alpha.2.454+9b6be0bf4b5b2c
e2e-101218-95a39-minion-group-zz8b   Ready                         <none>   46s    v1.22.0-alpha.2.454+9b6be0bf4b5b2c
Warning: v1 ComponentStatus is deprecated in v1.19+
Validate output:
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok                  
etcd-1               Healthy   {"health":"true"}   
etcd-0               Healthy   {"health":"true"}   
controller-manager   Healthy   ok                  
Cluster validation encountered some problems, but cluster should be in working order
...ignoring non-fatal errors in validate-cluster
Done, listing cluster services:

Kubernetes control plane is running at https://35.237.12.50
GLBCDefaultBackend is running at https://35.237.12.50/api/v1/namespaces/kube-system/services/default-http-backend:http/proxy
CoreDNS is running at https://35.237.12.50/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://35.237.12.50/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy
... skipping 7994 lines ...
scp: /var/log/cluster-autoscaler.log*: No such file or directory
scp: /var/log/konnectivity-server.log*: No such file or directory
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/kubelet.cov*: No such file or directory
scp: /var/log/cl2-**: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
Dumping logs from nodes to GCS directly at 'gs://sig-scalability-logs/pull-kubernetes-e2e-gce-100-performance/1401535230094872576' using logexporter
Detecting nodes in the cluster
namespace/logexporter created
secret/google-service-account created
daemonset.apps/logexporter created
Listing marker files (gs://sig-scalability-logs/pull-kubernetes-e2e-gce-100-performance/1401535230094872576/logexported-nodes-registry) for successful nodes...
... skipping 367 lines ...