PR sugangli: Firewall Pinhole Fix for ILB and NetLB
Result: ABORTED
Tests: 0 failed / 80 succeeded
Started: 2022-06-23 17:29
Elapsed: 1h1m
Revision: e2dc0bdb83e6bc59156792083dcf4b247de71b03
Refs: 109510
job-version: v1.25.0-alpha.1.111+5c01635d94c931
kubetest-version:
revision: v1.25.0-alpha.1.111+5c01635d94c931

No Test Failures!



Error lines from build-log.txt

... skipping 673 lines ...
NAME                     ZONE        MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP   STATUS
e2e-109510-95a39-master  us-east1-b  n1-standard-8               10.40.0.3    34.74.208.66  RUNNING
Setting e2e-109510-95a39-master's aliases to 'pods-default:10.64.0.0/24;10.40.0.2/32' (added 10.40.0.2)
Updating network interface [nic0] of instance [e2e-109510-95a39-master]...
..........done.
Updated [https://www.googleapis.com/compute/v1/projects/k8s-infra-e2e-boskos-scale-01/zones/us-east1-b/instances/e2e-109510-95a39-master].
Failed to execute 'sudo /bin/bash /home/kubernetes/bin/kube-master-internal-route.sh' on e2e-109510-95a39-master despite 5 attempts
Last attempt failed with: /bin/bash: /home/kubernetes/bin/kube-master-internal-route.sh: No such file or directory
Creating firewall...
..Created [https://www.googleapis.com/compute/v1/projects/k8s-infra-e2e-boskos-scale-01/global/firewalls/e2e-109510-95a39-minion-all].
NAME                         NETWORK           DIRECTION  PRIORITY  ALLOW                     DENY  DISABLED
e2e-109510-95a39-minion-all  e2e-109510-95a39  INGRESS    1000      tcp,udp,icmp,esp,ah,sctp        False
done.
Creating nodes.
... skipping 32 lines ...
Looking for address 'e2e-109510-95a39-master-ip'
Looking for address 'e2e-109510-95a39-master-internal-ip'
Using master: e2e-109510-95a39-master (external IP: 34.74.208.66; internal IP: 10.40.0.2)
Waiting up to 300 seconds for cluster initialization.

  This will continually check to see if the API for kubernetes is reachable.
  This may time out if there was some uncaught error during start up.

Kubernetes cluster created.
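The "continually check" wait described above can be sketched as a simple polling loop with a deadline; this is a minimal illustration of the pattern, not kube-up.sh's actual code (the function name, URL, and timings are assumptions):

```python
# Poll an API endpoint until it answers or a deadline passes — a sketch of
# the "waiting up to 300 seconds for cluster initialization" step above.
import time
import urllib.request
import urllib.error


def wait_for_api(url, timeout_s=300, interval_s=2):
    """Return True once `url` answers any HTTP response, False on timeout."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=5):
                return True  # API answered: cluster is reachable
        except urllib.error.HTTPError:
            return True  # an HTTP error response still means the server is up
        except (urllib.error.URLError, OSError):
            time.sleep(interval_s)  # not reachable yet; retry until deadline
    return False
```

As the log notes, this kind of loop times out (returns False here) if an uncaught error during startup keeps the API server from ever becoming reachable.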
Cluster "k8s-infra-e2e-boskos-scale-01_e2e-109510-95a39" set.
User "k8s-infra-e2e-boskos-scale-01_e2e-109510-95a39" set.
Context "k8s-infra-e2e-boskos-scale-01_e2e-109510-95a39" created.
Switched to context "k8s-infra-e2e-boskos-scale-01_e2e-109510-95a39".
... skipping 228 lines ...
e2e-109510-95a39-minion-group-z663   Ready                         <none>   57s   v1.25.0-alpha.1.111+5c01635d94c931
e2e-109510-95a39-minion-group-zdwm   Ready                         <none>   57s   v1.25.0-alpha.1.111+5c01635d94c931
e2e-109510-95a39-minion-heapster     Ready                         <none>   70s   v1.25.0-alpha.1.111+5c01635d94c931
Warning: v1 ComponentStatus is deprecated in v1.19+
Validate output:
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE                         ERROR
etcd-1               Healthy   {"health":"true","reason":""}   
controller-manager   Healthy   ok                              
etcd-0               Healthy   {"health":"true","reason":""}   
scheduler            Healthy   ok                              
Cluster validation encountered some problems, but cluster should be in working order
...ignoring non-fatal errors in validate-cluster
Done, listing cluster services:

Kubernetes control plane is running at https://34.74.208.66
GLBCDefaultBackend is running at https://34.74.208.66/api/v1/namespaces/kube-system/services/default-http-backend:http/proxy
CoreDNS is running at https://34.74.208.66/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://34.74.208.66/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy
... skipping 3022 lines ...
Specify --start=60668 in the next get-serial-port-output invocation to get only the new output starting from here.
scp: /var/log/cluster-autoscaler.log*: No such file or directory
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/kubelet.cov*: No such file or directory
scp: /var/log/cl2-**: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
Dumping logs from nodes to GCS directly at 'gs://sig-scalability-logs/pull-kubernetes-e2e-gce-100-performance/1540023846181015552' using logexporter
namespace/logexporter created
secret/google-service-account created
daemonset.apps/logexporter created
Listing marker files (gs://sig-scalability-logs/pull-kubernetes-e2e-gce-100-performance/1540023846181015552/logexported-nodes-registry) for successful nodes...
CommandException: One or more URLs matched no objects.
... skipping 391 lines ...