PR        DangerOnTheRanger: [WIP] Use correct container runtime when running E2E tests
Result    FAILURE
Tests     0 failed / 0 succeeded
Started   2021-07-20 23:24
Elapsed   17m30s
Revision  2455985b478ff7da2dc16cd6e47eed9f6504935c
Refs      253

No Test Failures!


Error lines from build-log.txt

... skipping 270 lines ...
Trying to find master named 'kt2-a3ab2375-e9b1-master'
Looking for address 'kt2-a3ab2375-e9b1-master-ip'
Using master: kt2-a3ab2375-e9b1-master (external IP: 34.69.193.215; internal IP: (not set))
Waiting up to 300 seconds for cluster initialization.

  This will continually check to see if the API for kubernetes is reachable.
  This may time out if there was some uncaught error during start up.
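The check described above is essentially a bounded poll of the API server until it answers or the 300-second budget runs out. The real logic lives in the kube-up.sh shell scripts; the Go sketch below is only illustrative, and the /healthz path, 2-second retry interval, and skipped certificate verification are assumptions rather than details taken from those scripts.

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // waitForAPIServer polls the API server's health endpoint until it returns
    // 200 OK or the overall timeout expires.
    func waitForAPIServer(endpoint string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // Certificate checks are skipped here because the probe runs before
            // client credentials are usable; an assumption for this sketch.
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(endpoint + "/healthz")
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
            }
            fmt.Print(".") // the dots visible in the build log
            time.Sleep(2 * time.Second)
        }
        return fmt.Errorf("cluster failed to initialize within %s", timeout)
    }

    func main() {
        // External master IP taken from the log output above.
        if err := waitForAPIServer("https://34.69.193.215", 300*time.Second); err != nil {
            fmt.Println(err)
        }
    }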

.......................................I0720 23:35:34.818128    2723 boskos.go:86] Sending heartbeat to Boskos
....Cluster failed to initialize within 300 seconds.
Last output from querying API server follows:
-----------------------------------------------------
* Expire in 0 ms for 6 (transfer 0x55f3179d1fb0)
* Expire in 5000 ms for 8 (transfer 0x55f3179d1fb0)
*   Trying 34.69.193.215...
* TCP_NODELAY set
... skipping 36 lines ...
scp: /var/log/kube-addon-manager.log*: No such file or directory
scp: /var/log/konnectivity-server.log*: No such file or directory
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/kubelet.cov*: No such file or directory
scp: /var/log/cloud-controller-manager.log*: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
Dumping logs from nodes locally to '/logs/artifacts/a3ab2375-e9b1-11eb-9c58-ae3de305ddc3/cluster-logs'
Detecting nodes in the cluster
Changing logfiles to be world-readable for download
Changing logfiles to be world-readable for download
Changing logfiles to be world-readable for download
Copying 'kube-proxy.log fluentd.log node-problem-detector.log kubelet.cov startupscript.log' from kt2-a3ab2375-e9b1-minion-group-98q2
... skipping 7 lines ...
Specify --start=64616 in the next get-serial-port-output invocation to get only the new output starting from here.
scp: /var/log/kube-proxy.log*: No such file or directory
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/node-problem-detector.log*: No such file or directory
scp: /var/log/kubelet.cov*: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
scp: /var/log/kube-proxy.log*: No such file or directory
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/node-problem-detector.log*: No such file or directory
scp: /var/log/kubelet.cov*: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
scp: /var/log/kube-proxy.log*: No such file or directory
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/node-problem-detector.log*: No such file or directory
scp: /var/log/kubelet.cov*: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
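The repeated scp errors above come from the per-node log copy: each node is asked for a fixed set of log globs, and any glob that matches nothing makes scp exit with status 1, which the dump step reports and then ignores. The actual collection is done by a shell log-dump script; the sketch below is a hypothetical Go driver for the same idea, with the destination directory abbreviated and only two of the globs shown.

    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        // Node names, project, and zone are taken from the log above.
        nodes := []string{
            "kt2-a3ab2375-e9b1-minion-group-98q2",
            "kt2-a3ab2375-e9b1-minion-group-nmr4",
            "kt2-a3ab2375-e9b1-minion-group-qw73",
        }
        for _, node := range nodes {
            // scp exits 1 when a pattern matches nothing on the node; treat that
            // as a warning and keep collecting from the remaining nodes.
            cmd := exec.Command("gcloud", "compute", "scp",
                "--project", "k8s-gce-reboot-1-5",
                "--zone", "us-central1-b",
                node+":/var/log/kube-proxy.log*",
                node+":/var/log/kubelet.log*",
                "/logs/artifacts/cluster-logs/"+node+"/", // must already exist
            )
            if out, err := cmd.CombinedOutput(); err != nil {
                log.Printf("log copy from %s failed (continuing): %v\n%s", node, err, out)
            }
        }
    }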
INSTANCE_GROUPS=kt2-a3ab2375-e9b1-minion-group
NODE_NAMES=kt2-a3ab2375-e9b1-minion-group-98q2 kt2-a3ab2375-e9b1-minion-group-nmr4 kt2-a3ab2375-e9b1-minion-group-qw73
Failures for kt2-a3ab2375-e9b1-minion-group (if any):
I0720 23:37:25.789123    2723 dumplogs.go:121] About to run: [/usr/local/bin/kubectl cluster-info dump]
I0720 23:37:25.789191    2723 local.go:42] ⚙️ /usr/local/bin/kubectl cluster-info dump
W0720 23:37:25.862305   28293 loader.go:221] Config not found: /logs/artifacts/a3ab2375-e9b1-11eb-9c58-ae3de305ddc3/kubetest2-kubeconfig
The connection to the server localhost:8080 was refused - did you specify the right host or port?
W0720 23:37:25.874187    2723 up.go:88] Dumping cluster logs at the end of Up() failed: failed to dump cluster info with kubectl: couldn't use kubectl to dump cluster info: exit status 1
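The dump fails because the kubeconfig the deployer would normally write never exists (the cluster never came up), so kubectl falls back to its default of localhost:8080 and gets connection refused. A minimal sketch of that kind of invocation, assuming the kubeconfig path from the log; it is not the actual dumplogs.go code:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func dumpClusterInfo(kubeconfig string) error {
        // Equivalent of the logged command: kubectl cluster-info dump
        cmd := exec.Command("kubectl", "cluster-info", "dump")
        cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
        cmd.Stdout = os.Stdout
        cmd.Stderr = os.Stderr
        if err := cmd.Run(); err != nil {
            // With no usable kubeconfig, kubectl defaults to localhost:8080,
            // which is what produces the "connection refused" line above.
            return fmt.Errorf("couldn't use kubectl to dump cluster info: %w", err)
        }
        return nil
    }

    func main() {
        err := dumpClusterInfo("/logs/artifacts/a3ab2375-e9b1-11eb-9c58-ae3de305ddc3/kubetest2-kubeconfig")
        if err != nil {
            fmt.Println(err)
        }
    }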
I0720 23:37:25.874243    2723 down.go:29] GCE deployer starting Down()
I0720 23:37:25.874252    2723 common.go:204] checking locally built kubectl ...
I0720 23:37:25.874271    2723 common.go:209] could not find locally built kubectl, checking existence of kubectl in $PATH ...
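The two lines above describe a simple fallback: prefer a locally built kubectl, otherwise use whatever is on $PATH. A minimal sketch of that lookup; the local output path is a placeholder assumption:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // findKubectl prefers a locally built binary and falls back to $PATH.
    func findKubectl(localPath string) (string, error) {
        if _, err := os.Stat(localPath); err == nil {
            return localPath, nil
        }
        return exec.LookPath("kubectl")
    }

    func main() {
        path, err := findKubectl("_output/bin/kubectl")
        fmt.Println(path, err)
    }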
I0720 23:37:25.874342    2723 down.go:43] About to run script at: /home/prow/go/src/k8s.io/cloud-provider-gcp/cluster/kube-down.sh
I0720 23:37:25.874361    2723 local.go:42] ⚙️ /home/prow/go/src/k8s.io/cloud-provider-gcp/cluster/kube-down.sh 
Bringing down cluster using provider: gce
... skipping 8 lines ...
Bringing down cluster
Deleting Managed Instance Group...
..Deleted [https://www.googleapis.com/compute/v1/projects/k8s-gce-reboot-1-5/zones/us-central1-b/instanceGroupManagers/kt2-a3ab2375-e9b1-minion-group].
done.
Deleted [https://www.googleapis.com/compute/v1/projects/k8s-gce-reboot-1-5/global/instanceTemplates/kt2-a3ab2375-e9b1-minion-template].
Deleted [https://www.googleapis.com/compute/v1/projects/k8s-gce-reboot-1-5/global/instanceTemplates/kt2-a3ab2375-e9b1-windows-node-template].
Failed to execute 'curl -s --cacert /etc/srv/kubernetes/pki/etcd-apiserver-ca.crt --cert /etc/srv/kubernetes/pki/etcd-apiserver-client.crt --key /etc/srv/kubernetes/pki/etcd-apiserver-client.key https://127.0.0.1:2379/v2/members/$(curl -s --cacert /etc/srv/kubernetes/pki/etcd-apiserver-ca.crt --cert /etc/srv/kubernetes/pki/etcd-apiserver-client.crt --key /etc/srv/kubernetes/pki/etcd-apiserver-client.key https://127.0.0.1:2379/v2/members -XGET | sed 's/{\"id/\n/g' | grep kt2-a3ab2375-e9b1-master\" | cut -f 3 -d \") -XDELETE -L 2>/dev/null' on kt2-a3ab2375-e9b1-master despite 5 attempts
Last attempt failed with: 
Removing etcd replica, name: kt2-a3ab2375-e9b1-master, port: 2379, result: 1
Failed to execute 'curl -s  http://127.0.0.1:4002/v2/members/$(curl -s  http://127.0.0.1:4002/v2/members -XGET | sed 's/{\"id/\n/g' | grep kt2-a3ab2375-e9b1-master\" | cut -f 3 -d \") -XDELETE -L 2>/dev/null' on kt2-a3ab2375-e9b1-master despite 5 attempts
Last attempt failed with: 
Removing etcd replica, name: kt2-a3ab2375-e9b1-master, port: 4002, result: 1
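Both failed teardown steps above retry the same shell pipeline: list the etcd v2 members, pull out the ID of the entry named after the master, and DELETE that member. A rough Go equivalent of that sequence is sketched below for illustration only; the real step runs curl over SSH on the master, and this version uses the plain-HTTP port 4002 endpoint from the second attempt (the 2379 variant would also need the client TLS material shown in the log).

    package main

    import (
        "encoding/json"
        "fmt"
        "net/http"
        "strings"
    )

    // membersResponse matches the shape of the etcd v2 /v2/members listing.
    type membersResponse struct {
        Members []struct {
            ID   string `json:"id"`
            Name string `json:"name"`
        } `json:"members"`
    }

    func removeEtcdMember(baseURL, masterName string) error {
        resp, err := http.Get(baseURL + "/v2/members")
        if err != nil {
            return err
        }
        defer resp.Body.Close()

        var list membersResponse
        if err := json.NewDecoder(resp.Body).Decode(&list); err != nil {
            return err
        }
        for _, m := range list.Members {
            if strings.Contains(m.Name, masterName) {
                // Delete the matching member, as the curl pipeline does.
                req, err := http.NewRequest(http.MethodDelete, baseURL+"/v2/members/"+m.ID, nil)
                if err != nil {
                    return err
                }
                delResp, err := http.DefaultClient.Do(req)
                if err != nil {
                    return err
                }
                delResp.Body.Close()
                return nil
            }
        }
        return fmt.Errorf("no etcd member matching %q", masterName)
    }

    func main() {
        fmt.Println(removeEtcdMember("http://127.0.0.1:4002", "kt2-a3ab2375-e9b1-master"))
    }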
Updated [https://www.googleapis.com/compute/v1/projects/k8s-gce-reboot-1-5/zones/us-central1-b/instances/kt2-a3ab2375-e9b1-master].
I0720 23:40:34.826042    2723 boskos.go:86] Sending heartbeat to Boskos
Deleted [https://www.googleapis.com/compute/v1/projects/k8s-gce-reboot-1-5/zones/us-central1-b/instances/kt2-a3ab2375-e9b1-master].
WARNING: The following filter keys were not present in any resource : name
Deleted [https://www.googleapis.com/compute/v1/projects/k8s-gce-reboot-1-5/global/firewalls/kt2-a3ab2375-e9b1-master-https].
... skipping 20 lines ...
W0720 23:42:13.754689   29256 loader.go:221] Config not found: /logs/artifacts/a3ab2375-e9b1-11eb-9c58-ae3de305ddc3/kubetest2-kubeconfig
Property "contexts.k8s-gce-reboot-1-5_kt2-a3ab2375-e9b1" unset.
Cleared config for k8s-gce-reboot-1-5_kt2-a3ab2375-e9b1 from /logs/artifacts/a3ab2375-e9b1-11eb-9c58-ae3de305ddc3/kubetest2-kubeconfig
Done
I0720 23:42:13.759161    2723 down.go:53] about to delete nodeport firewall rule
I0720 23:42:13.759252    2723 local.go:42] ⚙️ gcloud compute firewall-rules delete --project k8s-gce-reboot-1-5 kt2-a3ab2375-e9b1-minion-nodeports
ERROR: (gcloud.compute.firewall-rules.delete) Could not fetch resource:
 - The resource 'projects/k8s-gce-reboot-1-5/global/firewalls/kt2-a3ab2375-e9b1-minion-nodeports' was not found

W0720 23:42:14.899228    2723 firewall.go:62] failed to delete nodeports firewall rules: might be deleted already?
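The deployer deletes the nodeports firewall rule and, when gcloud reports the resource was not found, downgrades the failure to a warning so teardown stays idempotent. A hypothetical sketch of that handling, not the actual firewall.go implementation:

    package main

    import (
        "log"
        "os/exec"
        "strings"
    )

    func deleteNodeportsFirewall(project, rule string) error {
        // --quiet suppresses gcloud's confirmation prompt; an addition in this
        // sketch, not part of the command shown in the log.
        cmd := exec.Command("gcloud", "compute", "firewall-rules", "delete",
            "--project", project, "--quiet", rule)
        out, err := cmd.CombinedOutput()
        if err != nil && strings.Contains(string(out), "was not found") {
            // The rule may already be gone; treat not-found as success.
            log.Printf("failed to delete nodeports firewall rules: might be deleted already?")
            return nil
        }
        return err
    }

    func main() {
        if err := deleteNodeportsFirewall("k8s-gce-reboot-1-5", "kt2-a3ab2375-e9b1-minion-nodeports"); err != nil {
            log.Fatal(err)
        }
    }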
I0720 23:42:14.899262    2723 down.go:59] releasing boskos project
I0720 23:42:14.906899    2723 boskos.go:83] Boskos heartbeat func received signal to close
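The periodic "Sending heartbeat to Boskos" lines earlier in the log (about five minutes apart) and the final "received signal to close" above correspond to a background loop that keeps the leased project alive until the job releases it. An illustrative version of that pattern, assuming a ticker plus done-channel design; sendHeartbeat here is a stand-in, not the real Boskos client call:

    package main

    import (
        "log"
        "time"
    )

    func sendHeartbeat() { log.Println("Sending heartbeat to Boskos") }

    // heartbeatLoop ticks at the given interval and stops when done is closed,
    // which is the "signal to close" seen in the log.
    func heartbeatLoop(interval time.Duration, done <-chan struct{}) {
        ticker := time.NewTicker(interval)
        defer ticker.Stop()
        for {
            select {
            case <-ticker.C:
                sendHeartbeat()
            case <-done:
                log.Println("Boskos heartbeat func received signal to close")
                return
            }
        }
    }

    func main() {
        done := make(chan struct{})
        go heartbeatLoop(5*time.Minute, done)
        // ... run the job ...
        time.Sleep(time.Second)
        close(done)
        time.Sleep(100 * time.Millisecond) // let the goroutine log its exit
    }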
Error: error encountered during /home/prow/go/src/k8s.io/cloud-provider-gcp/cluster/kube-up.sh: exit status 2
+ EXIT_VALUE=1
+ set +o xtrace