PR: jprzychodzen: [WIP][cluster] Improve CCM manifests
Result: FAILURE
Tests: 1 failed / 2 succeeded
Started: 2023-02-06 14:06
Elapsed: 46m33s
Revision: 3eb83c5a6d68959c6007ea09fbc30a7993db745e
Refs: 458
deployer-version:
kubetest-version: kubetest2 version

Test Failures


kubetest2 Up 31m7s

error encountered during /home/prow/go/src/k8s.io/cloud-provider-gcp/cluster/kube-up.sh: exit status 1
				from junit_runner.xml
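
The failed step is the kubetest2 gce deployer's Up phase, which shells out to cluster/kube-up.sh in the cloud-provider-gcp checkout. A rough local reproduction could look like the sketch below; apart from --up/--down, the flag names are assumptions, so confirm them with "kubetest2 gce --help" first:

    # Hedged sketch: bring the test cluster up and tear it down with the gce deployer.
    # --repo-root pointing at a cloud-provider-gcp checkout is an assumption.
    kubetest2 gce \
      --up \
      --down \
      --repo-root /home/prow/go/src/k8s.io/cloud-provider-gcp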




Error lines from build-log.txt

... skipping 328 lines ...
Trying to find master named 'kt2-5a46df6d-a627-master'
Looking for address 'kt2-5a46df6d-a627-master-ip'
Using master: kt2-5a46df6d-a627-master (external IP: 34.29.50.189; internal IP: (not set))
Waiting up to 300 seconds for cluster initialization.

  This will continually check to see if the API for kubernetes is reachable.
  This may time out if there was some uncaught error during start up.

............Kubernetes cluster created.
Cluster "kubernetes-gci-petset_kt2-5a46df6d-a627" set.
User "kubernetes-gci-petset_kt2-5a46df6d-a627" set.
Context "kubernetes-gci-petset_kt2-5a46df6d-a627" created.
Switched to context "kubernetes-gci-petset_kt2-5a46df6d-a627".
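
The wait shown above is a reachability poll against the new apiserver until a 300-second deadline. A minimal sketch of such a loop (the endpoint, interval, and use of curl are assumptions for illustration, not the actual kube-up.sh code):

    # Hedged sketch of an API-reachability wait loop; not the real kube-up.sh logic.
    for _ in $(seq 1 60); do                          # ~300s at 5s intervals
      if curl -ks --max-time 5 https://34.29.50.189/healthz >/dev/null; then
        echo "Kubernetes cluster created."
        break
      fi
      printf '.'
      sleep 5
    done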
... skipping 148 lines ...

Specify --start=76903 in the next get-serial-port-output invocation to get only the new output starting from here.
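
The --start hint above is gcloud's way of supporting incremental serial-console reads; re-fetching only the new output would look roughly like this (the instance and project come from the log, the zone is a placeholder):

    # Hedged sketch: fetch serial console output beginning at byte offset 76903.
    gcloud compute instances get-serial-port-output kt2-5a46df6d-a627-master \
      --project kubernetes-gci-petset \
      --zone <zone> \
      --start 76903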
scp: /var/log/cluster-autoscaler.log*: No such file or directory
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/kubelet.cov*: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
Dumping logs from nodes locally to '/logs/artifacts/cluster-logs'
Detecting nodes in the cluster
Changing logfiles to be world-readable for download
Changing logfiles to be world-readable for download
Changing logfiles to be world-readable for download
Copying 'kube-proxy.log containers/konnectivity-agent-*.log fluentd.log node-problem-detector.log kubelet.cov startupscript.log' from kt2-5a46df6d-a627-minion-group-jkjx
... skipping 7 lines ...
Specify --start=115341 in the next get-serial-port-output invocation to get only the new output starting from here.
scp: /var/log/containers/konnectivity-agent-*.log*: No such file or directory
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/node-problem-detector.log*: No such file or directory
scp: /var/log/kubelet.cov*: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
scp: /var/log/containers/konnectivity-agent-*.log*: No such file or directory
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/node-problem-detector.log*: No such file or directory
scp: /var/log/kubelet.cov*: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
scp: /var/log/containers/konnectivity-agent-*.log*: No such file or directory
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/node-problem-detector.log*: No such file or directory
scp: /var/log/kubelet.cov*: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
INSTANCE_GROUPS=kt2-5a46df6d-a627-minion-group
NODE_NAMES=kt2-5a46df6d-a627-minion-group-jkjx kt2-5a46df6d-a627-minion-group-vrlh kt2-5a46df6d-a627-minion-group-zrcz
Failures for kt2-5a46df6d-a627-minion-group (if any):
I0206 14:45:14.600082    3223 dumplogs.go:121] About to run: [/usr/local/bin/kubectl cluster-info dump]
I0206 14:45:14.600135    3223 local.go:42] ⚙️ /usr/local/bin/kubectl cluster-info dump
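
The dumplogs step above is a plain kubectl invocation; run against a kubeconfig that still points at the cluster, an equivalent command (the output directory here is an assumption for illustration) would be roughly:

    # Hedged sketch: write the cluster-info dump to files instead of stdout.
    kubectl cluster-info dump --output-directory=/logs/artifacts/cluster-info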
I0206 14:45:15.081352    3223 down.go:29] GCE deployer starting Down()
... skipping 44 lines ...
Property "users.kubernetes-gci-petset_kt2-5a46df6d-a627-basic-auth" unset.
Property "contexts.kubernetes-gci-petset_kt2-5a46df6d-a627" unset.
Cleared config for kubernetes-gci-petset_kt2-5a46df6d-a627 from /root/_rundir/5a46df6d-a627-11ed-852c-6e8d0fa4b599/kubetest2-kubeconfig
Done
I0206 14:52:12.480459    3223 down.go:53] about to delete nodeport firewall rule
I0206 14:52:12.480500    3223 local.go:42] ⚙️ gcloud compute firewall-rules delete --project kubernetes-gci-petset kt2-5a46df6d-a627-minion-nodeports
ERROR: (gcloud.compute.firewall-rules.delete) Could not fetch resource:
 - The resource 'projects/kubernetes-gci-petset/global/firewalls/kt2-5a46df6d-a627-minion-nodeports' was not found

W0206 14:52:13.972503    3223 firewall.go:62] failed to delete nodeports firewall rules: might be deleted already?
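
Whether the nodeports rule was really gone already (rather than the delete failing for some other reason) can be checked by listing what is left under the cluster's name prefix; the project and prefix come from the log above:

    # Hedged sketch: list any firewall rules still carrying this cluster's prefix.
    gcloud compute firewall-rules list \
      --project kubernetes-gci-petset \
      --filter="name~^kt2-5a46df6d-a627"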
I0206 14:52:13.972553    3223 down.go:59] releasing boskos project
I0206 14:52:13.979002    3223 boskos.go:83] Boskos heartbeat func received signal to close
Error: error encountered during /home/prow/go/src/k8s.io/cloud-provider-gcp/cluster/kube-up.sh: exit status 1
+ EXIT_VALUE=1
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
Cleaning up after docker
Stopping Docker: dockerProgram process in pidfile '/var/run/docker-ssd.pid', 1 process(es), refused to die.
... skipping 3 lines ...