PR | bobbypage: Update pod deletion logic to follow pod gc controller
Result | ABORTED
Tests | 0 failed / 0 succeeded
Started |
Elapsed | 33m15s
Revision | 1db155ce3cf6dd4851ead64de0d7e2d3571ce31c
Refs | 416
deployer-version |
kubetest-version | kubetest2 version
tester-version |
... skipping 330 lines ...
Trying to find master named 'kt2-91c806f0-93b3-master'
Looking for address 'kt2-91c806f0-93b3-master-ip'
Using master: kt2-91c806f0-93b3-master (external IP: 34.135.141.9; internal IP: (not set))
Waiting up to 300 seconds for cluster initialization.

  This will continually check to see if the API for kubernetes is reachable.
  This may time out if there was some uncaught error during start up.

............Kubernetes cluster created.
Cluster "k8s-gce-cvm-1-6-m-ctl-skw-srl_kt2-91c806f0-93b3" set.
User "k8s-gce-cvm-1-6-m-ctl-skw-srl_kt2-91c806f0-93b3" set.
Context "k8s-gce-cvm-1-6-m-ctl-skw-srl_kt2-91c806f0-93b3" created.
Switched to context "k8s-gce-cvm-1-6-m-ctl-skw-srl_kt2-91c806f0-93b3".
... skipping 21 lines ...
kt2-91c806f0-93b3-minion-group-6rw4   Ready   <none>   7s    v1.26.0
kt2-91c806f0-93b3-minion-group-qzn2   Ready   <none>   8s    v1.26.0
kt2-91c806f0-93b3-minion-group-rcf5   Ready   <none>   15s   v1.26.0
Warning: v1 ComponentStatus is deprecated in v1.19+
Validate output:
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE                         ERROR
etcd-1               Healthy   {"health":"true","reason":""}
etcd-0               Healthy   {"health":"true","reason":""}
controller-manager   Healthy   ok
scheduler            Healthy   ok
Cluster validation succeeded
Done, listing cluster services:
... skipping 39 lines ...
Specify --start=76436 in the next get-serial-port-output invocation to get only the new output starting from here.
scp: /var/log/cluster-autoscaler.log*: No such file or directory
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/kubelet.cov*: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
Dumping logs from nodes locally to '/logs/artifacts/cluster-logs'
Detecting nodes in the cluster
I0114 02:47:43.659511    3261 boskos.go:86] Sending heartbeat to Boskos
Changing logfiles to be world-readable for download
Changing logfiles to be world-readable for download
Changing logfiles to be world-readable for download
... skipping 4 lines ...
gcloud compute ssh kt2-91c806f0-93b3-minion-group-6rw4 --project=k8s-gce-cvm-1-6-m-ctl-skw-srl --zone=us-central1-b --troubleshoot
Or, to investigate an IAP tunneling issue:
gcloud compute ssh kt2-91c806f0-93b3-minion-group-6rw4 --project=k8s-gce-cvm-1-6-m-ctl-skw-srl --zone=us-central1-b --troubleshoot --tunnel-through-iap
ERROR: (gcloud.compute.ssh) [/usr/bin/ssh] exited with return code [255].
Copying 'kube-proxy.log containers/konnectivity-agent-*.log fluentd.log node-problem-detector.log kubelet.cov startupscript.log' from kt2-91c806f0-93b3-minion-group-qzn2
Copying 'kube-proxy.log containers/konnectivity-agent-*.log fluentd.log node-problem-detector.log kubelet.cov startupscript.log' from kt2-91c806f0-93b3-minion-group-rcf5
Specify --start=116846 in the next get-serial-port-output invocation to get only the new output starting from here.
Specify --start=117612 in the next get-serial-port-output invocation to get only the new output starting from here.
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/node-problem-detector.log*: No such file or directory
scp: /var/log/kubelet.cov*: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/node-problem-detector.log*: No such file or directory
scp: /var/log/kubelet.cov*: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
Recommendation: To check for possible causes of SSH connectivity issues and get recommendations, rerun the ssh command with the --troubleshoot option.
gcloud compute ssh kt2-91c806f0-93b3-minion-group-6rw4 --project=k8s-gce-cvm-1-6-m-ctl-skw-srl --zone=us-central1-b --troubleshoot
Or, to investigate an IAP tunneling issue:
gcloud compute ssh kt2-91c806f0-93b3-minion-group-6rw4 --project=k8s-gce-cvm-1-6-m-ctl-skw-srl --zone=us-central1-b --troubleshoot --tunnel-through-iap
ERROR: (gcloud.compute.ssh) [/usr/bin/ssh] exited with return code [255].
Copying 'kube-proxy.log containers/konnectivity-agent-*.log fluentd.log node-problem-detector.log kubelet.cov startupscript.log' from kt2-91c806f0-93b3-minion-group-6rw4
Specify --start=117952 in the next get-serial-port-output invocation to get only the new output starting from here.
ssh_exchange_identification: Connection closed by remote host
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
INSTANCE_GROUPS=kt2-91c806f0-93b3-minion-group
NODE_NAMES=kt2-91c806f0-93b3-minion-group-6rw4 kt2-91c806f0-93b3-minion-group-qzn2 kt2-91c806f0-93b3-minion-group-rcf5
Failures for kt2-91c806f0-93b3-minion-group (if any):
I0114 02:49:08.609277    3261 dumplogs.go:121] About to run: [/usr/local/bin/kubectl cluster-info dump]
I0114 02:49:08.609321    3261 local.go:42] ⚙️ /usr/local/bin/kubectl cluster-info dump
I0114 02:49:09.503062    3261 local.go:42] ⚙️ /home/prow/go/bin/kubetest2-tester-ginkgo --test-package-version=v1.26.0 --parallel=30 --test-args=--minStartupPods=8 --ginkgo.flakeAttempts=3 --skip-regex=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]
... skipping 1837 lines ...
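The kubetest2-tester-ginkgo invocation logged above passes --skip-regex=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\], which drops any e2e spec whose full name matches that extended regular expression. A minimal sketch of the same filtering with grep -E, using hypothetical test names (only the skip regex itself is taken from this run):

```shell
# Skip regex from the tester invocation above.
SKIP_REGEX='\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]'

# Hypothetical test names, not from this run; grep -Ev keeps only the
# names the regex does NOT match, mirroring how --skip-regex selects specs.
printf '%s\n' \
  '[sig-storage] CSI Volumes should work' \
  '[sig-node] Pods [Serial] restart kubelet' \
  '[sig-apps] Deployment [Feature:Foo] scales up' \
| grep -Ev "$SKIP_REGEX"
# Only the first name survives the filter.
```

Tagged specs such as [Serial] and [Feature:...] are excluded here because this job runs the parallel conformance-style suite, where serial and feature-gated tests would interfere with each other.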
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: emptydir]
  test/e2e/storage/in_tree_volumes.go:85
    [Testpattern: Dynamic PV (delayed binding)] topology [BeforeEach]
    test/e2e/storage/framework/testsuite.go:51
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies
      test/e2e/storage/testsuites/topology.go:191

      Begin Captured GinkgoWriter Output >>
        [BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
          test/e2e/storage/framework/testsuite.go:51
        Jan 14 02:49:46.960: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 192 lines ...
[sig-storage] CSI Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: csi-hostpath]
  test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (delayed binding)] topology [BeforeEach]
    test/e2e/storage/framework/testsuite.go:51
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies
      test/e2e/storage/testsuites/topology.go:191

      Begin Captured GinkgoWriter Output >>
        [BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
          test/e2e/storage/framework/testsuite.go:51
        Jan 14 02:49:48.669: INFO: Driver "csi-hostpath" does not support topology - skipping
... skipping 567 lines ...