Result: FAILURE
Tests: 1 failed / 7 succeeded
Started: 2023-03-02 17:03
Elapsed: 4m4s
Revision: master
job-version: v1.27.0-alpha.2.454+b6d102d634d357
kubetest-version: v20230222-b5208facd4
revision: v1.27.0-alpha.2.454+b6d102d634d357

Test Failures


kubetest Up 14s

error during ./hack/e2e-internal/e2e-up.sh: exit status 1
				from junit_runner.xml




Error lines from build-log.txt

... skipping 259 lines ...
Network Project: k8s-infra-e2e-scale-5k-project
Zone: us-east1-b
Dumping logs temporarily to '/tmp/tmp.xMmNBRWcYf/logs'. Will upload to 'gs://k8s-infra-scalability-tests-logs/ci-kubernetes-e2e-gce-scale-performance/1631338991087259648' later.
Dumping logs from master locally to '/tmp/tmp.xMmNBRWcYf/logs'
Trying to find master named 'gce-scale-cluster-master'
Looking for address 'gce-scale-cluster-master-ip'
ERROR: (gcloud.compute.addresses.describe) Could not fetch resource:
 - The resource 'projects/k8s-infra-e2e-scale-5k-project/regions/us-east1/addresses/gce-scale-cluster-master-ip' was not found

Could not detect Kubernetes master node.  Make sure you've launched a cluster with 'kube-up.sh'
Master not detected. Is the cluster up?
Dumping logs from nodes to GCS directly at 'gs://k8s-infra-scalability-tests-logs/ci-kubernetes-e2e-gce-scale-performance/1631338991087259648' using logexporter
The connection to the server localhost:8080 was refused - did you specify the right host or port?
Failed to create logexporter daemonset.. falling back to logdump through SSH
E0302 17:05:15.347214    3566 memcache.go:238] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
E0302 17:05:15.347908    3566 memcache.go:238] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
E0302 17:05:15.349563    3566 memcache.go:238] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
E0302 17:05:15.351133    3566 memcache.go:238] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
E0302 17:05:15.352741    3566 memcache.go:238] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
The connection to the server localhost:8080 was refused - did you specify the right host or port?
Dumping logs for nodes provided as args to dump_nodes() function
Changing logfiles to be world-readable for download
ERROR: (gcloud.compute.ssh) Could not fetch resource:
 - The resource 'projects/k8s-infra-e2e-scale-5k-project/zones/us-east1-b/instances/gce-scale-cluster-minion-heapster' was not found

ERROR: (gcloud.compute.ssh) Could not fetch resource:
 - The resource 'projects/k8s-infra-e2e-scale-5k-project/zones/us-east1-b/instances/gce-scale-cluster-minion-heapster' was not found

ERROR: (gcloud.compute.ssh) Could not fetch resource:
 - The resource 'projects/k8s-infra-e2e-scale-5k-project/zones/us-east1-b/instances/gce-scale-cluster-minion-heapster' was not found

ERROR: (gcloud.compute.ssh) Could not fetch resource:
 - The resource 'projects/k8s-infra-e2e-scale-5k-project/zones/us-east1-b/instances/gce-scale-cluster-minion-heapster' was not found

ERROR: (gcloud.compute.ssh) Could not fetch resource:
 - The resource 'projects/k8s-infra-e2e-scale-5k-project/zones/us-east1-b/instances/gce-scale-cluster-minion-heapster' was not found

ERROR: (gcloud.compute.ssh) Could not fetch resource:
 - The resource 'projects/k8s-infra-e2e-scale-5k-project/zones/us-east1-b/instances/gce-scale-cluster-minion-heapster' was not found

Copying 'kube-proxy.log containers/konnectivity-agent-*.log fluentd.log node-problem-detector.log kubelet.cov cl2-* startupscript.log kern.log docker/log kubelet.log supervisor/supervisord.log supervisor/kubelet-stdout.log supervisor/kubelet-stderr.log supervisor/docker-stdout.log supervisor/docker-stderr.log' from gce-scale-cluster-minion-heapster
ERROR: (gcloud.compute.instances.get-serial-port-output) Could not fetch serial port output: The resource 'projects/k8s-infra-e2e-scale-5k-project/zones/us-east1-b/instances/gce-scale-cluster-minion-heapster' was not found
ERROR: (gcloud.compute.scp) Could not fetch resource:
 - The resource 'projects/k8s-infra-e2e-scale-5k-project/zones/us-east1-b/instances/gce-scale-cluster-minion-heapster' was not found

Detecting nodes in the cluster
WARNING: The following filter keys were not present in any resource : name, zone
WARNING: The following filter keys were not present in any resource : name, zone
INSTANCE_GROUPS=
... skipping 61 lines ...
W0302 17:07:45.383434    5239 loader.go:222] Config not found: /workspace/.kube/config
Property "contexts.k8s-infra-e2e-scale-5k-project_gce-scale-cluster" unset.
Cleared config for k8s-infra-e2e-scale-5k-project_gce-scale-cluster from /workspace/.kube/config
Done
2023/03/02 17:07:45 process.go:155: Step './hack/e2e-internal/e2e-down.sh' finished in 39.547764454s
2023/03/02 17:07:45 process.go:96: Saved XML output to /logs/artifacts/junit_runner.xml.
2023/03/02 17:07:45 main.go:328: Something went wrong: starting e2e cluster: error during ./hack/e2e-internal/e2e-up.sh: exit status 1
Traceback (most recent call last):
  File "/workspace/scenarios/kubernetes_e2e.py", line 723, in <module>
    main(parse_args())
  File "/workspace/scenarios/kubernetes_e2e.py", line 569, in main
    mode.start(runner_args)
  File "/workspace/scenarios/kubernetes_e2e.py", line 228, in start
... skipping 9 lines ...
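The log above shows that `./hack/e2e-internal/e2e-up.sh` failed before the master's reserved address (`gce-scale-cluster-master-ip`) ever existed, so every later log-dump step hit the same "resource not found" errors. For local triage, a minimal, hedged sketch of the checks one might run is below — assuming `gcloud` is installed and you have read access to the project; the project, region, zone, and resource names are taken verbatim from the log, and the script skips the live checks when `gcloud` is unavailable:

```shell
#!/usr/bin/env bash
# Hedged triage sketch: confirm whether the cluster's reserved master IP
# and master VM exist. Names below are copied from the failing job's log.
PROJECT=k8s-infra-e2e-scale-5k-project
REGION=us-east1
ZONE=us-east1-b
CLUSTER=gce-scale-cluster

if command -v gcloud >/dev/null 2>&1; then
  # The job failed to find this address, which is why dump_nodes() and
  # every gcloud.compute.ssh/scp call afterwards also failed.
  gcloud compute addresses describe "${CLUSTER}-master-ip" \
      --project "${PROJECT}" --region "${REGION}" \
    || echo "master IP was never reserved (matches the e2e-up failure)"

  # If the address is missing, the master VM almost certainly is too.
  gcloud compute instances describe "${CLUSTER}-master" \
      --project "${PROJECT}" --zone "${ZONE}" \
    || echo "master VM does not exist"
else
  echo "gcloud not installed; skipping live checks"
fi
```

Note the `localhost:8080` connection refusals in the log are a downstream symptom, not a separate failure: with no master, `kubectl` falls back to its default unauthenticated local endpoint.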