Result: FAILURE
Tests: 1 failed / 7 succeeded
Started: 2022-09-27 01:49
Elapsed: 8m14s
Revision: master
job-version: v1.26.0-alpha.1.129+24377fa7a1b5a2
kubetest-version: v20220922-dcf27e1579
revision: v1.26.0-alpha.1.129+24377fa7a1b5a2

Test Failures


kubetest Up (1m50s)

error during ./hack/e2e-internal/e2e-up.sh: exit status 1
    from junit_runner.xml


Error lines from build-log.txt

... skipping 346 lines ...
Creating firewall...
..Created [https://www.googleapis.com/compute/v1/projects/k8s-jenkins-gci-scalability/global/firewalls/kubemark-100-scheduler-highqps-master-etcd].
NAME                                        NETWORK                         DIRECTION  PRIORITY  ALLOW              DENY  DISABLED
kubemark-100-scheduler-highqps-master-etcd  kubemark-100-scheduler-highqps  INGRESS    1000      tcp:2380,tcp:2381        False
done.
WARNING: You have selected a disk size of under [200GB]. This may result in poor I/O performance. For more information, see: https://developers.google.com/compute/docs/disks#performance.
ERROR: (gcloud.compute.instances.create) Could not fetch resource:
 - Setting minimum CPU platform is not supported for the selected machine type e2-standard-2.
Failed to create master instance due to non-retryable error
Creating firewall...
..Created [https://www.googleapis.com/compute/v1/projects/k8s-jenkins-gci-scalability/global/firewalls/kubemark-100-scheduler-highqps-minion-all].
NAME                                       NETWORK                         DIRECTION  PRIORITY  ALLOW                     DENY  DISABLED
kubemark-100-scheduler-highqps-minion-all  kubemark-100-scheduler-highqps  INGRESS    1000      tcp,udp,icmp,esp,ah,sctp        False
done.
Failed to create firewall rule.
2022/09/27 01:53:03 process.go:155: Step './hack/e2e-internal/e2e-up.sh' finished in 1m50.068470492s
2022/09/27 01:53:03 e2e.go:565: Dumping logs from nodes to GCS directly at path: gs://sig-scalability-logs/ci-kubernetes-kubemark-100-gce-scheduler-highqps/1574576439825534976
2022/09/27 01:53:03 process.go:153: Running: /workspace/log-dump.sh /logs/artifacts gs://sig-scalability-logs/ci-kubernetes-kubemark-100-gce-scheduler-highqps/1574576439825534976
Checking for custom logdump instances, if any
Using gce provider, skipping check for LOG_DUMP_SSH_KEY and LOG_DUMP_SSH_USER
Project: k8s-jenkins-gci-scalability
... skipping 3 lines ...
Dumping logs from master locally to '/tmp/tmp.ot1K5Z17CI/logs'
Trying to find master named 'kubemark-100-scheduler-highqps-master'
Looking for address 'kubemark-100-scheduler-highqps-master-ip'
Looking for address 'kubemark-100-scheduler-highqps-master-internal-ip'
Using master: kubemark-100-scheduler-highqps-master (external IP: 35.185.26.252; internal IP: 10.40.0.2)
Changing logfiles to be world-readable for download
ERROR: (gcloud.compute.ssh) Could not fetch resource:
 - The resource 'projects/k8s-jenkins-gci-scalability/zones/us-east1-b/instances/kubemark-100-scheduler-highqps-master' was not found

ERROR: (gcloud.compute.ssh) Could not fetch resource:
 - The resource 'projects/k8s-jenkins-gci-scalability/zones/us-east1-b/instances/kubemark-100-scheduler-highqps-master' was not found

ERROR: (gcloud.compute.ssh) Could not fetch resource:
 - The resource 'projects/k8s-jenkins-gci-scalability/zones/us-east1-b/instances/kubemark-100-scheduler-highqps-master' was not found

ERROR: (gcloud.compute.ssh) Could not fetch resource:
 - The resource 'projects/k8s-jenkins-gci-scalability/zones/us-east1-b/instances/kubemark-100-scheduler-highqps-master' was not found

ERROR: (gcloud.compute.ssh) Could not fetch resource:
 - The resource 'projects/k8s-jenkins-gci-scalability/zones/us-east1-b/instances/kubemark-100-scheduler-highqps-master' was not found

ERROR: (gcloud.compute.ssh) Could not fetch resource:
 - The resource 'projects/k8s-jenkins-gci-scalability/zones/us-east1-b/instances/kubemark-100-scheduler-highqps-master' was not found

Copying 'kube-apiserver.log kube-apiserver-audit.log kube-scheduler.log kube-controller-manager.log etcd.log etcd-events.log glbc.log cluster-autoscaler.log kube-addon-manager.log konnectivity-server.log fluentd.log kubelet.cov startupscript.log kern.log docker/log kubelet.log supervisor/supervisord.log supervisor/kubelet-stdout.log supervisor/kubelet-stderr.log supervisor/docker-stdout.log supervisor/docker-stderr.log' from kubemark-100-scheduler-highqps-master
ERROR: (gcloud.compute.instances.get-serial-port-output) Could not fetch serial port output: The resource 'projects/k8s-jenkins-gci-scalability/zones/us-east1-b/instances/kubemark-100-scheduler-highqps-master' was not found
ERROR: (gcloud.compute.scp) Could not fetch resource:
 - The resource 'projects/k8s-jenkins-gci-scalability/zones/us-east1-b/instances/kubemark-100-scheduler-highqps-master' was not found

Dumping logs from nodes to GCS directly at 'gs://sig-scalability-logs/ci-kubernetes-kubemark-100-gce-scheduler-highqps/1574576439825534976' using logexporter
No nodes found!
Detecting nodes in the cluster
WARNING: The following filter keys were not present in any resource : name, zone
... skipping 70 lines ...
W0927 01:57:37.025721    8288 loader.go:222] Config not found: /workspace/.kube/config
Property "contexts.k8s-jenkins-gci-scalability_kubemark-100-scheduler-highqps" unset.
Cleared config for k8s-jenkins-gci-scalability_kubemark-100-scheduler-highqps from /workspace/.kube/config
Done
2022/09/27 01:57:37 process.go:155: Step './hack/e2e-internal/e2e-down.sh' finished in 2m16.629111899s
2022/09/27 01:57:37 process.go:96: Saved XML output to /logs/artifacts/junit_runner.xml.
2022/09/27 01:57:37 main.go:331: Something went wrong: starting e2e cluster: error during ./hack/e2e-internal/e2e-up.sh: exit status 1
Traceback (most recent call last):
  File "/workspace/scenarios/kubernetes_e2e.py", line 723, in <module>
    main(parse_args())
  File "/workspace/scenarios/kubernetes_e2e.py", line 569, in main
    mode.start(runner_args)
  File "/workspace/scenarios/kubernetes_e2e.py", line 228, in start
... skipping 15 lines ...
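
Note: the underlying failure above ("Setting minimum CPU platform is not supported for the selected machine type e2-standard-2") occurs during master creation in ./hack/e2e-internal/e2e-up.sh. As a minimal sketch of a gcloud call that triggers the same error, the following uses the project and zone seen in this log, while the instance name and CPU platform value are illustrative assumptions, not values taken from this job:

    # Hypothetical reproduction: E2 machine types reject --min-cpu-platform,
    # which matches the non-retryable error logged during master creation.
    gcloud compute instances create kubemark-repro-master \
        --project k8s-jenkins-gci-scalability \
        --zone us-east1-b \
        --machine-type e2-standard-2 \
        --min-cpu-platform "Intel Skylake"
    # Expected output (per the build log above):
    # ERROR: (gcloud.compute.instances.create) Could not fetch resource:
    #  - Setting minimum CPU platform is not supported for the selected machine type e2-standard-2.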