Result: FAILURE
Tests: 1 failed / 7 succeeded
Started: 2022-09-14 01:40
Elapsed: 6m57s
Revision: master
job-version: v1.26.0-alpha.0.527+ea4c28c7f86372
kubetest-version: v20220908-70b61d242b
revision: v1.26.0-alpha.0.527+ea4c28c7f86372

Test Failures

kubetest Up (1m48s)

    error during ./hack/e2e-internal/e2e-up.sh: exit status 1
    (from junit_runner.xml)

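The underlying failure is visible in the error lines below: gcloud refused to create the master VM because the create call requested a minimum CPU platform for an e2-standard-2 instance, and E2 machine types do not support that setting. As a rough sketch of the failing call (the --min-cpu-platform value is an assumption; the real flags come from the GCE cluster scripts, not this page):

    # Hypothetical reconstruction of the failing create; flag values are assumptions.
    gcloud compute instances create kubemark-100-scheduler-highqps-master \
        --project k8s-jenkins-scalability-4 \
        --zone us-east1-b \
        --machine-type e2-standard-2 \
        --min-cpu-platform "Intel Haswell"    # rejected: E2 does not support this flag

    # Either drop --min-cpu-platform, or pick a machine family that supports it:
    gcloud compute instances create kubemark-100-scheduler-highqps-master \
        --project k8s-jenkins-scalability-4 \
        --zone us-east1-b \
        --machine-type n2-standard-2 \
        --min-cpu-platform "Intel Cascade Lake"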



Error lines from build-log.txt

... skipping 346 lines ...
2022/09/14 01:43:32 [INFO] signed certificate with serial number 375881213111364456035005197223587722178498712509
2022/09/14 01:43:32 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
WARNING: You have selected a disk size of under [200GB]. This may result in poor I/O performance. For more information, see: https://developers.google.com/compute/docs/disks#performance.
ERROR: (gcloud.compute.instances.create) Could not fetch resource:
 - Setting minimum CPU platform is not supported for the selected machine type e2-standard-2.
Failed to create master instance due to non-retryable error
Creating firewall...
..Created [https://www.googleapis.com/compute/v1/projects/k8s-jenkins-scalability-4/global/firewalls/kubemark-100-scheduler-highqps-minion-all].
NAME                                       NETWORK                         DIRECTION  PRIORITY  ALLOW                     DENY  DISABLED
kubemark-100-scheduler-highqps-minion-all  kubemark-100-scheduler-highqps  INGRESS    1000      tcp,udp,icmp,esp,ah,sctp        False
done.
Failed to create firewall rule.
2022/09/14 01:43:37 process.go:155: Step './hack/e2e-internal/e2e-up.sh' finished in 1m48.70420765s
2022/09/14 01:43:37 e2e.go:571: Dumping logs from nodes to GCS directly at path: gs://sig-scalability-logs/ci-kubernetes-kubemark-100-gce-scheduler-highqps/1569863115179298816
2022/09/14 01:43:37 process.go:153: Running: /workspace/log-dump.sh /logs/artifacts gs://sig-scalability-logs/ci-kubernetes-kubemark-100-gce-scheduler-highqps/1569863115179298816
Checking for custom logdump instances, if any
Using gce provider, skipping check for LOG_DUMP_SSH_KEY and LOG_DUMP_SSH_USER
Project: k8s-jenkins-scalability-4
... skipping 3 lines ...
Dumping logs from master locally to '/tmp/tmp.37ZcpN9uFA/logs'
Trying to find master named 'kubemark-100-scheduler-highqps-master'
Looking for address 'kubemark-100-scheduler-highqps-master-ip'
Looking for address 'kubemark-100-scheduler-highqps-master-internal-ip'
Using master: kubemark-100-scheduler-highqps-master (external IP: 34.139.225.143; internal IP: 10.40.0.2)
Changing logfiles to be world-readable for download
ERROR: (gcloud.compute.ssh) Could not fetch resource:
 - The resource 'projects/k8s-jenkins-scalability-4/zones/us-east1-b/instances/kubemark-100-scheduler-highqps-master' was not found

... same gcloud.compute.ssh error repeated 5 more times ...

Copying 'kube-apiserver.log kube-apiserver-audit.log kube-scheduler.log kube-controller-manager.log etcd.log etcd-events.log glbc.log cluster-autoscaler.log kube-addon-manager.log konnectivity-server.log fluentd.log kubelet.cov startupscript.log kern.log docker/log kubelet.log supervisor/supervisord.log supervisor/kubelet-stdout.log supervisor/kubelet-stderr.log supervisor/docker-stdout.log supervisor/docker-stderr.log' from kubemark-100-scheduler-highqps-master
ERROR: (gcloud.compute.instances.get-serial-port-output) Could not fetch serial port output: The resource 'projects/k8s-jenkins-scalability-4/zones/us-east1-b/instances/kubemark-100-scheduler-highqps-master' was not found
ERROR: (gcloud.compute.scp) Could not fetch resource:
 - The resource 'projects/k8s-jenkins-scalability-4/zones/us-east1-b/instances/kubemark-100-scheduler-highqps-master' was not found

Dumping logs from nodes to GCS directly at 'gs://sig-scalability-logs/ci-kubernetes-kubemark-100-gce-scheduler-highqps/1569863115179298816' using logexporter
No nodes found!
Detecting nodes in the cluster
WARNING: The following filter keys were not present in any resource : name, zone
... skipping 70 lines ...
W0914 01:47:10.426209    8273 loader.go:222] Config not found: /workspace/.kube/config
Property "contexts.k8s-jenkins-scalability-4_kubemark-100-scheduler-highqps" unset.
Cleared config for k8s-jenkins-scalability-4_kubemark-100-scheduler-highqps from /workspace/.kube/config
Done
2022/09/14 01:47:10 process.go:155: Step './hack/e2e-internal/e2e-down.sh' finished in 1m39.076820201s
2022/09/14 01:47:10 process.go:96: Saved XML output to /logs/artifacts/junit_runner.xml.
2022/09/14 01:47:10 main.go:331: Something went wrong: starting e2e cluster: error during ./hack/e2e-internal/e2e-up.sh: exit status 1
Traceback (most recent call last):
  File "/workspace/scenarios/kubernetes_e2e.py", line 723, in <module>
    main(parse_args())
  File "/workspace/scenarios/kubernetes_e2e.py", line 569, in main
    mode.start(runner_args)
  File "/workspace/scenarios/kubernetes_e2e.py", line 228, in start
... skipping 15 lines ...
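Every log-dump step after the failed create hits the same wall: the master instance never existed, so the SSH, serial-port, and scp lookups all fail with "resource not found". When triaging a run like this, a quick existence check (a hypothetical debugging step, not part of the job; names are taken from the log above) separates "the VM never came up" from "SSH or auth problems":

    # Prints the instance status, or confirms the master was never created.
    gcloud compute instances describe kubemark-100-scheduler-highqps-master \
        --zone us-east1-b --project k8s-jenkins-scalability-4 \
        --format 'value(status)' \
        || echo "master instance was never created"

    # Whatever the job did manage to dump lands in GCS and can be listed directly:
    gsutil ls gs://sig-scalability-logs/ci-kubernetes-kubemark-100-gce-scheduler-highqps/1569863115179298816/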