Result: FAILURE
Tests: 1 failed / 7 succeeded
Started: 2022-05-04 12:02
Elapsed: 7m7s
Revision:
job-version: v1.25.0-alpha.0.195+094a33ad801065
kubetest-version:
revision: v1.25.0-alpha.0.195+094a33ad801065

Test Failures


kubetest Up 1m21s

error during ./hack/e2e-internal/e2e-up.sh: exit status 1
from junit_runner.xml




Error lines from build-log.txt

Activated service account credentials for: [prow-build@k8s-infra-prow-build.iam.gserviceaccount.com]
fatal: not a git repository (or any of the parent directories): .git
+ WRAPPED_COMMAND_PID=31
+ wait 31
+ /workspace/scenarios/kubernetes_e2e.py --cluster=gce-scale-cluster --env=CONCURRENT_SERVICE_SYNCS=5 --env=HEAPSTER_MACHINE_TYPE=e2-standard-32 --extract=ci/latest-fast --extract-ci-bucket=k8s-release-dev '--env=CONTROLLER_MANAGER_TEST_ARGS=--profiling --kube-api-qps=100 --kube-api-burst=100 --endpointslice-updates-batch-period=500ms --endpoint-updates-batch-period=500ms' --gcp-master-image=gci --gcp-node-image=gci --gcp-node-size=e2-small --gcp-nodes=5000 --gcp-project-type=scalability-scale-project --gcp-ssh-proxy-instance-name=gce-scale-cluster-master --gcp-zone=us-east1-b --ginkgo-parallel=40 --provider=gce '--test_args=--ginkgo.skip=\[Driver:.gcepd\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]|\[DisabledForLargeClusters\] --minStartupPods=8 --node-schedulable-timeout=90m' --timeout=240m --use-logexporter --logexporter-gcs-path=gs://k8s-infra-scalability-tests-logs/ci-kubernetes-e2e-gce-scale-correctness/1521822220547002368
starts with local mode
Environment:
API_SERVER_TEST_LOG_LEVEL=--v=3
... skipping 342 lines ...
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
WARNING: Some requests generated warnings:
 - Disk size: '500 GB' is larger than image size: '10 GB'. You might need to resize the root repartition manually if the operating system does not support automatic resizing. See https://cloud.google.com/compute/docs/disks/add-persistent-disk#resize_pd for details.
 - The resource 'projects/cos-cloud/global/images/cos-85-13310-1308-1' is deprecated. A suggested replacement is 'projects/cos-cloud/global/images/cos-85-13310-1308-6'.

ERROR: (gcloud.compute.instances.create) Could not fetch resource:
 - The zone 'projects/k8s-infra-e2e-scale-5k-project/zones/us-east1-b' does not have enough resources available to fulfill the request.  '(resource type:compute)'.
Failed to create master instance due to non-retryable error
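The create call above failed with a zone capacity stockout, which kubetest treats as non-retryable. As a rough illustration only (the function and zone names below are hypothetical stand-ins, not kubetest code), transient stockouts like this are commonly mitigated by retrying or falling back to another zone, e.g. via a different `--gcp-zone`:

```python
import time


def create_with_fallback(create, zones, attempts=3, delay=1.0):
    """Try each zone in turn, retrying transient capacity stockouts.

    `create` is a hypothetical callable that raises RuntimeError on a
    stockout; kubetest itself aborts on this error rather than retrying,
    so this only sketches one possible mitigation.
    """
    last_err = None
    for zone in zones:
        for _ in range(attempts):
            try:
                return create(zone)
            except RuntimeError as err:  # stand-in for the stockout error
                last_err = err
                time.sleep(delay)
    raise last_err
```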
2022/05/04 12:04:25 process.go:155: Step './hack/e2e-internal/e2e-up.sh' finished in 1m21.479640021s
2022/05/04 12:04:25 e2e.go:571: Dumping logs from nodes to GCS directly at path: gs://k8s-infra-scalability-tests-logs/ci-kubernetes-e2e-gce-scale-correctness/1521822220547002368
2022/05/04 12:04:25 process.go:153: Running: /workspace/log-dump.sh /logs/artifacts gs://k8s-infra-scalability-tests-logs/ci-kubernetes-e2e-gce-scale-correctness/1521822220547002368
Checking for custom logdump instances, if any
Using gce provider, skipping check for LOG_DUMP_SSH_KEY and LOG_DUMP_SSH_USER
Project: k8s-infra-e2e-scale-5k-project
... skipping 3 lines ...
Dumping logs from master locally to '/tmp/tmp.7t2u4VIevR/logs'
Trying to find master named 'gce-scale-cluster-master'
Looking for address 'gce-scale-cluster-master-ip'
Looking for address 'gce-scale-cluster-master-internal-ip'
Using master: gce-scale-cluster-master (external IP: 34.75.174.135; internal IP: 10.40.0.2)
Changing logfiles to be world-readable for download
ERROR: (gcloud.compute.ssh) Could not fetch resource:
 - The resource 'projects/k8s-infra-e2e-scale-5k-project/zones/us-east1-b/instances/gce-scale-cluster-master' was not found

Copying 'kube-apiserver.log kube-apiserver-audit.log kube-scheduler.log kube-controller-manager.log etcd.log etcd-events.log glbc.log cluster-autoscaler.log kube-addon-manager.log konnectivity-server.log fluentd.log kubelet.cov cl2-* startupscript.log kern.log docker/log kubelet.log supervisor/supervisord.log supervisor/kubelet-stdout.log supervisor/kubelet-stderr.log supervisor/docker-stdout.log supervisor/docker-stderr.log' from gce-scale-cluster-master
ERROR: (gcloud.compute.instances.get-serial-port-output) Could not fetch serial port output: The resource 'projects/k8s-infra-e2e-scale-5k-project/zones/us-east1-b/instances/gce-scale-cluster-master' was not found
ERROR: (gcloud.compute.scp) Could not fetch resource:
 - The resource 'projects/k8s-infra-e2e-scale-5k-project/zones/us-east1-b/instances/gce-scale-cluster-master' was not found

Dumping logs from nodes to GCS directly at 'gs://k8s-infra-scalability-tests-logs/ci-kubernetes-e2e-gce-scale-correctness/1521822220547002368' using logexporter
The connection to the server localhost:8080 was refused - did you specify the right host or port?
Failed to create logexporter daemonset.. falling back to logdump through SSH
The connection to the server localhost:8080 was refused - did you specify the right host or port?
Dumping logs for nodes provided as args to dump_nodes() function
Changing logfiles to be world-readable for download
ERROR: (gcloud.compute.ssh) Could not fetch resource:
 - The resource 'projects/k8s-infra-e2e-scale-5k-project/zones/us-east1-b/instances/gce-scale-cluster-minion-heapster' was not found

Copying 'kube-proxy.log containers/konnectivity-agent-*.log fluentd.log node-problem-detector.log kubelet.cov cl2-* startupscript.log kern.log docker/log kubelet.log supervisor/supervisord.log supervisor/kubelet-stdout.log supervisor/kubelet-stderr.log supervisor/docker-stdout.log supervisor/docker-stderr.log' from gce-scale-cluster-minion-heapster
ERROR: (gcloud.compute.instances.get-serial-port-output) Could not fetch serial port output: The resource 'projects/k8s-infra-e2e-scale-5k-project/zones/us-east1-b/instances/gce-scale-cluster-minion-heapster' was not found
ERROR: (gcloud.compute.scp) Could not fetch resource:
 - The resource 'projects/k8s-infra-e2e-scale-5k-project/zones/us-east1-b/instances/gce-scale-cluster-minion-heapster' was not found

Detecting nodes in the cluster
WARNING: The following filter keys were not present in any resource : name, zone
WARNING: The following filter keys were not present in any resource : name, zone
INSTANCE_GROUPS=
... skipping 70 lines ...
W0504 12:09:16.731435    8696 loader.go:221] Config not found: /workspace/.kube/config
Property "contexts.k8s-infra-e2e-scale-5k-project_gce-scale-cluster" unset.
Cleared config for k8s-infra-e2e-scale-5k-project_gce-scale-cluster from /workspace/.kube/config
Done
2022/05/04 12:09:16 process.go:155: Step './hack/e2e-internal/e2e-down.sh' finished in 1m33.161866461s
2022/05/04 12:09:16 process.go:96: Saved XML output to /logs/artifacts/junit_runner.xml.
2022/05/04 12:09:16 main.go:331: Something went wrong: starting e2e cluster: error during ./hack/e2e-internal/e2e-up.sh: exit status 1
Traceback (most recent call last):
  File "/workspace/scenarios/kubernetes_e2e.py", line 723, in <module>
    main(parse_args())
  File "/workspace/scenarios/kubernetes_e2e.py", line 569, in main
    mode.start(runner_args)
  File "/workspace/scenarios/kubernetes_e2e.py", line 228, in start
... skipping 9 lines ...
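The truncated traceback above comes from the Python wrapper (kubernetes_e2e.py) surfacing kubetest's nonzero exit. A minimal sketch of that propagation, assuming a check_call-style runner (the real start() is more involved, and the `binary` parameter here is only for illustration):

```python
import subprocess


def start(runner_args, binary="./kubetest"):
    # check_call raises CalledProcessError on a nonzero child exit, which
    # is what bubbles up through main() as the traceback above.
    subprocess.check_call([binary] + runner_args)
```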