Result           | FAILURE
Tests            | 1 failed / 7 succeeded
Started          |
Elapsed          | 6m7s
Revision         | master
job-version      | v1.28.0-alpha.0.1210+3d27dee047a875
kubetest-version | v20230513-7e1db2f1bb
revision         | v1.28.0-alpha.0.1210+3d27dee047a875
error during ./hack/e2e-internal/e2e-up.sh: exit status 1 (from junit_runner.xml)
kubetest Deferred TearDown
kubetest DumpClusterLogs (--up failed)
kubetest Extract
kubetest GetDeployer
kubetest Prepare
kubetest TearDown Previous
kubetest Timeout
... skipping 271 lines ...
Creating firewall...
..Created [https://www.googleapis.com/compute/v1/projects/k8s-infra-e2e-boskos-scale-30/global/firewalls/kubemark-500-default-internal-master].
done.
NAME                                  NETWORK       DIRECTION  PRIORITY  ALLOW                                       DENY  DISABLED
kubemark-500-default-internal-master  kubemark-500  INGRESS    1000      tcp:1-2379,tcp:2382-65535,udp:1-65535,icmp        False
..Created [https://www.googleapis.com/compute/v1/projects/k8s-infra-e2e-boskos-scale-30/global/firewalls/kubemark-500-default-internal-node].
.failed.
ERROR: (gcloud.compute.firewall-rules.create) Could not fetch resource:
 - The service is currently unavailable.
done.
NAME                                NETWORK       DIRECTION  PRIORITY  ALLOW                         DENY  DISABLED
kubemark-500-default-internal-node  kubemark-500  INGRESS    1000      tcp:1-65535,udp:1-65535,icmp        False
Failed to create firewall rules.
2023/05/25 22:58:22 process.go:155: Step './hack/e2e-internal/e2e-up.sh' finished in 1m13.217293231s
2023/05/25 22:58:22 e2e.go:569: Dumping logs from nodes to GCS directly at path: gs://k8s-infra-scalability-tests-logs/ci-kubernetes-kubemark-500-gce/1661867763532042240
2023/05/25 22:58:22 process.go:153: Running: /workspace/log-dump.sh /logs/artifacts gs://k8s-infra-scalability-tests-logs/ci-kubernetes-kubemark-500-gce/1661867763532042240
Checking for custom logdump instances, if any
Using gce provider, skipping check for LOG_DUMP_SSH_KEY and LOG_DUMP_SSH_USER
Project: k8s-infra-e2e-boskos-scale-30
Network Project: k8s-infra-e2e-boskos-scale-30
Zone: us-central1-f
Dumping logs temporarily to '/tmp/tmp.sfRHAJQhqM/logs'. Will upload to 'gs://k8s-infra-scalability-tests-logs/ci-kubernetes-kubemark-500-gce/1661867763532042240' later.
Dumping logs from master locally to '/tmp/tmp.sfRHAJQhqM/logs'
Trying to find master named 'kubemark-500-master'
Looking for address 'kubemark-500-master-ip'
ERROR: (gcloud.compute.addresses.describe) Could not fetch resource:
 - The resource 'projects/k8s-infra-e2e-boskos-scale-30/regions/us-central1/addresses/kubemark-500-master-ip' was not found
Could not detect Kubernetes master node. Make sure you've launched a cluster with 'kube-up.sh'
Master not detected. Is the cluster up?
Dumping logs from nodes to GCS directly at 'gs://k8s-infra-scalability-tests-logs/ci-kubernetes-kubemark-500-gce/1661867763532042240' using logexporter
No nodes found!
... skipping 42 lines ...
WARNING: The following filter keys were not present in any resource : name, zone
Deleted [https://www.googleapis.com/compute/v1/projects/k8s-infra-e2e-boskos-scale-30/global/firewalls/kubemark-500-default-internal-master].
Deleted [https://www.googleapis.com/compute/v1/projects/k8s-infra-e2e-boskos-scale-30/global/firewalls/kubemark-500-default-internal-node].
Deleted [https://www.googleapis.com/compute/v1/projects/k8s-infra-e2e-boskos-scale-30/global/firewalls/kubemark-500-default-ssh].
No firewall rules in network kubemark-500
Deleting custom subnet...
ERROR: (gcloud.compute.networks.subnets.delete) Could not fetch resource:
 - The resource 'projects/k8s-infra-e2e-boskos-scale-30/regions/us-central1/subnetworks/kubemark-500-custom-subnet' was not found
Deleted [https://www.googleapis.com/compute/v1/projects/k8s-infra-e2e-boskos-scale-30/global/networks/kubemark-500].
Using image: cos-97-16919-294-23 from project: cos-cloud as master image
Using image: cos-97-16919-294-23 from project: cos-cloud as node image
Using image: cos-97-16919-294-23 from project: cos-cloud as master image
... skipping 9 lines ...
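The root cause is visible above: while creating the kubemark-500-default-internal-node firewall rule, the GCE API returned a transient "The service is currently unavailable." error, which ./hack/e2e-internal/e2e-up.sh treats as fatal. Everything after that (the failed master-IP lookup, the empty log dump, the teardown) is fallout from the cluster never coming up. As a minimal sketch of how such a transient failure could be absorbed, the hypothetical helper below (not part of kube-up.sh; the function name and retry policy are illustrative) retries a gcloud invocation with exponential backoff when stderr indicates a transient server error:

```python
import subprocess
import time

# Hypothetical helper, not part of kube-up.sh: retry a gcloud command with
# exponential backoff when it fails with a transient server-side error such
# as "The service is currently unavailable.".
def gcloud_with_retries(args, attempts=4, base_delay=5.0):
    for attempt in range(1, attempts + 1):
        result = subprocess.run(
            ["gcloud"] + args,
            capture_output=True,
            text=True,
        )
        if result.returncode == 0:
            return result.stdout
        transient = "currently unavailable" in result.stderr
        if not transient or attempt == attempts:
            raise RuntimeError(
                f"gcloud failed after {attempt} attempt(s): {result.stderr.strip()}"
            )
        time.sleep(base_delay * 2 ** (attempt - 1))  # 5s, 10s, 20s, ...

# Example: the firewall rule whose creation failed in this run.
gcloud_with_retries([
    "compute", "firewall-rules", "create",
    "kubemark-500-default-internal-node",
    "--project=k8s-infra-e2e-boskos-scale-30",
    "--network=kubemark-500",
    "--allow=tcp:1-65535,udp:1-65535,icmp",
])
```

Retrying only when stderr contains the transient-error marker keeps genuinely invalid requests (for example, a malformed --allow spec) failing fast instead of looping.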
Using image: cos-97-16919-294-23 from project: cos-cloud as node image
Property "contexts.k8s-infra-e2e-boskos-scale-30_kubemark-500" unset.
Cleared config for k8s-infra-e2e-boskos-scale-30_kubemark-500 from /workspace/.kube/config
Done
2023/05/25 23:00:31 process.go:155: Step './hack/e2e-internal/e2e-down.sh' finished in 1m56.063958445s
2023/05/25 23:00:31 process.go:96: Saved XML output to /logs/artifacts/junit_runner.xml.
2023/05/25 23:00:31 main.go:328: Something went wrong: starting e2e cluster: error during ./hack/e2e-internal/e2e-up.sh: exit status 1
Traceback (most recent call last):
  File "/workspace/scenarios/kubernetes_e2e.py", line 723, in <module>
    main(parse_args())
  File "/workspace/scenarios/kubernetes_e2e.py", line 569, in main
    mode.start(runner_args)
  File "/workspace/scenarios/kubernetes_e2e.py", line 228, in start
... skipping 15 lines ...
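The trailing traceback is not a second, independent failure: it is the scenario wrapper surfacing the runner's nonzero exit status after teardown completed. A minimal sketch of that propagation, assuming a subprocess-based wrapper (this is not the actual kubernetes_e2e.py code, and the argument list is illustrative):

```python
import subprocess

# Minimal sketch, not the real kubernetes_e2e.py: run the kubetest runner and
# let a nonzero exit status surface as an exception, which is what produces a
# traceback like the one above.
def start(runner_args):
    # check_call raises CalledProcessError on a nonzero exit status,
    # mirroring "error during ./hack/e2e-internal/e2e-up.sh: exit status 1".
    subprocess.check_call(["kubetest"] + runner_args)

if __name__ == "__main__":
    start(["--up", "--down", "--provider=gce"])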