Result: FAILURE
Tests: 1 failed / 5 succeeded
Started: 2021-08-06 12:01
Elapsed: 3m51s
job-version: v1.23.0-alpha.0.293+8c64743d73f206
revision: v1.23.0-alpha.0.293+8c64743d73f206

Test Failures


kubetest Up (15s)

error during ./hack/e2e-internal/e2e-up.sh: exit status 1
    (from junit_runner.xml)


Error lines from build-log.txt

Activated service account credentials for: [pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com]
fatal: not a git repository (or any of the parent directories): .git
+ WRAPPED_COMMAND_PID=30
+ wait 30
+ /workspace/scenarios/kubernetes_e2e.py --cluster=gce-scale-cluster --env=CONCURRENT_SERVICE_SYNCS=5 --env=HEAPSTER_MACHINE_TYPE=e2-standard-32 --extract=ci/latest-fast --extract-ci-bucket=k8s-release-dev '--env=CONTROLLER_MANAGER_TEST_ARGS=--profiling --kube-api-qps=100 --kube-api-burst=100 --endpointslice-updates-batch-period=500ms --endpoint-updates-batch-period=500ms' --gcp-master-image=gci --gcp-node-image=gci --gcp-node-size=e2-small --gcp-nodes=5000 --gcp-project=kubernetes-scale --gcp-ssh-proxy-instance-name=gce-scale-cluster-master --gcp-zone=us-east1-b --ginkgo-parallel=40 --provider=gce '--test_args=--ginkgo.skip=\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]|\[DisabledForLargeClusters\] --minStartupPods=8 --node-schedulable-timeout=90m' --timeout=240m --use-logexporter --logexporter-gcs-path=gs://sig-scalability-logs/ci-kubernetes-e2e-gce-scale-correctness/1423615140175024128
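(kubernetes_e2e.py is the kubetest wrapper scenario: it exports each --env value into the environment and then invokes kubetest with the remaining flags. A rough hand-run equivalent, as a sketch of the wrapper's behavior rather than its exact logic, eliding flags that mirror the invocation above:)

    # Environment variables taken from the --env flags:
    export CONCURRENT_SERVICE_SYNCS=5
    export HEAPSTER_MACHINE_TYPE=e2-standard-32
    # The remaining flags pass straight through to kubetest:
    kubetest --provider=gce --gcp-project=kubernetes-scale --gcp-zone=us-east1-b \
      --gcp-nodes=5000 --extract=ci/latest-fast --timeout=240m ...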
starts with local mode
Environment:
API_SERVER_TEST_LOG_LEVEL=--v=3
... skipping 189 lines ...
k8s-fw-a0c51490604484e39b44e4be80284fb5
k8s-fw-a6b9c8bdd332e4924be9ed0b7c64a662
k8s-fw-a9ab1debe873946b6981dfe1be766d04
k8s-fw-a9b0a578b4ab64132aacc0c2d4d1b1af
k8s-fw-aeff8504a9d464d349abc73863398723
Deleting custom subnet...
ERROR: (gcloud.compute.networks.subnets.delete) Could not fetch resource:
 - The resource 'projects/kubernetes-scale/regions/us-east1/subnetworks/gce-scale-cluster-custom-subnet' was not found

ERROR: (gcloud.compute.networks.delete) Could not fetch resource:
 - The network resource 'projects/kubernetes-scale/global/networks/gce-scale-cluster' is already being used by 'projects/kubernetes-scale/global/firewalls/k8s-d74235cd2876a5ee-node-http-hc'

Failed to delete network 'gce-scale-cluster'. Listing firewall-rules:
NAME                                     NETWORK            DIRECTION  PRIORITY  ALLOW      DENY  DISABLED
k8s-03b5ddb1bedf037e-node-http-hc        gce-scale-cluster  INGRESS    1000      tcp:10256        False
k8s-890f5c3003636ba3-node-http-hc        gce-scale-cluster  INGRESS    1000      tcp:10256        False
k8s-bee3ddd48f54bf54-node-http-hc        gce-scale-cluster  INGRESS    1000      tcp:10256        False
k8s-d74235cd2876a5ee-node-http-hc        gce-scale-cluster  INGRESS    1000      tcp:10256        False
k8s-ec37903be58ba35e-node-http-hc        gce-scale-cluster  INGRESS    1000      tcp:10256        False
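(The network delete above fails because these ingress rules still reference 'gce-scale-cluster'; GCE will not delete a network while firewall rules point at it. A manual cleanup sketch, assuming gcloud access to the kubernetes-scale project:)

    # Delete every firewall rule still attached to the network, then retry the delete:
    gcloud compute firewall-rules list --project=kubernetes-scale \
        --filter='network:gce-scale-cluster' --format='value(name)' |
      xargs -r -n1 gcloud compute firewall-rules delete --project=kubernetes-scale --quiet
    gcloud compute networks delete gce-scale-cluster --project=kubernetes-scale --quiet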
... skipping 32 lines ...
... calling verify-release-tars
... calling kube-up
Project: kubernetes-scale
Network Project: kubernetes-scale
Zone: us-east1-b
+++ Staging tars to Google Storage: gs://kubernetes-staging-a5dbc9bafa/gce-scale-cluster-devel
ResumableUploadException: 503 Server Error
CommandException: 1 file/object could not be transferred.
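(This transient 503 from GCS is the actual root cause of the run: staging the release tars failed, so e2e-up.sh exited 1 and everything after is teardown fallout. gsutil retries internally, but an outer retry loop is a common blunt workaround; a sketch, where the tarball name is illustrative and the bucket path is the one above:)

    for attempt in 1 2 3 4 5; do
      gsutil -m cp kubernetes-server-linux-amd64.tar.gz \
          gs://kubernetes-staging-a5dbc9bafa/gce-scale-cluster-devel/ && break
      sleep $((attempt * 10))   # back off a little more on each failure
    done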
2021/08/06 12:02:41 process.go:155: Step './hack/e2e-internal/e2e-up.sh' finished in 15.943500322s
2021/08/06 12:02:41 e2e.go:541: Dumping logs from nodes to GCS directly at path: gs://sig-scalability-logs/ci-kubernetes-e2e-gce-scale-correctness/1423615140175024128
2021/08/06 12:02:41 process.go:153: Running: /workspace/log-dump.sh /logs/artifacts gs://sig-scalability-logs/ci-kubernetes-e2e-gce-scale-correctness/1423615140175024128
Checking for custom logdump instances, if any
Using gce provider, skipping check for LOG_DUMP_SSH_KEY and LOG_DUMP_SSH_USER
Project: kubernetes-scale
Network Project: kubernetes-scale
Zone: us-east1-b
Dumping logs temporarily to '/tmp/tmp.oqT6o7X2sF/logs'. Will upload to 'gs://sig-scalability-logs/ci-kubernetes-e2e-gce-scale-correctness/1423615140175024128' later.
Dumping logs from master locally to '/tmp/tmp.oqT6o7X2sF/logs'
Trying to find master named 'gce-scale-cluster-master'
Looking for address 'gce-scale-cluster-master-ip'
ERROR: (gcloud.compute.addresses.describe) Could not fetch resource:
 - The resource 'projects/kubernetes-scale/regions/us-east1/addresses/gce-scale-cluster-master-ip' was not found

Could not detect Kubernetes master node.  Make sure you've launched a cluster with 'kube-up.sh'
Master not detected. Is the cluster up?
Dumping logs from nodes to GCS directly at 'gs://sig-scalability-logs/ci-kubernetes-e2e-gce-scale-correctness/1423615140175024128' using logexporter
Detecting nodes in the cluster
The connection to the server localhost:8080 was refused - did you specify the right host or port?
Failed to create logexporter daemonset.. falling back to logdump through SSH
The connection to the server localhost:8080 was refused - did you specify the right host or port?
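(The localhost:8080 refusals are kubectl's default behavior when it finds no kubeconfig, which the later "Config not found: /workspace/.kube/config" line confirms: with no configuration it falls back to the insecure local default. Once a cluster actually exists, pointing kubectl at a real config avoids this; the path here is illustrative:)

    kubectl --kubeconfig=/workspace/.kube/config get nodes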
Dumping logs for nodes provided as args to dump_nodes() function
Changing logfiles to be world-readable for download
ERROR: (gcloud.compute.ssh) Could not fetch resource:
 - The resource 'projects/kubernetes-scale/zones/us-east1-b/instances/gce-scale-cluster-minion-heapster' was not found

ERROR: (gcloud.compute.ssh) Could not fetch resource:
 - The resource 'projects/kubernetes-scale/zones/us-east1-b/instances/gce-scale-cluster-minion-heapster' was not found

ERROR: (gcloud.compute.ssh) Could not fetch resource:
 - The resource 'projects/kubernetes-scale/zones/us-east1-b/instances/gce-scale-cluster-minion-heapster' was not found

ERROR: (gcloud.compute.ssh) Could not fetch resource:
 - The resource 'projects/kubernetes-scale/zones/us-east1-b/instances/gce-scale-cluster-minion-heapster' was not found

ERROR: (gcloud.compute.ssh) Could not fetch resource:
 - The resource 'projects/kubernetes-scale/zones/us-east1-b/instances/gce-scale-cluster-minion-heapster' was not found

ERROR: (gcloud.compute.ssh) Could not fetch resource:
 - The resource 'projects/kubernetes-scale/zones/us-east1-b/instances/gce-scale-cluster-minion-heapster' was not found

Copying 'kube-proxy.log containers/konnectivity-agent-*.log fluentd.log node-problem-detector.log kubelet.cov cl2-* startupscript.log kern.log docker/log kubelet.log supervisor/supervisord.log supervisor/kubelet-stdout.log supervisor/kubelet-stderr.log supervisor/docker-stdout.log supervisor/docker-stderr.log' from gce-scale-cluster-minion-heapster
ERROR: (gcloud.compute.instances.get-serial-port-output) Could not fetch serial port output: The resource 'projects/kubernetes-scale/zones/us-east1-b/instances/gce-scale-cluster-minion-heapster' was not found
ERROR: (gcloud.compute.scp) Could not fetch resource:
 - The resource 'projects/kubernetes-scale/zones/us-east1-b/instances/gce-scale-cluster-minion-heapster' was not found
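(Every ssh/scp attempt against gce-scale-cluster-minion-heapster fails identically, presumably because kube-up aborted at the tar-staging step before creating any instances. A quick existence check, using the same project and zone as above:)

    gcloud compute instances describe gce-scale-cluster-minion-heapster \
        --project=kubernetes-scale --zone=us-east1-b --format='value(status)'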

Uploading '/tmp/tmp.oqT6o7X2sF/logs' to 'gs://sig-scalability-logs/ci-kubernetes-e2e-gce-scale-correctness/1423615140175024128'
CommandException: One or more URLs matched no objects.
Copying file:///tmp/tmp.oqT6o7X2sF/logs/gce-scale-cluster-minion-heapster/serial-1.log [Content-Type=application/octet-stream]...
/ [0/1 files][    0.0 B/   32.0 B]                                              
/ [1/1 files][   32.0 B/   32.0 B]                                              
... skipping 32 lines ...
k8s-fw-a0c51490604484e39b44e4be80284fb5
k8s-fw-a6b9c8bdd332e4924be9ed0b7c64a662
k8s-fw-a9ab1debe873946b6981dfe1be766d04
k8s-fw-a9b0a578b4ab64132aacc0c2d4d1b1af
k8s-fw-aeff8504a9d464d349abc73863398723
Deleting custom subnet...
ERROR: (gcloud.compute.networks.subnets.delete) Could not fetch resource:
 - The resource 'projects/kubernetes-scale/regions/us-east1/subnetworks/gce-scale-cluster-custom-subnet' was not found

ERROR: (gcloud.compute.networks.delete) Could not fetch resource:
 - The network resource 'projects/kubernetes-scale/global/networks/gce-scale-cluster' is already being used by 'projects/kubernetes-scale/global/firewalls/k8s-d74235cd2876a5ee-node-http-hc'

Failed to delete network 'gce-scale-cluster'. Listing firewall-rules:
NAME                                     NETWORK            DIRECTION  PRIORITY  ALLOW      DENY  DISABLED
k8s-03b5ddb1bedf037e-node-http-hc        gce-scale-cluster  INGRESS    1000      tcp:10256        False
k8s-890f5c3003636ba3-node-http-hc        gce-scale-cluster  INGRESS    1000      tcp:10256        False
k8s-bee3ddd48f54bf54-node-http-hc        gce-scale-cluster  INGRESS    1000      tcp:10256        False
k8s-d74235cd2876a5ee-node-http-hc        gce-scale-cluster  INGRESS    1000      tcp:10256        False
k8s-ec37903be58ba35e-node-http-hc        gce-scale-cluster  INGRESS    1000      tcp:10256        False
... skipping 20 lines ...
W0806 12:05:05.296705    6859 loader.go:221] Config not found: /workspace/.kube/config
Property "contexts.kubernetes-scale_gce-scale-cluster" unset.
Cleared config for kubernetes-scale_gce-scale-cluster from /workspace/.kube/config
Done
2021/08/06 12:05:05 process.go:155: Step './hack/e2e-internal/e2e-down.sh' finished in 40.570138705s
2021/08/06 12:05:05 process.go:96: Saved XML output to /logs/artifacts/junit_runner.xml.
2021/08/06 12:05:05 main.go:327: Something went wrong: starting e2e cluster: error during ./hack/e2e-internal/e2e-up.sh: exit status 1
Traceback (most recent call last):
  File "/workspace/scenarios/kubernetes_e2e.py", line 723, in <module>
    main(parse_args())
  File "/workspace/scenarios/kubernetes_e2e.py", line 569, in main
    mode.start(runner_args)
  File "/workspace/scenarios/kubernetes_e2e.py", line 228, in start
... skipping 9 lines ...