Result: FAILURE
Tests: 1 failed / 10 succeeded
Started: 2021-07-17 17:04
Elapsed: 1h46m
Revision: master
job-version: v1.22.0-beta.2.39+33aba7ee025dfd
kubetest-version:
revision: v1.22.0-beta.2.39+33aba7ee025dfd

Test Failures


kubetest ClusterLoaderV2 (16m9s)

error during /home/prow/go/src/k8s.io/perf-tests/run-e2e.sh cluster-loader2 --experimental-gcp-snapshot-prometheus-disk=true --experimental-prometheus-disk-snapshot-name=ci-kubernetes-e2e-gce-scale-performance-1416442963419992064 --nodes=5000 --prometheus-scrape-node-exporter --provider=gce --report-dir=/logs/artifacts --testconfig=testing/load/config.yaml --testconfig=testing/access-tokens/config.yaml --testoverrides=./testing/experiments/enable_restart_count_check.yaml --testoverrides=./testing/experiments/ignore_known_gce_container_restarts.yaml --testoverrides=./testing/overrides/5000_nodes.yaml: exit status 1
				from junit_runner.xml
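
The fatal error further down the log (clusterloader.go:293) shows the step failed while setting up the Prometheus stack, which timed out. A minimal triage sketch, assuming kubeconfig access to the test cluster while it is still up and assuming ClusterLoaderV2 deploys its Prometheus stack into a "monitoring" namespace (the namespace name is an assumption, not confirmed by this log):

# Hypothetical triage commands; the "monitoring" namespace name is an assumption.
kubectl get pods -n monitoring -o wide                                  # are prometheus/grafana pods Pending or crash-looping?
kubectl describe pods -n monitoring | grep -A 5 Events                  # surface scheduling or image-pull problems
kubectl get events -n monitoring --sort-by=.lastTimestamp | tail -n 20  # most recent events in the namespace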


Error lines from build-log.txt

... skipping 200 lines ...
k8s-fw-a0c51490604484e39b44e4be80284fb5
k8s-fw-a6b9c8bdd332e4924be9ed0b7c64a662
k8s-fw-a9ab1debe873946b6981dfe1be766d04
k8s-fw-a9b0a578b4ab64132aacc0c2d4d1b1af
k8s-fw-aeff8504a9d464d349abc73863398723
Deleting custom subnet...
ERROR: (gcloud.compute.networks.subnets.delete) Could not fetch resource:
 - The resource 'projects/kubernetes-scale/regions/us-east1/subnetworks/gce-scale-cluster-custom-subnet' was not found

ERROR: (gcloud.compute.networks.delete) Could not fetch resource:
 - The network resource 'projects/kubernetes-scale/global/networks/gce-scale-cluster' is already being used by 'projects/kubernetes-scale/global/firewalls/k8s-fw-a6b9c8bdd332e4924be9ed0b7c64a662'

Failed to delete network 'gce-scale-cluster'. Listing firewall-rules:
NAME                                     NETWORK            DIRECTION  PRIORITY  ALLOW      DENY  DISABLED
k8s-03b5ddb1bedf037e-node-http-hc        gce-scale-cluster  INGRESS    1000      tcp:10256        False
k8s-890f5c3003636ba3-node-http-hc        gce-scale-cluster  INGRESS    1000      tcp:10256        False
k8s-bee3ddd48f54bf54-node-http-hc        gce-scale-cluster  INGRESS    1000      tcp:10256        False
k8s-d74235cd2876a5ee-node-http-hc        gce-scale-cluster  INGRESS    1000      tcp:10256        False
k8s-ec37903be58ba35e-node-http-hc        gce-scale-cluster  INGRESS    1000      tcp:10256        False
... skipping 336 lines ...
Looking for address 'gce-scale-cluster-master-ip'
Looking for address 'gce-scale-cluster-master-internal-ip'
Using master: gce-scale-cluster-master (external IP: 34.74.78.205; internal IP: 10.40.0.2)
Waiting up to 300 seconds for cluster initialization.

  This will continually check to see if the API for kubernetes is reachable.
  This may time out if there was some uncaught error during start up.

Kubernetes cluster created.
Cluster "kubernetes-scale_gce-scale-cluster" set.
User "kubernetes-scale_gce-scale-cluster" set.
Context "kubernetes-scale_gce-scale-cluster" created.
Switched to context "kubernetes-scale_gce-scale-cluster".
... skipping 10023 lines ...
gce-scale-cluster-minion-group-zxr4     Ready                         <none>   4m40s   v1.22.0-beta.2.39+33aba7ee025dfd
gce-scale-cluster-minion-group-zzxq     Ready                         <none>   6m3s    v1.22.0-beta.2.39+33aba7ee025dfd
gce-scale-cluster-minion-heapster       Ready                         <none>   7m47s   v1.22.0-beta.2.39+33aba7ee025dfd
Warning: v1 ComponentStatus is deprecated in v1.19+
Validate output:
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE                         ERROR
scheduler            Healthy   ok                              
etcd-1               Healthy   {"health":"true","reason":""}   
etcd-0               Healthy   {"health":"true","reason":""}   
controller-manager   Healthy   ok                              
Cluster validation encountered some problems, but cluster should be in working order
...ignoring non-fatal errors in validate-cluster
Done, listing cluster services:

Kubernetes control plane is running at https://34.74.78.205
GLBCDefaultBackend is running at https://34.74.78.205/api/v1/namespaces/kube-system/services/default-http-backend:http/proxy
CoreDNS is running at https://34.74.78.205/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://34.74.78.205/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy
... skipping 5332 lines ...
      "eventTime": null,
      "reportingComponent": "",
      "reportingInstance": ""
    }
  ]
}
F0717 17:34:53.924470   21333 clusterloader.go:293] Error while setting up prometheus stack: timed out waiting for the condition
2021/07/17 17:34:53 process.go:155: Step '/home/prow/go/src/k8s.io/perf-tests/run-e2e.sh cluster-loader2 --experimental-gcp-snapshot-prometheus-disk=true --experimental-prometheus-disk-snapshot-name=ci-kubernetes-e2e-gce-scale-performance-1416442963419992064 --nodes=5000 --prometheus-scrape-node-exporter --provider=gce --report-dir=/logs/artifacts --testconfig=testing/load/config.yaml --testconfig=testing/access-tokens/config.yaml --testoverrides=./testing/experiments/enable_restart_count_check.yaml --testoverrides=./testing/experiments/ignore_known_gce_container_restarts.yaml --testoverrides=./testing/overrides/5000_nodes.yaml' finished in 16m9.132275041s
2021/07/17 17:34:53 e2e.go:541: Dumping logs from nodes to GCS directly at path: gs://sig-scalability-logs/ci-kubernetes-e2e-gce-scale-performance/1416442963419992064
2021/07/17 17:34:53 process.go:153: Running: /workspace/log-dump.sh /logs/artifacts gs://sig-scalability-logs/ci-kubernetes-e2e-gce-scale-performance/1416442963419992064
Checking for custom logdump instances, if any
Using gce provider, skipping check for LOG_DUMP_SSH_KEY and LOG_DUMP_SSH_USER
Project: kubernetes-scale
... skipping 11 lines ...
Specify --start=109436 in the next get-serial-port-output invocation to get only the new output starting from here.
scp: /var/log/cluster-autoscaler.log*: No such file or directory
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/kubelet.cov*: No such file or directory
scp: /var/log/cl2-**: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
Dumping logs from nodes to GCS directly at 'gs://sig-scalability-logs/ci-kubernetes-e2e-gce-scale-performance/1416442963419992064' using logexporter
Detecting nodes in the cluster
namespace/logexporter created
secret/google-service-account created
daemonset.apps/logexporter created
Listing marker files (gs://sig-scalability-logs/ci-kubernetes-e2e-gce-scale-performance/1416442963419992064/logexported-nodes-registry) for successful nodes...
... skipping 10508 lines ...
External IP address was not found; defaulting to using IAP tunneling.
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/node-problem-detector.log*: No such file or directory
scp: /var/log/kubelet.cov*: No such file or directory
scp: /var/log/cl2-**: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/node-problem-detector.log*: No such file or directory
scp: /var/log/kubelet.cov*: No such file or directory
scp: /var/log/cl2-**: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
External IP address was not found; defaulting to using IAP tunneling.
External IP address was not found; defaulting to using IAP tunneling.
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/node-problem-detector.log*: No such file or directory
scp: /var/log/kubelet.cov*: No such file or directory
scp: /var/log/cl2-**: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/node-problem-detector.log*: No such file or directory
scp: /var/log/kubelet.cov*: No such file or directory
scp: /var/log/cl2-**: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/node-problem-detector.log*: No such file or directory
scp: /var/log/kubelet.cov*: No such file or directory
scp: /var/log/cl2-**: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/node-problem-detector.log*: No such file or directory
scp: /var/log/kubelet.cov*: No such file or directory
scp: /var/log/cl2-**: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/node-problem-detector.log*: No such file or directory
scp: /var/log/kubelet.cov*: No such file or directory
scp: /var/log/cl2-**: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/node-problem-detector.log*: No such file or directory
scp: /var/log/kubelet.cov*: No such file or directory
scp: /var/log/cl2-**: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/node-problem-detector.log*: No such file or directory
scp: /var/log/kubelet.cov*: No such file or directory
scp: /var/log/cl2-**: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/node-problem-detector.log*: No such file or directory
scp: /var/log/kubelet.cov*: No such file or directory
scp: /var/log/cl2-**: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/node-problem-detector.log*: No such file or directory
scp: /var/log/kubelet.cov*: No such file or directory
scp: /var/log/cl2-**: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/node-problem-detector.log*: No such file or directory
scp: /var/log/kubelet.cov*: No such file or directory
scp: /var/log/cl2-**: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/node-problem-detector.log*: No such file or directory
scp: /var/log/kubelet.cov*: No such file or directory
scp: /var/log/cl2-**: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/node-problem-detector.log*: No such file or directory
scp: /var/log/kubelet.cov*: No such file or directory
scp: /var/log/cl2-**: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/node-problem-detector.log*: No such file or directory
scp: /var/log/kubelet.cov*: No such file or directory
scp: /var/log/cl2-**: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/node-problem-detector.log*: No such file or directory
scp: /var/log/kubelet.cov*: No such file or directory
scp: /var/log/cl2-**: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
Uploading '/tmp/tmp.n4ZIaiRT5J/logs' to 'gs://sig-scalability-logs/ci-kubernetes-e2e-gce-scale-performance/1416442963419992064'
Copying file:///tmp/tmp.n4ZIaiRT5J/1416442963419992064/gce-scale-cluster-minion-group-49tg/logexporter-kq9q8.log [Content-Type=application/octet-stream]...
Copying file:///tmp/tmp.n4ZIaiRT5J/1416442963419992064/gce-scale-cluster-minion-group-4-jmcz/logexporter-rd8gd.log [Content-Type=application/octet-stream]...
Copying file:///tmp/tmp.n4ZIaiRT5J/1416442963419992064/gce-scale-cluster-minion-group-3-2pt9/logexporter-qhgff.log [Content-Type=application/octet-stream]...
Copying file:///tmp/tmp.n4ZIaiRT5J/1416442963419992064/gce-scale-cluster-minion-group-4-3v1c/logexporter-hvftt.log [Content-Type=application/octet-stream]...
Copying file:///tmp/tmp.n4ZIaiRT5J/1416442963419992064/gce-scale-cluster-minion-group-4-gsv9/logexporter-9rrsg.log [Content-Type=application/octet-stream]...
... skipping 5308 lines ...
k8s-fw-a6b9c8bdd332e4924be9ed0b7c64a662
k8s-fw-a9ab1debe873946b6981dfe1be766d04
k8s-fw-a9b0a578b4ab64132aacc0c2d4d1b1af
k8s-fw-aeff8504a9d464d349abc73863398723
Deleting custom subnet...
Deleted [https://www.googleapis.com/compute/v1/projects/kubernetes-scale/regions/us-east1/subnetworks/gce-scale-cluster-custom-subnet].
ERROR: (gcloud.compute.networks.delete) Could not fetch resource:
 - The network resource 'projects/kubernetes-scale/global/networks/gce-scale-cluster' is already being used by 'projects/kubernetes-scale/global/firewalls/k8s-fw-a6b9c8bdd332e4924be9ed0b7c64a662'

Failed to delete network 'gce-scale-cluster'. Listing firewall-rules:
NAME                                     NETWORK            DIRECTION  PRIORITY  ALLOW      DENY  DISABLED
k8s-03b5ddb1bedf037e-node-http-hc        gce-scale-cluster  INGRESS    1000      tcp:10256        False
k8s-890f5c3003636ba3-node-http-hc        gce-scale-cluster  INGRESS    1000      tcp:10256        False
k8s-bee3ddd48f54bf54-node-http-hc        gce-scale-cluster  INGRESS    1000      tcp:10256        False
k8s-d74235cd2876a5ee-node-http-hc        gce-scale-cluster  INGRESS    1000      tcp:10256        False
k8s-ec37903be58ba35e-node-http-hc        gce-scale-cluster  INGRESS    1000      tcp:10256        False
... skipping 12 lines ...
Property "users.kubernetes-scale_gce-scale-cluster-basic-auth" unset.
Property "contexts.kubernetes-scale_gce-scale-cluster" unset.
Cleared config for kubernetes-scale_gce-scale-cluster from /workspace/.kube/config
Done
2021/07/17 18:50:48 process.go:155: Step './hack/e2e-internal/e2e-down.sh' finished in 29m28.395546174s
2021/07/17 18:50:48 process.go:96: Saved XML output to /logs/artifacts/junit_runner.xml.
2021/07/17 18:50:48 main.go:327: Something went wrong: encountered 1 errors: [error during /home/prow/go/src/k8s.io/perf-tests/run-e2e.sh cluster-loader2 --experimental-gcp-snapshot-prometheus-disk=true --experimental-prometheus-disk-snapshot-name=ci-kubernetes-e2e-gce-scale-performance-1416442963419992064 --nodes=5000 --prometheus-scrape-node-exporter --provider=gce --report-dir=/logs/artifacts --testconfig=testing/load/config.yaml --testconfig=testing/access-tokens/config.yaml --testoverrides=./testing/experiments/enable_restart_count_check.yaml --testoverrides=./testing/experiments/ignore_known_gce_container_restarts.yaml --testoverrides=./testing/overrides/5000_nodes.yaml: exit status 1]
Traceback (most recent call last):
  File "/workspace/scenarios/kubernetes_e2e.py", line 723, in <module>
    main(parse_args())
  File "/workspace/scenarios/kubernetes_e2e.py", line 569, in main
    mode.start(runner_args)
  File "/workspace/scenarios/kubernetes_e2e.py", line 228, in start
... skipping 9 lines ...