PR ereslibre: kubeadm: Allow certain certs/keys to be missing on the secret
Result: FAILURE
Tests: 1 failed / 19 succeeded
Started: 2019-03-15 22:35
Elapsed: 47m16s
Revision:
Builder: gke-prow-containerd-pool-99179761-02c3
Refs: master:df2094b3, 75415:bc26c69b
pod: 819a1988-4772-11e9-be52-0a580a6c0982
infra-commit: 7df5b975d
job-version: v1.15.0-alpha.0.1230+61627c43123641
repo: k8s.io/kubernetes
repo-commit: 61627c431236417adc9a0bc929ef231d1c9856d7
repos: {u'k8s.io/kubernetes': u'master:df2094b3d728bd58c5f23b41add109e32fc7c301,75415:bc26c69b6149379a3800da8447445caf828a9983', u'k8s.io/perf-tests': u'master', u'k8s.io/release': u'master'}
revision: v1.15.0-alpha.0.1230+61627c43123641

Test Failures


ClusterLoaderV2 55s

error during /go/src/k8s.io/perf-tests/run-e2e.sh cluster-loader2 --nodes=500 --provider=kubemark --report-dir=/workspace/_artifacts --testconfig=testing/density/config.yaml --testconfig=testing/load/config.yaml --testoverrides=./testing/load/kubemark/500_nodes/override.yaml: exit status 255
				from junit_runner.xml
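
The failure above corresponds to clusterloader2 aborting during Prometheus stack setup: the build log below shows it exiting at 23:11:49 with "Error while setting up prometheus stack: unmarshaling error: yaml: line 13: could not find expected ':'", right after applying the kubemark kube-apiserver-endpoints.yaml manifest. The offending file itself is not included in this log, so the Go sketch below is only an illustration: it uses gopkg.in/yaml.v2 and a made-up manifest fragment to show the class of YAML mistake that yields this exact parser message (a mapping key whose ':' is missing, followed by another key on the next line).

package main

import (
	"fmt"

	yaml "gopkg.in/yaml.v2"
)

// Minimal sketch, assuming a go-yaml based decoder (gopkg.in/yaml.v2 here).
// The manifest fragment is hypothetical and is NOT the file from this job;
// it only reproduces the kind of error reported in the build log.
func main() {
	manifest := []byte(`apiVersion: v1
kind: Endpoints
metadata:
  name kube-apiserver
  namespace: kubemark
`)
	var obj interface{}
	if err := yaml.Unmarshal(manifest, &obj); err != nil {
		// err reads like: yaml: line <n>: could not find expected ':'
		fmt.Println(err)
	}
}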




Error lines from build-log.txt

... skipping 1164 lines ...
W0315 23:05:21.379] Trying to find master named 'e2e-75415-ac87c-master'
W0315 23:05:21.379] Looking for address 'e2e-75415-ac87c-master-ip'
W0315 23:05:22.042] Using master: e2e-75415-ac87c-master (external IP: 35.243.188.88)
I0315 23:05:22.142] Waiting up to 300 seconds for cluster initialization.
I0315 23:05:22.143] 
I0315 23:05:22.143]   This will continually check to see if the API for kubernetes is reachable.
I0315 23:05:22.143]   This may time out if there was some uncaught error during start up.
I0315 23:05:22.143] 
I0315 23:05:38.309] ...Kubernetes cluster created.
I0315 23:05:38.443] Cluster "k8s-presubmit-scale_e2e-75415-ac87c" set.
I0315 23:05:38.573] User "k8s-presubmit-scale_e2e-75415-ac87c" set.
I0315 23:05:38.704] Context "k8s-presubmit-scale_e2e-75415-ac87c" created.
I0315 23:05:38.833] Switched to context "k8s-presubmit-scale_e2e-75415-ac87c".
... skipping 22 lines ...
I0315 23:06:18.586] e2e-75415-ac87c-minion-group-7vtv   Ready                      <none>   10s   v1.15.0-alpha.0.1230+61627c43123641
I0315 23:06:18.586] e2e-75415-ac87c-minion-group-gmph   Ready                      <none>   20s   v1.15.0-alpha.0.1230+61627c43123641
I0315 23:06:18.587] e2e-75415-ac87c-minion-group-gv0h   Ready                      <none>   20s   v1.15.0-alpha.0.1230+61627c43123641
I0315 23:06:18.587] e2e-75415-ac87c-minion-group-nnbt   Ready                      <none>   14s   v1.15.0-alpha.0.1230+61627c43123641
I0315 23:06:18.587] e2e-75415-ac87c-minion-group-vq4x   Ready                      <none>   17s   v1.15.0-alpha.0.1230+61627c43123641
I0315 23:06:18.890] Validate output:
I0315 23:06:19.171] NAME                 STATUS    MESSAGE             ERROR
I0315 23:06:19.171] scheduler            Healthy   ok                  
I0315 23:06:19.171] etcd-0               Healthy   {"health":"true"}   
I0315 23:06:19.171] controller-manager   Healthy   ok                  
I0315 23:06:19.171] etcd-1               Healthy   {"health":"true"}   
I0315 23:06:19.176] Cluster validation succeeded
W0315 23:06:19.277] Done, listing cluster services:
... skipping 60 lines ...
W0315 23:06:44.771] 2019/03/15 23:06:44 process.go:155: Step './hack/e2e-internal/e2e-up.sh' finished in 4m27.690636272s
W0315 23:06:44.771] 2019/03/15 23:06:44 process.go:153: Running: ./cluster/kubectl.sh --match-server-version=false version
W0315 23:06:45.050] 2019/03/15 23:06:45 process.go:155: Step './cluster/kubectl.sh --match-server-version=false version' finished in 296.715805ms
W0315 23:06:45.050] 2019/03/15 23:06:45 process.go:153: Running: ./cluster/kubectl.sh --match-server-version=false get nodes -oyaml
W0315 23:06:45.434] 2019/03/15 23:06:45 process.go:155: Step './cluster/kubectl.sh --match-server-version=false get nodes -oyaml' finished in 383.938052ms
W0315 23:06:45.434] 2019/03/15 23:06:45 process.go:153: Running: ./test/kubemark/stop-kubemark.sh
W0315 23:06:46.811] ERROR: (gcloud.compute.instances.delete) Could not fetch resource:
W0315 23:06:46.811]  - The resource 'projects/k8s-presubmit-scale/zones/us-east1-b/instances/e2e-75415-ac87c-kubemark-master' was not found
W0315 23:06:46.811] 
W0315 23:06:47.620] ERROR: (gcloud.compute.disks.delete) Could not fetch resource:
W0315 23:06:47.620]  - The resource 'projects/k8s-presubmit-scale/zones/us-east1-b/disks/e2e-75415-ac87c-kubemark-master-pd' was not found
W0315 23:06:47.620] 
W0315 23:06:49.308] ERROR: (gcloud.compute.addresses.delete) Could not fetch resource:
W0315 23:06:49.308]  - The resource 'projects/k8s-presubmit-scale/regions/us-east1/addresses/e2e-75415-ac87c-kubemark-master-ip' was not found
W0315 23:06:49.308] 
W0315 23:06:49.953] ERROR: (gcloud.compute.firewall-rules.delete) Could not fetch resource:
W0315 23:06:49.953]  - The resource 'projects/k8s-presubmit-scale/global/firewalls/e2e-75415-ac87c-kubemark-master-https' was not found
W0315 23:06:49.953] 
W0315 23:06:50.010] 2019/03/15 23:06:50 process.go:155: Step './test/kubemark/stop-kubemark.sh' finished in 4.576068562s
W0315 23:06:50.010] 2019/03/15 23:06:50 process.go:153: Running: ./hack/e2e-internal/e2e-status.sh
W0315 23:06:50.063] Project: k8s-presubmit-scale
W0315 23:06:50.063] Network Project: k8s-presubmit-scale
W0315 23:06:50.063] Zone: us-east1-b
I0315 23:06:50.441] Client Version: version.Info{Major:"1", Minor:"15+", GitVersion:"v1.15.0-alpha.0.1230+61627c43123641", GitCommit:"61627c431236417adc9a0bc929ef231d1c9856d7", GitTreeState:"clean", BuildDate:"2019-03-15T19:40:58Z", GoVersion:"go1.12", Compiler:"gc", Platform:"linux/amd64"}
I0315 23:06:50.442] Server Version: version.Info{Major:"1", Minor:"15+", GitVersion:"v1.15.0-alpha.0.1230+61627c43123641", GitCommit:"61627c431236417adc9a0bc929ef231d1c9856d7", GitTreeState:"clean", BuildDate:"2019-03-15T19:40:58Z", GoVersion:"go1.12", Compiler:"gc", Platform:"linux/amd64"}
W0315 23:06:50.542] 2019/03/15 23:06:50 process.go:155: Step './hack/e2e-internal/e2e-status.sh' finished in 436.853844ms
W0315 23:06:50.542] 2019/03/15 23:06:50 process.go:153: Running: ./test/kubemark/start-kubemark.sh
W0315 23:06:51.201] ERROR: (gcloud.compute.addresses.describe) Could not fetch resource:
W0315 23:06:51.201]  - The resource 'projects/k8s-presubmit-scale/regions/us-east1/addresses/e2e-75415-ac87c-kubemark-master-ip' was not found
W0315 23:06:51.201] 
I0315 23:06:55.896] Created [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale/regions/us-east1/addresses/e2e-75415-ac87c-kubemark-master-ip].
I0315 23:06:56.141] Succeeded to gcloud compute addresses.
I0315 23:06:56.907] Generating certs for alternate-names: IP:104.196.111.150,IP:10.0.0.1,DNS:kubernetes,DNS:kubernetes.default,DNS:kubernetes.default.svc,DNS:kubernetes.default.svc.cluster.local,DNS:e2e-75415-ac87c-kubemark-master
I0315 23:06:58.728] Generated PKI authentication data for kubemark.
... skipping 659 lines ...
W0315 23:11:49.045] I0315 23:11:49.044782   89214 prometheus.go:145] Applying /go/src/k8s.io/perf-tests/clusterloader2/pkg/prometheus/manifests/prometheus-roleSpecificNamespaces.yaml
W0315 23:11:49.168] I0315 23:11:49.168700   89214 prometheus.go:145] Applying /go/src/k8s.io/perf-tests/clusterloader2/pkg/prometheus/manifests/prometheus-service.yaml
W0315 23:11:49.214] I0315 23:11:49.214245   89214 prometheus.go:145] Applying /go/src/k8s.io/perf-tests/clusterloader2/pkg/prometheus/manifests/prometheus-serviceAccount.yaml
W0315 23:11:49.256] I0315 23:11:49.255716   89214 prometheus.go:145] Applying /go/src/k8s.io/perf-tests/clusterloader2/pkg/prometheus/manifests/prometheus-serviceMonitor.yaml
W0315 23:11:49.297] I0315 23:11:49.297326   89214 prometheus.go:172] Exposing kube-apiserver metrics in kubemark cluster
W0315 23:11:49.497] I0315 23:11:49.497311   89214 prometheus.go:145] Applying /go/src/k8s.io/perf-tests/clusterloader2/pkg/prometheus/manifests/kubemark/kube-apiserver-endpoints.yaml
W0315 23:11:49.499] F0315 23:11:49.498849   89214 clusterloader.go:203] Error while setting up prometheus stack: unmarshaling error: yaml: line 13: could not find expected ':'
W0315 23:11:49.500] goroutine 1 [running]:
W0315 23:11:49.501] k8s.io/perf-tests/clusterloader2/vendor/k8s.io/klog.stacks(0xc000846300, 0xc00045c000, 0x9c, 0x1b7)
W0315 23:11:49.501] 	/go/src/k8s.io/perf-tests/clusterloader2/vendor/k8s.io/klog/klog.go:830 +0xb1
W0315 23:11:49.501] k8s.io/perf-tests/clusterloader2/vendor/k8s.io/klog.(*loggingT).output(0x25b5be0, 0xc000000003, 0xc0003e2000, 0x2528909, 0x10, 0xcb, 0x0)
W0315 23:11:49.501] 	/go/src/k8s.io/perf-tests/clusterloader2/vendor/k8s.io/klog/klog.go:781 +0x25e
W0315 23:11:49.501] k8s.io/perf-tests/clusterloader2/vendor/k8s.io/klog.(*loggingT).printf(0x25b5be0, 0x3, 0x16175e4, 0x2b, 0xc000aefd50, 0x1, 0x1)
... skipping 34 lines ...
W0315 23:12:21.512] scp: /var/log/kube-apiserver-audit.log*: No such file or directory
W0315 23:12:21.819] scp: /var/log/glbc.log*: No such file or directory
W0315 23:12:21.819] scp: /var/log/cluster-autoscaler.log*: No such file or directory
W0315 23:12:21.894] scp: /var/log/fluentd.log*: No such file or directory
W0315 23:12:21.895] scp: /var/log/kubelet.cov*: No such file or directory
W0315 23:12:21.895] scp: /var/log/startupscript.log*: No such file or directory
W0315 23:12:21.963] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
W0315 23:12:22.031] 2019/03/15 23:12:22 process.go:155: Step './test/kubemark/master-log-dump.sh /workspace/_artifacts' finished in 32.396859172s
W0315 23:12:22.032] 2019/03/15 23:12:22 process.go:153: Running: ./test/kubemark/stop-kubemark.sh
I0315 23:12:22.132] Skipping dumping of node logs
W0315 23:12:23.118] scp: /var/log/cluster-autoscaler.log*: No such file or directory
W0315 23:12:23.193] scp: /var/log/fluentd.log*: No such file or directory
W0315 23:12:23.194] scp: /var/log/kubelet.cov*: No such file or directory
W0315 23:12:23.194] scp: /var/log/startupscript.log*: No such file or directory
W0315 23:12:23.199] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
I0315 23:12:23.299] Dumping logs from nodes locally to '/workspace/_artifacts'
I0315 23:12:23.299] Detecting nodes in the cluster
I0315 23:12:57.229] Changing logfiles to be world-readable for download
I0315 23:12:57.288] Changing logfiles to be world-readable for download
I0315 23:12:57.470] Changing logfiles to be world-readable for download
I0315 23:12:58.124] Changing logfiles to be world-readable for download
... skipping 40 lines ...
W0315 23:13:04.060] scp: /var/log/node-problem-detector.log*: No such file or directory
W0315 23:13:04.060] scp: /var/log/kubelet.cov*: No such file or directory
W0315 23:13:04.485] scp: /var/log/fluentd.log*: No such file or directory
W0315 23:13:04.486] scp: /var/log/node-problem-detector.log*: No such file or directory
W0315 23:13:04.486] scp: /var/log/kubelet.cov*: No such file or directory
W0315 23:13:17.567] scp: /var/log/startupscript.log*: No such file or directory
W0315 23:13:17.571] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
W0315 23:13:18.425] scp: /var/log/startupscript.log*: No such file or directory
W0315 23:13:18.429] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
W0315 23:13:19.190] scp: /var/log/startupscript.log*: No such file or directory
W0315 23:13:19.195] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
W0315 23:13:20.004] scp: /var/log/startupscript.log*: No such file or directory
W0315 23:13:20.008] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
W0315 23:13:20.056] scp: /var/log/startupscript.log*: No such file or directory
W0315 23:13:20.061] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
W0315 23:13:20.294] scp: /var/log/startupscript.log*: No such file or directory
W0315 23:13:20.299] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
W0315 23:13:20.808] scp: /var/log/startupscript.log*: No such file or directory
W0315 23:13:20.812] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
W0315 23:13:23.973] INSTANCE_GROUPS=e2e-75415-ac87c-minion-group
W0315 23:13:23.973] NODE_NAMES=e2e-75415-ac87c-minion-group-0bpc e2e-75415-ac87c-minion-group-5982 e2e-75415-ac87c-minion-group-7vtv e2e-75415-ac87c-minion-group-gmph e2e-75415-ac87c-minion-group-gv0h e2e-75415-ac87c-minion-group-nnbt e2e-75415-ac87c-minion-group-vq4x
I0315 23:13:24.810] Failures for e2e-75415-ac87c-minion-group
W0315 23:13:26.048] 2019/03/15 23:13:26 process.go:155: Step './cluster/log-dump/log-dump.sh /workspace/_artifacts' finished in 1m36.413575804s
W0315 23:13:26.048] 2019/03/15 23:13:26 process.go:153: Running: ./hack/e2e-internal/e2e-down.sh
W0315 23:13:26.098] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale/zones/us-east1-b/instances/e2e-75415-ac87c-kubemark-master].
... skipping 19 lines ...
I0315 23:13:52.687] Bringing down cluster
W0315 23:13:54.808] Deleting Managed Instance Group...
W0315 23:17:06.029] ........................................Deleted [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale/zones/us-east1-b/instanceGroupManagers/e2e-75415-ac87c-minion-group].
W0315 23:17:06.030] done.
W0315 23:17:11.312] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale/global/instanceTemplates/e2e-75415-ac87c-minion-template].
W0315 23:17:17.557] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale/global/instanceTemplates/e2e-75415-ac87c-windows-node-template].
I0315 23:17:35.030] {"message":"Internal Server Error"}Removing etcd replica, name: e2e-75415-ac87c-master, port: 2379, result: 0
I0315 23:17:36.454] {"message":"Internal Server Error"}Removing etcd replica, name: e2e-75415-ac87c-master, port: 4002, result: 0
W0315 23:17:42.913] Updated [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale/zones/us-east1-b/instances/e2e-75415-ac87c-master].
W0315 23:20:02.625] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale/zones/us-east1-b/instances/e2e-75415-ac87c-master].
W0315 23:20:24.402] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale/global/firewalls/e2e-75415-ac87c-master-https].
W0315 23:20:30.991] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale/global/firewalls/e2e-75415-ac87c-minion-all].
W0315 23:20:35.241] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale/global/firewalls/e2e-75415-ac87c-master-etcd].
W0315 23:20:53.335] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale/regions/us-east1/addresses/e2e-75415-ac87c-master-ip].
... skipping 9 lines ...
I0315 23:22:26.640] Property "users.k8s-presubmit-scale_e2e-75415-ac87c-basic-auth" unset.
I0315 23:22:26.769] Property "contexts.k8s-presubmit-scale_e2e-75415-ac87c" unset.
I0315 23:22:26.774] Cleared config for k8s-presubmit-scale_e2e-75415-ac87c from /go/src/k8s.io/kubernetes/kubernetes/test/kubemark/resources/kubeconfig.kubemark
I0315 23:22:26.774] Done
W0315 23:22:26.874] 2019/03/15 23:22:26 process.go:155: Step './hack/e2e-internal/e2e-down.sh' finished in 9m0.728401917s
W0315 23:22:26.875] 2019/03/15 23:22:26 process.go:96: Saved XML output to /workspace/_artifacts/junit_runner.xml.
W0315 23:22:26.875] 2019/03/15 23:22:26 main.go:307: Something went wrong: encountered 1 errors: [error during /go/src/k8s.io/perf-tests/run-e2e.sh cluster-loader2 --nodes=500 --provider=kubemark --report-dir=/workspace/_artifacts --testconfig=testing/density/config.yaml --testconfig=testing/load/config.yaml --testoverrides=./testing/load/kubemark/500_nodes/override.yaml: exit status 255]
W0315 23:22:26.875] Traceback (most recent call last):
W0315 23:22:26.875]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 764, in <module>
W0315 23:22:26.884]     main(parse_args())
W0315 23:22:26.884]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 615, in main
W0315 23:22:26.885]     mode.start(runner_args)
W0315 23:22:26.885]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 262, in start
W0315 23:22:26.885]     check_env(env, self.command, *args)
W0315 23:22:26.885]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 111, in check_env
W0315 23:22:26.885]     subprocess.check_call(cmd, env=env)
W0315 23:22:26.885]   File "/usr/lib/python2.7/subprocess.py", line 186, in check_call
W0315 23:22:26.904]     raise CalledProcessError(retcode, cmd)
W0315 23:22:26.905] subprocess.CalledProcessError: Command '('kubetest', '--dump=/workspace/_artifacts', '--gcp-service-account=/etc/service-account/service-account.json', '--build=bazel', '--stage=gs://kubernetes-release-pull/ci/pull-kubernetes-kubemark-e2e-gce-big', '--up', '--down', '--provider=gce', '--cluster=e2e-75415-ac87c', '--gcp-network=e2e-75415-ac87c', '--extract=local', '--gcp-master-size=n1-standard-4', '--gcp-node-size=n1-standard-8', '--gcp-nodes=7', '--gcp-project=k8s-presubmit-scale', '--gcp-zone=us-east1-b', '--kubemark', '--kubemark-nodes=500', '--test_args=--ginkgo.focus=xxxx', '--test-cmd=/go/src/k8s.io/perf-tests/run-e2e.sh', '--test-cmd-args=cluster-loader2', '--test-cmd-args=--nodes=500', '--test-cmd-args=--provider=kubemark', '--test-cmd-args=--report-dir=/workspace/_artifacts', '--test-cmd-args=--testconfig=testing/density/config.yaml', '--test-cmd-args=--testconfig=testing/load/config.yaml', '--test-cmd-args=--testoverrides=./testing/load/kubemark/500_nodes/override.yaml', '--test-cmd-name=ClusterLoaderV2', '--timeout=100m')' returned non-zero exit status 1
E0315 23:22:26.920] Command failed
I0315 23:22:26.920] process 734 exited with code 1 after 44.9m
E0315 23:22:26.921] FAIL: pull-kubernetes-kubemark-e2e-gce-big
I0315 23:22:26.921] Call:  gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json
W0315 23:22:27.448] Activated service account credentials for: [pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com]
I0315 23:22:27.539] process 93420 exited with code 0 after 0.0m
I0315 23:22:27.539] Call:  gcloud config get-value account
I0315 23:22:27.871] process 93432 exited with code 0 after 0.0m
I0315 23:22:27.871] Will upload results to gs://kubernetes-jenkins/pr-logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
I0315 23:22:27.871] Upload result and artifacts...
I0315 23:22:27.871] Gubernator results at https://gubernator.k8s.io/build/kubernetes-jenkins/pr-logs/pull/75415/pull-kubernetes-kubemark-e2e-gce-big/41508
I0315 23:22:27.872] Call:  gsutil ls gs://kubernetes-jenkins/pr-logs/pull/75415/pull-kubernetes-kubemark-e2e-gce-big/41508/artifacts
W0315 23:22:28.850] CommandException: One or more URLs matched no objects.
E0315 23:22:28.970] Command failed
I0315 23:22:28.970] process 93444 exited with code 1 after 0.0m
W0315 23:22:28.970] Remote dir gs://kubernetes-jenkins/pr-logs/pull/75415/pull-kubernetes-kubemark-e2e-gce-big/41508/artifacts not exist yet
I0315 23:22:28.971] Call:  gsutil -m -q -o GSUtil:use_magicfile=True cp -r -c -z log,txt,xml /workspace/_artifacts gs://kubernetes-jenkins/pr-logs/pull/75415/pull-kubernetes-kubemark-e2e-gce-big/41508/artifacts
I0315 23:22:42.977] process 93586 exited with code 0 after 0.2m
I0315 23:22:42.978] Call:  git rev-parse HEAD
I0315 23:22:42.982] process 95788 exited with code 0 after 0.0m
... skipping 21 lines ...