PR: tsmetana: Kubelet: Fix volumemanager test race
Result: FAILURE
Tests: 1 failed / 7 succeeded
Started: 2019-02-03 13:15
Elapsed: 12m20s
Revision:
Builder: gke-prow-containerd-pool-99179761-vwcc
Refs: master:cdfb9126, 73404:45464f03
pod: ad0907e8-27b5-11e9-abec-0a580a6c013e
infra-commit: 40269330c
job-version: v1.14.0-alpha.2.232+71ca67581e3766
repo: k8s.io/kubernetes
repo-commit: 71ca67581e3766e94c119482b7208d75bee2c9c8
repos: {u'k8s.io/kubernetes': u'master:cdfb9126d334eea722e34f3a895904bb152d53f0,73404:45464f03494bf5c7efb09e44cc07e7125c748cb3', u'k8s.io/release': u'master'}
revision: v1.14.0-alpha.2.232+71ca67581e3766

Test Failures


Up (0.53s)

kops configuration failed: error during /workspace/kops create cluster --name e2e-121485-dba53.test-cncf-aws.k8s.io --ssh-public-key /workspace/.ssh/kube_aws_rsa.pub --node-count 4 --node-volume-size 48 --master-volume-size 48 --master-count 1 --zones eu-west-2a --master-size c4.large --kubernetes-version https://storage.googleapis.com/kubernetes-release-pull/ci/pull-kubernetes-e2e-kops-aws/v1.14.0-alpha.2.232+71ca67581e3766 --admin-access 104.154.40.230/32 --cloud aws --override cluster.spec.nodePortAccess=104.154.40.230/32: exit status 1
				from junit_runner.xml




Error lines from build-log.txt

... skipping 950 lines ...
I0203 13:26:40.889] sha1sum(kubernetes-test.tar.gz)=b46bd4355bd4eb23d82d7d840ba5b6433e5e496a
I0203 13:26:40.890] 
I0203 13:26:40.890] Extracting kubernetes-test.tar.gz into /go/src/k8s.io/kubernetes/kubernetes
W0203 13:26:48.150] 2019/02/03 13:26:48 process.go:155: Step '/workspace/get-kube.sh' finished in 12.952065108s
W0203 13:26:48.150] 2019/02/03 13:26:48 process.go:153: Running: /workspace/kops get clusters e2e-121485-dba53.test-cncf-aws.k8s.io
W0203 13:27:08.616] 
W0203 13:27:08.616] error reading cluster configuration "e2e-121485-dba53.test-cncf-aws.k8s.io": error reading s3://k8s-kops-prow/e2e-121485-dba53.test-cncf-aws.k8s.io/config: Unable to list AWS regions: AuthFailure: AWS was not able to validate the provided access credentials
W0203 13:27:08.616] 	status code: 401, request id: 79ba135f-32cd-4bbb-a0de-95762e08bb9c
W0203 13:27:08.622] 2019/02/03 13:27:08 process.go:155: Step '/workspace/kops get clusters e2e-121485-dba53.test-cncf-aws.k8s.io' finished in 20.47211408s
W0203 13:27:08.622] 2019/02/03 13:27:08 process.go:153: Running: /workspace/kops create cluster --name e2e-121485-dba53.test-cncf-aws.k8s.io --ssh-public-key /workspace/.ssh/kube_aws_rsa.pub --node-count 4 --node-volume-size 48 --master-volume-size 48 --master-count 1 --zones eu-west-2a --master-size c4.large --kubernetes-version https://storage.googleapis.com/kubernetes-release-pull/ci/pull-kubernetes-e2e-kops-aws/v1.14.0-alpha.2.232+71ca67581e3766 --admin-access 104.154.40.230/32 --cloud aws --override cluster.spec.nodePortAccess=104.154.40.230/32
W0203 13:27:08.750] I0203 13:27:08.749932    4971 create_cluster.go:1448] Using SSH public key: /workspace/.ssh/kube_aws_rsa.pub
W0203 13:27:09.148] 
W0203 13:27:09.148] error reading cluster configuration "e2e-121485-dba53.test-cncf-aws.k8s.io": error reading s3://k8s-kops-prow/e2e-121485-dba53.test-cncf-aws.k8s.io/config: Unable to list AWS regions: AuthFailure: AWS was not able to validate the provided access credentials
W0203 13:27:09.148] 	status code: 401, request id: e02e5314-8aad-42d7-9363-89b3b6607d65
W0203 13:27:09.153] 2019/02/03 13:27:09 process.go:155: Step '/workspace/kops create cluster --name e2e-121485-dba53.test-cncf-aws.k8s.io --ssh-public-key /workspace/.ssh/kube_aws_rsa.pub --node-count 4 --node-volume-size 48 --master-volume-size 48 --master-count 1 --zones eu-west-2a --master-size c4.large --kubernetes-version https://storage.googleapis.com/kubernetes-release-pull/ci/pull-kubernetes-e2e-kops-aws/v1.14.0-alpha.2.232+71ca67581e3766 --admin-access 104.154.40.230/32 --cloud aws --override cluster.spec.nodePortAccess=104.154.40.230/32' finished in 530.810299ms
W0203 13:27:09.178] 2019/02/03 13:27:09 process.go:153: Running: /workspace/kops export kubecfg e2e-121485-dba53.test-cncf-aws.k8s.io
W0203 13:27:09.681] 
W0203 13:27:09.682] error reading cluster configuration: error reading cluster configuration "e2e-121485-dba53.test-cncf-aws.k8s.io": error reading s3://k8s-kops-prow/e2e-121485-dba53.test-cncf-aws.k8s.io/config: Unable to list AWS regions: AuthFailure: AWS was not able to validate the provided access credentials
W0203 13:27:09.682] 	status code: 401, request id: eda81589-402b-4e72-bf68-329999c8223f
W0203 13:27:09.687] 2019/02/03 13:27:09 process.go:155: Step '/workspace/kops export kubecfg e2e-121485-dba53.test-cncf-aws.k8s.io' finished in 508.9665ms
W0203 13:27:09.687] 2019/02/03 13:27:09 process.go:153: Running: /workspace/kops get clusters e2e-121485-dba53.test-cncf-aws.k8s.io
W0203 13:27:10.336] 
W0203 13:27:10.337] error reading cluster configuration "e2e-121485-dba53.test-cncf-aws.k8s.io": error reading s3://k8s-kops-prow/e2e-121485-dba53.test-cncf-aws.k8s.io/config: Unable to list AWS regions: AuthFailure: AWS was not able to validate the provided access credentials
W0203 13:27:10.337] 	status code: 401, request id: 0be5c96b-2cf6-42e1-9956-771379181fee
W0203 13:27:10.341] 2019/02/03 13:27:10 process.go:155: Step '/workspace/kops get clusters e2e-121485-dba53.test-cncf-aws.k8s.io' finished in 654.226394ms
W0203 13:27:10.374] 2019/02/03 13:27:10 process.go:96: Saved XML output to /workspace/_artifacts/junit_runner.xml.
W0203 13:27:10.374] 2019/02/03 13:27:10 main.go:297: Something went wrong: starting e2e cluster: kops configuration failed: error during /workspace/kops create cluster --name e2e-121485-dba53.test-cncf-aws.k8s.io --ssh-public-key /workspace/.ssh/kube_aws_rsa.pub --node-count 4 --node-volume-size 48 --master-volume-size 48 --master-count 1 --zones eu-west-2a --master-size c4.large --kubernetes-version https://storage.googleapis.com/kubernetes-release-pull/ci/pull-kubernetes-e2e-kops-aws/v1.14.0-alpha.2.232+71ca67581e3766 --admin-access 104.154.40.230/32 --cloud aws --override cluster.spec.nodePortAccess=104.154.40.230/32: exit status 1
W0203 13:27:10.377] Traceback (most recent call last):
W0203 13:27:10.377]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 764, in <module>
W0203 13:27:10.394]     main(parse_args())
W0203 13:27:10.394]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 615, in main
W0203 13:27:10.394]     mode.start(runner_args)
W0203 13:27:10.394]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 262, in start
W0203 13:27:10.394]     check_env(env, self.command, *args)
W0203 13:27:10.395]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 111, in check_env
W0203 13:27:10.395]     subprocess.check_call(cmd, env=env)
W0203 13:27:10.395]   File "/usr/lib/python2.7/subprocess.py", line 186, in check_call
W0203 13:27:10.414]     raise CalledProcessError(retcode, cmd)
W0203 13:27:10.415] subprocess.CalledProcessError: Command '('/workspace/kops-e2e-runner.sh', '--dump=/workspace/_artifacts', '--gcp-service-account=/etc/service-account/service-account.json', '--build=bazel', '--stage=gs://kubernetes-release-pull/ci/pull-kubernetes-e2e-kops-aws', '--kops-kubernetes-version=https://storage.googleapis.com/kubernetes-release-pull/ci/pull-kubernetes-e2e-kops-aws/v1.14.0-alpha.2.232+71ca67581e3766', '--up', '--down', '--test', '--provider=aws', '--cluster=e2e-121485-dba53', '--gcp-network=e2e-121485-dba53', '--extract=local', '--ginkgo-parallel', '--test_args=--ginkgo.flakeAttempts=2 --ginkgo.skip=\\[Slow\\]|\\[Serial\\]|\\[Disruptive\\]|\\[Flaky\\]|\\[Feature:.+\\]|\\[HPA\\]|Dashboard|Services.*functioning.*NodePort', '--timeout=55m', '--kops-cluster=e2e-121485-dba53.test-cncf-aws.k8s.io', '--kops-zones=eu-west-2a', '--kops-state=s3://k8s-kops-prow/', '--kops-nodes=4', '--kops-ssh-key=/workspace/.ssh/kube_aws_rsa', '--kops-ssh-user=admin')' returned non-zero exit status 1
E0203 13:27:10.424] Command failed
I0203 13:27:10.424] process 540 exited with code 1 after 11.0m
E0203 13:27:10.425] FAIL: pull-kubernetes-e2e-kops-aws
I0203 13:27:10.425] Call:  gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json
W0203 13:27:23.771] Activated service account credentials for: [pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com]
I0203 13:27:23.807] process 5013 exited with code 0 after 0.2m
I0203 13:27:23.808] Call:  gcloud config get-value account
I0203 13:27:24.144] process 5025 exited with code 0 after 0.0m
I0203 13:27:24.144] Will upload results to gs://kubernetes-jenkins/pr-logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
I0203 13:27:24.145] Upload result and artifacts...
I0203 13:27:24.145] Gubernator results at https://gubernator.k8s.io/build/kubernetes-jenkins/pr-logs/pull/73404/pull-kubernetes-e2e-kops-aws/121485
I0203 13:27:24.145] Call:  gsutil ls gs://kubernetes-jenkins/pr-logs/pull/73404/pull-kubernetes-e2e-kops-aws/121485/artifacts
W0203 13:27:25.071] CommandException: One or more URLs matched no objects.
E0203 13:27:25.172] Command failed
I0203 13:27:25.172] process 5037 exited with code 1 after 0.0m
W0203 13:27:25.172] Remote dir gs://kubernetes-jenkins/pr-logs/pull/73404/pull-kubernetes-e2e-kops-aws/121485/artifacts not exist yet
I0203 13:27:25.173] Call:  gsutil -m -q -o GSUtil:use_magicfile=True cp -r -c -z log,txt,xml /workspace/_artifacts gs://kubernetes-jenkins/pr-logs/pull/73404/pull-kubernetes-e2e-kops-aws/121485/artifacts
I0203 13:27:27.352] process 5179 exited with code 0 after 0.0m
I0203 13:27:27.352] Call:  git rev-parse HEAD
I0203 13:27:27.356] process 5703 exited with code 0 after 0.0m
... skipping 21 lines ...