PR: tsmetana: Kubelet: Fix volumemanager test race
Result: FAILURE
Tests: 1 failed / 7 succeeded
Started: 2019-02-03 17:49
Elapsed: 12m14s
Builder: gke-prow-containerd-pool-99179761-8nmq
Refs: master:cdfb9126, 73404:45464f03
pod: 04ed3f7d-27dc-11e9-85c7-0a580a6c013f
infra-commit: 40269330c
job-version: v1.14.0-alpha.2.232+71ca67581e3766
repo: k8s.io/kubernetes
repo-commit: 71ca67581e3766e94c119482b7208d75bee2c9c8
repos: {u'k8s.io/kubernetes': u'master:cdfb9126d334eea722e34f3a895904bb152d53f0,73404:45464f03494bf5c7efb09e44cc07e7125c748cb3', u'k8s.io/release': u'master'}
revision: v1.14.0-alpha.2.232+71ca67581e3766

Test Failures


Up 0.59s

kops configuration failed: error during /workspace/kops create cluster --name e2e-121487-dba53.test-cncf-aws.k8s.io --ssh-public-key /workspace/.ssh/kube_aws_rsa.pub --node-count 4 --node-volume-size 48 --master-volume-size 48 --master-count 1 --zones ap-south-1a --master-size c4.large --kubernetes-version https://storage.googleapis.com/kubernetes-release-pull/ci/pull-kubernetes-e2e-kops-aws/v1.14.0-alpha.2.232+71ca67581e3766 --admin-access 35.184.43.102/32 --cloud aws --override cluster.spec.nodePortAccess=35.184.43.102/32: exit status 1
				from junit_runner.xml
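
The failing step is cluster bring-up ("Up"): kops exits non-zero because AWS rejects the job's credentials (AuthFailure, status code 401) while trying to list regions for the S3 state store, as the build log below shows. As a quick way to exercise the same credentials outside the job, here is a minimal sketch in Python, assuming boto3 is installed and the credentials come from the standard sources (AWS_ACCESS_KEY_ID/AWS_SECRET_ACCESS_KEY or ~/.aws/credentials); it is an illustration only, not part of the job.

import boto3
from botocore.exceptions import ClientError

def can_list_regions(region="ap-south-1"):
    # DescribeRegions is the call behind "Unable to list AWS regions" in the
    # log; an AuthFailure here reproduces the 401 the job reported.
    ec2 = boto3.client("ec2", region_name=region)
    try:
        regions = ec2.describe_regions()["Regions"]
        print("Credentials OK, %d regions visible" % len(regions))
        return True
    except ClientError as err:
        print("AWS rejected the credentials: %s" % err)
        return False

if __name__ == "__main__":
    can_list_regions()

An AuthFailure from this check would point at expired or missing CI credentials rather than at the PR under test.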


Error lines from build-log.txt

... skipping 965 lines ...
I0203 18:00:59.629] sha1sum(kubernetes-test.tar.gz)=b46bd4355bd4eb23d82d7d840ba5b6433e5e496a
I0203 18:00:59.630] 
I0203 18:00:59.630] Extracting kubernetes-test.tar.gz into /go/src/k8s.io/kubernetes/kubernetes
W0203 18:01:07.296] 2019/02/03 18:01:07 process.go:155: Step '/workspace/get-kube.sh' finished in 13.828027337s
W0203 18:01:07.297] 2019/02/03 18:01:07 process.go:153: Running: /workspace/kops get clusters e2e-121487-dba53.test-cncf-aws.k8s.io
W0203 18:01:42.929] 
W0203 18:01:42.929] error reading cluster configuration "e2e-121487-dba53.test-cncf-aws.k8s.io": error reading s3://k8s-kops-prow/e2e-121487-dba53.test-cncf-aws.k8s.io/config: Unable to list AWS regions: AuthFailure: AWS was not able to validate the provided access credentials
W0203 18:01:42.930] 	status code: 401, request id: ebd65560-d631-4399-a0d1-7a6bdca8111e
W0203 18:01:42.936] 2019/02/03 18:01:42 process.go:155: Step '/workspace/kops get clusters e2e-121487-dba53.test-cncf-aws.k8s.io' finished in 35.639457692s
W0203 18:01:42.937] 2019/02/03 18:01:42 process.go:153: Running: /workspace/kops create cluster --name e2e-121487-dba53.test-cncf-aws.k8s.io --ssh-public-key /workspace/.ssh/kube_aws_rsa.pub --node-count 4 --node-volume-size 48 --master-volume-size 48 --master-count 1 --zones ap-south-1a --master-size c4.large --kubernetes-version https://storage.googleapis.com/kubernetes-release-pull/ci/pull-kubernetes-e2e-kops-aws/v1.14.0-alpha.2.232+71ca67581e3766 --admin-access 35.184.43.102/32 --cloud aws --override cluster.spec.nodePortAccess=35.184.43.102/32
W0203 18:01:43.075] I0203 18:01:43.075212    4181 create_cluster.go:1448] Using SSH public key: /workspace/.ssh/kube_aws_rsa.pub
W0203 18:01:43.525] 
W0203 18:01:43.526] error reading cluster configuration "e2e-121487-dba53.test-cncf-aws.k8s.io": error reading s3://k8s-kops-prow/e2e-121487-dba53.test-cncf-aws.k8s.io/config: Unable to list AWS regions: AuthFailure: AWS was not able to validate the provided access credentials
W0203 18:01:43.526] 	status code: 401, request id: 332da1c9-fdbe-43ec-afb6-c2b0facb32fe
W0203 18:01:43.531] 2019/02/03 18:01:43 process.go:155: Step '/workspace/kops create cluster --name e2e-121487-dba53.test-cncf-aws.k8s.io --ssh-public-key /workspace/.ssh/kube_aws_rsa.pub --node-count 4 --node-volume-size 48 --master-volume-size 48 --master-count 1 --zones ap-south-1a --master-size c4.large --kubernetes-version https://storage.googleapis.com/kubernetes-release-pull/ci/pull-kubernetes-e2e-kops-aws/v1.14.0-alpha.2.232+71ca67581e3766 --admin-access 35.184.43.102/32 --cloud aws --override cluster.spec.nodePortAccess=35.184.43.102/32' finished in 594.833213ms
W0203 18:01:43.587] 2019/02/03 18:01:43 process.go:153: Running: /workspace/kops export kubecfg e2e-121487-dba53.test-cncf-aws.k8s.io
W0203 18:01:44.132] 
W0203 18:01:44.133] error reading cluster configuration: error reading cluster configuration "e2e-121487-dba53.test-cncf-aws.k8s.io": error reading s3://k8s-kops-prow/e2e-121487-dba53.test-cncf-aws.k8s.io/config: Unable to list AWS regions: AuthFailure: AWS was not able to validate the provided access credentials
W0203 18:01:44.133] 	status code: 401, request id: 4b6ed1e7-f20e-47fd-926d-0188ae3ad5c7
W0203 18:01:44.138] 2019/02/03 18:01:44 process.go:155: Step '/workspace/kops export kubecfg e2e-121487-dba53.test-cncf-aws.k8s.io' finished in 550.23157ms
W0203 18:01:44.138] 2019/02/03 18:01:44 process.go:153: Running: /workspace/kops get clusters e2e-121487-dba53.test-cncf-aws.k8s.io
W0203 18:01:44.708] 
W0203 18:01:44.708] error reading cluster configuration "e2e-121487-dba53.test-cncf-aws.k8s.io": error reading s3://k8s-kops-prow/e2e-121487-dba53.test-cncf-aws.k8s.io/config: Unable to list AWS regions: AuthFailure: AWS was not able to validate the provided access credentials
W0203 18:01:44.709] 	status code: 401, request id: 5b53e93a-5718-4387-a731-5bb904d0fdb8
W0203 18:01:44.713] 2019/02/03 18:01:44 process.go:155: Step '/workspace/kops get clusters e2e-121487-dba53.test-cncf-aws.k8s.io' finished in 575.809234ms
W0203 18:01:44.714] 2019/02/03 18:01:44 process.go:96: Saved XML output to /workspace/_artifacts/junit_runner.xml.
W0203 18:01:44.714] 2019/02/03 18:01:44 main.go:297: Something went wrong: starting e2e cluster: kops configuration failed: error during /workspace/kops create cluster --name e2e-121487-dba53.test-cncf-aws.k8s.io --ssh-public-key /workspace/.ssh/kube_aws_rsa.pub --node-count 4 --node-volume-size 48 --master-volume-size 48 --master-count 1 --zones ap-south-1a --master-size c4.large --kubernetes-version https://storage.googleapis.com/kubernetes-release-pull/ci/pull-kubernetes-e2e-kops-aws/v1.14.0-alpha.2.232+71ca67581e3766 --admin-access 35.184.43.102/32 --cloud aws --override cluster.spec.nodePortAccess=35.184.43.102/32: exit status 1
W0203 18:01:44.717] Traceback (most recent call last):
W0203 18:01:44.718]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 764, in <module>
W0203 18:01:44.735]     main(parse_args())
W0203 18:01:44.736]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 615, in main
W0203 18:01:44.736]     mode.start(runner_args)
W0203 18:01:44.736]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 262, in start
W0203 18:01:44.736]     check_env(env, self.command, *args)
W0203 18:01:44.736]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 111, in check_env
W0203 18:01:44.736]     subprocess.check_call(cmd, env=env)
W0203 18:01:44.736]   File "/usr/lib/python2.7/subprocess.py", line 186, in check_call
W0203 18:01:44.736]     raise CalledProcessError(retcode, cmd)
W0203 18:01:44.737] subprocess.CalledProcessError: Command '('/workspace/kops-e2e-runner.sh', '--dump=/workspace/_artifacts', '--gcp-service-account=/etc/service-account/service-account.json', '--build=bazel', '--stage=gs://kubernetes-release-pull/ci/pull-kubernetes-e2e-kops-aws', '--kops-kubernetes-version=https://storage.googleapis.com/kubernetes-release-pull/ci/pull-kubernetes-e2e-kops-aws/v1.14.0-alpha.2.232+71ca67581e3766', '--up', '--down', '--test', '--provider=aws', '--cluster=e2e-121487-dba53', '--gcp-network=e2e-121487-dba53', '--extract=local', '--ginkgo-parallel', '--test_args=--ginkgo.flakeAttempts=2 --ginkgo.skip=\\[Slow\\]|\\[Serial\\]|\\[Disruptive\\]|\\[Flaky\\]|\\[Feature:.+\\]|\\[HPA\\]|Dashboard|Services.*functioning.*NodePort', '--timeout=55m', '--kops-cluster=e2e-121487-dba53.test-cncf-aws.k8s.io', '--kops-zones=ap-south-1a', '--kops-state=s3://k8s-kops-prow/', '--kops-nodes=4', '--kops-ssh-key=/workspace/.ssh/kube_aws_rsa', '--kops-ssh-user=admin')' returned non-zero exit status 1
E0203 18:01:44.746] Command failed
I0203 18:01:44.746] process 540 exited with code 1 after 11.1m
E0203 18:01:44.747] FAIL: pull-kubernetes-e2e-kops-aws
I0203 18:01:44.747] Call:  gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json
W0203 18:01:45.244] Activated service account credentials for: [pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com]
I0203 18:01:45.305] process 4228 exited with code 0 after 0.0m
I0203 18:01:45.305] Call:  gcloud config get-value account
I0203 18:01:45.635] process 4240 exited with code 0 after 0.0m
I0203 18:01:45.635] Will upload results to gs://kubernetes-jenkins/pr-logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
I0203 18:01:45.635] Upload result and artifacts...
I0203 18:01:45.635] Gubernator results at https://gubernator.k8s.io/build/kubernetes-jenkins/pr-logs/pull/73404/pull-kubernetes-e2e-kops-aws/121487
I0203 18:01:45.636] Call:  gsutil ls gs://kubernetes-jenkins/pr-logs/pull/73404/pull-kubernetes-e2e-kops-aws/121487/artifacts
W0203 18:01:46.755] CommandException: One or more URLs matched no objects.
E0203 18:01:46.908] Command failed
I0203 18:01:46.908] process 4252 exited with code 1 after 0.0m
W0203 18:01:46.908] Remote dir gs://kubernetes-jenkins/pr-logs/pull/73404/pull-kubernetes-e2e-kops-aws/121487/artifacts not exist yet
I0203 18:01:46.909] Call:  gsutil -m -q -o GSUtil:use_magicfile=True cp -r -c -z log,txt,xml /workspace/_artifacts gs://kubernetes-jenkins/pr-logs/pull/73404/pull-kubernetes-e2e-kops-aws/121487/artifacts
I0203 18:01:48.696] process 4394 exited with code 0 after 0.0m
I0203 18:01:48.697] Call:  git rev-parse HEAD
I0203 18:01:48.701] process 4918 exited with code 0 after 0.0m
... skipping 21 lines ...
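
For context on the traceback above: the scenario runner kubernetes_e2e.py launches kops-e2e-runner.sh through check_env, which is essentially subprocess.check_call with an adjusted environment, so any non-zero exit from the kops steps surfaces as the CalledProcessError seen in the log. The sketch below illustrates that pattern; it is a simplification under that assumption, not the actual test-infra source.

import os
import subprocess

def check_env(env, *cmd):
    # Merge the extra variables over the current environment and run the
    # command; a non-zero exit raises CalledProcessError, which is what the
    # run above ends with ("Command failed", exit code 1).
    merged = dict(os.environ)
    merged.update(env)
    print("Run: %s" % " ".join(cmd))
    subprocess.check_call(list(cmd), env=merged)

# Hypothetical invocation, arguments abbreviated from the command shown in
# the traceback above:
# check_env({"KOPS_STATE_STORE": "s3://k8s-kops-prow/"},
#           "/workspace/kops-e2e-runner.sh", "--up", "--down", "--test")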