PR: tsmetana: Kubelet: Fix volumemanager test race
Result: FAILURE
Tests: 1 failed / 7 succeeded
Started: 2019-02-02 17:39
Elapsed: 12m15s
Builder: gke-prow-containerd-pool-99179761-t05g
Refs: master:0c2613c7, 73404:45464f03
pod: 65da5a32-2711-11e9-a735-0a580a6c013f
infra-commit: c5956ef61
job-version: v1.14.0-alpha.2.225+e48ac9f04ca0bc
pod: 65da5a32-2711-11e9-a735-0a580a6c013f
repo: k8s.io/kubernetes
repo-commit: e48ac9f04ca0bcecf79a9dc7f17c63324e037b10
repos: {'k8s.io/kubernetes': 'master:0c2613c71a87f850190a8c1084d4de1e18336c07,73404:45464f03494bf5c7efb09e44cc07e7125c748cb3', 'k8s.io/release': 'master'}
revision: v1.14.0-alpha.2.225+e48ac9f04ca0bc

Test Failures


Up 0.56s

kops configuration failed: error during /workspace/kops create cluster --name e2e-121477-dba53.test-cncf-aws.k8s.io --ssh-public-key /workspace/.ssh/kube_aws_rsa.pub --node-count 4 --node-volume-size 48 --master-volume-size 48 --master-count 1 --zones ca-central-1b --master-size c4.large --kubernetes-version https://storage.googleapis.com/kubernetes-release-pull/ci/pull-kubernetes-e2e-kops-aws/v1.14.0-alpha.2.225+e48ac9f04ca0bc --admin-access 35.192.2.195/32 --cloud aws --override cluster.spec.nodePortAccess=35.192.2.195/32: exit status 1
				from junit_runner.xml
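The build log below shows that the `kops create cluster` exit status 1 is a symptom, not the cause: every kops invocation in this run fails reading the S3 state store with the same `AuthFailure` (status code 401), meaning the job's AWS credentials could not be validated. As a minimal sketch (the helper name and sample string are hypothetical, not part of the job tooling), a few lines of Python can confirm that all the failures in a build log share that one credential error:

```python
import re

# Hypothetical helper: summarize AuthFailure lines from a build log so
# repeated credential errors are easy to spot at a glance.
AUTH_RE = re.compile(r"AuthFailure: (?P<msg>[^\n]*)")
REQ_RE = re.compile(r"request id: (?P<rid>[0-9a-f-]+)")

def summarize_auth_failures(log_text):
    """Return (count, unique messages, request ids) for AuthFailure lines."""
    messages = [m.group("msg").strip() for m in AUTH_RE.finditer(log_text)]
    request_ids = [m.group("rid") for m in REQ_RE.finditer(log_text)]
    return len(messages), sorted(set(messages)), request_ids

# Sample fragment modeled on the error lines in this run's build-log.txt.
sample = (
    "error reading s3://bucket/config: Unable to list AWS regions: "
    "AuthFailure: AWS was not able to validate the provided access credentials\n"
    "\tstatus code: 401, request id: 8bf74615-2fde-4646-b526-491cbec767b3\n"
)
count, unique, rids = summarize_auth_failures(sample)
print(count, unique, rids)
```

If every failure collapses to one unique message and a handful of request ids, as here, the fix lies with the job's AWS credentials rather than with the PR under test.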




Error lines from build-log.txt

... skipping 960 lines ...
I0202 17:50:41.181] sha1sum(kubernetes-test.tar.gz)=e57f0e5bf73e8e5025254e47bbc1f43a67890423
I0202 17:50:41.181] 
I0202 17:50:41.181] Extracting kubernetes-test.tar.gz into /go/src/k8s.io/kubernetes/kubernetes
W0202 17:50:48.955] 2019/02/02 17:50:48 process.go:155: Step '/workspace/get-kube.sh' finished in 13.760424295s
W0202 17:50:48.955] 2019/02/02 17:50:48 process.go:153: Running: /workspace/kops get clusters e2e-121477-dba53.test-cncf-aws.k8s.io
W0202 17:51:05.943] 
W0202 17:51:05.943] error reading cluster configuration "e2e-121477-dba53.test-cncf-aws.k8s.io": error reading s3://k8s-kops-prow/e2e-121477-dba53.test-cncf-aws.k8s.io/config: Unable to list AWS regions: AuthFailure: AWS was not able to validate the provided access credentials
W0202 17:51:05.943] 	status code: 401, request id: 8bf74615-2fde-4646-b526-491cbec767b3
W0202 17:51:05.950] 2019/02/02 17:51:05 process.go:155: Step '/workspace/kops get clusters e2e-121477-dba53.test-cncf-aws.k8s.io' finished in 16.994301594s
W0202 17:51:05.963] 2019/02/02 17:51:05 process.go:153: Running: /workspace/kops create cluster --name e2e-121477-dba53.test-cncf-aws.k8s.io --ssh-public-key /workspace/.ssh/kube_aws_rsa.pub --node-count 4 --node-volume-size 48 --master-volume-size 48 --master-count 1 --zones ca-central-1b --master-size c4.large --kubernetes-version https://storage.googleapis.com/kubernetes-release-pull/ci/pull-kubernetes-e2e-kops-aws/v1.14.0-alpha.2.225+e48ac9f04ca0bc --admin-access 35.192.2.195/32 --cloud aws --override cluster.spec.nodePortAccess=35.192.2.195/32
W0202 17:51:06.105] I0202 17:51:06.105418    4171 create_cluster.go:1448] Using SSH public key: /workspace/.ssh/kube_aws_rsa.pub
W0202 17:51:06.506] 
W0202 17:51:06.506] error reading cluster configuration "e2e-121477-dba53.test-cncf-aws.k8s.io": error reading s3://k8s-kops-prow/e2e-121477-dba53.test-cncf-aws.k8s.io/config: Unable to list AWS regions: AuthFailure: AWS was not able to validate the provided access credentials
W0202 17:51:06.506] 	status code: 401, request id: aeb3d17d-4d09-4f1b-b39a-a76a3de3de0b
W0202 17:51:06.512] 2019/02/02 17:51:06 process.go:155: Step '/workspace/kops create cluster --name e2e-121477-dba53.test-cncf-aws.k8s.io --ssh-public-key /workspace/.ssh/kube_aws_rsa.pub --node-count 4 --node-volume-size 48 --master-volume-size 48 --master-count 1 --zones ca-central-1b --master-size c4.large --kubernetes-version https://storage.googleapis.com/kubernetes-release-pull/ci/pull-kubernetes-e2e-kops-aws/v1.14.0-alpha.2.225+e48ac9f04ca0bc --admin-access 35.192.2.195/32 --cloud aws --override cluster.spec.nodePortAccess=35.192.2.195/32' finished in 549.012586ms
W0202 17:51:06.538] 2019/02/02 17:51:06 process.go:153: Running: /workspace/kops export kubecfg e2e-121477-dba53.test-cncf-aws.k8s.io
W0202 17:51:07.137] 
W0202 17:51:07.138] error reading cluster configuration: error reading cluster configuration "e2e-121477-dba53.test-cncf-aws.k8s.io": error reading s3://k8s-kops-prow/e2e-121477-dba53.test-cncf-aws.k8s.io/config: Unable to list AWS regions: AuthFailure: AWS was not able to validate the provided access credentials
W0202 17:51:07.138] 	status code: 401, request id: ab09668d-f545-423e-a5c4-ffa4c19746e1
W0202 17:51:07.143] 2019/02/02 17:51:07 process.go:155: Step '/workspace/kops export kubecfg e2e-121477-dba53.test-cncf-aws.k8s.io' finished in 604.283127ms
W0202 17:51:07.143] 2019/02/02 17:51:07 process.go:153: Running: /workspace/kops get clusters e2e-121477-dba53.test-cncf-aws.k8s.io
W0202 17:51:07.650] 
W0202 17:51:07.651] error reading cluster configuration "e2e-121477-dba53.test-cncf-aws.k8s.io": error reading s3://k8s-kops-prow/e2e-121477-dba53.test-cncf-aws.k8s.io/config: Unable to list AWS regions: AuthFailure: AWS was not able to validate the provided access credentials
W0202 17:51:07.651] 	status code: 401, request id: 9b7e2e88-5a2f-4974-b2e2-489d69700562
W0202 17:51:07.656] 2019/02/02 17:51:07 process.go:155: Step '/workspace/kops get clusters e2e-121477-dba53.test-cncf-aws.k8s.io' finished in 512.823295ms
W0202 17:51:07.740] 2019/02/02 17:51:07 process.go:96: Saved XML output to /workspace/_artifacts/junit_runner.xml.
W0202 17:51:07.741] 2019/02/02 17:51:07 main.go:315: Something went wrong: starting e2e cluster: kops configuration failed: error during /workspace/kops create cluster --name e2e-121477-dba53.test-cncf-aws.k8s.io --ssh-public-key /workspace/.ssh/kube_aws_rsa.pub --node-count 4 --node-volume-size 48 --master-volume-size 48 --master-count 1 --zones ca-central-1b --master-size c4.large --kubernetes-version https://storage.googleapis.com/kubernetes-release-pull/ci/pull-kubernetes-e2e-kops-aws/v1.14.0-alpha.2.225+e48ac9f04ca0bc --admin-access 35.192.2.195/32 --cloud aws --override cluster.spec.nodePortAccess=35.192.2.195/32: exit status 1
W0202 17:51:07.745] Traceback (most recent call last):
W0202 17:51:07.746]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 762, in <module>
W0202 17:51:07.765]     main(parse_args())
W0202 17:51:07.765]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 613, in main
W0202 17:51:07.765]     mode.start(runner_args)
W0202 17:51:07.766]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 262, in start
W0202 17:51:07.766]     check_env(env, self.command, *args)
W0202 17:51:07.766]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 111, in check_env
W0202 17:51:07.766]     subprocess.check_call(cmd, env=env)
W0202 17:51:07.766]   File "/usr/lib/python2.7/subprocess.py", line 186, in check_call
W0202 17:51:07.796]     raise CalledProcessError(retcode, cmd)
W0202 17:51:07.797] subprocess.CalledProcessError: Command '('/workspace/kops-e2e-runner.sh', '--dump=/workspace/_artifacts', '--gcp-service-account=/etc/service-account/service-account.json', '--build=bazel', '--stage=gs://kubernetes-release-pull/ci/pull-kubernetes-e2e-kops-aws', '--kops-kubernetes-version=https://storage.googleapis.com/kubernetes-release-pull/ci/pull-kubernetes-e2e-kops-aws/v1.14.0-alpha.2.225+e48ac9f04ca0bc', '--up', '--down', '--test', '--provider=aws', '--cluster=e2e-121477-dba53', '--gcp-network=e2e-121477-dba53', '--extract=local', '--ginkgo-parallel', '--test_args=--ginkgo.flakeAttempts=2 --ginkgo.skip=\\[Slow\\]|\\[Serial\\]|\\[Disruptive\\]|\\[Flaky\\]|\\[Feature:.+\\]|\\[HPA\\]|Dashboard|Services.*functioning.*NodePort', '--timeout=55m', '--kops-cluster=e2e-121477-dba53.test-cncf-aws.k8s.io', '--kops-zones=ca-central-1b', '--kops-state=s3://k8s-kops-prow/', '--kops-nodes=4', '--kops-ssh-key=/workspace/.ssh/kube_aws_rsa', '--kops-ssh-user=admin')' returned non-zero exit status 1
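The Python traceback is just the scenario runner surfacing the kops exit status: `check_env` in `scenarios/kubernetes_e2e.py` wraps `subprocess.check_call`, which raises `CalledProcessError` whenever the wrapped command (here `kops-e2e-runner.sh`) exits nonzero. A minimal sketch of that pattern, simplified from the real script (which also logs the command and environment before running it):

```python
import os
import subprocess

def check_env(env, *cmd):
    """Run cmd with extra environment variables, raising on nonzero exit.

    Simplified sketch of the check_env pattern in kubernetes_e2e.py:
    merge the extra variables over the current environment, then let
    subprocess.check_call raise CalledProcessError on failure.
    """
    merged = dict(os.environ)
    merged.update(env)
    subprocess.check_call(list(cmd), env=merged)

# `false` stands in for a failing runner; the raised CalledProcessError
# matches the shape of the traceback in the log above.
try:
    check_env({"KOPS_STATE_STORE": "s3://example-bucket/"}, "false")
except subprocess.CalledProcessError as err:
    print("runner failed with exit status", err.returncode)
```

This is why the job's overall failure reads as a generic `returned non-zero exit status 1`: the runner script's own stderr (the AuthFailure lines) carries the actual diagnosis.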
E0202 17:51:07.808] Command failed
I0202 17:51:07.808] process 539 exited with code 1 after 10.8m
E0202 17:51:07.808] FAIL: pull-kubernetes-e2e-kops-aws
I0202 17:51:07.809] Call:  gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json
W0202 17:51:08.452] Activated service account credentials for: [pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com]
I0202 17:51:08.505] process 4216 exited with code 0 after 0.0m
I0202 17:51:08.505] Call:  gcloud config get-value account
I0202 17:51:10.276] process 4228 exited with code 0 after 0.0m
I0202 17:51:10.276] Will upload results to gs://kubernetes-jenkins/pr-logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
I0202 17:51:10.276] Upload result and artifacts...
I0202 17:51:10.277] Gubernator results at https://gubernator.k8s.io/build/kubernetes-jenkins/pr-logs/pull/73404/pull-kubernetes-e2e-kops-aws/121477
I0202 17:51:10.277] Call:  gsutil ls gs://kubernetes-jenkins/pr-logs/pull/73404/pull-kubernetes-e2e-kops-aws/121477/artifacts
W0202 17:51:11.760] CommandException: One or more URLs matched no objects.
E0202 17:51:11.921] Command failed
I0202 17:51:11.921] process 4240 exited with code 1 after 0.0m
W0202 17:51:11.921] Remote dir gs://kubernetes-jenkins/pr-logs/pull/73404/pull-kubernetes-e2e-kops-aws/121477/artifacts not exist yet
I0202 17:51:11.922] Call:  gsutil -m -q -o GSUtil:use_magicfile=True cp -r -c -z log,txt,xml /workspace/_artifacts gs://kubernetes-jenkins/pr-logs/pull/73404/pull-kubernetes-e2e-kops-aws/121477/artifacts
I0202 17:51:20.877] process 4382 exited with code 0 after 0.1m
I0202 17:51:20.878] Call:  git rev-parse HEAD
I0202 17:51:20.882] process 4906 exited with code 0 after 0.0m
... skipping 21 lines ...