PR: tsmetana: Kubelet: Fix volumemanager test race
Result: FAILURE
Tests: 1 failed / 7 succeeded
Started: 2019-02-04 08:30
Elapsed: 12m10s
Builder: gke-prow-containerd-pool-99179761-xlfp
Refs: master:0bd35d1b, 73404:45464f03
pod: 095b2ccc-2857-11e9-ba32-0a580a6c0346
infra-commit: 3bee26ee0
job-version: v1.14.0-alpha.2.244+0a369edbfec319
repo: k8s.io/kubernetes
repo-commit: 0a369edbfec319efbbbd9fad0727896c46caa0d0
repos: {u'k8s.io/kubernetes': u'master:0bd35d1b684d440a12c5963f5cc8518f9404084f,73404:45464f03494bf5c7efb09e44cc07e7125c748cb3', u'k8s.io/release': u'master'}
revision: v1.14.0-alpha.2.244+0a369edbfec319

References

PR #73404 Kubelet: Fix volumemanager test race

Test Failures


Up 0.61s

kops configuration failed: error during /workspace/kops create cluster --name e2e-121493-dba53.test-cncf-aws.k8s.io --ssh-public-key /workspace/.ssh/kube_aws_rsa.pub --node-count 4 --node-volume-size 48 --master-volume-size 48 --master-count 1 --zones ap-southeast-2a --master-size c4.large --kubernetes-version https://storage.googleapis.com/kubernetes-release-pull/ci/pull-kubernetes-e2e-kops-aws/v1.14.0-alpha.2.244+0a369edbfec319 --admin-access 130.211.120.150/32 --cloud aws --override cluster.spec.nodePortAccess=130.211.120.150/32: exit status 1
				from junit_runner.xml
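The run never got past cluster bring-up: every kops invocation below fails with the same AWS AuthFailure (status code 401), so the failure is environmental (invalid job credentials) rather than caused by the PR under test. When triaging build logs like this one, it can help to separate credential failures from ordinary test flakes before deciding to retest. The sketch below is a hypothetical triage helper, not part of the actual job tooling; the pattern it matches is taken from the error lines in this log.

```python
# Hypothetical log-triage helper (not part of the Prow job): classify a
# build-log line so credential failures can be distinguished from flakes.
import re

# AWS credential errors in this log look like:
#   "AuthFailure: AWS was not able to validate the provided access credentials"
#   "status code: 401, request id: ..."
_AUTH_PATTERN = re.compile(r"AuthFailure|status code: 401")

def classify_kops_error(line):
    """Return 'auth' for AWS credential failures, 'other' otherwise."""
    if _AUTH_PATTERN.search(line):
        return "auth"
    return "other"
```

A line such as the AuthFailure message above classifies as 'auth', while a bare 'exit status 1' classifies as 'other'; a retry bot could skip automatic retests for 'auth' failures, since rerunning with the same broken credentials cannot succeed.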




Error lines from build-log.txt

... skipping 959 lines ...
I0204 08:41:33.333] sha1sum(kubernetes-test.tar.gz)=7e571af935db505b240d6e867f86c2c4bb320170
I0204 08:41:33.333] 
I0204 08:41:33.333] Extracting kubernetes-test.tar.gz into /go/src/k8s.io/kubernetes/kubernetes
W0204 08:41:40.761] 2019/02/04 08:41:40 process.go:155: Step '/workspace/get-kube.sh' finished in 13.10307879s
W0204 08:41:40.761] 2019/02/04 08:41:40 process.go:153: Running: /workspace/kops get clusters e2e-121493-dba53.test-cncf-aws.k8s.io
W0204 08:41:58.924] 
W0204 08:41:58.925] error reading cluster configuration "e2e-121493-dba53.test-cncf-aws.k8s.io": error reading s3://k8s-kops-prow/e2e-121493-dba53.test-cncf-aws.k8s.io/config: Unable to list AWS regions: AuthFailure: AWS was not able to validate the provided access credentials
W0204 08:41:58.925] 	status code: 401, request id: a3fb3221-af14-4d77-a5e7-730c81d71cb0
W0204 08:41:58.932] 2019/02/04 08:41:58 process.go:155: Step '/workspace/kops get clusters e2e-121493-dba53.test-cncf-aws.k8s.io' finished in 18.170699339s
W0204 08:41:58.932] 2019/02/04 08:41:58 process.go:153: Running: /workspace/kops create cluster --name e2e-121493-dba53.test-cncf-aws.k8s.io --ssh-public-key /workspace/.ssh/kube_aws_rsa.pub --node-count 4 --node-volume-size 48 --master-volume-size 48 --master-count 1 --zones ap-southeast-2a --master-size c4.large --kubernetes-version https://storage.googleapis.com/kubernetes-release-pull/ci/pull-kubernetes-e2e-kops-aws/v1.14.0-alpha.2.244+0a369edbfec319 --admin-access 130.211.120.150/32 --cloud aws --override cluster.spec.nodePortAccess=130.211.120.150/32
W0204 08:41:59.064] I0204 08:41:59.064525    4017 create_cluster.go:1448] Using SSH public key: /workspace/.ssh/kube_aws_rsa.pub
W0204 08:41:59.538] 
W0204 08:41:59.539] error reading cluster configuration "e2e-121493-dba53.test-cncf-aws.k8s.io": error reading s3://k8s-kops-prow/e2e-121493-dba53.test-cncf-aws.k8s.io/config: Unable to list AWS regions: AuthFailure: AWS was not able to validate the provided access credentials
W0204 08:41:59.539] 	status code: 401, request id: 5c404e52-836c-4ffe-874d-11a0fdcdda17
W0204 08:41:59.544] 2019/02/04 08:41:59 process.go:155: Step '/workspace/kops create cluster --name e2e-121493-dba53.test-cncf-aws.k8s.io --ssh-public-key /workspace/.ssh/kube_aws_rsa.pub --node-count 4 --node-volume-size 48 --master-volume-size 48 --master-count 1 --zones ap-southeast-2a --master-size c4.large --kubernetes-version https://storage.googleapis.com/kubernetes-release-pull/ci/pull-kubernetes-e2e-kops-aws/v1.14.0-alpha.2.244+0a369edbfec319 --admin-access 130.211.120.150/32 --cloud aws --override cluster.spec.nodePortAccess=130.211.120.150/32' finished in 611.97812ms
W0204 08:41:59.597] 2019/02/04 08:41:59 process.go:153: Running: /workspace/kops export kubecfg e2e-121493-dba53.test-cncf-aws.k8s.io
W0204 08:42:00.157] 
W0204 08:42:00.157] error reading cluster configuration: error reading cluster configuration "e2e-121493-dba53.test-cncf-aws.k8s.io": error reading s3://k8s-kops-prow/e2e-121493-dba53.test-cncf-aws.k8s.io/config: Unable to list AWS regions: AuthFailure: AWS was not able to validate the provided access credentials
W0204 08:42:00.157] 	status code: 401, request id: 407e1905-1ca2-4b76-8652-cff79993c4d5
W0204 08:42:00.162] 2019/02/04 08:42:00 process.go:155: Step '/workspace/kops export kubecfg e2e-121493-dba53.test-cncf-aws.k8s.io' finished in 564.804502ms
W0204 08:42:00.163] 2019/02/04 08:42:00 process.go:153: Running: /workspace/kops get clusters e2e-121493-dba53.test-cncf-aws.k8s.io
W0204 08:42:00.774] 
W0204 08:42:00.774] error reading cluster configuration "e2e-121493-dba53.test-cncf-aws.k8s.io": error reading s3://k8s-kops-prow/e2e-121493-dba53.test-cncf-aws.k8s.io/config: Unable to list AWS regions: AuthFailure: AWS was not able to validate the provided access credentials
W0204 08:42:00.774] 	status code: 401, request id: 27e10134-9b42-4c33-bfad-669f961d5f42
W0204 08:42:00.779] 2019/02/04 08:42:00 process.go:155: Step '/workspace/kops get clusters e2e-121493-dba53.test-cncf-aws.k8s.io' finished in 616.782626ms
W0204 08:42:00.792] 2019/02/04 08:42:00 process.go:96: Saved XML output to /workspace/_artifacts/junit_runner.xml.
W0204 08:42:00.793] 2019/02/04 08:42:00 main.go:297: Something went wrong: starting e2e cluster: kops configuration failed: error during /workspace/kops create cluster --name e2e-121493-dba53.test-cncf-aws.k8s.io --ssh-public-key /workspace/.ssh/kube_aws_rsa.pub --node-count 4 --node-volume-size 48 --master-volume-size 48 --master-count 1 --zones ap-southeast-2a --master-size c4.large --kubernetes-version https://storage.googleapis.com/kubernetes-release-pull/ci/pull-kubernetes-e2e-kops-aws/v1.14.0-alpha.2.244+0a369edbfec319 --admin-access 130.211.120.150/32 --cloud aws --override cluster.spec.nodePortAccess=130.211.120.150/32: exit status 1
W0204 08:42:00.796] Traceback (most recent call last):
W0204 08:42:00.796]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 764, in <module>
W0204 08:42:00.811]     main(parse_args())
W0204 08:42:00.812]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 615, in main
W0204 08:42:00.812]     mode.start(runner_args)
W0204 08:42:00.812]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 262, in start
W0204 08:42:00.812]     check_env(env, self.command, *args)
W0204 08:42:00.812]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 111, in check_env
W0204 08:42:00.812]     subprocess.check_call(cmd, env=env)
W0204 08:42:00.812]   File "/usr/lib/python2.7/subprocess.py", line 186, in check_call
W0204 08:42:00.833]     raise CalledProcessError(retcode, cmd)
W0204 08:42:00.834] subprocess.CalledProcessError: Command '('/workspace/kops-e2e-runner.sh', '--dump=/workspace/_artifacts', '--gcp-service-account=/etc/service-account/service-account.json', '--build=bazel', '--stage=gs://kubernetes-release-pull/ci/pull-kubernetes-e2e-kops-aws', '--kops-kubernetes-version=https://storage.googleapis.com/kubernetes-release-pull/ci/pull-kubernetes-e2e-kops-aws/v1.14.0-alpha.2.244+0a369edbfec319', '--up', '--down', '--test', '--provider=aws', '--cluster=e2e-121493-dba53', '--gcp-network=e2e-121493-dba53', '--extract=local', '--ginkgo-parallel', '--test_args=--ginkgo.flakeAttempts=2 --ginkgo.skip=\\[Slow\\]|\\[Serial\\]|\\[Disruptive\\]|\\[Flaky\\]|\\[Feature:.+\\]|\\[HPA\\]|Dashboard|Services.*functioning.*NodePort', '--timeout=55m', '--kops-cluster=e2e-121493-dba53.test-cncf-aws.k8s.io', '--kops-zones=ap-southeast-2a', '--kops-state=s3://k8s-kops-prow/', '--kops-nodes=4', '--kops-ssh-key=/workspace/.ssh/kube_aws_rsa', '--kops-ssh-user=admin')' returned non-zero exit status 1
E0204 08:42:00.843] Command failed
I0204 08:42:00.843] process 540 exited with code 1 after 10.8m
E0204 08:42:00.843] FAIL: pull-kubernetes-e2e-kops-aws
I0204 08:42:00.843] Call:  gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json
W0204 08:42:11.373] Activated service account credentials for: [pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com]
I0204 08:42:11.433] process 4063 exited with code 0 after 0.2m
I0204 08:42:11.433] Call:  gcloud config get-value account
I0204 08:42:14.289] process 4075 exited with code 0 after 0.0m
I0204 08:42:14.290] Will upload results to gs://kubernetes-jenkins/pr-logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
I0204 08:42:14.290] Upload result and artifacts...
I0204 08:42:14.290] Gubernator results at https://gubernator.k8s.io/build/kubernetes-jenkins/pr-logs/pull/73404/pull-kubernetes-e2e-kops-aws/121493
I0204 08:42:14.290] Call:  gsutil ls gs://kubernetes-jenkins/pr-logs/pull/73404/pull-kubernetes-e2e-kops-aws/121493/artifacts
W0204 08:42:15.379] CommandException: One or more URLs matched no objects.
E0204 08:42:15.507] Command failed
I0204 08:42:15.507] process 4087 exited with code 1 after 0.0m
W0204 08:42:15.507] Remote dir gs://kubernetes-jenkins/pr-logs/pull/73404/pull-kubernetes-e2e-kops-aws/121493/artifacts not exist yet
I0204 08:42:15.507] Call:  gsutil -m -q -o GSUtil:use_magicfile=True cp -r -c -z log,txt,xml /workspace/_artifacts gs://kubernetes-jenkins/pr-logs/pull/73404/pull-kubernetes-e2e-kops-aws/121493/artifacts
I0204 08:42:17.521] process 4229 exited with code 0 after 0.0m
I0204 08:42:17.522] Call:  git rev-parse HEAD
I0204 08:42:17.527] process 4753 exited with code 0 after 0.0m
... skipping 21 lines ...