PR: tsmetana: Kubelet: Fix volumemanager test race
Result: FAILURE
Tests: 1 failed / 7 succeeded
Started: 2019-02-03 05:54
Elapsed: 13m8s
Revision:
Builder: gke-prow-containerd-pool-99179761-xhp5
Refs: master:8de38858, 73404:45464f03
pod: 12cd8709-2778-11e9-819a-0a580a6c025e
infra-commit: 40269330c
job-version: v1.14.0-alpha.2.227+ac66a913e74561
repo: k8s.io/kubernetes
repo-commit: ac66a913e74561fc0f53f680153f9183450eefaf
repos: {u'k8s.io/kubernetes': u'master:8de388583ed9d112b579285f29665ee9db9b9eca,73404:45464f03494bf5c7efb09e44cc07e7125c748cb3', u'k8s.io/release': u'master'}
revision: v1.14.0-alpha.2.227+ac66a913e74561

Test Failures


Up 0.65s

kops configuration failed: error during /workspace/kops create cluster --name e2e-121482-dba53.test-cncf-aws.k8s.io --ssh-public-key /workspace/.ssh/kube_aws_rsa.pub --node-count 4 --node-volume-size 48 --master-volume-size 48 --master-count 1 --zones ca-central-1b --master-size c4.large --kubernetes-version https://storage.googleapis.com/kubernetes-release-pull/ci/pull-kubernetes-e2e-kops-aws/v1.14.0-alpha.2.227+ac66a913e74561 --admin-access 35.192.223.3/32 --cloud aws --override cluster.spec.nodePortAccess=35.192.223.3/32: exit status 1
				from junit_runner.xml




Error lines from build-log.txt

... skipping 956 lines ...
I0203 06:06:27.522] sha1sum(kubernetes-test.tar.gz)=e57f0e5bf73e8e5025254e47bbc1f43a67890423
I0203 06:06:27.523] 
I0203 06:06:27.523] Extracting kubernetes-test.tar.gz into /go/src/k8s.io/kubernetes/kubernetes
W0203 06:06:35.123] 2019/02/03 06:06:35 process.go:155: Step '/workspace/get-kube.sh' finished in 13.848490572s
W0203 06:06:35.123] 2019/02/03 06:06:35 process.go:153: Running: /workspace/kops get clusters e2e-121482-dba53.test-cncf-aws.k8s.io
W0203 06:06:56.732] 
W0203 06:06:56.732] error reading cluster configuration "e2e-121482-dba53.test-cncf-aws.k8s.io": error reading s3://k8s-kops-prow/e2e-121482-dba53.test-cncf-aws.k8s.io/config: Unable to list AWS regions: AuthFailure: AWS was not able to validate the provided access credentials
W0203 06:06:56.733] 	status code: 401, request id: 852e796f-b143-45b0-a2b6-4507064e947f
W0203 06:06:56.739] 2019/02/03 06:06:56 process.go:155: Step '/workspace/kops get clusters e2e-121482-dba53.test-cncf-aws.k8s.io' finished in 21.615539593s
W0203 06:06:56.739] 2019/02/03 06:06:56 process.go:153: Running: /workspace/kops create cluster --name e2e-121482-dba53.test-cncf-aws.k8s.io --ssh-public-key /workspace/.ssh/kube_aws_rsa.pub --node-count 4 --node-volume-size 48 --master-volume-size 48 --master-count 1 --zones ca-central-1b --master-size c4.large --kubernetes-version https://storage.googleapis.com/kubernetes-release-pull/ci/pull-kubernetes-e2e-kops-aws/v1.14.0-alpha.2.227+ac66a913e74561 --admin-access 35.192.223.3/32 --cloud aws --override cluster.spec.nodePortAccess=35.192.223.3/32
W0203 06:06:56.881] I0203 06:06:56.881193    4822 create_cluster.go:1448] Using SSH public key: /workspace/.ssh/kube_aws_rsa.pub
W0203 06:06:57.382] 
W0203 06:06:57.382] error reading cluster configuration "e2e-121482-dba53.test-cncf-aws.k8s.io": error reading s3://k8s-kops-prow/e2e-121482-dba53.test-cncf-aws.k8s.io/config: Unable to list AWS regions: AuthFailure: AWS was not able to validate the provided access credentials
W0203 06:06:57.383] 	status code: 401, request id: 9c7fe99c-cb92-4a1f-b215-b594949d2898
W0203 06:06:57.388] 2019/02/03 06:06:57 process.go:155: Step '/workspace/kops create cluster --name e2e-121482-dba53.test-cncf-aws.k8s.io --ssh-public-key /workspace/.ssh/kube_aws_rsa.pub --node-count 4 --node-volume-size 48 --master-volume-size 48 --master-count 1 --zones ca-central-1b --master-size c4.large --kubernetes-version https://storage.googleapis.com/kubernetes-release-pull/ci/pull-kubernetes-e2e-kops-aws/v1.14.0-alpha.2.227+ac66a913e74561 --admin-access 35.192.223.3/32 --cloud aws --override cluster.spec.nodePortAccess=35.192.223.3/32' finished in 648.805135ms
W0203 06:06:57.426] 2019/02/03 06:06:57 process.go:153: Running: /workspace/kops export kubecfg e2e-121482-dba53.test-cncf-aws.k8s.io
W0203 06:06:58.073] 
W0203 06:06:58.073] error reading cluster configuration: error reading cluster configuration "e2e-121482-dba53.test-cncf-aws.k8s.io": error reading s3://k8s-kops-prow/e2e-121482-dba53.test-cncf-aws.k8s.io/config: Unable to list AWS regions: AuthFailure: AWS was not able to validate the provided access credentials
W0203 06:06:58.074] 	status code: 401, request id: f02bb9c3-f016-4140-aa2c-019a8f96411b
W0203 06:06:58.079] 2019/02/03 06:06:58 process.go:155: Step '/workspace/kops export kubecfg e2e-121482-dba53.test-cncf-aws.k8s.io' finished in 652.872136ms
W0203 06:06:58.079] 2019/02/03 06:06:58 process.go:153: Running: /workspace/kops get clusters e2e-121482-dba53.test-cncf-aws.k8s.io
W0203 06:06:58.568] 
W0203 06:06:58.568] error reading cluster configuration "e2e-121482-dba53.test-cncf-aws.k8s.io": error reading s3://k8s-kops-prow/e2e-121482-dba53.test-cncf-aws.k8s.io/config: Unable to list AWS regions: AuthFailure: AWS was not able to validate the provided access credentials
W0203 06:06:58.568] 	status code: 401, request id: d5fdee7c-02e5-4df2-a701-5653d61582d6
W0203 06:06:58.573] 2019/02/03 06:06:58 process.go:155: Step '/workspace/kops get clusters e2e-121482-dba53.test-cncf-aws.k8s.io' finished in 494.006682ms
W0203 06:06:58.612] 2019/02/03 06:06:58 process.go:96: Saved XML output to /workspace/_artifacts/junit_runner.xml.
W0203 06:06:58.613] 2019/02/03 06:06:58 main.go:297: Something went wrong: starting e2e cluster: kops configuration failed: error during /workspace/kops create cluster --name e2e-121482-dba53.test-cncf-aws.k8s.io --ssh-public-key /workspace/.ssh/kube_aws_rsa.pub --node-count 4 --node-volume-size 48 --master-volume-size 48 --master-count 1 --zones ca-central-1b --master-size c4.large --kubernetes-version https://storage.googleapis.com/kubernetes-release-pull/ci/pull-kubernetes-e2e-kops-aws/v1.14.0-alpha.2.227+ac66a913e74561 --admin-access 35.192.223.3/32 --cloud aws --override cluster.spec.nodePortAccess=35.192.223.3/32: exit status 1
W0203 06:06:58.616] Traceback (most recent call last):
W0203 06:06:58.616]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 764, in <module>
W0203 06:06:58.635]     main(parse_args())
W0203 06:06:58.635]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 615, in main
W0203 06:06:58.635]     mode.start(runner_args)
W0203 06:06:58.635]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 262, in start
W0203 06:06:58.635]     check_env(env, self.command, *args)
W0203 06:06:58.635]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 111, in check_env
W0203 06:06:58.636]     subprocess.check_call(cmd, env=env)
W0203 06:06:58.636]   File "/usr/lib/python2.7/subprocess.py", line 186, in check_call
W0203 06:06:58.660]     raise CalledProcessError(retcode, cmd)
W0203 06:06:58.661] subprocess.CalledProcessError: Command '('/workspace/kops-e2e-runner.sh', '--dump=/workspace/_artifacts', '--gcp-service-account=/etc/service-account/service-account.json', '--build=bazel', '--stage=gs://kubernetes-release-pull/ci/pull-kubernetes-e2e-kops-aws', '--kops-kubernetes-version=https://storage.googleapis.com/kubernetes-release-pull/ci/pull-kubernetes-e2e-kops-aws/v1.14.0-alpha.2.227+ac66a913e74561', '--up', '--down', '--test', '--provider=aws', '--cluster=e2e-121482-dba53', '--gcp-network=e2e-121482-dba53', '--extract=local', '--ginkgo-parallel', '--test_args=--ginkgo.flakeAttempts=2 --ginkgo.skip=\\[Slow\\]|\\[Serial\\]|\\[Disruptive\\]|\\[Flaky\\]|\\[Feature:.+\\]|\\[HPA\\]|Dashboard|Services.*functioning.*NodePort', '--timeout=55m', '--kops-cluster=e2e-121482-dba53.test-cncf-aws.k8s.io', '--kops-zones=ca-central-1b', '--kops-state=s3://k8s-kops-prow/', '--kops-nodes=4', '--kops-ssh-key=/workspace/.ssh/kube_aws_rsa', '--kops-ssh-user=admin')' returned non-zero exit status 1
E0203 06:06:58.670] Command failed
I0203 06:06:58.671] process 540 exited with code 1 after 11.8m
E0203 06:06:58.671] FAIL: pull-kubernetes-e2e-kops-aws
I0203 06:06:58.671] Call:  gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json
W0203 06:07:08.759] Activated service account credentials for: [pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com]
I0203 06:07:08.820] process 4870 exited with code 0 after 0.2m
I0203 06:07:08.820] Call:  gcloud config get-value account
I0203 06:07:09.241] process 4882 exited with code 0 after 0.0m
I0203 06:07:09.242] Will upload results to gs://kubernetes-jenkins/pr-logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
I0203 06:07:09.242] Upload result and artifacts...
I0203 06:07:09.242] Gubernator results at https://gubernator.k8s.io/build/kubernetes-jenkins/pr-logs/pull/73404/pull-kubernetes-e2e-kops-aws/121482
I0203 06:07:09.242] Call:  gsutil ls gs://kubernetes-jenkins/pr-logs/pull/73404/pull-kubernetes-e2e-kops-aws/121482/artifacts
W0203 06:07:10.447] CommandException: One or more URLs matched no objects.
E0203 06:07:10.578] Command failed
I0203 06:07:10.578] process 4894 exited with code 1 after 0.0m
W0203 06:07:10.578] Remote dir gs://kubernetes-jenkins/pr-logs/pull/73404/pull-kubernetes-e2e-kops-aws/121482/artifacts not exist yet
I0203 06:07:10.578] Call:  gsutil -m -q -o GSUtil:use_magicfile=True cp -r -c -z log,txt,xml /workspace/_artifacts gs://kubernetes-jenkins/pr-logs/pull/73404/pull-kubernetes-e2e-kops-aws/121482/artifacts
I0203 06:07:12.600] process 5036 exited with code 0 after 0.0m
I0203 06:07:12.601] Call:  git rev-parse HEAD
I0203 06:07:12.605] process 5560 exited with code 0 after 0.0m
... skipping 21 lines ...
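
The Python traceback above shows the standard failure path in the kubernetes_e2e.py scenario wrapper: check_env launches the runner command via subprocess.check_call with an augmented environment, so the non-zero exit from kops-e2e-runner.sh surfaces as CalledProcessError. A minimal sketch of that pattern (hypothetical code, not the actual scenarios/kubernetes_e2e.py source; the command and environment variable below are illustrative):

# Minimal sketch of the check_env -> subprocess.check_call pattern seen in the
# traceback above. Hypothetical; not the real kubernetes_e2e.py implementation.
import os
import subprocess

def check_env(env, *cmd):
    """Run cmd with extra env vars layered over the current environment; raise on non-zero exit."""
    merged = dict(os.environ)
    merged.update(env)
    subprocess.check_call(cmd, env=merged)

if __name__ == '__main__':
    try:
        # Illustrative invocation; the real job passes the full kops-e2e-runner.sh
        # argument list shown in the CalledProcessError message above.
        check_env({'KOPS_STATE_STORE': 's3://k8s-kops-prow/'},
                  '/workspace/kops-e2e-runner.sh', '--up', '--test', '--down')
    except subprocess.CalledProcessError as err:
        print('Command %r returned non-zero exit status %d' % (err.cmd, err.returncode))
        raise SystemExit(1)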