PR h3poteto: Add prometheus exporter to expose validation result in kops-controller
Result: FAILURE
Tests: 0 failed / 0 succeeded
Started: 2021-04-15 16:32
Elapsed: 4m59s
Revision: d2c49584a16bfd53e80958fde3d5a0a6544cfad6
Refs: 10175

No Test Failures!


Error lines from build-log.txt

... skipping 445 lines ...
echo "https://storage.googleapis.com/kops-ci/pulls/pull-kops-e2e-cni-calico/pull-c4a2737283/1.21.0-alpha.4+c4a2737283" > /home/prow/go/src/k8s.io/kops/.bazelbuild/upload/latest-ci.txt
gsutil -h "Cache-Control:private, max-age=0, no-transform" cp /home/prow/go/src/k8s.io/kops/.bazelbuild/upload/latest-ci.txt gs://kops-ci/pulls/pull-kops-e2e-cni-calico/pull-c4a2737283
Copying file:///home/prow/go/src/k8s.io/kops/.bazelbuild/upload/latest-ci.txt [Content-Type=text/plain]...
Operation completed over 1 objects/112.0 B.                                      
I0415 16:36:57.424540    3001 http.go:37] curl http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip
2021/04/15 16:36:57 failed to get external ip from metadata service: http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip returned 404
I0415 16:36:57.434331    3001 http.go:37] curl https://ip.jsb.workers.dev
I0415 16:36:57.598591    3001 up.go:136] /home/prow/go/src/k8s.io/kops/bazel-bin/cmd/kops/linux-amd64/kops create cluster --name e2e-e22a44d414-51e6f.test-cncf-aws.k8s.io --cloud aws --kubernetes-version https://storage.googleapis.com/kubernetes-release/release/v1.21.0 --ssh-public-key /etc/aws-ssh/aws-ssh-public --override cluster.spec.nodePortAccess=0.0.0.0/0 --yes --image=099720109477/ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-20210325 --channel=alpha --networking=calico --container-runtime=containerd --node-size=t3.large --admin-access 35.194.59.152/32 --master-count 1 --master-volume-size 48 --node-count 4 --node-volume-size 48 --zones us-west-1a --master-size c5.large
I0415 16:36:57.620596   10524 featureflag.go:167] FeatureFlag "SpecOverrideFlag"=true
I0415 16:36:57.620722   10524 featureflag.go:167] FeatureFlag "AlphaAllowGCE"=true
I0415 16:36:57.678322   10524 create_cluster.go:730] Using SSH public key: /etc/aws-ssh/aws-ssh-public
I0415 16:36:58.225549   10524 new_cluster.go:1011]  Cloud Provider ID = aws
I0415 16:36:58.485422   10524 subnets.go:180] Assigned CIDR 172.20.32.0/19 to subnet us-west-1a

error determining default DNS zone: error querying zones: Throttling: 
	status code: 400, request id: b33311bf-ea35-4221-a3c4-23b826f8ef7e
I0415 16:37:17.780311    3001 dumplogs.go:38] /home/prow/go/src/k8s.io/kops/bazel-bin/cmd/kops/linux-amd64/kops toolbox dump --name e2e-e22a44d414-51e6f.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /etc/aws-ssh/aws-ssh-private --ssh-user ubuntu
I0415 16:37:17.798122   10535 featureflag.go:167] FeatureFlag "SpecOverrideFlag"=true
I0415 16:37:17.798224   10535 featureflag.go:167] FeatureFlag "AlphaAllowGCE"=true

Cluster.kops.k8s.io "e2e-e22a44d414-51e6f.test-cncf-aws.k8s.io" not found
W0415 16:37:18.293253    3001 down.go:34] Dumping cluster logs at the start of Down() failed: exit status 1
I0415 16:37:18.293344    3001 down.go:48] /home/prow/go/src/k8s.io/kops/bazel-bin/cmd/kops/linux-amd64/kops delete cluster --name e2e-e22a44d414-51e6f.test-cncf-aws.k8s.io --yes
I0415 16:37:18.308997   10546 featureflag.go:167] FeatureFlag "SpecOverrideFlag"=true
I0415 16:37:18.309089   10546 featureflag.go:167] FeatureFlag "AlphaAllowGCE"=true

error reading cluster configuration: Cluster.kops.k8s.io "e2e-e22a44d414-51e6f.test-cncf-aws.k8s.io" not found
Error: exit status 1
+ EXIT_VALUE=1
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
Cleaning up after docker
Stopping Docker: docker
Program process in pidfile '/var/run/docker-ssd.pid', 1 process(es), refused to die.
... skipping 3 lines ...