Result: FAILURE
Tests: 0 failed / 1 succeeded
Started: 2022-08-09 01:53
Elapsed: 38m35s
Revision: master

No Test Failures!



Error lines from build-log.txt

... skipping 234 lines ...
I0809 01:54:32.983322    6349 app.go:62] pass rundir-in-artifacts flag True for RunDir to be part of Artifacts
I0809 01:54:32.983368    6349 app.go:64] RunDir for this run: "/home/prow/go/src/k8s.io/kops/_rundir/ee069942-1785-11ed-a40c-ee17f6f0723b"
I0809 01:54:32.993521    6349 app.go:128] ID for this run: "ee069942-1785-11ed-a40c-ee17f6f0723b"
I0809 01:54:32.993823    6349 local.go:42] ⚙️ ssh-keygen -t ed25519 -N  -q -f /tmp/kops/e2e-ed4da97961-6b857.test-cncf-aws.k8s.io/id_ed25519
I0809 01:54:33.002715    6349 dumplogs.go:45] /tmp/kops.tK48tob2b toolbox dump --name e2e-ed4da97961-6b857.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /tmp/kops/e2e-ed4da97961-6b857.test-cncf-aws.k8s.io/id_ed25519 --ssh-user ubuntu
I0809 01:54:33.002787    6349 local.go:42] ⚙️ /tmp/kops.tK48tob2b toolbox dump --name e2e-ed4da97961-6b857.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /tmp/kops/e2e-ed4da97961-6b857.test-cncf-aws.k8s.io/id_ed25519 --ssh-user ubuntu
W0809 01:54:33.536202    6349 down.go:34] Dumping cluster logs at the start of Down() failed: exit status 1
I0809 01:54:33.536295    6349 down.go:48] /tmp/kops.tK48tob2b delete cluster --name e2e-ed4da97961-6b857.test-cncf-aws.k8s.io --yes
I0809 01:54:33.536311    6349 local.go:42] ⚙️ /tmp/kops.tK48tob2b delete cluster --name e2e-ed4da97961-6b857.test-cncf-aws.k8s.io --yes
I0809 01:54:33.576160    6372 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
I0809 01:54:33.576457    6372 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
Error: error reading cluster configuration: Cluster.kops.k8s.io "e2e-ed4da97961-6b857.test-cncf-aws.k8s.io" not found
Error: exit status 1
+ echo 'kubetest2 down failed'
kubetest2 down failed
+ [[ l == \v ]]
++ kops-base-from-marker latest
++ [[ latest =~ ^https: ]]
++ [[ latest == \l\a\t\e\s\t ]]
++ curl -s https://storage.googleapis.com/kops-ci/bin/latest-ci-updown-green.txt
+ KOPS_BASE_URL=https://storage.googleapis.com/kops-ci/bin/1.25.0-alpha.3+v1.25.0-alpha.2-62-g82ec66a033
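The `++` trace above resolves the kops base URL from a version marker: a literal `https:` URL is used as-is, while the literal string `latest` is dereferenced by fetching the CI marker file. A minimal sketch of that logic (the function name `resolve_base` is illustrative; the real script's `kops-base-from-marker` is not shown in full here):

```shell
#!/usr/bin/env bash
# Resolve a kops base URL from a marker string, mirroring the traced logic:
#   - a full https URL passes through unchanged
#   - "latest" is dereferenced via the CI green-build marker file
resolve_base() {
  case "$1" in
    https:*) echo "$1" ;;  # already a concrete base URL
    latest)  curl -s https://storage.googleapis.com/kops-ci/bin/latest-ci-updown-green.txt ;;
    *)       echo "unsupported marker: $1" >&2; return 1 ;;
  esac
}
```

The marker file contains a single versioned URL such as the `1.25.0-alpha.3+...` one assigned to `KOPS_BASE_URL` above, so both branches yield a concrete download base.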
... skipping 14 lines ...
I0809 01:54:35.660562    6408 app.go:62] pass rundir-in-artifacts flag True for RunDir to be part of Artifacts
I0809 01:54:35.660586    6408 app.go:64] RunDir for this run: "/home/prow/go/src/k8s.io/kops/_rundir/ee069942-1785-11ed-a40c-ee17f6f0723b"
I0809 01:54:35.698571    6408 app.go:128] ID for this run: "ee069942-1785-11ed-a40c-ee17f6f0723b"
I0809 01:54:35.698692    6408 up.go:44] Cleaning up any leaked resources from previous cluster
I0809 01:54:35.698744    6408 dumplogs.go:45] /tmp/kops.U8epgeVPC toolbox dump --name e2e-ed4da97961-6b857.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /tmp/kops/e2e-ed4da97961-6b857.test-cncf-aws.k8s.io/id_ed25519 --ssh-user ubuntu
I0809 01:54:35.698790    6408 local.go:42] ⚙️ /tmp/kops.U8epgeVPC toolbox dump --name e2e-ed4da97961-6b857.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /tmp/kops/e2e-ed4da97961-6b857.test-cncf-aws.k8s.io/id_ed25519 --ssh-user ubuntu
W0809 01:54:36.212654    6408 down.go:34] Dumping cluster logs at the start of Down() failed: exit status 1
I0809 01:54:36.212717    6408 down.go:48] /tmp/kops.U8epgeVPC delete cluster --name e2e-ed4da97961-6b857.test-cncf-aws.k8s.io --yes
I0809 01:54:36.212759    6408 local.go:42] ⚙️ /tmp/kops.U8epgeVPC delete cluster --name e2e-ed4da97961-6b857.test-cncf-aws.k8s.io --yes
I0809 01:54:36.249207    6426 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
I0809 01:54:36.249358    6426 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
Error: error reading cluster configuration: Cluster.kops.k8s.io "e2e-ed4da97961-6b857.test-cncf-aws.k8s.io" not found
I0809 01:54:36.724593    6408 http.go:37] curl http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip
2022/08/09 01:54:36 failed to get external ip from metadata service: http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip returned 404
I0809 01:54:36.743205    6408 http.go:37] curl https://ip.jsb.workers.dev
I0809 01:54:36.915929    6408 up.go:159] /tmp/kops.U8epgeVPC create cluster --name e2e-ed4da97961-6b857.test-cncf-aws.k8s.io --cloud aws --kubernetes-version v1.24.3 --ssh-public-key /tmp/kops/e2e-ed4da97961-6b857.test-cncf-aws.k8s.io/id_ed25519.pub --override cluster.spec.nodePortAccess=0.0.0.0/0 --networking calico --admin-access 35.225.192.224/32 --master-count 1 --master-volume-size 48 --node-count 4 --node-volume-size 48 --zones eu-west-1a --master-size c5.large
I0809 01:54:36.915973    6408 local.go:42] ⚙️ /tmp/kops.U8epgeVPC create cluster --name e2e-ed4da97961-6b857.test-cncf-aws.k8s.io --cloud aws --kubernetes-version v1.24.3 --ssh-public-key /tmp/kops/e2e-ed4da97961-6b857.test-cncf-aws.k8s.io/id_ed25519.pub --override cluster.spec.nodePortAccess=0.0.0.0/0 --networking calico --admin-access 35.225.192.224/32 --master-count 1 --master-volume-size 48 --node-count 4 --node-volume-size 48 --zones eu-west-1a --master-size c5.large
I0809 01:54:36.953927    6439 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
I0809 01:54:36.954062    6439 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
I0809 01:54:36.973455    6439 create_cluster.go:862] Using SSH public key: /tmp/kops/e2e-ed4da97961-6b857.test-cncf-aws.k8s.io/id_ed25519.pub
... skipping 515 lines ...
I0809 01:55:24.042009    6408 up.go:243] /tmp/kops.U8epgeVPC validate cluster --name e2e-ed4da97961-6b857.test-cncf-aws.k8s.io --count 10 --wait 15m0s
I0809 01:55:24.042091    6408 local.go:42] ⚙️ /tmp/kops.U8epgeVPC validate cluster --name e2e-ed4da97961-6b857.test-cncf-aws.k8s.io --count 10 --wait 15m0s
I0809 01:55:24.085101    6478 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
I0809 01:55:24.085254    6478 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
Validating cluster e2e-ed4da97961-6b857.test-cncf-aws.k8s.io

W0809 01:55:25.471129    6478 validate_cluster.go:184] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-ed4da97961-6b857.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
W0809 01:55:35.504006    6478 validate_cluster.go:184] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-ed4da97961-6b857.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-1a	Master	c5.large	1	1	eu-west-1a
nodes-eu-west-1a	Node	t3.medium	4	4	eu-west-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0809 01:55:45.543539    6478 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-1a	Master	c5.large	1	1	eu-west-1a
nodes-eu-west-1a	Node	t3.medium	4	4	eu-west-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0809 01:55:55.585132    6478 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-1a	Master	c5.large	1	1	eu-west-1a
nodes-eu-west-1a	Node	t3.medium	4	4	eu-west-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0809 01:56:05.622385    6478 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-1a	Master	c5.large	1	1	eu-west-1a
nodes-eu-west-1a	Node	t3.medium	4	4	eu-west-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0809 01:56:15.669233    6478 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-1a	Master	c5.large	1	1	eu-west-1a
nodes-eu-west-1a	Node	t3.medium	4	4	eu-west-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0809 01:56:25.717438    6478 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-1a	Master	c5.large	1	1	eu-west-1a
nodes-eu-west-1a	Node	t3.medium	4	4	eu-west-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0809 01:56:35.752800    6478 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-1a	Master	c5.large	1	1	eu-west-1a
nodes-eu-west-1a	Node	t3.medium	4	4	eu-west-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0809 01:56:45.796176    6478 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-1a	Master	c5.large	1	1	eu-west-1a
nodes-eu-west-1a	Node	t3.medium	4	4	eu-west-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0809 01:56:55.829188    6478 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-1a	Master	c5.large	1	1	eu-west-1a
nodes-eu-west-1a	Node	t3.medium	4	4	eu-west-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0809 01:57:05.869414    6478 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-1a	Master	c5.large	1	1	eu-west-1a
nodes-eu-west-1a	Node	t3.medium	4	4	eu-west-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0809 01:57:15.906819    6478 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-1a	Master	c5.large	1	1	eu-west-1a
nodes-eu-west-1a	Node	t3.medium	4	4	eu-west-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0809 01:57:25.941751    6478 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-1a	Master	c5.large	1	1	eu-west-1a
nodes-eu-west-1a	Node	t3.medium	4	4	eu-west-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0809 01:57:35.980142    6478 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-1a	Master	c5.large	1	1	eu-west-1a
nodes-eu-west-1a	Node	t3.medium	4	4	eu-west-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0809 01:57:46.016790    6478 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-1a	Master	c5.large	1	1	eu-west-1a
nodes-eu-west-1a	Node	t3.medium	4	4	eu-west-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0809 01:57:56.051256    6478 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-1a	Master	c5.large	1	1	eu-west-1a
nodes-eu-west-1a	Node	t3.medium	4	4	eu-west-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0809 01:58:06.101938    6478 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-1a	Master	c5.large	1	1	eu-west-1a
nodes-eu-west-1a	Node	t3.medium	4	4	eu-west-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0809 01:58:16.151209    6478 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-1a	Master	c5.large	1	1	eu-west-1a
nodes-eu-west-1a	Node	t3.medium	4	4	eu-west-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0809 01:58:26.196675    6478 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-1a	Master	c5.large	1	1	eu-west-1a
nodes-eu-west-1a	Node	t3.medium	4	4	eu-west-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0809 01:58:36.236938    6478 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-1a	Master	c5.large	1	1	eu-west-1a
nodes-eu-west-1a	Node	t3.medium	4	4	eu-west-1a

... skipping 17 lines ...
Pod	kube-system/coredns-autoscaler-f85cf5c-zsjcz	system-cluster-critical pod "coredns-autoscaler-f85cf5c-zsjcz" is pending
Pod	kube-system/ebs-csi-node-bwt27			system-node-critical pod "ebs-csi-node-bwt27" is pending
Pod	kube-system/ebs-csi-node-mcggp			system-node-critical pod "ebs-csi-node-mcggp" is pending
Pod	kube-system/ebs-csi-node-wrxxw			system-node-critical pod "ebs-csi-node-wrxxw" is pending
Pod	kube-system/ebs-csi-node-znh5f			system-node-critical pod "ebs-csi-node-znh5f" is pending

Validation Failed
W0809 01:58:49.198343    6478 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-1a	Master	c5.large	1	1	eu-west-1a
nodes-eu-west-1a	Node	t3.medium	4	4	eu-west-1a

... skipping 19 lines ...
Pod	kube-system/coredns-autoscaler-f85cf5c-zsjcz	system-cluster-critical pod "coredns-autoscaler-f85cf5c-zsjcz" is pending
Pod	kube-system/ebs-csi-node-bwt27			system-node-critical pod "ebs-csi-node-bwt27" is pending
Pod	kube-system/ebs-csi-node-mcggp			system-node-critical pod "ebs-csi-node-mcggp" is pending
Pod	kube-system/ebs-csi-node-wrxxw			system-node-critical pod "ebs-csi-node-wrxxw" is pending
Pod	kube-system/ebs-csi-node-znh5f			system-node-critical pod "ebs-csi-node-znh5f" is pending

Validation Failed
W0809 01:59:01.125558    6478 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-1a	Master	c5.large	1	1	eu-west-1a
nodes-eu-west-1a	Node	t3.medium	4	4	eu-west-1a

... skipping 17 lines ...
Pod	kube-system/coredns-autoscaler-f85cf5c-zsjcz	system-cluster-critical pod "coredns-autoscaler-f85cf5c-zsjcz" is pending
Pod	kube-system/ebs-csi-node-bwt27			system-node-critical pod "ebs-csi-node-bwt27" is pending
Pod	kube-system/ebs-csi-node-mcggp			system-node-critical pod "ebs-csi-node-mcggp" is pending
Pod	kube-system/ebs-csi-node-wrxxw			system-node-critical pod "ebs-csi-node-wrxxw" is pending
Pod	kube-system/ebs-csi-node-znh5f			system-node-critical pod "ebs-csi-node-znh5f" is pending

Validation Failed
W0809 01:59:13.062061    6478 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-1a	Master	c5.large	1	1	eu-west-1a
nodes-eu-west-1a	Node	t3.medium	4	4	eu-west-1a

... skipping 14 lines ...
Pod	kube-system/calico-node-8d29j	system-node-critical pod "calico-node-8d29j" is pending
Pod	kube-system/calico-node-ql8mb	system-node-critical pod "calico-node-ql8mb" is pending
Pod	kube-system/ebs-csi-node-mcggp	system-node-critical pod "ebs-csi-node-mcggp" is pending
Pod	kube-system/ebs-csi-node-wrxxw	system-node-critical pod "ebs-csi-node-wrxxw" is pending
Pod	kube-system/ebs-csi-node-znh5f	system-node-critical pod "ebs-csi-node-znh5f" is pending

Validation Failed
W0809 01:59:25.080279    6478 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-1a	Master	c5.large	1	1	eu-west-1a
nodes-eu-west-1a	Node	t3.medium	4	4	eu-west-1a

... skipping 13 lines ...
Pod	kube-system/calico-node-8d29j	system-node-critical pod "calico-node-8d29j" is pending
Pod	kube-system/calico-node-ql8mb	system-node-critical pod "calico-node-ql8mb" is pending
Pod	kube-system/ebs-csi-node-mcggp	system-node-critical pod "ebs-csi-node-mcggp" is pending
Pod	kube-system/ebs-csi-node-wrxxw	system-node-critical pod "ebs-csi-node-wrxxw" is pending
Pod	kube-system/ebs-csi-node-znh5f	system-node-critical pod "ebs-csi-node-znh5f" is pending

Validation Failed
W0809 01:59:37.104522    6478 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-1a	Master	c5.large	1	1	eu-west-1a
nodes-eu-west-1a	Node	t3.medium	4	4	eu-west-1a

... skipping 9 lines ...
KIND	NAME				MESSAGE
Pod	kube-system/calico-node-7qjx5	system-node-critical pod "calico-node-7qjx5" is not ready (calico-node)
Pod	kube-system/calico-node-ql8mb	system-node-critical pod "calico-node-ql8mb" is not ready (calico-node)
Pod	kube-system/ebs-csi-node-wrxxw	system-node-critical pod "ebs-csi-node-wrxxw" is pending
Pod	kube-system/ebs-csi-node-znh5f	system-node-critical pod "ebs-csi-node-znh5f" is pending

Validation Failed
W0809 01:59:49.044702    6478 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-1a	Master	c5.large	1	1	eu-west-1a
nodes-eu-west-1a	Node	t3.medium	4	4	eu-west-1a

... skipping 6 lines ...
i-0ab0404c7e37d0d33	node	True

VALIDATION ERRORS
KIND	NAME						MESSAGE
Pod	kube-system/kube-proxy-i-0067245935bf3fc5f	system-node-critical pod "kube-proxy-i-0067245935bf3fc5f" is pending

Validation Failed
W0809 02:00:00.999719    6478 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-1a	Master	c5.large	1	1	eu-west-1a
nodes-eu-west-1a	Node	t3.medium	4	4	eu-west-1a

... skipping 513 lines ...
evicting pod kube-system/calico-kube-controllers-75f4df896c-szhjb
evicting pod kube-system/dns-controller-6684cc95dc-t9lkd
I0809 02:04:48.469571    6594 instancegroups.go:660] Waiting for 5s for pods to stabilize after draining.
I0809 02:04:53.469909    6594 instancegroups.go:591] Stopping instance "i-02281ffcd6b07cd36", node "i-02281ffcd6b07cd36", in group "master-eu-west-1a.masters.e2e-ed4da97961-6b857.test-cncf-aws.k8s.io" (this may take a while).
I0809 02:04:53.709995    6594 instancegroups.go:436] waiting for 15s after terminating instance
I0809 02:05:08.718522    6594 instancegroups.go:470] Validating the cluster.
I0809 02:05:08.918179    6594 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-ed4da97961-6b857.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 54.170.95.163:443: connect: connection refused.
I0809 02:06:08.965769    6594 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-ed4da97961-6b857.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 54.170.95.163:443: i/o timeout.
I0809 02:07:09.021485    6594 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-ed4da97961-6b857.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 54.170.95.163:443: i/o timeout.
I0809 02:08:09.070291    6594 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-ed4da97961-6b857.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 54.170.95.163:443: i/o timeout.
I0809 02:09:09.113971    6594 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-ed4da97961-6b857.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 54.170.95.163:443: i/o timeout.
I0809 02:09:41.976352    6594 instancegroups.go:526] Cluster did not pass validation, will retry in "30s": system-cluster-critical pod "calico-kube-controllers-75f4df896c-dqbpp" is not ready (calico-kube-controllers).
I0809 02:10:13.910122    6594 instancegroups.go:526] Cluster did not pass validation, will retry in "30s": system-cluster-critical pod "calico-kube-controllers-75f4df896c-dqbpp" is not ready (calico-kube-controllers).
I0809 02:10:45.865258    6594 instancegroups.go:526] Cluster did not pass validation, will retry in "30s": system-cluster-critical pod "calico-kube-controllers-75f4df896c-dqbpp" is not ready (calico-kube-controllers).
I0809 02:11:17.987510    6594 instancegroups.go:526] Cluster did not pass validation, will retry in "30s": system-cluster-critical pod "calico-kube-controllers-75f4df896c-dqbpp" is not ready (calico-kube-controllers).
I0809 02:11:49.902445    6594 instancegroups.go:526] Cluster did not pass validation, will retry in "30s": system-cluster-critical pod "calico-kube-controllers-75f4df896c-dqbpp" is not ready (calico-kube-controllers).
I0809 02:12:21.950142    6594 instancegroups.go:506] Cluster validated; revalidating in 10s to make sure it does not flap.
... skipping 60 lines ...
I0809 02:26:05.479948    6594 instancegroups.go:503] Cluster validated.
I0809 02:26:05.480215    6594 instancegroups.go:400] Draining the node: "i-0ab0404c7e37d0d33".
WARNING: ignoring DaemonSet-managed Pods: kube-system/calico-node-kktk7, kube-system/ebs-csi-node-bwt27
evicting pod kube-system/coredns-autoscaler-f85cf5c-zsjcz
evicting pod kube-system/coredns-5c44b6cf7d-2qr5k
evicting pod kube-system/coredns-5c44b6cf7d-fp82t
error when evicting pods/"coredns-5c44b6cf7d-2qr5k" -n "kube-system" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
evicting pod kube-system/coredns-5c44b6cf7d-2qr5k
I0809 02:26:18.314714    6594 instancegroups.go:660] Waiting for 5s for pods to stabilize after draining.
I0809 02:26:23.316955    6594 instancegroups.go:591] Stopping instance "i-0ab0404c7e37d0d33", node "i-0ab0404c7e37d0d33", in group "nodes-eu-west-1a.e2e-ed4da97961-6b857.test-cncf-aws.k8s.io" (this may take a while).
I0809 02:26:23.557627    6594 instancegroups.go:436] waiting for 15s after terminating instance
I0809 02:26:38.558015    6594 instancegroups.go:470] Validating the cluster.
I0809 02:26:41.114302    6594 instancegroups.go:526] Cluster did not pass validation, will retry in "30s": system-node-critical pod "calico-node-5sskn" is pending, system-node-critical pod "kube-proxy-i-0ab0404c7e37d0d33" is not ready (kube-proxy).
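The eviction retry above ("Cannot evict pod as it would violate the pod's disruption budget") is the Eviction API refusing to drop coredns below a PodDisruptionBudget's floor during the node drain; once another replica became ready, the retry succeeded. A hypothetical PDB of that general shape (the actual object in this cluster is not shown in the log; names and labels here are illustrative):

```shell
#!/usr/bin/env bash
# Write an illustrative PodDisruptionBudget of the kind that makes a drain
# retry evictions rather than fail them outright.
cat <<'EOF' > /tmp/coredns-pdb.yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: coredns            # hypothetical name
  namespace: kube-system
spec:
  minAvailable: 1          # eviction is refused while it would leave <1 ready pod
  selector:
    matchLabels:
      k8s-app: kube-dns
EOF
grep -q 'minAvailable: 1' /tmp/coredns-pdb.yaml && echo "pdb sketch written"
```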
... skipping 40 lines ...
I0809 02:28:36.084250    6641 app.go:62] pass rundir-in-artifacts flag True for RunDir to be part of Artifacts
I0809 02:28:36.084299    6641 app.go:64] RunDir for this run: "/home/prow/go/src/k8s.io/kops/_rundir/ee069942-1785-11ed-a40c-ee17f6f0723b"
I0809 02:28:36.088817    6641 app.go:128] ID for this run: "ee069942-1785-11ed-a40c-ee17f6f0723b"
I0809 02:28:36.088862    6641 local.go:42] ⚙️ /home/prow/go/bin/kubetest2-tester-kops --test-package-version=https://storage.googleapis.com/k8s-release-dev/ci/v1.25.0-beta.0.28+25a3274a4f62be --parallel 25
I0809 02:28:36.129488    6659 kubectl.go:148] gsutil cp gs://kubernetes-release/release/https://storage.googleapis.com/k8s-release-dev/ci/v1.25.0-beta.0.28+25a3274a4f62be/kubernetes-client-linux-amd64.tar.gz /root/.cache/kubernetes-client-linux-amd64.tar.gz
CommandException: No URLs matched: gs://kubernetes-release/release/https://storage.googleapis.com/k8s-release-dev/ci/v1.25.0-beta.0.28+25a3274a4f62be/kubernetes-client-linux-amd64.tar.gz
F0809 02:28:39.372801    6659 tester.go:482] failed to run ginkgo tester: failed to get kubectl package from published releases: failed to download release tar kubernetes-client-linux-amd64.tar.gz for release https://storage.googleapis.com/k8s-release-dev/ci/v1.25.0-beta.0.28+25a3274a4f62be: exit status 1
Error: exit status 255
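The fatal error above is a malformed object path: the tester prepended the stable-release bucket prefix `gs://kubernetes-release/release/` to a `--test-package-version` that was already a full CI https URL, producing a `gs://...https://...` path that `gsutil` can never match. A sketch reproducing the string concatenation (variable names are illustrative, not the tester's actual ones):

```shell
#!/usr/bin/env bash
# Reproduce the malformed download path from the failing gsutil invocation:
# a release-bucket prefix is joined with a version that is itself a full URL.
RELEASE_BUCKET="gs://kubernetes-release/release/"
VERSION="https://storage.googleapis.com/k8s-release-dev/ci/v1.25.0-beta.0.28+25a3274a4f62be"
URL="${RELEASE_BUCKET}${VERSION}/kubernetes-client-linux-amd64.tar.gz"
echo "$URL"  # gs://kubernetes-release/release/https://... — no object can match this
```

Passing a bare version string (or pointing the tester at the CI bucket directly) avoids the double-prefixed path.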
+ kops-finish
+ kubetest2 kops -v=2 --cloud-provider=aws --cluster-name=e2e-ed4da97961-6b857.test-cncf-aws.k8s.io --kops-root=/home/prow/go/src/k8s.io/kops --admin-access= --env=KOPS_FEATURE_FLAGS=SpecOverrideFlag --kops-binary-path=/tmp/kops.tK48tob2b --down
I0809 02:28:39.418020    6847 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
I0809 02:28:39.420821    6847 app.go:61] The files in RunDir shall not be part of Artifacts
I0809 02:28:39.420859    6847 app.go:62] pass rundir-in-artifacts flag True for RunDir to be part of Artifacts
I0809 02:28:39.420896    6847 app.go:64] RunDir for this run: "/home/prow/go/src/k8s.io/kops/_rundir/ee069942-1785-11ed-a40c-ee17f6f0723b"
... skipping 314 lines ...