Result: FAILURE
Tests: 0 failed / 1 succeeded
Started: 2022-08-10 16:53
Elapsed: 35m5s
Revision: master

No Test Failures!



Error lines from build-log.txt

... skipping 234 lines ...
I0810 16:54:15.993633    6310 app.go:62] pass rundir-in-artifacts flag True for RunDir to be part of Artifacts
I0810 16:54:15.993671    6310 app.go:64] RunDir for this run: "/home/prow/go/src/k8s.io/kops/_rundir/d300464c-18cc-11ed-a3cd-9a8e9eec334c"
I0810 16:54:16.014569    6310 app.go:128] ID for this run: "d300464c-18cc-11ed-a3cd-9a8e9eec334c"
I0810 16:54:16.015019    6310 local.go:42] ⚙️ ssh-keygen -t ed25519 -N  -q -f /tmp/kops/e2e-ed4da97961-6b857.test-cncf-aws.k8s.io/id_ed25519
I0810 16:54:16.027176    6310 dumplogs.go:45] /tmp/kops.OaBpqDJ4i toolbox dump --name e2e-ed4da97961-6b857.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /tmp/kops/e2e-ed4da97961-6b857.test-cncf-aws.k8s.io/id_ed25519 --ssh-user ubuntu
I0810 16:54:16.027220    6310 local.go:42] ⚙️ /tmp/kops.OaBpqDJ4i toolbox dump --name e2e-ed4da97961-6b857.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /tmp/kops/e2e-ed4da97961-6b857.test-cncf-aws.k8s.io/id_ed25519 --ssh-user ubuntu
W0810 16:54:16.540339    6310 down.go:34] Dumping cluster logs at the start of Down() failed: exit status 1
I0810 16:54:16.540440    6310 down.go:48] /tmp/kops.OaBpqDJ4i delete cluster --name e2e-ed4da97961-6b857.test-cncf-aws.k8s.io --yes
I0810 16:54:16.540451    6310 local.go:42] ⚙️ /tmp/kops.OaBpqDJ4i delete cluster --name e2e-ed4da97961-6b857.test-cncf-aws.k8s.io --yes
I0810 16:54:16.571396    6333 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
I0810 16:54:16.571496    6333 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
Error: error reading cluster configuration: Cluster.kops.k8s.io "e2e-ed4da97961-6b857.test-cncf-aws.k8s.io" not found
Error: exit status 1
+ echo 'kubetest2 down failed'
kubetest2 down failed
+ [[ l == \v ]]
++ kops-base-from-marker latest
++ [[ latest =~ ^https: ]]
++ [[ latest == \l\a\t\e\s\t ]]
++ curl -s https://storage.googleapis.com/kops-ci/bin/latest-ci-updown-green.txt
+ KOPS_BASE_URL=https://storage.googleapis.com/kops-ci/bin/1.25.0-alpha.3+v1.25.0-alpha.2-75-g18cba87e91
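The trace above shows `kops-base-from-marker` turning the marker `latest` into a concrete `KOPS_BASE_URL` by fetching a green-build marker file. A minimal sketch of that logic, assuming the behavior visible in the trace (the real helper lives in the kops test scripts and may differ):

```shell
# Sketch of the marker-resolution logic seen in the trace above (assumed
# from the log output, not the actual helper implementation).
kops_base_from_marker() {
  local marker="$1"
  case "$marker" in
    https:*)  echo "$marker" ;;   # already a full base URL, pass through
    latest)   curl -s https://storage.googleapis.com/kops-ci/bin/latest-ci-updown-green.txt ;;
    *)        echo "unknown marker: $marker" >&2; return 1 ;;
  esac
}
```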
... skipping 14 lines ...
I0810 16:54:18.654654    6371 app.go:62] pass rundir-in-artifacts flag True for RunDir to be part of Artifacts
I0810 16:54:18.654685    6371 app.go:64] RunDir for this run: "/home/prow/go/src/k8s.io/kops/_rundir/d300464c-18cc-11ed-a3cd-9a8e9eec334c"
I0810 16:54:18.700417    6371 app.go:128] ID for this run: "d300464c-18cc-11ed-a3cd-9a8e9eec334c"
I0810 16:54:18.700508    6371 up.go:44] Cleaning up any leaked resources from previous cluster
I0810 16:54:18.700558    6371 dumplogs.go:45] /tmp/kops.qP8kF2NFH toolbox dump --name e2e-ed4da97961-6b857.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /tmp/kops/e2e-ed4da97961-6b857.test-cncf-aws.k8s.io/id_ed25519 --ssh-user ubuntu
I0810 16:54:18.700600    6371 local.go:42] ⚙️ /tmp/kops.qP8kF2NFH toolbox dump --name e2e-ed4da97961-6b857.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /tmp/kops/e2e-ed4da97961-6b857.test-cncf-aws.k8s.io/id_ed25519 --ssh-user ubuntu
W0810 16:54:19.170292    6371 down.go:34] Dumping cluster logs at the start of Down() failed: exit status 1
I0810 16:54:19.170347    6371 down.go:48] /tmp/kops.qP8kF2NFH delete cluster --name e2e-ed4da97961-6b857.test-cncf-aws.k8s.io --yes
I0810 16:54:19.170367    6371 local.go:42] ⚙️ /tmp/kops.qP8kF2NFH delete cluster --name e2e-ed4da97961-6b857.test-cncf-aws.k8s.io --yes
I0810 16:54:19.202482    6390 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
I0810 16:54:19.202577    6390 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
Error: error reading cluster configuration: Cluster.kops.k8s.io "e2e-ed4da97961-6b857.test-cncf-aws.k8s.io" not found
I0810 16:54:19.638704    6371 http.go:37] curl http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip
2022/08/10 16:54:19 failed to get external ip from metadata service: http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip returned 404
I0810 16:54:19.650304    6371 http.go:37] curl https://ip.jsb.workers.dev
I0810 16:54:19.808758    6371 up.go:159] /tmp/kops.qP8kF2NFH create cluster --name e2e-ed4da97961-6b857.test-cncf-aws.k8s.io --cloud aws --kubernetes-version v1.24.3 --ssh-public-key /tmp/kops/e2e-ed4da97961-6b857.test-cncf-aws.k8s.io/id_ed25519.pub --override cluster.spec.nodePortAccess=0.0.0.0/0 --networking calico --admin-access 35.224.236.190/32 --master-count 1 --master-volume-size 48 --node-count 4 --node-volume-size 48 --zones eu-west-2a --master-size c5.large
I0810 16:54:19.808816    6371 local.go:42] ⚙️ /tmp/kops.qP8kF2NFH create cluster --name e2e-ed4da97961-6b857.test-cncf-aws.k8s.io --cloud aws --kubernetes-version v1.24.3 --ssh-public-key /tmp/kops/e2e-ed4da97961-6b857.test-cncf-aws.k8s.io/id_ed25519.pub --override cluster.spec.nodePortAccess=0.0.0.0/0 --networking calico --admin-access 35.224.236.190/32 --master-count 1 --master-volume-size 48 --node-count 4 --node-volume-size 48 --zones eu-west-2a --master-size c5.large
I0810 16:54:19.843721    6402 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
I0810 16:54:19.843810    6402 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
I0810 16:54:19.861785    6402 create_cluster.go:862] Using SSH public key: /tmp/kops/e2e-ed4da97961-6b857.test-cncf-aws.k8s.io/id_ed25519.pub
... skipping 515 lines ...
I0810 16:55:03.845454    6371 up.go:243] /tmp/kops.qP8kF2NFH validate cluster --name e2e-ed4da97961-6b857.test-cncf-aws.k8s.io --count 10 --wait 15m0s
I0810 16:55:03.845544    6371 local.go:42] ⚙️ /tmp/kops.qP8kF2NFH validate cluster --name e2e-ed4da97961-6b857.test-cncf-aws.k8s.io --count 10 --wait 15m0s
I0810 16:55:03.877717    6442 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
I0810 16:55:03.877823    6442 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
Validating cluster e2e-ed4da97961-6b857.test-cncf-aws.k8s.io

W0810 16:55:05.180313    6442 validate_cluster.go:184] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-ed4da97961-6b857.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
W0810 16:55:15.229332    6442 validate_cluster.go:184] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-ed4da97961-6b857.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.medium	4	4	eu-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
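The validation message above describes the gating condition: the `api.` record either does not resolve yet or still points at the placeholder address 203.0.113.123 that kops creates before dns-controller updates it. A hedged diagnostic sketch for classifying the resolved address (hypothetical helper; feed it the output of `dig +short api.<cluster>`):

```shell
# Classify the resolved API address per the validation message above.
# 203.0.113.123 is the placeholder kops creates before dns-controller runs.
check_api_dns() {
  local ip="$1"
  case "$ip" in
    "")             echo "no record yet" ;;
    203.0.113.123)  echo "placeholder (dns-controller has not updated)" ;;
    *)              echo "updated: $ip" ;;
  esac
}
```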
W0810 16:55:25.283649    6442 validate_cluster.go:232] (will retry): cluster not yet healthy
... skipping 15 lines ...
W0810 16:55:35.318548    6442 validate_cluster.go:232] (will retry): cluster not yet healthy
... skipping 15 lines ...
W0810 16:55:45.352094    6442 validate_cluster.go:232] (will retry): cluster not yet healthy
... skipping 15 lines ...
W0810 16:55:55.388407    6442 validate_cluster.go:232] (will retry): cluster not yet healthy
... skipping 15 lines ...
W0810 16:56:05.435487    6442 validate_cluster.go:232] (will retry): cluster not yet healthy
... skipping 15 lines ...
W0810 16:56:15.488110    6442 validate_cluster.go:232] (will retry): cluster not yet healthy
... skipping 15 lines ...
W0810 16:56:25.539665    6442 validate_cluster.go:232] (will retry): cluster not yet healthy
... skipping 15 lines ...
W0810 16:56:35.572769    6442 validate_cluster.go:232] (will retry): cluster not yet healthy
... skipping 15 lines ...
W0810 16:56:45.606084    6442 validate_cluster.go:232] (will retry): cluster not yet healthy
... skipping 15 lines ...
W0810 16:56:55.638205    6442 validate_cluster.go:232] (will retry): cluster not yet healthy
... skipping 15 lines ...
W0810 16:57:05.715695    6442 validate_cluster.go:232] (will retry): cluster not yet healthy
... skipping 15 lines ...
W0810 16:57:15.785684    6442 validate_cluster.go:232] (will retry): cluster not yet healthy
... skipping 15 lines ...
W0810 16:57:25.819563    6442 validate_cluster.go:232] (will retry): cluster not yet healthy
... skipping 15 lines ...
W0810 16:57:35.863910    6442 validate_cluster.go:232] (will retry): cluster not yet healthy
... skipping 15 lines ...
W0810 16:57:45.900158    6442 validate_cluster.go:232] (will retry): cluster not yet healthy
... skipping 15 lines ...
W0810 16:57:55.939187    6442 validate_cluster.go:232] (will retry): cluster not yet healthy
... skipping 15 lines ...
W0810 16:58:05.993532    6442 validate_cluster.go:232] (will retry): cluster not yet healthy
... skipping 15 lines ...
W0810 16:58:16.030089    6442 validate_cluster.go:232] (will retry): cluster not yet healthy
... skipping 15 lines ...
W0810 16:58:26.062835    6442 validate_cluster.go:232] (will retry): cluster not yet healthy
... skipping 15 lines ...
W0810 16:58:36.096259    6442 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.medium	4	4	eu-west-2a

... skipping 19 lines ...
Pod	kube-system/coredns-autoscaler-f85cf5c-m6swq	system-cluster-critical pod "coredns-autoscaler-f85cf5c-m6swq" is pending
Pod	kube-system/ebs-csi-node-2k8f7			system-node-critical pod "ebs-csi-node-2k8f7" is pending
Pod	kube-system/ebs-csi-node-44vh4			system-node-critical pod "ebs-csi-node-44vh4" is pending
Pod	kube-system/ebs-csi-node-lqxj9			system-node-critical pod "ebs-csi-node-lqxj9" is pending
Pod	kube-system/ebs-csi-node-v8n2v			system-node-critical pod "ebs-csi-node-v8n2v" is pending

Validation Failed
W0810 16:58:48.787489    6442 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.medium	4	4	eu-west-2a

... skipping 19 lines ...
Pod	kube-system/coredns-autoscaler-f85cf5c-m6swq	system-cluster-critical pod "coredns-autoscaler-f85cf5c-m6swq" is pending
Pod	kube-system/ebs-csi-node-2k8f7			system-node-critical pod "ebs-csi-node-2k8f7" is pending
Pod	kube-system/ebs-csi-node-44vh4			system-node-critical pod "ebs-csi-node-44vh4" is pending
Pod	kube-system/ebs-csi-node-lqxj9			system-node-critical pod "ebs-csi-node-lqxj9" is pending
Pod	kube-system/ebs-csi-node-v8n2v			system-node-critical pod "ebs-csi-node-v8n2v" is pending

Validation Failed
W0810 16:59:00.686377    6442 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.medium	4	4	eu-west-2a

... skipping 18 lines ...
Pod	kube-system/ebs-csi-node-2k8f7			system-node-critical pod "ebs-csi-node-2k8f7" is pending
Pod	kube-system/ebs-csi-node-44vh4			system-node-critical pod "ebs-csi-node-44vh4" is pending
Pod	kube-system/ebs-csi-node-lqxj9			system-node-critical pod "ebs-csi-node-lqxj9" is pending
Pod	kube-system/ebs-csi-node-v8n2v			system-node-critical pod "ebs-csi-node-v8n2v" is pending
Pod	kube-system/kube-proxy-i-0357f33c33143e667	system-node-critical pod "kube-proxy-i-0357f33c33143e667" is pending

Validation Failed
W0810 16:59:12.647414    6442 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.medium	4	4	eu-west-2a

... skipping 14 lines ...
Pod	kube-system/calico-node-vfs7n	system-node-critical pod "calico-node-vfs7n" is pending
Pod	kube-system/calico-node-zlpp5	system-node-critical pod "calico-node-zlpp5" is pending
Pod	kube-system/ebs-csi-node-2k8f7	system-node-critical pod "ebs-csi-node-2k8f7" is pending
Pod	kube-system/ebs-csi-node-lqxj9	system-node-critical pod "ebs-csi-node-lqxj9" is pending
Pod	kube-system/ebs-csi-node-v8n2v	system-node-critical pod "ebs-csi-node-v8n2v" is pending

Validation Failed
W0810 16:59:24.442387    6442 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.medium	4	4	eu-west-2a

... skipping 13 lines ...
Pod	kube-system/ebs-csi-node-2k8f7			system-node-critical pod "ebs-csi-node-2k8f7" is pending
Pod	kube-system/ebs-csi-node-lqxj9			system-node-critical pod "ebs-csi-node-lqxj9" is pending
Pod	kube-system/ebs-csi-node-v8n2v			system-node-critical pod "ebs-csi-node-v8n2v" is pending
Pod	kube-system/kube-proxy-i-060eecba0520748bf	system-node-critical pod "kube-proxy-i-060eecba0520748bf" is pending
Pod	kube-system/kube-proxy-i-0f4686c82bf26152c	system-node-critical pod "kube-proxy-i-0f4686c82bf26152c" is pending

Validation Failed
W0810 16:59:36.347888    6442 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.medium	4	4	eu-west-2a

... skipping 7 lines ...

VALIDATION ERRORS
KIND	NAME				MESSAGE
Pod	kube-system/calico-node-ndhp7	system-node-critical pod "calico-node-ndhp7" is not ready (calico-node)
Pod	kube-system/ebs-csi-node-v8n2v	system-node-critical pod "ebs-csi-node-v8n2v" is pending

Validation Failed
W0810 16:59:48.190672    6442 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.medium	4	4	eu-west-2a

... skipping 548 lines ...
evicting pod kube-system/calico-kube-controllers-75f4df896c-cd65g
evicting pod kube-system/dns-controller-6684cc95dc-cz6ml
I0810 17:04:29.169543    6555 instancegroups.go:660] Waiting for 5s for pods to stabilize after draining.
I0810 17:04:34.171022    6555 instancegroups.go:591] Stopping instance "i-03ad997001130670a", node "i-03ad997001130670a", in group "master-eu-west-2a.masters.e2e-ed4da97961-6b857.test-cncf-aws.k8s.io" (this may take a while).
I0810 17:04:34.443141    6555 instancegroups.go:436] waiting for 15s after terminating instance
I0810 17:04:49.450793    6555 instancegroups.go:470] Validating the cluster.
I0810 17:04:49.620191    6555 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-ed4da97961-6b857.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 35.176.184.129:443: connect: connection refused.
I0810 17:05:49.653364    6555 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-ed4da97961-6b857.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 35.176.184.129:443: i/o timeout.
I0810 17:06:49.691252    6555 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-ed4da97961-6b857.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 35.176.184.129:443: i/o timeout.
I0810 17:07:49.754675    6555 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-ed4da97961-6b857.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 35.176.184.129:443: i/o timeout.
I0810 17:08:19.794261    6555 instancegroups.go:516] Cluster did not validate, will retry in "30s": unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-ed4da97961-6b857.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host.
I0810 17:08:52.467737    6555 instancegroups.go:526] Cluster did not pass validation, will retry in "30s": system-cluster-critical pod "calico-kube-controllers-75f4df896c-dskf4" is not ready (calico-kube-controllers), system-node-critical pod "calico-node-s2tz9" is not ready (calico-node), system-cluster-critical pod "ebs-csi-controller-5b4bd8874c-klrcj" is pending.
I0810 17:09:24.283882    6555 instancegroups.go:506] Cluster validated; revalidating in 10s to make sure it does not flap.
I0810 17:09:36.173420    6555 instancegroups.go:503] Cluster validated.
I0810 17:09:36.173471    6555 instancegroups.go:470] Validating the cluster.
I0810 17:09:37.704710    6555 instancegroups.go:503] Cluster validated.
... skipping 9 lines ...
I0810 17:12:15.275906    6555 instancegroups.go:503] Cluster validated.
I0810 17:12:15.275998    6555 instancegroups.go:400] Draining the node: "i-0357f33c33143e667".
WARNING: ignoring DaemonSet-managed Pods: kube-system/calico-node-b25cl, kube-system/ebs-csi-node-44vh4
evicting pod kube-system/coredns-autoscaler-f85cf5c-m6swq
evicting pod kube-system/coredns-5c44b6cf7d-q5ht2
evicting pod kube-system/coredns-5c44b6cf7d-79rdf
error when evicting pods/"coredns-5c44b6cf7d-79rdf" -n "kube-system" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
evicting pod kube-system/coredns-5c44b6cf7d-79rdf
I0810 17:12:28.095097    6555 instancegroups.go:660] Waiting for 5s for pods to stabilize after draining.
I0810 17:12:33.095402    6555 instancegroups.go:591] Stopping instance "i-0357f33c33143e667", node "i-0357f33c33143e667", in group "nodes-eu-west-2a.e2e-ed4da97961-6b857.test-cncf-aws.k8s.io" (this may take a while).
I0810 17:12:33.351470    6555 instancegroups.go:436] waiting for 15s after terminating instance
I0810 17:12:48.358961    6555 instancegroups.go:470] Validating the cluster.
I0810 17:12:50.567524    6555 instancegroups.go:526] Cluster did not pass validation, will retry in "30s": InstanceGroup "nodes-eu-west-2a" did not have enough nodes 3 vs 4, system-node-critical pod "calico-node-6zwr2" is pending, system-node-critical pod "kube-proxy-i-0357f33c33143e667" is not ready (kube-proxy).
... skipping 84 lines ...
I0810 17:25:06.784371    6605 app.go:64] RunDir for this run: "/home/prow/go/src/k8s.io/kops/_rundir/d300464c-18cc-11ed-a3cd-9a8e9eec334c"
I0810 17:25:06.788375    6605 app.go:128] ID for this run: "d300464c-18cc-11ed-a3cd-9a8e9eec334c"
I0810 17:25:06.788411    6605 local.go:42] ⚙️ /home/prow/go/bin/kubetest2-tester-kops --test-package-version=https://storage.googleapis.com/k8s-release-dev/ci/v1.26.0-alpha.0.5+a38bb7ed811a18 --parallel 25
I0810 17:25:06.807913    6626 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
I0810 17:25:06.814532    6626 kubectl.go:148] gsutil cp gs://kubernetes-release/release/https://storage.googleapis.com/k8s-release-dev/ci/v1.26.0-alpha.0.5+a38bb7ed811a18/kubernetes-client-linux-amd64.tar.gz /root/.cache/kubernetes-client-linux-amd64.tar.gz
CommandException: No URLs matched: gs://kubernetes-release/release/https://storage.googleapis.com/k8s-release-dev/ci/v1.26.0-alpha.0.5+a38bb7ed811a18/kubernetes-client-linux-amd64.tar.gz
F0810 17:25:08.596092    6626 tester.go:482] failed to run ginkgo tester: failed to get kubectl package from published releases: failed to download release tar kubernetes-client-linux-amd64.tar.gz for release https://storage.googleapis.com/k8s-release-dev/ci/v1.26.0-alpha.0.5+a38bb7ed811a18: exit status 1
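The root cause of this failure is visible two lines up: the tester prefixed `gs://kubernetes-release/release/` onto a `--test-package-version` value that was already a full `https://` URL, producing a nonsense `gsutil` path that matched no objects. A marker-aware sketch of how the tarball source could be chosen (hypothetical helper, not the tester's actual code):

```shell
# Hypothetical sketch: pick the client-tarball source depending on whether
# the version string is already a full URL (as kubetest2 passed here) or a
# bare release tag that belongs under gs://kubernetes-release/release/.
client_tarball_url() {
  local version="$1"
  case "$version" in
    https://*) echo "${version}/kubernetes-client-linux-amd64.tar.gz" ;;
    *)         echo "gs://kubernetes-release/release/${version}/kubernetes-client-linux-amd64.tar.gz" ;;
  esac
}
```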
Error: exit status 255
+ kops-finish
+ kubetest2 kops -v=2 --cloud-provider=aws --cluster-name=e2e-ed4da97961-6b857.test-cncf-aws.k8s.io --kops-root=/home/prow/go/src/k8s.io/kops --admin-access= --env=KOPS_FEATURE_FLAGS=SpecOverrideFlag --kops-binary-path=/tmp/kops.OaBpqDJ4i --down
I0810 17:25:08.632682    6816 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
I0810 17:25:08.635949    6816 app.go:61] The files in RunDir shall not be part of Artifacts
I0810 17:25:08.635985    6816 app.go:62] pass rundir-in-artifacts flag True for RunDir to be part of Artifacts
I0810 17:25:08.636013    6816 app.go:64] RunDir for this run: "/home/prow/go/src/k8s.io/kops/_rundir/d300464c-18cc-11ed-a3cd-9a8e9eec334c"
... skipping 294 lines ...