Result: FAILURE
Tests: 0 failed / 1 succeeded
Started: 2022-08-07 10:52
Elapsed: 35m33s
Revision: master

No Test Failures!



Error lines from build-log.txt

... skipping 234 lines ...
I0807 10:54:02.216970    6341 app.go:62] pass rundir-in-artifacts flag True for RunDir to be part of Artifacts
I0807 10:54:02.216996    6341 app.go:64] RunDir for this run: "/home/prow/go/src/k8s.io/kops/_rundir/f8661a1d-163e-11ed-bcf2-1217529f69d6"
I0807 10:54:02.264949    6341 app.go:128] ID for this run: "f8661a1d-163e-11ed-bcf2-1217529f69d6"
I0807 10:54:02.265428    6341 local.go:42] ⚙️ ssh-keygen -t ed25519 -N  -q -f /tmp/kops/e2e-ed4da97961-6b857.test-cncf-aws.k8s.io/id_ed25519
I0807 10:54:02.279940    6341 dumplogs.go:45] /tmp/kops.0BFFduWEv toolbox dump --name e2e-ed4da97961-6b857.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /tmp/kops/e2e-ed4da97961-6b857.test-cncf-aws.k8s.io/id_ed25519 --ssh-user ubuntu
I0807 10:54:02.280009    6341 local.go:42] ⚙️ /tmp/kops.0BFFduWEv toolbox dump --name e2e-ed4da97961-6b857.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /tmp/kops/e2e-ed4da97961-6b857.test-cncf-aws.k8s.io/id_ed25519 --ssh-user ubuntu
W0807 10:54:02.797679    6341 down.go:34] Dumping cluster logs at the start of Down() failed: exit status 1
I0807 10:54:02.797729    6341 down.go:48] /tmp/kops.0BFFduWEv delete cluster --name e2e-ed4da97961-6b857.test-cncf-aws.k8s.io --yes
I0807 10:54:02.797757    6341 local.go:42] ⚙️ /tmp/kops.0BFFduWEv delete cluster --name e2e-ed4da97961-6b857.test-cncf-aws.k8s.io --yes
I0807 10:54:02.831138    6362 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
I0807 10:54:02.831242    6362 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
Error: error reading cluster configuration: Cluster.kops.k8s.io "e2e-ed4da97961-6b857.test-cncf-aws.k8s.io" not found
Error: exit status 1
+ echo 'kubetest2 down failed'
kubetest2 down failed
+ [[ l == \v ]]
++ kops-base-from-marker latest
++ [[ latest =~ ^https: ]]
++ [[ latest == \l\a\t\e\s\t ]]
++ curl -s https://storage.googleapis.com/kops-ci/bin/latest-ci-updown-green.txt
+ KOPS_BASE_URL=https://storage.googleapis.com/kops-ci/bin/1.25.0-alpha.3+v1.25.0-alpha.2-62-g82ec66a033
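The `kops-base-from-marker` call traced above turns a version marker into a download base URL. A minimal sketch of that logic, inferred from the trace (the real harness function may handle more cases):

```shell
# Inferred sketch of kops-base-from-marker: a full https URL passes through
# unchanged; the literal "latest" is resolved by fetching the green-marker
# file from the kops CI bucket, which yields a pinned build URL.
kops_base_from_marker() {
  case "$1" in
    https:*) echo "$1" ;;   # already a full base URL: use as-is
    latest)  curl -s https://storage.googleapis.com/kops-ci/bin/latest-ci-updown-green.txt ;;
  esac
}
```

In this run the marker was `latest`, so the curl path produced the pinned `1.25.0-alpha.3+...` base URL seen on the next line.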
... skipping 14 lines ...
I0807 10:54:04.598730    6398 app.go:62] pass rundir-in-artifacts flag True for RunDir to be part of Artifacts
I0807 10:54:04.598777    6398 app.go:64] RunDir for this run: "/home/prow/go/src/k8s.io/kops/_rundir/f8661a1d-163e-11ed-bcf2-1217529f69d6"
I0807 10:54:04.625876    6398 app.go:128] ID for this run: "f8661a1d-163e-11ed-bcf2-1217529f69d6"
I0807 10:54:04.625983    6398 up.go:44] Cleaning up any leaked resources from previous cluster
I0807 10:54:04.626028    6398 dumplogs.go:45] /tmp/kops.ydJxQNLjc toolbox dump --name e2e-ed4da97961-6b857.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /tmp/kops/e2e-ed4da97961-6b857.test-cncf-aws.k8s.io/id_ed25519 --ssh-user ubuntu
I0807 10:54:04.626062    6398 local.go:42] ⚙️ /tmp/kops.ydJxQNLjc toolbox dump --name e2e-ed4da97961-6b857.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /tmp/kops/e2e-ed4da97961-6b857.test-cncf-aws.k8s.io/id_ed25519 --ssh-user ubuntu
W0807 10:54:05.125346    6398 down.go:34] Dumping cluster logs at the start of Down() failed: exit status 1
I0807 10:54:05.125409    6398 down.go:48] /tmp/kops.ydJxQNLjc delete cluster --name e2e-ed4da97961-6b857.test-cncf-aws.k8s.io --yes
I0807 10:54:05.125420    6398 local.go:42] ⚙️ /tmp/kops.ydJxQNLjc delete cluster --name e2e-ed4da97961-6b857.test-cncf-aws.k8s.io --yes
I0807 10:54:05.159401    6420 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
I0807 10:54:05.159510    6420 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
Error: error reading cluster configuration: Cluster.kops.k8s.io "e2e-ed4da97961-6b857.test-cncf-aws.k8s.io" not found
I0807 10:54:05.640999    6398 http.go:37] curl http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip
2022/08/07 10:54:05 failed to get external ip from metadata service: http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip returned 404
I0807 10:54:05.657393    6398 http.go:37] curl https://ip.jsb.workers.dev
I0807 10:54:05.771030    6398 up.go:159] /tmp/kops.ydJxQNLjc create cluster --name e2e-ed4da97961-6b857.test-cncf-aws.k8s.io --cloud aws --kubernetes-version v1.24.3 --ssh-public-key /tmp/kops/e2e-ed4da97961-6b857.test-cncf-aws.k8s.io/id_ed25519.pub --override cluster.spec.nodePortAccess=0.0.0.0/0 --networking calico --admin-access 34.67.147.98/32 --master-count 1 --master-volume-size 48 --node-count 4 --node-volume-size 48 --zones eu-central-1a --master-size c5.large
I0807 10:54:05.771073    6398 local.go:42] ⚙️ /tmp/kops.ydJxQNLjc create cluster --name e2e-ed4da97961-6b857.test-cncf-aws.k8s.io --cloud aws --kubernetes-version v1.24.3 --ssh-public-key /tmp/kops/e2e-ed4da97961-6b857.test-cncf-aws.k8s.io/id_ed25519.pub --override cluster.spec.nodePortAccess=0.0.0.0/0 --networking calico --admin-access 34.67.147.98/32 --master-count 1 --master-volume-size 48 --node-count 4 --node-volume-size 48 --zones eu-central-1a --master-size c5.large
I0807 10:54:05.803321    6429 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
I0807 10:54:05.803437    6429 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
I0807 10:54:05.825259    6429 create_cluster.go:862] Using SSH public key: /tmp/kops/e2e-ed4da97961-6b857.test-cncf-aws.k8s.io/id_ed25519.pub
... skipping 515 lines ...
I0807 10:54:50.643553    6398 up.go:243] /tmp/kops.ydJxQNLjc validate cluster --name e2e-ed4da97961-6b857.test-cncf-aws.k8s.io --count 10 --wait 15m0s
I0807 10:54:50.643605    6398 local.go:42] ⚙️ /tmp/kops.ydJxQNLjc validate cluster --name e2e-ed4da97961-6b857.test-cncf-aws.k8s.io --count 10 --wait 15m0s
I0807 10:54:50.680040    6468 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
I0807 10:54:50.680148    6468 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
Validating cluster e2e-ed4da97961-6b857.test-cncf-aws.k8s.io

W0807 10:54:51.944686    6468 validate_cluster.go:184] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-ed4da97961-6b857.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-central-1a	Master	c5.large	1	1	eu-central-1a
nodes-eu-central-1a	Node	t3.medium	4	4	eu-central-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0807 10:55:01.995031    6468 validate_cluster.go:232] (will retry): cluster not yet healthy
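The placeholder address 203.0.113.123 in the message above is what kops seeds into the API DNS record before dns-controller rewrites it with a real master IP, so seeing it during the first minutes means "master not up yet", not a DNS misconfiguration. A small diagnostic sketch (hypothetical helper, not part of kops or the harness):

```shell
# Hypothetical diagnostic helper: kops seeds the API DNS record with the
# documentation-range placeholder 203.0.113.123 until dns-controller updates
# it, so that address distinguishes "not yet populated" from a real record.
is_placeholder_ip() {
  [ "$1" = "203.0.113.123" ]
}

# usage sketch (requires dig and a live cluster):
#   api_ip=$(dig +short "api.${CLUSTER_NAME}" | head -n1)
#   if is_placeholder_ip "$api_ip"; then echo "dns-controller has not updated the record yet"; fi
```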
W0807 10:55:12.081392    6468 validate_cluster.go:184] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-ed4da97961-6b857.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-central-1a	Master	c5.large	1	1	eu-central-1a
nodes-eu-central-1a	Node	t3.medium	4	4	eu-central-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0807 10:55:22.119093    6468 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-central-1a	Master	c5.large	1	1	eu-central-1a
nodes-eu-central-1a	Node	t3.medium	4	4	eu-central-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0807 10:55:32.159528    6468 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-central-1a	Master	c5.large	1	1	eu-central-1a
nodes-eu-central-1a	Node	t3.medium	4	4	eu-central-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0807 10:55:42.211968    6468 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-central-1a	Master	c5.large	1	1	eu-central-1a
nodes-eu-central-1a	Node	t3.medium	4	4	eu-central-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0807 10:55:52.254515    6468 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-central-1a	Master	c5.large	1	1	eu-central-1a
nodes-eu-central-1a	Node	t3.medium	4	4	eu-central-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0807 10:56:02.292251    6468 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-central-1a	Master	c5.large	1	1	eu-central-1a
nodes-eu-central-1a	Node	t3.medium	4	4	eu-central-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0807 10:56:12.326480    6468 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-central-1a	Master	c5.large	1	1	eu-central-1a
nodes-eu-central-1a	Node	t3.medium	4	4	eu-central-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0807 10:56:22.374441    6468 validate_cluster.go:232] (will retry): cluster not yet healthy
W0807 10:56:32.409733    6468 validate_cluster.go:184] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-ed4da97961-6b857.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-central-1a	Master	c5.large	1	1	eu-central-1a
nodes-eu-central-1a	Node	t3.medium	4	4	eu-central-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0807 10:56:42.444375    6468 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-central-1a	Master	c5.large	1	1	eu-central-1a
nodes-eu-central-1a	Node	t3.medium	4	4	eu-central-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0807 10:56:52.486588    6468 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-central-1a	Master	c5.large	1	1	eu-central-1a
nodes-eu-central-1a	Node	t3.medium	4	4	eu-central-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0807 10:57:02.530939    6468 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-central-1a	Master	c5.large	1	1	eu-central-1a
nodes-eu-central-1a	Node	t3.medium	4	4	eu-central-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0807 10:57:12.567535    6468 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-central-1a	Master	c5.large	1	1	eu-central-1a
nodes-eu-central-1a	Node	t3.medium	4	4	eu-central-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0807 10:57:22.603808    6468 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-central-1a	Master	c5.large	1	1	eu-central-1a
nodes-eu-central-1a	Node	t3.medium	4	4	eu-central-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0807 10:57:32.640941    6468 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-central-1a	Master	c5.large	1	1	eu-central-1a
nodes-eu-central-1a	Node	t3.medium	4	4	eu-central-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0807 10:57:42.676760    6468 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-central-1a	Master	c5.large	1	1	eu-central-1a
nodes-eu-central-1a	Node	t3.medium	4	4	eu-central-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0807 10:57:52.712438    6468 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-central-1a	Master	c5.large	1	1	eu-central-1a
nodes-eu-central-1a	Node	t3.medium	4	4	eu-central-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0807 10:58:02.752267    6468 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-central-1a	Master	c5.large	1	1	eu-central-1a
nodes-eu-central-1a	Node	t3.medium	4	4	eu-central-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0807 10:58:12.795423    6468 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-central-1a	Master	c5.large	1	1	eu-central-1a
nodes-eu-central-1a	Node	t3.medium	4	4	eu-central-1a

... skipping 19 lines ...
Pod	kube-system/coredns-autoscaler-f85cf5c-crrnc	system-cluster-critical pod "coredns-autoscaler-f85cf5c-crrnc" is pending
Pod	kube-system/ebs-csi-node-96kls			system-node-critical pod "ebs-csi-node-96kls" is pending
Pod	kube-system/ebs-csi-node-g5xm4			system-node-critical pod "ebs-csi-node-g5xm4" is pending
Pod	kube-system/ebs-csi-node-l8dq6			system-node-critical pod "ebs-csi-node-l8dq6" is pending
Pod	kube-system/ebs-csi-node-zz9xc			system-node-critical pod "ebs-csi-node-zz9xc" is pending

Validation Failed
W0807 10:58:25.597760    6468 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-central-1a	Master	c5.large	1	1	eu-central-1a
nodes-eu-central-1a	Node	t3.medium	4	4	eu-central-1a

... skipping 18 lines ...
Pod	kube-system/coredns-autoscaler-f85cf5c-crrnc	system-cluster-critical pod "coredns-autoscaler-f85cf5c-crrnc" is pending
Pod	kube-system/ebs-csi-node-96kls			system-node-critical pod "ebs-csi-node-96kls" is pending
Pod	kube-system/ebs-csi-node-g5xm4			system-node-critical pod "ebs-csi-node-g5xm4" is pending
Pod	kube-system/ebs-csi-node-l8dq6			system-node-critical pod "ebs-csi-node-l8dq6" is pending
Pod	kube-system/ebs-csi-node-zz9xc			system-node-critical pod "ebs-csi-node-zz9xc" is pending

Validation Failed
W0807 10:58:37.549848    6468 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-central-1a	Master	c5.large	1	1	eu-central-1a
nodes-eu-central-1a	Node	t3.medium	4	4	eu-central-1a

... skipping 15 lines ...
Pod	kube-system/calico-node-rnns7	system-node-critical pod "calico-node-rnns7" is pending
Pod	kube-system/ebs-csi-node-96kls	system-node-critical pod "ebs-csi-node-96kls" is pending
Pod	kube-system/ebs-csi-node-g5xm4	system-node-critical pod "ebs-csi-node-g5xm4" is pending
Pod	kube-system/ebs-csi-node-l8dq6	system-node-critical pod "ebs-csi-node-l8dq6" is pending
Pod	kube-system/ebs-csi-node-zz9xc	system-node-critical pod "ebs-csi-node-zz9xc" is pending

Validation Failed
W0807 10:58:49.462919    6468 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-central-1a	Master	c5.large	1	1	eu-central-1a
nodes-eu-central-1a	Node	t3.medium	4	4	eu-central-1a

... skipping 13 lines ...
Pod	kube-system/calico-node-b8gmj	system-node-critical pod "calico-node-b8gmj" is pending
Pod	kube-system/calico-node-rnns7	system-node-critical pod "calico-node-rnns7" is pending
Pod	kube-system/ebs-csi-node-96kls	system-node-critical pod "ebs-csi-node-96kls" is pending
Pod	kube-system/ebs-csi-node-g5xm4	system-node-critical pod "ebs-csi-node-g5xm4" is pending
Pod	kube-system/ebs-csi-node-l8dq6	system-node-critical pod "ebs-csi-node-l8dq6" is pending

Validation Failed
W0807 10:59:01.498402    6468 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-central-1a	Master	c5.large	1	1	eu-central-1a
nodes-eu-central-1a	Node	t3.medium	4	4	eu-central-1a

... skipping 11 lines ...
Pod	kube-system/calico-node-b8gmj			system-node-critical pod "calico-node-b8gmj" is not ready (calico-node)
Pod	kube-system/calico-node-rnns7			system-node-critical pod "calico-node-rnns7" is not ready (calico-node)
Pod	kube-system/ebs-csi-node-96kls			system-node-critical pod "ebs-csi-node-96kls" is pending
Pod	kube-system/ebs-csi-node-l8dq6			system-node-critical pod "ebs-csi-node-l8dq6" is pending
Pod	kube-system/kube-proxy-i-093bf28e0af8ef9b5	system-node-critical pod "kube-proxy-i-093bf28e0af8ef9b5" is pending

Validation Failed
W0807 10:59:13.567222    6468 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-central-1a	Master	c5.large	1	1	eu-central-1a
nodes-eu-central-1a	Node	t3.medium	4	4	eu-central-1a

... skipping 6 lines ...
i-0b93f5454ae2238d4	master	True

VALIDATION ERRORS
KIND	NAME						MESSAGE
Pod	kube-system/kube-proxy-i-03d085e025ab43b77	system-node-critical pod "kube-proxy-i-03d085e025ab43b77" is pending

Validation Failed
W0807 10:59:25.521924    6468 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-central-1a	Master	c5.large	1	1	eu-central-1a
nodes-eu-central-1a	Node	t3.medium	4	4	eu-central-1a

... skipping 513 lines ...
evicting pod kube-system/calico-kube-controllers-75f4df896c-bktw2
evicting pod kube-system/dns-controller-6684cc95dc-mq99z
I0807 11:04:12.575819    6579 instancegroups.go:660] Waiting for 5s for pods to stabilize after draining.
I0807 11:04:17.575960    6579 instancegroups.go:591] Stopping instance "i-0b93f5454ae2238d4", node "i-0b93f5454ae2238d4", in group "master-eu-central-1a.masters.e2e-ed4da97961-6b857.test-cncf-aws.k8s.io" (this may take a while).
I0807 11:04:17.849236    6579 instancegroups.go:436] waiting for 15s after terminating instance
I0807 11:04:32.849497    6579 instancegroups.go:470] Validating the cluster.
I0807 11:04:33.024158    6579 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-ed4da97961-6b857.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 18.184.20.195:443: connect: connection refused.
I0807 11:05:33.087741    6579 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-ed4da97961-6b857.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 18.184.20.195:443: i/o timeout.
I0807 11:06:33.149228    6579 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-ed4da97961-6b857.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 18.184.20.195:443: i/o timeout.
I0807 11:07:33.191773    6579 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-ed4da97961-6b857.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 18.184.20.195:443: i/o timeout.
I0807 11:08:33.250364    6579 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-ed4da97961-6b857.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 18.184.20.195:443: i/o timeout.
I0807 11:09:06.298725    6579 instancegroups.go:526] Cluster did not pass validation, will retry in "30s": system-cluster-critical pod "ebs-csi-controller-5b4bd8874c-drwpw" is not ready (ebs-plugin).
I0807 11:09:38.299754    6579 instancegroups.go:506] Cluster validated; revalidating in 10s to make sure it does not flap.
I0807 11:09:50.339407    6579 instancegroups.go:503] Cluster validated.
I0807 11:09:50.339461    6579 instancegroups.go:470] Validating the cluster.
I0807 11:09:51.949460    6579 instancegroups.go:503] Cluster validated.
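The rolling-update log above shows the validation pattern kops applies after replacing an instance: retry every 30s until the cluster validates, then validate once more after 10s so a flapping cluster is not declared healthy. A sketch of that control flow, with the intervals parameterized for illustration (`try_validate` is a stand-in for `kops validate cluster`, not a real command):

```shell
# Sketch of the validate-then-revalidate loop seen in the instancegroups.go
# output: retry until one validation passes, then revalidate after a short
# delay "to make sure it does not flap". try_validate is a stand-in.
validate_until_stable() {
  retry_interval="${1:-30}"
  revalidate_delay="${2:-10}"
  until try_validate; do
    sleep "$retry_interval"    # "Cluster did not validate, will retry in 30s"
  done
  sleep "$revalidate_delay"    # "revalidating in 10s to make sure it does not flap"
  try_validate
}
```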
I0807 11:09:51.949517    6579 instancegroups.go:311] Tainting 4 nodes in "nodes-eu-central-1a" instancegroup.
... skipping 51 lines ...
I0807 11:23:24.299909    6579 instancegroups.go:503] Cluster validated.
I0807 11:23:24.299982    6579 instancegroups.go:400] Draining the node: "i-093bf28e0af8ef9b5".
WARNING: ignoring DaemonSet-managed Pods: kube-system/calico-node-ckc48, kube-system/ebs-csi-node-zz9xc
evicting pod kube-system/coredns-autoscaler-f85cf5c-crrnc
evicting pod kube-system/coredns-5c44b6cf7d-49kp6
evicting pod kube-system/coredns-5c44b6cf7d-tt4jk
error when evicting pods/"coredns-5c44b6cf7d-49kp6" -n "kube-system" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
evicting pod kube-system/coredns-5c44b6cf7d-49kp6
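The eviction above was rejected once because it would have violated the pod's PodDisruptionBudget, then retried 5s later and succeeded. The drain logic amounts to a retry loop like this sketch (`try_evict` stands in for the Kubernetes eviction API call):

```shell
# Sketch of the PDB-aware eviction retry visible above: a rejection with
# "Cannot evict pod as it would violate the pod's disruption budget" is not
# fatal; the eviction is retried after a delay until the budget allows it.
evict_with_retry() {
  pod="$1"
  delay="${2:-5}"
  until try_evict "$pod"; do
    echo "error when evicting \"$pod\" (will retry after ${delay}s)"
    sleep "$delay"
  done
}
```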
I0807 11:23:37.198573    6579 instancegroups.go:660] Waiting for 5s for pods to stabilize after draining.
I0807 11:23:42.198917    6579 instancegroups.go:591] Stopping instance "i-093bf28e0af8ef9b5", node "i-093bf28e0af8ef9b5", in group "nodes-eu-central-1a.e2e-ed4da97961-6b857.test-cncf-aws.k8s.io" (this may take a while).
I0807 11:23:42.473880    6579 instancegroups.go:436] waiting for 15s after terminating instance
I0807 11:23:57.477723    6579 instancegroups.go:470] Validating the cluster.
I0807 11:23:59.715325    6579 instancegroups.go:526] Cluster did not pass validation, will retry in "30s": system-node-critical pod "kube-proxy-i-093bf28e0af8ef9b5" is not ready (kube-proxy).
... skipping 38 lines ...
I0807 11:24:49.704476    6628 app.go:62] pass rundir-in-artifacts flag True for RunDir to be part of Artifacts
I0807 11:24:49.704506    6628 app.go:64] RunDir for this run: "/home/prow/go/src/k8s.io/kops/_rundir/f8661a1d-163e-11ed-bcf2-1217529f69d6"
I0807 11:24:49.708519    6628 app.go:128] ID for this run: "f8661a1d-163e-11ed-bcf2-1217529f69d6"
I0807 11:24:49.708574    6628 local.go:42] ⚙️ /home/prow/go/bin/kubetest2-tester-kops --test-package-version=https://storage.googleapis.com/k8s-release-dev/ci/v1.25.0-beta.0.16+985c9202ccd250 --parallel 25
I0807 11:24:49.733599    6647 kubectl.go:148] gsutil cp gs://kubernetes-release/release/https://storage.googleapis.com/k8s-release-dev/ci/v1.25.0-beta.0.16+985c9202ccd250/kubernetes-client-linux-amd64.tar.gz /root/.cache/kubernetes-client-linux-amd64.tar.gz
CommandException: No URLs matched: gs://kubernetes-release/release/https://storage.googleapis.com/k8s-release-dev/ci/v1.25.0-beta.0.16+985c9202ccd250/kubernetes-client-linux-amd64.tar.gz
F0807 11:24:52.320786    6647 tester.go:482] failed to run ginkgo tester: failed to get kubectl package from published releases: failed to download release tar kubernetes-client-linux-amd64.tar.gz for release https://storage.googleapis.com/k8s-release-dev/ci/v1.25.0-beta.0.16+985c9202ccd250: exit status 1
Error: exit status 255
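The fatal error above is a path-construction bug in the tester: the value passed to `--test-package-version` was already a full `https://storage.googleapis.com/...` URL, and it was concatenated onto the `gs://kubernetes-release/release/` prefix, producing a gsutil path that matches no object. A guard along these lines (a hypothetical helper, not the tester's actual code) would keep an already-absolute base from being re-prefixed:

```shell
# Hypothetical fix sketch: only prepend the default release bucket when the
# version is a bare version string, not when it is already an absolute
# https:// or gs:// base URL (the case that broke this run).
client_tarball_url() {
  case "$1" in
    https://*|gs://*) echo "$1/kubernetes-client-linux-amd64.tar.gz" ;;
    *) echo "gs://kubernetes-release/release/$1/kubernetes-client-linux-amd64.tar.gz" ;;
  esac
}
```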
+ kops-finish
+ kubetest2 kops -v=2 --cloud-provider=aws --cluster-name=e2e-ed4da97961-6b857.test-cncf-aws.k8s.io --kops-root=/home/prow/go/src/k8s.io/kops --admin-access= --env=KOPS_FEATURE_FLAGS=SpecOverrideFlag --kops-binary-path=/tmp/kops.0BFFduWEv --down
I0807 11:24:52.360030    6837 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
I0807 11:24:52.361750    6837 app.go:61] The files in RunDir shall not be part of Artifacts
I0807 11:24:52.361776    6837 app.go:62] pass rundir-in-artifacts flag True for RunDir to be part of Artifacts
I0807 11:24:52.361799    6837 app.go:64] RunDir for this run: "/home/prow/go/src/k8s.io/kops/_rundir/f8661a1d-163e-11ed-bcf2-1217529f69d6"
... skipping 322 lines ...