Result: FAILURE
Tests: 0 failed / 1 succeeded
Started: 2022-08-08 10:52
Elapsed: 42m36s
Revision: master

No Test Failures!



Error lines from build-log.txt

... skipping 234 lines ...
I0808 10:53:55.211169    6305 app.go:62] pass rundir-in-artifacts flag True for RunDir to be part of Artifacts
I0808 10:53:55.211204    6305 app.go:64] RunDir for this run: "/home/prow/go/src/k8s.io/kops/_rundir/235d7390-1708-11ed-bcf2-1217529f69d6"
I0808 10:53:55.216427    6305 app.go:128] ID for this run: "235d7390-1708-11ed-bcf2-1217529f69d6"
I0808 10:53:55.216685    6305 local.go:42] ⚙️ ssh-keygen -t ed25519 -N  -q -f /tmp/kops/e2e-ed4da97961-6b857.test-cncf-aws.k8s.io/id_ed25519
I0808 10:53:55.229605    6305 dumplogs.go:45] /tmp/kops.Zuybnofuy toolbox dump --name e2e-ed4da97961-6b857.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /tmp/kops/e2e-ed4da97961-6b857.test-cncf-aws.k8s.io/id_ed25519 --ssh-user ubuntu
I0808 10:53:55.229658    6305 local.go:42] ⚙️ /tmp/kops.Zuybnofuy toolbox dump --name e2e-ed4da97961-6b857.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /tmp/kops/e2e-ed4da97961-6b857.test-cncf-aws.k8s.io/id_ed25519 --ssh-user ubuntu
W0808 10:53:55.737608    6305 down.go:34] Dumping cluster logs at the start of Down() failed: exit status 1
I0808 10:53:55.737659    6305 down.go:48] /tmp/kops.Zuybnofuy delete cluster --name e2e-ed4da97961-6b857.test-cncf-aws.k8s.io --yes
I0808 10:53:55.737670    6305 local.go:42] ⚙️ /tmp/kops.Zuybnofuy delete cluster --name e2e-ed4da97961-6b857.test-cncf-aws.k8s.io --yes
I0808 10:53:55.771129    6325 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
I0808 10:53:55.771240    6325 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
Error: error reading cluster configuration: Cluster.kops.k8s.io "e2e-ed4da97961-6b857.test-cncf-aws.k8s.io" not found
Error: exit status 1
+ echo 'kubetest2 down failed'
kubetest2 down failed
+ [[ l == \v ]]
++ kops-base-from-marker latest
++ [[ latest =~ ^https: ]]
++ [[ latest == \l\a\t\e\s\t ]]
++ curl -s https://storage.googleapis.com/kops-ci/bin/latest-ci-updown-green.txt
+ KOPS_BASE_URL=https://storage.googleapis.com/kops-ci/bin/1.25.0-alpha.3+v1.25.0-alpha.2-62-g82ec66a033
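
The trace above resolves the "latest" marker to a concrete CI build: the marker file at kops-ci/bin/latest-ci-updown-green.txt contains the full KOPS_BASE_URL. A minimal shell sketch of that resolution follows; the linux/amd64/kops download path is an assumption about the bucket layout, not something this log shows:

# Resolve the "latest" marker seen above into a concrete KOPS_BASE_URL.
KOPS_BASE_URL="$(curl -s https://storage.googleapis.com/kops-ci/bin/latest-ci-updown-green.txt)"
echo "KOPS_BASE_URL=${KOPS_BASE_URL}"
# Assumed bucket layout: fetch the kops binary for this platform from the resolved base URL.
curl -sSL -o /tmp/kops "${KOPS_BASE_URL}/linux/amd64/kops"
chmod +x /tmp/kops
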
... skipping 14 lines ...
I0808 10:53:57.582639    6362 app.go:62] pass rundir-in-artifacts flag True for RunDir to be part of Artifacts
I0808 10:53:57.583016    6362 app.go:64] RunDir for this run: "/home/prow/go/src/k8s.io/kops/_rundir/235d7390-1708-11ed-bcf2-1217529f69d6"
I0808 10:53:57.644815    6362 app.go:128] ID for this run: "235d7390-1708-11ed-bcf2-1217529f69d6"
I0808 10:53:57.645080    6362 up.go:44] Cleaning up any leaked resources from previous cluster
I0808 10:53:57.645163    6362 dumplogs.go:45] /tmp/kops.FJw0cL2Hu toolbox dump --name e2e-ed4da97961-6b857.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /tmp/kops/e2e-ed4da97961-6b857.test-cncf-aws.k8s.io/id_ed25519 --ssh-user ubuntu
I0808 10:53:57.645221    6362 local.go:42] ⚙️ /tmp/kops.FJw0cL2Hu toolbox dump --name e2e-ed4da97961-6b857.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /tmp/kops/e2e-ed4da97961-6b857.test-cncf-aws.k8s.io/id_ed25519 --ssh-user ubuntu
W0808 10:53:58.103389    6362 down.go:34] Dumping cluster logs at the start of Down() failed: exit status 1
I0808 10:53:58.103455    6362 down.go:48] /tmp/kops.FJw0cL2Hu delete cluster --name e2e-ed4da97961-6b857.test-cncf-aws.k8s.io --yes
I0808 10:53:58.103465    6362 local.go:42] ⚙️ /tmp/kops.FJw0cL2Hu delete cluster --name e2e-ed4da97961-6b857.test-cncf-aws.k8s.io --yes
I0808 10:53:58.132626    6384 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
I0808 10:53:58.132714    6384 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
Error: error reading cluster configuration: Cluster.kops.k8s.io "e2e-ed4da97961-6b857.test-cncf-aws.k8s.io" not found
I0808 10:53:58.623016    6362 http.go:37] curl http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip
2022/08/08 10:53:58 failed to get external ip from metadata service: http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip returned 404
I0808 10:53:58.635579    6362 http.go:37] curl https://ip.jsb.workers.dev
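
The two curl lines above are the wrapper discovering its own public IP: the GCE metadata endpoint returns 404 in this environment, so it falls back to a public IP-echo service, and the result feeds the --admin-access flag on the create cluster command below. A simplified shell sketch of that fallback (error handling reduced to the essentials):

# Try the GCE metadata server first; fall back to a public IP-echo endpoint on failure.
EXTERNAL_IP="$(curl -sf -H 'Metadata-Flavor: Google' \
  http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip || true)"
if [ -z "${EXTERNAL_IP}" ]; then
  EXTERNAL_IP="$(curl -sf https://ip.jsb.workers.dev)"
fi
# Used as --admin-access ${EXTERNAL_IP}/32 in the create cluster invocation that follows.
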
I0808 10:53:58.752331    6362 up.go:159] /tmp/kops.FJw0cL2Hu create cluster --name e2e-ed4da97961-6b857.test-cncf-aws.k8s.io --cloud aws --kubernetes-version v1.24.3 --ssh-public-key /tmp/kops/e2e-ed4da97961-6b857.test-cncf-aws.k8s.io/id_ed25519.pub --override cluster.spec.nodePortAccess=0.0.0.0/0 --networking calico --admin-access 35.224.12.48/32 --master-count 1 --master-volume-size 48 --node-count 4 --node-volume-size 48 --zones eu-west-2a --master-size c5.large
I0808 10:53:58.752372    6362 local.go:42] ⚙️ /tmp/kops.FJw0cL2Hu create cluster --name e2e-ed4da97961-6b857.test-cncf-aws.k8s.io --cloud aws --kubernetes-version v1.24.3 --ssh-public-key /tmp/kops/e2e-ed4da97961-6b857.test-cncf-aws.k8s.io/id_ed25519.pub --override cluster.spec.nodePortAccess=0.0.0.0/0 --networking calico --admin-access 35.224.12.48/32 --master-count 1 --master-volume-size 48 --node-count 4 --node-volume-size 48 --zones eu-west-2a --master-size c5.large
I0808 10:53:58.783656    6392 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
I0808 10:53:58.783763    6392 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
I0808 10:53:58.802149    6392 create_cluster.go:862] Using SSH public key: /tmp/kops/e2e-ed4da97961-6b857.test-cncf-aws.k8s.io/id_ed25519.pub
... skipping 515 lines ...
I0808 10:54:42.721181    6362 up.go:243] /tmp/kops.FJw0cL2Hu validate cluster --name e2e-ed4da97961-6b857.test-cncf-aws.k8s.io --count 10 --wait 15m0s
I0808 10:54:42.721228    6362 local.go:42] ⚙️ /tmp/kops.FJw0cL2Hu validate cluster --name e2e-ed4da97961-6b857.test-cncf-aws.k8s.io --count 10 --wait 15m0s
I0808 10:54:42.753245    6431 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
I0808 10:54:42.753340    6431 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
Validating cluster e2e-ed4da97961-6b857.test-cncf-aws.k8s.io

W0808 10:54:44.005388    6431 validate_cluster.go:184] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-ed4da97961-6b857.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.medium	4	4	eu-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0808 10:54:54.043281    6431 validate_cluster.go:232] (will retry): cluster not yet healthy
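
The validation error above describes kops' DNS bootstrap: the API record is created with the placeholder 203.0.113.123 and only becomes real once dns-controller is running on a master. A quick way to watch that from outside the cluster (the dig invocation is illustrative, not part of this job):

# While this returns 203.0.113.123 (or nothing), dns-controller has not yet published the API address.
dig +short api.e2e-ed4da97961-6b857.test-cncf-aws.k8s.io
# Once a real control-plane IP appears, kops validate cluster can reach the API server and the retries below stop failing on DNS.
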
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.medium	4	4	eu-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0808 10:55:04.076104    6431 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.medium	4	4	eu-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0808 10:55:14.115904    6431 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.medium	4	4	eu-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0808 10:55:24.167131    6431 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.medium	4	4	eu-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0808 10:55:34.212573    6431 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.medium	4	4	eu-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0808 10:55:44.258725    6431 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.medium	4	4	eu-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0808 10:55:54.294407    6431 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.medium	4	4	eu-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0808 10:56:04.332379    6431 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.medium	4	4	eu-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0808 10:56:14.378721    6431 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.medium	4	4	eu-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0808 10:56:24.413540    6431 validate_cluster.go:232] (will retry): cluster not yet healthy
W0808 10:56:34.448285    6431 validate_cluster.go:184] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-ed4da97961-6b857.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.medium	4	4	eu-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0808 10:56:44.481179    6431 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.medium	4	4	eu-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0808 10:56:54.531076    6431 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.medium	4	4	eu-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0808 10:57:04.564234    6431 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.medium	4	4	eu-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0808 10:57:14.612905    6431 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.medium	4	4	eu-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0808 10:57:24.652594    6431 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.medium	4	4	eu-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0808 10:57:34.694086    6431 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.medium	4	4	eu-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0808 10:57:44.728301    6431 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.medium	4	4	eu-west-2a

... skipping 19 lines ...
Pod	kube-system/coredns-autoscaler-f85cf5c-2cbrl	system-cluster-critical pod "coredns-autoscaler-f85cf5c-2cbrl" is pending
Pod	kube-system/ebs-csi-node-92ncb			system-node-critical pod "ebs-csi-node-92ncb" is pending
Pod	kube-system/ebs-csi-node-hqg2z			system-node-critical pod "ebs-csi-node-hqg2z" is pending
Pod	kube-system/ebs-csi-node-kfss2			system-node-critical pod "ebs-csi-node-kfss2" is pending
Pod	kube-system/ebs-csi-node-tmlfk			system-node-critical pod "ebs-csi-node-tmlfk" is pending

Validation Failed
W0808 10:57:57.306596    6431 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.medium	4	4	eu-west-2a

... skipping 19 lines ...
Pod	kube-system/coredns-autoscaler-f85cf5c-2cbrl	system-cluster-critical pod "coredns-autoscaler-f85cf5c-2cbrl" is pending
Pod	kube-system/ebs-csi-node-92ncb			system-node-critical pod "ebs-csi-node-92ncb" is pending
Pod	kube-system/ebs-csi-node-hqg2z			system-node-critical pod "ebs-csi-node-hqg2z" is pending
Pod	kube-system/ebs-csi-node-kfss2			system-node-critical pod "ebs-csi-node-kfss2" is pending
Pod	kube-system/ebs-csi-node-tmlfk			system-node-critical pod "ebs-csi-node-tmlfk" is pending

Validation Failed
W0808 10:58:09.121143    6431 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.medium	4	4	eu-west-2a

... skipping 19 lines ...
Pod	kube-system/coredns-autoscaler-f85cf5c-2cbrl	system-cluster-critical pod "coredns-autoscaler-f85cf5c-2cbrl" is pending
Pod	kube-system/ebs-csi-node-92ncb			system-node-critical pod "ebs-csi-node-92ncb" is pending
Pod	kube-system/ebs-csi-node-hqg2z			system-node-critical pod "ebs-csi-node-hqg2z" is pending
Pod	kube-system/ebs-csi-node-kfss2			system-node-critical pod "ebs-csi-node-kfss2" is pending
Pod	kube-system/ebs-csi-node-tmlfk			system-node-critical pod "ebs-csi-node-tmlfk" is pending

Validation Failed
W0808 10:58:20.934568    6431 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.medium	4	4	eu-west-2a

... skipping 18 lines ...
Pod	kube-system/coredns-autoscaler-f85cf5c-2cbrl	system-cluster-critical pod "coredns-autoscaler-f85cf5c-2cbrl" is pending
Pod	kube-system/ebs-csi-node-92ncb			system-node-critical pod "ebs-csi-node-92ncb" is pending
Pod	kube-system/ebs-csi-node-hqg2z			system-node-critical pod "ebs-csi-node-hqg2z" is pending
Pod	kube-system/ebs-csi-node-kfss2			system-node-critical pod "ebs-csi-node-kfss2" is pending
Pod	kube-system/ebs-csi-node-tmlfk			system-node-critical pod "ebs-csi-node-tmlfk" is pending

Validation Failed
W0808 10:58:32.742512    6431 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.medium	4	4	eu-west-2a

... skipping 15 lines ...
Pod	kube-system/coredns-autoscaler-f85cf5c-2cbrl	system-cluster-critical pod "coredns-autoscaler-f85cf5c-2cbrl" is pending
Pod	kube-system/ebs-csi-node-92ncb			system-node-critical pod "ebs-csi-node-92ncb" is pending
Pod	kube-system/ebs-csi-node-hqg2z			system-node-critical pod "ebs-csi-node-hqg2z" is pending
Pod	kube-system/ebs-csi-node-kfss2			system-node-critical pod "ebs-csi-node-kfss2" is pending
Pod	kube-system/ebs-csi-node-tmlfk			system-node-critical pod "ebs-csi-node-tmlfk" is pending

Validation Failed
W0808 10:58:44.593925    6431 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.medium	4	4	eu-west-2a

... skipping 6 lines ...
i-0d1ce2458ea37b9f5	master	True

VALIDATION ERRORS
KIND	NAME						MESSAGE
Pod	kube-system/kube-proxy-i-0c8b10fc801fa8e6f	system-node-critical pod "kube-proxy-i-0c8b10fc801fa8e6f" is pending

Validation Failed
W0808 10:58:56.428094    6431 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.medium	4	4	eu-west-2a

... skipping 513 lines ...
evicting pod kube-system/calico-kube-controllers-75f4df896c-2t97z
evicting pod kube-system/dns-controller-6684cc95dc-mrfcj
I0808 11:03:38.372343    6546 instancegroups.go:660] Waiting for 5s for pods to stabilize after draining.
I0808 11:03:43.372622    6546 instancegroups.go:591] Stopping instance "i-0d1ce2458ea37b9f5", node "i-0d1ce2458ea37b9f5", in group "master-eu-west-2a.masters.e2e-ed4da97961-6b857.test-cncf-aws.k8s.io" (this may take a while).
I0808 11:03:43.633210    6546 instancegroups.go:436] waiting for 15s after terminating instance
I0808 11:03:58.633547    6546 instancegroups.go:470] Validating the cluster.
I0808 11:03:58.808882    6546 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-ed4da97961-6b857.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 3.9.171.36:443: connect: connection refused.
I0808 11:04:58.860440    6546 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-ed4da97961-6b857.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 3.9.171.36:443: i/o timeout.
I0808 11:05:58.903215    6546 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-ed4da97961-6b857.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 3.9.171.36:443: i/o timeout.
I0808 11:06:58.953775    6546 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-ed4da97961-6b857.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 3.9.171.36:443: i/o timeout.
I0808 11:07:59.002685    6546 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-ed4da97961-6b857.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 3.9.171.36:443: i/o timeout.
I0808 11:08:31.647851    6546 instancegroups.go:526] Cluster did not pass validation, will retry in "30s": node "i-02bc5e1725777e019" of role "node" is not ready, node "i-0b63955347b0d912a" of role "node" is not ready, node "i-0bcda83a3817512c8" of role "node" is not ready, node "i-0c8b10fc801fa8e6f" of role "node" is not ready, system-node-critical pod "calico-node-8pccl" is not ready (calico-node).
I0808 11:09:04.422382    6546 instancegroups.go:506] Cluster validated; revalidating in 10s to make sure it does not flap.
I0808 11:09:16.309735    6546 instancegroups.go:503] Cluster validated.
I0808 11:09:16.309788    6546 instancegroups.go:470] Validating the cluster.
I0808 11:09:17.784767    6546 instancegroups.go:526] Cluster did not pass validation, will retry in "30s": node "i-02bc5e1725777e019" of role "node" is not ready, node "i-0b63955347b0d912a" of role "node" is not ready, node "i-0bcda83a3817512c8" of role "node" is not ready.
I0808 11:09:49.624115    6546 instancegroups.go:526] Cluster did not pass validation, will retry in "30s": node "i-02bc5e1725777e019" of role "node" is not ready, node "i-0b63955347b0d912a" of role "node" is not ready, node "i-0bcda83a3817512c8" of role "node" is not ready.
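
The rolling-update log above shows kops' validate-then-revalidate pattern: after replacing the master it retries validation every 30s until the cluster passes, then validates again 10s later so a momentarily healthy cluster does not count as flap-free. A rough shell sketch of that behaviour (kops implements this internally in instancegroups.go; this is not its actual code):

CLUSTER=e2e-ed4da97961-6b857.test-cncf-aws.k8s.io
KOPS=/tmp/kops.FJw0cL2Hu
# Retry until a validation pass, mirroring the "will retry in 30s" lines above.
until "${KOPS}" validate cluster --name "${CLUSTER}" --count 1 --wait 5m0s; do
  echo "Cluster did not validate, will retry in 30s"
  sleep 30
done
# Guard against flapping: require a second clean validation shortly after the first.
sleep 10
"${KOPS}" validate cluster --name "${CLUSTER}" --count 1 --wait 2m0s
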
... skipping 115 lines ...
I0808 11:32:09.329315    6594 app.go:62] pass rundir-in-artifacts flag True for RunDir to be part of Artifacts
I0808 11:32:09.329351    6594 app.go:64] RunDir for this run: "/home/prow/go/src/k8s.io/kops/_rundir/235d7390-1708-11ed-bcf2-1217529f69d6"
I0808 11:32:09.334573    6594 app.go:128] ID for this run: "235d7390-1708-11ed-bcf2-1217529f69d6"
I0808 11:32:09.334614    6594 local.go:42] ⚙️ /home/prow/go/bin/kubetest2-tester-kops --test-package-version=https://storage.googleapis.com/k8s-release-dev/ci/v1.25.0-beta.0.24+759785ea147bc1 --parallel 25
I0808 11:32:09.357640    6613 kubectl.go:148] gsutil cp gs://kubernetes-release/release/https://storage.googleapis.com/k8s-release-dev/ci/v1.25.0-beta.0.24+759785ea147bc1/kubernetes-client-linux-amd64.tar.gz /root/.cache/kubernetes-client-linux-amd64.tar.gz
CommandException: No URLs matched: gs://kubernetes-release/release/https://storage.googleapis.com/k8s-release-dev/ci/v1.25.0-beta.0.24+759785ea147bc1/kubernetes-client-linux-amd64.tar.gz
F0808 11:32:11.802724    6613 tester.go:482] failed to run ginkgo tester: failed to get kubectl package from published releases: failed to download release tar kubernetes-client-linux-amd64.tar.gz for release https://storage.googleapis.com/k8s-release-dev/ci/v1.25.0-beta.0.24+759785ea147bc1: exit status 1
Error: exit status 255
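
The CommandException above appears to be what fails the job: --test-package-version was given as a full https URL, and the tester still prefixed it with gs://kubernetes-release/release/, producing a gs:// path that matches nothing. For comparison, a hand-run download of the same client tarball straight from the URL in the flag would look roughly like this (illustrative only, not what the tester executes):

VERSION_URL=https://storage.googleapis.com/k8s-release-dev/ci/v1.25.0-beta.0.24+759785ea147bc1
curl -sSL -o /root/.cache/kubernetes-client-linux-amd64.tar.gz \
  "${VERSION_URL}/kubernetes-client-linux-amd64.tar.gz"
# Client tarballs normally ship kubectl under kubernetes/client/bin/.
tar -xzf /root/.cache/kubernetes-client-linux-amd64.tar.gz kubernetes/client/bin/kubectl
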
+ kops-finish
+ kubetest2 kops -v=2 --cloud-provider=aws --cluster-name=e2e-ed4da97961-6b857.test-cncf-aws.k8s.io --kops-root=/home/prow/go/src/k8s.io/kops --admin-access= --env=KOPS_FEATURE_FLAGS=SpecOverrideFlag --kops-binary-path=/tmp/kops.Zuybnofuy --down
I0808 11:32:11.839891    6800 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
I0808 11:32:11.843148    6800 app.go:61] The files in RunDir shall not be part of Artifacts
I0808 11:32:11.843189    6800 app.go:62] pass rundir-in-artifacts flag True for RunDir to be part of Artifacts
I0808 11:32:11.843216    6800 app.go:64] RunDir for this run: "/home/prow/go/src/k8s.io/kops/_rundir/235d7390-1708-11ed-bcf2-1217529f69d6"
... skipping 308 lines ...