Result: FAILURE
Tests: 0 failed / 1 succeeded
Started: 2022-07-28 14:00
Elapsed: 17m35s
Revision: master

No Test Failures!



Error lines from build-log.txt

... skipping 212 lines ...
I0728 14:01:46.597001    6149 app.go:63] pass rundir-in-artifacts flag True for RunDir to be part of Artifacts
I0728 14:01:46.597024    6149 app.go:65] RunDir for this run: "/home/prow/go/src/k8s.io/kops/_rundir/949ade74-0e7d-11ed-b6e4-d28c295d3c4e"
I0728 14:01:46.644158    6149 app.go:129] ID for this run: "949ade74-0e7d-11ed-b6e4-d28c295d3c4e"
I0728 14:01:46.644448    6149 local.go:42] ⚙️ ssh-keygen -t ed25519 -N  -q -f /tmp/kops/e2e-414ec24bfc-41f44.test-cncf-aws.k8s.io/id_ed25519
I0728 14:01:46.652503    6149 dumplogs.go:45] /tmp/kops.SW1poQEv8 toolbox dump --name e2e-414ec24bfc-41f44.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /tmp/kops/e2e-414ec24bfc-41f44.test-cncf-aws.k8s.io/id_ed25519 --ssh-user ubuntu
I0728 14:01:46.652563    6149 local.go:42] ⚙️ /tmp/kops.SW1poQEv8 toolbox dump --name e2e-414ec24bfc-41f44.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /tmp/kops/e2e-414ec24bfc-41f44.test-cncf-aws.k8s.io/id_ed25519 --ssh-user ubuntu
W0728 14:01:47.183334    6149 down.go:34] Dumping cluster logs at the start of Down() failed: exit status 1
I0728 14:01:47.183379    6149 down.go:48] /tmp/kops.SW1poQEv8 delete cluster --name e2e-414ec24bfc-41f44.test-cncf-aws.k8s.io --yes
I0728 14:01:47.183393    6149 local.go:42] ⚙️ /tmp/kops.SW1poQEv8 delete cluster --name e2e-414ec24bfc-41f44.test-cncf-aws.k8s.io --yes
I0728 14:01:47.205580    6170 featureflag.go:162] FeatureFlag "SpecOverrideFlag"=true
I0728 14:01:47.205724    6170 featureflag.go:162] FeatureFlag "SpecOverrideFlag"=true
Error: error reading cluster configuration: Cluster.kops.k8s.io "e2e-414ec24bfc-41f44.test-cncf-aws.k8s.io" not found
Error: exit status 1
+ echo 'kubetest2 down failed'
kubetest2 down failed
+ [[ v == \v ]]
+ KOPS_BASE_URL=
++ kops-download-release v1.24.0
++ local kops
+++ mktemp -t kops.XXXXXXXXX
++ kops=/tmp/kops.9dHP7GAYh
... skipping 10 lines ...
I0728 14:01:51.228760    6205 app.go:63] pass rundir-in-artifacts flag True for RunDir to be part of Artifacts
I0728 14:01:51.228783    6205 app.go:65] RunDir for this run: "/home/prow/go/src/k8s.io/kops/_rundir/949ade74-0e7d-11ed-b6e4-d28c295d3c4e"
I0728 14:01:51.282888    6205 app.go:129] ID for this run: "949ade74-0e7d-11ed-b6e4-d28c295d3c4e"
I0728 14:01:51.282965    6205 up.go:44] Cleaning up any leaked resources from previous cluster
I0728 14:01:51.282996    6205 dumplogs.go:45] /tmp/kops.9dHP7GAYh toolbox dump --name e2e-414ec24bfc-41f44.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /tmp/kops/e2e-414ec24bfc-41f44.test-cncf-aws.k8s.io/id_ed25519 --ssh-user ubuntu
I0728 14:01:51.283023    6205 local.go:42] ⚙️ /tmp/kops.9dHP7GAYh toolbox dump --name e2e-414ec24bfc-41f44.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /tmp/kops/e2e-414ec24bfc-41f44.test-cncf-aws.k8s.io/id_ed25519 --ssh-user ubuntu
W0728 14:01:51.779752    6205 down.go:34] Dumping cluster logs at the start of Down() failed: exit status 1
I0728 14:01:51.779966    6205 down.go:48] /tmp/kops.9dHP7GAYh delete cluster --name e2e-414ec24bfc-41f44.test-cncf-aws.k8s.io --yes
I0728 14:01:51.780029    6205 local.go:42] ⚙️ /tmp/kops.9dHP7GAYh delete cluster --name e2e-414ec24bfc-41f44.test-cncf-aws.k8s.io --yes
I0728 14:01:51.800007    6228 featureflag.go:162] FeatureFlag "SpecOverrideFlag"=true
I0728 14:01:51.800093    6228 featureflag.go:162] FeatureFlag "SpecOverrideFlag"=true
Error: error reading cluster configuration: Cluster.kops.k8s.io "e2e-414ec24bfc-41f44.test-cncf-aws.k8s.io" not found
I0728 14:01:52.241132    6205 http.go:37] curl http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip
2022/07/28 14:01:52 failed to get external ip from metadata service: http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip returned 404
I0728 14:01:52.253753    6205 http.go:37] curl https://ip.jsb.workers.dev
I0728 14:01:52.422828    6205 template.go:58] /tmp/kops.9dHP7GAYh toolbox template --template tests/e2e/templates/many-addons.yaml.tmpl --output /tmp/kops-template831082882/manifest.yaml --values /tmp/kops-template831082882/values.yaml --name e2e-414ec24bfc-41f44.test-cncf-aws.k8s.io
I0728 14:01:52.422875    6205 local.go:42] ⚙️ /tmp/kops.9dHP7GAYh toolbox template --template tests/e2e/templates/many-addons.yaml.tmpl --output /tmp/kops-template831082882/manifest.yaml --values /tmp/kops-template831082882/values.yaml --name e2e-414ec24bfc-41f44.test-cncf-aws.k8s.io
I0728 14:01:52.441192    6239 featureflag.go:162] FeatureFlag "SpecOverrideFlag"=true
I0728 14:01:52.441287    6239 featureflag.go:162] FeatureFlag "SpecOverrideFlag"=true
I0728 14:01:52.680149    6205 create.go:33] /tmp/kops.9dHP7GAYh create --filename /tmp/kops-template831082882/manifest.yaml --name e2e-414ec24bfc-41f44.test-cncf-aws.k8s.io
... skipping 31 lines ...
I0728 14:02:01.775960    6260 keypair.go:225] Issuing new certificate: "etcd-peers-ca-events"
I0728 14:02:01.812575    6260 keypair.go:225] Issuing new certificate: "apiserver-aggregator-ca"
W0728 14:02:01.844864    6260 vfs_castore.go:379] CA private key was not found
I0728 14:02:01.892970    6260 keypair.go:225] Issuing new certificate: "service-account"
I0728 14:02:01.904504    6260 keypair.go:225] Issuing new certificate: "kubernetes-ca"
I0728 14:02:03.264520    6260 executor.go:111] Tasks: 51 done / 107 total; 24 can run
W0728 14:02:04.559522    6260 executor.go:139] error running task "Subnet/us-west-2a.e2e-414ec24bfc-41f44.test-cncf-aws.k8s.io" (9m58s remaining to succeed): error listing Subnets: InvalidSubnetID.NotFound: The subnet ID 'subnet-0e4b17ab7151f503c' does not exist
	status code: 400, request id: 4d8a5a8d-5a38-4715-b5b7-100a91980643
W0728 14:02:04.559583    6260 executor.go:139] error running task "SecurityGroup/masters.e2e-414ec24bfc-41f44.test-cncf-aws.k8s.io" (9m58s remaining to succeed): error listing SecurityGroups: InvalidGroup.NotFound: The security group 'sg-03f69b498fcbda276' does not exist
	status code: 400, request id: abc9b4ac-3560-49f5-bc6c-544e7820cdfa
I0728 14:02:04.559632    6260 executor.go:111] Tasks: 73 done / 107 total; 18 can run
I0728 14:02:05.870288    6260 executor.go:111] Tasks: 91 done / 107 total; 13 can run
I0728 14:02:07.445250    6260 executor.go:111] Tasks: 103 done / 107 total; 2 can run
I0728 14:02:08.843260    6260 executor.go:155] No progress made, sleeping before retrying 2 task(s)
I0728 14:02:18.843581    6260 executor.go:111] Tasks: 103 done / 107 total; 2 can run
... skipping 15 lines ...
I0728 14:02:34.534222    6205 up.go:243] /tmp/kops.9dHP7GAYh validate cluster --name e2e-414ec24bfc-41f44.test-cncf-aws.k8s.io --count 10 --wait 15m0s
I0728 14:02:34.534318    6205 local.go:42] ⚙️ /tmp/kops.9dHP7GAYh validate cluster --name e2e-414ec24bfc-41f44.test-cncf-aws.k8s.io --count 10 --wait 15m0s
I0728 14:02:34.555108    6276 featureflag.go:162] FeatureFlag "SpecOverrideFlag"=true
I0728 14:02:34.555231    6276 featureflag.go:162] FeatureFlag "SpecOverrideFlag"=true
Validating cluster e2e-414ec24bfc-41f44.test-cncf-aws.k8s.io

W0728 14:02:35.632505    6276 validate_cluster.go:184] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-414ec24bfc-41f44.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.medium	4	4	us-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0728 14:02:45.665563    6276 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.medium	4	4	us-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0728 14:02:55.719926    6276 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.medium	4	4	us-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0728 14:03:05.758392    6276 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.medium	4	4	us-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0728 14:03:15.799022    6276 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.medium	4	4	us-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0728 14:03:25.861013    6276 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.medium	4	4	us-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0728 14:03:35.901174    6276 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.medium	4	4	us-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0728 14:03:45.954562    6276 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.medium	4	4	us-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0728 14:03:55.998112    6276 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.medium	4	4	us-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0728 14:04:06.038857    6276 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.medium	4	4	us-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0728 14:04:16.081827    6276 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.medium	4	4	us-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0728 14:04:26.114849    6276 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.medium	4	4	us-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0728 14:04:36.152914    6276 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.medium	4	4	us-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0728 14:04:46.184542    6276 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.medium	4	4	us-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0728 14:04:56.220570    6276 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.medium	4	4	us-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0728 14:05:06.260567    6276 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.medium	4	4	us-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0728 14:05:16.294626    6276 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.medium	4	4	us-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0728 14:05:26.345416    6276 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.medium	4	4	us-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0728 14:05:36.388859    6276 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.medium	4	4	us-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0728 14:05:46.425399    6276 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.medium	4	4	us-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0728 14:05:56.469345    6276 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.medium	4	4	us-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0728 14:06:06.503281    6276 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.medium	4	4	us-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0728 14:06:16.542491    6276 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.medium	4	4	us-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0728 14:06:26.576362    6276 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.medium	4	4	us-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0728 14:06:36.629734    6276 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.medium	4	4	us-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0728 14:06:46.680883    6276 validate_cluster.go:232] (will retry): cluster not yet healthy
W0728 14:06:56.740067    6276 validate_cluster.go:184] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-414ec24bfc-41f44.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.medium	4	4	us-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0728 14:07:06.784900    6276 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.medium	4	4	us-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0728 14:07:16.825465    6276 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.medium	4	4	us-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0728 14:07:26.858096    6276 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.medium	4	4	us-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0728 14:07:36.895333    6276 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.medium	4	4	us-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0728 14:07:46.931953    6276 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.medium	4	4	us-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0728 14:07:56.979970    6276 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.medium	4	4	us-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0728 14:08:07.018719    6276 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.medium	4	4	us-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0728 14:08:17.053289    6276 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.medium	4	4	us-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0728 14:08:27.095081    6276 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.medium	4	4	us-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0728 14:08:37.131263    6276 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.medium	4	4	us-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0728 14:08:47.169325    6276 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.medium	4	4	us-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0728 14:08:57.224843    6276 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.medium	4	4	us-west-2a

... skipping 23 lines ...
Pod	kube-system/kube-proxy-ip-172-20-0-89.us-west-2.compute.internal	system-node-critical pod "kube-proxy-ip-172-20-0-89.us-west-2.compute.internal" is pending
Pod	kube-system/metrics-server-7c9d469d74-tqm92				system-cluster-critical pod "metrics-server-7c9d469d74-tqm92" is pending
Pod	kube-system/metrics-server-7c9d469d74-wsv6g				system-cluster-critical pod "metrics-server-7c9d469d74-wsv6g" is pending
Pod	kube-system/node-local-dns-28nwx					system-node-critical pod "node-local-dns-28nwx" is pending
Pod	kube-system/node-local-dns-cwz99					system-node-critical pod "node-local-dns-cwz99" is pending

Validation Failed
W0728 14:09:09.504296    6276 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.medium	4	4	us-west-2a

... skipping 21 lines ...
Pod	kube-system/coredns-autoscaler-6d96c59bbf-bgcrn			system-cluster-critical pod "coredns-autoscaler-6d96c59bbf-bgcrn" is pending
Pod	kube-system/metrics-server-7c9d469d74-tqm92			system-cluster-critical pod "metrics-server-7c9d469d74-tqm92" is pending
Pod	kube-system/metrics-server-7c9d469d74-wsv6g			system-cluster-critical pod "metrics-server-7c9d469d74-wsv6g" is pending
Pod	kube-system/node-local-dns-28nwx				system-node-critical pod "node-local-dns-28nwx" is pending
Pod	kube-system/node-local-dns-cwz99				system-node-critical pod "node-local-dns-cwz99" is pending

Validation Failed
W0728 14:09:20.940105    6276 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.medium	4	4	us-west-2a

... skipping 20 lines ...
Pod	kube-system/coredns-autoscaler-6d96c59bbf-bgcrn			system-cluster-critical pod "coredns-autoscaler-6d96c59bbf-bgcrn" is pending
Pod	kube-system/metrics-server-7c9d469d74-tqm92			system-cluster-critical pod "metrics-server-7c9d469d74-tqm92" is pending
Pod	kube-system/metrics-server-7c9d469d74-wsv6g			system-cluster-critical pod "metrics-server-7c9d469d74-wsv6g" is pending
Pod	kube-system/node-local-dns-24mkb				system-node-critical pod "node-local-dns-24mkb" is pending
Pod	kube-system/node-local-dns-pqh5j				system-node-critical pod "node-local-dns-pqh5j" is pending

Validation Failed
W0728 14:09:32.427492    6276 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.medium	4	4	us-west-2a

... skipping 15 lines ...
Pod	kube-system/cilium-f74jw					system-node-critical pod "cilium-f74jw" is not ready (cilium-agent)
Pod	kube-system/cilium-tnskf					system-node-critical pod "cilium-tnskf" is not ready (cilium-agent)
Pod	kube-system/metrics-server-7c9d469d74-tqm92			system-cluster-critical pod "metrics-server-7c9d469d74-tqm92" is not ready (metrics-server)
Pod	kube-system/metrics-server-7c9d469d74-wsv6g			system-cluster-critical pod "metrics-server-7c9d469d74-wsv6g" is not ready (metrics-server)
Pod	kube-system/node-local-dns-24mkb				system-node-critical pod "node-local-dns-24mkb" is pending

Validation Failed
W0728 14:09:43.993083    6276 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.medium	4	4	us-west-2a

... skipping 10 lines ...
Pod	kube-system/aws-load-balancer-controller-86f4ddd6d7-xf4zw	system-cluster-critical pod "aws-load-balancer-controller-86f4ddd6d7-xf4zw" is pending
Pod	kube-system/cilium-dj9pr					system-node-critical pod "cilium-dj9pr" is not ready (cilium-agent)
Pod	kube-system/cilium-tnskf					system-node-critical pod "cilium-tnskf" is not ready (cilium-agent)
Pod	kube-system/metrics-server-7c9d469d74-tqm92			system-cluster-critical pod "metrics-server-7c9d469d74-tqm92" is not ready (metrics-server)
Pod	kube-system/metrics-server-7c9d469d74-wsv6g			system-cluster-critical pod "metrics-server-7c9d469d74-wsv6g" is not ready (metrics-server)

Validation Failed
W0728 14:09:55.445927    6276 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.medium	4	4	us-west-2a

... skipping 8 lines ...
VALIDATION ERRORS
KIND	NAME								MESSAGE
Pod	kube-system/aws-load-balancer-controller-86f4ddd6d7-xf4zw	system-cluster-critical pod "aws-load-balancer-controller-86f4ddd6d7-xf4zw" is pending
Pod	kube-system/cilium-dj9pr					system-node-critical pod "cilium-dj9pr" is not ready (cilium-agent)
Pod	kube-system/metrics-server-7c9d469d74-wsv6g			system-cluster-critical pod "metrics-server-7c9d469d74-wsv6g" is not ready (metrics-server)

Validation Failed
W0728 14:10:07.206353    6276 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.medium	4	4	us-west-2a

... skipping 6 lines ...
ip-172-20-0-89.us-west-2.compute.internal	node	True

VALIDATION ERRORS
KIND	NAME								MESSAGE
Pod	kube-system/aws-load-balancer-controller-86f4ddd6d7-xf4zw	system-cluster-critical pod "aws-load-balancer-controller-86f4ddd6d7-xf4zw" is pending

Validation Failed
W0728 14:10:18.673833    6276 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.medium	4	4	us-west-2a

... skipping 6 lines ...
ip-172-20-0-89.us-west-2.compute.internal	node	True

VALIDATION ERRORS
KIND	NAME								MESSAGE
Pod	kube-system/aws-load-balancer-controller-86f4ddd6d7-xf4zw	system-cluster-critical pod "aws-load-balancer-controller-86f4ddd6d7-xf4zw" is pending

Validation Failed
W0728 14:10:30.173777    6276 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.medium	4	4	us-west-2a

... skipping 6 lines ...
ip-172-20-0-89.us-west-2.compute.internal	node	True

VALIDATION ERRORS
KIND	NAME								MESSAGE
Pod	kube-system/aws-load-balancer-controller-86f4ddd6d7-xf4zw	system-cluster-critical pod "aws-load-balancer-controller-86f4ddd6d7-xf4zw" is pending

Validation Failed
W0728 14:10:41.669848    6276 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.medium	4	4	us-west-2a

... skipping 6 lines ...
ip-172-20-0-89.us-west-2.compute.internal	node	True

VALIDATION ERRORS
KIND	NAME								MESSAGE
Pod	kube-system/aws-load-balancer-controller-86f4ddd6d7-xf4zw	system-cluster-critical pod "aws-load-balancer-controller-86f4ddd6d7-xf4zw" is pending

Validation Failed
W0728 14:10:53.226754    6276 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.medium	4	4	us-west-2a

... skipping 1027 lines ...
I0728 14:15:18.384777    6370 channel_version.go:140] manifest Match for "node-termination-handler.aws": Channel=s3://k8s-kops-prow/e2e-414ec24bfc-41f44.test-cncf-aws.k8s.io/addons/bootstrap-channel.yaml Id=k8s-1.11 ManifestHash=3882f7620a5920b6d86d8e840566dd4acf92ff57a877997fd1befbb7dd764f6f SystemGeneration=1
NAME			CURRENT	UPDATE									PKI
networking.cilium.io	-	0d7e665f356f97c0137cda33453dbd5d40d5fbca30b62df8ef30e2db894a36a1	no
I0728 14:15:18.512899    6370 addon.go:188] Applying update from "s3://k8s-kops-prow/e2e-414ec24bfc-41f44.test-cncf-aws.k8s.io/addons/networking.cilium.io/k8s-1.16-v1.11.yaml"
I0728 14:15:18.512950    6370 s3fs.go:329] Reading file "s3://k8s-kops-prow/e2e-414ec24bfc-41f44.test-cncf-aws.k8s.io/addons/networking.cilium.io/k8s-1.16-v1.11.yaml"
I0728 14:15:18.625276    6370 apply.go:83] Running command: kubectl apply -f /tmp/channel1898769904/manifest.yaml --server-side --force-conflicts --field-manager=kops
I0728 14:15:20.502730    6370 apply.go:86] error running kubectl apply -f /tmp/channel1898769904/manifest.yaml --server-side --force-conflicts --field-manager=kops
I0728 14:15:20.502787    6370 apply.go:87] serviceaccount/cilium serverside-applied
serviceaccount/cilium-operator serverside-applied
serviceaccount/hubble-relay serverside-applied
configmap/cilium-config serverside-applied
configmap/hubble-relay-config serverside-applied
clusterrole.rbac.authorization.k8s.io/cilium serverside-applied
... skipping 5 lines ...
service/hubble-relay serverside-applied
daemonset.apps/cilium serverside-applied
deployment.apps/cilium-operator serverside-applied
certificate.cert-manager.io/hubble-server-certs serverside-applied
certificate.cert-manager.io/hubble-relay-client-certs serverside-applied
poddisruptionbudget.policy/cilium-operator serverside-applied
Error from server: failed to create typed patch object: .spec.template.spec.containers[name="hubble-relay"].ports: element 0: associative list with keys has an element that omits key field "protocol" (and doesn't have default value)
E0728 14:15:20.502826    6370 apply.go:52] failed to apply the manifest: error running kubectl: exit status 1
I0728 14:15:20.502907    6370 apply.go:83] Running command: kubectl replace -f /tmp/channel1898769904/manifest.yaml --field-manager=kops
I0728 14:15:23.321537    6370 apply.go:86] error running kubectl replace -f /tmp/channel1898769904/manifest.yaml --field-manager=kops
I0728 14:15:23.321578    6370 apply.go:87] serviceaccount/cilium replaced
serviceaccount/cilium-operator replaced
serviceaccount/hubble-relay replaced
configmap/cilium-config replaced
configmap/hubble-relay-config replaced
clusterrole.rbac.authorization.k8s.io/cilium replaced
... skipping 4 lines ...
clusterrolebinding.rbac.authorization.k8s.io/hubble-relay replaced
daemonset.apps/cilium replaced
deployment.apps/cilium-operator replaced
certificate.cert-manager.io/hubble-server-certs replaced
certificate.cert-manager.io/hubble-relay-client-certs replaced
poddisruptionbudget.policy/cilium-operator replaced
Error from server (Invalid): error when replacing "/tmp/channel1898769904/manifest.yaml": Service "hubble-relay" is invalid: spec.clusterIP: Invalid value: "": field is immutable
Error from server (NotFound): error when replacing "/tmp/channel1898769904/manifest.yaml": deployments.apps "hubble-relay" not found
E0728 14:15:23.321611    6370 apply.go:61] failed to replace manifest: error running kubectl: exit status 1
I0728 14:15:23.321680    6370 apply.go:83] Running command: kubectl apply -f /tmp/channel1898769904/manifest.yaml --server-side --force-conflicts --field-manager=kops
I0728 14:15:24.957955    6370 apply.go:86] error running kubectl apply -f /tmp/channel1898769904/manifest.yaml --server-side --force-conflicts --field-manager=kops
I0728 14:15:24.957985    6370 apply.go:87] serviceaccount/cilium serverside-applied
serviceaccount/cilium-operator serverside-applied
serviceaccount/hubble-relay serverside-applied
configmap/cilium-config serverside-applied
configmap/hubble-relay-config serverside-applied
clusterrole.rbac.authorization.k8s.io/cilium serverside-applied
... skipping 5 lines ...
service/hubble-relay serverside-applied
daemonset.apps/cilium serverside-applied
deployment.apps/cilium-operator serverside-applied
certificate.cert-manager.io/hubble-server-certs serverside-applied
certificate.cert-manager.io/hubble-relay-client-certs serverside-applied
poddisruptionbudget.policy/cilium-operator serverside-applied
Error from server: failed to create typed patch object: .spec.template.spec.containers[name="hubble-relay"].ports: element 0: associative list with keys has an element that omits key field "protocol" (and doesn't have default value)

updating "networking.cilium.io": error applying update from "s3://k8s-kops-prow/e2e-414ec24bfc-41f44.test-cncf-aws.k8s.io/addons/networking.cilium.io/k8s-1.16-v1.11.yaml": failed to apply the manifest: error running kubectl: exit status 1
+ kops-finish
+ kubetest2 kops -v=2 --cloud-provider=aws --cluster-name=e2e-414ec24bfc-41f44.test-cncf-aws.k8s.io --kops-root=/home/prow/go/src/k8s.io/kops --admin-access= --env=KOPS_FEATURE_FLAGS=SpecOverrideFlag --kops-binary-path=/tmp/kops.SW1poQEv8 --down
I0728 14:15:24.982489    6417 featureflag.go:162] FeatureFlag "SpecOverrideFlag"=true
I0728 14:15:24.983288    6417 app.go:62] The files in RunDir shall not be part of Artifacts
I0728 14:15:24.983307    6417 app.go:63] pass rundir-in-artifacts flag True for RunDir to be part of Artifacts
I0728 14:15:24.983329    6417 app.go:65] RunDir for this run: "/home/prow/go/src/k8s.io/kops/_rundir/949ade74-0e7d-11ed-b6e4-d28c295d3c4e"
... skipping 322 lines ...