Result: FAILURE
Tests: 0 failed / 1 succeeded
Started: 2022-07-30 14:00
Elapsed: 15m40s
Revision: master

No Test Failures!



Error lines from build-log.txt

... skipping 212 lines ...
I0730 14:02:02.302399    6204 app.go:63] pass rundir-in-artifacts flag True for RunDir to be part of Artifacts
I0730 14:02:02.302423    6204 app.go:65] RunDir for this run: "/home/prow/go/src/k8s.io/kops/_rundir/ecd4654c-100f-11ed-bfc8-0eb9b3896f8e"
I0730 14:02:02.317547    6204 app.go:129] ID for this run: "ecd4654c-100f-11ed-bfc8-0eb9b3896f8e"
I0730 14:02:02.317883    6204 local.go:42] ⚙️ ssh-keygen -t ed25519 -N  -q -f /tmp/kops/e2e-414ec24bfc-41f44.test-cncf-aws.k8s.io/id_ed25519
I0730 14:02:02.327434    6204 dumplogs.go:45] /tmp/kops.kb3WmLfIZ toolbox dump --name e2e-414ec24bfc-41f44.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /tmp/kops/e2e-414ec24bfc-41f44.test-cncf-aws.k8s.io/id_ed25519 --ssh-user ubuntu
I0730 14:02:02.327632    6204 local.go:42] ⚙️ /tmp/kops.kb3WmLfIZ toolbox dump --name e2e-414ec24bfc-41f44.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /tmp/kops/e2e-414ec24bfc-41f44.test-cncf-aws.k8s.io/id_ed25519 --ssh-user ubuntu
W0730 14:02:02.864161    6204 down.go:34] Dumping cluster logs at the start of Down() failed: exit status 1
I0730 14:02:02.864231    6204 down.go:48] /tmp/kops.kb3WmLfIZ delete cluster --name e2e-414ec24bfc-41f44.test-cncf-aws.k8s.io --yes
I0730 14:02:02.864245    6204 local.go:42] ⚙️ /tmp/kops.kb3WmLfIZ delete cluster --name e2e-414ec24bfc-41f44.test-cncf-aws.k8s.io --yes
I0730 14:02:02.887983    6227 featureflag.go:162] FeatureFlag "SpecOverrideFlag"=true
I0730 14:02:02.888131    6227 featureflag.go:162] FeatureFlag "SpecOverrideFlag"=true
Error: error reading cluster configuration: Cluster.kops.k8s.io "e2e-414ec24bfc-41f44.test-cncf-aws.k8s.io" not found
Error: exit status 1
+ echo 'kubetest2 down failed'
kubetest2 down failed
+ [[ v == \v ]]
+ KOPS_BASE_URL=
++ kops-download-release v1.24.0
++ local kops
+++ mktemp -t kops.XXXXXXXXX
++ kops=/tmp/kops.ZqQMoixry
... skipping 10 lines ...
I0730 14:02:05.099732    6261 app.go:63] pass rundir-in-artifacts flag True for RunDir to be part of Artifacts
I0730 14:02:05.099760    6261 app.go:65] RunDir for this run: "/home/prow/go/src/k8s.io/kops/_rundir/ecd4654c-100f-11ed-bfc8-0eb9b3896f8e"
I0730 14:02:05.104037    6261 app.go:129] ID for this run: "ecd4654c-100f-11ed-bfc8-0eb9b3896f8e"
I0730 14:02:05.104136    6261 up.go:44] Cleaning up any leaked resources from previous cluster
I0730 14:02:05.104180    6261 dumplogs.go:45] /tmp/kops.ZqQMoixry toolbox dump --name e2e-414ec24bfc-41f44.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /tmp/kops/e2e-414ec24bfc-41f44.test-cncf-aws.k8s.io/id_ed25519 --ssh-user ubuntu
I0730 14:02:05.104230    6261 local.go:42] ⚙️ /tmp/kops.ZqQMoixry toolbox dump --name e2e-414ec24bfc-41f44.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /tmp/kops/e2e-414ec24bfc-41f44.test-cncf-aws.k8s.io/id_ed25519 --ssh-user ubuntu
W0730 14:02:05.551425    6261 down.go:34] Dumping cluster logs at the start of Down() failed: exit status 1
I0730 14:02:05.551543    6261 down.go:48] /tmp/kops.ZqQMoixry delete cluster --name e2e-414ec24bfc-41f44.test-cncf-aws.k8s.io --yes
I0730 14:02:05.551555    6261 local.go:42] ⚙️ /tmp/kops.ZqQMoixry delete cluster --name e2e-414ec24bfc-41f44.test-cncf-aws.k8s.io --yes
I0730 14:02:05.572409    6283 featureflag.go:162] FeatureFlag "SpecOverrideFlag"=true
I0730 14:02:05.572677    6283 featureflag.go:162] FeatureFlag "SpecOverrideFlag"=true
Error: error reading cluster configuration: Cluster.kops.k8s.io "e2e-414ec24bfc-41f44.test-cncf-aws.k8s.io" not found
I0730 14:02:06.030208    6261 http.go:37] curl http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip
2022/07/30 14:02:06 failed to get external ip from metadata service: http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip returned 404
I0730 14:02:06.045784    6261 http.go:37] curl https://ip.jsb.workers.dev
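The two http.go lines above are the runner discovering its own public IP (presumably to scope admin/SSH access to the job's own address, since --admin-access is passed empty later in this log). A rough sketch of that fallback, assuming the same two endpoints shown in the log and that the GCE metadata header is required; the metadata URL returns 404 when the job is not running on GCE, so the public echo service is tried next:

# Try GCE instance metadata first, then fall back to an external echo service.
curl -s -H 'Metadata-Flavor: Google' \
  http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip \
  || curl -s https://ip.jsb.workers.dev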
I0730 14:02:06.167272    6261 template.go:58] /tmp/kops.ZqQMoixry toolbox template --template tests/e2e/templates/many-addons.yaml.tmpl --output /tmp/kops-template3108529117/manifest.yaml --values /tmp/kops-template3108529117/values.yaml --name e2e-414ec24bfc-41f44.test-cncf-aws.k8s.io
I0730 14:02:06.167323    6261 local.go:42] ⚙️ /tmp/kops.ZqQMoixry toolbox template --template tests/e2e/templates/many-addons.yaml.tmpl --output /tmp/kops-template3108529117/manifest.yaml --values /tmp/kops-template3108529117/values.yaml --name e2e-414ec24bfc-41f44.test-cncf-aws.k8s.io
I0730 14:02:06.190571    6293 featureflag.go:162] FeatureFlag "SpecOverrideFlag"=true
I0730 14:02:06.190694    6293 featureflag.go:162] FeatureFlag "SpecOverrideFlag"=true
I0730 14:02:06.298906    6261 create.go:33] /tmp/kops.ZqQMoixry create --filename /tmp/kops-template3108529117/manifest.yaml --name e2e-414ec24bfc-41f44.test-cncf-aws.k8s.io
... skipping 66 lines ...

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.
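A minimal way to watch what this retry loop is waiting for, assuming a host that can resolve the cluster's public zone (the record name below is derived from the cluster name in this log; kops publishes the API endpoint as api.<cluster-name>):

# While this still returns the kops placeholder 203.0.113.123, validation keeps
# failing; once dns-controller publishes the real control-plane address, the
# retries below start reporting node and pod status instead.
dig +short api.e2e-414ec24bfc-41f44.test-cncf-aws.k8s.io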

Validation Failed
W0730 14:02:48.819538    6332 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.medium	4	4	eu-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0730 14:02:58.867606    6332 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.medium	4	4	eu-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0730 14:03:08.909100    6332 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.medium	4	4	eu-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0730 14:03:18.961043    6332 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.medium	4	4	eu-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0730 14:03:28.995756    6332 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.medium	4	4	eu-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0730 14:03:39.041398    6332 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.medium	4	4	eu-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0730 14:03:49.092328    6332 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.medium	4	4	eu-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0730 14:03:59.147703    6332 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.medium	4	4	eu-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0730 14:04:09.184207    6332 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.medium	4	4	eu-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0730 14:04:19.219932    6332 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.medium	4	4	eu-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0730 14:04:29.254343    6332 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.medium	4	4	eu-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0730 14:04:39.297901    6332 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.medium	4	4	eu-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0730 14:04:49.335647    6332 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.medium	4	4	eu-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0730 14:04:59.379051    6332 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.medium	4	4	eu-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0730 14:05:09.423586    6332 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.medium	4	4	eu-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0730 14:05:19.471213    6332 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.medium	4	4	eu-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0730 14:05:29.510860    6332 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.medium	4	4	eu-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0730 14:05:39.552429    6332 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.medium	4	4	eu-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0730 14:05:49.590824    6332 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.medium	4	4	eu-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0730 14:05:59.628368    6332 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.medium	4	4	eu-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0730 14:06:09.668450    6332 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.medium	4	4	eu-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0730 14:06:19.705851    6332 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.medium	4	4	eu-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0730 14:06:29.755532    6332 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.medium	4	4	eu-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0730 14:06:39.799564    6332 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.medium	4	4	eu-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0730 14:06:49.838546    6332 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.medium	4	4	eu-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0730 14:06:59.881183    6332 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.medium	4	4	eu-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0730 14:07:09.922427    6332 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.medium	4	4	eu-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0730 14:07:19.974466    6332 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.medium	4	4	eu-west-2a

... skipping 25 lines ...
Pod	kube-system/kube-proxy-ip-172-20-0-81.eu-west-2.compute.internal	system-node-critical pod "kube-proxy-ip-172-20-0-81.eu-west-2.compute.internal" is pending
Pod	kube-system/metrics-server-7c9d469d74-27b5n				system-cluster-critical pod "metrics-server-7c9d469d74-27b5n" is pending
Pod	kube-system/metrics-server-7c9d469d74-zkndx				system-cluster-critical pod "metrics-server-7c9d469d74-zkndx" is pending
Pod	kube-system/node-local-dns-6dkm7					system-node-critical pod "node-local-dns-6dkm7" is pending
Pod	kube-system/node-local-dns-kqn9h					system-node-critical pod "node-local-dns-kqn9h" is pending

Validation Failed
W0730 14:07:32.588553    6332 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.medium	4	4	eu-west-2a

... skipping 24 lines ...
Pod	kube-system/coredns-78cd66cbc9-lsqw7				system-cluster-critical pod "coredns-78cd66cbc9-lsqw7" is pending
Pod	kube-system/coredns-autoscaler-6d96c59bbf-wk5ff			system-cluster-critical pod "coredns-autoscaler-6d96c59bbf-wk5ff" is pending
Pod	kube-system/metrics-server-7c9d469d74-27b5n			system-cluster-critical pod "metrics-server-7c9d469d74-27b5n" is pending
Pod	kube-system/metrics-server-7c9d469d74-zkndx			system-cluster-critical pod "metrics-server-7c9d469d74-zkndx" is pending
Pod	kube-system/node-local-dns-mqpc7				system-node-critical pod "node-local-dns-mqpc7" is pending

Validation Failed
W0730 14:07:44.409460    6332 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.medium	4	4	eu-west-2a

... skipping 21 lines ...
Pod	kube-system/coredns-78cd66cbc9-lsqw7				system-cluster-critical pod "coredns-78cd66cbc9-lsqw7" is not ready (coredns)
Pod	kube-system/coredns-autoscaler-6d96c59bbf-wk5ff			system-cluster-critical pod "coredns-autoscaler-6d96c59bbf-wk5ff" is pending
Pod	kube-system/metrics-server-7c9d469d74-27b5n			system-cluster-critical pod "metrics-server-7c9d469d74-27b5n" is not ready (metrics-server)
Pod	kube-system/metrics-server-7c9d469d74-zkndx			system-cluster-critical pod "metrics-server-7c9d469d74-zkndx" is pending
Pod	kube-system/node-local-dns-mqpc7				system-node-critical pod "node-local-dns-mqpc7" is pending

Validation Failed
W0730 14:07:56.224033    6332 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.medium	4	4	eu-west-2a

... skipping 10 lines ...
Pod	kube-system/aws-load-balancer-controller-6579cdddfc-sdzjr	system-cluster-critical pod "aws-load-balancer-controller-6579cdddfc-sdzjr" is pending
Pod	kube-system/cilium-8jmds					system-node-critical pod "cilium-8jmds" is not ready (cilium-agent)
Pod	kube-system/cilium-kc5bh					system-node-critical pod "cilium-kc5bh" is not ready (cilium-agent)
Pod	kube-system/metrics-server-7c9d469d74-27b5n			system-cluster-critical pod "metrics-server-7c9d469d74-27b5n" is not ready (metrics-server)
Pod	kube-system/metrics-server-7c9d469d74-zkndx			system-cluster-critical pod "metrics-server-7c9d469d74-zkndx" is pending

Validation Failed
W0730 14:08:08.118098    6332 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.medium	4	4	eu-west-2a

... skipping 8 lines ...
VALIDATION ERRORS
KIND	NAME								MESSAGE
Pod	kube-system/aws-load-balancer-controller-6579cdddfc-sdzjr	system-cluster-critical pod "aws-load-balancer-controller-6579cdddfc-sdzjr" is pending
Pod	kube-system/cilium-kc5bh					system-node-critical pod "cilium-kc5bh" is not ready (cilium-agent)
Pod	kube-system/metrics-server-7c9d469d74-zkndx			system-cluster-critical pod "metrics-server-7c9d469d74-zkndx" is not ready (metrics-server)

Validation Failed
W0730 14:08:20.056099    6332 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.medium	4	4	eu-west-2a

... skipping 7 lines ...

VALIDATION ERRORS
KIND	NAME								MESSAGE
Pod	kube-system/aws-load-balancer-controller-6579cdddfc-sdzjr	system-cluster-critical pod "aws-load-balancer-controller-6579cdddfc-sdzjr" is pending
Pod	kube-system/metrics-server-7c9d469d74-zkndx			system-cluster-critical pod "metrics-server-7c9d469d74-zkndx" is not ready (metrics-server)

Validation Failed
W0730 14:08:31.995049    6332 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.medium	4	4	eu-west-2a

... skipping 6 lines ...
ip-172-20-0-81.eu-west-2.compute.internal	node	True

VALIDATION ERRORS
KIND	NAME								MESSAGE
Pod	kube-system/aws-load-balancer-controller-6579cdddfc-sdzjr	system-cluster-critical pod "aws-load-balancer-controller-6579cdddfc-sdzjr" is pending

Validation Failed
W0730 14:08:43.961595    6332 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.medium	4	4	eu-west-2a

... skipping 6 lines ...
ip-172-20-0-81.eu-west-2.compute.internal	node	True

VALIDATION ERRORS
KIND	NAME								MESSAGE
Pod	kube-system/aws-load-balancer-controller-6579cdddfc-sdzjr	system-cluster-critical pod "aws-load-balancer-controller-6579cdddfc-sdzjr" is pending

Validation Failed
W0730 14:08:55.850314    6332 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.medium	4	4	eu-west-2a

... skipping 6 lines ...
ip-172-20-0-81.eu-west-2.compute.internal	node	True

VALIDATION ERRORS
KIND	NAME								MESSAGE
Pod	kube-system/aws-load-balancer-controller-6579cdddfc-sdzjr	system-cluster-critical pod "aws-load-balancer-controller-6579cdddfc-sdzjr" is pending

Validation Failed
W0730 14:09:07.747667    6332 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.medium	4	4	eu-west-2a

... skipping 6 lines ...
ip-172-20-0-81.eu-west-2.compute.internal	node	True

VALIDATION ERRORS
KIND	NAME								MESSAGE
Pod	kube-system/aws-load-balancer-controller-6579cdddfc-sdzjr	system-cluster-critical pod "aws-load-balancer-controller-6579cdddfc-sdzjr" is pending

Validation Failed
W0730 14:09:19.593171    6332 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.medium	4	4	eu-west-2a

... skipping 1074 lines ...
I0730 14:13:51.758030    6427 channel_version.go:140] manifest Match for "cluster-autoscaler.addons.k8s.io": Channel=s3://k8s-kops-prow/e2e-414ec24bfc-41f44.test-cncf-aws.k8s.io/addons/bootstrap-channel.yaml Id=k8s-1.15 ManifestHash=226c8f47e053dc2e79d203446a23a5925942f3fc39a99854b17cf2251881a11c SystemGeneration=1
NAME			CURRENT	UPDATE									PKI
networking.cilium.io	-	0d7e665f356f97c0137cda33453dbd5d40d5fbca30b62df8ef30e2db894a36a1	no
I0730 14:13:51.954938    6427 addon.go:188] Applying update from "s3://k8s-kops-prow/e2e-414ec24bfc-41f44.test-cncf-aws.k8s.io/addons/networking.cilium.io/k8s-1.16-v1.11.yaml"
I0730 14:13:51.955087    6427 s3fs.go:329] Reading file "s3://k8s-kops-prow/e2e-414ec24bfc-41f44.test-cncf-aws.k8s.io/addons/networking.cilium.io/k8s-1.16-v1.11.yaml"
I0730 14:13:52.062513    6427 apply.go:83] Running command: kubectl apply -f /tmp/channel3597764166/manifest.yaml --server-side --force-conflicts --field-manager=kops
I0730 14:13:54.928455    6427 apply.go:86] error running kubectl apply -f /tmp/channel3597764166/manifest.yaml --server-side --force-conflicts --field-manager=kops
I0730 14:13:54.928572    6427 apply.go:87] serviceaccount/cilium serverside-applied
serviceaccount/cilium-operator serverside-applied
serviceaccount/hubble-relay serverside-applied
configmap/cilium-config serverside-applied
configmap/hubble-relay-config serverside-applied
clusterrole.rbac.authorization.k8s.io/cilium serverside-applied
... skipping 5 lines ...
service/hubble-relay serverside-applied
daemonset.apps/cilium serverside-applied
deployment.apps/cilium-operator serverside-applied
certificate.cert-manager.io/hubble-server-certs serverside-applied
certificate.cert-manager.io/hubble-relay-client-certs serverside-applied
poddisruptionbudget.policy/cilium-operator serverside-applied
Error from server: failed to create typed patch object: .spec.template.spec.containers[name="hubble-relay"].ports: element 0: associative list with keys has an element that omits key field "protocol" (and doesn't have default value)
E0730 14:13:54.928618    6427 apply.go:52] failed to apply the manifest: error running kubectl: exit status 1
I0730 14:13:54.928721    6427 apply.go:83] Running command: kubectl replace -f /tmp/channel3597764166/manifest.yaml --field-manager=kops
I0730 14:13:59.081792    6427 apply.go:86] error running kubectl replace -f /tmp/channel3597764166/manifest.yaml --field-manager=kops
I0730 14:13:59.081843    6427 apply.go:87] serviceaccount/cilium replaced
serviceaccount/cilium-operator replaced
serviceaccount/hubble-relay replaced
configmap/cilium-config replaced
configmap/hubble-relay-config replaced
clusterrole.rbac.authorization.k8s.io/cilium replaced
... skipping 4 lines ...
clusterrolebinding.rbac.authorization.k8s.io/hubble-relay replaced
daemonset.apps/cilium replaced
deployment.apps/cilium-operator replaced
certificate.cert-manager.io/hubble-server-certs replaced
certificate.cert-manager.io/hubble-relay-client-certs replaced
poddisruptionbudget.policy/cilium-operator replaced
Error from server (Invalid): error when replacing "/tmp/channel3597764166/manifest.yaml": Service "hubble-relay" is invalid: spec.clusterIP: Invalid value: "": field is immutable
Error from server (NotFound): error when replacing "/tmp/channel3597764166/manifest.yaml": deployments.apps "hubble-relay" not found
E0730 14:13:59.081888    6427 apply.go:61] failed to replace manifest: error running kubectl: exit status 1
I0730 14:13:59.081976    6427 apply.go:83] Running command: kubectl apply -f /tmp/channel3597764166/manifest.yaml --server-side --force-conflicts --field-manager=kops
I0730 14:14:01.508037    6427 apply.go:86] error running kubectl apply -f /tmp/channel3597764166/manifest.yaml --server-side --force-conflicts --field-manager=kops
I0730 14:14:01.508091    6427 apply.go:87] serviceaccount/cilium serverside-applied
serviceaccount/cilium-operator serverside-applied
serviceaccount/hubble-relay serverside-applied
configmap/cilium-config serverside-applied
configmap/hubble-relay-config serverside-applied
clusterrole.rbac.authorization.k8s.io/cilium serverside-applied
... skipping 5 lines ...
service/hubble-relay serverside-applied
daemonset.apps/cilium serverside-applied
deployment.apps/cilium-operator serverside-applied
certificate.cert-manager.io/hubble-server-certs serverside-applied
certificate.cert-manager.io/hubble-relay-client-certs serverside-applied
poddisruptionbudget.policy/cilium-operator serverside-applied
Error from server: failed to create typed patch object: .spec.template.spec.containers[name="hubble-relay"].ports: element 0: associative list with keys has an element that omits key field "protocol" (and doesn't have default value)

updating "networking.cilium.io": error applying update from "s3://k8s-kops-prow/e2e-414ec24bfc-41f44.test-cncf-aws.k8s.io/addons/networking.cilium.io/k8s-1.16-v1.11.yaml": failed to apply the manifest: error running kubectl: exit status 1
+ kops-finish
+ kubetest2 kops -v=2 --cloud-provider=aws --cluster-name=e2e-414ec24bfc-41f44.test-cncf-aws.k8s.io --kops-root=/home/prow/go/src/k8s.io/kops --admin-access= --env=KOPS_FEATURE_FLAGS=SpecOverrideFlag --kops-binary-path=/tmp/kops.kb3WmLfIZ --down
I0730 14:14:01.542939    6474 featureflag.go:162] FeatureFlag "SpecOverrideFlag"=true
I0730 14:14:01.544192    6474 app.go:62] The files in RunDir shall not be part of Artifacts
I0730 14:14:01.544230    6474 app.go:63] pass rundir-in-artifacts flag True for RunDir to be part of Artifacts
I0730 14:14:01.544261    6474 app.go:65] RunDir for this run: "/home/prow/go/src/k8s.io/kops/_rundir/ecd4654c-100f-11ed-bfc8-0eb9b3896f8e"
... skipping 254 lines ...