Result: FAILURE
Tests: 0 failed / 1 succeeded
Started: 2022-07-30 02:01
Elapsed: 19m13s
Revision: master

No Test Failures!



Error lines from build-log.txt

... skipping 212 lines ...
I0730 02:02:39.402743    6219 app.go:63] pass rundir-in-artifacts flag True for RunDir to be part of Artifacts
I0730 02:02:39.402778    6219 app.go:65] RunDir for this run: "/home/prow/go/src/k8s.io/kops/_rundir/67584823-0fab-11ed-b6e4-d28c295d3c4e"
I0730 02:02:39.408825    6219 app.go:129] ID for this run: "67584823-0fab-11ed-b6e4-d28c295d3c4e"
I0730 02:02:39.409217    6219 local.go:42] ⚙️ ssh-keygen -t ed25519 -N  -q -f /tmp/kops/e2e-414ec24bfc-41f44.test-cncf-aws.k8s.io/id_ed25519
I0730 02:02:39.418666    6219 dumplogs.go:45] /tmp/kops.ec4BNgIVm toolbox dump --name e2e-414ec24bfc-41f44.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /tmp/kops/e2e-414ec24bfc-41f44.test-cncf-aws.k8s.io/id_ed25519 --ssh-user ubuntu
I0730 02:02:39.418712    6219 local.go:42] ⚙️ /tmp/kops.ec4BNgIVm toolbox dump --name e2e-414ec24bfc-41f44.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /tmp/kops/e2e-414ec24bfc-41f44.test-cncf-aws.k8s.io/id_ed25519 --ssh-user ubuntu
W0730 02:02:39.948503    6219 down.go:34] Dumping cluster logs at the start of Down() failed: exit status 1
I0730 02:02:39.948563    6219 down.go:48] /tmp/kops.ec4BNgIVm delete cluster --name e2e-414ec24bfc-41f44.test-cncf-aws.k8s.io --yes
I0730 02:02:39.948581    6219 local.go:42] ⚙️ /tmp/kops.ec4BNgIVm delete cluster --name e2e-414ec24bfc-41f44.test-cncf-aws.k8s.io --yes
I0730 02:02:39.970985    6239 featureflag.go:162] FeatureFlag "SpecOverrideFlag"=true
I0730 02:02:39.971094    6239 featureflag.go:162] FeatureFlag "SpecOverrideFlag"=true
Error: error reading cluster configuration: Cluster.kops.k8s.io "e2e-414ec24bfc-41f44.test-cncf-aws.k8s.io" not found
Error: exit status 1
+ echo 'kubetest2 down failed'
kubetest2 down failed
+ [[ v == \v ]]
+ KOPS_BASE_URL=
++ kops-download-release v1.24.0
++ local kops
+++ mktemp -t kops.XXXXXXXXX
++ kops=/tmp/kops.OZzqoJezl
... skipping 10 lines ...
I0730 02:02:43.642956    6274 app.go:63] pass rundir-in-artifacts flag True for RunDir to be part of Artifacts
I0730 02:02:43.642987    6274 app.go:65] RunDir for this run: "/home/prow/go/src/k8s.io/kops/_rundir/67584823-0fab-11ed-b6e4-d28c295d3c4e"
I0730 02:02:43.668776    6274 app.go:129] ID for this run: "67584823-0fab-11ed-b6e4-d28c295d3c4e"
I0730 02:02:43.668863    6274 up.go:44] Cleaning up any leaked resources from previous cluster
I0730 02:02:43.668902    6274 dumplogs.go:45] /tmp/kops.OZzqoJezl toolbox dump --name e2e-414ec24bfc-41f44.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /tmp/kops/e2e-414ec24bfc-41f44.test-cncf-aws.k8s.io/id_ed25519 --ssh-user ubuntu
I0730 02:02:43.668946    6274 local.go:42] ⚙️ /tmp/kops.OZzqoJezl toolbox dump --name e2e-414ec24bfc-41f44.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /tmp/kops/e2e-414ec24bfc-41f44.test-cncf-aws.k8s.io/id_ed25519 --ssh-user ubuntu
W0730 02:02:44.156210    6274 down.go:34] Dumping cluster logs at the start of Down() failed: exit status 1
I0730 02:02:44.156256    6274 down.go:48] /tmp/kops.OZzqoJezl delete cluster --name e2e-414ec24bfc-41f44.test-cncf-aws.k8s.io --yes
I0730 02:02:44.156267    6274 local.go:42] ⚙️ /tmp/kops.OZzqoJezl delete cluster --name e2e-414ec24bfc-41f44.test-cncf-aws.k8s.io --yes
I0730 02:02:44.177925    6296 featureflag.go:162] FeatureFlag "SpecOverrideFlag"=true
I0730 02:02:44.178028    6296 featureflag.go:162] FeatureFlag "SpecOverrideFlag"=true
Error: error reading cluster configuration: Cluster.kops.k8s.io "e2e-414ec24bfc-41f44.test-cncf-aws.k8s.io" not found
I0730 02:02:44.661741    6274 http.go:37] curl http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip
2022/07/30 02:02:44 failed to get external ip from metadata service: http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip returned 404
I0730 02:02:44.677113    6274 http.go:37] curl https://ip.jsb.workers.dev
I0730 02:02:44.798932    6274 template.go:58] /tmp/kops.OZzqoJezl toolbox template --template tests/e2e/templates/many-addons.yaml.tmpl --output /tmp/kops-template3864377186/manifest.yaml --values /tmp/kops-template3864377186/values.yaml --name e2e-414ec24bfc-41f44.test-cncf-aws.k8s.io
I0730 02:02:44.798973    6274 local.go:42] ⚙️ /tmp/kops.OZzqoJezl toolbox template --template tests/e2e/templates/many-addons.yaml.tmpl --output /tmp/kops-template3864377186/manifest.yaml --values /tmp/kops-template3864377186/values.yaml --name e2e-414ec24bfc-41f44.test-cncf-aws.k8s.io
I0730 02:02:44.823605    6306 featureflag.go:162] FeatureFlag "SpecOverrideFlag"=true
I0730 02:02:44.823791    6306 featureflag.go:162] FeatureFlag "SpecOverrideFlag"=true
I0730 02:02:45.005988    6274 create.go:33] /tmp/kops.OZzqoJezl create --filename /tmp/kops-template3864377186/manifest.yaml --name e2e-414ec24bfc-41f44.test-cncf-aws.k8s.io
... skipping 66 lines ...

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.
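The message above points at the dns-controller logs and at the placeholder API record; a minimal sketch of how one might check both by hand (hypothetical follow-up, assuming a working kubeconfig for the cluster — not something this job runs) is:

# Hypothetical diagnostic commands, not part of this job's scripts.
# dns-controller runs as a Deployment in kube-system:
kubectl -n kube-system logs deployment/dns-controller --tail=100

# Check whether the api record still resolves to the kops placeholder
# (203.0.113.123) or has been updated to the real control-plane address:
dig +short api.e2e-414ec24bfc-41f44.test-cncf-aws.k8s.io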

Validation Failed
W0730 02:03:31.954080    6347 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-sa-east-1a	Master	c5.large	1	1	sa-east-1a
nodes-sa-east-1a	Node	t3.medium	4	4	sa-east-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0730 02:03:42.004742    6347 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-sa-east-1a	Master	c5.large	1	1	sa-east-1a
nodes-sa-east-1a	Node	t3.medium	4	4	sa-east-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0730 02:03:52.039504    6347 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-sa-east-1a	Master	c5.large	1	1	sa-east-1a
nodes-sa-east-1a	Node	t3.medium	4	4	sa-east-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0730 02:04:02.091802    6347 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-sa-east-1a	Master	c5.large	1	1	sa-east-1a
nodes-sa-east-1a	Node	t3.medium	4	4	sa-east-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0730 02:04:12.142339    6347 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-sa-east-1a	Master	c5.large	1	1	sa-east-1a
nodes-sa-east-1a	Node	t3.medium	4	4	sa-east-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0730 02:04:22.182430    6347 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-sa-east-1a	Master	c5.large	1	1	sa-east-1a
nodes-sa-east-1a	Node	t3.medium	4	4	sa-east-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0730 02:04:32.224983    6347 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-sa-east-1a	Master	c5.large	1	1	sa-east-1a
nodes-sa-east-1a	Node	t3.medium	4	4	sa-east-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0730 02:04:42.268462    6347 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-sa-east-1a	Master	c5.large	1	1	sa-east-1a
nodes-sa-east-1a	Node	t3.medium	4	4	sa-east-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0730 02:04:52.324378    6347 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-sa-east-1a	Master	c5.large	1	1	sa-east-1a
nodes-sa-east-1a	Node	t3.medium	4	4	sa-east-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0730 02:05:02.361842    6347 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-sa-east-1a	Master	c5.large	1	1	sa-east-1a
nodes-sa-east-1a	Node	t3.medium	4	4	sa-east-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0730 02:05:12.403753    6347 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-sa-east-1a	Master	c5.large	1	1	sa-east-1a
nodes-sa-east-1a	Node	t3.medium	4	4	sa-east-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0730 02:05:22.449508    6347 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-sa-east-1a	Master	c5.large	1	1	sa-east-1a
nodes-sa-east-1a	Node	t3.medium	4	4	sa-east-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0730 02:05:32.488305    6347 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-sa-east-1a	Master	c5.large	1	1	sa-east-1a
nodes-sa-east-1a	Node	t3.medium	4	4	sa-east-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0730 02:05:42.541174    6347 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-sa-east-1a	Master	c5.large	1	1	sa-east-1a
nodes-sa-east-1a	Node	t3.medium	4	4	sa-east-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0730 02:05:52.579841    6347 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-sa-east-1a	Master	c5.large	1	1	sa-east-1a
nodes-sa-east-1a	Node	t3.medium	4	4	sa-east-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0730 02:06:02.619726    6347 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-sa-east-1a	Master	c5.large	1	1	sa-east-1a
nodes-sa-east-1a	Node	t3.medium	4	4	sa-east-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0730 02:06:12.656134    6347 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-sa-east-1a	Master	c5.large	1	1	sa-east-1a
nodes-sa-east-1a	Node	t3.medium	4	4	sa-east-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0730 02:06:22.701406    6347 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-sa-east-1a	Master	c5.large	1	1	sa-east-1a
nodes-sa-east-1a	Node	t3.medium	4	4	sa-east-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0730 02:06:32.744133    6347 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-sa-east-1a	Master	c5.large	1	1	sa-east-1a
nodes-sa-east-1a	Node	t3.medium	4	4	sa-east-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0730 02:06:42.785465    6347 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-sa-east-1a	Master	c5.large	1	1	sa-east-1a
nodes-sa-east-1a	Node	t3.medium	4	4	sa-east-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0730 02:06:52.829660    6347 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-sa-east-1a	Master	c5.large	1	1	sa-east-1a
nodes-sa-east-1a	Node	t3.medium	4	4	sa-east-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0730 02:07:02.871036    6347 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-sa-east-1a	Master	c5.large	1	1	sa-east-1a
nodes-sa-east-1a	Node	t3.medium	4	4	sa-east-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0730 02:07:12.913059    6347 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-sa-east-1a	Master	c5.large	1	1	sa-east-1a
nodes-sa-east-1a	Node	t3.medium	4	4	sa-east-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0730 02:07:22.963086    6347 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-sa-east-1a	Master	c5.large	1	1	sa-east-1a
nodes-sa-east-1a	Node	t3.medium	4	4	sa-east-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0730 02:07:32.995925    6347 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-sa-east-1a	Master	c5.large	1	1	sa-east-1a
nodes-sa-east-1a	Node	t3.medium	4	4	sa-east-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0730 02:07:43.033578    6347 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-sa-east-1a	Master	c5.large	1	1	sa-east-1a
nodes-sa-east-1a	Node	t3.medium	4	4	sa-east-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0730 02:07:53.072784    6347 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-sa-east-1a	Master	c5.large	1	1	sa-east-1a
nodes-sa-east-1a	Node	t3.medium	4	4	sa-east-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0730 02:08:03.113562    6347 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-sa-east-1a	Master	c5.large	1	1	sa-east-1a
nodes-sa-east-1a	Node	t3.medium	4	4	sa-east-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0730 02:08:13.212143    6347 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-sa-east-1a	Master	c5.large	1	1	sa-east-1a
nodes-sa-east-1a	Node	t3.medium	4	4	sa-east-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0730 02:08:23.248789    6347 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-sa-east-1a	Master	c5.large	1	1	sa-east-1a
nodes-sa-east-1a	Node	t3.medium	4	4	sa-east-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0730 02:08:33.292295    6347 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-sa-east-1a	Master	c5.large	1	1	sa-east-1a
nodes-sa-east-1a	Node	t3.medium	4	4	sa-east-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0730 02:08:43.328805    6347 validate_cluster.go:232] (will retry): cluster not yet healthy
W0730 02:09:23.371091    6347 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://api.e2e-414ec24bfc-41f44.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 203.0.113.123:443: i/o timeout
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-sa-east-1a	Master	c5.large	1	1	sa-east-1a
nodes-sa-east-1a	Node	t3.medium	4	4	sa-east-1a

NODE STATUS
... skipping 17 lines ...
Pod	kube-system/coredns-78cd66cbc9-pzszv				system-cluster-critical pod "coredns-78cd66cbc9-pzszv" is not ready (coredns)
Pod	kube-system/coredns-autoscaler-6d96c59bbf-m6sgr			system-cluster-critical pod "coredns-autoscaler-6d96c59bbf-m6sgr" is pending
Pod	kube-system/metrics-server-7c9d469d74-5sxz9			system-cluster-critical pod "metrics-server-7c9d469d74-5sxz9" is pending
Pod	kube-system/metrics-server-7c9d469d74-9gchr			system-cluster-critical pod "metrics-server-7c9d469d74-9gchr" is not ready (metrics-server)
Pod	kube-system/node-local-dns-mlnjh				system-node-critical pod "node-local-dns-mlnjh" is pending

Validation Failed
W0730 02:09:36.990674    6347 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-sa-east-1a	Master	c5.large	1	1	sa-east-1a
nodes-sa-east-1a	Node	t3.medium	4	4	sa-east-1a

... skipping 11 lines ...
Pod	kube-system/cert-manager-webhook-9b4bf487-j2pq5			system-cluster-critical pod "cert-manager-webhook-9b4bf487-j2pq5" is not ready (cert-manager)
Pod	kube-system/cilium-69zkc					system-node-critical pod "cilium-69zkc" is not ready (cilium-agent)
Pod	kube-system/coredns-78cd66cbc9-mnjhv				system-cluster-critical pod "coredns-78cd66cbc9-mnjhv" is not ready (coredns)
Pod	kube-system/metrics-server-7c9d469d74-5sxz9			system-cluster-critical pod "metrics-server-7c9d469d74-5sxz9" is not ready (metrics-server)
Pod	kube-system/metrics-server-7c9d469d74-9gchr			system-cluster-critical pod "metrics-server-7c9d469d74-9gchr" is not ready (metrics-server)

Validation Failed
W0730 02:09:49.546166    6347 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-sa-east-1a	Master	c5.large	1	1	sa-east-1a
nodes-sa-east-1a	Node	t3.medium	4	4	sa-east-1a

... skipping 7 lines ...

VALIDATION ERRORS
KIND	NAME								MESSAGE
Pod	kube-system/aws-load-balancer-controller-77479469f8-pgnz6	system-cluster-critical pod "aws-load-balancer-controller-77479469f8-pgnz6" is pending
Pod	kube-system/metrics-server-7c9d469d74-5sxz9			system-cluster-critical pod "metrics-server-7c9d469d74-5sxz9" is not ready (metrics-server)

Validation Failed
W0730 02:10:02.053794    6347 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-sa-east-1a	Master	c5.large	1	1	sa-east-1a
nodes-sa-east-1a	Node	t3.medium	4	4	sa-east-1a

... skipping 6 lines ...
ip-172-20-0-55.sa-east-1.compute.internal	node	True

VALIDATION ERRORS
KIND	NAME								MESSAGE
Pod	kube-system/aws-load-balancer-controller-77479469f8-pgnz6	system-cluster-critical pod "aws-load-balancer-controller-77479469f8-pgnz6" is pending

Validation Failed
W0730 02:10:14.543073    6347 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-sa-east-1a	Master	c5.large	1	1	sa-east-1a
nodes-sa-east-1a	Node	t3.medium	4	4	sa-east-1a

... skipping 6 lines ...
ip-172-20-0-55.sa-east-1.compute.internal	node	True

VALIDATION ERRORS
KIND	NAME								MESSAGE
Pod	kube-system/aws-load-balancer-controller-77479469f8-pgnz6	system-cluster-critical pod "aws-load-balancer-controller-77479469f8-pgnz6" is pending

Validation Failed
W0730 02:10:27.318094    6347 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-sa-east-1a	Master	c5.large	1	1	sa-east-1a
nodes-sa-east-1a	Node	t3.medium	4	4	sa-east-1a

... skipping 6 lines ...
ip-172-20-0-55.sa-east-1.compute.internal	node	True

VALIDATION ERRORS
KIND	NAME								MESSAGE
Pod	kube-system/aws-load-balancer-controller-77479469f8-pgnz6	system-cluster-critical pod "aws-load-balancer-controller-77479469f8-pgnz6" is pending

Validation Failed
W0730 02:10:39.791558    6347 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-sa-east-1a	Master	c5.large	1	1	sa-east-1a
nodes-sa-east-1a	Node	t3.medium	4	4	sa-east-1a

... skipping 6 lines ...
ip-172-20-0-55.sa-east-1.compute.internal	node	True

VALIDATION ERRORS
KIND	NAME								MESSAGE
Pod	kube-system/aws-load-balancer-controller-77479469f8-pgnz6	system-cluster-critical pod "aws-load-balancer-controller-77479469f8-pgnz6" is pending

Validation Failed
W0730 02:10:52.232907    6347 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-sa-east-1a	Master	c5.large	1	1	sa-east-1a
nodes-sa-east-1a	Node	t3.medium	4	4	sa-east-1a

... skipping 6 lines ...
ip-172-20-0-55.sa-east-1.compute.internal	node	True

VALIDATION ERRORS
KIND	NAME								MESSAGE
Pod	kube-system/aws-load-balancer-controller-77479469f8-pgnz6	system-cluster-critical pod "aws-load-balancer-controller-77479469f8-pgnz6" is pending

Validation Failed
W0730 02:11:04.915158    6347 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-sa-east-1a	Master	c5.large	1	1	sa-east-1a
nodes-sa-east-1a	Node	t3.medium	4	4	sa-east-1a

... skipping 6 lines ...
ip-172-20-0-55.sa-east-1.compute.internal	node	True

VALIDATION ERRORS
KIND	NAME								MESSAGE
Pod	kube-system/aws-load-balancer-controller-77479469f8-pgnz6	system-cluster-critical pod "aws-load-balancer-controller-77479469f8-pgnz6" is pending

Validation Failed
W0730 02:11:17.372386    6347 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-sa-east-1a	Master	c5.large	1	1	sa-east-1a
nodes-sa-east-1a	Node	t3.medium	4	4	sa-east-1a

... skipping 6 lines ...
ip-172-20-0-55.sa-east-1.compute.internal	node	True

VALIDATION ERRORS
KIND	NAME								MESSAGE
Pod	kube-system/aws-load-balancer-controller-77479469f8-pgnz6	system-cluster-critical pod "aws-load-balancer-controller-77479469f8-pgnz6" is pending

Validation Failed
W0730 02:11:29.990106    6347 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-sa-east-1a	Master	c5.large	1	1	sa-east-1a
nodes-sa-east-1a	Node	t3.medium	4	4	sa-east-1a

... skipping 6 lines ...
ip-172-20-0-55.sa-east-1.compute.internal	node	True

VALIDATION ERRORS
KIND	NAME								MESSAGE
Pod	kube-system/aws-load-balancer-controller-77479469f8-pgnz6	system-cluster-critical pod "aws-load-balancer-controller-77479469f8-pgnz6" is pending

Validation Failed
W0730 02:11:42.522981    6347 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-sa-east-1a	Master	c5.large	1	1	sa-east-1a
nodes-sa-east-1a	Node	t3.medium	4	4	sa-east-1a

... skipping 6 lines ...
ip-172-20-0-55.sa-east-1.compute.internal	node	True

VALIDATION ERRORS
KIND	NAME								MESSAGE
Pod	kube-system/aws-load-balancer-controller-77479469f8-pgnz6	system-cluster-critical pod "aws-load-balancer-controller-77479469f8-pgnz6" is pending

Validation Failed
W0730 02:11:55.089413    6347 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-sa-east-1a	Master	c5.large	1	1	sa-east-1a
nodes-sa-east-1a	Node	t3.medium	4	4	sa-east-1a

... skipping 6 lines ...
ip-172-20-0-55.sa-east-1.compute.internal	node	True

VALIDATION ERRORS
KIND	NAME								MESSAGE
Pod	kube-system/aws-load-balancer-controller-77479469f8-pgnz6	system-cluster-critical pod "aws-load-balancer-controller-77479469f8-pgnz6" is pending

Validation Failed
W0730 02:12:07.604154    6347 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-sa-east-1a	Master	c5.large	1	1	sa-east-1a
nodes-sa-east-1a	Node	t3.medium	4	4	sa-east-1a

... skipping 6 lines ...
ip-172-20-0-55.sa-east-1.compute.internal	node	True

VALIDATION ERRORS
KIND	NAME								MESSAGE
Pod	kube-system/aws-load-balancer-controller-77479469f8-pgnz6	system-cluster-critical pod "aws-load-balancer-controller-77479469f8-pgnz6" is pending

Validation Failed
W0730 02:12:20.284589    6347 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-sa-east-1a	Master	c5.large	1	1	sa-east-1a
nodes-sa-east-1a	Node	t3.medium	4	4	sa-east-1a

... skipping 6 lines ...
ip-172-20-0-55.sa-east-1.compute.internal	node	True

VALIDATION ERRORS
KIND	NAME								MESSAGE
Pod	kube-system/aws-load-balancer-controller-77479469f8-pgnz6	system-cluster-critical pod "aws-load-balancer-controller-77479469f8-pgnz6" is pending

Validation Failed
W0730 02:12:32.842550    6347 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-sa-east-1a	Master	c5.large	1	1	sa-east-1a
nodes-sa-east-1a	Node	t3.medium	4	4	sa-east-1a

... skipping 1074 lines ...
I0730 02:17:19.551146    6443 channel_version.go:140] manifest Match for "certmanager.io": Channel=s3://k8s-kops-prow/e2e-414ec24bfc-41f44.test-cncf-aws.k8s.io/addons/bootstrap-channel.yaml Id=k8s-1.16 ManifestHash=2b1fc2ca18c196c00252385cd9b7dc3f7f6dbb0c669089f9b9e1962279a433c4 SystemGeneration=1
NAME			CURRENT	UPDATE									PKI
networking.cilium.io	-	0d7e665f356f97c0137cda33453dbd5d40d5fbca30b62df8ef30e2db894a36a1	no
I0730 02:17:19.840209    6443 addon.go:188] Applying update from "s3://k8s-kops-prow/e2e-414ec24bfc-41f44.test-cncf-aws.k8s.io/addons/networking.cilium.io/k8s-1.16-v1.11.yaml"
I0730 02:17:19.840337    6443 s3fs.go:329] Reading file "s3://k8s-kops-prow/e2e-414ec24bfc-41f44.test-cncf-aws.k8s.io/addons/networking.cilium.io/k8s-1.16-v1.11.yaml"
I0730 02:17:19.951065    6443 apply.go:83] Running command: kubectl apply -f /tmp/channel4249877528/manifest.yaml --server-side --force-conflicts --field-manager=kops
I0730 02:17:23.846685    6443 apply.go:86] error running kubectl apply -f /tmp/channel4249877528/manifest.yaml --server-side --force-conflicts --field-manager=kops
I0730 02:17:23.846763    6443 apply.go:87] serviceaccount/cilium serverside-applied
serviceaccount/cilium-operator serverside-applied
serviceaccount/hubble-relay serverside-applied
configmap/cilium-config serverside-applied
configmap/hubble-relay-config serverside-applied
clusterrole.rbac.authorization.k8s.io/cilium serverside-applied
... skipping 5 lines ...
service/hubble-relay serverside-applied
daemonset.apps/cilium serverside-applied
deployment.apps/cilium-operator serverside-applied
certificate.cert-manager.io/hubble-server-certs serverside-applied
certificate.cert-manager.io/hubble-relay-client-certs serverside-applied
poddisruptionbudget.policy/cilium-operator serverside-applied
Error from server: failed to create typed patch object: .spec.template.spec.containers[name="hubble-relay"].ports: element 0: associative list with keys has an element that omits key field "protocol" (and doesn't have default value)
E0730 02:17:23.846793    6443 apply.go:52] failed to apply the manifest: error running kubectl: exit status 1
I0730 02:17:23.846871    6443 apply.go:83] Running command: kubectl replace -f /tmp/channel4249877528/manifest.yaml --field-manager=kops
I0730 02:17:29.782254    6443 apply.go:86] error running kubectl replace -f /tmp/channel4249877528/manifest.yaml --field-manager=kops
I0730 02:17:29.782329    6443 apply.go:87] serviceaccount/cilium replaced
serviceaccount/cilium-operator replaced
serviceaccount/hubble-relay replaced
configmap/cilium-config replaced
configmap/hubble-relay-config replaced
clusterrole.rbac.authorization.k8s.io/cilium replaced
... skipping 4 lines ...
clusterrolebinding.rbac.authorization.k8s.io/hubble-relay replaced
daemonset.apps/cilium replaced
deployment.apps/cilium-operator replaced
certificate.cert-manager.io/hubble-server-certs replaced
certificate.cert-manager.io/hubble-relay-client-certs replaced
poddisruptionbudget.policy/cilium-operator replaced
Error from server (Invalid): error when replacing "/tmp/channel4249877528/manifest.yaml": Service "hubble-relay" is invalid: spec.clusterIP: Invalid value: "": field is immutable
Error from server (NotFound): error when replacing "/tmp/channel4249877528/manifest.yaml": deployments.apps "hubble-relay" not found
E0730 02:17:29.782350    6443 apply.go:61] failed to replace manifest: error running kubectl: exit status 1
I0730 02:17:29.782426    6443 apply.go:83] Running command: kubectl apply -f /tmp/channel4249877528/manifest.yaml --server-side --force-conflicts --field-manager=kops
I0730 02:17:33.163001    6443 apply.go:86] error running kubectl apply -f /tmp/channel4249877528/manifest.yaml --server-side --force-conflicts --field-manager=kops
I0730 02:17:33.163051    6443 apply.go:87] serviceaccount/cilium serverside-applied
serviceaccount/cilium-operator serverside-applied
serviceaccount/hubble-relay serverside-applied
configmap/cilium-config serverside-applied
configmap/hubble-relay-config serverside-applied
clusterrole.rbac.authorization.k8s.io/cilium serverside-applied
... skipping 5 lines ...
service/hubble-relay serverside-applied
daemonset.apps/cilium serverside-applied
deployment.apps/cilium-operator serverside-applied
certificate.cert-manager.io/hubble-server-certs serverside-applied
certificate.cert-manager.io/hubble-relay-client-certs serverside-applied
poddisruptionbudget.policy/cilium-operator serverside-applied
Error from server: failed to create typed patch object: .spec.template.spec.containers[name="hubble-relay"].ports: element 0: associative list with keys has an element that omits key field "protocol" (and doesn't have default value)

updating "networking.cilium.io": error applying update from "s3://k8s-kops-prow/e2e-414ec24bfc-41f44.test-cncf-aws.k8s.io/addons/networking.cilium.io/k8s-1.16-v1.11.yaml": failed to apply the manifest: error running kubectl: exit status 1
+ kops-finish
+ kubetest2 kops -v=2 --cloud-provider=aws --cluster-name=e2e-414ec24bfc-41f44.test-cncf-aws.k8s.io --kops-root=/home/prow/go/src/k8s.io/kops --admin-access= --env=KOPS_FEATURE_FLAGS=SpecOverrideFlag --kops-binary-path=/tmp/kops.ec4BNgIVm --down
I0730 02:17:33.192866    6490 featureflag.go:162] FeatureFlag "SpecOverrideFlag"=true
I0730 02:17:33.194097    6490 app.go:62] The files in RunDir shall not be part of Artifacts
I0730 02:17:33.194125    6490 app.go:63] pass rundir-in-artifacts flag True for RunDir to be part of Artifacts
I0730 02:17:33.194150    6490 app.go:65] RunDir for this run: "/home/prow/go/src/k8s.io/kops/_rundir/67584823-0fab-11ed-b6e4-d28c295d3c4e"
... skipping 258 lines ...