Result: FAILURE
Tests: 0 failed / 0 succeeded
Started: 2021-10-17 02:25
Elapsed: 46m12s
Revision: master

No Test Failures!


Error lines from build-log.txt

... skipping 176 lines ...
I1017 02:25:57.581322    4973 dumplogs.go:40] /tmp/kops.YogZYvZhs toolbox dump --name e2e-89d41a7532-e6156.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /etc/aws-ssh/aws-ssh-private --ssh-user ubuntu
I1017 02:25:57.596413    4982 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
I1017 02:25:57.596582    4982 featureflag.go:165] FeatureFlag "AlphaAllowGCE"=true
I1017 02:25:57.596607    4982 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true

Cluster.kops.k8s.io "e2e-89d41a7532-e6156.test-cncf-aws.k8s.io" not found
W1017 02:25:58.087492    4973 down.go:34] Dumping cluster logs at the start of Down() failed: exit status 1
I1017 02:25:58.087563    4973 down.go:48] /tmp/kops.YogZYvZhs delete cluster --name e2e-89d41a7532-e6156.test-cncf-aws.k8s.io --yes
I1017 02:25:58.101884    4992 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
I1017 02:25:58.101960    4992 featureflag.go:165] FeatureFlag "AlphaAllowGCE"=true
I1017 02:25:58.101965    4992 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true

error reading cluster configuration: Cluster.kops.k8s.io "e2e-89d41a7532-e6156.test-cncf-aws.k8s.io" not found
I1017 02:25:58.596696    4973 http.go:37] curl http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip
2021/10/17 02:25:58 failed to get external ip from metadata service: http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip returned 404
I1017 02:25:58.604138    4973 http.go:37] curl https://ip.jsb.workers.dev
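
The 404 above is the harness probing the GCE metadata server for an external IP and then falling back to a public echo service. A minimal sketch of that probe-then-fallback, using the two URLs from the log; the Metadata-Flavor header is an assumption, since the harness's exact request is not shown:

# Try the GCE metadata server first (it 404s off GCE or without an
# external access config), then fall back to the public IP echo service.
external_ip=$(curl -sf -H "Metadata-Flavor: Google" \
  "http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip" \
  || curl -sf "https://ip.jsb.workers.dev")
echo "external IP: ${external_ip}"
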
I1017 02:25:58.676641    4973 up.go:144] /tmp/kops.YogZYvZhs create cluster --name e2e-89d41a7532-e6156.test-cncf-aws.k8s.io --cloud aws --kubernetes-version v1.21.0 --ssh-public-key /etc/aws-ssh/aws-ssh-public --override cluster.spec.nodePortAccess=0.0.0.0/0 --yes --networking calico --admin-access 104.154.76.82/32 --master-count 1 --master-volume-size 48 --node-count 4 --node-volume-size 48 --zones us-east-1a --master-size c5.large
I1017 02:25:58.690837    5003 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
I1017 02:25:58.690909    5003 featureflag.go:165] FeatureFlag "AlphaAllowGCE"=true
I1017 02:25:58.690913    5003 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
I1017 02:25:58.739312    5003 create_cluster.go:728] Using SSH public key: /etc/aws-ssh/aws-ssh-public
... skipping 44 lines ...
I1017 02:26:26.563677    4973 up.go:181] /tmp/kops.YogZYvZhs validate cluster --name e2e-89d41a7532-e6156.test-cncf-aws.k8s.io --count 10 --wait 15m0s
I1017 02:26:26.579662    5023 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
I1017 02:26:26.579757    5023 featureflag.go:165] FeatureFlag "AlphaAllowGCE"=true
I1017 02:26:26.579762    5023 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
Validating cluster e2e-89d41a7532-e6156.test-cncf-aws.k8s.io

W1017 02:26:27.554276    5023 validate_cluster.go:173] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-east-1a	Master	c5.large	1	1	us-east-1a
nodes-us-east-1a	Node	t3.medium	4	4	us-east-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1017 02:26:37.589054    5023 validate_cluster.go:221] (will retry): cluster not yet healthy
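
Every retry that follows fails the same way: api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io either does not resolve or still carries the 203.0.113.123 placeholder that kops creates. A minimal sketch of checking the record directly while waiting; the name and placeholder address are taken from the validation output above:

# Check whether dns-controller has replaced the kops placeholder record.
api_record=$(dig +short A api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io)
if [[ -z "${api_record}" || "${api_record}" == "203.0.113.123" ]]; then
  echo "API DNS entry not updated yet"
else
  echo "API resolves to ${api_record}"
fi
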
... skipping 399 lines ...
W1017 02:30:48.491452    5023 validate_cluster.go:221] (will retry): cluster not yet healthy
W1017 02:30:58.521991    5023 validate_cluster.go:173] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-east-1a	Master	c5.large	1	1	us-east-1a
nodes-us-east-1a	Node	t3.medium	4	4	us-east-1a

NODE STATUS
... skipping 9 lines ...
Node	ip-172-20-60-35.ec2.internal				node "ip-172-20-60-35.ec2.internal" of role "node" is not ready
Pod	kube-system/calico-node-bjjt8				system-node-critical pod "calico-node-bjjt8" is pending
Pod	kube-system/coredns-5dc785954d-q6rq4			system-cluster-critical pod "coredns-5dc785954d-q6rq4" is pending
Pod	kube-system/coredns-autoscaler-84d4cfd89c-ttvpd		system-cluster-critical pod "coredns-autoscaler-84d4cfd89c-ttvpd" is pending
Pod	kube-system/kube-proxy-ip-172-20-60-35.ec2.internal	system-node-critical pod "kube-proxy-ip-172-20-60-35.ec2.internal" is pending

Validation Failed
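
The failure mode has changed: nodes are now registering, but system pods are still pending. A quick way to watch exactly the pods the validator lists, assuming kubectl is pointed at this cluster the way the harness's kubeconfig is:

# Show kube-system pods that have not reached Running; these are the
# same pods reported as pending in the validation output above.
kubectl get pods -n kube-system --field-selector=status.phase!=Running -o wide
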
W1017 02:31:09.914034    5023 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-east-1a	Master	c5.large	1	1	us-east-1a
nodes-us-east-1a	Node	t3.medium	4	4	us-east-1a

... skipping 17 lines ...
Pod	kube-system/calico-node-f6fgx				system-node-critical pod "calico-node-f6fgx" is pending
Pod	kube-system/coredns-5dc785954d-q6rq4			system-cluster-critical pod "coredns-5dc785954d-q6rq4" is pending
Pod	kube-system/coredns-autoscaler-84d4cfd89c-ttvpd		system-cluster-critical pod "coredns-autoscaler-84d4cfd89c-ttvpd" is pending
Pod	kube-system/kube-proxy-ip-172-20-47-239.ec2.internal	system-node-critical pod "kube-proxy-ip-172-20-47-239.ec2.internal" is pending
Pod	kube-system/kube-proxy-ip-172-20-60-226.ec2.internal	system-node-critical pod "kube-proxy-ip-172-20-60-226.ec2.internal" is pending

Validation Failed
W1017 02:31:21.221678    5023 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-east-1a	Master	c5.large	1	1	us-east-1a
nodes-us-east-1a	Node	t3.medium	4	4	us-east-1a

... skipping 14 lines ...
Pod	kube-system/calico-node-bjjt8			system-node-critical pod "calico-node-bjjt8" is not ready (calico-node)
Pod	kube-system/calico-node-bv965			system-node-critical pod "calico-node-bv965" is not ready (calico-node)
Pod	kube-system/calico-node-f6fgx			system-node-critical pod "calico-node-f6fgx" is pending
Pod	kube-system/coredns-5dc785954d-dxwqc		system-cluster-critical pod "coredns-5dc785954d-dxwqc" is pending
Pod	kube-system/coredns-autoscaler-84d4cfd89c-ttvpd	system-cluster-critical pod "coredns-autoscaler-84d4cfd89c-ttvpd" is pending

Validation Failed
W1017 02:31:32.388696    5023 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-east-1a	Master	c5.large	1	1	us-east-1a
nodes-us-east-1a	Node	t3.medium	4	4	us-east-1a

... skipping 6 lines ...
ip-172-20-60-35.ec2.internal	node	True

VALIDATION ERRORS
KIND	NAME				MESSAGE
Pod	kube-system/calico-node-bv965	system-node-critical pod "calico-node-bv965" is not ready (calico-node)

Validation Failed
W1017 02:31:44.449281    5023 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-east-1a	Master	c5.large	1	1	us-east-1a
nodes-us-east-1a	Node	t3.medium	4	4	us-east-1a

... skipping 799 lines ...
  	                    	        for cmd in "${commands[@]}"; do
  	                    	...
  	                    	            continue
  	                    	          fi
  	                    	+         if ! validate-hash "${file}" "${hash}"; then
  	                    	-         if [[ -n "${hash}" ]] && ! validate-hash "${file}" "${hash}"; then
  	                    	            echo "== Hash validation of ${url} failed. Retrying. =="
  	                    	            rm -f "${file}"
  	                    	          else
  	                    	-           if [[ -n "${hash}" ]]; then
  	                    	+           echo "== Downloaded ${url} (SHA256 = ${hash}) =="
  	                    	-             echo "== Downloaded ${url} (SHA1 = ${hash}) =="
  	                    	-           else
... skipping 301 lines ...
  	                    	        for cmd in "${commands[@]}"; do
  	                    	...
  	                    	            continue
  	                    	          fi
  	                    	+         if ! validate-hash "${file}" "${hash}"; then
  	                    	-         if [[ -n "${hash}" ]] && ! validate-hash "${file}" "${hash}"; then
  	                    	            echo "== Hash validation of ${url} failed. Retrying. =="
  	                    	            rm -f "${file}"
  	                    	          else
  	                    	-           if [[ -n "${hash}" ]]; then
  	                    	+           echo "== Downloaded ${url} (SHA256 = ${hash}) =="
  	                    	-             echo "== Downloaded ${url} (SHA1 = ${hash}) =="
  	                    	-           else
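
The diff above, captured twice in the dump, shows the node bootstrap script making hash validation unconditional and reporting SHA256 instead of SHA1. An illustrative sketch of what a validate-hash helper in that style could look like; the real script is not shown in full here, so this is only an assumption of its shape:

# Illustrative validate-hash: compare a file's SHA256 digest against the
# expected value, returning non-zero so the caller can re-download.
validate-hash() {
  local -r file="$1"
  local -r expected="$2"
  local actual
  actual=$(sha256sum "${file}" | awk '{print $1}')
  if [[ "${actual}" != "${expected}" ]]; then
    echo "== ${file} hash ${actual} does not match expected ${expected} =="
    return 1
  fi
}
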
... skipping 5756 lines ...
evicting pod kube-system/dns-controller-7cf9d66d6d-zpk4p
I1017 02:35:14.074242    5146 instancegroups.go:658] Waiting for 5s for pods to stabilize after draining.
I1017 02:35:19.074883    5146 instancegroups.go:417] deleting node "ip-172-20-43-3.ec2.internal" from kubernetes
I1017 02:35:19.106824    5146 instancegroups.go:591] Stopping instance "i-04296e852260c7748", node "ip-172-20-43-3.ec2.internal", in group "master-us-east-1a.masters.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io" (this may take a while).
I1017 02:35:19.259027    5146 instancegroups.go:435] waiting for 15s after terminating instance
I1017 02:35:34.259169    5146 instancegroups.go:470] Validating the cluster.
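
The drain/terminate/validate messages above come from a rolling update of the master instance group. The invocation itself falls inside the skipped lines; a hypothetical reconstruction based on these messages would be:

# Hypothetical reconstruction; the actual command line is elided above.
kops rolling-update cluster \
  --name e2e-89d41a7532-e6156.test-cncf-aws.k8s.io --yes
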
I1017 02:36:04.288421    5146 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 184.72.89.20:443: i/o timeout.
... skipping 21 lines ...
I1017 02:58:05.033676    5146 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 184.72.89.20:443: i/o timeout.
I1017 02:58:36.405264    5146 instancegroups.go:526] Cluster did not pass validation, will retry in "30s": node "ip-172-20-47-223.ec2.internal" of role "master" is not ready, node "ip-172-20-60-226.ec2.internal" of role "node" is not ready, node "ip-172-20-60-35.ec2.internal" of role "node" is not ready, node "ip-172-20-32-142.ec2.internal" of role "node" is not ready, node "ip-172-20-47-239.ec2.internal" of role "node" is not ready, system-node-critical pod "calico-node-2bfw4" is pending, system-node-critical pod "calico-node-q6r5p" is not ready (calico-node), system-node-critical pod "ebs-csi-node-74wkz" is pending, system-cluster-critical pod "etcd-manager-events-ip-172-20-47-223.ec2.internal" is pending, system-cluster-critical pod "etcd-manager-main-ip-172-20-47-223.ec2.internal" is pending, system-cluster-critical pod "kube-apiserver-ip-172-20-47-223.ec2.internal" is pending, system-cluster-critical pod "kube-controller-manager-ip-172-20-47-223.ec2.internal" is pending, system-node-critical pod "kube-proxy-ip-172-20-47-223.ec2.internal" is pending, system-cluster-critical pod "kube-scheduler-ip-172-20-47-223.ec2.internal" is pending.
... skipping 13 lines ...
I1017 03:05:52.127776    5146 instancegroups.go:523] Cluster did not pass validation within deadline: node "ip-172-20-47-223.ec2.internal" of role "master" is not ready, node "ip-172-20-60-226.ec2.internal" of role "node" is not ready, node "ip-172-20-60-35.ec2.internal" of role "node" is not ready, node "ip-172-20-32-142.ec2.internal" of role "node" is not ready, node "ip-172-20-47-239.ec2.internal" of role "node" is not ready, system-node-critical pod "calico-node-2bfw4" is pending, system-node-critical pod "calico-node-q6r5p" is not ready (calico-node), system-node-critical pod "ebs-csi-node-74wkz" is pending, system-cluster-critical pod "etcd-manager-events-ip-172-20-47-223.ec2.internal" is pending, system-cluster-critical pod "etcd-manager-main-ip-172-20-47-223.ec2.internal" is pending, system-cluster-critical pod "kube-apiserver-ip-172-20-47-223.ec2.internal" is pending, system-cluster-critical pod "kube-controller-manager-ip-172-20-47-223.ec2.internal" is pending, system-node-critical pod "kube-proxy-ip-172-20-47-223.ec2.internal" is pending, system-cluster-critical pod "kube-scheduler-ip-172-20-47-223.ec2.internal" is pending.
E1017 03:05:52.127816    5146 instancegroups.go:475] Cluster did not validate within 30m0s
Error: master not healthy after update, stopping rolling-update: "error validating cluster after terminating instance: cluster did not validate within a duration of \"30m0s\""
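
The rolling update gave up after its 30m0s validation deadline. The same check the harness ran can be repeated by hand with the flags visible earlier in this log (the harness used a temp-dir kops binary; a locally installed kops stands in for it here):

# Re-run the validation the harness used, flags copied from the
# "validate cluster" invocation logged at 02:26:26.
kops validate cluster \
  --name e2e-89d41a7532-e6156.test-cncf-aws.k8s.io \
  --count 10 --wait 15m0s
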
+ kops-finish
+ kubetest2 kops -v=2 --cloud-provider=aws --cluster-name=e2e-89d41a7532-e6156.test-cncf-aws.k8s.io --kops-root=/home/prow/go/src/k8s.io/kops --admin-access= --env=KOPS_FEATURE_FLAGS=SpecOverrideFlag --kops-binary-path=/tmp/kops.oAO2pTzGD --down
I1017 03:05:52.155785    5164 app.go:59] RunDir for this run: "/logs/artifacts/60db1499-2ef1-11ec-8a05-6acfde499713"
I1017 03:05:52.155940    5164 app.go:90] ID for this run: "60db1499-2ef1-11ec-8a05-6acfde499713"
I1017 03:05:52.155967    5164 dumplogs.go:40] /tmp/kops.oAO2pTzGD toolbox dump --name e2e-89d41a7532-e6156.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /etc/aws-ssh/aws-ssh-private --ssh-user ubuntu
I1017 03:05:52.173535    5173 featureflag.go:166] FeatureFlag "SpecOverrideFlag"=true
... skipping 1051 lines ...
I1017 03:06:11.248719    5164 dumplogs.go:72] /tmp/kops.oAO2pTzGD get cluster --name e2e-89d41a7532-e6156.test-cncf-aws.k8s.io -o yaml
I1017 03:06:11.782208    5164 dumplogs.go:72] /tmp/kops.oAO2pTzGD get instancegroups --name e2e-89d41a7532-e6156.test-cncf-aws.k8s.io -o yaml
I1017 03:06:12.641854    5164 dumplogs.go:91] kubectl cluster-info dump --all-namespaces -o yaml --output-directory /logs/artifacts/cluster-info
I1017 03:07:12.934784    5164 dumplogs.go:114] /tmp/kops.oAO2pTzGD toolbox dump --name e2e-89d41a7532-e6156.test-cncf-aws.k8s.io --private-key /etc/aws-ssh/aws-ssh-private --ssh-user ubuntu -o yaml
I1017 03:07:19.395750    5164 dumplogs.go:143] ssh -i /etc/aws-ssh/aws-ssh-private -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null ubuntu@54.227.83.209 -- kubectl cluster-info dump --all-namespaces -o yaml --output-directory /tmp/cluster-info
Warning: Permanently added '54.227.83.209' (ECDSA) to the list of known hosts.
Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get events)
W1017 03:08:20.284203    5164 down.go:34] Dumping cluster logs at the start of Down() failed: exit status 1
I1017 03:08:20.284321    5164 down.go:48] /tmp/kops.oAO2pTzGD delete cluster --name e2e-89d41a7532-e6156.test-cncf-aws.k8s.io --yes
I1017 03:08:20.301345    5224 featureflag.go:166] FeatureFlag "SpecOverrideFlag"=true
I1017 03:08:20.301436    5224 featureflag.go:166] FeatureFlag "AlphaAllowGCE"=true
I1017 03:08:20.301441    5224 featureflag.go:166] FeatureFlag "SpecOverrideFlag"=true
TYPE			NAME													ID
autoscaling-config	master-us-east-1a.masters.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io					lt-09e41f3ee53b6a867
... skipping 400 lines ...