Result: FAILURE
Tests: 0 failed / 0 succeeded
Started: 2021-10-15 16:25
Elapsed: 47m25s
Revision: master

No Test Failures!


Error lines from build-log.txt

... skipping 176 lines ...
I1015 16:27:03.655913    5090 dumplogs.go:40] /tmp/kops.EYu8SsTPZ toolbox dump --name e2e-89d41a7532-e6156.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /etc/aws-ssh/aws-ssh-private --ssh-user ubuntu
I1015 16:27:03.689109    5100 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
I1015 16:27:03.689217    5100 featureflag.go:165] FeatureFlag "AlphaAllowGCE"=true
I1015 16:27:03.689222    5100 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true

Cluster.kops.k8s.io "e2e-89d41a7532-e6156.test-cncf-aws.k8s.io" not found
W1015 16:27:04.238060    5090 down.go:34] Dumping cluster logs at the start of Down() failed: exit status 1
I1015 16:27:04.238176    5090 down.go:48] /tmp/kops.EYu8SsTPZ delete cluster --name e2e-89d41a7532-e6156.test-cncf-aws.k8s.io --yes
I1015 16:27:04.272204    5111 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
I1015 16:27:04.274046    5111 featureflag.go:165] FeatureFlag "AlphaAllowGCE"=true
I1015 16:27:04.274090    5111 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true

error reading cluster configuration: Cluster.kops.k8s.io "e2e-89d41a7532-e6156.test-cncf-aws.k8s.io" not found
I1015 16:27:04.814003    5090 http.go:37] curl http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip
2021/10/15 16:27:04 failed to get external ip from metadata service: http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip returned 404
I1015 16:27:04.825478    5090 http.go:37] curl https://ip.jsb.workers.dev
I1015 16:27:04.953466    5090 up.go:144] /tmp/kops.EYu8SsTPZ create cluster --name e2e-89d41a7532-e6156.test-cncf-aws.k8s.io --cloud aws --kubernetes-version v1.21.0 --ssh-public-key /etc/aws-ssh/aws-ssh-public --override cluster.spec.nodePortAccess=0.0.0.0/0 --yes --networking calico --admin-access 35.226.125.6/32 --master-count 1 --master-volume-size 48 --node-count 4 --node-volume-size 48 --zones us-east-2a --master-size c5.large
I1015 16:27:04.987781    5122 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
I1015 16:27:04.987891    5122 featureflag.go:165] FeatureFlag "AlphaAllowGCE"=true
I1015 16:27:04.987899    5122 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
I1015 16:27:05.053032    5122 create_cluster.go:728] Using SSH public key: /etc/aws-ssh/aws-ssh-public
... skipping 44 lines ...
I1015 16:27:29.679716    5090 up.go:181] /tmp/kops.EYu8SsTPZ validate cluster --name e2e-89d41a7532-e6156.test-cncf-aws.k8s.io --count 10 --wait 15m0s
I1015 16:27:29.724958    5141 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
I1015 16:27:29.725083    5141 featureflag.go:165] FeatureFlag "AlphaAllowGCE"=true
I1015 16:27:29.725088    5141 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
Validating cluster e2e-89d41a7532-e6156.test-cncf-aws.k8s.io

W1015 16:27:30.778554    5141 validate_cluster.go:173] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
W1015 16:27:40.824925    5141 validate_cluster.go:173] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-east-2a	Master	c5.large	1	1	us-east-2a
nodes-us-east-2a	Node	t3.medium	4	4	us-east-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
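The message above describes the kops DNS bootstrap: until the dns-controller deployment replaces the placeholder record (203.0.113.123) with the master's real IP, API validation cannot succeed. A minimal diagnostic sketch of that check, assuming the cluster name from this log and using a sample resolved value rather than a live `dig` lookup (which one would run in a real session):

```shell
# Check whether a resolved API IP is still the kops placeholder, i.e.
# dns-controller has not yet updated the cluster's API DNS record.
PLACEHOLDER="203.0.113.123"   # placeholder address reported by kops above

is_placeholder() {
  # true if the resolved IP is empty (NXDOMAIN) or still the placeholder
  [ -z "$1" ] || [ "$1" = "$PLACEHOLDER" ]
}

# In a live session one would resolve the record, e.g.:
#   resolved="$(dig +short api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io | head -n1)"
resolved="203.0.113.123"      # sample value matching the failure in this log

if is_placeholder "$resolved"; then
  echo "API DNS not yet updated by dns-controller (got: ${resolved:-NXDOMAIN})"
else
  echo "API DNS updated: $resolved"
fi
```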
W1015 16:27:50.861515    5141 validate_cluster.go:221] (will retry): cluster not yet healthy
... skipping 434 lines ...
W1015 16:32:51.982934    5141 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-east-2a	Master	c5.large	1	1	us-east-2a
nodes-us-east-2a	Node	t3.medium	4	4	us-east-2a

... skipping 12 lines ...
Pod	kube-system/calico-node-pgdqc						system-node-critical pod "calico-node-pgdqc" is pending
Pod	kube-system/coredns-5dc785954d-5s44l					system-cluster-critical pod "coredns-5dc785954d-5s44l" is pending
Pod	kube-system/coredns-autoscaler-84d4cfd89c-xvzpz				system-cluster-critical pod "coredns-autoscaler-84d4cfd89c-xvzpz" is pending
Pod	kube-system/kube-proxy-ip-172-20-40-16.us-east-2.compute.internal	system-node-critical pod "kube-proxy-ip-172-20-40-16.us-east-2.compute.internal" is pending
Pod	kube-system/kube-proxy-ip-172-20-56-161.us-east-2.compute.internal	system-node-critical pod "kube-proxy-ip-172-20-56-161.us-east-2.compute.internal" is pending

Validation Failed
W1015 16:33:03.133447    5141 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-east-2a	Master	c5.large	1	1	us-east-2a
nodes-us-east-2a	Node	t3.medium	4	4	us-east-2a

... skipping 12 lines ...
Pod	kube-system/calico-node-pgdqc						system-node-critical pod "calico-node-pgdqc" is not ready (calico-node)
Pod	kube-system/calico-node-qkb5t						system-node-critical pod "calico-node-qkb5t" is pending
Pod	kube-system/coredns-5dc785954d-5s44l					system-cluster-critical pod "coredns-5dc785954d-5s44l" is pending
Pod	kube-system/coredns-autoscaler-84d4cfd89c-xvzpz				system-cluster-critical pod "coredns-autoscaler-84d4cfd89c-xvzpz" is pending
Pod	kube-system/kube-proxy-ip-172-20-34-213.us-east-2.compute.internal	system-node-critical pod "kube-proxy-ip-172-20-34-213.us-east-2.compute.internal" is pending

Validation Failed
W1015 16:33:13.918137    5141 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-east-2a	Master	c5.large	1	1	us-east-2a
nodes-us-east-2a	Node	t3.medium	4	4	us-east-2a

... skipping 10 lines ...
Pod	kube-system/calico-node-7vgks			system-node-critical pod "calico-node-7vgks" is not ready (calico-node)
Pod	kube-system/calico-node-pgdqc			system-node-critical pod "calico-node-pgdqc" is not ready (calico-node)
Pod	kube-system/calico-node-qkb5t			system-node-critical pod "calico-node-qkb5t" is not ready (calico-node)
Pod	kube-system/coredns-5dc785954d-5s44l		system-cluster-critical pod "coredns-5dc785954d-5s44l" is pending
Pod	kube-system/coredns-autoscaler-84d4cfd89c-xvzpz	system-cluster-critical pod "coredns-autoscaler-84d4cfd89c-xvzpz" is pending

Validation Failed
W1015 16:33:24.778129    5141 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-east-2a	Master	c5.large	1	1	us-east-2a
nodes-us-east-2a	Node	t3.medium	4	4	us-east-2a

... skipping 12 lines ...
Pod	kube-system/calico-node-kr8sx			system-node-critical pod "calico-node-kr8sx" is pending
Pod	kube-system/calico-node-pgdqc			system-node-critical pod "calico-node-pgdqc" is not ready (calico-node)
Pod	kube-system/calico-node-qkb5t			system-node-critical pod "calico-node-qkb5t" is not ready (calico-node)
Pod	kube-system/coredns-5dc785954d-5s44l		system-cluster-critical pod "coredns-5dc785954d-5s44l" is pending
Pod	kube-system/coredns-autoscaler-84d4cfd89c-xvzpz	system-cluster-critical pod "coredns-autoscaler-84d4cfd89c-xvzpz" is pending

Validation Failed
W1015 16:33:35.580622    5141 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-east-2a	Master	c5.large	1	1	us-east-2a
nodes-us-east-2a	Node	t3.medium	4	4	us-east-2a

... skipping 11 lines ...
Pod	kube-system/calico-node-kr8sx			system-node-critical pod "calico-node-kr8sx" is not ready (calico-node)
Pod	kube-system/calico-node-pgdqc			system-node-critical pod "calico-node-pgdqc" is not ready (calico-node)
Pod	kube-system/calico-node-qkb5t			system-node-critical pod "calico-node-qkb5t" is not ready (calico-node)
Pod	kube-system/coredns-5dc785954d-5s44l		system-cluster-critical pod "coredns-5dc785954d-5s44l" is pending
Pod	kube-system/coredns-autoscaler-84d4cfd89c-xvzpz	system-cluster-critical pod "coredns-autoscaler-84d4cfd89c-xvzpz" is pending

Validation Failed
W1015 16:33:46.339096    5141 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-east-2a	Master	c5.large	1	1	us-east-2a
nodes-us-east-2a	Node	t3.medium	4	4	us-east-2a

... skipping 7 lines ...

VALIDATION ERRORS
KIND	NAME					MESSAGE
Pod	kube-system/calico-node-pgdqc		system-node-critical pod "calico-node-pgdqc" is not ready (calico-node)
Pod	kube-system/coredns-5dc785954d-5s44l	system-cluster-critical pod "coredns-5dc785954d-5s44l" is pending

Validation Failed
W1015 16:33:57.176476    5141 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-east-2a	Master	c5.large	1	1	us-east-2a
nodes-us-east-2a	Node	t3.medium	4	4	us-east-2a

... skipping 806 lines ...
  	                    	        for cmd in "${commands[@]}"; do
  	                    	...
  	                    	            continue
  	                    	          fi
  	                    	+         if ! validate-hash "${file}" "${hash}"; then
  	                    	-         if [[ -n "${hash}" ]] && ! validate-hash "${file}" "${hash}"; then
  	                    	            echo "== Hash validation of ${url} failed. Retrying. =="
  	                    	            rm -f "${file}"
  	                    	          else
  	                    	-           if [[ -n "${hash}" ]]; then
  	                    	+           echo "== Downloaded ${url} (SHA256 = ${hash}) =="
  	                    	-             echo "== Downloaded ${url} (SHA1 = ${hash}) =="
  	                    	-           else
... skipping 301 lines ...
  	                    	        for cmd in "${commands[@]}"; do
  	                    	...
  	                    	            continue
  	                    	          fi
  	                    	+         if ! validate-hash "${file}" "${hash}"; then
  	                    	-         if [[ -n "${hash}" ]] && ! validate-hash "${file}" "${hash}"; then
  	                    	            echo "== Hash validation of ${url} failed. Retrying. =="
  	                    	            rm -f "${file}"
  	                    	          else
  	                    	-           if [[ -n "${hash}" ]]; then
  	                    	+           echo "== Downloaded ${url} (SHA256 = ${hash}) =="
  	                    	-             echo "== Downloaded ${url} (SHA1 = ${hash}) =="
  	                    	-           else
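The diff above (emitted once per nodeup script in the log) makes hash validation unconditional rather than gated on `[[ -n "${hash}" ]]`, and reports SHA256 instead of SHA1 on success. A minimal sketch of the resulting download-and-verify retry loop, assuming a `validate-hash` helper that compares a file's SHA256 digest against the expected value (the function and variable names here are illustrative, not the exact kops script):

```shell
#!/usr/bin/env bash
# Sketch of a download loop that always validates a SHA256 hash,
# mirroring the post-change ("+") branches in the diff above.

# Compare a file's SHA256 digest against an expected hex digest.
validate-hash() {
  local file="$1" expected="$2" actual
  actual=$(sha256sum "${file}" | awk '{print $1}')
  [[ "${actual}" == "${expected}" ]]
}

# Download a URL to a file, retrying when the download or the
# hash check fails; delete the corrupt file before retrying.
download-and-verify() {
  local url="$1" hash="$2" file="$3" attempt
  for attempt in 1 2 3; do
    if ! curl -fsSL -o "${file}" "${url}"; then
      echo "== Download of ${url} failed (attempt ${attempt}). Retrying. =="
      continue
    fi
    if ! validate-hash "${file}" "${hash}"; then
      echo "== Hash validation of ${url} failed. Retrying. =="
      rm -f "${file}"
    else
      echo "== Downloaded ${url} (SHA256 = ${hash}) =="
      return 0
    fi
  done
  return 1
}
```

The design point of the change is that a missing hash no longer silently skips verification; every downloaded artifact must match its expected digest or the download is discarded and retried.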
... skipping 6190 lines ...
evicting pod kube-system/dns-controller-7cf9d66d6d-bxx66
I1015 16:37:53.509783    5264 instancegroups.go:658] Waiting for 5s for pods to stabilize after draining.
I1015 16:37:58.511976    5264 instancegroups.go:417] deleting node "ip-172-20-56-151.us-east-2.compute.internal" from kubernetes
I1015 16:37:58.534019    5264 instancegroups.go:591] Stopping instance "i-0b24dacd789b35a24", node "ip-172-20-56-151.us-east-2.compute.internal", in group "master-us-east-2a.masters.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io" (this may take a while).
I1015 16:37:58.657130    5264 instancegroups.go:435] waiting for 15s after terminating instance
I1015 16:38:13.657396    5264 instancegroups.go:470] Validating the cluster.
I1015 16:38:43.695244    5264 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 18.216.84.173:443: i/o timeout.
I1015 16:39:43.732417    5264 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 18.216.84.173:443: i/o timeout.
I1015 16:40:43.776939    5264 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 18.216.84.173:443: i/o timeout.
I1015 16:41:43.828680    5264 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 18.216.84.173:443: i/o timeout.
I1015 16:42:43.865547    5264 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 18.216.84.173:443: i/o timeout.
I1015 16:43:43.899600    5264 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 18.216.84.173:443: i/o timeout.
I1015 16:44:43.936510    5264 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 18.216.84.173:443: i/o timeout.
I1015 16:45:43.988351    5264 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 18.216.84.173:443: i/o timeout.
I1015 16:46:44.058769    5264 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 18.216.84.173:443: i/o timeout.
I1015 16:47:44.108627    5264 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 18.216.84.173:443: i/o timeout.
I1015 16:48:44.145228    5264 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 18.216.84.173:443: i/o timeout.
I1015 16:49:44.194123    5264 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 18.216.84.173:443: i/o timeout.
I1015 16:50:44.236762    5264 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 18.216.84.173:443: i/o timeout.
I1015 16:51:44.285919    5264 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 18.216.84.173:443: i/o timeout.
I1015 16:52:44.318442    5264 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 18.216.84.173:443: i/o timeout.
I1015 16:53:44.370172    5264 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 18.216.84.173:443: i/o timeout.
I1015 16:54:44.411994    5264 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 18.216.84.173:443: i/o timeout.
I1015 16:55:44.456337    5264 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 18.216.84.173:443: i/o timeout.
I1015 16:56:44.491748    5264 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 18.216.84.173:443: i/o timeout.
I1015 16:57:44.531417    5264 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 18.216.84.173:443: i/o timeout.
I1015 16:58:44.566689    5264 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 18.216.84.173:443: i/o timeout.
I1015 16:59:44.622020    5264 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 18.216.84.173:443: i/o timeout.
I1015 17:00:44.674964    5264 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 18.216.84.173:443: i/o timeout.
I1015 17:01:44.711387    5264 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 18.216.84.173:443: i/o timeout.
I1015 17:02:44.756687    5264 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 18.216.84.173:443: i/o timeout.
I1015 17:03:44.812970    5264 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 18.216.84.173:443: i/o timeout.
I1015 17:04:44.865264    5264 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 18.216.84.173:443: i/o timeout.
I1015 17:05:16.013544    5264 instancegroups.go:526] Cluster did not pass validation, will retry in "30s": node "ip-172-20-40-16.us-east-2.compute.internal" of role "node" is not ready, node "ip-172-20-34-213.us-east-2.compute.internal" of role "node" is not ready, node "ip-172-20-56-161.us-east-2.compute.internal" of role "node" is not ready, node "ip-172-20-42-56.us-east-2.compute.internal" of role "node" is not ready, node "ip-172-20-48-76.us-east-2.compute.internal" of role "master" is not ready, system-node-critical pod "calico-node-8rv2n" is pending, system-node-critical pod "calico-node-znjtf" is not ready (calico-node), system-node-critical pod "ebs-csi-node-hrrbs" is pending, system-cluster-critical pod "etcd-manager-events-ip-172-20-48-76.us-east-2.compute.internal" is pending, system-cluster-critical pod "etcd-manager-main-ip-172-20-48-76.us-east-2.compute.internal" is pending, system-cluster-critical pod "kube-apiserver-ip-172-20-48-76.us-east-2.compute.internal" is pending, system-node-critical pod "kube-proxy-ip-172-20-48-76.us-east-2.compute.internal" is pending, system-cluster-critical pod "kube-scheduler-ip-172-20-48-76.us-east-2.compute.internal" is pending.
I1015 17:05:46.805708    5264 instancegroups.go:526] Cluster did not pass validation, will retry in "30s": node "ip-172-20-48-76.us-east-2.compute.internal" of role "master" is not ready, node "ip-172-20-40-16.us-east-2.compute.internal" of role "node" is not ready, node "ip-172-20-34-213.us-east-2.compute.internal" of role "node" is not ready, node "ip-172-20-56-161.us-east-2.compute.internal" of role "node" is not ready, node "ip-172-20-42-56.us-east-2.compute.internal" of role "node" is not ready, system-node-critical pod "calico-node-8rv2n" is pending, system-node-critical pod "calico-node-znjtf" is not ready (calico-node), system-node-critical pod "ebs-csi-node-hrrbs" is pending, system-cluster-critical pod "etcd-manager-events-ip-172-20-48-76.us-east-2.compute.internal" is pending, system-cluster-critical pod "etcd-manager-main-ip-172-20-48-76.us-east-2.compute.internal" is pending, system-cluster-critical pod "kube-apiserver-ip-172-20-48-76.us-east-2.compute.internal" is pending, system-node-critical pod "kube-proxy-ip-172-20-48-76.us-east-2.compute.internal" is pending, system-cluster-critical pod "kube-scheduler-ip-172-20-48-76.us-east-2.compute.internal" is pending.
I1015 17:06:17.589945    5264 instancegroups.go:526] Cluster did not pass validation, will retry in "30s": node "ip-172-20-48-76.us-east-2.compute.internal" of role "master" is not ready, node "ip-172-20-40-16.us-east-2.compute.internal" of role "node" is not ready, node "ip-172-20-34-213.us-east-2.compute.internal" of role "node" is not ready, node "ip-172-20-56-161.us-east-2.compute.internal" of role "node" is not ready, node "ip-172-20-42-56.us-east-2.compute.internal" of role "node" is not ready, system-node-critical pod "calico-node-8rv2n" is pending, system-node-critical pod "calico-node-znjtf" is not ready (calico-node), system-node-critical pod "ebs-csi-node-hrrbs" is pending, system-cluster-critical pod "etcd-manager-events-ip-172-20-48-76.us-east-2.compute.internal" is pending, system-cluster-critical pod "etcd-manager-main-ip-172-20-48-76.us-east-2.compute.internal" is pending, system-cluster-critical pod "kube-apiserver-ip-172-20-48-76.us-east-2.compute.internal" is pending, system-node-critical pod "kube-proxy-ip-172-20-48-76.us-east-2.compute.internal" is pending, system-cluster-critical pod "kube-scheduler-ip-172-20-48-76.us-east-2.compute.internal" is pending.
I1015 17:06:48.384842    5264 instancegroups.go:526] Cluster did not pass validation, will retry in "30s": node "ip-172-20-40-16.us-east-2.compute.internal" of role "node" is not ready, node "ip-172-20-34-213.us-east-2.compute.internal" of role "node" is not ready, node "ip-172-20-56-161.us-east-2.compute.internal" of role "node" is not ready, node "ip-172-20-42-56.us-east-2.compute.internal" of role "node" is not ready, node "ip-172-20-48-76.us-east-2.compute.internal" of role "master" is not ready, system-node-critical pod "calico-node-8rv2n" is pending, system-node-critical pod "calico-node-znjtf" is not ready (calico-node), system-node-critical pod "ebs-csi-node-hrrbs" is pending, system-cluster-critical pod "etcd-manager-events-ip-172-20-48-76.us-east-2.compute.internal" is pending, system-cluster-critical pod "etcd-manager-main-ip-172-20-48-76.us-east-2.compute.internal" is pending, system-cluster-critical pod "kube-apiserver-ip-172-20-48-76.us-east-2.compute.internal" is pending, system-node-critical pod "kube-proxy-ip-172-20-48-76.us-east-2.compute.internal" is pending, system-cluster-critical pod "kube-scheduler-ip-172-20-48-76.us-east-2.compute.internal" is pending.
I1015 17:07:19.237432    5264 instancegroups.go:526] Cluster did not pass validation, will retry in "30s": node "ip-172-20-48-76.us-east-2.compute.internal" of role "master" is not ready, node "ip-172-20-40-16.us-east-2.compute.internal" of role "node" is not ready, node "ip-172-20-34-213.us-east-2.compute.internal" of role "node" is not ready, node "ip-172-20-56-161.us-east-2.compute.internal" of role "node" is not ready, node "ip-172-20-42-56.us-east-2.compute.internal" of role "node" is not ready, system-node-critical pod "calico-node-8rv2n" is pending, system-node-critical pod "calico-node-znjtf" is not ready (calico-node), system-node-critical pod "ebs-csi-node-hrrbs" is pending, system-cluster-critical pod "etcd-manager-events-ip-172-20-48-76.us-east-2.compute.internal" is pending, system-cluster-critical pod "etcd-manager-main-ip-172-20-48-76.us-east-2.compute.internal" is pending, system-cluster-critical pod "kube-apiserver-ip-172-20-48-76.us-east-2.compute.internal" is pending, system-node-critical pod "kube-proxy-ip-172-20-48-76.us-east-2.compute.internal" is pending, system-cluster-critical pod "kube-scheduler-ip-172-20-48-76.us-east-2.compute.internal" is pending.
I1015 17:07:50.057129    5264 instancegroups.go:526] Cluster did not pass validation, will retry in "30s": node "ip-172-20-40-16.us-east-2.compute.internal" of role "node" is not ready, node "ip-172-20-34-213.us-east-2.compute.internal" of role "node" is not ready, node "ip-172-20-56-161.us-east-2.compute.internal" of role "node" is not ready, node "ip-172-20-42-56.us-east-2.compute.internal" of role "node" is not ready, node "ip-172-20-48-76.us-east-2.compute.internal" of role "master" is not ready, system-node-critical pod "calico-node-8rv2n" is pending, system-node-critical pod "calico-node-znjtf" is not ready (calico-node), system-node-critical pod "ebs-csi-node-hrrbs" is pending, system-cluster-critical pod "etcd-manager-events-ip-172-20-48-76.us-east-2.compute.internal" is pending, system-cluster-critical pod "etcd-manager-main-ip-172-20-48-76.us-east-2.compute.internal" is pending, system-cluster-critical pod "kube-apiserver-ip-172-20-48-76.us-east-2.compute.internal" is pending, system-node-critical pod "kube-proxy-ip-172-20-48-76.us-east-2.compute.internal" is pending, system-cluster-critical pod "kube-scheduler-ip-172-20-48-76.us-east-2.compute.internal" is pending.
I1015 17:08:20.953353    5264 instancegroups.go:523] Cluster did not pass validation within deadline: node "ip-172-20-48-76.us-east-2.compute.internal" of role "master" is not ready, node "ip-172-20-40-16.us-east-2.compute.internal" of role "node" is not ready, node "ip-172-20-34-213.us-east-2.compute.internal" of role "node" is not ready, node "ip-172-20-56-161.us-east-2.compute.internal" of role "node" is not ready, node "ip-172-20-42-56.us-east-2.compute.internal" of role "node" is not ready, system-node-critical pod "calico-node-8rv2n" is pending, system-node-critical pod "calico-node-znjtf" is not ready (calico-node), system-node-critical pod "ebs-csi-node-hrrbs" is pending, system-cluster-critical pod "etcd-manager-events-ip-172-20-48-76.us-east-2.compute.internal" is pending, system-cluster-critical pod "etcd-manager-main-ip-172-20-48-76.us-east-2.compute.internal" is pending, system-cluster-critical pod "kube-apiserver-ip-172-20-48-76.us-east-2.compute.internal" is pending, system-node-critical pod "kube-proxy-ip-172-20-48-76.us-east-2.compute.internal" is pending, system-cluster-critical pod "kube-scheduler-ip-172-20-48-76.us-east-2.compute.internal" is pending.
E1015 17:08:20.953400    5264 instancegroups.go:475] Cluster did not validate within 30m0s
Error: master not healthy after update, stopping rolling-update: "error validating cluster after terminating instance: cluster did not validate within a duration of \"30m0s\""
+ kops-finish
+ kubetest2 kops -v=2 --cloud-provider=aws --cluster-name=e2e-89d41a7532-e6156.test-cncf-aws.k8s.io --kops-root=/home/prow/go/src/k8s.io/kops --admin-access= --env=KOPS_FEATURE_FLAGS=SpecOverrideFlag --kops-binary-path=/tmp/kops.xiNfmzPQP --down
I1015 17:08:20.986707    5282 app.go:59] RunDir for this run: "/logs/artifacts/6bd8e8d4-2dd4-11ec-a9c0-767ea6e1a48e"
I1015 17:08:20.986877    5282 app.go:90] ID for this run: "6bd8e8d4-2dd4-11ec-a9c0-767ea6e1a48e"
I1015 17:08:20.986901    5282 dumplogs.go:40] /tmp/kops.xiNfmzPQP toolbox dump --name e2e-89d41a7532-e6156.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /etc/aws-ssh/aws-ssh-private --ssh-user ubuntu
I1015 17:08:21.007967    5290 featureflag.go:166] FeatureFlag "SpecOverrideFlag"=true
... skipping 1472 lines ...