Result: FAILURE
Tests: 0 failed / 0 succeeded
Started: 2021-10-15 20:24
Elapsed: 46m52s
Revision: master

No Test Failures!


Error lines from build-log.txt

... skipping 176 lines ...
I1015 20:25:24.054881    4971 dumplogs.go:40] /tmp/kops.6zkGHjsv0 toolbox dump --name e2e-89d41a7532-e6156.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /etc/aws-ssh/aws-ssh-private --ssh-user ubuntu
I1015 20:25:24.071178    4982 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
I1015 20:25:24.071251    4982 featureflag.go:165] FeatureFlag "AlphaAllowGCE"=true
I1015 20:25:24.071256    4982 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true

Cluster.kops.k8s.io "e2e-89d41a7532-e6156.test-cncf-aws.k8s.io" not found
W1015 20:25:24.557049    4971 down.go:34] Dumping cluster logs at the start of Down() failed: exit status 1
I1015 20:25:24.557116    4971 down.go:48] /tmp/kops.6zkGHjsv0 delete cluster --name e2e-89d41a7532-e6156.test-cncf-aws.k8s.io --yes
I1015 20:25:24.570675    4992 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
I1015 20:25:24.570813    4992 featureflag.go:165] FeatureFlag "AlphaAllowGCE"=true
I1015 20:25:24.570838    4992 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true

error reading cluster configuration: Cluster.kops.k8s.io "e2e-89d41a7532-e6156.test-cncf-aws.k8s.io" not found
I1015 20:25:25.050654    4971 http.go:37] curl http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip
2021/10/15 20:25:25 failed to get external ip from metadata service: http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip returned 404
I1015 20:25:25.058404    4971 http.go:37] curl https://ip.jsb.workers.dev
I1015 20:25:25.164740    4971 up.go:144] /tmp/kops.6zkGHjsv0 create cluster --name e2e-89d41a7532-e6156.test-cncf-aws.k8s.io --cloud aws --kubernetes-version v1.21.0 --ssh-public-key /etc/aws-ssh/aws-ssh-public --override cluster.spec.nodePortAccess=0.0.0.0/0 --yes --networking calico --admin-access 34.134.108.92/32 --master-count 1 --master-volume-size 48 --node-count 4 --node-volume-size 48 --zones us-west-1a --master-size c5.large
I1015 20:25:25.179262    5002 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
I1015 20:25:25.179324    5002 featureflag.go:165] FeatureFlag "AlphaAllowGCE"=true
I1015 20:25:25.179327    5002 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
I1015 20:25:25.221336    5002 create_cluster.go:728] Using SSH public key: /etc/aws-ssh/aws-ssh-public
... skipping 44 lines ...
I1015 20:25:50.638256    4971 up.go:181] /tmp/kops.6zkGHjsv0 validate cluster --name e2e-89d41a7532-e6156.test-cncf-aws.k8s.io --count 10 --wait 15m0s
I1015 20:25:50.651933    5023 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
I1015 20:25:50.651997    5023 featureflag.go:165] FeatureFlag "AlphaAllowGCE"=true
I1015 20:25:50.652002    5023 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
Validating cluster e2e-89d41a7532-e6156.test-cncf-aws.k8s.io

W1015 20:25:51.711996    5023 validate_cluster.go:173] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-1a	Master	c5.large	1	1	us-west-1a
nodes-us-west-1a	Node	t3.medium	4	4	us-west-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
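The failure above is kops waiting for dns-controller to replace the 203.0.113.123 placeholder (or, as in the "no such host" errors, to publish the record at all). A minimal diagnostic sketch, assuming `dig` is available; `check_api_dns` and `api_ip` are illustrative helpers, not part of kops:

```shell
# kops publishes 203.0.113.123 as a placeholder until dns-controller
# rewrites the api.<cluster-name> record with a real master IP.
PLACEHOLDER_IP="203.0.113.123"

api_ip() {
  # first A record for the cluster's API name, e.g. api.<cluster-name>
  dig +short "api.${1}" | head -n 1
}

check_api_dns() {
  ip=$(api_ip "$1")
  if [ -z "${ip}" ]; then
    echo "NXDOMAIN"     # record not published yet: the "no such host" errors above
  elif [ "${ip}" = "${PLACEHOLDER_IP}" ]; then
    echo "PLACEHOLDER"  # dns-controller has not rewritten the record yet
  else
    echo "UPDATED"      # record points at a real master IP
  fi
}
```

Once this reports UPDATED, `kops validate cluster` can typically proceed past the dns/apiserver error.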
W1015 20:26:01.744558    5023 validate_cluster.go:221] (will retry): cluster not yet healthy
... skipping 336 lines: the same INSTANCE GROUPS / VALIDATION ERRORS output repeated every ~10s through 20:29:32, with one more "unable to resolve Kubernetes cluster API URL" DNS lookup failure at 20:27:21 ...
W1015 20:29:42.468701    5023 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-1a	Master	c5.large	1	1	us-west-1a
nodes-us-west-1a	Node	t3.medium	4	4	us-west-1a

... skipping 11 lines ...
Pod	kube-system/calico-kube-controllers-6b59cd85f8-cw9h6	system-cluster-critical pod "calico-kube-controllers-6b59cd85f8-cw9h6" is pending
Pod	kube-system/calico-node-7prxw				system-node-critical pod "calico-node-7prxw" is not ready (calico-node)
Pod	kube-system/coredns-5dc785954d-h5d8p			system-cluster-critical pod "coredns-5dc785954d-h5d8p" is pending
Pod	kube-system/coredns-autoscaler-84d4cfd89c-8kszl		system-cluster-critical pod "coredns-autoscaler-84d4cfd89c-8kszl" is pending
Pod	kube-system/kops-controller-gr877			system-node-critical pod "kops-controller-gr877" is pending

Validation Failed
W1015 20:29:54.203425    5023 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-1a	Master	c5.large	1	1	us-west-1a
nodes-us-west-1a	Node	t3.medium	4	4	us-west-1a

... skipping 8 lines ...
Machine	i-0a637cf057db92f61					machine "i-0a637cf057db92f61" has not yet joined cluster
Machine	i-0d672a2371b943b82					machine "i-0d672a2371b943b82" has not yet joined cluster
Pod	kube-system/calico-kube-controllers-6b59cd85f8-cw9h6	system-cluster-critical pod "calico-kube-controllers-6b59cd85f8-cw9h6" is pending
Pod	kube-system/coredns-5dc785954d-h5d8p			system-cluster-critical pod "coredns-5dc785954d-h5d8p" is pending
Pod	kube-system/coredns-autoscaler-84d4cfd89c-8kszl		system-cluster-critical pod "coredns-autoscaler-84d4cfd89c-8kszl" is pending

Validation Failed
W1015 20:30:05.424199    5023 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-1a	Master	c5.large	1	1	us-west-1a
nodes-us-west-1a	Node	t3.medium	4	4	us-west-1a

... skipping 9 lines ...
Machine	i-0d672a2371b943b82				machine "i-0d672a2371b943b82" has not yet joined cluster
Node	ip-172-20-38-11.us-west-1.compute.internal	node "ip-172-20-38-11.us-west-1.compute.internal" of role "node" is not ready
Pod	kube-system/calico-node-wszsq			system-node-critical pod "calico-node-wszsq" is pending
Pod	kube-system/coredns-5dc785954d-h5d8p		system-cluster-critical pod "coredns-5dc785954d-h5d8p" is pending
Pod	kube-system/coredns-autoscaler-84d4cfd89c-8kszl	system-cluster-critical pod "coredns-autoscaler-84d4cfd89c-8kszl" is pending

Validation Failed
W1015 20:30:16.667255    5023 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-1a	Master	c5.large	1	1	us-west-1a
nodes-us-west-1a	Node	t3.medium	4	4	us-west-1a

... skipping 16 lines ...
Pod	kube-system/calico-node-wszsq						system-node-critical pod "calico-node-wszsq" is pending
Pod	kube-system/coredns-5dc785954d-h5d8p					system-cluster-critical pod "coredns-5dc785954d-h5d8p" is pending
Pod	kube-system/coredns-autoscaler-84d4cfd89c-8kszl				system-cluster-critical pod "coredns-autoscaler-84d4cfd89c-8kszl" is pending
Pod	kube-system/kube-proxy-ip-172-20-50-114.us-west-1.compute.internal	system-node-critical pod "kube-proxy-ip-172-20-50-114.us-west-1.compute.internal" is pending
Pod	kube-system/kube-proxy-ip-172-20-63-79.us-west-1.compute.internal	system-node-critical pod "kube-proxy-ip-172-20-63-79.us-west-1.compute.internal" is pending

Validation Failed
W1015 20:30:27.874010    5023 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-1a	Master	c5.large	1	1	us-west-1a
nodes-us-west-1a	Node	t3.medium	4	4	us-west-1a

... skipping 14 lines ...
Pod	kube-system/calico-node-f262z			system-node-critical pod "calico-node-f262z" is pending
Pod	kube-system/calico-node-pc8fz			system-node-critical pod "calico-node-pc8fz" is pending
Pod	kube-system/calico-node-wszsq			system-node-critical pod "calico-node-wszsq" is not ready (calico-node)
Pod	kube-system/coredns-5dc785954d-h5d8p		system-cluster-critical pod "coredns-5dc785954d-h5d8p" is pending
Pod	kube-system/coredns-autoscaler-84d4cfd89c-8kszl	system-cluster-critical pod "coredns-autoscaler-84d4cfd89c-8kszl" is pending

Validation Failed
W1015 20:30:39.094643    5023 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-1a	Master	c5.large	1	1	us-west-1a
nodes-us-west-1a	Node	t3.medium	4	4	us-west-1a

... skipping 8 lines ...
VALIDATION ERRORS
KIND	NAME				MESSAGE
Pod	kube-system/calico-node-84chv	system-node-critical pod "calico-node-84chv" is not ready (calico-node)
Pod	kube-system/calico-node-pc8fz	system-node-critical pod "calico-node-pc8fz" is not ready (calico-node)
Pod	kube-system/calico-node-wszsq	system-node-critical pod "calico-node-wszsq" is not ready (calico-node)

Validation Failed
W1015 20:30:50.300964    5023 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-1a	Master	c5.large	1	1	us-west-1a
nodes-us-west-1a	Node	t3.medium	4	4	us-west-1a

... skipping 799 lines ...
  	                    	        for cmd in "${commands[@]}"; do
  	                    	...
  	                    	            continue
  	                    	          fi
  	                    	+         if ! validate-hash "${file}" "${hash}"; then
  	                    	-         if [[ -n "${hash}" ]] && ! validate-hash "${file}" "${hash}"; then
  	                    	            echo "== Hash validation of ${url} failed. Retrying. =="
  	                    	            rm -f "${file}"
  	                    	          else
  	                    	-           if [[ -n "${hash}" ]]; then
  	                    	+           echo "== Downloaded ${url} (SHA256 = ${hash}) =="
  	                    	-             echo "== Downloaded ${url} (SHA1 = ${hash}) =="
  	                    	-           else
... skipping 294 lines ...
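The diff above shows the bootstrap script moving from conditional SHA1/SHA256 checking to unconditional SHA256 validation with a delete-and-retry on mismatch. A self-contained sketch of that pattern, assuming `curl` and `sha256sum` are available; the names mirror the diff's `validate-hash` helper (underscored for portability) but are illustrative, not kops's exact code:

```shell
# Succeed only if the file's SHA256 matches the expected hash.
validate_hash() {
  file="$1"; expected="$2"
  actual=$(sha256sum "${file}" | awk '{print $1}')
  [ "${actual}" = "${expected}" ]
}

# Try each URL in turn; on a hash mismatch, delete the file and move on,
# echoing the same markers the bootstrap script logs.
download_or_bust() {
  hash="$1"; shift
  for url in "$@"; do
    file="${url##*/}"
    curl -fsSL -o "${file}" "${url}" || continue
    if ! validate_hash "${file}" "${hash}"; then
      echo "== Hash validation of ${url} failed. Retrying. =="
      rm -f "${file}"
    else
      echo "== Downloaded ${url} (SHA256 = ${hash}) =="
      return 0
    fi
  done
  return 1
}
```

Validating unconditionally (rather than only when a hash happens to be set) is what the `+`/`-` lines in the diff change.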
... skipping 4381 lines ...
evicting pod kube-system/dns-controller-7cf9d66d6d-j9pdg
I1015 20:35:30.122185    5145 instancegroups.go:658] Waiting for 5s for pods to stabilize after draining.
I1015 20:35:35.122440    5145 instancegroups.go:417] deleting node "ip-172-20-62-249.us-west-1.compute.internal" from kubernetes
I1015 20:35:35.184409    5145 instancegroups.go:591] Stopping instance "i-0e92df717e6883f26", node "ip-172-20-62-249.us-west-1.compute.internal", in group "master-us-west-1a.masters.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io" (this may take a while).
I1015 20:35:35.348277    5145 instancegroups.go:435] waiting for 15s after terminating instance
I1015 20:35:50.348923    5145 instancegroups.go:470] Validating the cluster.
I1015 20:36:20.380289    5145 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 54.177.111.140:443: i/o timeout.
... skipping 15 lines: the same "Cluster did not validate, will retry in 30s" i/o timeout against 54.177.111.140:443 repeated every 60s from 20:37:20 through 20:51:20 ...
I1015 20:52:20.989858    5145 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 54.177.111.140:443: i/o timeout.
I1015 20:53:21.022052    5145 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 54.177.111.140:443: i/o timeout.
I1015 20:54:21.053990    5145 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 54.177.111.140:443: i/o timeout.
I1015 20:55:21.085736    5145 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 54.177.111.140:443: i/o timeout.
I1015 20:56:21.123254    5145 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 54.177.111.140:443: i/o timeout.
I1015 20:57:21.159115    5145 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 54.177.111.140:443: i/o timeout.
I1015 20:58:21.201914    5145 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 54.177.111.140:443: i/o timeout.
I1015 20:59:21.234129    5145 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 54.177.111.140:443: i/o timeout.
I1015 21:00:21.266902    5145 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 54.177.111.140:443: i/o timeout.
I1015 21:01:21.303485    5145 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 54.177.111.140:443: i/o timeout.
I1015 21:02:21.351345    5145 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 54.177.111.140:443: i/o timeout.
I1015 21:02:53.030039    5145 instancegroups.go:526] Cluster did not pass validation, will retry in "30s": node "ip-172-20-61-25.us-west-1.compute.internal" of role "master" is not ready, node "ip-172-20-63-79.us-west-1.compute.internal" of role "node" is not ready, node "ip-172-20-50-114.us-west-1.compute.internal" of role "node" is not ready, node "ip-172-20-38-11.us-west-1.compute.internal" of role "node" is not ready, node "ip-172-20-41-154.us-west-1.compute.internal" of role "node" is not ready, system-node-critical pod "calico-node-2sdlc" is pending, system-node-critical pod "calico-node-sj2sr" is pending, system-node-critical pod "ebs-csi-node-nqmt7" is pending, system-cluster-critical pod "etcd-manager-events-ip-172-20-61-25.us-west-1.compute.internal" is pending, system-cluster-critical pod "etcd-manager-main-ip-172-20-61-25.us-west-1.compute.internal" is pending, system-cluster-critical pod "kube-apiserver-ip-172-20-61-25.us-west-1.compute.internal" is pending, system-cluster-critical pod "kube-controller-manager-ip-172-20-61-25.us-west-1.compute.internal" is pending, system-node-critical pod "kube-proxy-ip-172-20-61-25.us-west-1.compute.internal" is pending, system-cluster-critical pod "kube-scheduler-ip-172-20-61-25.us-west-1.compute.internal" is pending.
... skipping 4 identical "Cluster did not pass validation" retry lines (21:03:24 through 21:04:57) ...
I1015 21:05:29.197097    5145 instancegroups.go:526] Cluster did not pass validation, will retry in "30s": node "ip-172-20-61-25.us-west-1.compute.internal" of role "master" is not ready, node "ip-172-20-63-79.us-west-1.compute.internal" of role "node" is not ready, node "ip-172-20-50-114.us-west-1.compute.internal" of role "node" is not ready, node "ip-172-20-38-11.us-west-1.compute.internal" of role "node" is not ready, node "ip-172-20-41-154.us-west-1.compute.internal" of role "node" is not ready, system-node-critical pod "calico-node-2sdlc" is pending, system-node-critical pod "calico-node-sj2sr" is pending, system-node-critical pod "ebs-csi-node-nqmt7" is pending, system-cluster-critical pod "etcd-manager-events-ip-172-20-61-25.us-west-1.compute.internal" is pending, system-cluster-critical pod "etcd-manager-main-ip-172-20-61-25.us-west-1.compute.internal" is pending, system-cluster-critical pod "kube-apiserver-ip-172-20-61-25.us-west-1.compute.internal" is pending, system-cluster-critical pod "kube-controller-manager-ip-172-20-61-25.us-west-1.compute.internal" is pending, system-node-critical pod "kube-proxy-ip-172-20-61-25.us-west-1.compute.internal" is pending, system-cluster-critical pod "kube-scheduler-ip-172-20-61-25.us-west-1.compute.internal" is pending.
I1015 21:06:00.377435    5145 instancegroups.go:523] Cluster did not pass validation within deadline: node "ip-172-20-61-25.us-west-1.compute.internal" of role "master" is not ready, node "ip-172-20-63-79.us-west-1.compute.internal" of role "node" is not ready, node "ip-172-20-50-114.us-west-1.compute.internal" of role "node" is not ready, node "ip-172-20-38-11.us-west-1.compute.internal" of role "node" is not ready, node "ip-172-20-41-154.us-west-1.compute.internal" of role "node" is not ready, system-node-critical pod "calico-node-2sdlc" is pending, system-node-critical pod "calico-node-sj2sr" is pending, system-node-critical pod "ebs-csi-node-nqmt7" is pending, system-cluster-critical pod "etcd-manager-events-ip-172-20-61-25.us-west-1.compute.internal" is pending, system-cluster-critical pod "etcd-manager-main-ip-172-20-61-25.us-west-1.compute.internal" is pending, system-cluster-critical pod "kube-apiserver-ip-172-20-61-25.us-west-1.compute.internal" is pending, system-cluster-critical pod "kube-controller-manager-ip-172-20-61-25.us-west-1.compute.internal" is pending, system-node-critical pod "kube-proxy-ip-172-20-61-25.us-west-1.compute.internal" is pending, system-cluster-critical pod "kube-scheduler-ip-172-20-61-25.us-west-1.compute.internal" is pending.
E1015 21:06:00.377477    5145 instancegroups.go:475] Cluster did not validate within 30m0s
Error: master not healthy after update, stopping rolling-update: "error validating cluster after terminating instance: cluster did not validate within a duration of \"30m0s\""
+ kops-finish
+ kubetest2 kops -v=2 --cloud-provider=aws --cluster-name=e2e-89d41a7532-e6156.test-cncf-aws.k8s.io --kops-root=/home/prow/go/src/k8s.io/kops --admin-access= --env=KOPS_FEATURE_FLAGS=SpecOverrideFlag --kops-binary-path=/tmp/kops.keuypiVQr --down
I1015 21:06:00.406535    5163 app.go:59] RunDir for this run: "/logs/artifacts/d897c2dc-2df5-11ec-a66e-da1c1b34387a"
I1015 21:06:00.406842    5163 app.go:90] ID for this run: "d897c2dc-2df5-11ec-a66e-da1c1b34387a"
I1015 21:06:00.406889    5163 dumplogs.go:40] /tmp/kops.keuypiVQr toolbox dump --name e2e-89d41a7532-e6156.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /etc/aws-ssh/aws-ssh-private --ssh-user ubuntu
I1015 21:06:00.423838    5171 featureflag.go:166] FeatureFlag "SpecOverrideFlag"=true
... skipping 1506 lines ...