Result: FAILURE
Tests: 0 failed / 0 succeeded
Started: 2021-10-17 14:25
Elapsed: 46m36s
Revision: master

No Test Failures!


Error lines from build-log.txt

... skipping 176 lines ...
I1017 14:26:08.497658    4946 dumplogs.go:40] /tmp/kops.ZOUHSJG65 toolbox dump --name e2e-89d41a7532-e6156.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /etc/aws-ssh/aws-ssh-private --ssh-user ubuntu
I1017 14:26:08.517398    4957 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
I1017 14:26:08.517487    4957 featureflag.go:165] FeatureFlag "AlphaAllowGCE"=true
I1017 14:26:08.517491    4957 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true

Cluster.kops.k8s.io "e2e-89d41a7532-e6156.test-cncf-aws.k8s.io" not found
W1017 14:26:09.037089    4946 down.go:34] Dumping cluster logs at the start of Down() failed: exit status 1
I1017 14:26:09.037149    4946 down.go:48] /tmp/kops.ZOUHSJG65 delete cluster --name e2e-89d41a7532-e6156.test-cncf-aws.k8s.io --yes
I1017 14:26:09.051331    4967 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
I1017 14:26:09.051405    4967 featureflag.go:165] FeatureFlag "AlphaAllowGCE"=true
I1017 14:26:09.051410    4967 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true

error reading cluster configuration: Cluster.kops.k8s.io "e2e-89d41a7532-e6156.test-cncf-aws.k8s.io" not found
I1017 14:26:09.569846    4946 http.go:37] curl http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip
2021/10/17 14:26:09 failed to get external ip from metadata service: http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip returned 404
I1017 14:26:09.577256    4946 http.go:37] curl https://ip.jsb.workers.dev
I1017 14:26:09.689013    4946 up.go:144] /tmp/kops.ZOUHSJG65 create cluster --name e2e-89d41a7532-e6156.test-cncf-aws.k8s.io --cloud aws --kubernetes-version v1.21.0 --ssh-public-key /etc/aws-ssh/aws-ssh-public --override cluster.spec.nodePortAccess=0.0.0.0/0 --yes --networking calico --admin-access 35.192.147.67/32 --master-count 1 --master-volume-size 48 --node-count 4 --node-volume-size 48 --zones us-west-2a --master-size c5.large
I1017 14:26:09.705601    4976 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
I1017 14:26:09.705793    4976 featureflag.go:165] FeatureFlag "AlphaAllowGCE"=true
I1017 14:26:09.705818    4976 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
I1017 14:26:09.759568    4976 create_cluster.go:728] Using SSH public key: /etc/aws-ssh/aws-ssh-public
... skipping 44 lines ...
I1017 14:26:35.556449    4946 up.go:181] /tmp/kops.ZOUHSJG65 validate cluster --name e2e-89d41a7532-e6156.test-cncf-aws.k8s.io --count 10 --wait 15m0s
I1017 14:26:35.571013    4995 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
I1017 14:26:35.571092    4995 featureflag.go:165] FeatureFlag "AlphaAllowGCE"=true
I1017 14:26:35.571096    4995 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
Validating cluster e2e-89d41a7532-e6156.test-cncf-aws.k8s.io

W1017 14:26:36.678173    4995 validate_cluster.go:173] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.medium	4	4	us-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
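A diagnostic sketch, not part of this job's tooling: while validation is stuck on the placeholder record, the API DNS entry and the dns-controller deployment can be checked by hand. The cluster name below is the one from this run; the kubectl step assumes API access that this job never obtained.

# Does the API record still resolve to the kops placeholder (203.0.113.123)?
dig +short api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io

# Once the API server answers, the dns-controller logs usually show why the
# record was not updated (dns-controller is a standard kops deployment):
kubectl -n kube-system logs deployment/dns-controller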
W1017 14:26:46.723628    4995 validate_cluster.go:221] (will retry): cluster not yet healthy
... skipping 353 lines (the INSTANCE GROUPS / VALIDATION ERRORS output above repeats verbatim on every retry through 14:30:37, with two further "no such host" lookup failures at 14:27:46 and 14:29:17) ...
W1017 14:30:47.504364    4995 validate_cluster.go:221] (will retry): cluster not yet healthy
W1017 14:31:27.566396    4995 validate_cluster.go:173] (will retry): unexpected error during validation: error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 203.0.113.123:443: i/o timeout
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.medium	4	4	us-west-2a

NODE STATUS
... skipping 11 lines ...
Pod	kube-system/calico-node-h5sxc						system-node-critical pod "calico-node-h5sxc" is not ready (calico-node)
Pod	kube-system/calico-node-n55kv						system-node-critical pod "calico-node-n55kv" is not ready (calico-node)
Pod	kube-system/coredns-5dc785954d-9zzg8					system-cluster-critical pod "coredns-5dc785954d-9zzg8" is pending
Pod	kube-system/coredns-5dc785954d-w57cf					system-cluster-critical pod "coredns-5dc785954d-w57cf" is not ready (coredns)
Pod	kube-system/kube-proxy-ip-172-20-43-244.us-west-2.compute.internal	system-node-critical pod "kube-proxy-ip-172-20-43-244.us-west-2.compute.internal" is pending

Validation Failed
W1017 14:31:39.612633    4995 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.medium	4	4	us-west-2a

... skipping 7 lines ...

VALIDATION ERRORS
KIND	NAME						MESSAGE
Node	ip-172-20-43-244.us-west-2.compute.internal	node "ip-172-20-43-244.us-west-2.compute.internal" of role "node" is not ready
Pod	kube-system/calico-node-cgz7x			system-node-critical pod "calico-node-cgz7x" is not ready (calico-node)

Validation Failed
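Another hedged diagnostic sketch: the not-ready calico-node pod named above could be inspected directly once the API is reachable. These are standard kubectl commands, not something this harness ran.

# Pod name taken from the validation output above:
kubectl -n kube-system describe pod calico-node-cgz7x
kubectl -n kube-system logs calico-node-cgz7x -c calico-node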
W1017 14:31:50.955796    4995 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.medium	4	4	us-west-2a

... skipping 806 lines ...
  	                    	        for cmd in "${commands[@]}"; do
  	                    	...
  	                    	            continue
  	                    	          fi
  	                    	+         if ! validate-hash "${file}" "${hash}"; then
  	                    	-         if [[ -n "${hash}" ]] && ! validate-hash "${file}" "${hash}"; then
  	                    	            echo "== Hash validation of ${url} failed. Retrying. =="
  	                    	            rm -f "${file}"
  	                    	          else
  	                    	-           if [[ -n "${hash}" ]]; then
  	                    	+           echo "== Downloaded ${url} (SHA256 = ${hash}) =="
  	                    	-             echo "== Downloaded ${url} (SHA1 = ${hash}) =="
  	                    	-           else
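For readability, a sketch of the changed hash-validation branch after this diff is applied, reconstructed only from the +/- and context lines above; the surrounding download/retry loop is elided:

# reconstructed from the diff above; validate-hash, file, url, and hash are
# defined in the elided surrounding script
if ! validate-hash "${file}" "${hash}"; then
  echo "== Hash validation of ${url} failed. Retrying. =="
  rm -f "${file}"
else
  echo "== Downloaded ${url} (SHA256 = ${hash}) =="
fi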
... skipping 6497 lines (including a verbatim repeat of the diff above) ...
evicting pod kube-system/dns-controller-7cf9d66d6d-4ql7z
I1017 14:35:57.220551    5119 instancegroups.go:658] Waiting for 5s for pods to stabilize after draining.
I1017 14:36:02.220791    5119 instancegroups.go:417] deleting node "ip-172-20-60-118.us-west-2.compute.internal" from kubernetes
I1017 14:36:02.293567    5119 instancegroups.go:591] Stopping instance "i-0742bd03668ec8cba", node "ip-172-20-60-118.us-west-2.compute.internal", in group "master-us-west-2a.masters.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io" (this may take a while).
I1017 14:36:02.484896    5119 instancegroups.go:435] waiting for 15s after terminating instance
I1017 14:36:17.485171    5119 instancegroups.go:470] Validating the cluster.
I1017 14:36:47.521319    5119 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 35.162.210.131:443: i/o timeout.
... skipping 23 lines (the same i/o-timeout validation retry repeats once a minute from 14:37:47 through 14:59:48) ...
I1017 15:00:48.428214    5119 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 35.162.210.131:443: i/o timeout.
I1017 15:01:20.567933    5119 instancegroups.go:526] Cluster did not pass validation, will retry in "30s": node "ip-172-20-43-244.us-west-2.compute.internal" of role "node" is not ready, node "ip-172-20-33-162.us-west-2.compute.internal" of role "node" is not ready, node "ip-172-20-46-149.us-west-2.compute.internal" of role "node" is not ready, node "ip-172-20-57-141.us-west-2.compute.internal" of role "node" is not ready, node "ip-172-20-49-108.us-west-2.compute.internal" of role "master" is not ready, system-node-critical pod "calico-node-59w5h" is pending, system-node-critical pod "calico-node-m9cww" is pending, system-node-critical pod "ebs-csi-node-lk8s6" is pending, system-cluster-critical pod "etcd-manager-events-ip-172-20-49-108.us-west-2.compute.internal" is pending, system-cluster-critical pod "etcd-manager-main-ip-172-20-49-108.us-west-2.compute.internal" is pending, system-cluster-critical pod "kube-apiserver-ip-172-20-49-108.us-west-2.compute.internal" is pending, system-cluster-critical pod "kube-controller-manager-ip-172-20-49-108.us-west-2.compute.internal" is pending, system-node-critical pod "kube-proxy-ip-172-20-49-108.us-west-2.compute.internal" is pending, system-cluster-critical pod "kube-scheduler-ip-172-20-49-108.us-west-2.compute.internal" is pending.
... skipping 9 lines (the same not-ready node/pod list repeats on each ~30s retry from 15:01:51 through 15:06:03) ...
I1017 15:06:35.425235    5119 instancegroups.go:523] Cluster did not pass validation within deadline: node "ip-172-20-49-108.us-west-2.compute.internal" of role "master" is not ready, node "ip-172-20-43-244.us-west-2.compute.internal" of role "node" is not ready, node "ip-172-20-33-162.us-west-2.compute.internal" of role "node" is not ready, node "ip-172-20-46-149.us-west-2.compute.internal" of role "node" is not ready, node "ip-172-20-57-141.us-west-2.compute.internal" of role "node" is not ready, system-node-critical pod "calico-node-59w5h" is pending, system-node-critical pod "calico-node-m9cww" is pending, system-node-critical pod "ebs-csi-node-lk8s6" is pending, system-cluster-critical pod "etcd-manager-events-ip-172-20-49-108.us-west-2.compute.internal" is pending, system-cluster-critical pod "etcd-manager-main-ip-172-20-49-108.us-west-2.compute.internal" is pending, system-cluster-critical pod "kube-apiserver-ip-172-20-49-108.us-west-2.compute.internal" is pending, system-cluster-critical pod "kube-controller-manager-ip-172-20-49-108.us-west-2.compute.internal" is pending, system-node-critical pod "kube-proxy-ip-172-20-49-108.us-west-2.compute.internal" is pending, system-cluster-critical pod "kube-scheduler-ip-172-20-49-108.us-west-2.compute.internal" is pending.
E1017 15:06:35.425270    5119 instancegroups.go:475] Cluster did not validate within 30m0s
Error: master not healthy after update, stopping rolling-update: "error validating cluster after terminating instance: cluster did not validate within a duration of \"30m0s\""
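To reproduce the failing check by hand, the validation step the harness ran earlier can be reissued with the same flags (binary path and cluster name exactly as invoked at 14:26:35 above):

/tmp/kops.ZOUHSJG65 validate cluster --name e2e-89d41a7532-e6156.test-cncf-aws.k8s.io --count 10 --wait 15m0s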
+ kops-finish
+ kubetest2 kops -v=2 --cloud-provider=aws --cluster-name=e2e-89d41a7532-e6156.test-cncf-aws.k8s.io --kops-root=/home/prow/go/src/k8s.io/kops --admin-access= --env=KOPS_FEATURE_FLAGS=SpecOverrideFlag --kops-binary-path=/tmp/kops.OXVO1ROa1 --down
I1017 15:06:35.456366    5137 app.go:59] RunDir for this run: "/logs/artifacts/f65361b3-2f55-11ec-8a05-6acfde499713"
I1017 15:06:35.456572    5137 app.go:90] ID for this run: "f65361b3-2f55-11ec-8a05-6acfde499713"
I1017 15:06:35.456601    5137 dumplogs.go:40] /tmp/kops.OXVO1ROa1 toolbox dump --name e2e-89d41a7532-e6156.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /etc/aws-ssh/aws-ssh-private --ssh-user ubuntu
I1017 15:06:35.474114    5145 featureflag.go:166] FeatureFlag "SpecOverrideFlag"=true
... skipping 1470 lines ...