Result: FAILURE
Tests: 0 failed / 0 succeeded
Started: 2021-10-18 10:25
Elapsed: 46m45s
Revision: master

No Test Failures!


Error lines from build-log.txt

... skipping 176 lines ...
I1018 10:26:25.635844    4892 dumplogs.go:40] /tmp/kops.rnmMMnlC5 toolbox dump --name e2e-89d41a7532-e6156.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /etc/aws-ssh/aws-ssh-private --ssh-user ubuntu
I1018 10:26:25.655460    4903 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
I1018 10:26:25.655541    4903 featureflag.go:165] FeatureFlag "AlphaAllowGCE"=true
I1018 10:26:25.655545    4903 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true

Cluster.kops.k8s.io "e2e-89d41a7532-e6156.test-cncf-aws.k8s.io" not found
W1018 10:26:26.215022    4892 down.go:34] Dumping cluster logs at the start of Down() failed: exit status 1
I1018 10:26:26.215210    4892 down.go:48] /tmp/kops.rnmMMnlC5 delete cluster --name e2e-89d41a7532-e6156.test-cncf-aws.k8s.io --yes
I1018 10:26:26.234011    4913 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
I1018 10:26:26.234134    4913 featureflag.go:165] FeatureFlag "AlphaAllowGCE"=true
I1018 10:26:26.234143    4913 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true

error reading cluster configuration: Cluster.kops.k8s.io "e2e-89d41a7532-e6156.test-cncf-aws.k8s.io" not found
I1018 10:26:26.744209    4892 http.go:37] curl http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip
2021/10/18 10:26:26 failed to get external ip from metadata service: http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip returned 404
I1018 10:26:26.751416    4892 http.go:37] curl https://ip.jsb.workers.dev
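The two curl probes above implement external-IP discovery with a fallback: the GCE metadata endpoint returns 404 when the harness is not running on GCE, so it falls back to a public IP-echo service. A minimal sketch of the same pattern (URLs copied from the log; `-f` makes curl treat the 404 as a failure so the `||` branch takes over):

```shell
#!/usr/bin/env bash
# External-IP discovery with fallback: the metadata URL 404s off-GCE
# (as in the log above), so `curl -f` fails and the echo service is tried.
md_url="http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip"
external_ip=$(curl -sf -H "Metadata-Flavor: Google" "${md_url}" \
  || curl -sf "https://ip.jsb.workers.dev")
echo "external ip: ${external_ip:-unknown}"
```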
I1018 10:26:26.865467    4892 up.go:144] /tmp/kops.rnmMMnlC5 create cluster --name e2e-89d41a7532-e6156.test-cncf-aws.k8s.io --cloud aws --kubernetes-version v1.21.0 --ssh-public-key /etc/aws-ssh/aws-ssh-public --override cluster.spec.nodePortAccess=0.0.0.0/0 --yes --networking calico --admin-access 35.192.147.67/32 --master-count 1 --master-volume-size 48 --node-count 4 --node-volume-size 48 --zones eu-west-2a --master-size c5.large
I1018 10:26:26.886445    4923 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
I1018 10:26:26.886535    4923 featureflag.go:165] FeatureFlag "AlphaAllowGCE"=true
I1018 10:26:26.886540    4923 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
I1018 10:26:26.934105    4923 create_cluster.go:728] Using SSH public key: /etc/aws-ssh/aws-ssh-public
... skipping 44 lines ...
I1018 10:26:53.133503    4892 up.go:181] /tmp/kops.rnmMMnlC5 validate cluster --name e2e-89d41a7532-e6156.test-cncf-aws.k8s.io --count 10 --wait 15m0s
I1018 10:26:53.157049    4941 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
I1018 10:26:53.157265    4941 featureflag.go:165] FeatureFlag "AlphaAllowGCE"=true
I1018 10:26:53.157305    4941 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
Validating cluster e2e-89d41a7532-e6156.test-cncf-aws.k8s.io

W1018 10:26:54.487731    4941 validate_cluster.go:173] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
W1018 10:27:04.516349    4941 validate_cluster.go:173] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.medium	4	4	eu-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1018 10:27:14.564833    4941 validate_cluster.go:221] (will retry): cluster not yet healthy
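The validation error above means the cluster's `api.` record still resolves to the placeholder address kops creates. One way to watch for dns-controller's cutover is to poll the record directly — a hedged diagnostic sketch (cluster name and placeholder IP taken from this log; requires `dig` from dnsutils):

```shell
#!/usr/bin/env bash
# Resolve the cluster API record and report whether dns-controller has
# replaced kops's placeholder address with the real master IP yet.
api="api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io"
placeholder="203.0.113.123"

resolved=$(dig +short "${api}" | head -n1)
if [ -z "${resolved}" ]; then
  echo "record not published yet (NXDOMAIN)"
elif [ "${resolved}" = "${placeholder}" ]; then
  echo "still the kops placeholder; dns-controller has not updated the record"
else
  echo "API resolves to ${resolved}"
fi
```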
... skipping 303 lines (the same "cluster not yet healthy" validation failure, retried roughly every 10 seconds) ...
W1018 10:30:25.287917    4941 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.medium	4	4	eu-west-2a

... skipping 7 lines ...
Machine	i-0202dee931fb56593				machine "i-0202dee931fb56593" has not yet joined cluster
Machine	i-0b830e5dae6ae1058				machine "i-0b830e5dae6ae1058" has not yet joined cluster
Machine	i-0c9071f19821b6ae8				machine "i-0c9071f19821b6ae8" has not yet joined cluster
Pod	kube-system/coredns-5dc785954d-gqdr4		system-cluster-critical pod "coredns-5dc785954d-gqdr4" is pending
Pod	kube-system/coredns-autoscaler-84d4cfd89c-f72jf	system-cluster-critical pod "coredns-autoscaler-84d4cfd89c-f72jf" is pending

Validation Failed
W1018 10:30:38.092382    4941 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.medium	4	4	eu-west-2a

... skipping 8 lines ...
Machine	i-0b830e5dae6ae1058				machine "i-0b830e5dae6ae1058" has not yet joined cluster
Machine	i-0c9071f19821b6ae8				machine "i-0c9071f19821b6ae8" has not yet joined cluster
Pod	kube-system/calico-node-6t64l			system-node-critical pod "calico-node-6t64l" is pending
Pod	kube-system/coredns-5dc785954d-gqdr4		system-cluster-critical pod "coredns-5dc785954d-gqdr4" is pending
Pod	kube-system/coredns-autoscaler-84d4cfd89c-f72jf	system-cluster-critical pod "coredns-autoscaler-84d4cfd89c-f72jf" is pending

Validation Failed
W1018 10:30:50.029197    4941 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.medium	4	4	eu-west-2a

... skipping 15 lines ...
Pod	kube-system/calico-node-85sj4						system-node-critical pod "calico-node-85sj4" is pending
Pod	kube-system/calico-node-wcbct						system-node-critical pod "calico-node-wcbct" is pending
Pod	kube-system/coredns-5dc785954d-gqdr4					system-cluster-critical pod "coredns-5dc785954d-gqdr4" is pending
Pod	kube-system/coredns-autoscaler-84d4cfd89c-f72jf				system-cluster-critical pod "coredns-autoscaler-84d4cfd89c-f72jf" is pending
Pod	kube-system/kube-proxy-ip-172-20-42-140.eu-west-2.compute.internal	system-node-critical pod "kube-proxy-ip-172-20-42-140.eu-west-2.compute.internal" is pending

Validation Failed
W1018 10:31:02.015345    4941 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.medium	4	4	eu-west-2a

... skipping 14 lines ...
Pod	kube-system/calico-node-6t64l			system-node-critical pod "calico-node-6t64l" is not ready (calico-node)
Pod	kube-system/calico-node-85sj4			system-node-critical pod "calico-node-85sj4" is pending
Pod	kube-system/calico-node-wcbct			system-node-critical pod "calico-node-wcbct" is pending
Pod	kube-system/coredns-5dc785954d-gqdr4		system-cluster-critical pod "coredns-5dc785954d-gqdr4" is pending
Pod	kube-system/coredns-autoscaler-84d4cfd89c-f72jf	system-cluster-critical pod "coredns-autoscaler-84d4cfd89c-f72jf" is pending

Validation Failed
W1018 10:31:13.943700    4941 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.medium	4	4	eu-west-2a

... skipping 10 lines ...
Pod	kube-system/calico-node-4fbf8		system-node-critical pod "calico-node-4fbf8" is not ready (calico-node)
Pod	kube-system/calico-node-6t64l		system-node-critical pod "calico-node-6t64l" is not ready (calico-node)
Pod	kube-system/calico-node-85sj4		system-node-critical pod "calico-node-85sj4" is not ready (calico-node)
Pod	kube-system/calico-node-wcbct		system-node-critical pod "calico-node-wcbct" is not ready (calico-node)
Pod	kube-system/coredns-5dc785954d-gqdr4	system-cluster-critical pod "coredns-5dc785954d-gqdr4" is pending

Validation Failed
W1018 10:31:25.878902    4941 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.medium	4	4	eu-west-2a

... skipping 799 lines ...
  	                    	        for cmd in "${commands[@]}"; do
  	                    	...
  	                    	            continue
  	                    	          fi
  	                    	+         if ! validate-hash "${file}" "${hash}"; then
  	                    	-         if [[ -n "${hash}" ]] && ! validate-hash "${file}" "${hash}"; then
  	                    	            echo "== Hash validation of ${url} failed. Retrying. =="
  	                    	            rm -f "${file}"
  	                    	          else
  	                    	-           if [[ -n "${hash}" ]]; then
  	                    	+           echo "== Downloaded ${url} (SHA256 = ${hash}) =="
  	                    	-             echo "== Downloaded ${url} (SHA1 = ${hash}) =="
  	                    	-           else
... skipping 294 lines ...
  	                    	        for cmd in "${commands[@]}"; do
  	                    	...
  	                    	            continue
  	                    	          fi
  	                    	+         if ! validate-hash "${file}" "${hash}"; then
  	                    	-         if [[ -n "${hash}" ]] && ! validate-hash "${file}" "${hash}"; then
  	                    	            echo "== Hash validation of ${url} failed. Retrying. =="
  	                    	            rm -f "${file}"
  	                    	          else
  	                    	-           if [[ -n "${hash}" ]]; then
  	                    	+           echo "== Downloaded ${url} (SHA256 = ${hash}) =="
  	                    	-             echo "== Downloaded ${url} (SHA1 = ${hash}) =="
  	                    	-           else
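The nodeup diff above makes the download hash mandatory (the `[[ -n "${hash}" ]]` guard is removed) and moves the check from SHA-1 to SHA-256. A self-contained sketch of the new flow, with `validate_hash` standing in for the script's own helper and a local temp file in place of a real download:

```shell
#!/usr/bin/env bash
set -u

# Stand-in for the validate-hash helper in the diff above: after the change,
# every download must carry a SHA-256 hash and is verified against it.
validate_hash() {
  local file="$1" expected="$2"
  local actual
  actual=$(sha256sum "${file}" | awk '{print $1}')
  [ "${actual}" = "${expected}" ]
}

# Demo with a local file instead of a real download.
file=$(mktemp)
printf 'hello\n' > "${file}"
hash=$(sha256sum "${file}" | awk '{print $1}')

if ! validate_hash "${file}" "${hash}"; then
  echo "== Hash validation failed. Retrying. =="
  rm -f "${file}"
else
  echo "== Downloaded ${file} (SHA256 = ${hash}) =="
  rm -f "${file}"
fi
```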
... skipping 6190 lines ...
evicting pod kube-system/dns-controller-7cf9d66d6d-9mgmp
I1018 10:35:47.377309    5069 instancegroups.go:658] Waiting for 5s for pods to stabilize after draining.
I1018 10:35:52.379837    5069 instancegroups.go:417] deleting node "ip-172-20-54-30.eu-west-2.compute.internal" from kubernetes
I1018 10:35:52.477701    5069 instancegroups.go:591] Stopping instance "i-03cd8ca7ba6258ca0", node "ip-172-20-54-30.eu-west-2.compute.internal", in group "master-eu-west-2a.masters.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io" (this may take a while).
I1018 10:35:52.685144    5069 instancegroups.go:435] waiting for 15s after terminating instance
I1018 10:36:07.685472    5069 instancegroups.go:470] Validating the cluster.
I1018 10:36:37.737298    5069 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 35.177.182.249:443: i/o timeout.
... skipping 24 lines (the same i/o timeout against 35.177.182.249:443, retried every minute) ...
I1018 11:01:38.769339    5069 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 35.177.182.249:443: i/o timeout.
I1018 11:02:11.426825    5069 instancegroups.go:526] Cluster did not pass validation, will retry in "30s": node "ip-172-20-42-244.eu-west-2.compute.internal" of role "master" is not ready, node "ip-172-20-36-98.eu-west-2.compute.internal" of role "node" is not ready, node "ip-172-20-46-52.eu-west-2.compute.internal" of role "node" is not ready, node "ip-172-20-42-140.eu-west-2.compute.internal" of role "node" is not ready, node "ip-172-20-61-91.eu-west-2.compute.internal" of role "node" is not ready, system-node-critical pod "calico-node-gksxd" is pending, system-node-critical pod "calico-node-mtsvn" is not ready (calico-node), system-node-critical pod "ebs-csi-node-nm6m7" is pending, system-cluster-critical pod "etcd-manager-events-ip-172-20-42-244.eu-west-2.compute.internal" is pending, system-cluster-critical pod "etcd-manager-main-ip-172-20-42-244.eu-west-2.compute.internal" is pending, system-cluster-critical pod "kube-apiserver-ip-172-20-42-244.eu-west-2.compute.internal" is pending, system-node-critical pod "kube-proxy-ip-172-20-42-244.eu-west-2.compute.internal" is pending, system-cluster-critical pod "kube-scheduler-ip-172-20-42-244.eu-west-2.compute.internal" is pending.
... skipping 6 lines (same "Cluster did not pass validation" retry message, 11:02:43 through 11:05:22) ...
I1018 11:05:54.753688    5069 instancegroups.go:526] Cluster did not pass validation, will retry in "30s": node "ip-172-20-42-244.eu-west-2.compute.internal" of role "master" is not ready, node "ip-172-20-36-98.eu-west-2.compute.internal" of role "node" is not ready, node "ip-172-20-46-52.eu-west-2.compute.internal" of role "node" is not ready, node "ip-172-20-42-140.eu-west-2.compute.internal" of role "node" is not ready, node "ip-172-20-61-91.eu-west-2.compute.internal" of role "node" is not ready, system-node-critical pod "calico-node-gksxd" is pending, system-node-critical pod "calico-node-mtsvn" is not ready (calico-node), system-node-critical pod "ebs-csi-node-nm6m7" is pending, system-cluster-critical pod "etcd-manager-events-ip-172-20-42-244.eu-west-2.compute.internal" is pending, system-cluster-critical pod "etcd-manager-main-ip-172-20-42-244.eu-west-2.compute.internal" is pending, system-cluster-critical pod "kube-apiserver-ip-172-20-42-244.eu-west-2.compute.internal" is pending, system-node-critical pod "kube-proxy-ip-172-20-42-244.eu-west-2.compute.internal" is pending, system-cluster-critical pod "kube-scheduler-ip-172-20-42-244.eu-west-2.compute.internal" is pending.
I1018 11:06:26.883787    5069 instancegroups.go:523] Cluster did not pass validation within deadline: node "ip-172-20-42-244.eu-west-2.compute.internal" of role "master" is not ready, node "ip-172-20-36-98.eu-west-2.compute.internal" of role "node" is not ready, node "ip-172-20-46-52.eu-west-2.compute.internal" of role "node" is not ready, node "ip-172-20-42-140.eu-west-2.compute.internal" of role "node" is not ready, node "ip-172-20-61-91.eu-west-2.compute.internal" of role "node" is not ready, system-node-critical pod "calico-node-gksxd" is pending, system-node-critical pod "calico-node-mtsvn" is not ready (calico-node), system-node-critical pod "ebs-csi-node-nm6m7" is pending, system-cluster-critical pod "etcd-manager-events-ip-172-20-42-244.eu-west-2.compute.internal" is pending, system-cluster-critical pod "etcd-manager-main-ip-172-20-42-244.eu-west-2.compute.internal" is pending, system-cluster-critical pod "kube-apiserver-ip-172-20-42-244.eu-west-2.compute.internal" is pending, system-node-critical pod "kube-proxy-ip-172-20-42-244.eu-west-2.compute.internal" is pending, system-cluster-critical pod "kube-scheduler-ip-172-20-42-244.eu-west-2.compute.internal" is pending.
E1018 11:06:26.883977    5069 instancegroups.go:475] Cluster did not validate within 30m0s
Error: master not healthy after update, stopping rolling-update: "error validating cluster after terminating instance: cluster did not validate within a duration of \"30m0s\""
+ kops-finish
+ kubetest2 kops -v=2 --cloud-provider=aws --cluster-name=e2e-89d41a7532-e6156.test-cncf-aws.k8s.io --kops-root=/home/prow/go/src/k8s.io/kops --admin-access= --env=KOPS_FEATURE_FLAGS=SpecOverrideFlag --kops-binary-path=/tmp/kops.n1ykM2uvn --down
I1018 11:06:26.912759    5090 app.go:59] RunDir for this run: "/logs/artifacts/9a2666d8-2ffd-11ec-8a05-6acfde499713"
I1018 11:06:26.912937    5090 app.go:90] ID for this run: "9a2666d8-2ffd-11ec-8a05-6acfde499713"
I1018 11:06:26.913007    5090 dumplogs.go:40] /tmp/kops.n1ykM2uvn toolbox dump --name e2e-89d41a7532-e6156.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /etc/aws-ssh/aws-ssh-private --ssh-user ubuntu
I1018 11:06:26.931327    5097 featureflag.go:166] FeatureFlag "SpecOverrideFlag"=true
... skipping 1478 lines ...