Result: FAILURE
Tests: 0 failed / 0 succeeded
Started: 2021-10-15 18:24
Elapsed: 47m53s
Revision: master

No Test Failures!


Error lines from build-log.txt

... skipping 176 lines ...
I1015 18:25:39.135957    4979 dumplogs.go:40] /tmp/kops.DW36WHaTa toolbox dump --name e2e-89d41a7532-e6156.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /etc/aws-ssh/aws-ssh-private --ssh-user ubuntu
I1015 18:25:39.151010    4989 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
I1015 18:25:39.151070    4989 featureflag.go:165] FeatureFlag "AlphaAllowGCE"=true
I1015 18:25:39.151074    4989 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true

Cluster.kops.k8s.io "e2e-89d41a7532-e6156.test-cncf-aws.k8s.io" not found
W1015 18:25:39.669508    4979 down.go:34] Dumping cluster logs at the start of Down() failed: exit status 1
I1015 18:25:39.669542    4979 down.go:48] /tmp/kops.DW36WHaTa delete cluster --name e2e-89d41a7532-e6156.test-cncf-aws.k8s.io --yes
I1015 18:25:39.683690    4999 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
I1015 18:25:39.683753    4999 featureflag.go:165] FeatureFlag "AlphaAllowGCE"=true
I1015 18:25:39.683758    4999 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true

error reading cluster configuration: Cluster.kops.k8s.io "e2e-89d41a7532-e6156.test-cncf-aws.k8s.io" not found
I1015 18:25:40.186263    4979 http.go:37] curl http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip
2021/10/15 18:25:40 failed to get external ip from metadata service: http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip returned 404
I1015 18:25:40.193770    4979 http.go:37] curl https://ip.jsb.workers.dev
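
The two curl lines above are the harness discovering its own external IP, which it passes below as --admin-access: the GCE metadata endpoint answers 404 because this runner has no external access config, so it falls back to a public IP echo service. A minimal sketch of that fallback, assuming only curl (the -sf flags, the Metadata-Flavor header, and the ip variable name are illustrative, not taken from the harness source):

# Try the GCE metadata server first; fall back to the public echo service.
# Both URLs are copied from the log lines above.
ip=$(curl -sf -H "Metadata-Flavor: Google" \
  "http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip") \
  || ip=$(curl -sf "https://ip.jsb.workers.dev")
echo "external ip: ${ip}"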
I1015 18:25:40.302716    4979 up.go:144] /tmp/kops.DW36WHaTa create cluster --name e2e-89d41a7532-e6156.test-cncf-aws.k8s.io --cloud aws --kubernetes-version v1.21.0 --ssh-public-key /etc/aws-ssh/aws-ssh-public --override cluster.spec.nodePortAccess=0.0.0.0/0 --yes --networking calico --admin-access 35.222.94.235/32 --master-count 1 --master-volume-size 48 --node-count 4 --node-volume-size 48 --zones eu-west-3a --master-size c5.large
I1015 18:25:40.318135    5009 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
I1015 18:25:40.318281    5009 featureflag.go:165] FeatureFlag "AlphaAllowGCE"=true
I1015 18:25:40.318289    5009 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
I1015 18:25:40.359952    5009 create_cluster.go:728] Using SSH public key: /etc/aws-ssh/aws-ssh-public
... skipping 44 lines ...
I1015 18:26:07.188409    4979 up.go:181] /tmp/kops.DW36WHaTa validate cluster --name e2e-89d41a7532-e6156.test-cncf-aws.k8s.io --count 10 --wait 15m0s
I1015 18:26:07.204012    5028 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
I1015 18:26:07.204116    5028 featureflag.go:165] FeatureFlag "AlphaAllowGCE"=true
I1015 18:26:07.204120    5028 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
Validating cluster e2e-89d41a7532-e6156.test-cncf-aws.k8s.io

W1015 18:26:08.419018    5028 validate_cluster.go:173] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
W1015 18:26:18.454105    5028 validate_cluster.go:173] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-3a	Master	c5.large	1	1	eu-west-3a
nodes-eu-west-3a	Node	t3.medium	4	4	eu-west-3a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1015 18:26:28.508076    5028 validate_cluster.go:221] (will retry): cluster not yet healthy
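
The validation message above describes kops' DNS bootstrap: cluster creation seeds api.<cluster> with the documentation placeholder 203.0.113.123, and validation cannot pass until the dns-controller deployment on a master replaces it with the real API address. One way to watch that transition from outside, assuming dig is available (this polling loop is an illustrative sketch, not part of the test harness):

# Poll the API record until it resolves to something other than the kops
# placeholder; the name and placeholder IP are copied from the log above.
while true; do
  addr=$(dig +short api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io | head -n1)
  [ -n "${addr}" ] && [ "${addr}" != "203.0.113.123" ] && break
  echo "waiting: api record is '${addr:-unresolved}'"
  sleep 10
done
echo "api record now points at ${addr}"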
... skipping 224 lines: the same INSTANCE GROUPS / VALIDATION ERRORS block repeated on each 10s retry from 18:26:38 through 18:28:49 ...
W1015 18:28:59.071214    5028 validate_cluster.go:173] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
... skipping 144 lines: the same block repeated on each 10s retry from 18:29:09 through 18:30:29 ...
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-3a	Master	c5.large	1	1	eu-west-3a
nodes-eu-west-3a	Node	t3.medium	4	4	eu-west-3a

... skipping 9 lines ...
Machine	i-0ee4b39df9fbba558				machine "i-0ee4b39df9fbba558" has not yet joined cluster
Node	ip-172-20-47-147.eu-west-3.compute.internal	master "ip-172-20-47-147.eu-west-3.compute.internal" is missing kube-scheduler pod
Pod	kube-system/calico-node-472sq			system-node-critical pod "calico-node-472sq" is pending
Pod	kube-system/coredns-5dc785954d-l7bgq		system-cluster-critical pod "coredns-5dc785954d-l7bgq" is pending
Pod	kube-system/coredns-autoscaler-84d4cfd89c-fn69k	system-cluster-critical pod "coredns-autoscaler-84d4cfd89c-fn69k" is pending

Validation Failed
W1015 18:30:42.368195    5028 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-3a	Master	c5.large	1	1	eu-west-3a
nodes-eu-west-3a	Node	t3.medium	4	4	eu-west-3a

... skipping 16 lines ...
Pod	kube-system/calico-node-skjf6						system-node-critical pod "calico-node-skjf6" is pending
Pod	kube-system/calico-node-tzvl5						system-node-critical pod "calico-node-tzvl5" is pending
Pod	kube-system/coredns-5dc785954d-l7bgq					system-cluster-critical pod "coredns-5dc785954d-l7bgq" is pending
Pod	kube-system/coredns-autoscaler-84d4cfd89c-fn69k				system-cluster-critical pod "coredns-autoscaler-84d4cfd89c-fn69k" is pending
Pod	kube-system/kube-proxy-ip-172-20-41-18.eu-west-3.compute.internal	system-node-critical pod "kube-proxy-ip-172-20-41-18.eu-west-3.compute.internal" is pending

Validation Failed
W1015 18:30:54.287765    5028 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-3a	Master	c5.large	1	1	eu-west-3a
nodes-eu-west-3a	Node	t3.medium	4	4	eu-west-3a

... skipping 14 lines ...
Pod	kube-system/calico-node-skjf6						system-node-critical pod "calico-node-skjf6" is pending
Pod	kube-system/calico-node-tzvl5						system-node-critical pod "calico-node-tzvl5" is not ready (calico-node)
Pod	kube-system/coredns-5dc785954d-l7bgq					system-cluster-critical pod "coredns-5dc785954d-l7bgq" is pending
Pod	kube-system/coredns-autoscaler-84d4cfd89c-fn69k				system-cluster-critical pod "coredns-autoscaler-84d4cfd89c-fn69k" is pending
Pod	kube-system/kube-proxy-ip-172-20-47-147.eu-west-3.compute.internal	system-node-critical pod "kube-proxy-ip-172-20-47-147.eu-west-3.compute.internal" is pending

Validation Failed
W1015 18:31:06.337239    5028 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-3a	Master	c5.large	1	1	eu-west-3a
nodes-eu-west-3a	Node	t3.medium	4	4	eu-west-3a

... skipping 11 lines ...
Pod	kube-system/calico-node-r4sgj		system-node-critical pod "calico-node-r4sgj" is not ready (calico-node)
Pod	kube-system/calico-node-skjf6		system-node-critical pod "calico-node-skjf6" is not ready (calico-node)
Pod	kube-system/calico-node-tzvl5		system-node-critical pod "calico-node-tzvl5" is not ready (calico-node)
Pod	kube-system/coredns-5dc785954d-l7bgq	system-cluster-critical pod "coredns-5dc785954d-l7bgq" is pending
Pod	kube-system/coredns-5dc785954d-t9xgn	system-cluster-critical pod "coredns-5dc785954d-t9xgn" is pending

Validation Failed
W1015 18:31:18.416699    5028 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-3a	Master	c5.large	1	1	eu-west-3a
nodes-eu-west-3a	Node	t3.medium	4	4	eu-west-3a

... skipping 799 lines ...
  	                    	        for cmd in "${commands[@]}"; do
  	                    	...
  	                    	            continue
  	                    	          fi
  	                    	+         if ! validate-hash "${file}" "${hash}"; then
  	                    	-         if [[ -n "${hash}" ]] && ! validate-hash "${file}" "${hash}"; then
  	                    	            echo "== Hash validation of ${url} failed. Retrying. =="
  	                    	            rm -f "${file}"
  	                    	          else
  	                    	-           if [[ -n "${hash}" ]]; then
  	                    	+           echo "== Downloaded ${url} (SHA256 = ${hash}) =="
  	                    	-             echo "== Downloaded ${url} (SHA1 = ${hash}) =="
  	                    	-           else
... skipping 301 lines ...
  	                    	        for cmd in "${commands[@]}"; do
  	                    	...
  	                    	            continue
  	                    	          fi
  	                    	+         if ! validate-hash "${file}" "${hash}"; then
  	                    	-         if [[ -n "${hash}" ]] && ! validate-hash "${file}" "${hash}"; then
  	                    	            echo "== Hash validation of ${url} failed. Retrying. =="
  	                    	            rm -f "${file}"
  	                    	          else
  	                    	-           if [[ -n "${hash}" ]]; then
  	                    	+           echo "== Downloaded ${url} (SHA256 = ${hash}) =="
  	                    	-             echo "== Downloaded ${url} (SHA1 = ${hash}) =="
  	                    	-           else
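
The +/- hunks above (the same diff appears twice in this excerpt) record a change to the nodeup download script: validate-hash now runs unconditionally rather than only when a hash is supplied, and the success message reports SHA256 instead of SHA1. Reassembled from only the lines visible here, the patched retry loop looks roughly like the sketch below; the condition on the download command is elided in the log and is an assumption:

for cmd in "${commands[@]}"; do
  if ! ${cmd}; then   # (assumed) run the download command; on failure try the next
    continue
  fi
  if ! validate-hash "${file}" "${hash}"; then
    echo "== Hash validation of ${url} failed. Retrying. =="
    rm -f "${file}"
  else
    echo "== Downloaded ${url} (SHA256 = ${hash}) =="
    break   # (assumed) stop retrying once the download checks out
  fi
done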
... skipping 8835 lines ...
evicting pod kube-system/dns-controller-7cf9d66d6d-htnk7
I1015 18:35:47.839193    5151 instancegroups.go:658] Waiting for 5s for pods to stabilize after draining.
I1015 18:35:52.841044    5151 instancegroups.go:417] deleting node "ip-172-20-47-147.eu-west-3.compute.internal" from kubernetes
I1015 18:35:52.946577    5151 instancegroups.go:591] Stopping instance "i-0a47193ed706a7730", node "ip-172-20-47-147.eu-west-3.compute.internal", in group "master-eu-west-3a.masters.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io" (this may take a while).
I1015 18:35:53.155632    5151 instancegroups.go:435] waiting for 15s after terminating instance
I1015 18:36:08.158355    5151 instancegroups.go:470] Validating the cluster.
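
The lines above are one iteration of a kops rolling update against the lone master: drain the node, wait for pods to settle, delete the Node object, stop the EC2 instance, then re-validate. The command driving it sits inside the skipped lines, but its general shape would be (flags illustrative):

kops rolling-update cluster --name e2e-89d41a7532-e6156.test-cncf-aws.k8s.io --yes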
I1015 18:36:38.189762    5151 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 13.37.239.187:443: i/o timeout.
... skipping 24 identical retries: error listing nodes (dial tcp 13.37.239.187:443: i/o timeout), once per minute from 18:37:38 through 19:00:39 ...
I1015 19:01:39.153808    5151 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 13.37.239.187:443: i/o timeout.
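
Every retry above fails the same way: the TCP connection to the API address times out, which is expected while the only master is being replaced and nothing is listening on 13.37.239.187:443. A quick reachability probe for that endpoint, assuming curl (illustrative; an unauthenticated request would return 401/403 once the apiserver is up, and 000 on timeout):

# URL copied from the retry lines above; -k skips cert verification since
# only TCP/TLS reachability matters here.
curl -k --max-time 10 -o /dev/null -w '%{http_code}\n' \
  "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes"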
I1015 19:02:12.013883    5151 instancegroups.go:526] Cluster did not pass validation, will retry in "30s": node "ip-172-20-62-218.eu-west-3.compute.internal" of role "master" is not ready, node "ip-172-20-36-204.eu-west-3.compute.internal" of role "node" is not ready, node "ip-172-20-60-226.eu-west-3.compute.internal" of role "node" is not ready, node "ip-172-20-36-66.eu-west-3.compute.internal" of role "node" is not ready, node "ip-172-20-41-18.eu-west-3.compute.internal" of role "node" is not ready, system-node-critical pod "calico-node-dr9mh" is not ready (calico-node), system-node-critical pod "calico-node-ljkp5" is pending, system-node-critical pod "ebs-csi-node-lbnjj" is pending, system-cluster-critical pod "etcd-manager-events-ip-172-20-62-218.eu-west-3.compute.internal" is pending, system-cluster-critical pod "etcd-manager-main-ip-172-20-62-218.eu-west-3.compute.internal" is pending, system-cluster-critical pod "kube-apiserver-ip-172-20-62-218.eu-west-3.compute.internal" is pending, system-cluster-critical pod "kube-controller-manager-ip-172-20-62-218.eu-west-3.compute.internal" is pending, system-node-critical pod "kube-proxy-ip-172-20-62-218.eu-west-3.compute.internal" is pending, system-cluster-critical pod "kube-scheduler-ip-172-20-62-218.eu-west-3.compute.internal" is pending.
... skipping 7 near-identical retries (~every 30s from 19:02:44 through 19:05:55): the same not-ready nodes and pending control-plane pods as above ...
I1015 19:06:28.254215    5151 instancegroups.go:523] Cluster did not pass validation within deadline: node "ip-172-20-62-218.eu-west-3.compute.internal" of role "master" is not ready, node "ip-172-20-36-204.eu-west-3.compute.internal" of role "node" is not ready, node "ip-172-20-60-226.eu-west-3.compute.internal" of role "node" is not ready, node "ip-172-20-36-66.eu-west-3.compute.internal" of role "node" is not ready, node "ip-172-20-41-18.eu-west-3.compute.internal" of role "node" is not ready, system-node-critical pod "calico-node-dr9mh" is not ready (calico-node), system-node-critical pod "calico-node-ljkp5" is pending, system-node-critical pod "ebs-csi-node-lbnjj" is pending, system-cluster-critical pod "etcd-manager-events-ip-172-20-62-218.eu-west-3.compute.internal" is pending, system-cluster-critical pod "etcd-manager-main-ip-172-20-62-218.eu-west-3.compute.internal" is pending, system-cluster-critical pod "kube-apiserver-ip-172-20-62-218.eu-west-3.compute.internal" is pending, system-cluster-critical pod "kube-controller-manager-ip-172-20-62-218.eu-west-3.compute.internal" is pending, system-node-critical pod "kube-proxy-ip-172-20-62-218.eu-west-3.compute.internal" is pending, system-cluster-critical pod "kube-scheduler-ip-172-20-62-218.eu-west-3.compute.internal" is pending.
E1015 19:06:28.254266    5151 instancegroups.go:475] Cluster did not validate within 30m0s
Error: master not healthy after update, stopping rolling-update: "error validating cluster after terminating instance: cluster did not validate within a duration of \"30m0s\""
+ kops-finish
+ kubetest2 kops -v=2 --cloud-provider=aws --cluster-name=e2e-89d41a7532-e6156.test-cncf-aws.k8s.io --kops-root=/home/prow/go/src/k8s.io/kops --admin-access= --env=KOPS_FEATURE_FLAGS=SpecOverrideFlag --kops-binary-path=/tmp/kops.2GCIBgiG5 --down
I1015 19:06:28.283062    5168 app.go:59] RunDir for this run: "/logs/artifacts/14ea7309-2de5-11ec-a66e-da1c1b34387a"
I1015 19:06:28.283307    5168 app.go:90] ID for this run: "14ea7309-2de5-11ec-a66e-da1c1b34387a"
I1015 19:06:28.283345    5168 dumplogs.go:40] /tmp/kops.2GCIBgiG5 toolbox dump --name e2e-89d41a7532-e6156.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /etc/aws-ssh/aws-ssh-private --ssh-user ubuntu
I1015 19:06:28.299202    5176 featureflag.go:166] FeatureFlag "SpecOverrideFlag"=true
... skipping 1051 lines ...
I1015 19:07:03.748593    5168 dumplogs.go:72] /tmp/kops.2GCIBgiG5 get cluster --name e2e-89d41a7532-e6156.test-cncf-aws.k8s.io -o yaml
I1015 19:07:04.257605    5168 dumplogs.go:72] /tmp/kops.2GCIBgiG5 get instancegroups --name e2e-89d41a7532-e6156.test-cncf-aws.k8s.io -o yaml
I1015 19:07:05.125873    5168 dumplogs.go:91] kubectl cluster-info dump --all-namespaces -o yaml --output-directory /logs/artifacts/cluster-info
I1015 19:08:05.936011    5168 dumplogs.go:114] /tmp/kops.2GCIBgiG5 toolbox dump --name e2e-89d41a7532-e6156.test-cncf-aws.k8s.io --private-key /etc/aws-ssh/aws-ssh-private --ssh-user ubuntu -o yaml
I1015 19:08:14.321176    5168 dumplogs.go:143] ssh -i /etc/aws-ssh/aws-ssh-private -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null ubuntu@35.180.54.250 -- kubectl cluster-info dump --all-namespaces -o yaml --output-directory /tmp/cluster-info
Warning: Permanently added '35.180.54.250' (ECDSA) to the list of known hosts.
Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get events)
W1015 19:09:15.997371    5168 down.go:34] Dumping cluster logs at the start of Down() failed: exit status 1
I1015 19:09:15.997429    5168 down.go:48] /tmp/kops.2GCIBgiG5 delete cluster --name e2e-89d41a7532-e6156.test-cncf-aws.k8s.io --yes
I1015 19:09:16.012770    5229 featureflag.go:166] FeatureFlag "SpecOverrideFlag"=true
I1015 19:09:16.012849    5229 featureflag.go:166] FeatureFlag "AlphaAllowGCE"=true
I1015 19:09:16.012856    5229 featureflag.go:166] FeatureFlag "SpecOverrideFlag"=true
TYPE			NAME													ID
autoscaling-config	master-eu-west-3a.masters.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io					lt-0a5e45ee19644b7df
... skipping 438 lines ...