Result: FAILURE
Tests: 0 failed / 0 succeeded
Started: 2021-10-15 08:25
Elapsed: 46m21s
Revision: master

No Test Failures!


Error lines from build-log.txt

... skipping 176 lines ...
I1015 08:26:16.079657    4927 dumplogs.go:40] /tmp/kops.wyUjqoQYi toolbox dump --name e2e-89d41a7532-e6156.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /etc/aws-ssh/aws-ssh-private --ssh-user ubuntu
I1015 08:26:16.096126    4936 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
I1015 08:26:16.097110    4936 featureflag.go:165] FeatureFlag "AlphaAllowGCE"=true
I1015 08:26:16.097139    4936 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true

Cluster.kops.k8s.io "e2e-89d41a7532-e6156.test-cncf-aws.k8s.io" not found
W1015 08:26:16.587403    4927 down.go:34] Dumping cluster logs at the start of Down() failed: exit status 1
I1015 08:26:16.587467    4927 down.go:48] /tmp/kops.wyUjqoQYi delete cluster --name e2e-89d41a7532-e6156.test-cncf-aws.k8s.io --yes
I1015 08:26:16.604043    4947 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
I1015 08:26:16.604845    4947 featureflag.go:165] FeatureFlag "AlphaAllowGCE"=true
I1015 08:26:16.604850    4947 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true

error reading cluster configuration: Cluster.kops.k8s.io "e2e-89d41a7532-e6156.test-cncf-aws.k8s.io" not found
I1015 08:26:17.102510    4927 http.go:37] curl http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip
2021/10/15 08:26:17 failed to get external ip from metadata service: http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip returned 404
I1015 08:26:17.110003    4927 http.go:37] curl https://ip.jsb.workers.dev
I1015 08:26:17.184049    4927 up.go:144] /tmp/kops.wyUjqoQYi create cluster --name e2e-89d41a7532-e6156.test-cncf-aws.k8s.io --cloud aws --kubernetes-version v1.21.0 --ssh-public-key /etc/aws-ssh/aws-ssh-public --override cluster.spec.nodePortAccess=0.0.0.0/0 --yes --networking calico --admin-access 34.68.229.223/32 --master-count 1 --master-volume-size 48 --node-count 4 --node-volume-size 48 --zones us-west-2a --master-size c5.large
I1015 08:26:17.205260    4958 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
I1015 08:26:17.205376    4958 featureflag.go:165] FeatureFlag "AlphaAllowGCE"=true
I1015 08:26:17.205397    4958 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
I1015 08:26:17.254076    4958 create_cluster.go:728] Using SSH public key: /etc/aws-ssh/aws-ssh-public
... skipping 44 lines ...
I1015 08:26:42.984207    4927 up.go:181] /tmp/kops.wyUjqoQYi validate cluster --name e2e-89d41a7532-e6156.test-cncf-aws.k8s.io --count 10 --wait 15m0s
I1015 08:26:43.000800    4977 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
I1015 08:26:43.000907    4977 featureflag.go:165] FeatureFlag "AlphaAllowGCE"=true
I1015 08:26:43.000912    4977 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
Validating cluster e2e-89d41a7532-e6156.test-cncf-aws.k8s.io

W1015 08:26:44.193413    4977 validate_cluster.go:173] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.medium	4	4	us-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.
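(Not from the log — an illustrative sketch of how this state is typically checked by hand; the dns-controller deployment name and the kube-system namespace are the kops defaults, assumed here.)

  # Is dns-controller running, and what has it logged?
  kubectl -n kube-system get deployment dns-controller
  kubectl -n kube-system logs deployment/dns-controller --tail=50

  # Does the API record still resolve to the kops placeholder address (203.0.113.123)?
  dig +short api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io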

Validation Failed
W1015 08:26:54.231341    4977 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.medium	4	4	us-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1015 08:27:04.261933    4977 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.medium	4	4	us-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1015 08:27:14.311581    4977 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.medium	4	4	us-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1015 08:27:24.340922    4977 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.medium	4	4	us-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1015 08:27:34.380162    4977 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.medium	4	4	us-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1015 08:27:44.412952    4977 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.medium	4	4	us-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1015 08:27:54.451431    4977 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.medium	4	4	us-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1015 08:28:04.495238    4977 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.medium	4	4	us-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1015 08:28:14.526581    4977 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.medium	4	4	us-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1015 08:28:24.559128    4977 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.medium	4	4	us-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1015 08:28:34.594667    4977 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.medium	4	4	us-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1015 08:28:44.626099    4977 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.medium	4	4	us-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1015 08:28:54.658564    4977 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.medium	4	4	us-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1015 08:29:04.688810    4977 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.medium	4	4	us-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1015 08:29:14.755249    4977 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.medium	4	4	us-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1015 08:29:24.788504    4977 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.medium	4	4	us-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1015 08:29:34.819802    4977 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.medium	4	4	us-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1015 08:29:44.856288    4977 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.medium	4	4	us-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1015 08:29:54.891191    4977 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.medium	4	4	us-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1015 08:30:04.923510    4977 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.medium	4	4	us-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1015 08:30:14.968388    4977 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.medium	4	4	us-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1015 08:30:25.008473    4977 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.medium	4	4	us-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1015 08:30:35.037349    4977 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.medium	4	4	us-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1015 08:30:45.070579    4977 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.medium	4	4	us-west-2a

... skipping 7 lines ...
Machine	i-07ae8e574e8453246				machine "i-07ae8e574e8453246" has not yet joined cluster
Machine	i-0888fe33e025ccdf8				machine "i-0888fe33e025ccdf8" has not yet joined cluster
Machine	i-0d729cd5bdd178e61				machine "i-0d729cd5bdd178e61" has not yet joined cluster
Pod	kube-system/coredns-5dc785954d-c6l2d		system-cluster-critical pod "coredns-5dc785954d-c6l2d" is pending
Pod	kube-system/coredns-autoscaler-84d4cfd89c-mgwwv	system-cluster-critical pod "coredns-autoscaler-84d4cfd89c-mgwwv" is pending

Validation Failed
W1015 08:30:57.071351    4977 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.medium	4	4	us-west-2a

... skipping 10 lines ...
Node	ip-172-20-34-173.us-west-2.compute.internal				node "ip-172-20-34-173.us-west-2.compute.internal" of role "node" is not ready
Pod	kube-system/calico-node-4ctsl						system-node-critical pod "calico-node-4ctsl" is pending
Pod	kube-system/coredns-5dc785954d-c6l2d					system-cluster-critical pod "coredns-5dc785954d-c6l2d" is pending
Pod	kube-system/coredns-autoscaler-84d4cfd89c-mgwwv				system-cluster-critical pod "coredns-autoscaler-84d4cfd89c-mgwwv" is pending
Pod	kube-system/kube-proxy-ip-172-20-34-173.us-west-2.compute.internal	system-node-critical pod "kube-proxy-ip-172-20-34-173.us-west-2.compute.internal" is pending

Validation Failed
W1015 08:31:08.486326    4977 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.medium	4	4	us-west-2a

... skipping 13 lines ...
Pod	kube-system/calico-node-v4cvq						system-node-critical pod "calico-node-v4cvq" is pending
Pod	kube-system/calico-node-xrpk5						system-node-critical pod "calico-node-xrpk5" is pending
Pod	kube-system/coredns-5dc785954d-c6l2d					system-cluster-critical pod "coredns-5dc785954d-c6l2d" is pending
Pod	kube-system/coredns-autoscaler-84d4cfd89c-mgwwv				system-cluster-critical pod "coredns-autoscaler-84d4cfd89c-mgwwv" is pending
Pod	kube-system/kube-proxy-ip-172-20-55-5.us-west-2.compute.internal	system-node-critical pod "kube-proxy-ip-172-20-55-5.us-west-2.compute.internal" is pending

Validation Failed
W1015 08:31:19.901038    4977 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.medium	4	4	us-west-2a

... skipping 14 lines ...
Pod	kube-system/calico-node-mf55r			system-node-critical pod "calico-node-mf55r" is pending
Pod	kube-system/calico-node-v4cvq			system-node-critical pod "calico-node-v4cvq" is pending
Pod	kube-system/calico-node-xrpk5			system-node-critical pod "calico-node-xrpk5" is pending
Pod	kube-system/coredns-5dc785954d-c6l2d		system-cluster-critical pod "coredns-5dc785954d-c6l2d" is pending
Pod	kube-system/coredns-5dc785954d-hrmwh		system-cluster-critical pod "coredns-5dc785954d-hrmwh" is pending

Validation Failed
W1015 08:31:31.326883    4977 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.medium	4	4	us-west-2a

... skipping 9 lines ...
KIND	NAME					MESSAGE
Pod	kube-system/calico-node-mf55r		system-node-critical pod "calico-node-mf55r" is not ready (calico-node)
Pod	kube-system/calico-node-v4cvq		system-node-critical pod "calico-node-v4cvq" is not ready (calico-node)
Pod	kube-system/calico-node-xrpk5		system-node-critical pod "calico-node-xrpk5" is not ready (calico-node)
Pod	kube-system/coredns-5dc785954d-hrmwh	system-cluster-critical pod "coredns-5dc785954d-hrmwh" is pending

Validation Failed
W1015 08:31:42.806785    4977 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.medium	4	4	us-west-2a

... skipping 7 lines ...

VALIDATION ERRORS
KIND	NAME				MESSAGE
Pod	kube-system/calico-node-mf55r	system-node-critical pod "calico-node-mf55r" is not ready (calico-node)
Pod	kube-system/calico-node-xrpk5	system-node-critical pod "calico-node-xrpk5" is not ready (calico-node)

Validation Failed
W1015 08:31:54.158062    4977 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.medium	4	4	us-west-2a

... skipping 806 lines ...
  	                    	        for cmd in "${commands[@]}"; do
  	                    	...
  	                    	            continue
  	                    	          fi
  	                    	+         if ! validate-hash "${file}" "${hash}"; then
  	                    	-         if [[ -n "${hash}" ]] && ! validate-hash "${file}" "${hash}"; then
  	                    	            echo "== Hash validation of ${url} failed. Retrying. =="
  	                    	            rm -f "${file}"
  	                    	          else
  	                    	-           if [[ -n "${hash}" ]]; then
  	                    	+           echo "== Downloaded ${url} (SHA256 = ${hash}) =="
  	                    	-             echo "== Downloaded ${url} (SHA1 = ${hash}) =="
  	                    	-           else
... skipping 294 lines ...
  	                    	        for cmd in "${commands[@]}"; do
  	                    	...
  	                    	            continue
  	                    	          fi
  	                    	+         if ! validate-hash "${file}" "${hash}"; then
  	                    	-         if [[ -n "${hash}" ]] && ! validate-hash "${file}" "${hash}"; then
  	                    	            echo "== Hash validation of ${url} failed. Retrying. =="
  	                    	            rm -f "${file}"
  	                    	          else
  	                    	-           if [[ -n "${hash}" ]]; then
  	                    	+           echo "== Downloaded ${url} (SHA256 = ${hash}) =="
  	                    	-             echo "== Downloaded ${url} (SHA1 = ${hash}) =="
  	                    	-           else
... skipping 6573 lines ...
evicting pod kube-system/dns-controller-7cf9d66d6d-d8wb6
I1015 08:36:00.647773    5101 instancegroups.go:658] Waiting for 5s for pods to stabilize after draining.
I1015 08:36:05.650102    5101 instancegroups.go:417] deleting node "ip-172-20-42-99.us-west-2.compute.internal" from kubernetes
I1015 08:36:05.715628    5101 instancegroups.go:591] Stopping instance "i-0311513af04526156", node "ip-172-20-42-99.us-west-2.compute.internal", in group "master-us-west-2a.masters.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io" (this may take a while).
I1015 08:36:05.900576    5101 instancegroups.go:435] waiting for 15s after terminating instance
I1015 08:36:20.902258    5101 instancegroups.go:470] Validating the cluster.
I1015 08:36:50.933915    5101 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 35.86.86.210:443: i/o timeout.
I1015 08:37:50.984957    5101 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 35.86.86.210:443: i/o timeout.
I1015 08:38:51.020558    5101 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 35.86.86.210:443: i/o timeout.
I1015 08:39:51.055903    5101 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 35.86.86.210:443: i/o timeout.
I1015 08:40:51.087766    5101 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 35.86.86.210:443: i/o timeout.
I1015 08:41:51.128996    5101 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 35.86.86.210:443: i/o timeout.
I1015 08:42:51.180055    5101 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 35.86.86.210:443: i/o timeout.
I1015 08:43:51.226377    5101 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 35.86.86.210:443: i/o timeout.
I1015 08:44:51.261101    5101 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 35.86.86.210:443: i/o timeout.
I1015 08:45:51.296585    5101 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 35.86.86.210:443: i/o timeout.
I1015 08:46:51.329555    5101 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 35.86.86.210:443: i/o timeout.
I1015 08:47:51.371971    5101 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 35.86.86.210:443: i/o timeout.
I1015 08:48:51.408457    5101 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 35.86.86.210:443: i/o timeout.
I1015 08:49:51.446105    5101 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 35.86.86.210:443: i/o timeout.
I1015 08:50:51.479876    5101 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 35.86.86.210:443: i/o timeout.
I1015 08:51:51.524455    5101 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 35.86.86.210:443: i/o timeout.
I1015 08:52:51.563902    5101 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 35.86.86.210:443: i/o timeout.
I1015 08:53:51.603811    5101 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 35.86.86.210:443: i/o timeout.
I1015 08:54:51.651882    5101 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 35.86.86.210:443: i/o timeout.
I1015 08:55:51.685095    5101 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 35.86.86.210:443: i/o timeout.
I1015 08:56:51.734698    5101 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 35.86.86.210:443: i/o timeout.
I1015 08:57:51.777930    5101 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 35.86.86.210:443: i/o timeout.
I1015 08:58:51.815217    5101 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 35.86.86.210:443: i/o timeout.
I1015 08:59:51.858973    5101 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 35.86.86.210:443: i/o timeout.
I1015 09:00:51.923299    5101 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 35.86.86.210:443: i/o timeout.
I1015 09:01:51.972296    5101 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 35.86.86.210:443: i/o timeout.
I1015 09:02:52.022513    5101 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 35.86.86.210:443: i/o timeout.
I1015 09:03:52.070575    5101 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 35.86.86.210:443: i/o timeout.
I1015 09:04:52.109697    5101 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 35.86.86.210:443: i/o timeout.
I1015 09:05:52.161397    5101 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 35.86.86.210:443: i/o timeout.
I1015 09:06:52.201209    5101 instancegroups.go:513] Cluster did not validate within deadline: error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 35.86.86.210:443: i/o timeout.
E1015 09:06:52.201256    5101 instancegroups.go:475] Cluster did not validate within 30m0s
Error: master not healthy after update, stopping rolling-update: "error validating cluster after terminating instance: cluster did not validate within a duration of \"30m0s\""
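(Not from the log — an illustrative sketch of how the failed validation could be retried by hand. The flags mirror the validate invocation earlier in this log, the binary path is the one used for the teardown below, and the --validation-timeout flag on rolling-update is assumed to be available in this kops version.)

  # Re-run the same validation the rolling update performs
  /tmp/kops.GBFXZcT7K validate cluster --name e2e-89d41a7532-e6156.test-cncf-aws.k8s.io --count 10 --wait 15m0s

  # Or retry the rolling update with a longer validation window
  /tmp/kops.GBFXZcT7K rolling-update cluster --name e2e-89d41a7532-e6156.test-cncf-aws.k8s.io --yes --validation-timeout 45m0s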
+ kops-finish
+ kubetest2 kops -v=2 --cloud-provider=aws --cluster-name=e2e-89d41a7532-e6156.test-cncf-aws.k8s.io --kops-root=/home/prow/go/src/k8s.io/kops --admin-access= --env=KOPS_FEATURE_FLAGS=SpecOverrideFlag --kops-binary-path=/tmp/kops.GBFXZcT7K --down
I1015 09:06:52.226840    5120 app.go:59] RunDir for this run: "/logs/artifacts/5d7af738-2d91-11ec-a9c0-767ea6e1a48e"
I1015 09:06:52.227035    5120 app.go:90] ID for this run: "5d7af738-2d91-11ec-a9c0-767ea6e1a48e"
I1015 09:06:52.227071    5120 dumplogs.go:40] /tmp/kops.GBFXZcT7K toolbox dump --name e2e-89d41a7532-e6156.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /etc/aws-ssh/aws-ssh-private --ssh-user ubuntu
I1015 09:06:52.245061    5127 featureflag.go:166] FeatureFlag "SpecOverrideFlag"=true
I1015 09:06:52.245170    5127 featureflag.go:166] FeatureFlag "AlphaAllowGCE"=true
I1015 09:06:52.245175    5127 featureflag.go:166] FeatureFlag "SpecOverrideFlag"=true
W1015 09:07:29.682897    5127 toolbox_dump.go:168] error listing nodes in cluster: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 35.86.86.210:443: i/o timeout
2021/10/15 09:07:29 dumping node not registered in kubernetes: 34.220.56.228
2021/10/15 09:07:29 Dumping node 34.220.56.228
2021/10/15 09:07:33 dumping node not registered in kubernetes: 34.215.178.186
2021/10/15 09:07:33 Dumping node 34.215.178.186
2021/10/15 09:07:37 dumping node not registered in kubernetes: 52.12.105.15
2021/10/15 09:07:37 Dumping node 52.12.105.15
... skipping 1049 lines ...
I1015 09:07:48.841346    5120 dumplogs.go:72] /tmp/kops.GBFXZcT7K get instancegroups --name e2e-89d41a7532-e6156.test-cncf-aws.k8s.io -o yaml
I1015 09:07:49.643550    5120 dumplogs.go:91] kubectl cluster-info dump --all-namespaces -o yaml --output-directory /logs/artifacts/cluster-info
I1015 09:08:19.692074    5120 dumplogs.go:114] /tmp/kops.GBFXZcT7K toolbox dump --name e2e-89d41a7532-e6156.test-cncf-aws.k8s.io --private-key /etc/aws-ssh/aws-ssh-private --ssh-user ubuntu -o yaml
I1015 09:08:27.029205    5120 dumplogs.go:143] ssh -i /etc/aws-ssh/aws-ssh-private -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null ubuntu@34.220.56.228 -- kubectl cluster-info dump --all-namespaces -o yaml --output-directory /tmp/cluster-info
Warning: Permanently added '34.220.56.228' (ECDSA) to the list of known hosts.
The connection to the server 127.0.0.1 was refused - did you specify the right host or port?
W1015 09:08:28.192664    5120 down.go:34] Dumping cluster logs at the start of Down() failed: exit status 1
I1015 09:08:28.192721    5120 down.go:48] /tmp/kops.GBFXZcT7K delete cluster --name e2e-89d41a7532-e6156.test-cncf-aws.k8s.io --yes
I1015 09:08:28.213066    5178 featureflag.go:166] FeatureFlag "SpecOverrideFlag"=true
I1015 09:08:28.213174    5178 featureflag.go:166] FeatureFlag "AlphaAllowGCE"=true
I1015 09:08:28.213179    5178 featureflag.go:166] FeatureFlag "SpecOverrideFlag"=true
TYPE			NAME													ID
autoscaling-config	master-us-west-2a.masters.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io					lt-0452eae8d29cbdb11
... skipping 452 lines ...