Result: FAILURE
Tests: 0 failed / 0 succeeded
Started: 2021-10-17 06:25
Elapsed: 45m46s
Revision: master

No Test Failures!


Error lines from build-log.txt

... skipping 176 lines ...
I1017 06:25:53.096515    4928 dumplogs.go:40] /tmp/kops.fIZZAHFrf toolbox dump --name e2e-89d41a7532-e6156.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /etc/aws-ssh/aws-ssh-private --ssh-user ubuntu
I1017 06:25:53.114315    4937 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
I1017 06:25:53.114407    4937 featureflag.go:165] FeatureFlag "AlphaAllowGCE"=true
I1017 06:25:53.114412    4937 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true

Cluster.kops.k8s.io "e2e-89d41a7532-e6156.test-cncf-aws.k8s.io" not found
W1017 06:25:53.596346    4928 down.go:34] Dumping cluster logs at the start of Down() failed: exit status 1
I1017 06:25:53.596476    4928 down.go:48] /tmp/kops.fIZZAHFrf delete cluster --name e2e-89d41a7532-e6156.test-cncf-aws.k8s.io --yes
I1017 06:25:53.609218    4948 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
I1017 06:25:53.609356    4948 featureflag.go:165] FeatureFlag "AlphaAllowGCE"=true
I1017 06:25:53.609396    4948 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true

error reading cluster configuration: Cluster.kops.k8s.io "e2e-89d41a7532-e6156.test-cncf-aws.k8s.io" not found
I1017 06:25:54.163804    4928 http.go:37] curl http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip
2021/10/17 06:25:54 failed to get external ip from metadata service: http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip returned 404
I1017 06:25:54.172170    4928 http.go:37] curl https://ip.jsb.workers.dev
I1017 06:25:54.261058    4928 up.go:144] /tmp/kops.fIZZAHFrf create cluster --name e2e-89d41a7532-e6156.test-cncf-aws.k8s.io --cloud aws --kubernetes-version v1.21.0 --ssh-public-key /etc/aws-ssh/aws-ssh-public --override cluster.spec.nodePortAccess=0.0.0.0/0 --yes --networking calico --admin-access 35.202.185.108/32 --master-count 1 --master-volume-size 48 --node-count 4 --node-volume-size 48 --zones us-west-2a --master-size c5.large
I1017 06:25:54.274913    4959 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
I1017 06:25:54.274975    4959 featureflag.go:165] FeatureFlag "AlphaAllowGCE"=true
I1017 06:25:54.274978    4959 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
I1017 06:25:54.316135    4959 create_cluster.go:728] Using SSH public key: /etc/aws-ssh/aws-ssh-public
... skipping 44 lines ...
I1017 06:26:20.286846    4928 up.go:181] /tmp/kops.fIZZAHFrf validate cluster --name e2e-89d41a7532-e6156.test-cncf-aws.k8s.io --count 10 --wait 15m0s
I1017 06:26:20.301848    4979 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
I1017 06:26:20.301961    4979 featureflag.go:165] FeatureFlag "AlphaAllowGCE"=true
I1017 06:26:20.301965    4979 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
Validating cluster e2e-89d41a7532-e6156.test-cncf-aws.k8s.io

W1017 06:26:21.408417    4979 validate_cluster.go:173] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.medium	4	4	us-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1017 06:26:31.437768    4979 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.medium	4	4	us-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1017 06:26:41.464263    4979 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.medium	4	4	us-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1017 06:26:51.502478    4979 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.medium	4	4	us-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1017 06:27:01.530912    4979 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.medium	4	4	us-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1017 06:27:11.558839    4979 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.medium	4	4	us-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1017 06:27:21.619628    4979 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.medium	4	4	us-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1017 06:27:31.649040    4979 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.medium	4	4	us-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1017 06:27:41.692882    4979 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.medium	4	4	us-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1017 06:27:51.728462    4979 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.medium	4	4	us-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1017 06:28:01.759274    4979 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.medium	4	4	us-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1017 06:28:11.785895    4979 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.medium	4	4	us-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1017 06:28:21.815175    4979 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.medium	4	4	us-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1017 06:28:31.856546    4979 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.medium	4	4	us-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1017 06:28:41.891437    4979 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.medium	4	4	us-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1017 06:28:51.921293    4979 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.medium	4	4	us-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1017 06:29:01.951546    4979 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.medium	4	4	us-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1017 06:29:11.978145    4979 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.medium	4	4	us-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1017 06:29:22.006354    4979 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.medium	4	4	us-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1017 06:29:32.036193    4979 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.medium	4	4	us-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1017 06:29:42.079372    4979 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.medium	4	4	us-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1017 06:29:52.110041    4979 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.medium	4	4	us-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1017 06:30:02.140837    4979 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.medium	4	4	us-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1017 06:30:12.185495    4979 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.medium	4	4	us-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1017 06:30:22.217443    4979 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.medium	4	4	us-west-2a

... skipping 9 lines ...
Machine	i-0cfd025270b7829b4				machine "i-0cfd025270b7829b4" has not yet joined cluster
Node	ip-172-20-38-26.us-west-2.compute.internal	master "ip-172-20-38-26.us-west-2.compute.internal" is missing kube-controller-manager pod
Node	ip-172-20-38-26.us-west-2.compute.internal	master "ip-172-20-38-26.us-west-2.compute.internal" is missing kube-scheduler pod
Pod	kube-system/coredns-5dc785954d-q8bn2		system-cluster-critical pod "coredns-5dc785954d-q8bn2" is pending
Pod	kube-system/coredns-autoscaler-84d4cfd89c-jdmwb	system-cluster-critical pod "coredns-autoscaler-84d4cfd89c-jdmwb" is pending

Validation Failed
W1017 06:30:34.118479    4979 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.medium	4	4	us-west-2a

... skipping 11 lines ...
Node	ip-172-20-38-26.us-west-2.compute.internal					master "ip-172-20-38-26.us-west-2.compute.internal" is missing kube-scheduler pod
Pod	kube-system/calico-node-qknhk							system-node-critical pod "calico-node-qknhk" is pending
Pod	kube-system/coredns-5dc785954d-q8bn2						system-cluster-critical pod "coredns-5dc785954d-q8bn2" is pending
Pod	kube-system/coredns-autoscaler-84d4cfd89c-jdmwb					system-cluster-critical pod "coredns-autoscaler-84d4cfd89c-jdmwb" is pending
Pod	kube-system/kube-controller-manager-ip-172-20-38-26.us-west-2.compute.internal	system-cluster-critical pod "kube-controller-manager-ip-172-20-38-26.us-west-2.compute.internal" is pending

Validation Failed
W1017 06:30:45.473272    4979 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.medium	4	4	us-west-2a

... skipping 20 lines ...
Pod	kube-system/coredns-autoscaler-84d4cfd89c-jdmwb					system-cluster-critical pod "coredns-autoscaler-84d4cfd89c-jdmwb" is pending
Pod	kube-system/etcd-manager-main-ip-172-20-38-26.us-west-2.compute.internal	system-cluster-critical pod "etcd-manager-main-ip-172-20-38-26.us-west-2.compute.internal" is pending
Pod	kube-system/kube-proxy-ip-172-20-32-156.us-west-2.compute.internal		system-node-critical pod "kube-proxy-ip-172-20-32-156.us-west-2.compute.internal" is pending
Pod	kube-system/kube-proxy-ip-172-20-32-79.us-west-2.compute.internal		system-node-critical pod "kube-proxy-ip-172-20-32-79.us-west-2.compute.internal" is pending
Pod	kube-system/kube-proxy-ip-172-20-39-14.us-west-2.compute.internal		system-node-critical pod "kube-proxy-ip-172-20-39-14.us-west-2.compute.internal" is pending

Validation Failed
W1017 06:30:56.938817    4979 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.medium	4	4	us-west-2a

... skipping 14 lines ...
Pod	kube-system/calico-node-qgc8m						system-node-critical pod "calico-node-qgc8m" is pending
Pod	kube-system/calico-node-qknhk						system-node-critical pod "calico-node-qknhk" is not ready (calico-node)
Pod	kube-system/coredns-5dc785954d-q8bn2					system-cluster-critical pod "coredns-5dc785954d-q8bn2" is pending
Pod	kube-system/coredns-autoscaler-84d4cfd89c-jdmwb				system-cluster-critical pod "coredns-autoscaler-84d4cfd89c-jdmwb" is pending
Pod	kube-system/kube-proxy-ip-172-20-38-26.us-west-2.compute.internal	system-node-critical pod "kube-proxy-ip-172-20-38-26.us-west-2.compute.internal" is pending

Validation Failed
W1017 06:31:08.289454    4979 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.medium	4	4	us-west-2a

... skipping 7 lines ...

VALIDATION ERRORS
KIND	NAME						MESSAGE
Pod	kube-system/calico-node-qknhk			system-node-critical pod "calico-node-qknhk" is not ready (calico-node)
Pod	kube-system/coredns-autoscaler-84d4cfd89c-jdmwb	system-cluster-critical pod "coredns-autoscaler-84d4cfd89c-jdmwb" is pending

Validation Failed
W1017 06:31:19.645067    4979 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.medium	4	4	us-west-2a

... skipping 799 lines ...
  	                    	        for cmd in "${commands[@]}"; do
  	                    	...
  	                    	            continue
  	                    	          fi
  	                    	+         if ! validate-hash "${file}" "${hash}"; then
  	                    	-         if [[ -n "${hash}" ]] && ! validate-hash "${file}" "${hash}"; then
  	                    	            echo "== Hash validation of ${url} failed. Retrying. =="
  	                    	            rm -f "${file}"
  	                    	          else
  	                    	-           if [[ -n "${hash}" ]]; then
  	                    	+           echo "== Downloaded ${url} (SHA256 = ${hash}) =="
  	                    	-             echo "== Downloaded ${url} (SHA1 = ${hash}) =="
  	                    	-           else
... skipping 294 lines ...
  	                    	        for cmd in "${commands[@]}"; do
  	                    	...
  	                    	            continue
  	                    	          fi
  	                    	+         if ! validate-hash "${file}" "${hash}"; then
  	                    	-         if [[ -n "${hash}" ]] && ! validate-hash "${file}" "${hash}"; then
  	                    	            echo "== Hash validation of ${url} failed. Retrying. =="
  	                    	            rm -f "${file}"
  	                    	          else
  	                    	-           if [[ -n "${hash}" ]]; then
  	                    	+           echo "== Downloaded ${url} (SHA256 = ${hash}) =="
  	                    	-             echo "== Downloaded ${url} (SHA1 = ${hash}) =="
  	                    	-           else
... skipping 4379 lines ...
evicting pod kube-system/dns-controller-7cf9d66d6d-kbpgr
I1017 06:34:59.229323    5102 instancegroups.go:658] Waiting for 5s for pods to stabilize after draining.
I1017 06:35:04.229822    5102 instancegroups.go:417] deleting node "ip-172-20-38-26.us-west-2.compute.internal" from kubernetes
I1017 06:35:04.301744    5102 instancegroups.go:591] Stopping instance "i-02bc56366cf7ca362", node "ip-172-20-38-26.us-west-2.compute.internal", in group "master-us-west-2a.masters.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io" (this may take a while).
I1017 06:35:04.490575    5102 instancegroups.go:435] waiting for 15s after terminating instance
I1017 06:35:19.491407    5102 instancegroups.go:470] Validating the cluster.
I1017 06:35:49.523492    5102 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 34.220.55.54:443: i/o timeout.
I1017 06:36:49.549758    5102 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 34.220.55.54:443: i/o timeout.
I1017 06:37:49.577826    5102 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 34.220.55.54:443: i/o timeout.
I1017 06:38:49.635578    5102 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 34.220.55.54:443: i/o timeout.
I1017 06:39:49.664606    5102 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 34.220.55.54:443: i/o timeout.
I1017 06:40:49.693151    5102 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 34.220.55.54:443: i/o timeout.
I1017 06:41:49.731943    5102 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 34.220.55.54:443: i/o timeout.
I1017 06:42:49.775951    5102 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 34.220.55.54:443: i/o timeout.
I1017 06:43:49.804619    5102 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 34.220.55.54:443: i/o timeout.
I1017 06:44:49.846572    5102 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 34.220.55.54:443: i/o timeout.
I1017 06:45:49.874540    5102 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 34.220.55.54:443: i/o timeout.
I1017 06:46:49.905395    5102 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 34.220.55.54:443: i/o timeout.
I1017 06:47:49.936578    5102 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 34.220.55.54:443: i/o timeout.
I1017 06:48:49.965704    5102 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 34.220.55.54:443: i/o timeout.
I1017 06:49:49.997416    5102 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 34.220.55.54:443: i/o timeout.
I1017 06:50:50.028842    5102 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 34.220.55.54:443: i/o timeout.
I1017 06:51:50.059955    5102 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 34.220.55.54:443: i/o timeout.
I1017 06:52:50.096747    5102 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 34.220.55.54:443: i/o timeout.
I1017 06:53:50.143838    5102 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 34.220.55.54:443: i/o timeout.
I1017 06:54:50.197443    5102 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 34.220.55.54:443: i/o timeout.
I1017 06:55:50.237259    5102 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 34.220.55.54:443: i/o timeout.
I1017 06:56:50.269872    5102 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 34.220.55.54:443: i/o timeout.
I1017 06:57:50.313434    5102 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 34.220.55.54:443: i/o timeout.
I1017 06:58:50.350029    5102 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 34.220.55.54:443: i/o timeout.
I1017 06:59:50.380619    5102 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 34.220.55.54:443: i/o timeout.
I1017 07:00:50.411750    5102 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 34.220.55.54:443: i/o timeout.
I1017 07:01:50.450531    5102 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 34.220.55.54:443: i/o timeout.
I1017 07:02:50.482959    5102 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 34.220.55.54:443: i/o timeout.
I1017 07:03:50.514884    5102 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 34.220.55.54:443: i/o timeout.
I1017 07:04:50.544837    5102 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 34.220.55.54:443: i/o timeout.
I1017 07:05:50.587050    5102 instancegroups.go:513] Cluster did not validate within deadline: error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 34.220.55.54:443: i/o timeout.
E1017 07:05:50.587095    5102 instancegroups.go:475] Cluster did not validate within 30m0s
Error: master not healthy after update, stopping rolling-update: "error validating cluster after terminating instance: cluster did not validate within a duration of \"30m0s\""
+ kops-finish
+ kubetest2 kops -v=2 --cloud-provider=aws --cluster-name=e2e-89d41a7532-e6156.test-cncf-aws.k8s.io --kops-root=/home/prow/go/src/k8s.io/kops --admin-access= --env=KOPS_FEATURE_FLAGS=SpecOverrideFlag --kops-binary-path=/tmp/kops.0reJQ2xEA --down
I1017 07:05:50.612450    5120 app.go:59] RunDir for this run: "/logs/artifacts/e804be67-2f12-11ec-8a05-6acfde499713"
I1017 07:05:50.612607    5120 app.go:90] ID for this run: "e804be67-2f12-11ec-8a05-6acfde499713"
I1017 07:05:50.612636    5120 dumplogs.go:40] /tmp/kops.0reJQ2xEA toolbox dump --name e2e-89d41a7532-e6156.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /etc/aws-ssh/aws-ssh-private --ssh-user ubuntu
I1017 07:05:50.630547    5132 featureflag.go:166] FeatureFlag "SpecOverrideFlag"=true
I1017 07:05:50.630629    5132 featureflag.go:166] FeatureFlag "AlphaAllowGCE"=true
I1017 07:05:50.630634    5132 featureflag.go:166] FeatureFlag "SpecOverrideFlag"=true
W1017 07:06:30.546755    5132 toolbox_dump.go:168] error listing nodes in cluster: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 34.220.55.54:443: i/o timeout
2021/10/17 07:06:30 dumping node not registered in kubernetes: 35.161.200.196
2021/10/17 07:06:30 Dumping node 35.161.200.196
2021/10/17 07:06:33 dumping node not registered in kubernetes: 35.161.250.137
2021/10/17 07:06:33 Dumping node 35.161.250.137
2021/10/17 07:06:36 dumping node not registered in kubernetes: 54.188.190.87
2021/10/17 07:06:36 Dumping node 54.188.190.87
... skipping 1049 lines ...
I1017 07:06:48.882574    5120 dumplogs.go:72] /tmp/kops.0reJQ2xEA get instancegroups --name e2e-89d41a7532-e6156.test-cncf-aws.k8s.io -o yaml
I1017 07:06:49.714053    5120 dumplogs.go:91] kubectl cluster-info dump --all-namespaces -o yaml --output-directory /logs/artifacts/cluster-info
I1017 07:07:19.760221    5120 dumplogs.go:114] /tmp/kops.0reJQ2xEA toolbox dump --name e2e-89d41a7532-e6156.test-cncf-aws.k8s.io --private-key /etc/aws-ssh/aws-ssh-private --ssh-user ubuntu -o yaml
I1017 07:07:27.701076    5120 dumplogs.go:143] ssh -i /etc/aws-ssh/aws-ssh-private -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null ubuntu@54.188.190.87 -- kubectl cluster-info dump --all-namespaces -o yaml --output-directory /tmp/cluster-info
Warning: Permanently added '54.188.190.87' (ECDSA) to the list of known hosts.
The connection to the server 127.0.0.1 was refused - did you specify the right host or port?
W1017 07:07:28.844712    5120 down.go:34] Dumping cluster logs at the start of Down() failed: exit status 1
I1017 07:07:28.844770    5120 down.go:48] /tmp/kops.0reJQ2xEA delete cluster --name e2e-89d41a7532-e6156.test-cncf-aws.k8s.io --yes
I1017 07:07:28.860882    5184 featureflag.go:166] FeatureFlag "SpecOverrideFlag"=true
I1017 07:07:28.860965    5184 featureflag.go:166] FeatureFlag "AlphaAllowGCE"=true
I1017 07:07:28.860969    5184 featureflag.go:166] FeatureFlag "SpecOverrideFlag"=true
TYPE			NAME													ID
autoscaling-config	master-us-west-2a.masters.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io					lt-091cdbd744bfdd193
... skipping 456 lines ...