Result: FAILURE
Tests: 0 failed / 0 succeeded
Started: 2021-10-16 20:25
Elapsed: 47m53s
Revision: master

No Test Failures!


Error lines from build-log.txt

... skipping 176 lines ...
I1016 20:25:49.776974    4902 dumplogs.go:40] /tmp/kops.C6mKhqc1E toolbox dump --name e2e-89d41a7532-e6156.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /etc/aws-ssh/aws-ssh-private --ssh-user ubuntu
I1016 20:25:49.795904    4909 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
I1016 20:25:49.796125    4909 featureflag.go:165] FeatureFlag "AlphaAllowGCE"=true
I1016 20:25:49.796227    4909 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true

Cluster.kops.k8s.io "e2e-89d41a7532-e6156.test-cncf-aws.k8s.io" not found
W1016 20:25:50.303933    4902 down.go:34] Dumping cluster logs at the start of Down() failed: exit status 1
I1016 20:25:50.303988    4902 down.go:48] /tmp/kops.C6mKhqc1E delete cluster --name e2e-89d41a7532-e6156.test-cncf-aws.k8s.io --yes
I1016 20:25:50.318026    4920 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
I1016 20:25:50.318108    4920 featureflag.go:165] FeatureFlag "AlphaAllowGCE"=true
I1016 20:25:50.318114    4920 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true

error reading cluster configuration: Cluster.kops.k8s.io "e2e-89d41a7532-e6156.test-cncf-aws.k8s.io" not found
I1016 20:25:50.794618    4902 http.go:37] curl http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip
2021/10/16 20:25:50 failed to get external ip from metadata service: http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip returned 404
I1016 20:25:50.801729    4902 http.go:37] curl https://ip.jsb.workers.dev
I1016 20:25:50.906460    4902 up.go:144] /tmp/kops.C6mKhqc1E create cluster --name e2e-89d41a7532-e6156.test-cncf-aws.k8s.io --cloud aws --kubernetes-version v1.21.0 --ssh-public-key /etc/aws-ssh/aws-ssh-public --override cluster.spec.nodePortAccess=0.0.0.0/0 --yes --networking calico --admin-access 34.121.71.138/32 --master-count 1 --master-volume-size 48 --node-count 4 --node-volume-size 48 --zones ap-northeast-1a --master-size c5.large
I1016 20:25:50.918724    4930 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
I1016 20:25:50.918805    4930 featureflag.go:165] FeatureFlag "AlphaAllowGCE"=true
I1016 20:25:50.918808    4930 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
I1016 20:25:50.958451    4930 create_cluster.go:728] Using SSH public key: /etc/aws-ssh/aws-ssh-public
... skipping 43 lines ...
I1016 20:26:10.479099    4902 up.go:181] /tmp/kops.C6mKhqc1E validate cluster --name e2e-89d41a7532-e6156.test-cncf-aws.k8s.io --count 10 --wait 15m0s
I1016 20:26:10.492591    4950 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
I1016 20:26:10.492666    4950 featureflag.go:165] FeatureFlag "AlphaAllowGCE"=true
I1016 20:26:10.492671    4950 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
Validating cluster e2e-89d41a7532-e6156.test-cncf-aws.k8s.io

W1016 20:26:11.882049    4950 validate_cluster.go:173] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-northeast-1a	Master	c5.large	1	1	ap-northeast-1a
nodes-ap-northeast-1a	Node	t3.medium	4	4	ap-northeast-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
... skipping 369 lines (the same "dns apiserver Validation Failed" block repeated on each ~10s retry from 20:26:21 to 20:30:22) ...
W1016 20:30:22.723122    4950 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-northeast-1a	Master	c5.large	1	1	ap-northeast-1a
nodes-ap-northeast-1a	Node	t3.medium	4	4	ap-northeast-1a

... skipping 9 lines ...
Machine	i-0bd9143996c0f6663					machine "i-0bd9143996c0f6663" has not yet joined cluster
Pod	kube-system/calico-kube-controllers-6b59cd85f8-k2m8b	system-cluster-critical pod "calico-kube-controllers-6b59cd85f8-k2m8b" is pending
Pod	kube-system/calico-node-z4rm4				system-node-critical pod "calico-node-z4rm4" is not ready (calico-node)
Pod	kube-system/coredns-5dc785954d-xnjpl			system-cluster-critical pod "coredns-5dc785954d-xnjpl" is pending
Pod	kube-system/coredns-autoscaler-84d4cfd89c-tqcv6		system-cluster-critical pod "coredns-autoscaler-84d4cfd89c-tqcv6" is pending

Validation Failed
W1016 20:30:36.243532    4950 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-northeast-1a	Master	c5.large	1	1	ap-northeast-1a
nodes-ap-northeast-1a	Node	t3.medium	4	4	ap-northeast-1a

... skipping 11 lines ...
Pod	kube-system/calico-kube-controllers-6b59cd85f8-k2m8b			system-cluster-critical pod "calico-kube-controllers-6b59cd85f8-k2m8b" is pending
Pod	kube-system/calico-node-xbnf5						system-node-critical pod "calico-node-xbnf5" is pending
Pod	kube-system/coredns-5dc785954d-xnjpl					system-cluster-critical pod "coredns-5dc785954d-xnjpl" is pending
Pod	kube-system/coredns-autoscaler-84d4cfd89c-tqcv6				system-cluster-critical pod "coredns-autoscaler-84d4cfd89c-tqcv6" is pending
Pod	kube-system/kube-proxy-ip-172-20-36-120.ap-northeast-1.compute.internal	system-node-critical pod "kube-proxy-ip-172-20-36-120.ap-northeast-1.compute.internal" is pending

Validation Failed
W1016 20:30:48.846505    4950 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-northeast-1a	Master	c5.large	1	1	ap-northeast-1a
nodes-ap-northeast-1a	Node	t3.medium	4	4	ap-northeast-1a

... skipping 15 lines ...
Pod	kube-system/calico-node-xbnf5						system-node-critical pod "calico-node-xbnf5" is pending
Pod	kube-system/coredns-5dc785954d-xnjpl					system-cluster-critical pod "coredns-5dc785954d-xnjpl" is pending
Pod	kube-system/coredns-autoscaler-84d4cfd89c-tqcv6				system-cluster-critical pod "coredns-autoscaler-84d4cfd89c-tqcv6" is pending
Pod	kube-system/kube-proxy-ip-172-20-59-0.ap-northeast-1.compute.internal	system-node-critical pod "kube-proxy-ip-172-20-59-0.ap-northeast-1.compute.internal" is pending
Pod	kube-system/kube-proxy-ip-172-20-59-160.ap-northeast-1.compute.internal	system-node-critical pod "kube-proxy-ip-172-20-59-160.ap-northeast-1.compute.internal" is pending

Validation Failed
W1016 20:31:01.198130    4950 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-northeast-1a	Master	c5.large	1	1	ap-northeast-1a
nodes-ap-northeast-1a	Node	t3.medium	4	4	ap-northeast-1a

... skipping 13 lines ...
Pod	kube-system/calico-node-gt9r5						system-node-critical pod "calico-node-gt9r5" is pending
Pod	kube-system/calico-node-t4d52						system-node-critical pod "calico-node-t4d52" is pending
Pod	kube-system/coredns-5dc785954d-fntnt					system-cluster-critical pod "coredns-5dc785954d-fntnt" is pending
Pod	kube-system/coredns-5dc785954d-xnjpl					system-cluster-critical pod "coredns-5dc785954d-xnjpl" is pending
Pod	kube-system/kube-proxy-ip-172-20-50-110.ap-northeast-1.compute.internal	system-node-critical pod "kube-proxy-ip-172-20-50-110.ap-northeast-1.compute.internal" is pending

Validation Failed
W1016 20:31:13.743230    4950 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-northeast-1a	Master	c5.large	1	1	ap-northeast-1a
nodes-ap-northeast-1a	Node	t3.medium	4	4	ap-northeast-1a

... skipping 8 lines ...
VALIDATION ERRORS
KIND	NAME					MESSAGE
Pod	kube-system/calico-node-6nkk5		system-node-critical pod "calico-node-6nkk5" is not ready (calico-node)
Pod	kube-system/calico-node-gt9r5		system-node-critical pod "calico-node-gt9r5" is pending
Pod	kube-system/coredns-5dc785954d-fntnt	system-cluster-critical pod "coredns-5dc785954d-fntnt" is pending

Validation Failed
W1016 20:31:26.173770    4950 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-northeast-1a	Master	c5.large	1	1	ap-northeast-1a
nodes-ap-northeast-1a	Node	t3.medium	4	4	ap-northeast-1a

... skipping 6 lines ...
ip-172-20-59-160.ap-northeast-1.compute.internal	node	True

VALIDATION ERRORS
KIND	NAME				MESSAGE
Pod	kube-system/calico-node-gt9r5	system-node-critical pod "calico-node-gt9r5" is not ready (calico-node)

Validation Failed
W1016 20:31:38.644479    4950 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-northeast-1a	Master	c5.large	1	1	ap-northeast-1a
nodes-ap-northeast-1a	Node	t3.medium	4	4	ap-northeast-1a

... skipping 799 lines ...
  	                    	        for cmd in "${commands[@]}"; do
  	                    	...
  	                    	            continue
  	                    	          fi
  	                    	+         if ! validate-hash "${file}" "${hash}"; then
  	                    	-         if [[ -n "${hash}" ]] && ! validate-hash "${file}" "${hash}"; then
  	                    	            echo "== Hash validation of ${url} failed. Retrying. =="
  	                    	            rm -f "${file}"
  	                    	          else
  	                    	-           if [[ -n "${hash}" ]]; then
  	                    	+           echo "== Downloaded ${url} (SHA256 = ${hash}) =="
  	                    	-             echo "== Downloaded ${url} (SHA1 = ${hash}) =="
  	                    	-           else
... skipping 294 lines ...
  	                    	        for cmd in "${commands[@]}"; do
  	                    	...
  	                    	            continue
  	                    	          fi
  	                    	+         if ! validate-hash "${file}" "${hash}"; then
  	                    	-         if [[ -n "${hash}" ]] && ! validate-hash "${file}" "${hash}"; then
  	                    	            echo "== Hash validation of ${url} failed. Retrying. =="
  	                    	            rm -f "${file}"
  	                    	          else
  	                    	-           if [[ -n "${hash}" ]]; then
  	                    	+           echo "== Downloaded ${url} (SHA256 = ${hash}) =="
  	                    	-             echo "== Downloaded ${url} (SHA1 = ${hash}) =="
  	                    	-           else
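The diff above (logged once per node) changes the nodeup download helper to always hash-check the fetched file and to report SHA256 instead of SHA1. A self-contained sketch of that download-and-verify pattern, with hypothetical names (`validate_hash`, `download_with_hash_check`, and the 3-attempt budget are illustrative assumptions, not the actual kops script):

```shell
#!/usr/bin/env bash
# Illustrative reconstruction of the download-and-verify loop shown in the
# diff; function names and the retry budget are assumptions, not kops code.
set -euo pipefail

validate_hash() {
  # Compare a file's SHA256 digest against an expected value.
  local file="$1" expected="$2" actual
  actual=$(sha256sum "${file}" | awk '{print $1}')
  [[ "${actual}" == "${expected}" ]]
}

download_with_hash_check() {
  # Fetch a URL; delete the file and retry whenever the hash check fails.
  local url="$1" hash="$2" file="$3" attempt
  for attempt in 1 2 3; do
    curl -fsSL -o "${file}" "${url}" || continue
    if ! validate_hash "${file}" "${hash}"; then
      echo "== Hash validation of ${url} failed. Retrying. =="
      rm -f "${file}"
    else
      echo "== Downloaded ${url} (SHA256 = ${hash}) =="
      return 0
    fi
  done
  return 1
}
```

The change the diff makes is that the hash check is no longer conditional on `-n "${hash}"`: every download must carry an expected digest.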
... skipping 4382 lines ...
evicting pod kube-system/dns-controller-7cf9d66d6d-cz66j
I1016 20:35:40.041143    5074 instancegroups.go:658] Waiting for 5s for pods to stabilize after draining.
I1016 20:35:45.041458    5074 instancegroups.go:417] deleting node "ip-172-20-48-213.ap-northeast-1.compute.internal" from kubernetes
I1016 20:35:45.171953    5074 instancegroups.go:591] Stopping instance "i-081a148e99c61d9a2", node "ip-172-20-48-213.ap-northeast-1.compute.internal", in group "master-ap-northeast-1a.masters.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io" (this may take a while).
I1016 20:35:45.432193    5074 instancegroups.go:435] waiting for 15s after terminating instance
I1016 20:36:00.437158    5074 instancegroups.go:470] Validating the cluster.
I1016 20:36:30.480617    5074 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 54.150.213.10:443: i/o timeout.
I1016 20:37:30.510603    5074 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 54.150.213.10:443: i/o timeout.
I1016 20:38:30.560696    5074 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 54.150.213.10:443: i/o timeout.
I1016 20:39:30.612740    5074 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 54.150.213.10:443: i/o timeout.
I1016 20:40:30.646421    5074 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 54.150.213.10:443: i/o timeout.
I1016 20:41:30.703528    5074 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 54.150.213.10:443: i/o timeout.
I1016 20:42:30.734254    5074 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 54.150.213.10:443: i/o timeout.
I1016 20:43:30.765495    5074 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 54.150.213.10:443: i/o timeout.
I1016 20:44:30.797901    5074 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 54.150.213.10:443: i/o timeout.
I1016 20:45:30.828883    5074 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 54.150.213.10:443: i/o timeout.
I1016 20:46:30.866428    5074 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 54.150.213.10:443: i/o timeout.
I1016 20:47:30.897274    5074 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 54.150.213.10:443: i/o timeout.
I1016 20:48:30.926547    5074 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 54.150.213.10:443: i/o timeout.
I1016 20:49:30.955754    5074 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 54.150.213.10:443: i/o timeout.
I1016 20:50:30.986692    5074 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 54.150.213.10:443: i/o timeout.
I1016 20:51:31.033917    5074 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 54.150.213.10:443: i/o timeout.
I1016 20:52:31.064179    5074 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 54.150.213.10:443: i/o timeout.
I1016 20:53:31.097861    5074 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 54.150.213.10:443: i/o timeout.
I1016 20:54:31.144199    5074 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 54.150.213.10:443: i/o timeout.
I1016 20:55:31.191110    5074 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 54.150.213.10:443: i/o timeout.
I1016 20:56:31.223855    5074 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 54.150.213.10:443: i/o timeout.
I1016 20:57:31.267918    5074 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 54.150.213.10:443: i/o timeout.
I1016 20:58:31.302090    5074 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 54.150.213.10:443: i/o timeout.
I1016 20:59:31.333134    5074 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 54.150.213.10:443: i/o timeout.
I1016 21:00:31.365141    5074 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 54.150.213.10:443: i/o timeout.
I1016 21:01:31.426608    5074 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 54.150.213.10:443: i/o timeout.
I1016 21:02:04.935766    5074 instancegroups.go:526] Cluster did not pass validation, will retry in "30s": node "ip-172-20-44-237.ap-northeast-1.compute.internal" of role "master" is not ready, node "ip-172-20-59-160.ap-northeast-1.compute.internal" of role "node" is not ready, node "ip-172-20-50-110.ap-northeast-1.compute.internal" of role "node" is not ready, node "ip-172-20-59-0.ap-northeast-1.compute.internal" of role "node" is not ready, node "ip-172-20-36-120.ap-northeast-1.compute.internal" of role "node" is not ready, system-node-critical pod "calico-node-szbq4" is pending, system-node-critical pod "calico-node-t9q9l" is pending, system-node-critical pod "ebs-csi-node-9t2w5" is pending, system-cluster-critical pod "etcd-manager-events-ip-172-20-44-237.ap-northeast-1.compute.internal" is pending, system-cluster-critical pod "etcd-manager-main-ip-172-20-44-237.ap-northeast-1.compute.internal" is pending, system-cluster-critical pod "kube-apiserver-ip-172-20-44-237.ap-northeast-1.compute.internal" is pending, system-cluster-critical pod "kube-controller-manager-ip-172-20-44-237.ap-northeast-1.compute.internal" is pending, system-node-critical pod "kube-proxy-ip-172-20-44-237.ap-northeast-1.compute.internal" is pending, system-cluster-critical pod "kube-scheduler-ip-172-20-44-237.ap-northeast-1.compute.internal" is pending.
I1016 21:02:37.490711    5074 instancegroups.go:526] Cluster did not pass validation, will retry in "30s": node "ip-172-20-44-237.ap-northeast-1.compute.internal" of role "master" is not ready, node "ip-172-20-59-160.ap-northeast-1.compute.internal" of role "node" is not ready, node "ip-172-20-50-110.ap-northeast-1.compute.internal" of role "node" is not ready, node "ip-172-20-59-0.ap-northeast-1.compute.internal" of role "node" is not ready, node "ip-172-20-36-120.ap-northeast-1.compute.internal" of role "node" is not ready, system-node-critical pod "calico-node-szbq4" is pending, system-node-critical pod "calico-node-t9q9l" is pending, system-node-critical pod "ebs-csi-node-9t2w5" is pending, system-cluster-critical pod "etcd-manager-events-ip-172-20-44-237.ap-northeast-1.compute.internal" is pending, system-cluster-critical pod "etcd-manager-main-ip-172-20-44-237.ap-northeast-1.compute.internal" is pending, system-cluster-critical pod "kube-apiserver-ip-172-20-44-237.ap-northeast-1.compute.internal" is pending, system-cluster-critical pod "kube-controller-manager-ip-172-20-44-237.ap-northeast-1.compute.internal" is pending, system-node-critical pod "kube-proxy-ip-172-20-44-237.ap-northeast-1.compute.internal" is pending, system-cluster-critical pod "kube-scheduler-ip-172-20-44-237.ap-northeast-1.compute.internal" is pending.
I1016 21:03:09.847425    5074 instancegroups.go:526] Cluster did not pass validation, will retry in "30s": node "ip-172-20-59-160.ap-northeast-1.compute.internal" of role "node" is not ready, node "ip-172-20-50-110.ap-northeast-1.compute.internal" of role "node" is not ready, node "ip-172-20-59-0.ap-northeast-1.compute.internal" of role "node" is not ready, node "ip-172-20-36-120.ap-northeast-1.compute.internal" of role "node" is not ready, node "ip-172-20-44-237.ap-northeast-1.compute.internal" of role "master" is not ready, system-node-critical pod "calico-node-szbq4" is pending, system-node-critical pod "calico-node-t9q9l" is pending, system-node-critical pod "ebs-csi-node-9t2w5" is pending, system-cluster-critical pod "etcd-manager-events-ip-172-20-44-237.ap-northeast-1.compute.internal" is pending, system-cluster-critical pod "etcd-manager-main-ip-172-20-44-237.ap-northeast-1.compute.internal" is pending, system-cluster-critical pod "kube-apiserver-ip-172-20-44-237.ap-northeast-1.compute.internal" is pending, system-cluster-critical pod "kube-controller-manager-ip-172-20-44-237.ap-northeast-1.compute.internal" is pending, system-node-critical pod "kube-proxy-ip-172-20-44-237.ap-northeast-1.compute.internal" is pending, system-cluster-critical pod "kube-scheduler-ip-172-20-44-237.ap-northeast-1.compute.internal" is pending.
I1016 21:03:42.359028    5074 instancegroups.go:526] Cluster did not pass validation, will retry in "30s": node "ip-172-20-59-160.ap-northeast-1.compute.internal" of role "node" is not ready, node "ip-172-20-50-110.ap-northeast-1.compute.internal" of role "node" is not ready, node "ip-172-20-59-0.ap-northeast-1.compute.internal" of role "node" is not ready, node "ip-172-20-36-120.ap-northeast-1.compute.internal" of role "node" is not ready, node "ip-172-20-44-237.ap-northeast-1.compute.internal" of role "master" is not ready, system-node-critical pod "calico-node-szbq4" is pending, system-node-critical pod "calico-node-t9q9l" is pending, system-node-critical pod "ebs-csi-node-9t2w5" is pending, system-cluster-critical pod "etcd-manager-events-ip-172-20-44-237.ap-northeast-1.compute.internal" is pending, system-cluster-critical pod "etcd-manager-main-ip-172-20-44-237.ap-northeast-1.compute.internal" is pending, system-cluster-critical pod "kube-apiserver-ip-172-20-44-237.ap-northeast-1.compute.internal" is pending, system-cluster-critical pod "kube-controller-manager-ip-172-20-44-237.ap-northeast-1.compute.internal" is pending, system-node-critical pod "kube-proxy-ip-172-20-44-237.ap-northeast-1.compute.internal" is pending, system-cluster-critical pod "kube-scheduler-ip-172-20-44-237.ap-northeast-1.compute.internal" is pending.
I1016 21:04:14.675144    5074 instancegroups.go:526] Cluster did not pass validation, will retry in "30s": node "ip-172-20-44-237.ap-northeast-1.compute.internal" of role "master" is not ready, node "ip-172-20-59-160.ap-northeast-1.compute.internal" of role "node" is not ready, node "ip-172-20-50-110.ap-northeast-1.compute.internal" of role "node" is not ready, node "ip-172-20-59-0.ap-northeast-1.compute.internal" of role "node" is not ready, node "ip-172-20-36-120.ap-northeast-1.compute.internal" of role "node" is not ready, system-node-critical pod "calico-node-szbq4" is pending, system-node-critical pod "calico-node-t9q9l" is pending, system-node-critical pod "ebs-csi-node-9t2w5" is pending, system-cluster-critical pod "etcd-manager-events-ip-172-20-44-237.ap-northeast-1.compute.internal" is pending, system-cluster-critical pod "etcd-manager-main-ip-172-20-44-237.ap-northeast-1.compute.internal" is pending, system-cluster-critical pod "kube-apiserver-ip-172-20-44-237.ap-northeast-1.compute.internal" is pending, system-cluster-critical pod "kube-controller-manager-ip-172-20-44-237.ap-northeast-1.compute.internal" is pending, system-node-critical pod "kube-proxy-ip-172-20-44-237.ap-northeast-1.compute.internal" is pending, system-cluster-critical pod "kube-scheduler-ip-172-20-44-237.ap-northeast-1.compute.internal" is pending.
I1016 21:04:47.255735    5074 instancegroups.go:526] Cluster did not pass validation, will retry in "30s": node "ip-172-20-44-237.ap-northeast-1.compute.internal" of role "master" is not ready, node "ip-172-20-59-160.ap-northeast-1.compute.internal" of role "node" is not ready, node "ip-172-20-50-110.ap-northeast-1.compute.internal" of role "node" is not ready, node "ip-172-20-59-0.ap-northeast-1.compute.internal" of role "node" is not ready, node "ip-172-20-36-120.ap-northeast-1.compute.internal" of role "node" is not ready, system-node-critical pod "calico-node-szbq4" is pending, system-node-critical pod "calico-node-t9q9l" is pending, system-node-critical pod "ebs-csi-node-9t2w5" is pending, system-cluster-critical pod "etcd-manager-events-ip-172-20-44-237.ap-northeast-1.compute.internal" is pending, system-cluster-critical pod "etcd-manager-main-ip-172-20-44-237.ap-northeast-1.compute.internal" is pending, system-cluster-critical pod "kube-apiserver-ip-172-20-44-237.ap-northeast-1.compute.internal" is pending, system-cluster-critical pod "kube-controller-manager-ip-172-20-44-237.ap-northeast-1.compute.internal" is pending, system-node-critical pod "kube-proxy-ip-172-20-44-237.ap-northeast-1.compute.internal" is pending, system-cluster-critical pod "kube-scheduler-ip-172-20-44-237.ap-northeast-1.compute.internal" is pending.
I1016 21:05:19.591596    5074 instancegroups.go:526] Cluster did not pass validation, will retry in "30s": node "ip-172-20-44-237.ap-northeast-1.compute.internal" of role "master" is not ready, node "ip-172-20-59-160.ap-northeast-1.compute.internal" of role "node" is not ready, node "ip-172-20-50-110.ap-northeast-1.compute.internal" of role "node" is not ready, node "ip-172-20-59-0.ap-northeast-1.compute.internal" of role "node" is not ready, node "ip-172-20-36-120.ap-northeast-1.compute.internal" of role "node" is not ready, system-node-critical pod "calico-node-szbq4" is pending, system-node-critical pod "calico-node-t9q9l" is pending, system-node-critical pod "ebs-csi-node-9t2w5" is pending, system-cluster-critical pod "etcd-manager-events-ip-172-20-44-237.ap-northeast-1.compute.internal" is pending, system-cluster-critical pod "etcd-manager-main-ip-172-20-44-237.ap-northeast-1.compute.internal" is pending, system-cluster-critical pod "kube-apiserver-ip-172-20-44-237.ap-northeast-1.compute.internal" is pending, system-cluster-critical pod "kube-controller-manager-ip-172-20-44-237.ap-northeast-1.compute.internal" is pending, system-node-critical pod "kube-proxy-ip-172-20-44-237.ap-northeast-1.compute.internal" is pending, system-cluster-critical pod "kube-scheduler-ip-172-20-44-237.ap-northeast-1.compute.internal" is pending.
I1016 21:05:52.202004    5074 instancegroups.go:526] Cluster did not pass validation, will retry in "30s": node "ip-172-20-59-160.ap-northeast-1.compute.internal" of role "node" is not ready, node "ip-172-20-50-110.ap-northeast-1.compute.internal" of role "node" is not ready, node "ip-172-20-59-0.ap-northeast-1.compute.internal" of role "node" is not ready, node "ip-172-20-36-120.ap-northeast-1.compute.internal" of role "node" is not ready, node "ip-172-20-44-237.ap-northeast-1.compute.internal" of role "master" is not ready, system-node-critical pod "calico-node-szbq4" is pending, system-node-critical pod "calico-node-t9q9l" is pending, system-node-critical pod "ebs-csi-node-9t2w5" is pending, system-cluster-critical pod "etcd-manager-events-ip-172-20-44-237.ap-northeast-1.compute.internal" is pending, system-cluster-critical pod "etcd-manager-main-ip-172-20-44-237.ap-northeast-1.compute.internal" is pending, system-cluster-critical pod "kube-apiserver-ip-172-20-44-237.ap-northeast-1.compute.internal" is pending, system-cluster-critical pod "kube-controller-manager-ip-172-20-44-237.ap-northeast-1.compute.internal" is pending, system-node-critical pod "kube-proxy-ip-172-20-44-237.ap-northeast-1.compute.internal" is pending, system-cluster-critical pod "kube-scheduler-ip-172-20-44-237.ap-northeast-1.compute.internal" is pending.
I1016 21:06:25.016808    5074 instancegroups.go:523] Cluster did not pass validation within deadline: node "ip-172-20-44-237.ap-northeast-1.compute.internal" of role "master" is not ready, node "ip-172-20-59-160.ap-northeast-1.compute.internal" of role "node" is not ready, node "ip-172-20-50-110.ap-northeast-1.compute.internal" of role "node" is not ready, node "ip-172-20-59-0.ap-northeast-1.compute.internal" of role "node" is not ready, node "ip-172-20-36-120.ap-northeast-1.compute.internal" of role "node" is not ready, system-node-critical pod "calico-node-szbq4" is pending, system-node-critical pod "calico-node-t9q9l" is pending, system-node-critical pod "ebs-csi-node-9t2w5" is pending, system-cluster-critical pod "etcd-manager-events-ip-172-20-44-237.ap-northeast-1.compute.internal" is pending, system-cluster-critical pod "etcd-manager-main-ip-172-20-44-237.ap-northeast-1.compute.internal" is pending, system-cluster-critical pod "kube-apiserver-ip-172-20-44-237.ap-northeast-1.compute.internal" is pending, system-cluster-critical pod "kube-controller-manager-ip-172-20-44-237.ap-northeast-1.compute.internal" is pending, system-node-critical pod "kube-proxy-ip-172-20-44-237.ap-northeast-1.compute.internal" is pending, system-cluster-critical pod "kube-scheduler-ip-172-20-44-237.ap-northeast-1.compute.internal" is pending.
E1016 21:06:25.016850    5074 instancegroups.go:475] Cluster did not validate within 30m0s
Error: master not healthy after update, stopping rolling-update: "error validating cluster after terminating instance: cluster did not validate within a duration of \"30m0s\""
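The rolling update aborted because the replacement master never passed validation within the 30m deadline. The same health check the updater runs can be invoked by hand with kops' validate subcommand — a hedged, illustrative-only sketch (the 10m wait is an arbitrary choice, and the cluster in this log was subsequently torn down):

```shell
# Illustrative CLI fragment, not runnable against this (deleted) cluster.
# Cluster name is taken from the log above; --wait duration is an assumption.
kops validate cluster \
  --name e2e-89d41a7532-e6156.test-cncf-aws.k8s.io \
  --wait 10m
```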
+ kops-finish
+ kubetest2 kops -v=2 --cloud-provider=aws --cluster-name=e2e-89d41a7532-e6156.test-cncf-aws.k8s.io --kops-root=/home/prow/go/src/k8s.io/kops --admin-access= --env=KOPS_FEATURE_FLAGS=SpecOverrideFlag --kops-binary-path=/tmp/kops.8MRdWPltG --down
I1016 21:06:25.038953    5093 app.go:59] RunDir for this run: "/logs/artifacts/161f6d70-2ebf-11ec-8a05-6acfde499713"
I1016 21:06:25.039100    5093 app.go:90] ID for this run: "161f6d70-2ebf-11ec-8a05-6acfde499713"
I1016 21:06:25.039122    5093 dumplogs.go:40] /tmp/kops.8MRdWPltG toolbox dump --name e2e-89d41a7532-e6156.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /etc/aws-ssh/aws-ssh-private --ssh-user ubuntu
I1016 21:06:25.053665    5102 featureflag.go:166] FeatureFlag "SpecOverrideFlag"=true
... skipping 1051 lines ...
I1016 21:07:08.304358    5093 dumplogs.go:72] /tmp/kops.8MRdWPltG get cluster --name e2e-89d41a7532-e6156.test-cncf-aws.k8s.io -o yaml
I1016 21:07:08.817353    5093 dumplogs.go:72] /tmp/kops.8MRdWPltG get instancegroups --name e2e-89d41a7532-e6156.test-cncf-aws.k8s.io -o yaml
I1016 21:07:09.559820    5093 dumplogs.go:91] kubectl cluster-info dump --all-namespaces -o yaml --output-directory /logs/artifacts/cluster-info
I1016 21:08:10.576360    5093 dumplogs.go:114] /tmp/kops.8MRdWPltG toolbox dump --name e2e-89d41a7532-e6156.test-cncf-aws.k8s.io --private-key /etc/aws-ssh/aws-ssh-private --ssh-user ubuntu -o yaml
I1016 21:08:20.710621    5093 dumplogs.go:143] ssh -i /etc/aws-ssh/aws-ssh-private -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null ubuntu@13.230.9.60 -- kubectl cluster-info dump --all-namespaces -o yaml --output-directory /tmp/cluster-info
Warning: Permanently added '13.230.9.60' (ECDSA) to the list of known hosts.
Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get events)
W1016 21:09:22.583641    5093 down.go:34] Dumping cluster logs at the start of Down() failed: exit status 1
I1016 21:09:22.583710    5093 down.go:48] /tmp/kops.8MRdWPltG delete cluster --name e2e-89d41a7532-e6156.test-cncf-aws.k8s.io --yes
I1016 21:09:22.600197    5155 featureflag.go:166] FeatureFlag "SpecOverrideFlag"=true
I1016 21:09:22.600281    5155 featureflag.go:166] FeatureFlag "AlphaAllowGCE"=true
I1016 21:09:22.600286    5155 featureflag.go:166] FeatureFlag "SpecOverrideFlag"=true
TYPE			NAME													ID
autoscaling-config	master-ap-northeast-1a.masters.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io				lt-067be6bc72ef1a15f
... skipping 424 lines ...