Result: FAILURE
Tests: 0 failed / 0 succeeded
Started: 2021-10-16 16:25
Elapsed: 46m27s
Revision: master

No Test Failures!


Error lines from build-log.txt

... skipping 176 lines ...
I1016 16:25:59.333785    4935 dumplogs.go:40] /tmp/kops.xOz7cF1vD toolbox dump --name e2e-89d41a7532-e6156.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /etc/aws-ssh/aws-ssh-private --ssh-user ubuntu
I1016 16:25:59.350211    4945 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
I1016 16:25:59.350288    4945 featureflag.go:165] FeatureFlag "AlphaAllowGCE"=true
I1016 16:25:59.350292    4945 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true

Cluster.kops.k8s.io "e2e-89d41a7532-e6156.test-cncf-aws.k8s.io" not found
W1016 16:25:59.887994    4935 down.go:34] Dumping cluster logs at the start of Down() failed: exit status 1
I1016 16:25:59.888065    4935 down.go:48] /tmp/kops.xOz7cF1vD delete cluster --name e2e-89d41a7532-e6156.test-cncf-aws.k8s.io --yes
I1016 16:25:59.903483    4955 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
I1016 16:25:59.903674    4955 featureflag.go:165] FeatureFlag "AlphaAllowGCE"=true
I1016 16:25:59.903701    4955 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true

error reading cluster configuration: Cluster.kops.k8s.io "e2e-89d41a7532-e6156.test-cncf-aws.k8s.io" not found
I1016 16:26:00.411343    4935 http.go:37] curl http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip
2021/10/16 16:26:00 failed to get external ip from metadata service: http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip returned 404
I1016 16:26:00.418171    4935 http.go:37] curl https://ip.jsb.workers.dev
I1016 16:26:00.528142    4935 up.go:144] /tmp/kops.xOz7cF1vD create cluster --name e2e-89d41a7532-e6156.test-cncf-aws.k8s.io --cloud aws --kubernetes-version v1.21.0 --ssh-public-key /etc/aws-ssh/aws-ssh-public --override cluster.spec.nodePortAccess=0.0.0.0/0 --yes --networking calico --admin-access 35.225.74.23/32 --master-count 1 --master-volume-size 48 --node-count 4 --node-volume-size 48 --zones us-west-2a --master-size c5.large
I1016 16:26:00.543992    4966 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
I1016 16:26:00.544293    4966 featureflag.go:165] FeatureFlag "AlphaAllowGCE"=true
I1016 16:26:00.544315    4966 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
I1016 16:26:00.588863    4966 create_cluster.go:728] Using SSH public key: /etc/aws-ssh/aws-ssh-public
... skipping 44 lines ...
I1016 16:26:26.416108    4935 up.go:181] /tmp/kops.xOz7cF1vD validate cluster --name e2e-89d41a7532-e6156.test-cncf-aws.k8s.io --count 10 --wait 15m0s
I1016 16:26:26.432402    4986 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
I1016 16:26:26.432523    4986 featureflag.go:165] FeatureFlag "AlphaAllowGCE"=true
I1016 16:26:26.432530    4986 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
Validating cluster e2e-89d41a7532-e6156.test-cncf-aws.k8s.io

W1016 16:26:27.587463    4986 validate_cluster.go:173] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
W1016 16:26:37.620699    4986 validate_cluster.go:173] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.medium	4	4	us-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
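The message above says to wait 5-10 minutes while dns-controller replaces the kops placeholder record (203.0.113.123); accordingly, `validate cluster` polls on a fixed interval until the check passes. A minimal sketch of that poll-until-healthy pattern, with the DNS check stubbed for illustration (the real check resolves `api.<cluster-name>`; names here are assumptions, not kops source):

```shell
#!/usr/bin/env bash
# Sketch of a retry loop like the one validate_cluster.go runs.
# wait_for runs a check command up to N times, sleeping between attempts.
wait_for() {
  local attempts="$1"; shift
  local i
  for ((i = 1; i <= attempts; i++)); do
    if "$@"; then
      return 0
    fi
    sleep 0.1   # the real validator waits roughly 10s between attempts
  done
  return 1
}

# Stub check that succeeds on the third call, standing in for
# "does the API DNS record resolve to a real address yet?"
ATTEMPT=0
dns_ready_stub() {
  ATTEMPT=$((ATTEMPT + 1))
  (( ATTEMPT >= 3 ))
}

if wait_for 5 dns_ready_stub; then
  echo "cluster validated after ${ATTEMPT} attempts"
else
  echo "validation timed out"
fi
```

In the log above, the same pattern is visible at a larger scale: a retry every ~10 seconds, each one re-printing the validation report until DNS propagates.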
W1016 16:26:47.657205    4986 validate_cluster.go:221] (will retry): cluster not yet healthy
... skipping repeated INSTANCE GROUPS / VALIDATION ERRORS blocks (identical to the one above) between the retries below ...
W1016 16:26:57.701439    4986 validate_cluster.go:221] (will retry): cluster not yet healthy
W1016 16:27:07.745260    4986 validate_cluster.go:221] (will retry): cluster not yet healthy
W1016 16:27:17.777965    4986 validate_cluster.go:221] (will retry): cluster not yet healthy
W1016 16:27:27.810416    4986 validate_cluster.go:221] (will retry): cluster not yet healthy
W1016 16:27:37.856519    4986 validate_cluster.go:221] (will retry): cluster not yet healthy
W1016 16:27:47.899262    4986 validate_cluster.go:221] (will retry): cluster not yet healthy
W1016 16:27:57.926848    4986 validate_cluster.go:221] (will retry): cluster not yet healthy
W1016 16:28:07.960218    4986 validate_cluster.go:221] (will retry): cluster not yet healthy
W1016 16:28:17.997326    4986 validate_cluster.go:173] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
W1016 16:28:28.030013    4986 validate_cluster.go:173] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
W1016 16:28:38.066525    4986 validate_cluster.go:221] (will retry): cluster not yet healthy
W1016 16:28:48.104017    4986 validate_cluster.go:221] (will retry): cluster not yet healthy
W1016 16:28:58.138675    4986 validate_cluster.go:221] (will retry): cluster not yet healthy
W1016 16:29:08.172365    4986 validate_cluster.go:221] (will retry): cluster not yet healthy
W1016 16:29:18.202764    4986 validate_cluster.go:221] (will retry): cluster not yet healthy
W1016 16:29:28.232223    4986 validate_cluster.go:221] (will retry): cluster not yet healthy
W1016 16:29:38.277762    4986 validate_cluster.go:221] (will retry): cluster not yet healthy
W1016 16:29:48.315513    4986 validate_cluster.go:221] (will retry): cluster not yet healthy
W1016 16:29:58.333637    4986 validate_cluster.go:173] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.medium	4	4	us-west-2a

NODE STATUS
... skipping 15 lines ...
Pod	kube-system/calico-node-w7lqr						system-node-critical pod "calico-node-w7lqr" is pending
Pod	kube-system/coredns-5dc785954d-6dgnh					system-cluster-critical pod "coredns-5dc785954d-6dgnh" is pending
Pod	kube-system/coredns-autoscaler-84d4cfd89c-lx6kx				system-cluster-critical pod "coredns-autoscaler-84d4cfd89c-lx6kx" is pending
Pod	kube-system/kube-proxy-ip-172-20-49-228.us-west-2.compute.internal	system-node-critical pod "kube-proxy-ip-172-20-49-228.us-west-2.compute.internal" is pending
Pod	kube-system/kube-proxy-ip-172-20-60-195.us-west-2.compute.internal	system-node-critical pod "kube-proxy-ip-172-20-60-195.us-west-2.compute.internal" is pending

Validation Failed
W1016 16:30:10.247372    4986 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.medium	4	4	us-west-2a

... skipping 14 lines ...
Pod	kube-system/calico-node-f48tl			system-node-critical pod "calico-node-f48tl" is pending
Pod	kube-system/calico-node-grh22			system-node-critical pod "calico-node-grh22" is pending
Pod	kube-system/calico-node-w7lqr			system-node-critical pod "calico-node-w7lqr" is pending
Pod	kube-system/coredns-5dc785954d-6dgnh		system-cluster-critical pod "coredns-5dc785954d-6dgnh" is pending
Pod	kube-system/coredns-autoscaler-84d4cfd89c-lx6kx	system-cluster-critical pod "coredns-autoscaler-84d4cfd89c-lx6kx" is pending

Validation Failed
W1016 16:30:21.695921    4986 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.medium	4	4	us-west-2a

... skipping 9 lines ...
KIND	NAME				MESSAGE
Pod	kube-system/calico-node-5xxbq	system-node-critical pod "calico-node-5xxbq" is not ready (calico-node)
Pod	kube-system/calico-node-f48tl	system-node-critical pod "calico-node-f48tl" is not ready (calico-node)
Pod	kube-system/calico-node-grh22	system-node-critical pod "calico-node-grh22" is not ready (calico-node)
Pod	kube-system/calico-node-w7lqr	system-node-critical pod "calico-node-w7lqr" is not ready (calico-node)

Validation Failed
W1016 16:30:33.054174    4986 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.medium	4	4	us-west-2a

... skipping 34 lines ...
ip-172-20-55-66.us-west-2.compute.internal	master	True
ip-172-20-58-12.us-west-2.compute.internal	node	True
ip-172-20-60-195.us-west-2.compute.internal	node	True

Your cluster e2e-89d41a7532-e6156.test-cncf-aws.k8s.io is ready
I1016 16:31:07.345741    4986 validate_cluster.go:209] (will retry): cluster passed validation 3 consecutive times
W1016 16:31:17.385649    4986 validate_cluster.go:173] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.medium	4	4	us-west-2a

NODE STATUS
... skipping 798 lines ...
  	                    	        for cmd in "${commands[@]}"; do
  	                    	...
  	                    	            continue
  	                    	          fi
  	                    	+         if ! validate-hash "${file}" "${hash}"; then
  	                    	-         if [[ -n "${hash}" ]] && ! validate-hash "${file}" "${hash}"; then
  	                    	            echo "== Hash validation of ${url} failed. Retrying. =="
  	                    	            rm -f "${file}"
  	                    	          else
  	                    	-           if [[ -n "${hash}" ]]; then
  	                    	+           echo "== Downloaded ${url} (SHA256 = ${hash}) =="
  	                    	-             echo "== Downloaded ${url} (SHA1 = ${hash}) =="
  	                    	-           else
... skipping 301 lines ...
  	                    	        for cmd in "${commands[@]}"; do
  	                    	...
  	                    	            continue
  	                    	          fi
  	                    	+         if ! validate-hash "${file}" "${hash}"; then
  	                    	-         if [[ -n "${hash}" ]] && ! validate-hash "${file}" "${hash}"; then
  	                    	            echo "== Hash validation of ${url} failed. Retrying. =="
  	                    	            rm -f "${file}"
  	                    	          else
  	                    	-           if [[ -n "${hash}" ]]; then
  	                    	+           echo "== Downloaded ${url} (SHA256 = ${hash}) =="
  	                    	-             echo "== Downloaded ${url} (SHA1 = ${hash}) =="
  	                    	-           else
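The diff above shows nodeup's download loop changing so that hash validation always runs (no longer gated on `[[ -n "${hash}" ]]`) and reports SHA256 instead of SHA1. A hypothetical reconstruction of the `validate-hash` helper that loop calls, assuming it compares a SHA-256 hex digest via `sha256sum` (names and behavior inferred from the diff, not taken from the actual kops source):

```shell
#!/usr/bin/env bash
# Hypothetical validate-hash: compare a file's SHA-256 digest against an
# expected hex digest, as the patched download loop above now always does.
validate-hash() {
  local file="$1" expected="$2" actual
  actual=$(sha256sum "${file}" | awk '{print $1}')
  if [[ "${actual}" != "${expected}" ]]; then
    echo "== Hash validation of ${file} failed. Expected=${expected} Actual=${actual} ==" >&2
    return 1
  fi
}

# Example: validate a known payload (SHA-256 of the string "hello").
tmp=$(mktemp)
printf 'hello' > "${tmp}"
if validate-hash "${tmp}" 2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824; then
  echo "== Downloaded ${tmp} (SHA256 = 2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824) =="
fi
rm -f "${tmp}"
```

On mismatch the caller in the diff deletes the file and retries the download, which is why a bad mirror or truncated transfer shows up in these logs as repeated "Hash validation ... failed. Retrying." lines rather than a hard failure.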
... skipping 6192 lines ...
evicting pod kube-system/dns-controller-7cf9d66d6d-dtxf5
I1016 16:34:54.625625    5111 instancegroups.go:658] Waiting for 5s for pods to stabilize after draining.
I1016 16:34:59.626484    5111 instancegroups.go:417] deleting node "ip-172-20-55-66.us-west-2.compute.internal" from kubernetes
I1016 16:34:59.702816    5111 instancegroups.go:591] Stopping instance "i-07a14b2b112239f9b", node "ip-172-20-55-66.us-west-2.compute.internal", in group "master-us-west-2a.masters.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io" (this may take a while).
I1016 16:34:59.890012    5111 instancegroups.go:435] waiting for 15s after terminating instance
I1016 16:35:14.890238    5111 instancegroups.go:470] Validating the cluster.
I1016 16:35:44.918023    5111 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 54.202.14.54:443: i/o timeout.
I1016 16:36:44.954273    5111 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 54.202.14.54:443: i/o timeout.
I1016 16:37:44.983155    5111 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 54.202.14.54:443: i/o timeout.
I1016 16:38:45.014451    5111 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 54.202.14.54:443: i/o timeout.
I1016 16:39:45.046858    5111 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 54.202.14.54:443: i/o timeout.
I1016 16:40:45.082028    5111 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 54.202.14.54:443: i/o timeout.
I1016 16:41:45.118616    5111 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 54.202.14.54:443: i/o timeout.
I1016 16:42:45.154932    5111 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 54.202.14.54:443: i/o timeout.
I1016 16:43:45.188504    5111 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 54.202.14.54:443: i/o timeout.
I1016 16:44:45.223608    5111 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 54.202.14.54:443: i/o timeout.
I1016 16:45:45.255481    5111 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 54.202.14.54:443: i/o timeout.
I1016 16:46:45.288310    5111 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 54.202.14.54:443: i/o timeout.
I1016 16:47:45.352346    5111 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 54.202.14.54:443: i/o timeout.
I1016 16:48:45.389732    5111 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 54.202.14.54:443: i/o timeout.
I1016 16:49:45.456953    5111 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 54.202.14.54:443: i/o timeout.
I1016 16:50:45.489701    5111 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 54.202.14.54:443: i/o timeout.
I1016 16:51:45.527160    5111 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 54.202.14.54:443: i/o timeout.
I1016 16:52:45.562965    5111 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 54.202.14.54:443: i/o timeout.
I1016 16:53:45.611217    5111 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 54.202.14.54:443: i/o timeout.
I1016 16:54:45.638764    5111 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 54.202.14.54:443: i/o timeout.
I1016 16:55:45.672891    5111 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 54.202.14.54:443: i/o timeout.
I1016 16:56:45.712570    5111 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 54.202.14.54:443: i/o timeout.
I1016 16:57:45.746887    5111 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 54.202.14.54:443: i/o timeout.
I1016 16:58:45.794489    5111 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 54.202.14.54:443: i/o timeout.
I1016 16:59:45.824365    5111 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 54.202.14.54:443: i/o timeout.
I1016 17:00:18.018717    5111 instancegroups.go:526] Cluster did not pass validation, will retry in "30s": node "ip-172-20-43-66.us-west-2.compute.internal" of role "master" is not ready, node "ip-172-20-58-12.us-west-2.compute.internal" of role "node" is not ready, node "ip-172-20-46-70.us-west-2.compute.internal" of role "node" is not ready, node "ip-172-20-60-195.us-west-2.compute.internal" of role "node" is not ready, node "ip-172-20-49-228.us-west-2.compute.internal" of role "node" is not ready, system-node-critical pod "calico-node-56wk9" is pending, system-node-critical pod "calico-node-nffxx" is pending, system-node-critical pod "ebs-csi-node-h5nmn" is pending, system-cluster-critical pod "etcd-manager-events-ip-172-20-43-66.us-west-2.compute.internal" is pending, system-cluster-critical pod "etcd-manager-main-ip-172-20-43-66.us-west-2.compute.internal" is pending, system-cluster-critical pod "kube-apiserver-ip-172-20-43-66.us-west-2.compute.internal" is pending, system-cluster-critical pod "kube-controller-manager-ip-172-20-43-66.us-west-2.compute.internal" is pending, system-node-critical pod "kube-proxy-ip-172-20-43-66.us-west-2.compute.internal" is pending, system-cluster-critical pod "kube-scheduler-ip-172-20-43-66.us-west-2.compute.internal" is pending.
I1016 17:00:49.389707    5111 instancegroups.go:526] Cluster did not pass validation, will retry in "30s": node "ip-172-20-43-66.us-west-2.compute.internal" of role "master" is not ready, node "ip-172-20-58-12.us-west-2.compute.internal" of role "node" is not ready, node "ip-172-20-46-70.us-west-2.compute.internal" of role "node" is not ready, node "ip-172-20-60-195.us-west-2.compute.internal" of role "node" is not ready, node "ip-172-20-49-228.us-west-2.compute.internal" of role "node" is not ready, system-node-critical pod "calico-node-56wk9" is pending, system-node-critical pod "calico-node-nffxx" is pending, system-node-critical pod "ebs-csi-node-h5nmn" is pending, system-cluster-critical pod "etcd-manager-events-ip-172-20-43-66.us-west-2.compute.internal" is pending, system-cluster-critical pod "etcd-manager-main-ip-172-20-43-66.us-west-2.compute.internal" is pending, system-cluster-critical pod "kube-apiserver-ip-172-20-43-66.us-west-2.compute.internal" is pending, system-cluster-critical pod "kube-controller-manager-ip-172-20-43-66.us-west-2.compute.internal" is pending, system-node-critical pod "kube-proxy-ip-172-20-43-66.us-west-2.compute.internal" is pending, system-cluster-critical pod "kube-scheduler-ip-172-20-43-66.us-west-2.compute.internal" is pending.
I1016 17:01:20.968697    5111 instancegroups.go:526] Cluster did not pass validation, will retry in "30s": node "ip-172-20-43-66.us-west-2.compute.internal" of role "master" is not ready, node "ip-172-20-58-12.us-west-2.compute.internal" of role "node" is not ready, node "ip-172-20-46-70.us-west-2.compute.internal" of role "node" is not ready, node "ip-172-20-60-195.us-west-2.compute.internal" of role "node" is not ready, node "ip-172-20-49-228.us-west-2.compute.internal" of role "node" is not ready, system-node-critical pod "calico-node-56wk9" is pending, system-node-critical pod "calico-node-nffxx" is pending, system-node-critical pod "ebs-csi-node-h5nmn" is pending, system-cluster-critical pod "etcd-manager-events-ip-172-20-43-66.us-west-2.compute.internal" is pending, system-cluster-critical pod "etcd-manager-main-ip-172-20-43-66.us-west-2.compute.internal" is pending, system-cluster-critical pod "kube-apiserver-ip-172-20-43-66.us-west-2.compute.internal" is pending, system-cluster-critical pod "kube-controller-manager-ip-172-20-43-66.us-west-2.compute.internal" is pending, system-node-critical pod "kube-proxy-ip-172-20-43-66.us-west-2.compute.internal" is pending, system-cluster-critical pod "kube-scheduler-ip-172-20-43-66.us-west-2.compute.internal" is pending.
I1016 17:01:52.590631    5111 instancegroups.go:526] Cluster did not pass validation, will retry in "30s": node "ip-172-20-43-66.us-west-2.compute.internal" of role "master" is not ready, node "ip-172-20-58-12.us-west-2.compute.internal" of role "node" is not ready, node "ip-172-20-46-70.us-west-2.compute.internal" of role "node" is not ready, node "ip-172-20-60-195.us-west-2.compute.internal" of role "node" is not ready, node "ip-172-20-49-228.us-west-2.compute.internal" of role "node" is not ready, system-node-critical pod "calico-node-56wk9" is pending, system-node-critical pod "calico-node-nffxx" is pending, system-node-critical pod "ebs-csi-node-h5nmn" is pending, system-cluster-critical pod "etcd-manager-events-ip-172-20-43-66.us-west-2.compute.internal" is pending, system-cluster-critical pod "etcd-manager-main-ip-172-20-43-66.us-west-2.compute.internal" is pending, system-cluster-critical pod "kube-apiserver-ip-172-20-43-66.us-west-2.compute.internal" is pending, system-cluster-critical pod "kube-controller-manager-ip-172-20-43-66.us-west-2.compute.internal" is pending, system-node-critical pod "kube-proxy-ip-172-20-43-66.us-west-2.compute.internal" is pending, system-cluster-critical pod "kube-scheduler-ip-172-20-43-66.us-west-2.compute.internal" is pending.
I1016 17:02:24.089965    5111 instancegroups.go:526] Cluster did not pass validation, will retry in "30s": node "ip-172-20-43-66.us-west-2.compute.internal" of role "master" is not ready, node "ip-172-20-58-12.us-west-2.compute.internal" of role "node" is not ready, node "ip-172-20-46-70.us-west-2.compute.internal" of role "node" is not ready, node "ip-172-20-60-195.us-west-2.compute.internal" of role "node" is not ready, node "ip-172-20-49-228.us-west-2.compute.internal" of role "node" is not ready, system-node-critical pod "calico-node-56wk9" is pending, system-node-critical pod "calico-node-nffxx" is pending, system-node-critical pod "ebs-csi-node-h5nmn" is pending, system-cluster-critical pod "etcd-manager-events-ip-172-20-43-66.us-west-2.compute.internal" is pending, system-cluster-critical pod "etcd-manager-main-ip-172-20-43-66.us-west-2.compute.internal" is pending, system-cluster-critical pod "kube-apiserver-ip-172-20-43-66.us-west-2.compute.internal" is pending, system-cluster-critical pod "kube-controller-manager-ip-172-20-43-66.us-west-2.compute.internal" is pending, system-node-critical pod "kube-proxy-ip-172-20-43-66.us-west-2.compute.internal" is pending, system-cluster-critical pod "kube-scheduler-ip-172-20-43-66.us-west-2.compute.internal" is pending.
I1016 17:02:55.643042    5111 instancegroups.go:526] Cluster did not pass validation, will retry in "30s": node "ip-172-20-58-12.us-west-2.compute.internal" of role "node" is not ready, node "ip-172-20-46-70.us-west-2.compute.internal" of role "node" is not ready, node "ip-172-20-60-195.us-west-2.compute.internal" of role "node" is not ready, node "ip-172-20-49-228.us-west-2.compute.internal" of role "node" is not ready, node "ip-172-20-43-66.us-west-2.compute.internal" of role "master" is not ready, system-node-critical pod "calico-node-56wk9" is pending, system-node-critical pod "calico-node-nffxx" is pending, system-node-critical pod "ebs-csi-node-h5nmn" is pending, system-cluster-critical pod "etcd-manager-events-ip-172-20-43-66.us-west-2.compute.internal" is pending, system-cluster-critical pod "etcd-manager-main-ip-172-20-43-66.us-west-2.compute.internal" is pending, system-cluster-critical pod "kube-apiserver-ip-172-20-43-66.us-west-2.compute.internal" is pending, system-cluster-critical pod "kube-controller-manager-ip-172-20-43-66.us-west-2.compute.internal" is pending, system-node-critical pod "kube-proxy-ip-172-20-43-66.us-west-2.compute.internal" is pending, system-cluster-critical pod "kube-scheduler-ip-172-20-43-66.us-west-2.compute.internal" is pending.
I1016 17:03:27.142675    5111 instancegroups.go:526] Cluster did not pass validation, will retry in "30s": node "ip-172-20-43-66.us-west-2.compute.internal" of role "master" is not ready, node "ip-172-20-58-12.us-west-2.compute.internal" of role "node" is not ready, node "ip-172-20-46-70.us-west-2.compute.internal" of role "node" is not ready, node "ip-172-20-60-195.us-west-2.compute.internal" of role "node" is not ready, node "ip-172-20-49-228.us-west-2.compute.internal" of role "node" is not ready, system-node-critical pod "calico-node-56wk9" is pending, system-node-critical pod "calico-node-nffxx" is pending, system-node-critical pod "ebs-csi-node-h5nmn" is pending, system-cluster-critical pod "etcd-manager-events-ip-172-20-43-66.us-west-2.compute.internal" is pending, system-cluster-critical pod "etcd-manager-main-ip-172-20-43-66.us-west-2.compute.internal" is pending, system-cluster-critical pod "kube-apiserver-ip-172-20-43-66.us-west-2.compute.internal" is pending, system-cluster-critical pod "kube-controller-manager-ip-172-20-43-66.us-west-2.compute.internal" is pending, system-node-critical pod "kube-proxy-ip-172-20-43-66.us-west-2.compute.internal" is pending, system-cluster-critical pod "kube-scheduler-ip-172-20-43-66.us-west-2.compute.internal" is pending.
I1016 17:03:58.612362    5111 instancegroups.go:526] Cluster did not pass validation, will retry in "30s": node "ip-172-20-43-66.us-west-2.compute.internal" of role "master" is not ready, node "ip-172-20-58-12.us-west-2.compute.internal" of role "node" is not ready, node "ip-172-20-46-70.us-west-2.compute.internal" of role "node" is not ready, node "ip-172-20-60-195.us-west-2.compute.internal" of role "node" is not ready, node "ip-172-20-49-228.us-west-2.compute.internal" of role "node" is not ready, system-node-critical pod "calico-node-56wk9" is pending, system-node-critical pod "calico-node-nffxx" is pending, system-node-critical pod "ebs-csi-node-h5nmn" is pending, system-cluster-critical pod "etcd-manager-events-ip-172-20-43-66.us-west-2.compute.internal" is pending, system-cluster-critical pod "etcd-manager-main-ip-172-20-43-66.us-west-2.compute.internal" is pending, system-cluster-critical pod "kube-apiserver-ip-172-20-43-66.us-west-2.compute.internal" is pending, system-cluster-critical pod "kube-controller-manager-ip-172-20-43-66.us-west-2.compute.internal" is pending, system-node-critical pod "kube-proxy-ip-172-20-43-66.us-west-2.compute.internal" is pending, system-cluster-critical pod "kube-scheduler-ip-172-20-43-66.us-west-2.compute.internal" is pending.
I1016 17:04:30.252824    5111 instancegroups.go:526] Cluster did not pass validation, will retry in "30s": node "ip-172-20-43-66.us-west-2.compute.internal" of role "master" is not ready, node "ip-172-20-58-12.us-west-2.compute.internal" of role "node" is not ready, node "ip-172-20-46-70.us-west-2.compute.internal" of role "node" is not ready, node "ip-172-20-60-195.us-west-2.compute.internal" of role "node" is not ready, node "ip-172-20-49-228.us-west-2.compute.internal" of role "node" is not ready, system-node-critical pod "calico-node-56wk9" is pending, system-node-critical pod "calico-node-nffxx" is pending, system-node-critical pod "ebs-csi-node-h5nmn" is pending, system-cluster-critical pod "etcd-manager-events-ip-172-20-43-66.us-west-2.compute.internal" is pending, system-cluster-critical pod "etcd-manager-main-ip-172-20-43-66.us-west-2.compute.internal" is pending, system-cluster-critical pod "kube-apiserver-ip-172-20-43-66.us-west-2.compute.internal" is pending, system-cluster-critical pod "kube-controller-manager-ip-172-20-43-66.us-west-2.compute.internal" is pending, system-node-critical pod "kube-proxy-ip-172-20-43-66.us-west-2.compute.internal" is pending, system-cluster-critical pod "kube-scheduler-ip-172-20-43-66.us-west-2.compute.internal" is pending.
I1016 17:05:01.990534    5111 instancegroups.go:526] Cluster did not pass validation, will retry in "30s": node "ip-172-20-43-66.us-west-2.compute.internal" of role "master" is not ready, node "ip-172-20-58-12.us-west-2.compute.internal" of role "node" is not ready, node "ip-172-20-46-70.us-west-2.compute.internal" of role "node" is not ready, node "ip-172-20-60-195.us-west-2.compute.internal" of role "node" is not ready, node "ip-172-20-49-228.us-west-2.compute.internal" of role "node" is not ready, system-node-critical pod "calico-node-56wk9" is pending, system-node-critical pod "calico-node-nffxx" is pending, system-node-critical pod "ebs-csi-node-h5nmn" is pending, system-cluster-critical pod "etcd-manager-events-ip-172-20-43-66.us-west-2.compute.internal" is pending, system-cluster-critical pod "etcd-manager-main-ip-172-20-43-66.us-west-2.compute.internal" is pending, system-cluster-critical pod "kube-apiserver-ip-172-20-43-66.us-west-2.compute.internal" is pending, system-cluster-critical pod "kube-controller-manager-ip-172-20-43-66.us-west-2.compute.internal" is pending, system-node-critical pod "kube-proxy-ip-172-20-43-66.us-west-2.compute.internal" is pending, system-cluster-critical pod "kube-scheduler-ip-172-20-43-66.us-west-2.compute.internal" is pending.
I1016 17:05:33.594008    5111 instancegroups.go:523] Cluster did not pass validation within deadline: node "ip-172-20-43-66.us-west-2.compute.internal" of role "master" is not ready, node "ip-172-20-58-12.us-west-2.compute.internal" of role "node" is not ready, node "ip-172-20-46-70.us-west-2.compute.internal" of role "node" is not ready, node "ip-172-20-60-195.us-west-2.compute.internal" of role "node" is not ready, node "ip-172-20-49-228.us-west-2.compute.internal" of role "node" is not ready, system-node-critical pod "calico-node-56wk9" is pending, system-node-critical pod "calico-node-nffxx" is pending, system-node-critical pod "ebs-csi-node-h5nmn" is pending, system-cluster-critical pod "etcd-manager-events-ip-172-20-43-66.us-west-2.compute.internal" is pending, system-cluster-critical pod "etcd-manager-main-ip-172-20-43-66.us-west-2.compute.internal" is pending, system-cluster-critical pod "kube-apiserver-ip-172-20-43-66.us-west-2.compute.internal" is pending, system-cluster-critical pod "kube-controller-manager-ip-172-20-43-66.us-west-2.compute.internal" is pending, system-node-critical pod "kube-proxy-ip-172-20-43-66.us-west-2.compute.internal" is pending, system-cluster-critical pod "kube-scheduler-ip-172-20-43-66.us-west-2.compute.internal" is pending.
E1016 17:05:33.594176    5111 instancegroups.go:475] Cluster did not validate within 30m0s
Error: master not healthy after update, stopping rolling-update: "error validating cluster after terminating instance: cluster did not validate within a duration of \"30m0s\""
+ kops-finish
+ kubetest2 kops -v=2 --cloud-provider=aws --cluster-name=e2e-89d41a7532-e6156.test-cncf-aws.k8s.io --kops-root=/home/prow/go/src/k8s.io/kops --admin-access= --env=KOPS_FEATURE_FLAGS=SpecOverrideFlag --kops-binary-path=/tmp/kops.bu2zABmgm --down
I1016 17:05:33.622558    5129 app.go:59] RunDir for this run: "/logs/artifacts/8f00e00a-2e9d-11ec-8a05-6acfde499713"
I1016 17:05:33.622721    5129 app.go:90] ID for this run: "8f00e00a-2e9d-11ec-8a05-6acfde499713"
I1016 17:05:33.622793    5129 dumplogs.go:40] /tmp/kops.bu2zABmgm toolbox dump --name e2e-89d41a7532-e6156.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /etc/aws-ssh/aws-ssh-private --ssh-user ubuntu
I1016 17:05:33.640829    5137 featureflag.go:166] FeatureFlag "SpecOverrideFlag"=true
... skipping 1051 lines ...
I1016 17:06:00.159200    5129 dumplogs.go:72] /tmp/kops.bu2zABmgm get cluster --name e2e-89d41a7532-e6156.test-cncf-aws.k8s.io -o yaml
I1016 17:06:00.760922    5129 dumplogs.go:72] /tmp/kops.bu2zABmgm get instancegroups --name e2e-89d41a7532-e6156.test-cncf-aws.k8s.io -o yaml
I1016 17:06:01.585769    5129 dumplogs.go:91] kubectl cluster-info dump --all-namespaces -o yaml --output-directory /logs/artifacts/cluster-info
I1016 17:07:02.135193    5129 dumplogs.go:114] /tmp/kops.bu2zABmgm toolbox dump --name e2e-89d41a7532-e6156.test-cncf-aws.k8s.io --private-key /etc/aws-ssh/aws-ssh-private --ssh-user ubuntu -o yaml
I1016 17:07:09.546651    5129 dumplogs.go:143] ssh -i /etc/aws-ssh/aws-ssh-private -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null ubuntu@54.245.24.226 -- kubectl cluster-info dump --all-namespaces -o yaml --output-directory /tmp/cluster-info
Warning: Permanently added '54.245.24.226' (ECDSA) to the list of known hosts.
Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get events)
W1016 17:08:10.973731    5129 down.go:34] Dumping cluster logs at the start of Down() failed: exit status 1
I1016 17:08:10.973799    5129 down.go:48] /tmp/kops.bu2zABmgm delete cluster --name e2e-89d41a7532-e6156.test-cncf-aws.k8s.io --yes
I1016 17:08:10.990343    5185 featureflag.go:166] FeatureFlag "SpecOverrideFlag"=true
I1016 17:08:10.990482    5185 featureflag.go:166] FeatureFlag "AlphaAllowGCE"=true
I1016 17:08:10.990491    5185 featureflag.go:166] FeatureFlag "SpecOverrideFlag"=true
TYPE			NAME													ID
autoscaling-config	master-us-west-2a.masters.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io					lt-05a3dcacd76692d6a
... skipping 450 lines ...