Result: FAILURE
Tests: 0 failed / 1 succeeded
Started: 2022-08-01 20:43
Elapsed: 31m48s
Revision: master

No Test Failures!



Error lines from build-log.txt

... skipping 198 lines ...
I0801 20:44:54.127307    6296 app.go:64] RunDir for this run: "/home/prow/go/src/k8s.io/kops/_rundir/8f57e361-11da-11ed-bfc8-0eb9b3896f8e"
I0801 20:44:54.154704    6296 app.go:128] ID for this run: "8f57e361-11da-11ed-bfc8-0eb9b3896f8e"
I0801 20:44:54.155116    6296 local.go:42] ⚙️ ssh-keygen -t ed25519 -N  -q -f /tmp/kops/e2e-c0d41e2af2-13250.test-cncf-aws.k8s.io/id_ed25519
I0801 20:44:54.161797    6296 up.go:44] Cleaning up any leaked resources from previous cluster
I0801 20:44:54.161896    6296 dumplogs.go:45] /tmp/kops.Sg84wop4j toolbox dump --name e2e-c0d41e2af2-13250.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /tmp/kops/e2e-c0d41e2af2-13250.test-cncf-aws.k8s.io/id_ed25519 --ssh-user ubuntu
I0801 20:44:54.162226    6296 local.go:42] ⚙️ /tmp/kops.Sg84wop4j toolbox dump --name e2e-c0d41e2af2-13250.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /tmp/kops/e2e-c0d41e2af2-13250.test-cncf-aws.k8s.io/id_ed25519 --ssh-user ubuntu
W0801 20:44:54.694516    6296 down.go:34] Dumping cluster logs at the start of Down() failed: exit status 1
I0801 20:44:54.694737    6296 down.go:48] /tmp/kops.Sg84wop4j delete cluster --name e2e-c0d41e2af2-13250.test-cncf-aws.k8s.io --yes
I0801 20:44:54.694838    6296 local.go:42] ⚙️ /tmp/kops.Sg84wop4j delete cluster --name e2e-c0d41e2af2-13250.test-cncf-aws.k8s.io --yes
I0801 20:44:54.710420    6316 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
I0801 20:44:54.710513    6316 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true

error reading cluster configuration: Cluster.kops.k8s.io "e2e-c0d41e2af2-13250.test-cncf-aws.k8s.io" not found
I0801 20:44:55.206585    6296 http.go:37] curl http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip
2022/08/01 20:44:55 failed to get external ip from metadata service: http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip returned 404
I0801 20:44:55.217761    6296 http.go:37] curl https://ip.jsb.workers.dev
I0801 20:44:55.352029    6296 up.go:159] /tmp/kops.Sg84wop4j create cluster --name e2e-c0d41e2af2-13250.test-cncf-aws.k8s.io --cloud aws --kubernetes-version 1.21.0 --ssh-public-key /tmp/kops/e2e-c0d41e2af2-13250.test-cncf-aws.k8s.io/id_ed25519.pub --override cluster.spec.nodePortAccess=0.0.0.0/0 --override=cluster.spec.nodeTerminationHandler.enabled=true --admin-access 104.154.38.161/32 --master-count 1 --master-volume-size 48 --node-count 4 --node-volume-size 48 --zones eu-west-3a --master-size c5.large
I0801 20:44:55.352073    6296 local.go:42] ⚙️ /tmp/kops.Sg84wop4j create cluster --name e2e-c0d41e2af2-13250.test-cncf-aws.k8s.io --cloud aws --kubernetes-version 1.21.0 --ssh-public-key /tmp/kops/e2e-c0d41e2af2-13250.test-cncf-aws.k8s.io/id_ed25519.pub --override cluster.spec.nodePortAccess=0.0.0.0/0 --override=cluster.spec.nodeTerminationHandler.enabled=true --admin-access 104.154.38.161/32 --master-count 1 --master-volume-size 48 --node-count 4 --node-volume-size 48 --zones eu-west-3a --master-size c5.large
I0801 20:44:55.367240    6326 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
I0801 20:44:55.367332    6326 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
I0801 20:44:55.416398    6326 create_cluster.go:728] Using SSH public key: /tmp/kops/e2e-c0d41e2af2-13250.test-cncf-aws.k8s.io/id_ed25519.pub
... skipping 492 lines ...

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0801 20:45:43.207187    6364 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-3a	Master	c5.large	1	1	eu-west-3a
nodes-eu-west-3a	Node	t3.medium	4	4	eu-west-3a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0801 20:45:53.243264    6364 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-3a	Master	c5.large	1	1	eu-west-3a
nodes-eu-west-3a	Node	t3.medium	4	4	eu-west-3a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0801 20:46:03.275358    6364 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-3a	Master	c5.large	1	1	eu-west-3a
nodes-eu-west-3a	Node	t3.medium	4	4	eu-west-3a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0801 20:46:13.307013    6364 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-3a	Master	c5.large	1	1	eu-west-3a
nodes-eu-west-3a	Node	t3.medium	4	4	eu-west-3a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0801 20:46:23.341434    6364 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-3a	Master	c5.large	1	1	eu-west-3a
nodes-eu-west-3a	Node	t3.medium	4	4	eu-west-3a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0801 20:46:33.373556    6364 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-3a	Master	c5.large	1	1	eu-west-3a
nodes-eu-west-3a	Node	t3.medium	4	4	eu-west-3a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0801 20:46:43.409731    6364 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-3a	Master	c5.large	1	1	eu-west-3a
nodes-eu-west-3a	Node	t3.medium	4	4	eu-west-3a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0801 20:46:53.448400    6364 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-3a	Master	c5.large	1	1	eu-west-3a
nodes-eu-west-3a	Node	t3.medium	4	4	eu-west-3a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0801 20:47:03.486633    6364 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-3a	Master	c5.large	1	1	eu-west-3a
nodes-eu-west-3a	Node	t3.medium	4	4	eu-west-3a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0801 20:47:13.518532    6364 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-3a	Master	c5.large	1	1	eu-west-3a
nodes-eu-west-3a	Node	t3.medium	4	4	eu-west-3a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0801 20:47:23.556424    6364 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-3a	Master	c5.large	1	1	eu-west-3a
nodes-eu-west-3a	Node	t3.medium	4	4	eu-west-3a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0801 20:47:33.589607    6364 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-3a	Master	c5.large	1	1	eu-west-3a
nodes-eu-west-3a	Node	t3.medium	4	4	eu-west-3a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0801 20:47:43.620040    6364 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-3a	Master	c5.large	1	1	eu-west-3a
nodes-eu-west-3a	Node	t3.medium	4	4	eu-west-3a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0801 20:47:53.651805    6364 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-3a	Master	c5.large	1	1	eu-west-3a
nodes-eu-west-3a	Node	t3.medium	4	4	eu-west-3a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0801 20:48:03.698473    6364 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-3a	Master	c5.large	1	1	eu-west-3a
nodes-eu-west-3a	Node	t3.medium	4	4	eu-west-3a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0801 20:48:13.733191    6364 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-3a	Master	c5.large	1	1	eu-west-3a
nodes-eu-west-3a	Node	t3.medium	4	4	eu-west-3a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0801 20:48:23.768751    6364 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-3a	Master	c5.large	1	1	eu-west-3a
nodes-eu-west-3a	Node	t3.medium	4	4	eu-west-3a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0801 20:48:33.802769    6364 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-3a	Master	c5.large	1	1	eu-west-3a
nodes-eu-west-3a	Node	t3.medium	4	4	eu-west-3a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0801 20:48:43.833777    6364 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-3a	Master	c5.large	1	1	eu-west-3a
nodes-eu-west-3a	Node	t3.medium	4	4	eu-west-3a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0801 20:48:53.889428    6364 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-3a	Master	c5.large	1	1	eu-west-3a
nodes-eu-west-3a	Node	t3.medium	4	4	eu-west-3a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0801 20:49:03.939006    6364 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-3a	Master	c5.large	1	1	eu-west-3a
nodes-eu-west-3a	Node	t3.medium	4	4	eu-west-3a

... skipping 14 lines ...
Pod	kube-system/aws-node-termination-handler-q2rzx				system-node-critical pod "aws-node-termination-handler-q2rzx" is pending
Pod	kube-system/coredns-5dc785954d-p8hd9					system-cluster-critical pod "coredns-5dc785954d-p8hd9" is pending
Pod	kube-system/coredns-autoscaler-84d4cfd89c-58z2f				system-cluster-critical pod "coredns-autoscaler-84d4cfd89c-58z2f" is pending
Pod	kube-system/kube-proxy-ip-172-20-33-1.eu-west-3.compute.internal	system-node-critical pod "kube-proxy-ip-172-20-33-1.eu-west-3.compute.internal" is pending
Pod	kube-system/kube-proxy-ip-172-20-60-54.eu-west-3.compute.internal	system-node-critical pod "kube-proxy-ip-172-20-60-54.eu-west-3.compute.internal" is pending

Validation Failed
W0801 20:49:16.840611    6364 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-3a	Master	c5.large	1	1	eu-west-3a
nodes-eu-west-3a	Node	t3.medium	4	4	eu-west-3a

... skipping 6 lines ...
ip-172-20-60-54.eu-west-3.compute.internal	node	True

VALIDATION ERRORS
KIND	NAME						MESSAGE
Node	ip-172-20-42-23.eu-west-3.compute.internal	node "ip-172-20-42-23.eu-west-3.compute.internal" of role "node" is not ready

Validation Failed
W0801 20:49:28.941312    6364 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-3a	Master	c5.large	1	1	eu-west-3a
nodes-eu-west-3a	Node	t3.medium	4	4	eu-west-3a

... skipping 945 lines ...
  	                    	        for cmd in "${commands[@]}"; do
  	                    	...
  	                    	            continue
  	                    	          fi
  	                    	+         if ! validate-hash "${file}" "${hash}"; then
  	                    	-         if [[ -n "${hash}" ]] && ! validate-hash "${file}" "${hash}"; then
  	                    	            echo "== Hash validation of ${url} failed. Retrying. =="
  	                    	            rm -f "${file}"
  	                    	          else
  	                    	-           if [[ -n "${hash}" ]]; then
  	                    	+           echo "== Downloaded ${url} (SHA256 = ${hash}) =="
  	                    	-             echo "== Downloaded ${url} (SHA1 = ${hash}) =="
  	                    	-           else
... skipping 293 lines ...
  	                    	        for cmd in "${commands[@]}"; do
  	                    	...
  	                    	            continue
  	                    	          fi
  	                    	+         if ! validate-hash "${file}" "${hash}"; then
  	                    	-         if [[ -n "${hash}" ]] && ! validate-hash "${file}" "${hash}"; then
  	                    	            echo "== Hash validation of ${url} failed. Retrying. =="
  	                    	            rm -f "${file}"
  	                    	          else
  	                    	-           if [[ -n "${hash}" ]]; then
  	                    	+           echo "== Downloaded ${url} (SHA256 = ${hash}) =="
  	                    	-             echo "== Downloaded ${url} (SHA1 = ${hash}) =="
  	                    	-           else
... skipping 1152 lines ...
WARNING: ignoring DaemonSet-managed Pods: kube-system/aws-node-termination-handler-xnvxz, kube-system/kops-controller-whjxm
evicting pod kube-system/dns-controller-7fc66c4dd4-qwltw
I0801 20:52:10.351086    6461 instancegroups.go:660] Waiting for 5s for pods to stabilize after draining.
I0801 20:52:15.354685    6461 instancegroups.go:591] Stopping instance "i-035952dbdc4d77ff3", node "ip-172-20-39-143.eu-west-3.compute.internal", in group "master-eu-west-3a.masters.e2e-c0d41e2af2-13250.test-cncf-aws.k8s.io" (this may take a while).
I0801 20:52:15.656889    6461 instancegroups.go:436] waiting for 15s after terminating instance
I0801 20:52:30.661782    6461 instancegroups.go:470] Validating the cluster.
I0801 20:53:00.699617    6461 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-c0d41e2af2-13250.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 35.181.7.11:443: i/o timeout.
I0801 20:54:00.744665    6461 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-c0d41e2af2-13250.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 35.181.7.11:443: i/o timeout.
I0801 20:55:00.787911    6461 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-c0d41e2af2-13250.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 35.181.7.11:443: i/o timeout.
I0801 20:56:00.830173    6461 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-c0d41e2af2-13250.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 35.181.7.11:443: i/o timeout.
I0801 20:57:00.865390    6461 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-c0d41e2af2-13250.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 35.181.7.11:443: i/o timeout.
I0801 20:58:00.902524    6461 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-c0d41e2af2-13250.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 35.181.7.11:443: i/o timeout.
I0801 20:59:00.939787    6461 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-c0d41e2af2-13250.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 35.181.7.11:443: i/o timeout.
I0801 21:00:00.977922    6461 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-c0d41e2af2-13250.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 35.181.7.11:443: i/o timeout.
I0801 21:01:01.031416    6461 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-c0d41e2af2-13250.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 35.181.7.11:443: i/o timeout.
I0801 21:02:01.062009    6461 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-c0d41e2af2-13250.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 35.181.7.11:443: i/o timeout.
I0801 21:03:01.102450    6461 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-c0d41e2af2-13250.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 35.181.7.11:443: i/o timeout.
I0801 21:04:01.182749    6461 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-c0d41e2af2-13250.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 35.181.7.11:443: i/o timeout.
I0801 21:05:01.235307    6461 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-c0d41e2af2-13250.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 35.181.7.11:443: i/o timeout.
I0801 21:06:01.288854    6461 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-c0d41e2af2-13250.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 35.181.7.11:443: i/o timeout.
I0801 21:07:01.323079    6461 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-c0d41e2af2-13250.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 35.181.7.11:443: i/o timeout.
I0801 21:08:01.377476    6461 instancegroups.go:513] Cluster did not validate within deadline: error listing nodes: Get "https://api.e2e-c0d41e2af2-13250.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 35.181.7.11:443: i/o timeout.
E0801 21:08:01.377535    6461 instancegroups.go:475] Cluster did not validate within 15m0s
Error: master not healthy after update, stopping rolling-update: "error validating cluster after terminating instance: cluster did not validate within a duration of \"15m0s\""
+ kops-finish
+ kubetest2 kops -v=2 --cloud-provider=aws --cluster-name=e2e-c0d41e2af2-13250.test-cncf-aws.k8s.io --kops-root=/home/prow/go/src/k8s.io/kops --admin-access= --env=KOPS_FEATURE_FLAGS=SpecOverrideFlag --kops-binary-path=/tmp/kops.qeFUqHXUB --down
I0801 21:08:01.473636    6480 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
I0801 21:08:01.477601    6480 app.go:61] The files in RunDir shall not be part of Artifacts
I0801 21:08:01.477658    6480 app.go:62] pass rundir-in-artifacts flag True for RunDir to be part of Artifacts
I0801 21:08:01.477689    6480 app.go:64] RunDir for this run: "/home/prow/go/src/k8s.io/kops/_rundir/8f57e361-11da-11ed-bfc8-0eb9b3896f8e"
... skipping 17 lines ...
Warning: Permanently added '15.188.84.66' (ECDSA) to the list of known hosts.
I0801 21:10:40.892727    6480 dumplogs.go:248] ssh -i /tmp/kops/e2e-c0d41e2af2-13250.test-cncf-aws.k8s.io/id_ed25519 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null ubuntu@15.188.84.66 -- rm -rf /tmp/cluster-info
I0801 21:10:40.892777    6480 local.go:42] ⚙️ ssh -i /tmp/kops/e2e-c0d41e2af2-13250.test-cncf-aws.k8s.io/id_ed25519 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null ubuntu@15.188.84.66 -- rm -rf /tmp/cluster-info
Warning: Permanently added '15.188.84.66' (ECDSA) to the list of known hosts.
I0801 21:10:42.463955    6480 dumplogs.go:126] kubectl --request-timeout 5s get csinodes --all-namespaces -o yaml
I0801 21:10:42.464000    6480 local.go:42] ⚙️ kubectl --request-timeout 5s get csinodes --all-namespaces -o yaml
W0801 21:10:47.527280    6480 dumplogs.go:132] Failed to get csinodes: exit status 1
I0801 21:10:47.527449    6480 dumplogs.go:126] kubectl --request-timeout 5s get csidrivers --all-namespaces -o yaml
I0801 21:10:47.527461    6480 local.go:42] ⚙️ kubectl --request-timeout 5s get csidrivers --all-namespaces -o yaml
W0801 21:10:52.590154    6480 dumplogs.go:132] Failed to get csidrivers: exit status 1
I0801 21:10:52.590409    6480 dumplogs.go:126] kubectl --request-timeout 5s get storageclasses --all-namespaces -o yaml
I0801 21:10:52.590462    6480 local.go:42] ⚙️ kubectl --request-timeout 5s get storageclasses --all-namespaces -o yaml
W0801 21:10:57.653711    6480 dumplogs.go:132] Failed to get storageclasses: exit status 1
I0801 21:10:57.653872    6480 dumplogs.go:126] kubectl --request-timeout 5s get persistentvolumes --all-namespaces -o yaml
I0801 21:10:57.653891    6480 local.go:42] ⚙️ kubectl --request-timeout 5s get persistentvolumes --all-namespaces -o yaml
W0801 21:11:02.715600    6480 dumplogs.go:132] Failed to get persistentvolumes: exit status 1
I0801 21:11:02.715793    6480 dumplogs.go:126] kubectl --request-timeout 5s get mutatingwebhookconfigurations --all-namespaces -o yaml
I0801 21:11:02.715805    6480 local.go:42] ⚙️ kubectl --request-timeout 5s get mutatingwebhookconfigurations --all-namespaces -o yaml
W0801 21:11:07.779864    6480 dumplogs.go:132] Failed to get mutatingwebhookconfigurations: exit status 1
I0801 21:11:07.780090    6480 dumplogs.go:126] kubectl --request-timeout 5s get validatingwebhookconfigurations --all-namespaces -o yaml
I0801 21:11:07.780120    6480 local.go:42] ⚙️ kubectl --request-timeout 5s get validatingwebhookconfigurations --all-namespaces -o yaml
W0801 21:11:12.841584    6480 dumplogs.go:132] Failed to get validatingwebhookconfigurations: exit status 1
I0801 21:11:12.841630    6480 local.go:42] ⚙️ kubectl --request-timeout 5s get namespaces --no-headers -o custom-columns=name:.metadata.name
W0801 21:11:17.900464    6480 down.go:34] Dumping cluster logs at the start of Down() failed: failed to get namespaces: exit status 1
I0801 21:11:17.900535    6480 down.go:48] /tmp/kops.qeFUqHXUB delete cluster --name e2e-c0d41e2af2-13250.test-cncf-aws.k8s.io --yes
I0801 21:11:17.900546    6480 local.go:42] ⚙️ /tmp/kops.qeFUqHXUB delete cluster --name e2e-c0d41e2af2-13250.test-cncf-aws.k8s.io --yes
I0801 21:11:17.932013    6613 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
I0801 21:11:17.932239    6613 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
TYPE			NAME													ID
autoscaling-config	master-eu-west-3a.masters.e2e-c0d41e2af2-13250.test-cncf-aws.k8s.io					lt-0307c4c1e3a8ce385
... skipping 522 lines ...