Result: success
Tests: 0 failed / 1 succeeded
Started: 2022-09-06 19:01
Elapsed: 2h12m
Revision:
Uploader: crier

No Test Failures!



Error lines from build-log.txt

... skipping 217 lines ...
I0906 19:04:41.548254    6453 app.go:64] RunDir for this run: "/home/prow/go/src/k8s.io/kops/_rundir/421c5f44-2e16-11ed-b09a-3e73d40f417f"
I0906 19:04:41.559638    6453 app.go:128] ID for this run: "421c5f44-2e16-11ed-b09a-3e73d40f417f"
I0906 19:04:41.560304    6453 local.go:42] ⚙️ ssh-keygen -t ed25519 -N  -q -f /tmp/kops/e2e-dc9e19fb9a-ad003.test-cncf-aws.k8s.io/id_ed25519
I0906 19:04:41.578175    6453 up.go:44] Cleaning up any leaked resources from previous cluster
I0906 19:04:41.578344    6453 dumplogs.go:45] /tmp/kops.AVtoQnW61 toolbox dump --name e2e-dc9e19fb9a-ad003.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /tmp/kops/e2e-dc9e19fb9a-ad003.test-cncf-aws.k8s.io/id_ed25519 --ssh-user ubuntu
I0906 19:04:41.578452    6453 local.go:42] ⚙️ /tmp/kops.AVtoQnW61 toolbox dump --name e2e-dc9e19fb9a-ad003.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /tmp/kops/e2e-dc9e19fb9a-ad003.test-cncf-aws.k8s.io/id_ed25519 --ssh-user ubuntu
W0906 19:04:42.241005    6453 down.go:34] Dumping cluster logs at the start of Down() failed: exit status 1
I0906 19:04:42.241055    6453 down.go:48] /tmp/kops.AVtoQnW61 delete cluster --name e2e-dc9e19fb9a-ad003.test-cncf-aws.k8s.io --yes
I0906 19:04:42.241068    6453 local.go:42] ⚙️ /tmp/kops.AVtoQnW61 delete cluster --name e2e-dc9e19fb9a-ad003.test-cncf-aws.k8s.io --yes
I0906 19:04:42.281939    6474 featureflag.go:162] FeatureFlag "SpecOverrideFlag"=true
I0906 19:04:42.282038    6474 featureflag.go:162] FeatureFlag "SpecOverrideFlag"=true
Error: error reading cluster configuration: Cluster.kops.k8s.io "e2e-dc9e19fb9a-ad003.test-cncf-aws.k8s.io" not found
I0906 19:04:42.790632    6453 http.go:37] curl http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip
2022/09/06 19:04:42 failed to get external ip from metadata service: http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip returned 404
I0906 19:04:42.813088    6453 http.go:37] curl https://ip.jsb.workers.dev
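
(The harness is discovering its own external IP here so it can scope the --admin-access CIDR in the create-cluster call below: the GCE metadata lookup returned 404, so it fell back to the public IP-echo service. A minimal shell sketch of that fallback, under the assumption that the standard Metadata-Flavor header is sent and that the result is formatted as a /32 — neither detail is shown in the log:)

# Try the GCE metadata server first; on failure, fall back to a public IP-echo service.
EXTERNAL_IP=$(curl -sf -H "Metadata-Flavor: Google" \
  "http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip") \
  || EXTERNAL_IP=$(curl -sf https://ip.jsb.workers.dev)
echo "admin access CIDR: ${EXTERNAL_IP}/32"
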
I0906 19:04:42.977117    6453 up.go:159] /tmp/kops.AVtoQnW61 create cluster --name e2e-dc9e19fb9a-ad003.test-cncf-aws.k8s.io --cloud aws --kubernetes-version v1.25.0 --ssh-public-key /tmp/kops/e2e-dc9e19fb9a-ad003.test-cncf-aws.k8s.io/id_ed25519.pub --override cluster.spec.nodePortAccess=0.0.0.0/0 --networking calico --discovery-store=s3://k8s-kops-prow/e2e-dc9e19fb9a-ad003.test-cncf-aws.k8s.io/discovery --admin-access 34.68.176.126/32 --master-count 1 --master-volume-size 48 --node-count 4 --node-volume-size 48 --zones eu-west-2a --master-size c5.large
I0906 19:04:42.977148    6453 local.go:42] ⚙️ /tmp/kops.AVtoQnW61 create cluster --name e2e-dc9e19fb9a-ad003.test-cncf-aws.k8s.io --cloud aws --kubernetes-version v1.25.0 --ssh-public-key /tmp/kops/e2e-dc9e19fb9a-ad003.test-cncf-aws.k8s.io/id_ed25519.pub --override cluster.spec.nodePortAccess=0.0.0.0/0 --networking calico --discovery-store=s3://k8s-kops-prow/e2e-dc9e19fb9a-ad003.test-cncf-aws.k8s.io/discovery --admin-access 34.68.176.126/32 --master-count 1 --master-volume-size 48 --node-count 4 --node-volume-size 48 --zones eu-west-2a --master-size c5.large
I0906 19:04:43.039806    6484 featureflag.go:162] FeatureFlag "SpecOverrideFlag"=true
I0906 19:04:43.040035    6484 featureflag.go:162] FeatureFlag "SpecOverrideFlag"=true
I0906 19:04:43.077914    6484 create_cluster.go:843] Using SSH public key: /tmp/kops/e2e-dc9e19fb9a-ad003.test-cncf-aws.k8s.io/id_ed25519.pub
... skipping 575 lines ...

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0906 19:05:29.603666    6524 validate_cluster.go:232] (will retry): cluster not yet healthy
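
(The validation message above points at the dns-controller deployment and the protokube logs for diagnosis. A minimal sketch of those checks — the deployment name matches the dns-controller pods seen later in this log; whether protokube is a container or a systemd unit depends on the kops version, and the control-plane IP is a placeholder:)

kubectl -n kube-system get pods -o wide
kubectl -n kube-system logs deployment/dns-controller
# protokube runs on the control-plane host itself, not as a regular pod:
ssh -i /tmp/kops/e2e-dc9e19fb9a-ad003.test-cncf-aws.k8s.io/id_ed25519 \
  ubuntu@<control-plane-ip> -- sudo journalctl -u protokube
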
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.medium	4	4	eu-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0906 19:05:39.647834    6524 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.medium	4	4	eu-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0906 19:05:49.699349    6524 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.medium	4	4	eu-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0906 19:05:59.745754    6524 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.medium	4	4	eu-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0906 19:06:09.822205    6524 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.medium	4	4	eu-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0906 19:06:19.879200    6524 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.medium	4	4	eu-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0906 19:06:29.924557    6524 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.medium	4	4	eu-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0906 19:06:39.999775    6524 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.medium	4	4	eu-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0906 19:06:50.051731    6524 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.medium	4	4	eu-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0906 19:07:00.149824    6524 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.medium	4	4	eu-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0906 19:07:10.215166    6524 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.medium	4	4	eu-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0906 19:07:20.262542    6524 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.medium	4	4	eu-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0906 19:07:30.310781    6524 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.medium	4	4	eu-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0906 19:07:40.365359    6524 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.medium	4	4	eu-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0906 19:07:50.439528    6524 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.medium	4	4	eu-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0906 19:08:00.517194    6524 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.medium	4	4	eu-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0906 19:08:10.571373    6524 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.medium	4	4	eu-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0906 19:08:20.633468    6524 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.medium	4	4	eu-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0906 19:08:30.675111    6524 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.medium	4	4	eu-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0906 19:08:40.723299    6524 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.medium	4	4	eu-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0906 19:08:50.782314    6524 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.medium	4	4	eu-west-2a

... skipping 21 lines ...
Pod	kube-system/ebs-csi-controller-6cc7cc95cb-w5v4x	system-cluster-critical pod "ebs-csi-controller-6cc7cc95cb-w5v4x" is pending
Pod	kube-system/ebs-csi-node-4r5gd			system-node-critical pod "ebs-csi-node-4r5gd" is pending
Pod	kube-system/ebs-csi-node-8hxbk			system-node-critical pod "ebs-csi-node-8hxbk" is pending
Pod	kube-system/ebs-csi-node-92qq2			system-node-critical pod "ebs-csi-node-92qq2" is pending
Pod	kube-system/ebs-csi-node-fr8dh			system-node-critical pod "ebs-csi-node-fr8dh" is pending

Validation Failed
W0906 19:09:03.532377    6524 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.medium	4	4	eu-west-2a

... skipping 20 lines ...
Pod	kube-system/ebs-csi-controller-6cc7cc95cb-w5v4x	system-cluster-critical pod "ebs-csi-controller-6cc7cc95cb-w5v4x" is pending
Pod	kube-system/ebs-csi-node-4r5gd			system-node-critical pod "ebs-csi-node-4r5gd" is pending
Pod	kube-system/ebs-csi-node-8hxbk			system-node-critical pod "ebs-csi-node-8hxbk" is pending
Pod	kube-system/ebs-csi-node-92qq2			system-node-critical pod "ebs-csi-node-92qq2" is pending
Pod	kube-system/ebs-csi-node-fr8dh			system-node-critical pod "ebs-csi-node-fr8dh" is pending

Validation Failed
W0906 19:09:15.370916    6524 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.medium	4	4	eu-west-2a

... skipping 16 lines ...
Pod	kube-system/ebs-csi-controller-6cc7cc95cb-w5v4x	system-cluster-critical pod "ebs-csi-controller-6cc7cc95cb-w5v4x" is pending
Pod	kube-system/ebs-csi-node-4r5gd			system-node-critical pod "ebs-csi-node-4r5gd" is pending
Pod	kube-system/ebs-csi-node-8hxbk			system-node-critical pod "ebs-csi-node-8hxbk" is pending
Pod	kube-system/ebs-csi-node-92qq2			system-node-critical pod "ebs-csi-node-92qq2" is pending
Pod	kube-system/ebs-csi-node-fr8dh			system-node-critical pod "ebs-csi-node-fr8dh" is pending

Validation Failed
W0906 19:09:27.314546    6524 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.medium	4	4	eu-west-2a

... skipping 14 lines ...
Pod	kube-system/ebs-csi-controller-6cc7cc95cb-9ftd9	system-cluster-critical pod "ebs-csi-controller-6cc7cc95cb-9ftd9" is pending
Pod	kube-system/ebs-csi-controller-6cc7cc95cb-w5v4x	system-cluster-critical pod "ebs-csi-controller-6cc7cc95cb-w5v4x" is pending
Pod	kube-system/ebs-csi-node-4r5gd			system-node-critical pod "ebs-csi-node-4r5gd" is pending
Pod	kube-system/ebs-csi-node-8hxbk			system-node-critical pod "ebs-csi-node-8hxbk" is pending
Pod	kube-system/ebs-csi-node-92qq2			system-node-critical pod "ebs-csi-node-92qq2" is pending

Validation Failed
W0906 19:09:39.195405    6524 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.medium	4	4	eu-west-2a

... skipping 11 lines ...
Pod	kube-system/calico-node-nxjjb			system-node-critical pod "calico-node-nxjjb" is not ready (calico-node)
Pod	kube-system/ebs-csi-controller-6cc7cc95cb-w5v4x	system-cluster-critical pod "ebs-csi-controller-6cc7cc95cb-w5v4x" is pending
Pod	kube-system/ebs-csi-node-4r5gd			system-node-critical pod "ebs-csi-node-4r5gd" is pending
Pod	kube-system/ebs-csi-node-8hxbk			system-node-critical pod "ebs-csi-node-8hxbk" is pending
Pod	kube-system/kube-proxy-i-04b470a35bdb4cd2c	system-node-critical pod "kube-proxy-i-04b470a35bdb4cd2c" is pending

Validation Failed
W0906 19:09:51.142790    6524 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.medium	4	4	eu-west-2a

... skipping 7 lines ...

VALIDATION ERRORS
KIND	NAME						MESSAGE
Pod	kube-system/ebs-csi-node-4r5gd			system-node-critical pod "ebs-csi-node-4r5gd" is pending
Pod	kube-system/kube-proxy-i-06c7c5bb36a3f6eab	system-node-critical pod "kube-proxy-i-06c7c5bb36a3f6eab" is pending

Validation Failed
W0906 19:10:03.126802    6524 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.medium	4	4	eu-west-2a

... skipping 195 lines ...
evicting pod kube-system/dns-controller-644b99f887-rtv2q
evicting pod kube-system/calico-kube-controllers-67f589c6cd-h7mjk
I0906 19:12:26.855494    6566 instancegroups.go:660] Waiting for 5s for pods to stabilize after draining.
I0906 19:12:31.856427    6566 instancegroups.go:591] Stopping instance "i-013ba659e4b5ad6d2", node "i-013ba659e4b5ad6d2", in group "master-eu-west-2a.masters.e2e-dc9e19fb9a-ad003.test-cncf-aws.k8s.io" (this may take a while).
I0906 19:12:32.124801    6566 instancegroups.go:436] waiting for 15s after terminating instance
I0906 19:12:47.132386    6566 instancegroups.go:470] Validating the cluster.
I0906 19:12:47.329855    6566 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-dc9e19fb9a-ad003.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 18.130.245.231:443: connect: connection refused.
I0906 19:13:47.418610    6566 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-dc9e19fb9a-ad003.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 18.130.245.231:443: i/o timeout.
I0906 19:14:47.484610    6566 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-dc9e19fb9a-ad003.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 18.130.245.231:443: i/o timeout.
I0906 19:15:47.553294    6566 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-dc9e19fb9a-ad003.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 18.130.245.231:443: i/o timeout.
I0906 19:16:47.620449    6566 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-dc9e19fb9a-ad003.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 18.130.245.231:443: i/o timeout.
I0906 19:17:20.434893    6566 instancegroups.go:506] Cluster validated; revalidating in 10s to make sure it does not flap.
I0906 19:17:32.512007    6566 instancegroups.go:506] Cluster validated; revalidating in 10s to make sure it does not flap.
I0906 19:17:44.436079    6566 instancegroups.go:506] Cluster validated; revalidating in 10s to make sure it does not flap.
I0906 19:17:56.420778    6566 instancegroups.go:506] Cluster validated; revalidating in 10s to make sure it does not flap.
I0906 19:18:08.368684    6566 instancegroups.go:506] Cluster validated; revalidating in 10s to make sure it does not flap.
I0906 19:18:20.305404    6566 instancegroups.go:506] Cluster validated; revalidating in 10s to make sure it does not flap.
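
(The rolling update above follows a validate-then-revalidate pattern: after terminating an instance it retries every 30s until validation passes, then revalidates on a 10s cadence to make sure the result does not flap. A small shell sketch of that loop, assuming a consecutive-pass threshold of 3 — the actual threshold used by kops is not shown in this log:)

passes=0
while [ "$passes" -lt 3 ]; do
  if kops validate cluster --name e2e-dc9e19fb9a-ad003.test-cncf-aws.k8s.io; then
    passes=$((passes + 1))   # validated; revalidate so the result does not flap
    sleep 10
  else
    passes=0                 # any failed validation resets the streak
    sleep 30                 # matches the "will retry in 30s" cadence above
  fi
done
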
... skipping 222 lines ...
evicting pod kube-system/dns-controller-644b99f887-8dpp8
evicting pod kube-system/calico-kube-controllers-67f589c6cd-lvv7t
I0906 19:42:42.600667    6639 instancegroups.go:660] Waiting for 5s for pods to stabilize after draining.
I0906 19:42:47.601082    6639 instancegroups.go:591] Stopping instance "i-01d17bb9b8e906528", node "i-01d17bb9b8e906528", in group "master-eu-west-2a.masters.e2e-dc9e19fb9a-ad003.test-cncf-aws.k8s.io" (this may take a while).
I0906 19:42:47.863098    6639 instancegroups.go:436] waiting for 15s after terminating instance
I0906 19:43:02.864439    6639 instancegroups.go:470] Validating the cluster.
I0906 19:43:03.047819    6639 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-dc9e19fb9a-ad003.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 35.178.204.255:443: connect: connection refused.
I0906 19:44:03.109404    6639 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-dc9e19fb9a-ad003.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 35.178.204.255:443: i/o timeout.
I0906 19:45:03.160597    6639 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-dc9e19fb9a-ad003.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 35.178.204.255:443: i/o timeout.
I0906 19:46:03.212055    6639 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-dc9e19fb9a-ad003.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 35.178.204.255:443: i/o timeout.
I0906 19:47:03.291102    6639 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-dc9e19fb9a-ad003.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 35.178.204.255:443: i/o timeout.
I0906 19:47:36.027158    6639 instancegroups.go:506] Cluster validated; revalidating in 10s to make sure it does not flap.
I0906 19:47:47.902039    6639 instancegroups.go:506] Cluster validated; revalidating in 10s to make sure it does not flap.
I0906 19:47:59.686334    6639 instancegroups.go:506] Cluster validated; revalidating in 10s to make sure it does not flap.
I0906 19:48:11.543442    6639 instancegroups.go:506] Cluster validated; revalidating in 10s to make sure it does not flap.
I0906 19:48:23.300105    6639 instancegroups.go:506] Cluster validated; revalidating in 10s to make sure it does not flap.
I0906 19:48:35.202728    6639 instancegroups.go:506] Cluster validated; revalidating in 10s to make sure it does not flap.
... skipping 227 lines ...
evicting pod kube-system/dns-controller-644b99f887-kbtjc
evicting pod kube-system/calico-kube-controllers-67f589c6cd-wphjc
I0906 20:16:43.528501    6708 instancegroups.go:660] Waiting for 5s for pods to stabilize after draining.
I0906 20:16:48.528819    6708 instancegroups.go:591] Stopping instance "i-0dd01c8a2fb243b62", node "i-0dd01c8a2fb243b62", in group "master-eu-west-2a.masters.e2e-dc9e19fb9a-ad003.test-cncf-aws.k8s.io" (this may take a while).
I0906 20:16:48.814325    6708 instancegroups.go:436] waiting for 15s after terminating instance
I0906 20:17:03.822863    6708 instancegroups.go:470] Validating the cluster.
I0906 20:17:04.036168    6708 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-dc9e19fb9a-ad003.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 3.8.116.72:443: connect: connection refused.
I0906 20:18:04.098743    6708 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-dc9e19fb9a-ad003.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 3.8.116.72:443: i/o timeout.
I0906 20:19:04.153418    6708 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-dc9e19fb9a-ad003.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 3.8.116.72:443: i/o timeout.
I0906 20:20:04.208524    6708 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-dc9e19fb9a-ad003.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 3.8.116.72:443: i/o timeout.
I0906 20:21:04.284159    6708 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-dc9e19fb9a-ad003.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 3.8.116.72:443: i/o timeout.
I0906 20:22:04.375035    6708 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-dc9e19fb9a-ad003.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 3.8.116.72:443: i/o timeout.
I0906 20:22:37.021599    6708 instancegroups.go:526] Cluster did not pass validation, will retry in "30s": node "i-01a1dc38560808ba1" of role "node" is not ready, node "i-083dc57fc60bc1953" of role "node" is not ready, node "i-0c5e12ed275ab2eb2" of role "node" is not ready, node "i-0cc2987a6d7ee96e4" of role "node" is not ready, system-node-critical pod "calico-node-8w49l" is not ready (calico-node).
I0906 20:23:08.891659    6708 instancegroups.go:526] Cluster did not pass validation, will retry in "30s": node "i-01a1dc38560808ba1" of role "node" is not ready, node "i-083dc57fc60bc1953" of role "node" is not ready, node "i-0c5e12ed275ab2eb2" of role "node" is not ready, node "i-0cc2987a6d7ee96e4" of role "node" is not ready, system-node-critical pod "calico-node-8w49l" is not ready (calico-node).
I0906 20:23:40.768102    6708 instancegroups.go:526] Cluster did not pass validation, will retry in "30s": node "i-01a1dc38560808ba1" of role "node" is not ready, node "i-083dc57fc60bc1953" of role "node" is not ready, node "i-0c5e12ed275ab2eb2" of role "node" is not ready, node "i-0cc2987a6d7ee96e4" of role "node" is not ready, system-node-critical pod "calico-node-8w49l" is not ready (calico-node).
I0906 20:24:12.713689    6708 instancegroups.go:526] Cluster did not pass validation, will retry in "30s": node "i-01a1dc38560808ba1" of role "node" is not ready, node "i-083dc57fc60bc1953" of role "node" is not ready, node "i-0c5e12ed275ab2eb2" of role "node" is not ready, node "i-0cc2987a6d7ee96e4" of role "node" is not ready, system-node-critical pod "calico-node-8w49l" is not ready (calico-node).
I0906 20:24:44.600909    6708 instancegroups.go:526] Cluster did not pass validation, will retry in "30s": node "i-01a1dc38560808ba1" of role "node" is not ready, node "i-083dc57fc60bc1953" of role "node" is not ready, node "i-0c5e12ed275ab2eb2" of role "node" is not ready, node "i-0cc2987a6d7ee96e4" of role "node" is not ready, system-node-critical pod "calico-node-8w49l" is not ready (calico-node).
I0906 20:25:16.528201    6708 instancegroups.go:526] Cluster did not pass validation, will retry in "30s": node "i-01a1dc38560808ba1" of role "node" is not ready, node "i-083dc57fc60bc1953" of role "node" is not ready, node "i-0c5e12ed275ab2eb2" of role "node" is not ready, node "i-0cc2987a6d7ee96e4" of role "node" is not ready, system-node-critical pod "calico-node-8w49l" is not ready (calico-node).
... skipping 251 lines ...
Warning: Permanently added '35.179.96.254' (ECDSA) to the list of known hosts.
I0906 20:55:40.721323    6768 dumplogs.go:248] ssh -i /tmp/kops/e2e-dc9e19fb9a-ad003.test-cncf-aws.k8s.io/id_ed25519 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null ubuntu@35.179.96.254 -- rm -rf /tmp/cluster-info
I0906 20:55:40.721479    6768 local.go:42] ⚙️ ssh -i /tmp/kops/e2e-dc9e19fb9a-ad003.test-cncf-aws.k8s.io/id_ed25519 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null ubuntu@35.179.96.254 -- rm -rf /tmp/cluster-info
Warning: Permanently added '35.179.96.254' (ECDSA) to the list of known hosts.
I0906 20:55:42.218827    6768 dumplogs.go:126] kubectl --request-timeout 5s get csinodes --all-namespaces -o yaml
I0906 20:55:42.218863    6768 local.go:42] ⚙️ kubectl --request-timeout 5s get csinodes --all-namespaces -o yaml
W0906 20:55:43.048215    6768 dumplogs.go:132] Failed to get csinodes: exit status 1
I0906 20:55:43.048483    6768 dumplogs.go:126] kubectl --request-timeout 5s get csidrivers --all-namespaces -o yaml
I0906 20:55:43.048500    6768 local.go:42] ⚙️ kubectl --request-timeout 5s get csidrivers --all-namespaces -o yaml
W0906 20:55:43.838137    6768 dumplogs.go:132] Failed to get csidrivers: exit status 1
I0906 20:55:43.838263    6768 dumplogs.go:126] kubectl --request-timeout 5s get storageclasses --all-namespaces -o yaml
I0906 20:55:43.838275    6768 local.go:42] ⚙️ kubectl --request-timeout 5s get storageclasses --all-namespaces -o yaml
W0906 20:55:44.600737    6768 dumplogs.go:132] Failed to get storageclasses: exit status 1
I0906 20:55:44.600853    6768 dumplogs.go:126] kubectl --request-timeout 5s get persistentvolumes --all-namespaces -o yaml
I0906 20:55:44.600863    6768 local.go:42] ⚙️ kubectl --request-timeout 5s get persistentvolumes --all-namespaces -o yaml
W0906 20:55:45.410548    6768 dumplogs.go:132] Failed to get persistentvolumes: exit status 1
I0906 20:55:45.410668    6768 dumplogs.go:126] kubectl --request-timeout 5s get mutatingwebhookconfigurations --all-namespaces -o yaml
I0906 20:55:45.410678    6768 local.go:42] ⚙️ kubectl --request-timeout 5s get mutatingwebhookconfigurations --all-namespaces -o yaml
W0906 20:55:46.209288    6768 dumplogs.go:132] Failed to get mutatingwebhookconfigurations: exit status 1
I0906 20:55:46.209407    6768 dumplogs.go:126] kubectl --request-timeout 5s get validatingwebhookconfigurations --all-namespaces -o yaml
I0906 20:55:46.209418    6768 local.go:42] ⚙️ kubectl --request-timeout 5s get validatingwebhookconfigurations --all-namespaces -o yaml
W0906 20:55:46.991335    6768 dumplogs.go:132] Failed to get validatingwebhookconfigurations: exit status 1
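
(Each dump call above has the same shape: a short-timeout kubectl get per resource kind, with failures tolerated because the cluster is already being torn down — hence the repeated "exit status 1". A sketch of that loop; the per-kind output path under /logs/artifacts is an assumption, not shown in the log:)

for kind in csinodes csidrivers storageclasses persistentvolumes \
    mutatingwebhookconfigurations validatingwebhookconfigurations; do
  kubectl --request-timeout 5s get "$kind" --all-namespaces -o yaml \
    > "/logs/artifacts/${kind}.yaml" || echo "Failed to get ${kind}" >&2
done
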
I0906 20:55:46.991392    6768 local.go:42] ⚙️ kubectl --request-timeout 5s get namespaces --no-headers -o custom-columns=name:.metadata.name
I0906 20:55:47.890599    6768 dumplogs.go:162] kubectl get configmaps -n default -o yaml
I0906 20:55:47.890630    6768 local.go:42] ⚙️ kubectl get configmaps -n default -o yaml
I0906 20:55:48.354829    6768 dumplogs.go:188] /tmp/kops.AVtoQnW61 toolbox dump --name e2e-dc9e19fb9a-ad003.test-cncf-aws.k8s.io --private-key /tmp/kops/e2e-dc9e19fb9a-ad003.test-cncf-aws.k8s.io/id_ed25519 --ssh-user ubuntu -o yaml
I0906 20:55:48.354880    6768 local.go:42] ⚙️ /tmp/kops.AVtoQnW61 toolbox dump --name e2e-dc9e19fb9a-ad003.test-cncf-aws.k8s.io --private-key /tmp/kops/e2e-dc9e19fb9a-ad003.test-cncf-aws.k8s.io/id_ed25519 --ssh-user ubuntu -o yaml
I0906 20:56:09.599452    6768 dumplogs.go:217] ssh -i /tmp/kops/e2e-dc9e19fb9a-ad003.test-cncf-aws.k8s.io/id_ed25519 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null ubuntu@35.179.96.254 -- kubectl cluster-info dump --all-namespaces -o yaml --output-directory /tmp/cluster-info
... skipping 517 lines ...