Result: FAILURE
Tests: 1 failed / 2 succeeded
Started: 2022-11-17 13:55
Elapsed: 11m28s
Revision: master

Test Failures


kubetest2 Test (2.67s)

exit status 255 (from junit_runner.xml)


Error lines from build-log.txt

ERROR: (gcloud.auth.activate-service-account) There was a problem refreshing your current auth tokens: ('invalid_grant: Invalid JWT Signature.', {'error': 'invalid_grant', 'error_description': 'Invalid JWT Signature.'})
Please run:

  $ gcloud auth login

to obtain new credentials.

... skipping 171 lines ...
I1117 13:56:22.107628    6109 http.go:37] curl https://storage.googleapis.com/kops-ci/markers/release-1.24/latest-ci-updown-green.txt
I1117 13:56:22.110380    6109 http.go:37] curl https://storage.googleapis.com/k8s-staging-kops/kops/releases/1.24.5+v1.24.4-30-gb814fb998c/linux/amd64/kops
I1117 13:56:25.993346    6109 local.go:42] ⚙️ ssh-keygen -t ed25519 -N  -q -f /tmp/kops/e2e-e2e-kops-grid-calico-u2004-k23-ko24.test-cncf-aws.k8s.io/id_ed25519
I1117 13:56:26.001186    6109 up.go:44] Cleaning up any leaked resources from previous cluster
I1117 13:56:26.001314    6109 dumplogs.go:45] /home/prow/go/src/k8s.io/kops/_rundir/584e39e7-667f-11ed-a804-ce2adf6da13d/kops toolbox dump --name e2e-e2e-kops-grid-calico-u2004-k23-ko24.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /tmp/kops/e2e-e2e-kops-grid-calico-u2004-k23-ko24.test-cncf-aws.k8s.io/id_ed25519 --ssh-user ubuntu
I1117 13:56:26.001340    6109 local.go:42] ⚙️ /home/prow/go/src/k8s.io/kops/_rundir/584e39e7-667f-11ed-a804-ce2adf6da13d/kops toolbox dump --name e2e-e2e-kops-grid-calico-u2004-k23-ko24.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /tmp/kops/e2e-e2e-kops-grid-calico-u2004-k23-ko24.test-cncf-aws.k8s.io/id_ed25519 --ssh-user ubuntu
W1117 13:56:26.493714    6109 down.go:34] Dumping cluster logs at the start of Down() failed: exit status 1
I1117 13:56:26.493797    6109 down.go:48] /home/prow/go/src/k8s.io/kops/_rundir/584e39e7-667f-11ed-a804-ce2adf6da13d/kops delete cluster --name e2e-e2e-kops-grid-calico-u2004-k23-ko24.test-cncf-aws.k8s.io --yes
I1117 13:56:26.493818    6109 local.go:42] ⚙️ /home/prow/go/src/k8s.io/kops/_rundir/584e39e7-667f-11ed-a804-ce2adf6da13d/kops delete cluster --name e2e-e2e-kops-grid-calico-u2004-k23-ko24.test-cncf-aws.k8s.io --yes
I1117 13:56:26.517563    6141 featureflag.go:162] FeatureFlag "SpecOverrideFlag"=true
Error: error reading cluster configuration: Cluster.kops.k8s.io "e2e-e2e-kops-grid-calico-u2004-k23-ko24.test-cncf-aws.k8s.io" not found
I1117 13:56:27.004853    6109 http.go:37] curl http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip
2022/11/17 13:56:27 failed to get external ip from metadata service: http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip returned 404
I1117 13:56:27.020713    6109 http.go:37] curl https://ip.jsb.workers.dev
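The two curl lines above show the runner's external-IP discovery: it first asks the GCE metadata server (which returns 404 because this job is not running on a plain GCE VM) and then falls back to a public IP echo service. A minimal sketch of that fallback logic; the fetcher functions below are illustrative stand-ins, not the runner's real code:

```python
def external_ip(fetchers):
    """Try each fetcher in order; return the first IP obtained.

    Each fetcher is a callable returning an IP string, or raising on
    failure (e.g. the 404 from the metadata server seen in the log).
    """
    errors = []
    for fetch in fetchers:
        try:
            return fetch().strip()
        except Exception as exc:  # illustrative only; real code checks status codes
            errors.append(exc)
    raise RuntimeError(f"no fetcher returned an external IP: {errors}")


def metadata_fetch():
    # Stand-in for: curl http://metadata.google.internal/computeMetadata/v1/...
    raise RuntimeError("metadata service returned 404")


def echo_service_fetch():
    # Stand-in for: curl https://ip.jsb.workers.dev
    return "35.224.130.54\n"
```

With these stubs, `external_ip([metadata_fetch, echo_service_fetch])` yields `35.224.130.54`, which is the address that later appears in the `--admin-access 35.224.130.54/32` flag on the create-cluster command.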
I1117 13:56:27.116561    6109 up.go:167] /home/prow/go/src/k8s.io/kops/_rundir/584e39e7-667f-11ed-a804-ce2adf6da13d/kops create cluster --name e2e-e2e-kops-grid-calico-u2004-k23-ko24.test-cncf-aws.k8s.io --cloud aws --kubernetes-version https://storage.googleapis.com/kubernetes-release/release/v1.23.14 --ssh-public-key /tmp/kops/e2e-e2e-kops-grid-calico-u2004-k23-ko24.test-cncf-aws.k8s.io/id_ed25519.pub --override cluster.spec.nodePortAccess=0.0.0.0/0 --image=099720109477/ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-20221018 --channel=alpha --networking=calico --container-runtime=containerd --admin-access 35.224.130.54/32 --master-count 1 --master-volume-size 48 --node-count 4 --node-volume-size 48 --zones eu-central-1a --master-size c5.large
I1117 13:56:27.116608    6109 local.go:42] ⚙️ /home/prow/go/src/k8s.io/kops/_rundir/584e39e7-667f-11ed-a804-ce2adf6da13d/kops create cluster --name e2e-e2e-kops-grid-calico-u2004-k23-ko24.test-cncf-aws.k8s.io --cloud aws --kubernetes-version https://storage.googleapis.com/kubernetes-release/release/v1.23.14 --ssh-public-key /tmp/kops/e2e-e2e-kops-grid-calico-u2004-k23-ko24.test-cncf-aws.k8s.io/id_ed25519.pub --override cluster.spec.nodePortAccess=0.0.0.0/0 --image=099720109477/ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-20221018 --channel=alpha --networking=calico --container-runtime=containerd --admin-access 35.224.130.54/32 --master-count 1 --master-volume-size 48 --node-count 4 --node-volume-size 48 --zones eu-central-1a --master-size c5.large
I1117 13:56:27.140213    6152 featureflag.go:162] FeatureFlag "SpecOverrideFlag"=true
I1117 13:56:27.169967    6152 create_cluster.go:864] Using SSH public key: /tmp/kops/e2e-e2e-kops-grid-calico-u2004-k23-ko24.test-cncf-aws.k8s.io/id_ed25519.pub
I1117 13:56:27.662758    6152 new_cluster.go:1168]  Cloud Provider ID = aws
... skipping 546 lines ...

I1117 13:57:11.135417    6109 up.go:251] /home/prow/go/src/k8s.io/kops/_rundir/584e39e7-667f-11ed-a804-ce2adf6da13d/kops validate cluster --name e2e-e2e-kops-grid-calico-u2004-k23-ko24.test-cncf-aws.k8s.io --count 10 --wait 15m0s
I1117 13:57:11.135482    6109 local.go:42] ⚙️ /home/prow/go/src/k8s.io/kops/_rundir/584e39e7-667f-11ed-a804-ce2adf6da13d/kops validate cluster --name e2e-e2e-kops-grid-calico-u2004-k23-ko24.test-cncf-aws.k8s.io --count 10 --wait 15m0s
I1117 13:57:11.160910    6188 featureflag.go:162] FeatureFlag "SpecOverrideFlag"=true
Validating cluster e2e-e2e-kops-grid-calico-u2004-k23-ko24.test-cncf-aws.k8s.io

W1117 13:57:12.490917    6188 validate_cluster.go:184] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-e2e-kops-grid-calico-u2004-k23-ko24.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-central-1a	Master	c5.large	1	1	eu-central-1a
nodes-eu-central-1a	Node	t3.medium	4	4	eu-central-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
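The validation message above describes kops's startup gate: `create cluster` installs a placeholder A record (`203.0.113.123`, from the TEST-NET-3 documentation range) for `api.<cluster>`, and the dns-controller deployment later overwrites it with the real master IP. A minimal sketch of that readiness condition, with the DNS lookup injected as a resolver callable (a hypothetical helper, not kops code):

```python
KOPS_PLACEHOLDER_IP = "203.0.113.123"  # placeholder kops writes at cluster creation


def api_dns_ready(cluster_name, resolve):
    """Return True once api.<cluster> resolves to a non-placeholder address.

    `resolve` maps hostname -> IP string; in real use it would be
    socket.gethostbyname, stubbed here so the check is testable offline.
    """
    try:
        ip = resolve(f"api.{cluster_name}")
    except OSError:
        return False  # "no such host": the record does not exist yet
    return ip != KOPS_PLACEHOLDER_IP
```

This mirrors the two failure modes visible in the log: the initial `no such host` lookup error, then repeated validation failures while the record still holds the placeholder.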
... skipping repeated validation output: identical "cluster not yet healthy" / dns apiserver "Validation Failed" blocks, retried every ~10s from 13:57:22 through 14:00:13 ...
W1117 14:00:23.330109    6188 validate_cluster.go:232] (will retry): cluster not yet healthy
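The `kops validate cluster ... --wait 15m0s` invocation behaves as a poll loop: run validation, and on any error log "(will retry)" and sleep roughly 10s until the overall wait expires. A rough sketch of that loop shape (not the actual validate_cluster.go code); the sleep and clock are injectable so the loop can be exercised without real delays:

```python
import time


def wait_for_validation(validate, wait_seconds, poll_seconds=10,
                        sleep=time.sleep, clock=time.monotonic):
    """Poll `validate` until it reports no errors or the deadline passes.

    `validate` returns a list of error strings; an empty list means the
    cluster is healthy. Returns True on success, False if the wait is
    exhausted (the caller then reports overall validation failure).
    """
    deadline = clock() + wait_seconds
    while True:
        errors = validate()
        if not errors:
            return True
        if clock() >= deadline:
            return False  # wait budget exhausted, e.g. the 15m0s above
        sleep(poll_seconds)  # log "(will retry)" happens around here
```

In this run the loop eventually succeeds (the cluster is reported ready at 14:03:42), so the failure that kills the job comes later, in the tester step.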
W1117 14:01:03.382565    6188 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://api.e2e-e2e-kops-grid-calico-u2004-k23-ko24.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 203.0.113.123:443: i/o timeout
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-central-1a	Master	c5.large	1	1	eu-central-1a
nodes-eu-central-1a	Node	t3.medium	4	4	eu-central-1a

NODE STATUS
... skipping 14 lines ...
Pod	kube-system/calico-node-qzh6t			system-node-critical pod "calico-node-qzh6t" is pending
Pod	kube-system/coredns-autoscaler-85fcbbb64-sk2qf	system-cluster-critical pod "coredns-autoscaler-85fcbbb64-sk2qf" is pending
Pod	kube-system/ebs-csi-node-d5czr			system-node-critical pod "ebs-csi-node-d5czr" is pending
Pod	kube-system/ebs-csi-node-jqhkd			system-node-critical pod "ebs-csi-node-jqhkd" is pending
Pod	kube-system/ebs-csi-node-ngn67			system-node-critical pod "ebs-csi-node-ngn67" is pending

Validation Failed
W1117 14:01:16.335303    6188 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-central-1a	Master	c5.large	1	1	eu-central-1a
nodes-eu-central-1a	Node	t3.medium	4	4	eu-central-1a

... skipping 12 lines ...
Pod	kube-system/calico-node-ncm9x			system-node-critical pod "calico-node-ncm9x" is pending
Pod	kube-system/calico-node-qzh6t			system-node-critical pod "calico-node-qzh6t" is not ready (calico-node)
Pod	kube-system/ebs-csi-node-d5czr			system-node-critical pod "ebs-csi-node-d5czr" is pending
Pod	kube-system/ebs-csi-node-jqhkd			system-node-critical pod "ebs-csi-node-jqhkd" is pending
Pod	kube-system/ebs-csi-node-ngn67			system-node-critical pod "ebs-csi-node-ngn67" is pending

Validation Failed
W1117 14:01:28.485990    6188 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-central-1a	Master	c5.large	1	1	eu-central-1a
nodes-eu-central-1a	Node	t3.medium	4	4	eu-central-1a

... skipping 12 lines ...
Pod	kube-system/calico-node-qzh6t						system-node-critical pod "calico-node-qzh6t" is not ready (calico-node)
Pod	kube-system/ebs-csi-node-ngn67						system-node-critical pod "ebs-csi-node-ngn67" is pending
Pod	kube-system/kube-proxy-ip-172-20-32-123.eu-central-1.compute.internal	system-node-critical pod "kube-proxy-ip-172-20-32-123.eu-central-1.compute.internal" is pending
Pod	kube-system/kube-proxy-ip-172-20-52-43.eu-central-1.compute.internal	system-node-critical pod "kube-proxy-ip-172-20-52-43.eu-central-1.compute.internal" is pending
Pod	kube-system/kube-proxy-ip-172-20-58-78.eu-central-1.compute.internal	system-node-critical pod "kube-proxy-ip-172-20-58-78.eu-central-1.compute.internal" is pending

Validation Failed
W1117 14:01:40.512288    6188 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-central-1a	Master	c5.large	1	1	eu-central-1a
nodes-eu-central-1a	Node	t3.medium	4	4	eu-central-1a

... skipping 141 lines ...
ip-172-20-63-204.eu-central-1.compute.internal	master	True

Your cluster e2e-e2e-kops-grid-calico-u2004-k23-ko24.test-cncf-aws.k8s.io is ready
I1117 14:03:42.129319    6109 up.go:105] cluster reported as up
I1117 14:03:42.129385    6109 local.go:42] ⚙️ /home/prow/go/bin/kubetest2-tester-kops --ginkgo-args=--debug --test-args=-test.timeout=60m -num-nodes=0 --test-package-marker=stable-1.23.txt --parallel=25
I1117 14:03:42.154662    6198 featureflag.go:160] FeatureFlag "SpecOverrideFlag"=true
F1117 14:03:44.790550    6198 tester.go:477] failed to run ginkgo tester: failed to get kubectl package from published releases: failed to get latest release name: exit status 1
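The fatal error above occurs while the tester resolves `--test-package-marker=stable-1.23.txt`: it reads a one-line marker file naming the release, then downloads kubectl for that version. The `exit status 1` suggests the read happens via a subprocess (likely gcloud/gsutil, which is why the earlier invalid-credentials failure becomes fatal here). A sketch of the marker-resolution step with the fetch stubbed out; the URL pattern and helper names are illustrative assumptions, not the tester's real code:

```python
def resolve_release(marker, fetch):
    """Read a one-line marker file (e.g. stable-1.23.txt) naming a release.

    `fetch` maps a marker filename to its contents; in the real tester this
    is a GCS/HTTP read, which failed here because the gcloud credentials
    could not be refreshed.
    """
    version = fetch(marker).strip()
    if not version.startswith("v"):
        raise ValueError(f"marker {marker!r} did not contain a version: {version!r}")
    return version


def kubectl_url(version, os_arch="linux/amd64"):
    # Illustrative download path for a published kubectl binary.
    return f"https://dl.k8s.io/release/{version}/bin/{os_arch}/kubectl"
```

For example, a marker containing `v1.23.14` (the version this job installed on the cluster) would map to a kubectl download URL for that release; with the fetch failing, the tester aborts before any e2e tests run.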
I1117 14:03:44.795761    6109 dumplogs.go:45] /home/prow/go/src/k8s.io/kops/_rundir/584e39e7-667f-11ed-a804-ce2adf6da13d/kops toolbox dump --name e2e-e2e-kops-grid-calico-u2004-k23-ko24.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /tmp/kops/e2e-e2e-kops-grid-calico-u2004-k23-ko24.test-cncf-aws.k8s.io/id_ed25519 --ssh-user ubuntu
I1117 14:03:44.795825    6109 local.go:42] ⚙️ /home/prow/go/src/k8s.io/kops/_rundir/584e39e7-667f-11ed-a804-ce2adf6da13d/kops toolbox dump --name e2e-e2e-kops-grid-calico-u2004-k23-ko24.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /tmp/kops/e2e-e2e-kops-grid-calico-u2004-k23-ko24.test-cncf-aws.k8s.io/id_ed25519 --ssh-user ubuntu
I1117 14:04:28.556726    6109 dumplogs.go:78] /home/prow/go/src/k8s.io/kops/_rundir/584e39e7-667f-11ed-a804-ce2adf6da13d/kops get cluster --name e2e-e2e-kops-grid-calico-u2004-k23-ko24.test-cncf-aws.k8s.io -o yaml
I1117 14:04:28.556773    6109 local.go:42] ⚙️ /home/prow/go/src/k8s.io/kops/_rundir/584e39e7-667f-11ed-a804-ce2adf6da13d/kops get cluster --name e2e-e2e-kops-grid-calico-u2004-k23-ko24.test-cncf-aws.k8s.io -o yaml
I1117 14:04:29.071048    6109 dumplogs.go:78] /home/prow/go/src/k8s.io/kops/_rundir/584e39e7-667f-11ed-a804-ce2adf6da13d/kops get instancegroups --name e2e-e2e-kops-grid-calico-u2004-k23-ko24.test-cncf-aws.k8s.io -o yaml
I1117 14:04:29.071088    6109 local.go:42] ⚙️ /home/prow/go/src/k8s.io/kops/_rundir/584e39e7-667f-11ed-a804-ce2adf6da13d/kops get instancegroups --name e2e-e2e-kops-grid-calico-u2004-k23-ko24.test-cncf-aws.k8s.io -o yaml
... skipping 289 lines ...
route-table:rtb-0e7d0ebb158056941	ok
vpc:vpc-03135edbd775ac081	ok
dhcp-options:dopt-0ebd0f22102702648	ok
Deleted kubectl config for e2e-e2e-kops-grid-calico-u2004-k23-ko24.test-cncf-aws.k8s.io

Deleted cluster: "e2e-e2e-kops-grid-calico-u2004-k23-ko24.test-cncf-aws.k8s.io"
Error: exit status 255
+ EXIT_VALUE=1
+ set +o xtrace