Result: FAILURE
Tests: 0 failed / 1 succeeded
Started: 2022-08-08 19:53
Elapsed: 33m59s
Revision: master

No Test Failures!



Error lines from build-log.txt

... skipping 234 lines ...
I0808 19:54:21.659560    6304 app.go:62] pass rundir-in-artifacts flag True for RunDir to be part of Artifacts
I0808 19:54:21.659594    6304 app.go:64] RunDir for this run: "/home/prow/go/src/k8s.io/kops/_rundir/a7d3b35a-1753-11ed-962a-a69410124b4c"
I0808 19:54:21.695592    6304 app.go:128] ID for this run: "a7d3b35a-1753-11ed-962a-a69410124b4c"
I0808 19:54:21.696048    6304 local.go:42] ⚙️ ssh-keygen -t ed25519 -N  -q -f /tmp/kops/e2e-ed4da97961-6b857.test-cncf-aws.k8s.io/id_ed25519
I0808 19:54:21.703916    6304 dumplogs.go:45] /tmp/kops.hMKpi1VKh toolbox dump --name e2e-ed4da97961-6b857.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /tmp/kops/e2e-ed4da97961-6b857.test-cncf-aws.k8s.io/id_ed25519 --ssh-user ubuntu
I0808 19:54:21.703960    6304 local.go:42] ⚙️ /tmp/kops.hMKpi1VKh toolbox dump --name e2e-ed4da97961-6b857.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /tmp/kops/e2e-ed4da97961-6b857.test-cncf-aws.k8s.io/id_ed25519 --ssh-user ubuntu
W0808 19:54:22.216248    6304 down.go:34] Dumping cluster logs at the start of Down() failed: exit status 1
I0808 19:54:22.216297    6304 down.go:48] /tmp/kops.hMKpi1VKh delete cluster --name e2e-ed4da97961-6b857.test-cncf-aws.k8s.io --yes
I0808 19:54:22.216307    6304 local.go:42] ⚙️ /tmp/kops.hMKpi1VKh delete cluster --name e2e-ed4da97961-6b857.test-cncf-aws.k8s.io --yes
I0808 19:54:22.246245    6328 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
I0808 19:54:22.246386    6328 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
Error: error reading cluster configuration: Cluster.kops.k8s.io "e2e-ed4da97961-6b857.test-cncf-aws.k8s.io" not found
Error: exit status 1
+ echo 'kubetest2 down failed'
kubetest2 down failed
+ [[ l == \v ]]
++ kops-base-from-marker latest
++ [[ latest =~ ^https: ]]
++ [[ latest == \l\a\t\e\s\t ]]
++ curl -s https://storage.googleapis.com/kops-ci/bin/latest-ci-updown-green.txt
+ KOPS_BASE_URL=https://storage.googleapis.com/kops-ci/bin/1.25.0-alpha.3+v1.25.0-alpha.2-62-g82ec66a033
... skipping 14 lines ...
I0808 19:54:24.067409    6365 app.go:62] pass rundir-in-artifacts flag True for RunDir to be part of Artifacts
I0808 19:54:24.067433    6365 app.go:64] RunDir for this run: "/home/prow/go/src/k8s.io/kops/_rundir/a7d3b35a-1753-11ed-962a-a69410124b4c"
I0808 19:54:24.073764    6365 app.go:128] ID for this run: "a7d3b35a-1753-11ed-962a-a69410124b4c"
I0808 19:54:24.073853    6365 up.go:44] Cleaning up any leaked resources from previous cluster
I0808 19:54:24.073894    6365 dumplogs.go:45] /tmp/kops.RItTOSQun toolbox dump --name e2e-ed4da97961-6b857.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /tmp/kops/e2e-ed4da97961-6b857.test-cncf-aws.k8s.io/id_ed25519 --ssh-user ubuntu
I0808 19:54:24.073936    6365 local.go:42] ⚙️ /tmp/kops.RItTOSQun toolbox dump --name e2e-ed4da97961-6b857.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /tmp/kops/e2e-ed4da97961-6b857.test-cncf-aws.k8s.io/id_ed25519 --ssh-user ubuntu
W0808 19:54:24.601706    6365 down.go:34] Dumping cluster logs at the start of Down() failed: exit status 1
I0808 19:54:24.601924    6365 down.go:48] /tmp/kops.RItTOSQun delete cluster --name e2e-ed4da97961-6b857.test-cncf-aws.k8s.io --yes
I0808 19:54:24.601956    6365 local.go:42] ⚙️ /tmp/kops.RItTOSQun delete cluster --name e2e-ed4da97961-6b857.test-cncf-aws.k8s.io --yes
I0808 19:54:24.635503    6384 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
I0808 19:54:24.635752    6384 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
Error: error reading cluster configuration: Cluster.kops.k8s.io "e2e-ed4da97961-6b857.test-cncf-aws.k8s.io" not found
I0808 19:54:25.081570    6365 http.go:37] curl http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip
2022/08/08 19:54:25 failed to get external ip from metadata service: http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip returned 404
I0808 19:54:25.098432    6365 http.go:37] curl https://ip.jsb.workers.dev
I0808 19:54:25.251494    6365 up.go:159] /tmp/kops.RItTOSQun create cluster --name e2e-ed4da97961-6b857.test-cncf-aws.k8s.io --cloud aws --kubernetes-version v1.24.3 --ssh-public-key /tmp/kops/e2e-ed4da97961-6b857.test-cncf-aws.k8s.io/id_ed25519.pub --override cluster.spec.nodePortAccess=0.0.0.0/0 --networking calico --admin-access 34.132.148.30/32 --master-count 1 --master-volume-size 48 --node-count 4 --node-volume-size 48 --zones ca-central-1a --master-size c5.large
I0808 19:54:25.251539    6365 local.go:42] ⚙️ /tmp/kops.RItTOSQun create cluster --name e2e-ed4da97961-6b857.test-cncf-aws.k8s.io --cloud aws --kubernetes-version v1.24.3 --ssh-public-key /tmp/kops/e2e-ed4da97961-6b857.test-cncf-aws.k8s.io/id_ed25519.pub --override cluster.spec.nodePortAccess=0.0.0.0/0 --networking calico --admin-access 34.132.148.30/32 --master-count 1 --master-volume-size 48 --node-count 4 --node-volume-size 48 --zones ca-central-1a --master-size c5.large
I0808 19:54:25.285309    6396 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
I0808 19:54:25.285429    6396 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
I0808 19:54:25.302899    6396 create_cluster.go:862] Using SSH public key: /tmp/kops/e2e-ed4da97961-6b857.test-cncf-aws.k8s.io/id_ed25519.pub
... skipping 525 lines ...

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0808 19:55:05.227437    6436 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ca-central-1a	Master	c5.large	1	1	ca-central-1a
nodes-ca-central-1a	Node	t3.medium	4	4	ca-central-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0808 19:55:15.267452    6436 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ca-central-1a	Master	c5.large	1	1	ca-central-1a
nodes-ca-central-1a	Node	t3.medium	4	4	ca-central-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0808 19:55:25.301188    6436 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ca-central-1a	Master	c5.large	1	1	ca-central-1a
nodes-ca-central-1a	Node	t3.medium	4	4	ca-central-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0808 19:55:35.352494    6436 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ca-central-1a	Master	c5.large	1	1	ca-central-1a
nodes-ca-central-1a	Node	t3.medium	4	4	ca-central-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0808 19:55:45.385164    6436 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ca-central-1a	Master	c5.large	1	1	ca-central-1a
nodes-ca-central-1a	Node	t3.medium	4	4	ca-central-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0808 19:55:55.435880    6436 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ca-central-1a	Master	c5.large	1	1	ca-central-1a
nodes-ca-central-1a	Node	t3.medium	4	4	ca-central-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0808 19:56:05.489890    6436 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ca-central-1a	Master	c5.large	1	1	ca-central-1a
nodes-ca-central-1a	Node	t3.medium	4	4	ca-central-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0808 19:56:15.526031    6436 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ca-central-1a	Master	c5.large	1	1	ca-central-1a
nodes-ca-central-1a	Node	t3.medium	4	4	ca-central-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0808 19:56:25.567436    6436 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ca-central-1a	Master	c5.large	1	1	ca-central-1a
nodes-ca-central-1a	Node	t3.medium	4	4	ca-central-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0808 19:56:35.602362    6436 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ca-central-1a	Master	c5.large	1	1	ca-central-1a
nodes-ca-central-1a	Node	t3.medium	4	4	ca-central-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0808 19:56:45.644078    6436 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ca-central-1a	Master	c5.large	1	1	ca-central-1a
nodes-ca-central-1a	Node	t3.medium	4	4	ca-central-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0808 19:56:55.683612    6436 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ca-central-1a	Master	c5.large	1	1	ca-central-1a
nodes-ca-central-1a	Node	t3.medium	4	4	ca-central-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0808 19:57:05.718907    6436 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ca-central-1a	Master	c5.large	1	1	ca-central-1a
nodes-ca-central-1a	Node	t3.medium	4	4	ca-central-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0808 19:57:15.767880    6436 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ca-central-1a	Master	c5.large	1	1	ca-central-1a
nodes-ca-central-1a	Node	t3.medium	4	4	ca-central-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0808 19:57:25.804858    6436 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ca-central-1a	Master	c5.large	1	1	ca-central-1a
nodes-ca-central-1a	Node	t3.medium	4	4	ca-central-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0808 19:57:35.847769    6436 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ca-central-1a	Master	c5.large	1	1	ca-central-1a
nodes-ca-central-1a	Node	t3.medium	4	4	ca-central-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0808 19:57:45.884968    6436 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ca-central-1a	Master	c5.large	1	1	ca-central-1a
nodes-ca-central-1a	Node	t3.medium	4	4	ca-central-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0808 19:57:55.918527    6436 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ca-central-1a	Master	c5.large	1	1	ca-central-1a
nodes-ca-central-1a	Node	t3.medium	4	4	ca-central-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0808 19:58:05.960058    6436 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ca-central-1a	Master	c5.large	1	1	ca-central-1a
nodes-ca-central-1a	Node	t3.medium	4	4	ca-central-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0808 19:58:15.992713    6436 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ca-central-1a	Master	c5.large	1	1	ca-central-1a
nodes-ca-central-1a	Node	t3.medium	4	4	ca-central-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0808 19:58:26.032956    6436 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ca-central-1a	Master	c5.large	1	1	ca-central-1a
nodes-ca-central-1a	Node	t3.medium	4	4	ca-central-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0808 19:58:36.096399    6436 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ca-central-1a	Master	c5.large	1	1	ca-central-1a
nodes-ca-central-1a	Node	t3.medium	4	4	ca-central-1a

... skipping 18 lines ...
Pod	kube-system/coredns-autoscaler-f85cf5c-gfpp5	system-cluster-critical pod "coredns-autoscaler-f85cf5c-gfpp5" is pending
Pod	kube-system/ebs-csi-node-2jqv7			system-node-critical pod "ebs-csi-node-2jqv7" is pending
Pod	kube-system/ebs-csi-node-9ndw4			system-node-critical pod "ebs-csi-node-9ndw4" is pending
Pod	kube-system/ebs-csi-node-mt8xp			system-node-critical pod "ebs-csi-node-mt8xp" is pending
Pod	kube-system/ebs-csi-node-v9blq			system-node-critical pod "ebs-csi-node-v9blq" is pending

Validation Failed
W0808 19:58:47.350095    6436 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ca-central-1a	Master	c5.large	1	1	ca-central-1a
nodes-ca-central-1a	Node	t3.medium	4	4	ca-central-1a

... skipping 16 lines ...
Pod	kube-system/calico-node-w9vhn	system-node-critical pod "calico-node-w9vhn" is pending
Pod	kube-system/ebs-csi-node-2jqv7	system-node-critical pod "ebs-csi-node-2jqv7" is pending
Pod	kube-system/ebs-csi-node-9ndw4	system-node-critical pod "ebs-csi-node-9ndw4" is pending
Pod	kube-system/ebs-csi-node-mt8xp	system-node-critical pod "ebs-csi-node-mt8xp" is pending
Pod	kube-system/ebs-csi-node-v9blq	system-node-critical pod "ebs-csi-node-v9blq" is pending

Validation Failed
W0808 19:58:58.388719    6436 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ca-central-1a	Master	c5.large	1	1	ca-central-1a
nodes-ca-central-1a	Node	t3.medium	4	4	ca-central-1a

... skipping 12 lines ...
Pod	kube-system/calico-node-s7nq6	system-node-critical pod "calico-node-s7nq6" is pending
Pod	kube-system/calico-node-w9vhn	system-node-critical pod "calico-node-w9vhn" is not ready (calico-node)
Pod	kube-system/ebs-csi-node-9ndw4	system-node-critical pod "ebs-csi-node-9ndw4" is pending
Pod	kube-system/ebs-csi-node-mt8xp	system-node-critical pod "ebs-csi-node-mt8xp" is pending
Pod	kube-system/ebs-csi-node-v9blq	system-node-critical pod "ebs-csi-node-v9blq" is pending

Validation Failed
W0808 19:59:09.546276    6436 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ca-central-1a	Master	c5.large	1	1	ca-central-1a
nodes-ca-central-1a	Node	t3.medium	4	4	ca-central-1a

... skipping 8 lines ...
VALIDATION ERRORS
KIND	NAME				MESSAGE
Node	i-03fc09b8b601b8005		node "i-03fc09b8b601b8005" of role "node" is not ready
Pod	kube-system/calico-node-s7nq6	system-node-critical pod "calico-node-s7nq6" is pending
Pod	kube-system/ebs-csi-node-v9blq	system-node-critical pod "ebs-csi-node-v9blq" is pending

Validation Failed
W0808 19:59:20.545898    6436 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ca-central-1a	Master	c5.large	1	1	ca-central-1a
nodes-ca-central-1a	Node	t3.medium	4	4	ca-central-1a

... skipping 9 lines ...
KIND	NAME						MESSAGE
Node	i-03fc09b8b601b8005				node "i-03fc09b8b601b8005" of role "node" is not ready
Pod	kube-system/calico-node-s7nq6			system-node-critical pod "calico-node-s7nq6" is pending
Pod	kube-system/ebs-csi-node-v9blq			system-node-critical pod "ebs-csi-node-v9blq" is pending
Pod	kube-system/kube-proxy-i-0968381362d9e05ce	system-node-critical pod "kube-proxy-i-0968381362d9e05ce" is pending

Validation Failed
W0808 19:59:31.498518    6436 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ca-central-1a	Master	c5.large	1	1	ca-central-1a
nodes-ca-central-1a	Node	t3.medium	4	4	ca-central-1a

... skipping 7 lines ...

VALIDATION ERRORS
KIND	NAME				MESSAGE
Pod	kube-system/calico-node-s7nq6	system-node-critical pod "calico-node-s7nq6" is not ready (calico-node)
Pod	kube-system/ebs-csi-node-v9blq	system-node-critical pod "ebs-csi-node-v9blq" is pending

Validation Failed
W0808 19:59:42.456495    6436 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ca-central-1a	Master	c5.large	1	1	ca-central-1a
nodes-ca-central-1a	Node	t3.medium	4	4	ca-central-1a

... skipping 513 lines ...
evicting pod kube-system/calico-kube-controllers-75f4df896c-rgb9x
evicting pod kube-system/dns-controller-6684cc95dc-xkn67
I0808 20:04:04.006772    6546 instancegroups.go:660] Waiting for 5s for pods to stabilize after draining.
I0808 20:04:09.007477    6546 instancegroups.go:591] Stopping instance "i-032db714fb190c5f9", node "i-032db714fb190c5f9", in group "master-ca-central-1a.masters.e2e-ed4da97961-6b857.test-cncf-aws.k8s.io" (this may take a while).
I0808 20:04:09.184576    6546 instancegroups.go:436] waiting for 15s after terminating instance
I0808 20:04:24.185020    6546 instancegroups.go:470] Validating the cluster.
I0808 20:04:24.277639    6546 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-ed4da97961-6b857.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 35.183.19.195:443: connect: connection refused.
I0808 20:05:24.321616    6546 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-ed4da97961-6b857.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 35.183.19.195:443: i/o timeout.
I0808 20:06:24.365635    6546 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-ed4da97961-6b857.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 35.183.19.195:443: i/o timeout.
I0808 20:07:24.420777    6546 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-ed4da97961-6b857.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 35.183.19.195:443: i/o timeout.
I0808 20:08:24.477245    6546 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-ed4da97961-6b857.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 35.183.19.195:443: i/o timeout.
I0808 20:08:55.731063    6546 instancegroups.go:526] Cluster did not pass validation, will retry in "30s": system-cluster-critical pod "calico-kube-controllers-75f4df896c-xlzc8" is not ready (calico-kube-controllers), system-cluster-critical pod "ebs-csi-controller-5b4bd8874c-2dcln" is not ready (ebs-plugin).
I0808 20:09:26.670716    6546 instancegroups.go:506] Cluster validated; revalidating in 10s to make sure it does not flap.
I0808 20:09:37.688081    6546 instancegroups.go:503] Cluster validated.
I0808 20:09:37.688155    6546 instancegroups.go:470] Validating the cluster.
I0808 20:09:38.490939    6546 instancegroups.go:503] Cluster validated.
I0808 20:09:38.490990    6546 instancegroups.go:311] Tainting 4 nodes in "nodes-ca-central-1a" instancegroup.
... skipping 52 lines ...
I0808 20:23:38.670542    6546 instancegroups.go:503] Cluster validated.
I0808 20:23:38.670623    6546 instancegroups.go:400] Draining the node: "i-0d52c8e34a23fc8d0".
WARNING: ignoring DaemonSet-managed Pods: kube-system/calico-node-qlhvz, kube-system/ebs-csi-node-2jqv7
evicting pod kube-system/coredns-autoscaler-f85cf5c-gfpp5
evicting pod kube-system/coredns-5c44b6cf7d-4fd9g
evicting pod kube-system/coredns-5c44b6cf7d-w9cpw
error when evicting pods/"coredns-5c44b6cf7d-4fd9g" -n "kube-system" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
evicting pod kube-system/coredns-5c44b6cf7d-4fd9g
I0808 20:23:50.670855    6546 instancegroups.go:660] Waiting for 5s for pods to stabilize after draining.
I0808 20:23:55.674966    6546 instancegroups.go:591] Stopping instance "i-0d52c8e34a23fc8d0", node "i-0d52c8e34a23fc8d0", in group "nodes-ca-central-1a.e2e-ed4da97961-6b857.test-cncf-aws.k8s.io" (this may take a while).
I0808 20:23:55.874562    6546 instancegroups.go:436] waiting for 15s after terminating instance
I0808 20:24:10.876117    6546 instancegroups.go:470] Validating the cluster.
I0808 20:24:11.929452    6546 instancegroups.go:506] Cluster validated; revalidating in 10s to make sure it does not flap.
... skipping 38 lines ...
I0808 20:24:26.482757    6594 app.go:62] pass rundir-in-artifacts flag True for RunDir to be part of Artifacts
I0808 20:24:26.482820    6594 app.go:64] RunDir for this run: "/home/prow/go/src/k8s.io/kops/_rundir/a7d3b35a-1753-11ed-962a-a69410124b4c"
I0808 20:24:26.488774    6594 app.go:128] ID for this run: "a7d3b35a-1753-11ed-962a-a69410124b4c"
I0808 20:24:26.488819    6594 local.go:42] ⚙️ /home/prow/go/bin/kubetest2-tester-kops --test-package-version=https://storage.googleapis.com/k8s-release-dev/ci/v1.25.0-beta.0.24+759785ea147bc1 --parallel 25
I0808 20:24:26.512051    6615 kubectl.go:148] gsutil cp gs://kubernetes-release/release/https://storage.googleapis.com/k8s-release-dev/ci/v1.25.0-beta.0.24+759785ea147bc1/kubernetes-client-linux-amd64.tar.gz /root/.cache/kubernetes-client-linux-amd64.tar.gz
CommandException: No URLs matched: gs://kubernetes-release/release/https://storage.googleapis.com/k8s-release-dev/ci/v1.25.0-beta.0.24+759785ea147bc1/kubernetes-client-linux-amd64.tar.gz
F0808 20:24:28.260139    6615 tester.go:482] failed to run ginkgo tester: failed to get kubectl package from published releases: failed to download release tar kubernetes-client-linux-amd64.tar.gz for release https://storage.googleapis.com/k8s-release-dev/ci/v1.25.0-beta.0.24+759785ea147bc1: exit status 1
Error: exit status 255
+ kops-finish
+ kubetest2 kops -v=2 --cloud-provider=aws --cluster-name=e2e-ed4da97961-6b857.test-cncf-aws.k8s.io --kops-root=/home/prow/go/src/k8s.io/kops --admin-access= --env=KOPS_FEATURE_FLAGS=SpecOverrideFlag --kops-binary-path=/tmp/kops.hMKpi1VKh --down
I0808 20:24:28.294594    6806 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
I0808 20:24:28.296716    6806 app.go:61] The files in RunDir shall not be part of Artifacts
I0808 20:24:28.296746    6806 app.go:62] pass rundir-in-artifacts flag True for RunDir to be part of Artifacts
I0808 20:24:28.296772    6806 app.go:64] RunDir for this run: "/home/prow/go/src/k8s.io/kops/_rundir/a7d3b35a-1753-11ed-962a-a69410124b4c"
... skipping 322 lines ...