Result: FAILURE
Tests: 0 failed / 1 succeeded
Started: 2022-08-08 01:53
Elapsed: 45m6s
Revision: master

No Test Failures!



Error lines from build-log.txt

... skipping 234 lines ...
I0808 01:54:35.049417    6336 app.go:62] pass rundir-in-artifacts flag True for RunDir to be part of Artifacts
I0808 01:54:35.049445    6336 app.go:64] RunDir for this run: "/home/prow/go/src/k8s.io/kops/_rundir/b3441324-16bc-11ed-bcf2-1217529f69d6"
I0808 01:54:35.058174    6336 app.go:128] ID for this run: "b3441324-16bc-11ed-bcf2-1217529f69d6"
I0808 01:54:35.058449    6336 local.go:42] ⚙️ ssh-keygen -t ed25519 -N  -q -f /tmp/kops/e2e-ed4da97961-6b857.test-cncf-aws.k8s.io/id_ed25519
I0808 01:54:35.067594    6336 dumplogs.go:45] /tmp/kops.VfhGzNP1W toolbox dump --name e2e-ed4da97961-6b857.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /tmp/kops/e2e-ed4da97961-6b857.test-cncf-aws.k8s.io/id_ed25519 --ssh-user ubuntu
I0808 01:54:35.067654    6336 local.go:42] ⚙️ /tmp/kops.VfhGzNP1W toolbox dump --name e2e-ed4da97961-6b857.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /tmp/kops/e2e-ed4da97961-6b857.test-cncf-aws.k8s.io/id_ed25519 --ssh-user ubuntu
W0808 01:54:35.787167    6336 down.go:34] Dumping cluster logs at the start of Down() failed: exit status 1
I0808 01:54:35.787226    6336 down.go:48] /tmp/kops.VfhGzNP1W delete cluster --name e2e-ed4da97961-6b857.test-cncf-aws.k8s.io --yes
I0808 01:54:35.787243    6336 local.go:42] ⚙️ /tmp/kops.VfhGzNP1W delete cluster --name e2e-ed4da97961-6b857.test-cncf-aws.k8s.io --yes
I0808 01:54:35.839730    6359 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
I0808 01:54:35.839841    6359 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
Error: error reading cluster configuration: Cluster.kops.k8s.io "e2e-ed4da97961-6b857.test-cncf-aws.k8s.io" not found
Error: exit status 1
+ echo 'kubetest2 down failed'
kubetest2 down failed
+ [[ l == \v ]]
++ kops-base-from-marker latest
++ [[ latest =~ ^https: ]]
++ [[ latest == \l\a\t\e\s\t ]]
++ curl -s https://storage.googleapis.com/kops-ci/bin/latest-ci-updown-green.txt
+ KOPS_BASE_URL=https://storage.googleapis.com/kops-ci/bin/1.25.0-alpha.3+v1.25.0-alpha.2-62-g82ec66a033
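The shell trace above resolves the "latest" marker to a concrete kops CI build by fetching the marker file and using its contents as KOPS_BASE_URL. A minimal sketch of doing the same by hand; the linux/amd64 binary path under KOPS_BASE_URL is an assumption, it is not shown in this log:

# Resolve the marker to the current green up/down build.
KOPS_BASE_URL="$(curl -s https://storage.googleapis.com/kops-ci/bin/latest-ci-updown-green.txt)"
# Download the matching kops binary (path layout assumed) and make it executable.
curl -fL -o /tmp/kops "${KOPS_BASE_URL}/linux/amd64/kops"
chmod +x /tmp/kops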
... skipping 14 lines ...
I0808 01:54:37.949946    6395 app.go:62] pass rundir-in-artifacts flag True for RunDir to be part of Artifacts
I0808 01:54:37.950565    6395 app.go:64] RunDir for this run: "/home/prow/go/src/k8s.io/kops/_rundir/b3441324-16bc-11ed-bcf2-1217529f69d6"
I0808 01:54:37.958968    6395 app.go:128] ID for this run: "b3441324-16bc-11ed-bcf2-1217529f69d6"
I0808 01:54:37.959067    6395 up.go:44] Cleaning up any leaked resources from previous cluster
I0808 01:54:37.959131    6395 dumplogs.go:45] /tmp/kops.KkVL1DbQ6 toolbox dump --name e2e-ed4da97961-6b857.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /tmp/kops/e2e-ed4da97961-6b857.test-cncf-aws.k8s.io/id_ed25519 --ssh-user ubuntu
I0808 01:54:37.959261    6395 local.go:42] ⚙️ /tmp/kops.KkVL1DbQ6 toolbox dump --name e2e-ed4da97961-6b857.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /tmp/kops/e2e-ed4da97961-6b857.test-cncf-aws.k8s.io/id_ed25519 --ssh-user ubuntu
W0808 01:54:38.501892    6395 down.go:34] Dumping cluster logs at the start of Down() failed: exit status 1
I0808 01:54:38.501927    6395 down.go:48] /tmp/kops.KkVL1DbQ6 delete cluster --name e2e-ed4da97961-6b857.test-cncf-aws.k8s.io --yes
I0808 01:54:38.501940    6395 local.go:42] ⚙️ /tmp/kops.KkVL1DbQ6 delete cluster --name e2e-ed4da97961-6b857.test-cncf-aws.k8s.io --yes
I0808 01:54:38.541332    6413 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
I0808 01:54:38.541431    6413 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
Error: error reading cluster configuration: Cluster.kops.k8s.io "e2e-ed4da97961-6b857.test-cncf-aws.k8s.io" not found
I0808 01:54:38.992425    6395 http.go:37] curl http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip
2022/08/08 01:54:39 failed to get external ip from metadata service: http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip returned 404
I0808 01:54:39.006864    6395 http.go:37] curl https://ip.jsb.workers.dev
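The two requests above are the runner discovering its own external IP so it can restrict --admin-access to itself: it tries the GCE metadata server first (404 here) and falls back to a public IP echo service. Roughly, by hand (the Metadata-Flavor header is required by GCE but not shown in the log):

# Try the GCE metadata server, then fall back to the public echo service.
curl -s -H 'Metadata-Flavor: Google' http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip \
  || curl -s https://ip.jsb.workers.dev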
I0808 01:54:39.167902    6395 up.go:159] /tmp/kops.KkVL1DbQ6 create cluster --name e2e-ed4da97961-6b857.test-cncf-aws.k8s.io --cloud aws --kubernetes-version v1.24.3 --ssh-public-key /tmp/kops/e2e-ed4da97961-6b857.test-cncf-aws.k8s.io/id_ed25519.pub --override cluster.spec.nodePortAccess=0.0.0.0/0 --networking calico --admin-access 35.193.93.145/32 --master-count 1 --master-volume-size 48 --node-count 4 --node-volume-size 48 --zones ap-southeast-1a --master-size c5.large
I0808 01:54:39.167956    6395 local.go:42] ⚙️ /tmp/kops.KkVL1DbQ6 create cluster --name e2e-ed4da97961-6b857.test-cncf-aws.k8s.io --cloud aws --kubernetes-version v1.24.3 --ssh-public-key /tmp/kops/e2e-ed4da97961-6b857.test-cncf-aws.k8s.io/id_ed25519.pub --override cluster.spec.nodePortAccess=0.0.0.0/0 --networking calico --admin-access 35.193.93.145/32 --master-count 1 --master-volume-size 48 --node-count 4 --node-volume-size 48 --zones ap-southeast-1a --master-size c5.large
I0808 01:54:39.202613    6426 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
I0808 01:54:39.202705    6426 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
I0808 01:54:39.221234    6426 create_cluster.go:862] Using SSH public key: /tmp/kops/e2e-ed4da97961-6b857.test-cncf-aws.k8s.io/id_ed25519.pub
... skipping 525 lines ...

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0808 01:55:32.110720    6465 validate_cluster.go:232] (will retry): cluster not yet healthy
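The validation message above is kops explaining that the cluster's API DNS record still resolves to the placeholder address it seeds the zone with (203.0.113.123) until dns-controller rewrites it. A quick manual check, sketched under the assumption that standard DNS tooling is available on the runner:

# If this still prints the kops placeholder 203.0.113.123, dns-controller has not
# yet updated the API record for the cluster named in the log above.
dig +short api.e2e-ed4da97961-6b857.test-cncf-aws.k8s.io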
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-southeast-1a	Master	c5.large	1	1	ap-southeast-1a
nodes-ap-southeast-1a	Node	t3.medium	4	4	ap-southeast-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0808 01:55:42.159245    6465 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-southeast-1a	Master	c5.large	1	1	ap-southeast-1a
nodes-ap-southeast-1a	Node	t3.medium	4	4	ap-southeast-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0808 01:55:52.192937    6465 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-southeast-1a	Master	c5.large	1	1	ap-southeast-1a
nodes-ap-southeast-1a	Node	t3.medium	4	4	ap-southeast-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0808 01:56:02.242151    6465 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-southeast-1a	Master	c5.large	1	1	ap-southeast-1a
nodes-ap-southeast-1a	Node	t3.medium	4	4	ap-southeast-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0808 01:56:12.282481    6465 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-southeast-1a	Master	c5.large	1	1	ap-southeast-1a
nodes-ap-southeast-1a	Node	t3.medium	4	4	ap-southeast-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0808 01:56:22.316242    6465 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-southeast-1a	Master	c5.large	1	1	ap-southeast-1a
nodes-ap-southeast-1a	Node	t3.medium	4	4	ap-southeast-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0808 01:56:32.367813    6465 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-southeast-1a	Master	c5.large	1	1	ap-southeast-1a
nodes-ap-southeast-1a	Node	t3.medium	4	4	ap-southeast-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0808 01:56:42.406939    6465 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-southeast-1a	Master	c5.large	1	1	ap-southeast-1a
nodes-ap-southeast-1a	Node	t3.medium	4	4	ap-southeast-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0808 01:56:52.442518    6465 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-southeast-1a	Master	c5.large	1	1	ap-southeast-1a
nodes-ap-southeast-1a	Node	t3.medium	4	4	ap-southeast-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0808 01:57:02.498454    6465 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-southeast-1a	Master	c5.large	1	1	ap-southeast-1a
nodes-ap-southeast-1a	Node	t3.medium	4	4	ap-southeast-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0808 01:57:12.533362    6465 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-southeast-1a	Master	c5.large	1	1	ap-southeast-1a
nodes-ap-southeast-1a	Node	t3.medium	4	4	ap-southeast-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0808 01:57:22.568494    6465 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-southeast-1a	Master	c5.large	1	1	ap-southeast-1a
nodes-ap-southeast-1a	Node	t3.medium	4	4	ap-southeast-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0808 01:57:32.619408    6465 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-southeast-1a	Master	c5.large	1	1	ap-southeast-1a
nodes-ap-southeast-1a	Node	t3.medium	4	4	ap-southeast-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0808 01:57:42.656537    6465 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-southeast-1a	Master	c5.large	1	1	ap-southeast-1a
nodes-ap-southeast-1a	Node	t3.medium	4	4	ap-southeast-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0808 01:57:52.695513    6465 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-southeast-1a	Master	c5.large	1	1	ap-southeast-1a
nodes-ap-southeast-1a	Node	t3.medium	4	4	ap-southeast-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0808 01:58:02.745495    6465 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-southeast-1a	Master	c5.large	1	1	ap-southeast-1a
nodes-ap-southeast-1a	Node	t3.medium	4	4	ap-southeast-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0808 01:58:12.795166    6465 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-southeast-1a	Master	c5.large	1	1	ap-southeast-1a
nodes-ap-southeast-1a	Node	t3.medium	4	4	ap-southeast-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0808 01:58:22.830244    6465 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-southeast-1a	Master	c5.large	1	1	ap-southeast-1a
nodes-ap-southeast-1a	Node	t3.medium	4	4	ap-southeast-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0808 01:58:32.868693    6465 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-southeast-1a	Master	c5.large	1	1	ap-southeast-1a
nodes-ap-southeast-1a	Node	t3.medium	4	4	ap-southeast-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0808 01:58:42.904598    6465 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-southeast-1a	Master	c5.large	1	1	ap-southeast-1a
nodes-ap-southeast-1a	Node	t3.medium	4	4	ap-southeast-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0808 01:58:52.958056    6465 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-southeast-1a	Master	c5.large	1	1	ap-southeast-1a
nodes-ap-southeast-1a	Node	t3.medium	4	4	ap-southeast-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0808 01:59:02.997351    6465 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-southeast-1a	Master	c5.large	1	1	ap-southeast-1a
nodes-ap-southeast-1a	Node	t3.medium	4	4	ap-southeast-1a

... skipping 21 lines ...
Pod	kube-system/ebs-csi-node-67724			system-node-critical pod "ebs-csi-node-67724" is pending
Pod	kube-system/ebs-csi-node-8p5lg			system-node-critical pod "ebs-csi-node-8p5lg" is pending
Pod	kube-system/ebs-csi-node-jlx6z			system-node-critical pod "ebs-csi-node-jlx6z" is pending
Pod	kube-system/ebs-csi-node-l7d5c			system-node-critical pod "ebs-csi-node-l7d5c" is pending
Pod	kube-system/ebs-csi-node-zgg9l			system-node-critical pod "ebs-csi-node-zgg9l" is pending

Validation Failed
W0808 01:59:17.523466    6465 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-southeast-1a	Master	c5.large	1	1	ap-southeast-1a
nodes-ap-southeast-1a	Node	t3.medium	4	4	ap-southeast-1a

... skipping 18 lines ...
Pod	kube-system/coredns-autoscaler-f85cf5c-j7zn9	system-cluster-critical pod "coredns-autoscaler-f85cf5c-j7zn9" is pending
Pod	kube-system/ebs-csi-node-8p5lg			system-node-critical pod "ebs-csi-node-8p5lg" is pending
Pod	kube-system/ebs-csi-node-jlx6z			system-node-critical pod "ebs-csi-node-jlx6z" is pending
Pod	kube-system/ebs-csi-node-l7d5c			system-node-critical pod "ebs-csi-node-l7d5c" is pending
Pod	kube-system/ebs-csi-node-zgg9l			system-node-critical pod "ebs-csi-node-zgg9l" is pending

Validation Failed
W0808 01:59:30.779784    6465 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-southeast-1a	Master	c5.large	1	1	ap-southeast-1a
nodes-ap-southeast-1a	Node	t3.medium	4	4	ap-southeast-1a

... skipping 18 lines ...
Pod	kube-system/coredns-autoscaler-f85cf5c-j7zn9	system-cluster-critical pod "coredns-autoscaler-f85cf5c-j7zn9" is pending
Pod	kube-system/ebs-csi-node-8p5lg			system-node-critical pod "ebs-csi-node-8p5lg" is pending
Pod	kube-system/ebs-csi-node-jlx6z			system-node-critical pod "ebs-csi-node-jlx6z" is pending
Pod	kube-system/ebs-csi-node-l7d5c			system-node-critical pod "ebs-csi-node-l7d5c" is pending
Pod	kube-system/ebs-csi-node-zgg9l			system-node-critical pod "ebs-csi-node-zgg9l" is pending

Validation Failed
W0808 01:59:44.223126    6465 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-southeast-1a	Master	c5.large	1	1	ap-southeast-1a
nodes-ap-southeast-1a	Node	t3.medium	4	4	ap-southeast-1a

... skipping 12 lines ...
Pod	kube-system/calico-node-h6tjs	system-node-critical pod "calico-node-h6tjs" is pending
Pod	kube-system/calico-node-xz9g4	system-node-critical pod "calico-node-xz9g4" is pending
Pod	kube-system/ebs-csi-node-8p5lg	system-node-critical pod "ebs-csi-node-8p5lg" is pending
Pod	kube-system/ebs-csi-node-jlx6z	system-node-critical pod "ebs-csi-node-jlx6z" is pending
Pod	kube-system/ebs-csi-node-l7d5c	system-node-critical pod "ebs-csi-node-l7d5c" is pending

Validation Failed
W0808 01:59:57.588682    6465 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-southeast-1a	Master	c5.large	1	1	ap-southeast-1a
nodes-ap-southeast-1a	Node	t3.medium	4	4	ap-southeast-1a

... skipping 12 lines ...
Pod	kube-system/calico-node-xz9g4			system-node-critical pod "calico-node-xz9g4" is not ready (calico-node)
Pod	kube-system/ebs-csi-node-8p5lg			system-node-critical pod "ebs-csi-node-8p5lg" is pending
Pod	kube-system/ebs-csi-node-jlx6z			system-node-critical pod "ebs-csi-node-jlx6z" is pending
Pod	kube-system/ebs-csi-node-l7d5c			system-node-critical pod "ebs-csi-node-l7d5c" is pending
Pod	kube-system/kube-proxy-i-0c754d7ed7278db5e	system-node-critical pod "kube-proxy-i-0c754d7ed7278db5e" is pending

Validation Failed
W0808 02:00:10.883347    6465 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-southeast-1a	Master	c5.large	1	1	ap-southeast-1a
nodes-ap-southeast-1a	Node	t3.medium	4	4	ap-southeast-1a

... skipping 6 lines ...
i-0c754d7ed7278db5e	node	True

VALIDATION ERRORS
KIND	NAME				MESSAGE
Pod	kube-system/ebs-csi-node-jlx6z	system-node-critical pod "ebs-csi-node-jlx6z" is pending

Validation Failed
W0808 02:00:24.230721    6465 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-southeast-1a	Master	c5.large	1	1	ap-southeast-1a
nodes-ap-southeast-1a	Node	t3.medium	4	4	ap-southeast-1a

... skipping 513 lines ...
evicting pod kube-system/calico-kube-controllers-75f4df896c-6qnns
evicting pod kube-system/dns-controller-6684cc95dc-8b4wb
I0808 02:05:40.412952    6576 instancegroups.go:660] Waiting for 5s for pods to stabilize after draining.
I0808 02:05:45.413219    6576 instancegroups.go:591] Stopping instance "i-065194f749745e70d", node "i-065194f749745e70d", in group "master-ap-southeast-1a.masters.e2e-ed4da97961-6b857.test-cncf-aws.k8s.io" (this may take a while).
I0808 02:05:45.860246    6576 instancegroups.go:436] waiting for 15s after terminating instance
I0808 02:06:00.863155    6576 instancegroups.go:470] Validating the cluster.
I0808 02:06:01.118379    6576 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-ed4da97961-6b857.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 54.169.215.137:443: connect: connection refused.
I0808 02:07:01.158200    6576 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-ed4da97961-6b857.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 54.169.215.137:443: i/o timeout.
I0808 02:08:01.214689    6576 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-ed4da97961-6b857.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 54.169.215.137:443: i/o timeout.
I0808 02:09:01.259479    6576 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-ed4da97961-6b857.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 54.169.215.137:443: i/o timeout.
I0808 02:10:01.324947    6576 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-ed4da97961-6b857.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 54.169.215.137:443: i/o timeout.
I0808 02:11:01.368960    6576 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-ed4da97961-6b857.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 54.169.215.137:443: i/o timeout.
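The connection-refused and i/o-timeout errors above are expected while the single control-plane instance is being replaced during the rolling update. A sketch of a TCP-level probe of the API endpoint while waiting (host and port taken from the dial errors; assumes nc is available):

# Check whether anything is listening on the API endpoint yet.
nc -zv -w 5 api.e2e-ed4da97961-6b857.test-cncf-aws.k8s.io 443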
I0808 02:11:36.237589    6576 instancegroups.go:526] Cluster did not pass validation, will retry in "30s": node "i-0337155a40b499785" of role "node" is not ready, node "i-086bf576cd53c4f66" of role "node" is not ready, node "i-0ad0225aad4baaa7b" of role "node" is not ready, node "i-0c754d7ed7278db5e" of role "node" is not ready, system-cluster-critical pod "calico-kube-controllers-75f4df896c-jhdvv" is pending, system-node-critical pod "calico-node-gmc4f" is not ready (calico-node), system-cluster-critical pod "ebs-csi-controller-5b4bd8874c-f5mpl" is not ready (ebs-plugin), system-node-critical pod "ebs-csi-node-j5v9s" is pending.
I0808 02:12:09.472459    6576 instancegroups.go:526] Cluster did not pass validation, will retry in "30s": node "i-0ad0225aad4baaa7b" of role "node" is not ready, system-cluster-critical pod "calico-kube-controllers-75f4df896c-jhdvv" is not ready (calico-kube-controllers), system-node-critical pod "calico-node-gmc4f" is not ready (calico-node).
I0808 02:12:42.697406    6576 instancegroups.go:526] Cluster did not pass validation, will retry in "30s": system-cluster-critical pod "calico-kube-controllers-75f4df896c-jhdvv" is not ready (calico-kube-controllers).
I0808 02:13:15.898664    6576 instancegroups.go:526] Cluster did not pass validation, will retry in "30s": system-cluster-critical pod "calico-kube-controllers-75f4df896c-jhdvv" is not ready (calico-kube-controllers).
I0808 02:13:49.062774    6576 instancegroups.go:526] Cluster did not pass validation, will retry in "30s": system-cluster-critical pod "calico-kube-controllers-75f4df896c-jhdvv" is not ready (calico-kube-controllers).
I0808 02:14:22.424984    6576 instancegroups.go:526] Cluster did not pass validation, will retry in "30s": system-cluster-critical pod "calico-kube-controllers-75f4df896c-jhdvv" is not ready (calico-kube-controllers).
... skipping 18 lines ...
I0808 02:20:46.109142    6576 instancegroups.go:503] Cluster validated.
I0808 02:20:46.109221    6576 instancegroups.go:400] Draining the node: "i-0337155a40b499785".
WARNING: ignoring DaemonSet-managed Pods: kube-system/calico-node-wzmhb, kube-system/ebs-csi-node-zgg9l
evicting pod kube-system/coredns-autoscaler-f85cf5c-j7zn9
evicting pod kube-system/coredns-5c44b6cf7d-6fgzs
evicting pod kube-system/coredns-5c44b6cf7d-w97pg
error when evicting pods/"coredns-5c44b6cf7d-6fgzs" -n "kube-system" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
evicting pod kube-system/coredns-5c44b6cf7d-6fgzs
error when evicting pods/"coredns-5c44b6cf7d-6fgzs" -n "kube-system" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
evicting pod kube-system/coredns-5c44b6cf7d-6fgzs
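The repeated eviction retries above are the drain honoring a PodDisruptionBudget covering the CoreDNS pods: eviction is refused until enough replicas are ready elsewhere. A sketch of inspecting the budget from a working kubeconfig (the PDB name is an assumption; only the namespace appears in the log):

# List the disruption budgets in kube-system and inspect the one covering coredns.
kubectl -n kube-system get poddisruptionbudgets
kubectl -n kube-system describe poddisruptionbudget coredns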
I0808 02:21:05.315878    6576 instancegroups.go:660] Waiting for 5s for pods to stabilize after draining.
I0808 02:21:10.316198    6576 instancegroups.go:591] Stopping instance "i-0337155a40b499785", node "i-0337155a40b499785", in group "nodes-ap-southeast-1a.e2e-ed4da97961-6b857.test-cncf-aws.k8s.io" (this may take a while).
I0808 02:21:10.698143    6576 instancegroups.go:436] waiting for 15s after terminating instance
I0808 02:21:25.705846    6576 instancegroups.go:470] Validating the cluster.
I0808 02:21:29.041836    6576 instancegroups.go:526] Cluster did not pass validation, will retry in "30s": InstanceGroup "nodes-ap-southeast-1a" did not have enough nodes 3 vs 4, system-node-critical pod "kube-proxy-i-0337155a40b499785" is not ready (kube-proxy).
... skipping 85 lines ...
I0808 02:34:29.653528    6626 app.go:62] pass rundir-in-artifacts flag True for RunDir to be part of Artifacts
I0808 02:34:29.653561    6626 app.go:64] RunDir for this run: "/home/prow/go/src/k8s.io/kops/_rundir/b3441324-16bc-11ed-bcf2-1217529f69d6"
I0808 02:34:29.656895    6626 app.go:128] ID for this run: "b3441324-16bc-11ed-bcf2-1217529f69d6"
I0808 02:34:29.656951    6626 local.go:42] ⚙️ /home/prow/go/bin/kubetest2-tester-kops --test-package-version=https://storage.googleapis.com/k8s-release-dev/ci/v1.25.0-beta.0.24+759785ea147bc1 --parallel 25
I0808 02:34:29.679678    6645 kubectl.go:148] gsutil cp gs://kubernetes-release/release/https://storage.googleapis.com/k8s-release-dev/ci/v1.25.0-beta.0.24+759785ea147bc1/kubernetes-client-linux-amd64.tar.gz /root/.cache/kubernetes-client-linux-amd64.tar.gz
CommandException: No URLs matched: gs://kubernetes-release/release/https://storage.googleapis.com/k8s-release-dev/ci/v1.25.0-beta.0.24+759785ea147bc1/kubernetes-client-linux-amd64.tar.gz
F0808 02:34:31.669836    6645 tester.go:482] failed to run ginkgo tester: failed to get kubectl package from published releases: failed to download release tar kubernetes-client-linux-amd64.tar.gz for release https://storage.googleapis.com/k8s-release-dev/ci/v1.25.0-beta.0.24+759785ea147bc1: exit status 1
Error: exit status 255
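The download failure above comes from the tester prepending gs://kubernetes-release/release/ to a --test-package-version value that is already a full https URL, so gsutil matches no object. For comparison, a sketch of where that CI client tarball would normally be fetched from, with the bucket layout assumed from the URL in the error message rather than verified here:

# Fetch the client tarball for the CI build directly from the k8s-release-dev bucket.
curl -fLO "https://storage.googleapis.com/k8s-release-dev/ci/v1.25.0-beta.0.24+759785ea147bc1/kubernetes-client-linux-amd64.tar.gz"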
+ kops-finish
+ kubetest2 kops -v=2 --cloud-provider=aws --cluster-name=e2e-ed4da97961-6b857.test-cncf-aws.k8s.io --kops-root=/home/prow/go/src/k8s.io/kops --admin-access= --env=KOPS_FEATURE_FLAGS=SpecOverrideFlag --kops-binary-path=/tmp/kops.VfhGzNP1W --down
I0808 02:34:31.705233    6835 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
I0808 02:34:31.707965    6835 app.go:61] The files in RunDir shall not be part of Artifacts
I0808 02:34:31.707994    6835 app.go:62] pass rundir-in-artifacts flag True for RunDir to be part of Artifacts
I0808 02:34:31.708022    6835 app.go:64] RunDir for this run: "/home/prow/go/src/k8s.io/kops/_rundir/b3441324-16bc-11ed-bcf2-1217529f69d6"
... skipping 274 lines ...