Result: FAILURE
Tests: 0 failed / 1 succeeded
Started: 2022-08-09 04:53
Elapsed: 39m26s
Revision: master

No Test Failures!



Error lines from build-log.txt

... skipping 234 lines ...
I0809 04:54:48.382859    6318 app.go:62] pass rundir-in-artifacts flag True for RunDir to be part of Artifacts
I0809 04:54:48.382884    6318 app.go:64] RunDir for this run: "/home/prow/go/src/k8s.io/kops/_rundir/18409e55-179f-11ed-ad9d-7278861c489e"
I0809 04:54:48.390038    6318 app.go:128] ID for this run: "18409e55-179f-11ed-ad9d-7278861c489e"
I0809 04:54:48.390339    6318 local.go:42] ⚙️ ssh-keygen -t ed25519 -N  -q -f /tmp/kops/e2e-ed4da97961-6b857.test-cncf-aws.k8s.io/id_ed25519
I0809 04:54:48.402281    6318 dumplogs.go:45] /tmp/kops.SNUi7B6O7 toolbox dump --name e2e-ed4da97961-6b857.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /tmp/kops/e2e-ed4da97961-6b857.test-cncf-aws.k8s.io/id_ed25519 --ssh-user ubuntu
I0809 04:54:48.402326    6318 local.go:42] ⚙️ /tmp/kops.SNUi7B6O7 toolbox dump --name e2e-ed4da97961-6b857.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /tmp/kops/e2e-ed4da97961-6b857.test-cncf-aws.k8s.io/id_ed25519 --ssh-user ubuntu
W0809 04:54:48.898937    6318 down.go:34] Dumping cluster logs at the start of Down() failed: exit status 1
I0809 04:54:48.899006    6318 down.go:48] /tmp/kops.SNUi7B6O7 delete cluster --name e2e-ed4da97961-6b857.test-cncf-aws.k8s.io --yes
I0809 04:54:48.899035    6318 local.go:42] ⚙️ /tmp/kops.SNUi7B6O7 delete cluster --name e2e-ed4da97961-6b857.test-cncf-aws.k8s.io --yes
I0809 04:54:48.932406    6338 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
I0809 04:54:48.932506    6338 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
Error: error reading cluster configuration: Cluster.kops.k8s.io "e2e-ed4da97961-6b857.test-cncf-aws.k8s.io" not found
Error: exit status 1
+ echo 'kubetest2 down failed'
kubetest2 down failed
+ [[ l == \v ]]
++ kops-base-from-marker latest
++ [[ latest =~ ^https: ]]
++ [[ latest == \l\a\t\e\s\t ]]
++ curl -s https://storage.googleapis.com/kops-ci/bin/latest-ci-updown-green.txt
+ KOPS_BASE_URL=https://storage.googleapis.com/kops-ci/bin/1.25.0-alpha.3+v1.25.0-alpha.2-62-g82ec66a033
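The trace above shows how the kops binary base URL is resolved from a marker: an argument that already looks like an https URL is used directly, while the literal value "latest" is resolved by fetching the latest-ci-updown-green.txt marker file. A minimal shell sketch of that logic, using only the function name and marker URL visible in the trace (the actual upstream script may differ):

  # Sketch of the marker-resolution step traced above (names taken from the trace).
  kops-base-from-marker() {
    local marker="$1"
    if [[ "${marker}" =~ ^https: ]]; then
      echo "${marker}"   # already a full base URL
    elif [[ "${marker}" == "latest" ]]; then
      # Resolve to the most recent green up/down CI build of kops.
      curl -s https://storage.googleapis.com/kops-ci/bin/latest-ci-updown-green.txt
    fi
  }
  KOPS_BASE_URL="$(kops-base-from-marker latest)"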
... skipping 14 lines ...
I0809 04:54:50.909282    6375 app.go:62] pass rundir-in-artifacts flag True for RunDir to be part of Artifacts
I0809 04:54:50.909313    6375 app.go:64] RunDir for this run: "/home/prow/go/src/k8s.io/kops/_rundir/18409e55-179f-11ed-ad9d-7278861c489e"
I0809 04:54:50.951895    6375 app.go:128] ID for this run: "18409e55-179f-11ed-ad9d-7278861c489e"
I0809 04:54:50.952134    6375 up.go:44] Cleaning up any leaked resources from previous cluster
I0809 04:54:50.952221    6375 dumplogs.go:45] /tmp/kops.Y8CzPWsIF toolbox dump --name e2e-ed4da97961-6b857.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /tmp/kops/e2e-ed4da97961-6b857.test-cncf-aws.k8s.io/id_ed25519 --ssh-user ubuntu
I0809 04:54:50.952277    6375 local.go:42] ⚙️ /tmp/kops.Y8CzPWsIF toolbox dump --name e2e-ed4da97961-6b857.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /tmp/kops/e2e-ed4da97961-6b857.test-cncf-aws.k8s.io/id_ed25519 --ssh-user ubuntu
W0809 04:54:51.453351    6375 down.go:34] Dumping cluster logs at the start of Down() failed: exit status 1
I0809 04:54:51.453434    6375 down.go:48] /tmp/kops.Y8CzPWsIF delete cluster --name e2e-ed4da97961-6b857.test-cncf-aws.k8s.io --yes
I0809 04:54:51.453447    6375 local.go:42] ⚙️ /tmp/kops.Y8CzPWsIF delete cluster --name e2e-ed4da97961-6b857.test-cncf-aws.k8s.io --yes
I0809 04:54:51.482486    6395 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
I0809 04:54:51.482568    6395 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
Error: error reading cluster configuration: Cluster.kops.k8s.io "e2e-ed4da97961-6b857.test-cncf-aws.k8s.io" not found
I0809 04:54:51.910060    6375 http.go:37] curl http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip
2022/08/09 04:54:51 failed to get external ip from metadata service: http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip returned 404
I0809 04:54:51.926041    6375 http.go:37] curl https://ip.jsb.workers.dev
I0809 04:54:52.050887    6375 up.go:159] /tmp/kops.Y8CzPWsIF create cluster --name e2e-ed4da97961-6b857.test-cncf-aws.k8s.io --cloud aws --kubernetes-version v1.24.3 --ssh-public-key /tmp/kops/e2e-ed4da97961-6b857.test-cncf-aws.k8s.io/id_ed25519.pub --override cluster.spec.nodePortAccess=0.0.0.0/0 --networking calico --admin-access 35.222.10.42/32 --master-count 1 --master-volume-size 48 --node-count 4 --node-volume-size 48 --zones ap-northeast-1a --master-size c5.large
I0809 04:54:52.051123    6375 local.go:42] ⚙️ /tmp/kops.Y8CzPWsIF create cluster --name e2e-ed4da97961-6b857.test-cncf-aws.k8s.io --cloud aws --kubernetes-version v1.24.3 --ssh-public-key /tmp/kops/e2e-ed4da97961-6b857.test-cncf-aws.k8s.io/id_ed25519.pub --override cluster.spec.nodePortAccess=0.0.0.0/0 --networking calico --admin-access 35.222.10.42/32 --master-count 1 --master-volume-size 48 --node-count 4 --node-volume-size 48 --zones ap-northeast-1a --master-size c5.large
I0809 04:54:52.080396    6407 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
I0809 04:54:52.080629    6407 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
I0809 04:54:52.097229    6407 create_cluster.go:862] Using SSH public key: /tmp/kops/e2e-ed4da97961-6b857.test-cncf-aws.k8s.io/id_ed25519.pub
... skipping 525 lines ...

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0809 04:55:42.087619    6447 validate_cluster.go:232] (will retry): cluster not yet healthy
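The validation message above describes the bootstrap DNS mechanism: kops seeds the API DNS record with the placeholder address 203.0.113.123, and validation only passes once dns-controller replaces it with the real control-plane IP. While the retries below continue, progress could be checked roughly as follows (a hedged sketch; the record name is taken from this run, and the commands assume a standard kops layout with dns-controller running as a Deployment in kube-system):

  # Hypothetical diagnostics while waiting for the placeholder record to be replaced.
  dig +short api.e2e-ed4da97961-6b857.test-cncf-aws.k8s.io   # still 203.0.113.123 until updated
  kubectl -n kube-system logs deployment/dns-controller --tail=50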
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-northeast-1a	Master	c5.large	1	1	ap-northeast-1a
nodes-ap-northeast-1a	Node	t3.medium	4	4	ap-northeast-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0809 04:55:52.120133    6447 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-northeast-1a	Master	c5.large	1	1	ap-northeast-1a
nodes-ap-northeast-1a	Node	t3.medium	4	4	ap-northeast-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0809 04:56:02.152730    6447 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-northeast-1a	Master	c5.large	1	1	ap-northeast-1a
nodes-ap-northeast-1a	Node	t3.medium	4	4	ap-northeast-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0809 04:56:12.191289    6447 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-northeast-1a	Master	c5.large	1	1	ap-northeast-1a
nodes-ap-northeast-1a	Node	t3.medium	4	4	ap-northeast-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0809 04:56:22.225910    6447 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-northeast-1a	Master	c5.large	1	1	ap-northeast-1a
nodes-ap-northeast-1a	Node	t3.medium	4	4	ap-northeast-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0809 04:56:32.261350    6447 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-northeast-1a	Master	c5.large	1	1	ap-northeast-1a
nodes-ap-northeast-1a	Node	t3.medium	4	4	ap-northeast-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0809 04:56:42.309126    6447 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-northeast-1a	Master	c5.large	1	1	ap-northeast-1a
nodes-ap-northeast-1a	Node	t3.medium	4	4	ap-northeast-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0809 04:56:52.358027    6447 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-northeast-1a	Master	c5.large	1	1	ap-northeast-1a
nodes-ap-northeast-1a	Node	t3.medium	4	4	ap-northeast-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0809 04:57:02.394556    6447 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-northeast-1a	Master	c5.large	1	1	ap-northeast-1a
nodes-ap-northeast-1a	Node	t3.medium	4	4	ap-northeast-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0809 04:57:12.428632    6447 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-northeast-1a	Master	c5.large	1	1	ap-northeast-1a
nodes-ap-northeast-1a	Node	t3.medium	4	4	ap-northeast-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0809 04:57:22.464073    6447 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-northeast-1a	Master	c5.large	1	1	ap-northeast-1a
nodes-ap-northeast-1a	Node	t3.medium	4	4	ap-northeast-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0809 04:57:32.505806    6447 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-northeast-1a	Master	c5.large	1	1	ap-northeast-1a
nodes-ap-northeast-1a	Node	t3.medium	4	4	ap-northeast-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0809 04:57:42.561924    6447 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-northeast-1a	Master	c5.large	1	1	ap-northeast-1a
nodes-ap-northeast-1a	Node	t3.medium	4	4	ap-northeast-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0809 04:57:52.598204    6447 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-northeast-1a	Master	c5.large	1	1	ap-northeast-1a
nodes-ap-northeast-1a	Node	t3.medium	4	4	ap-northeast-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0809 04:58:02.649554    6447 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-northeast-1a	Master	c5.large	1	1	ap-northeast-1a
nodes-ap-northeast-1a	Node	t3.medium	4	4	ap-northeast-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0809 04:58:12.685339    6447 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-northeast-1a	Master	c5.large	1	1	ap-northeast-1a
nodes-ap-northeast-1a	Node	t3.medium	4	4	ap-northeast-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0809 04:58:22.720745    6447 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-northeast-1a	Master	c5.large	1	1	ap-northeast-1a
nodes-ap-northeast-1a	Node	t3.medium	4	4	ap-northeast-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0809 04:58:32.764528    6447 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-northeast-1a	Master	c5.large	1	1	ap-northeast-1a
nodes-ap-northeast-1a	Node	t3.medium	4	4	ap-northeast-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0809 04:58:42.806923    6447 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-northeast-1a	Master	c5.large	1	1	ap-northeast-1a
nodes-ap-northeast-1a	Node	t3.medium	4	4	ap-northeast-1a

... skipping 20 lines ...
Pod	kube-system/coredns-autoscaler-f85cf5c-pzjw7	system-cluster-critical pod "coredns-autoscaler-f85cf5c-pzjw7" is pending
Pod	kube-system/ebs-csi-node-8xbj9			system-node-critical pod "ebs-csi-node-8xbj9" is pending
Pod	kube-system/ebs-csi-node-dqr9x			system-node-critical pod "ebs-csi-node-dqr9x" is pending
Pod	kube-system/ebs-csi-node-hrpnc			system-node-critical pod "ebs-csi-node-hrpnc" is pending
Pod	kube-system/ebs-csi-node-p69nf			system-node-critical pod "ebs-csi-node-p69nf" is pending

Validation Failed
W0809 04:58:56.464297    6447 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-northeast-1a	Master	c5.large	1	1	ap-northeast-1a
nodes-ap-northeast-1a	Node	t3.medium	4	4	ap-northeast-1a

... skipping 19 lines ...
Pod	kube-system/coredns-autoscaler-f85cf5c-pzjw7	system-cluster-critical pod "coredns-autoscaler-f85cf5c-pzjw7" is pending
Pod	kube-system/ebs-csi-node-8xbj9			system-node-critical pod "ebs-csi-node-8xbj9" is pending
Pod	kube-system/ebs-csi-node-dqr9x			system-node-critical pod "ebs-csi-node-dqr9x" is pending
Pod	kube-system/ebs-csi-node-hrpnc			system-node-critical pod "ebs-csi-node-hrpnc" is pending
Pod	kube-system/ebs-csi-node-p69nf			system-node-critical pod "ebs-csi-node-p69nf" is pending

Validation Failed
W0809 04:59:09.023915    6447 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-northeast-1a	Master	c5.large	1	1	ap-northeast-1a
nodes-ap-northeast-1a	Node	t3.medium	4	4	ap-northeast-1a

... skipping 17 lines ...
Pod	kube-system/coredns-5c44b6cf7d-tcqhp	system-cluster-critical pod "coredns-5c44b6cf7d-tcqhp" is pending
Pod	kube-system/ebs-csi-node-8xbj9		system-node-critical pod "ebs-csi-node-8xbj9" is pending
Pod	kube-system/ebs-csi-node-dqr9x		system-node-critical pod "ebs-csi-node-dqr9x" is pending
Pod	kube-system/ebs-csi-node-hrpnc		system-node-critical pod "ebs-csi-node-hrpnc" is pending
Pod	kube-system/ebs-csi-node-p69nf		system-node-critical pod "ebs-csi-node-p69nf" is pending

Validation Failed
W0809 04:59:21.719052    6447 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-northeast-1a	Master	c5.large	1	1	ap-northeast-1a
nodes-ap-northeast-1a	Node	t3.medium	4	4	ap-northeast-1a

... skipping 13 lines ...
Pod	kube-system/calico-node-x5w5f	system-node-critical pod "calico-node-x5w5f" is pending
Pod	kube-system/calico-node-zdzf5	system-node-critical pod "calico-node-zdzf5" is pending
Pod	kube-system/ebs-csi-node-8xbj9	system-node-critical pod "ebs-csi-node-8xbj9" is pending
Pod	kube-system/ebs-csi-node-dqr9x	system-node-critical pod "ebs-csi-node-dqr9x" is pending
Pod	kube-system/ebs-csi-node-p69nf	system-node-critical pod "ebs-csi-node-p69nf" is pending

Validation Failed
W0809 04:59:34.217657    6447 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-northeast-1a	Master	c5.large	1	1	ap-northeast-1a
nodes-ap-northeast-1a	Node	t3.medium	4	4	ap-northeast-1a

... skipping 12 lines ...
Pod	kube-system/calico-node-zdzf5			system-node-critical pod "calico-node-zdzf5" is not ready (calico-node)
Pod	kube-system/ebs-csi-node-8xbj9			system-node-critical pod "ebs-csi-node-8xbj9" is pending
Pod	kube-system/ebs-csi-node-dqr9x			system-node-critical pod "ebs-csi-node-dqr9x" is pending
Pod	kube-system/ebs-csi-node-p69nf			system-node-critical pod "ebs-csi-node-p69nf" is pending
Pod	kube-system/kube-proxy-i-07e30e6449a7ac429	system-node-critical pod "kube-proxy-i-07e30e6449a7ac429" is pending

Validation Failed
W0809 04:59:46.906911    6447 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-northeast-1a	Master	c5.large	1	1	ap-northeast-1a
nodes-ap-northeast-1a	Node	t3.medium	4	4	ap-northeast-1a

... skipping 6 lines ...
i-084ead0cf9629264b	node	True

VALIDATION ERRORS
KIND	NAME				MESSAGE
Pod	kube-system/ebs-csi-node-dqr9x	system-node-critical pod "ebs-csi-node-dqr9x" is pending

Validation Failed
W0809 04:59:59.654381    6447 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-northeast-1a	Master	c5.large	1	1	ap-northeast-1a
nodes-ap-northeast-1a	Node	t3.medium	4	4	ap-northeast-1a

... skipping 513 lines ...
evicting pod kube-system/calico-kube-controllers-75f4df896c-l55sg
evicting pod kube-system/dns-controller-6684cc95dc-5kvf5
I0809 05:04:59.870138    6559 instancegroups.go:660] Waiting for 5s for pods to stabilize after draining.
I0809 05:05:04.872661    6559 instancegroups.go:591] Stopping instance "i-011c92d634ab969b8", node "i-011c92d634ab969b8", in group "master-ap-northeast-1a.masters.e2e-ed4da97961-6b857.test-cncf-aws.k8s.io" (this may take a while).
I0809 05:05:05.271474    6559 instancegroups.go:436] waiting for 15s after terminating instance
I0809 05:05:20.271686    6559 instancegroups.go:470] Validating the cluster.
I0809 05:05:20.487230    6559 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-ed4da97961-6b857.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 13.231.153.91:443: connect: connection refused.
I0809 05:06:20.538464    6559 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-ed4da97961-6b857.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 13.231.153.91:443: i/o timeout.
I0809 05:07:20.573174    6559 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-ed4da97961-6b857.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 13.231.153.91:443: i/o timeout.
I0809 05:08:20.606697    6559 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-ed4da97961-6b857.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 13.231.153.91:443: i/o timeout.
I0809 05:09:20.645034    6559 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-ed4da97961-6b857.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 13.231.153.91:443: i/o timeout.
I0809 05:09:54.428344    6559 instancegroups.go:526] Cluster did not pass validation, will retry in "30s": system-cluster-critical pod "calico-kube-controllers-75f4df896c-lfvr4" is not ready (calico-kube-controllers).
I0809 05:10:27.078812    6559 instancegroups.go:526] Cluster did not pass validation, will retry in "30s": system-cluster-critical pod "calico-kube-controllers-75f4df896c-lfvr4" is not ready (calico-kube-controllers).
I0809 05:10:59.658865    6559 instancegroups.go:526] Cluster did not pass validation, will retry in "30s": system-cluster-critical pod "calico-kube-controllers-75f4df896c-lfvr4" is not ready (calico-kube-controllers).
I0809 05:11:32.209659    6559 instancegroups.go:526] Cluster did not pass validation, will retry in "30s": system-cluster-critical pod "calico-kube-controllers-75f4df896c-lfvr4" is not ready (calico-kube-controllers).
I0809 05:12:04.823241    6559 instancegroups.go:526] Cluster did not pass validation, will retry in "30s": system-cluster-critical pod "calico-kube-controllers-75f4df896c-lfvr4" is not ready (calico-kube-controllers).
I0809 05:12:37.656939    6559 instancegroups.go:506] Cluster validated; revalidating in 10s to make sure it does not flap.
... skipping 13 lines ...
I0809 05:16:07.951747    6559 instancegroups.go:503] Cluster validated.
I0809 05:16:07.951823    6559 instancegroups.go:400] Draining the node: "i-02c771f9606d80bc8".
WARNING: ignoring DaemonSet-managed Pods: kube-system/calico-node-bjrl2, kube-system/ebs-csi-node-hrpnc
evicting pod kube-system/coredns-autoscaler-f85cf5c-pzjw7
evicting pod kube-system/coredns-5c44b6cf7d-x6tdf
evicting pod kube-system/coredns-5c44b6cf7d-tcqhp
error when evicting pods/"coredns-5c44b6cf7d-tcqhp" -n "kube-system" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
evicting pod kube-system/coredns-5c44b6cf7d-tcqhp
error when evicting pods/"coredns-5c44b6cf7d-tcqhp" -n "kube-system" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
evicting pod kube-system/coredns-5c44b6cf7d-tcqhp
I0809 05:16:26.432901    6559 instancegroups.go:660] Waiting for 5s for pods to stabilize after draining.
I0809 05:16:31.433087    6559 instancegroups.go:591] Stopping instance "i-02c771f9606d80bc8", node "i-02c771f9606d80bc8", in group "nodes-ap-northeast-1a.e2e-ed4da97961-6b857.test-cncf-aws.k8s.io" (this may take a while).
I0809 05:16:31.868773    6559 instancegroups.go:436] waiting for 15s after terminating instance
I0809 05:16:46.875962    6559 instancegroups.go:470] Validating the cluster.
I0809 05:16:49.461809    6559 instancegroups.go:526] Cluster did not pass validation, will retry in "30s": InstanceGroup "nodes-ap-northeast-1a" did not have enough nodes 3 vs 4.
... skipping 85 lines ...
I0809 05:29:31.691823    6609 app.go:62] pass rundir-in-artifacts flag True for RunDir to be part of Artifacts
I0809 05:29:31.691880    6609 app.go:64] RunDir for this run: "/home/prow/go/src/k8s.io/kops/_rundir/18409e55-179f-11ed-ad9d-7278861c489e"
I0809 05:29:31.695771    6609 app.go:128] ID for this run: "18409e55-179f-11ed-ad9d-7278861c489e"
I0809 05:29:31.695843    6609 local.go:42] ⚙️ /home/prow/go/bin/kubetest2-tester-kops --test-package-version=https://storage.googleapis.com/k8s-release-dev/ci/v1.25.0-beta.0.28+25a3274a4f62be --parallel 25
I0809 05:29:31.752255    6629 kubectl.go:148] gsutil cp gs://kubernetes-release/release/https://storage.googleapis.com/k8s-release-dev/ci/v1.25.0-beta.0.28+25a3274a4f62be/kubernetes-client-linux-amd64.tar.gz /root/.cache/kubernetes-client-linux-amd64.tar.gz
CommandException: No URLs matched: gs://kubernetes-release/release/https://storage.googleapis.com/k8s-release-dev/ci/v1.25.0-beta.0.28+25a3274a4f62be/kubernetes-client-linux-amd64.tar.gz
F0809 05:29:34.478143    6629 tester.go:482] failed to run ginkgo tester: failed to get kubectl package from published releases: failed to download release tar kubernetes-client-linux-amd64.tar.gz for release https://storage.googleapis.com/k8s-release-dev/ci/v1.25.0-beta.0.28+25a3274a4f62be: exit status 1
Error: exit status 255
+ kops-finish
+ kubetest2 kops -v=2 --cloud-provider=aws --cluster-name=e2e-ed4da97961-6b857.test-cncf-aws.k8s.io --kops-root=/home/prow/go/src/k8s.io/kops --admin-access= --env=KOPS_FEATURE_FLAGS=SpecOverrideFlag --kops-binary-path=/tmp/kops.SNUi7B6O7 --down
I0809 05:29:34.525279    6815 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
I0809 05:29:34.528145    6815 app.go:61] The files in RunDir shall not be part of Artifacts
I0809 05:29:34.528179    6815 app.go:62] pass rundir-in-artifacts flag True for RunDir to be part of Artifacts
I0809 05:29:34.528204    6815 app.go:64] RunDir for this run: "/home/prow/go/src/k8s.io/kops/_rundir/18409e55-179f-11ed-ad9d-7278861c489e"
... skipping 302 lines ...