Result: FAILURE
Tests: 0 failed / 1 succeeded
Started: 2022-08-10 01:52
Elapsed: 35m26s
Revision: master

No Test Failures!



Error lines from build-log.txt

... skipping 234 lines ...
I0810 01:53:59.919021    6319 app.go:62] pass rundir-in-artifacts flag True for RunDir to be part of Artifacts
I0810 01:53:59.919051    6319 app.go:64] RunDir for this run: "/home/prow/go/src/k8s.io/kops/_rundir/0d83b17e-184f-11ed-a1d2-feaf62acafc2"
I0810 01:53:59.923810    6319 app.go:128] ID for this run: "0d83b17e-184f-11ed-a1d2-feaf62acafc2"
I0810 01:53:59.924488    6319 local.go:42] ⚙️ ssh-keygen -t ed25519 -N  -q -f /tmp/kops/e2e-ed4da97961-6b857.test-cncf-aws.k8s.io/id_ed25519
I0810 01:53:59.936984    6319 dumplogs.go:45] /tmp/kops.WNp0h7n7c toolbox dump --name e2e-ed4da97961-6b857.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /tmp/kops/e2e-ed4da97961-6b857.test-cncf-aws.k8s.io/id_ed25519 --ssh-user ubuntu
I0810 01:53:59.937146    6319 local.go:42] ⚙️ /tmp/kops.WNp0h7n7c toolbox dump --name e2e-ed4da97961-6b857.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /tmp/kops/e2e-ed4da97961-6b857.test-cncf-aws.k8s.io/id_ed25519 --ssh-user ubuntu
W0810 01:54:00.443632    6319 down.go:34] Dumping cluster logs at the start of Down() failed: exit status 1
I0810 01:54:00.443693    6319 down.go:48] /tmp/kops.WNp0h7n7c delete cluster --name e2e-ed4da97961-6b857.test-cncf-aws.k8s.io --yes
I0810 01:54:00.443704    6319 local.go:42] ⚙️ /tmp/kops.WNp0h7n7c delete cluster --name e2e-ed4da97961-6b857.test-cncf-aws.k8s.io --yes
I0810 01:54:00.476256    6341 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
I0810 01:54:00.476350    6341 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
Error: error reading cluster configuration: Cluster.kops.k8s.io "e2e-ed4da97961-6b857.test-cncf-aws.k8s.io" not found
Error: exit status 1
+ echo 'kubetest2 down failed'
kubetest2 down failed
+ [[ l == \v ]]
++ kops-base-from-marker latest
++ [[ latest =~ ^https: ]]
++ [[ latest == \l\a\t\e\s\t ]]
++ curl -s https://storage.googleapis.com/kops-ci/bin/latest-ci-updown-green.txt
+ KOPS_BASE_URL=https://storage.googleapis.com/kops-ci/bin/1.25.0-alpha.3+v1.25.0-alpha.2-62-g82ec66a033
... skipping 14 lines ...
I0810 01:54:02.330065    6377 app.go:62] pass rundir-in-artifacts flag True for RunDir to be part of Artifacts
I0810 01:54:02.330094    6377 app.go:64] RunDir for this run: "/home/prow/go/src/k8s.io/kops/_rundir/0d83b17e-184f-11ed-a1d2-feaf62acafc2"
I0810 01:54:02.334054    6377 app.go:128] ID for this run: "0d83b17e-184f-11ed-a1d2-feaf62acafc2"
I0810 01:54:02.334129    6377 up.go:44] Cleaning up any leaked resources from previous cluster
I0810 01:54:02.334158    6377 dumplogs.go:45] /tmp/kops.LC9VKTgm9 toolbox dump --name e2e-ed4da97961-6b857.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /tmp/kops/e2e-ed4da97961-6b857.test-cncf-aws.k8s.io/id_ed25519 --ssh-user ubuntu
I0810 01:54:02.334205    6377 local.go:42] ⚙️ /tmp/kops.LC9VKTgm9 toolbox dump --name e2e-ed4da97961-6b857.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /tmp/kops/e2e-ed4da97961-6b857.test-cncf-aws.k8s.io/id_ed25519 --ssh-user ubuntu
W0810 01:54:02.878913    6377 down.go:34] Dumping cluster logs at the start of Down() failed: exit status 1
I0810 01:54:02.878959    6377 down.go:48] /tmp/kops.LC9VKTgm9 delete cluster --name e2e-ed4da97961-6b857.test-cncf-aws.k8s.io --yes
I0810 01:54:02.878970    6377 local.go:42] ⚙️ /tmp/kops.LC9VKTgm9 delete cluster --name e2e-ed4da97961-6b857.test-cncf-aws.k8s.io --yes
I0810 01:54:02.913238    6399 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
I0810 01:54:02.913333    6399 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
Error: error reading cluster configuration: Cluster.kops.k8s.io "e2e-ed4da97961-6b857.test-cncf-aws.k8s.io" not found
I0810 01:54:03.384309    6377 http.go:37] curl http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip
2022/08/10 01:54:03 failed to get external ip from metadata service: http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip returned 404
I0810 01:54:03.397052    6377 http.go:37] curl https://ip.jsb.workers.dev
I0810 01:54:03.566894    6377 up.go:159] /tmp/kops.LC9VKTgm9 create cluster --name e2e-ed4da97961-6b857.test-cncf-aws.k8s.io --cloud aws --kubernetes-version v1.24.3 --ssh-public-key /tmp/kops/e2e-ed4da97961-6b857.test-cncf-aws.k8s.io/id_ed25519.pub --override cluster.spec.nodePortAccess=0.0.0.0/0 --networking calico --admin-access 35.239.12.69/32 --master-count 1 --master-volume-size 48 --node-count 4 --node-volume-size 48 --zones ca-central-1a --master-size c5.large
I0810 01:54:03.566931    6377 local.go:42] ⚙️ /tmp/kops.LC9VKTgm9 create cluster --name e2e-ed4da97961-6b857.test-cncf-aws.k8s.io --cloud aws --kubernetes-version v1.24.3 --ssh-public-key /tmp/kops/e2e-ed4da97961-6b857.test-cncf-aws.k8s.io/id_ed25519.pub --override cluster.spec.nodePortAccess=0.0.0.0/0 --networking calico --admin-access 35.239.12.69/32 --master-count 1 --master-volume-size 48 --node-count 4 --node-volume-size 48 --zones ca-central-1a --master-size c5.large
I0810 01:54:03.598416    6410 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
I0810 01:54:03.598522    6410 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
I0810 01:54:03.616511    6410 create_cluster.go:862] Using SSH public key: /tmp/kops/e2e-ed4da97961-6b857.test-cncf-aws.k8s.io/id_ed25519.pub
... skipping 525 lines ...

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0810 01:54:43.972966    6450 validate_cluster.go:232] (will retry): cluster not yet healthy
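The validator's behaviour above (check the cluster, log "(will retry): cluster not yet healthy", sleep about 10s, check again) is a generic poll-until-healthy loop. A minimal sketch, assuming a `check()` callable that returns True once the cluster validates; all names and parameters here are illustrative, not the actual kops implementation:

```python
import time

def wait_until_valid(check, interval=10.0, timeout=300.0,
                     sleep=time.sleep, clock=time.monotonic):
    """Poll check() until it returns True or the timeout elapses.

    Illustrative sketch of the retry loop visible in the log; the real
    validator lives in kops' validate_cluster.go.
    """
    deadline = clock() + timeout
    while True:
        if check():
            return True          # cluster reported healthy
        if clock() >= deadline:
            return False         # gave up waiting
        sleep(interval)          # "(will retry): cluster not yet healthy"

# Example: a check that succeeds on the third attempt.
attempts = {"n": 0}
def check():
    attempts["n"] += 1
    return attempts["n"] >= 3

ok = wait_until_valid(check, interval=0, timeout=60, sleep=lambda s: None)
```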
... skipping 288 lines (the same "dns apiserver Validation Failed" block and "(will retry): cluster not yet healthy" warning repeated every ~10s from 01:54:54 through 01:57:44) ...
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ca-central-1a	Master	c5.large	1	1	ca-central-1a
nodes-ca-central-1a	Node	t3.medium	4	4	ca-central-1a

... skipping 19 lines ...
Pod	kube-system/coredns-autoscaler-f85cf5c-6j4jp	system-cluster-critical pod "coredns-autoscaler-f85cf5c-6j4jp" is pending
Pod	kube-system/ebs-csi-node-4rjss			system-node-critical pod "ebs-csi-node-4rjss" is pending
Pod	kube-system/ebs-csi-node-cxlrq			system-node-critical pod "ebs-csi-node-cxlrq" is pending
Pod	kube-system/ebs-csi-node-hftr8			system-node-critical pod "ebs-csi-node-hftr8" is pending
Pod	kube-system/ebs-csi-node-spbd9			system-node-critical pod "ebs-csi-node-spbd9" is pending

Validation Failed
W0810 01:57:56.138476    6450 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ca-central-1a	Master	c5.large	1	1	ca-central-1a
nodes-ca-central-1a	Node	t3.medium	4	4	ca-central-1a

... skipping 17 lines ...
Pod	kube-system/coredns-5c44b6cf7d-jt627	system-cluster-critical pod "coredns-5c44b6cf7d-jt627" is pending
Pod	kube-system/ebs-csi-node-4rjss		system-node-critical pod "ebs-csi-node-4rjss" is pending
Pod	kube-system/ebs-csi-node-cxlrq		system-node-critical pod "ebs-csi-node-cxlrq" is pending
Pod	kube-system/ebs-csi-node-hftr8		system-node-critical pod "ebs-csi-node-hftr8" is pending
Pod	kube-system/ebs-csi-node-spbd9		system-node-critical pod "ebs-csi-node-spbd9" is pending

Validation Failed
W0810 01:58:07.149191    6450 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ca-central-1a	Master	c5.large	1	1	ca-central-1a
nodes-ca-central-1a	Node	t3.medium	4	4	ca-central-1a

... skipping 14 lines ...
Pod	kube-system/calico-node-r2tnf	system-node-critical pod "calico-node-r2tnf" is pending
Pod	kube-system/calico-node-r86hz	system-node-critical pod "calico-node-r86hz" is not ready (calico-node)
Pod	kube-system/ebs-csi-node-4rjss	system-node-critical pod "ebs-csi-node-4rjss" is pending
Pod	kube-system/ebs-csi-node-cxlrq	system-node-critical pod "ebs-csi-node-cxlrq" is pending
Pod	kube-system/ebs-csi-node-spbd9	system-node-critical pod "ebs-csi-node-spbd9" is pending

Validation Failed
W0810 01:58:18.257860    6450 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ca-central-1a	Master	c5.large	1	1	ca-central-1a
nodes-ca-central-1a	Node	t3.medium	4	4	ca-central-1a

... skipping 11 lines ...
Node	i-095ab3b62310a71ea		node "i-095ab3b62310a71ea" of role "node" is not ready
Pod	kube-system/calico-node-bpnch	system-node-critical pod "calico-node-bpnch" is pending
Pod	kube-system/calico-node-r2tnf	system-node-critical pod "calico-node-r2tnf" is pending
Pod	kube-system/ebs-csi-node-4rjss	system-node-critical pod "ebs-csi-node-4rjss" is pending
Pod	kube-system/ebs-csi-node-cxlrq	system-node-critical pod "ebs-csi-node-cxlrq" is pending

Validation Failed
W0810 01:58:29.327396    6450 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ca-central-1a	Master	c5.large	1	1	ca-central-1a
nodes-ca-central-1a	Node	t3.medium	4	4	ca-central-1a

... skipping 13 lines ...
Pod	kube-system/calico-node-r2tnf			system-node-critical pod "calico-node-r2tnf" is pending
Pod	kube-system/ebs-csi-node-4rjss			system-node-critical pod "ebs-csi-node-4rjss" is pending
Pod	kube-system/ebs-csi-node-cxlrq			system-node-critical pod "ebs-csi-node-cxlrq" is pending
Pod	kube-system/kube-proxy-i-0750a51cd5f627569	system-node-critical pod "kube-proxy-i-0750a51cd5f627569" is pending
Pod	kube-system/kube-proxy-i-076607bdbba7c35f7	system-node-critical pod "kube-proxy-i-076607bdbba7c35f7" is pending

Validation Failed
W0810 01:58:40.422454    6450 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ca-central-1a	Master	c5.large	1	1	ca-central-1a
nodes-ca-central-1a	Node	t3.medium	4	4	ca-central-1a

... skipping 10 lines ...
Node	i-095ab3b62310a71ea		node "i-095ab3b62310a71ea" of role "node" is not ready
Pod	kube-system/calico-node-bpnch	system-node-critical pod "calico-node-bpnch" is not ready (calico-node)
Pod	kube-system/calico-node-r2tnf	system-node-critical pod "calico-node-r2tnf" is not ready (calico-node)
Pod	kube-system/ebs-csi-node-4rjss	system-node-critical pod "ebs-csi-node-4rjss" is pending
Pod	kube-system/ebs-csi-node-cxlrq	system-node-critical pod "ebs-csi-node-cxlrq" is pending

Validation Failed
W0810 01:58:51.381953    6450 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ca-central-1a	Master	c5.large	1	1	ca-central-1a
nodes-ca-central-1a	Node	t3.medium	4	4	ca-central-1a

... skipping 7 lines ...

VALIDATION ERRORS
KIND	NAME				MESSAGE
Pod	kube-system/calico-node-bpnch	system-node-critical pod "calico-node-bpnch" is not ready (calico-node)
Pod	kube-system/calico-node-r2tnf	system-node-critical pod "calico-node-r2tnf" is not ready (calico-node)

Validation Failed
W0810 01:59:02.461307    6450 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ca-central-1a	Master	c5.large	1	1	ca-central-1a
nodes-ca-central-1a	Node	t3.medium	4	4	ca-central-1a

... skipping 513 lines ...
evicting pod kube-system/calico-kube-controllers-75f4df896c-dwgnw
evicting pod kube-system/dns-controller-6684cc95dc-j7btd
I0810 02:03:23.687293    6565 instancegroups.go:660] Waiting for 5s for pods to stabilize after draining.
I0810 02:03:28.688213    6565 instancegroups.go:591] Stopping instance "i-0d79b45cd957bf433", node "i-0d79b45cd957bf433", in group "master-ca-central-1a.masters.e2e-ed4da97961-6b857.test-cncf-aws.k8s.io" (this may take a while).
I0810 02:03:28.877693    6565 instancegroups.go:436] waiting for 15s after terminating instance
I0810 02:03:43.883598    6565 instancegroups.go:470] Validating the cluster.
I0810 02:03:43.996430    6565 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-ed4da97961-6b857.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 35.182.84.128:443: connect: connection refused.
I0810 02:04:44.028854    6565 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-ed4da97961-6b857.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 35.182.84.128:443: i/o timeout.
I0810 02:05:44.064471    6565 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-ed4da97961-6b857.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 35.182.84.128:443: i/o timeout.
I0810 02:06:44.108338    6565 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-ed4da97961-6b857.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 35.182.84.128:443: i/o timeout.
I0810 02:07:44.148312    6565 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-ed4da97961-6b857.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 35.182.84.128:443: i/o timeout.
I0810 02:08:15.537154    6565 instancegroups.go:526] Cluster did not pass validation, will retry in "30s": node "i-094c77aedcb3d4c26" of role "node" is not ready, node "i-095ab3b62310a71ea" of role "node" is not ready, system-cluster-critical pod "calico-kube-controllers-75f4df896c-97dms" is pending, system-cluster-critical pod "ebs-csi-controller-5b4bd8874c-cd7jh" is not ready (ebs-plugin).
I0810 02:08:46.535429    6565 instancegroups.go:526] Cluster did not pass validation, will retry in "30s": system-cluster-critical pod "calico-kube-controllers-75f4df896c-97dms" is not ready (calico-kube-controllers).
I0810 02:09:17.546181    6565 instancegroups.go:526] Cluster did not pass validation, will retry in "30s": system-cluster-critical pod "calico-kube-controllers-75f4df896c-97dms" is not ready (calico-kube-controllers).
I0810 02:09:48.619051    6565 instancegroups.go:506] Cluster validated; revalidating in 10s to make sure it does not flap.
I0810 02:09:59.477534    6565 instancegroups.go:503] Cluster validated.
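The "revalidating in 10s to make sure it does not flap" step above requires two consecutive successful validations before declaring the cluster healthy. A sketch of that idea, with illustrative names only (this is not the kops code):

```python
import time

def validate_without_flap(check, revalidate_delay=10.0, sleep=time.sleep):
    """Declare the cluster healthy only after two consecutive successful
    validations, revalidate_delay seconds apart, so a transiently-green
    check does not count. Hypothetical helper mirroring the log above."""
    if not check():
        return False             # first validation failed
    sleep(revalidate_delay)      # "revalidating in 10s ..."
    return check()               # must still be healthy
```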
I0810 02:09:59.477587    6565 instancegroups.go:470] Validating the cluster.
... skipping 26 lines ...
I0810 02:17:12.675783    6565 instancegroups.go:503] Cluster validated.
I0810 02:17:12.675856    6565 instancegroups.go:400] Draining the node: "i-076607bdbba7c35f7".
WARNING: ignoring DaemonSet-managed Pods: kube-system/calico-node-kgzh7, kube-system/ebs-csi-node-hftr8
evicting pod kube-system/coredns-autoscaler-f85cf5c-6j4jp
evicting pod kube-system/coredns-5c44b6cf7d-jt627
evicting pod kube-system/coredns-5c44b6cf7d-pvjwd
error when evicting pods/"coredns-5c44b6cf7d-jt627" -n "kube-system" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
evicting pod kube-system/coredns-5c44b6cf7d-jt627
I0810 02:17:24.668441    6565 instancegroups.go:660] Waiting for 5s for pods to stabilize after draining.
I0810 02:17:29.668762    6565 instancegroups.go:591] Stopping instance "i-076607bdbba7c35f7", node "i-076607bdbba7c35f7", in group "nodes-ca-central-1a.e2e-ed4da97961-6b857.test-cncf-aws.k8s.io" (this may take a while).
I0810 02:17:29.852205    6565 instancegroups.go:436] waiting for 15s after terminating instance
I0810 02:17:44.852451    6565 instancegroups.go:470] Validating the cluster.
I0810 02:17:45.839513    6565 instancegroups.go:526] Cluster did not pass validation, will retry in "30s": InstanceGroup "nodes-ca-central-1a" did not have enough nodes 3 vs 4.
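The drain output above shows an eviction refused because it would violate the pod's disruption budget, then retried after 5s. The retry pattern can be sketched as follows, assuming an `evict()` callable that returns False while the PDB blocks the eviction; this is a hypothetical helper, not the kubectl drain implementation:

```python
import time

def evict_with_retry(evict, retries=5, delay=5.0, sleep=time.sleep):
    """Attempt an eviction; when it is refused (evict() returns False,
    e.g. because it would violate the pod's disruption budget), wait
    `delay` seconds and try again, up to `retries` attempts."""
    for _ in range(retries):
        if evict():
            return True          # eviction accepted
        sleep(delay)             # "(will retry after 5s)"
    return False                 # still blocked after all retries
```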
... skipping 68 lines ...
I0810 02:25:50.844386    6620 app.go:62] pass rundir-in-artifacts flag True for RunDir to be part of Artifacts
I0810 02:25:50.844418    6620 app.go:64] RunDir for this run: "/home/prow/go/src/k8s.io/kops/_rundir/0d83b17e-184f-11ed-a1d2-feaf62acafc2"
I0810 02:25:50.847950    6620 app.go:128] ID for this run: "0d83b17e-184f-11ed-a1d2-feaf62acafc2"
I0810 02:25:50.848179    6620 local.go:42] ⚙️ /home/prow/go/bin/kubetest2-tester-kops --test-package-version=https://storage.googleapis.com/k8s-release-dev/ci/v1.25.0-beta.0.36+3e396dbac5c618 --parallel 25
I0810 02:25:50.872642    6640 kubectl.go:148] gsutil cp gs://kubernetes-release/release/https://storage.googleapis.com/k8s-release-dev/ci/v1.25.0-beta.0.36+3e396dbac5c618/kubernetes-client-linux-amd64.tar.gz /root/.cache/kubernetes-client-linux-amd64.tar.gz
CommandException: No URLs matched: gs://kubernetes-release/release/https://storage.googleapis.com/k8s-release-dev/ci/v1.25.0-beta.0.36+3e396dbac5c618/kubernetes-client-linux-amd64.tar.gz
F0810 02:25:52.770755    6640 tester.go:482] failed to run ginkgo tester: failed to get kubectl package from published releases: failed to download release tar kubernetes-client-linux-amd64.tar.gz for release https://storage.googleapis.com/k8s-release-dev/ci/v1.25.0-beta.0.36+3e396dbac5c618: exit status 1
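The fatal error above comes from treating the full https URL passed via --test-package-version as a bare version string and prepending the gs://kubernetes-release/release/ prefix, producing the malformed path in the gsutil line. A defensive resolver might branch on the form of the input; `resolve_release_url` is a hypothetical helper for illustration, not the kubetest2 code:

```python
def resolve_release_url(version_or_url,
                        filename="kubernetes-client-linux-amd64.tar.gz"):
    """Return a download URL for a release artifact.

    A full http(s) URL is used as the base directly; only a bare version
    string (e.g. "v1.24.3") gets the gs://kubernetes-release prefix.
    """
    if version_or_url.startswith(("http://", "https://")):
        base = version_or_url.rstrip("/")
    else:
        base = f"gs://kubernetes-release/release/{version_or_url}"
    return f"{base}/{filename}"
```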
Error: exit status 255
+ kops-finish
+ kubetest2 kops -v=2 --cloud-provider=aws --cluster-name=e2e-ed4da97961-6b857.test-cncf-aws.k8s.io --kops-root=/home/prow/go/src/k8s.io/kops --admin-access= --env=KOPS_FEATURE_FLAGS=SpecOverrideFlag --kops-binary-path=/tmp/kops.WNp0h7n7c --down
I0810 02:25:52.805708    6825 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
I0810 02:25:52.806754    6825 app.go:61] The files in RunDir shall not be part of Artifacts
I0810 02:25:52.806783    6825 app.go:62] pass rundir-in-artifacts flag True for RunDir to be part of Artifacts
I0810 02:25:52.806807    6825 app.go:64] RunDir for this run: "/home/prow/go/src/k8s.io/kops/_rundir/0d83b17e-184f-11ed-a1d2-feaf62acafc2"
... skipping 274 lines ...