Result: FAILURE
Tests: 0 failed / 1 succeeded
Started: 2022-08-10 04:53
Elapsed: 39m7s
Revision: master

No Test Failures!

Error lines from build-log.txt

... skipping 234 lines ...
I0810 04:54:56.180955    6312 app.go:62] pass rundir-in-artifacts flag True for RunDir to be part of Artifacts
I0810 04:54:56.180981    6312 app.go:64] RunDir for this run: "/home/prow/go/src/k8s.io/kops/_rundir/3d9e3738-1868-11ed-a3cd-9a8e9eec334c"
I0810 04:54:56.188223    6312 app.go:128] ID for this run: "3d9e3738-1868-11ed-a3cd-9a8e9eec334c"
I0810 04:54:56.188531    6312 local.go:42] ⚙️ ssh-keygen -t ed25519 -N  -q -f /tmp/kops/e2e-ed4da97961-6b857.test-cncf-aws.k8s.io/id_ed25519
I0810 04:54:56.205421    6312 dumplogs.go:45] /tmp/kops.gG4Oq7f8o toolbox dump --name e2e-ed4da97961-6b857.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /tmp/kops/e2e-ed4da97961-6b857.test-cncf-aws.k8s.io/id_ed25519 --ssh-user ubuntu
I0810 04:54:56.205471    6312 local.go:42] ⚙️ /tmp/kops.gG4Oq7f8o toolbox dump --name e2e-ed4da97961-6b857.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /tmp/kops/e2e-ed4da97961-6b857.test-cncf-aws.k8s.io/id_ed25519 --ssh-user ubuntu
W0810 04:54:56.716554    6312 down.go:34] Dumping cluster logs at the start of Down() failed: exit status 1
I0810 04:54:56.716621    6312 down.go:48] /tmp/kops.gG4Oq7f8o delete cluster --name e2e-ed4da97961-6b857.test-cncf-aws.k8s.io --yes
I0810 04:54:56.716642    6312 local.go:42] ⚙️ /tmp/kops.gG4Oq7f8o delete cluster --name e2e-ed4da97961-6b857.test-cncf-aws.k8s.io --yes
I0810 04:54:56.752640    6333 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
I0810 04:54:56.752733    6333 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
Error: error reading cluster configuration: Cluster.kops.k8s.io "e2e-ed4da97961-6b857.test-cncf-aws.k8s.io" not found
Error: exit status 1
+ echo 'kubetest2 down failed'
kubetest2 down failed
+ [[ l == \v ]]
++ kops-base-from-marker latest
++ [[ latest =~ ^https: ]]
++ [[ latest == \l\a\t\e\s\t ]]
++ curl -s https://storage.googleapis.com/kops-ci/bin/latest-ci-updown-green.txt
+ KOPS_BASE_URL=https://storage.googleapis.com/kops-ci/bin/1.25.0-alpha.3+v1.25.0-alpha.2-62-g82ec66a033
... skipping 14 lines ...
I0810 04:54:58.605608    6370 app.go:62] pass rundir-in-artifacts flag True for RunDir to be part of Artifacts
I0810 04:54:58.605650    6370 app.go:64] RunDir for this run: "/home/prow/go/src/k8s.io/kops/_rundir/3d9e3738-1868-11ed-a3cd-9a8e9eec334c"
I0810 04:54:58.609013    6370 app.go:128] ID for this run: "3d9e3738-1868-11ed-a3cd-9a8e9eec334c"
I0810 04:54:58.609107    6370 up.go:44] Cleaning up any leaked resources from previous cluster
I0810 04:54:58.609167    6370 dumplogs.go:45] /tmp/kops.WCRknBUtL toolbox dump --name e2e-ed4da97961-6b857.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /tmp/kops/e2e-ed4da97961-6b857.test-cncf-aws.k8s.io/id_ed25519 --ssh-user ubuntu
I0810 04:54:58.609219    6370 local.go:42] ⚙️ /tmp/kops.WCRknBUtL toolbox dump --name e2e-ed4da97961-6b857.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /tmp/kops/e2e-ed4da97961-6b857.test-cncf-aws.k8s.io/id_ed25519 --ssh-user ubuntu
W0810 04:54:59.118523    6370 down.go:34] Dumping cluster logs at the start of Down() failed: exit status 1
I0810 04:54:59.118572    6370 down.go:48] /tmp/kops.WCRknBUtL delete cluster --name e2e-ed4da97961-6b857.test-cncf-aws.k8s.io --yes
I0810 04:54:59.118584    6370 local.go:42] ⚙️ /tmp/kops.WCRknBUtL delete cluster --name e2e-ed4da97961-6b857.test-cncf-aws.k8s.io --yes
I0810 04:54:59.151096    6392 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
I0810 04:54:59.151204    6392 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
Error: error reading cluster configuration: Cluster.kops.k8s.io "e2e-ed4da97961-6b857.test-cncf-aws.k8s.io" not found
I0810 04:54:59.610191    6370 http.go:37] curl http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip
2022/08/10 04:54:59 failed to get external ip from metadata service: http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip returned 404
I0810 04:54:59.624043    6370 http.go:37] curl https://ip.jsb.workers.dev
I0810 04:54:59.747356    6370 up.go:159] /tmp/kops.WCRknBUtL create cluster --name e2e-ed4da97961-6b857.test-cncf-aws.k8s.io --cloud aws --kubernetes-version v1.24.3 --ssh-public-key /tmp/kops/e2e-ed4da97961-6b857.test-cncf-aws.k8s.io/id_ed25519.pub --override cluster.spec.nodePortAccess=0.0.0.0/0 --networking calico --admin-access 104.197.203.153/32 --master-count 1 --master-volume-size 48 --node-count 4 --node-volume-size 48 --zones eu-west-3a --master-size c5.large
I0810 04:54:59.747402    6370 local.go:42] ⚙️ /tmp/kops.WCRknBUtL create cluster --name e2e-ed4da97961-6b857.test-cncf-aws.k8s.io --cloud aws --kubernetes-version v1.24.3 --ssh-public-key /tmp/kops/e2e-ed4da97961-6b857.test-cncf-aws.k8s.io/id_ed25519.pub --override cluster.spec.nodePortAccess=0.0.0.0/0 --networking calico --admin-access 104.197.203.153/32 --master-count 1 --master-volume-size 48 --node-count 4 --node-volume-size 48 --zones eu-west-3a --master-size c5.large
I0810 04:54:59.779414    6402 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
I0810 04:54:59.779623    6402 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
I0810 04:54:59.795949    6402 create_cluster.go:862] Using SSH public key: /tmp/kops/e2e-ed4da97961-6b857.test-cncf-aws.k8s.io/id_ed25519.pub
... skipping 525 lines ...

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0810 04:55:46.474198    6442 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-3a	Master	c5.large	1	1	eu-west-3a
nodes-eu-west-3a	Node	t3.medium	4	4	eu-west-3a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0810 04:55:56.512210    6442 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-3a	Master	c5.large	1	1	eu-west-3a
nodes-eu-west-3a	Node	t3.medium	4	4	eu-west-3a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0810 04:56:06.543982    6442 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-3a	Master	c5.large	1	1	eu-west-3a
nodes-eu-west-3a	Node	t3.medium	4	4	eu-west-3a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0810 04:56:16.575814    6442 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-3a	Master	c5.large	1	1	eu-west-3a
nodes-eu-west-3a	Node	t3.medium	4	4	eu-west-3a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0810 04:56:26.610837    6442 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-3a	Master	c5.large	1	1	eu-west-3a
nodes-eu-west-3a	Node	t3.medium	4	4	eu-west-3a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0810 04:56:36.645113    6442 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-3a	Master	c5.large	1	1	eu-west-3a
nodes-eu-west-3a	Node	t3.medium	4	4	eu-west-3a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0810 04:56:46.725332    6442 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-3a	Master	c5.large	1	1	eu-west-3a
nodes-eu-west-3a	Node	t3.medium	4	4	eu-west-3a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0810 04:56:56.772263    6442 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-3a	Master	c5.large	1	1	eu-west-3a
nodes-eu-west-3a	Node	t3.medium	4	4	eu-west-3a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0810 04:57:06.804107    6442 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-3a	Master	c5.large	1	1	eu-west-3a
nodes-eu-west-3a	Node	t3.medium	4	4	eu-west-3a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0810 04:57:16.835297    6442 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-3a	Master	c5.large	1	1	eu-west-3a
nodes-eu-west-3a	Node	t3.medium	4	4	eu-west-3a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0810 04:57:26.882107    6442 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-3a	Master	c5.large	1	1	eu-west-3a
nodes-eu-west-3a	Node	t3.medium	4	4	eu-west-3a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0810 04:57:36.930295    6442 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-3a	Master	c5.large	1	1	eu-west-3a
nodes-eu-west-3a	Node	t3.medium	4	4	eu-west-3a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0810 04:57:46.963455    6442 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-3a	Master	c5.large	1	1	eu-west-3a
nodes-eu-west-3a	Node	t3.medium	4	4	eu-west-3a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0810 04:57:57.009844    6442 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-3a	Master	c5.large	1	1	eu-west-3a
nodes-eu-west-3a	Node	t3.medium	4	4	eu-west-3a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0810 04:58:07.042783    6442 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-3a	Master	c5.large	1	1	eu-west-3a
nodes-eu-west-3a	Node	t3.medium	4	4	eu-west-3a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0810 04:58:17.079617    6442 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-3a	Master	c5.large	1	1	eu-west-3a
nodes-eu-west-3a	Node	t3.medium	4	4	eu-west-3a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0810 04:58:27.126549    6442 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-3a	Master	c5.large	1	1	eu-west-3a
nodes-eu-west-3a	Node	t3.medium	4	4	eu-west-3a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0810 04:58:37.161046    6442 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-3a	Master	c5.large	1	1	eu-west-3a
nodes-eu-west-3a	Node	t3.medium	4	4	eu-west-3a

... skipping 12 lines ...
Pod	kube-system/calico-node-kfrrq			system-node-critical pod "calico-node-kfrrq" is pending
Pod	kube-system/coredns-5c44b6cf7d-dzqlc		system-cluster-critical pod "coredns-5c44b6cf7d-dzqlc" is pending
Pod	kube-system/coredns-autoscaler-f85cf5c-zhjbx	system-cluster-critical pod "coredns-autoscaler-f85cf5c-zhjbx" is pending
Pod	kube-system/ebs-csi-node-7b6dp			system-node-critical pod "ebs-csi-node-7b6dp" is pending
Pod	kube-system/ebs-csi-node-kt8x2			system-node-critical pod "ebs-csi-node-kt8x2" is pending

Validation Failed
W0810 04:58:50.013080    6442 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-3a	Master	c5.large	1	1	eu-west-3a
nodes-eu-west-3a	Node	t3.medium	4	4	eu-west-3a

... skipping 19 lines ...
Pod	kube-system/coredns-autoscaler-f85cf5c-zhjbx	system-cluster-critical pod "coredns-autoscaler-f85cf5c-zhjbx" is pending
Pod	kube-system/ebs-csi-node-7b6dp			system-node-critical pod "ebs-csi-node-7b6dp" is pending
Pod	kube-system/ebs-csi-node-8wk4q			system-node-critical pod "ebs-csi-node-8wk4q" is pending
Pod	kube-system/ebs-csi-node-kt8x2			system-node-critical pod "ebs-csi-node-kt8x2" is pending
Pod	kube-system/ebs-csi-node-n629v			system-node-critical pod "ebs-csi-node-n629v" is pending

Validation Failed
W0810 04:59:01.979928    6442 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-3a	Master	c5.large	1	1	eu-west-3a
nodes-eu-west-3a	Node	t3.medium	4	4	eu-west-3a

... skipping 19 lines ...
Pod	kube-system/coredns-autoscaler-f85cf5c-zhjbx	system-cluster-critical pod "coredns-autoscaler-f85cf5c-zhjbx" is pending
Pod	kube-system/ebs-csi-node-7b6dp			system-node-critical pod "ebs-csi-node-7b6dp" is pending
Pod	kube-system/ebs-csi-node-8wk4q			system-node-critical pod "ebs-csi-node-8wk4q" is pending
Pod	kube-system/ebs-csi-node-kt8x2			system-node-critical pod "ebs-csi-node-kt8x2" is pending
Pod	kube-system/ebs-csi-node-n629v			system-node-critical pod "ebs-csi-node-n629v" is pending

Validation Failed
W0810 04:59:14.096427    6442 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-3a	Master	c5.large	1	1	eu-west-3a
nodes-eu-west-3a	Node	t3.medium	4	4	eu-west-3a

... skipping 18 lines ...
Pod	kube-system/coredns-autoscaler-f85cf5c-zhjbx	system-cluster-critical pod "coredns-autoscaler-f85cf5c-zhjbx" is pending
Pod	kube-system/ebs-csi-node-7b6dp			system-node-critical pod "ebs-csi-node-7b6dp" is pending
Pod	kube-system/ebs-csi-node-8wk4q			system-node-critical pod "ebs-csi-node-8wk4q" is pending
Pod	kube-system/ebs-csi-node-kt8x2			system-node-critical pod "ebs-csi-node-kt8x2" is pending
Pod	kube-system/ebs-csi-node-n629v			system-node-critical pod "ebs-csi-node-n629v" is pending

Validation Failed
W0810 04:59:26.089957    6442 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-3a	Master	c5.large	1	1	eu-west-3a
nodes-eu-west-3a	Node	t3.medium	4	4	eu-west-3a

... skipping 14 lines ...
Pod	kube-system/calico-node-mbxh9		system-node-critical pod "calico-node-mbxh9" is pending
Pod	kube-system/coredns-5c44b6cf7d-zzshz	system-cluster-critical pod "coredns-5c44b6cf7d-zzshz" is pending
Pod	kube-system/ebs-csi-node-7b6dp		system-node-critical pod "ebs-csi-node-7b6dp" is pending
Pod	kube-system/ebs-csi-node-kt8x2		system-node-critical pod "ebs-csi-node-kt8x2" is pending
Pod	kube-system/ebs-csi-node-n629v		system-node-critical pod "ebs-csi-node-n629v" is pending

Validation Failed
W0810 04:59:38.232642    6442 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-3a	Master	c5.large	1	1	eu-west-3a
nodes-eu-west-3a	Node	t3.medium	4	4	eu-west-3a

... skipping 10 lines ...
Pod	kube-system/calico-node-5bgdm			system-node-critical pod "calico-node-5bgdm" is not ready (calico-node)
Pod	kube-system/calico-node-mbxh9			system-node-critical pod "calico-node-mbxh9" is not ready (calico-node)
Pod	kube-system/ebs-csi-node-kt8x2			system-node-critical pod "ebs-csi-node-kt8x2" is pending
Pod	kube-system/ebs-csi-node-n629v			system-node-critical pod "ebs-csi-node-n629v" is pending
Pod	kube-system/kube-proxy-i-0bd53a2d3b1980aad	system-node-critical pod "kube-proxy-i-0bd53a2d3b1980aad" is pending

Validation Failed
W0810 04:59:50.242317    6442 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-3a	Master	c5.large	1	1	eu-west-3a
nodes-eu-west-3a	Node	t3.medium	4	4	eu-west-3a

... skipping 548 lines ...
evicting pod kube-system/calico-kube-controllers-75f4df896c-d2ndq
evicting pod kube-system/dns-controller-6684cc95dc-rczhm
I0810 05:04:34.403690    6555 instancegroups.go:660] Waiting for 5s for pods to stabilize after draining.
I0810 05:04:39.404349    6555 instancegroups.go:591] Stopping instance "i-0057ed835605a75da", node "i-0057ed835605a75da", in group "master-eu-west-3a.masters.e2e-ed4da97961-6b857.test-cncf-aws.k8s.io" (this may take a while).
I0810 05:04:39.700138    6555 instancegroups.go:436] waiting for 15s after terminating instance
I0810 05:04:54.703562    6555 instancegroups.go:470] Validating the cluster.
I0810 05:04:54.872946    6555 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-ed4da97961-6b857.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 13.38.72.44:443: connect: connection refused.
I0810 05:05:54.917194    6555 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-ed4da97961-6b857.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 13.38.72.44:443: i/o timeout.
I0810 05:06:54.955290    6555 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-ed4da97961-6b857.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 13.38.72.44:443: i/o timeout.
I0810 05:07:55.018828    6555 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-ed4da97961-6b857.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 13.38.72.44:443: i/o timeout.
I0810 05:08:55.082260    6555 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-ed4da97961-6b857.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 13.38.72.44:443: i/o timeout.
I0810 05:09:28.033771    6555 instancegroups.go:526] Cluster did not pass validation, will retry in "30s": node "i-05b23d54a6fe4b6b6" of role "node" is not ready, node "i-0aa460551514f3bc1" of role "node" is not ready, node "i-0bd53a2d3b1980aad" of role "node" is not ready, node "i-0dea54ea9e4c8b219" of role "node" is not ready, system-node-critical pod "calico-node-4dv78" is not ready (calico-node), system-cluster-critical pod "ebs-csi-controller-5b4bd8874c-9dzxz" is not ready (ebs-plugin).
I0810 05:10:00.039751    6555 instancegroups.go:526] Cluster did not pass validation, will retry in "30s": node "i-0dea54ea9e4c8b219" of role "node" is not ready, system-cluster-critical pod "calico-kube-controllers-75f4df896c-j4d2r" is not ready (calico-kube-controllers), system-node-critical pod "calico-node-4dv78" is not ready (calico-node).
I0810 05:10:31.996366    6555 instancegroups.go:526] Cluster did not pass validation, will retry in "30s": system-cluster-critical pod "calico-kube-controllers-75f4df896c-j4d2r" is not ready (calico-kube-controllers).
I0810 05:11:03.985643    6555 instancegroups.go:526] Cluster did not pass validation, will retry in "30s": system-cluster-critical pod "calico-kube-controllers-75f4df896c-j4d2r" is not ready (calico-kube-controllers).
I0810 05:11:35.903558    6555 instancegroups.go:526] Cluster did not pass validation, will retry in "30s": system-cluster-critical pod "calico-kube-controllers-75f4df896c-j4d2r" is not ready (calico-kube-controllers).
I0810 05:12:07.978317    6555 instancegroups.go:526] Cluster did not pass validation, will retry in "30s": system-cluster-critical pod "calico-kube-controllers-75f4df896c-j4d2r" is not ready (calico-kube-controllers).
... skipping 108 lines ...
I0810 05:29:56.648886    6604 app.go:62] pass rundir-in-artifacts flag True for RunDir to be part of Artifacts
I0810 05:29:56.648995    6604 app.go:64] RunDir for this run: "/home/prow/go/src/k8s.io/kops/_rundir/3d9e3738-1868-11ed-a3cd-9a8e9eec334c"
I0810 05:29:56.653551    6604 app.go:128] ID for this run: "3d9e3738-1868-11ed-a3cd-9a8e9eec334c"
I0810 05:29:56.653606    6604 local.go:42] ⚙️ /home/prow/go/bin/kubetest2-tester-kops --test-package-version=https://storage.googleapis.com/k8s-release-dev/ci/v1.26.0-alpha.0.5+a38bb7ed811a18 --parallel 25
I0810 05:29:56.687626    6625 kubectl.go:148] gsutil cp gs://kubernetes-release/release/https://storage.googleapis.com/k8s-release-dev/ci/v1.26.0-alpha.0.5+a38bb7ed811a18/kubernetes-client-linux-amd64.tar.gz /root/.cache/kubernetes-client-linux-amd64.tar.gz
CommandException: No URLs matched: gs://kubernetes-release/release/https://storage.googleapis.com/k8s-release-dev/ci/v1.26.0-alpha.0.5+a38bb7ed811a18/kubernetes-client-linux-amd64.tar.gz
F0810 05:29:59.030443    6625 tester.go:482] failed to run ginkgo tester: failed to get kubectl package from published releases: failed to download release tar kubernetes-client-linux-amd64.tar.gz for release https://storage.googleapis.com/k8s-release-dev/ci/v1.26.0-alpha.0.5+a38bb7ed811a18: exit status 1
Error: exit status 255
+ kops-finish
+ kubetest2 kops -v=2 --cloud-provider=aws --cluster-name=e2e-ed4da97961-6b857.test-cncf-aws.k8s.io --kops-root=/home/prow/go/src/k8s.io/kops --admin-access= --env=KOPS_FEATURE_FLAGS=SpecOverrideFlag --kops-binary-path=/tmp/kops.gG4Oq7f8o --down
I0810 05:29:59.205931    6814 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
I0810 05:29:59.209162    6814 app.go:61] The files in RunDir shall not be part of Artifacts
I0810 05:29:59.209208    6814 app.go:62] pass rundir-in-artifacts flag True for RunDir to be part of Artifacts
I0810 05:29:59.209253    6814 app.go:64] RunDir for this run: "/home/prow/go/src/k8s.io/kops/_rundir/3d9e3738-1868-11ed-a3cd-9a8e9eec334c"
... skipping 272 lines ...