Result: FAILURE
Tests: 0 failed / 1 succeeded
Started: 2022-08-11 01:53
Elapsed: 35m58s
Revision: master

No Test Failures!

Error lines from build-log.txt

... skipping 234 lines ...
I0811 01:54:35.434223    6367 app.go:62] pass rundir-in-artifacts flag True for RunDir to be part of Artifacts
I0811 01:54:35.434249    6367 app.go:64] RunDir for this run: "/home/prow/go/src/k8s.io/kops/_rundir/49f221cb-1918-11ed-b2a2-1215444f8a61"
I0811 01:54:35.492319    6367 app.go:128] ID for this run: "49f221cb-1918-11ed-b2a2-1215444f8a61"
I0811 01:54:35.492517    6367 local.go:42] ⚙️ ssh-keygen -t ed25519 -N  -q -f /tmp/kops/e2e-ed4da97961-6b857.test-cncf-aws.k8s.io/id_ed25519
I0811 01:54:35.498878    6367 dumplogs.go:45] /tmp/kops.CfrBkzusN toolbox dump --name e2e-ed4da97961-6b857.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /tmp/kops/e2e-ed4da97961-6b857.test-cncf-aws.k8s.io/id_ed25519 --ssh-user ubuntu
I0811 01:54:35.498925    6367 local.go:42] ⚙️ /tmp/kops.CfrBkzusN toolbox dump --name e2e-ed4da97961-6b857.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /tmp/kops/e2e-ed4da97961-6b857.test-cncf-aws.k8s.io/id_ed25519 --ssh-user ubuntu
W0811 01:54:35.990881    6367 down.go:34] Dumping cluster logs at the start of Down() failed: exit status 1
I0811 01:54:35.990957    6367 down.go:48] /tmp/kops.CfrBkzusN delete cluster --name e2e-ed4da97961-6b857.test-cncf-aws.k8s.io --yes
I0811 01:54:35.990973    6367 local.go:42] ⚙️ /tmp/kops.CfrBkzusN delete cluster --name e2e-ed4da97961-6b857.test-cncf-aws.k8s.io --yes
I0811 01:54:36.025579    6384 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
I0811 01:54:36.025794    6384 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
Error: error reading cluster configuration: Cluster.kops.k8s.io "e2e-ed4da97961-6b857.test-cncf-aws.k8s.io" not found
Error: exit status 1
+ echo 'kubetest2 down failed'
kubetest2 down failed
+ [[ l == \v ]]
++ kops-base-from-marker latest
++ [[ latest =~ ^https: ]]
++ [[ latest == \l\a\t\e\s\t ]]
++ curl -s https://storage.googleapis.com/kops-ci/bin/latest-ci-updown-green.txt
+ KOPS_BASE_URL=https://storage.googleapis.com/kops-ci/bin/1.25.0-alpha.3+v1.25.0-alpha.2-75-g18cba87e91
... skipping 14 lines ...
I0811 01:54:37.843944    6420 app.go:62] pass rundir-in-artifacts flag True for RunDir to be part of Artifacts
I0811 01:54:37.843977    6420 app.go:64] RunDir for this run: "/home/prow/go/src/k8s.io/kops/_rundir/49f221cb-1918-11ed-b2a2-1215444f8a61"
I0811 01:54:37.847960    6420 app.go:128] ID for this run: "49f221cb-1918-11ed-b2a2-1215444f8a61"
I0811 01:54:37.848032    6420 up.go:44] Cleaning up any leaked resources from previous cluster
I0811 01:54:37.848063    6420 dumplogs.go:45] /tmp/kops.PsuEJhXeJ toolbox dump --name e2e-ed4da97961-6b857.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /tmp/kops/e2e-ed4da97961-6b857.test-cncf-aws.k8s.io/id_ed25519 --ssh-user ubuntu
I0811 01:54:37.848104    6420 local.go:42] ⚙️ /tmp/kops.PsuEJhXeJ toolbox dump --name e2e-ed4da97961-6b857.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /tmp/kops/e2e-ed4da97961-6b857.test-cncf-aws.k8s.io/id_ed25519 --ssh-user ubuntu
W0811 01:54:38.305452    6420 down.go:34] Dumping cluster logs at the start of Down() failed: exit status 1
I0811 01:54:38.305576    6420 down.go:48] /tmp/kops.PsuEJhXeJ delete cluster --name e2e-ed4da97961-6b857.test-cncf-aws.k8s.io --yes
I0811 01:54:38.305596    6420 local.go:42] ⚙️ /tmp/kops.PsuEJhXeJ delete cluster --name e2e-ed4da97961-6b857.test-cncf-aws.k8s.io --yes
I0811 01:54:38.340485    6441 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
I0811 01:54:38.340586    6441 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
Error: error reading cluster configuration: Cluster.kops.k8s.io "e2e-ed4da97961-6b857.test-cncf-aws.k8s.io" not found
I0811 01:54:38.794237    6420 http.go:37] curl http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip
2022/08/11 01:54:38 failed to get external ip from metadata service: http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip returned 404
I0811 01:54:38.807563    6420 http.go:37] curl https://ip.jsb.workers.dev
I0811 01:54:38.951425    6420 up.go:159] /tmp/kops.PsuEJhXeJ create cluster --name e2e-ed4da97961-6b857.test-cncf-aws.k8s.io --cloud aws --kubernetes-version v1.24.3 --ssh-public-key /tmp/kops/e2e-ed4da97961-6b857.test-cncf-aws.k8s.io/id_ed25519.pub --override cluster.spec.nodePortAccess=0.0.0.0/0 --networking calico --admin-access 34.134.251.70/32 --master-count 1 --master-volume-size 48 --node-count 4 --node-volume-size 48 --zones us-east-1a --master-size c5.large
I0811 01:54:38.951466    6420 local.go:42] ⚙️ /tmp/kops.PsuEJhXeJ create cluster --name e2e-ed4da97961-6b857.test-cncf-aws.k8s.io --cloud aws --kubernetes-version v1.24.3 --ssh-public-key /tmp/kops/e2e-ed4da97961-6b857.test-cncf-aws.k8s.io/id_ed25519.pub --override cluster.spec.nodePortAccess=0.0.0.0/0 --networking calico --admin-access 34.134.251.70/32 --master-count 1 --master-volume-size 48 --node-count 4 --node-volume-size 48 --zones us-east-1a --master-size c5.large
I0811 01:54:38.981957    6452 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
I0811 01:54:38.982050    6452 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
I0811 01:54:39.001409    6452 create_cluster.go:862] Using SSH public key: /tmp/kops/e2e-ed4da97961-6b857.test-cncf-aws.k8s.io/id_ed25519.pub
... skipping 515 lines ...
I0811 01:55:20.942418    6420 up.go:243] /tmp/kops.PsuEJhXeJ validate cluster --name e2e-ed4da97961-6b857.test-cncf-aws.k8s.io --count 10 --wait 15m0s
I0811 01:55:20.942493    6420 local.go:42] ⚙️ /tmp/kops.PsuEJhXeJ validate cluster --name e2e-ed4da97961-6b857.test-cncf-aws.k8s.io --count 10 --wait 15m0s
I0811 01:55:20.973532    6492 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
I0811 01:55:20.973628    6492 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
Validating cluster e2e-ed4da97961-6b857.test-cncf-aws.k8s.io

W0811 01:55:21.970337    6492 validate_cluster.go:184] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-ed4da97961-6b857.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-east-1a	Master	c5.large	1	1	us-east-1a
nodes-us-east-1a	Node	t3.medium	4	4	us-east-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0811 01:55:32.019599    6492 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-east-1a	Master	c5.large	1	1	us-east-1a
nodes-us-east-1a	Node	t3.medium	4	4	us-east-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0811 01:55:42.059061    6492 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-east-1a	Master	c5.large	1	1	us-east-1a
nodes-us-east-1a	Node	t3.medium	4	4	us-east-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0811 01:55:52.124544    6492 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-east-1a	Master	c5.large	1	1	us-east-1a
nodes-us-east-1a	Node	t3.medium	4	4	us-east-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0811 01:56:02.209241    6492 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-east-1a	Master	c5.large	1	1	us-east-1a
nodes-us-east-1a	Node	t3.medium	4	4	us-east-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0811 01:56:12.244071    6492 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-east-1a	Master	c5.large	1	1	us-east-1a
nodes-us-east-1a	Node	t3.medium	4	4	us-east-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0811 01:56:22.277071    6492 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-east-1a	Master	c5.large	1	1	us-east-1a
nodes-us-east-1a	Node	t3.medium	4	4	us-east-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0811 01:56:32.313851    6492 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-east-1a	Master	c5.large	1	1	us-east-1a
nodes-us-east-1a	Node	t3.medium	4	4	us-east-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0811 01:56:42.354446    6492 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-east-1a	Master	c5.large	1	1	us-east-1a
nodes-us-east-1a	Node	t3.medium	4	4	us-east-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0811 01:56:52.408666    6492 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-east-1a	Master	c5.large	1	1	us-east-1a
nodes-us-east-1a	Node	t3.medium	4	4	us-east-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0811 01:57:02.445544    6492 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-east-1a	Master	c5.large	1	1	us-east-1a
nodes-us-east-1a	Node	t3.medium	4	4	us-east-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0811 01:57:12.495214    6492 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-east-1a	Master	c5.large	1	1	us-east-1a
nodes-us-east-1a	Node	t3.medium	4	4	us-east-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0811 01:57:22.529556    6492 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-east-1a	Master	c5.large	1	1	us-east-1a
nodes-us-east-1a	Node	t3.medium	4	4	us-east-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0811 01:57:32.565900    6492 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-east-1a	Master	c5.large	1	1	us-east-1a
nodes-us-east-1a	Node	t3.medium	4	4	us-east-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0811 01:57:42.602344    6492 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-east-1a	Master	c5.large	1	1	us-east-1a
nodes-us-east-1a	Node	t3.medium	4	4	us-east-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0811 01:57:52.640033    6492 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-east-1a	Master	c5.large	1	1	us-east-1a
nodes-us-east-1a	Node	t3.medium	4	4	us-east-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0811 01:58:02.689997    6492 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-east-1a	Master	c5.large	1	1	us-east-1a
nodes-us-east-1a	Node	t3.medium	4	4	us-east-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0811 01:58:12.724631    6492 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-east-1a	Master	c5.large	1	1	us-east-1a
nodes-us-east-1a	Node	t3.medium	4	4	us-east-1a

... skipping 19 lines ...
Pod	kube-system/coredns-autoscaler-f85cf5c-hpstv	system-cluster-critical pod "coredns-autoscaler-f85cf5c-hpstv" is pending
Pod	kube-system/ebs-csi-node-6xdcf			system-node-critical pod "ebs-csi-node-6xdcf" is pending
Pod	kube-system/ebs-csi-node-qhc8m			system-node-critical pod "ebs-csi-node-qhc8m" is pending
Pod	kube-system/ebs-csi-node-tssv9			system-node-critical pod "ebs-csi-node-tssv9" is pending
Pod	kube-system/ebs-csi-node-twbqr			system-node-critical pod "ebs-csi-node-twbqr" is pending

Validation Failed
W0811 01:58:24.235610    6492 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-east-1a	Master	c5.large	1	1	us-east-1a
nodes-us-east-1a	Node	t3.medium	4	4	us-east-1a

... skipping 15 lines ...
Pod	kube-system/calico-node-snjsp	system-node-critical pod "calico-node-snjsp" is not ready (calico-node)
Pod	kube-system/ebs-csi-node-6xdcf	system-node-critical pod "ebs-csi-node-6xdcf" is pending
Pod	kube-system/ebs-csi-node-qhc8m	system-node-critical pod "ebs-csi-node-qhc8m" is pending
Pod	kube-system/ebs-csi-node-tssv9	system-node-critical pod "ebs-csi-node-tssv9" is pending
Pod	kube-system/ebs-csi-node-twbqr	system-node-critical pod "ebs-csi-node-twbqr" is pending

Validation Failed
W0811 01:58:36.296345    6492 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-east-1a	Master	c5.large	1	1	us-east-1a
nodes-us-east-1a	Node	t3.medium	4	4	us-east-1a

... skipping 11 lines ...
Node	i-0f25594626efe0556		node "i-0f25594626efe0556" of role "node" is not ready
Pod	kube-system/calico-node-bc6pw	system-node-critical pod "calico-node-bc6pw" is pending
Pod	kube-system/calico-node-htr26	system-node-critical pod "calico-node-htr26" is pending
Pod	kube-system/ebs-csi-node-6xdcf	system-node-critical pod "ebs-csi-node-6xdcf" is pending
Pod	kube-system/ebs-csi-node-qhc8m	system-node-critical pod "ebs-csi-node-qhc8m" is pending

Validation Failed
W0811 01:58:47.299554    6492 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-east-1a	Master	c5.large	1	1	us-east-1a
nodes-us-east-1a	Node	t3.medium	4	4	us-east-1a

... skipping 11 lines ...
Node	i-0f25594626efe0556		node "i-0f25594626efe0556" of role "node" is not ready
Pod	kube-system/calico-node-bc6pw	system-node-critical pod "calico-node-bc6pw" is not ready (calico-node)
Pod	kube-system/calico-node-htr26	system-node-critical pod "calico-node-htr26" is not ready (calico-node)
Pod	kube-system/ebs-csi-node-6xdcf	system-node-critical pod "ebs-csi-node-6xdcf" is pending
Pod	kube-system/ebs-csi-node-qhc8m	system-node-critical pod "ebs-csi-node-qhc8m" is pending

Validation Failed
W0811 01:58:58.383860    6492 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-east-1a	Master	c5.large	1	1	us-east-1a
nodes-us-east-1a	Node	t3.medium	4	4	us-east-1a

... skipping 8 lines ...
VALIDATION ERRORS
KIND	NAME				MESSAGE
Pod	kube-system/calico-node-bc6pw	system-node-critical pod "calico-node-bc6pw" is not ready (calico-node)
Pod	kube-system/calico-node-htr26	system-node-critical pod "calico-node-htr26" is not ready (calico-node)
Pod	kube-system/ebs-csi-node-6xdcf	system-node-critical pod "ebs-csi-node-6xdcf" is pending

Validation Failed
W0811 01:59:09.639448    6492 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-east-1a	Master	c5.large	1	1	us-east-1a
nodes-us-east-1a	Node	t3.medium	4	4	us-east-1a

... skipping 548 lines ...
evicting pod kube-system/calico-kube-controllers-75f4df896c-7jbvt
evicting pod kube-system/dns-controller-6684cc95dc-2xkcx
I0811 02:03:34.907084    6606 instancegroups.go:660] Waiting for 5s for pods to stabilize after draining.
I0811 02:03:39.907979    6606 instancegroups.go:591] Stopping instance "i-0262c524a6b933090", node "i-0262c524a6b933090", in group "master-us-east-1a.masters.e2e-ed4da97961-6b857.test-cncf-aws.k8s.io" (this may take a while).
I0811 02:03:40.107145    6606 instancegroups.go:436] waiting for 15s after terminating instance
I0811 02:03:55.112836    6606 instancegroups.go:470] Validating the cluster.
I0811 02:03:55.200073    6606 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-ed4da97961-6b857.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 54.234.54.180:443: connect: connection refused.
I0811 02:04:55.245954    6606 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-ed4da97961-6b857.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 54.234.54.180:443: i/o timeout.
I0811 02:05:55.288162    6606 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-ed4da97961-6b857.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 54.234.54.180:443: i/o timeout.
I0811 02:06:55.344184    6606 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-ed4da97961-6b857.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 54.234.54.180:443: i/o timeout.
I0811 02:07:55.387119    6606 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-ed4da97961-6b857.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 54.234.54.180:443: i/o timeout.
I0811 02:08:41.910041    6606 instancegroups.go:526] Cluster did not pass validation, will retry in "30s": node "i-0207f564b370d80bd" of role "node" is not ready, node "i-023e0e791e0e7b0f4" of role "node" is not ready, node "i-03890ae20314ee8f5" of role "node" is not ready, node "i-0f25594626efe0556" of role "node" is not ready, system-node-critical pod "calico-node-khkjd" is not ready (calico-node), system-cluster-critical pod "ebs-csi-controller-5b4bd8874c-ggr2d" is not ready (ebs-plugin), system-cluster-critical pod "etcd-manager-events-i-01bb4d7889ed9bfc5" is pending, system-cluster-critical pod "kube-controller-manager-i-01bb4d7889ed9bfc5" is pending, master "i-01bb4d7889ed9bfc5" is missing kube-controller-manager pod.
I0811 02:09:13.149569    6606 instancegroups.go:526] Cluster did not pass validation, will retry in "30s": system-cluster-critical pod "calico-kube-controllers-75f4df896c-6q8g5" is not ready (calico-kube-controllers), system-cluster-critical pod "ebs-csi-controller-5b4bd8874c-wfz6n" is pending.
I0811 02:09:44.356625    6606 instancegroups.go:526] Cluster did not pass validation, will retry in "30s": system-cluster-critical pod "calico-kube-controllers-75f4df896c-6q8g5" is not ready (calico-kube-controllers).
I0811 02:10:15.476007    6606 instancegroups.go:526] Cluster did not pass validation, will retry in "30s": system-cluster-critical pod "calico-kube-controllers-75f4df896c-6q8g5" is not ready (calico-kube-controllers).
I0811 02:10:46.566736    6606 instancegroups.go:526] Cluster did not pass validation, will retry in "30s": system-cluster-critical pod "calico-kube-controllers-75f4df896c-6q8g5" is not ready (calico-kube-controllers).
I0811 02:11:17.825597    6606 instancegroups.go:506] Cluster validated; revalidating in 10s to make sure it does not flap.
... skipping 12 lines ...
I0811 02:14:02.827973    6606 instancegroups.go:503] Cluster validated.
I0811 02:14:02.828041    6606 instancegroups.go:400] Draining the node: "i-0207f564b370d80bd".
WARNING: ignoring DaemonSet-managed Pods: kube-system/calico-node-snjsp, kube-system/ebs-csi-node-tssv9
evicting pod kube-system/coredns-autoscaler-f85cf5c-hpstv
evicting pod kube-system/coredns-5c44b6cf7d-dmfp6
evicting pod kube-system/coredns-5c44b6cf7d-k2cpt
error when evicting pods/"coredns-5c44b6cf7d-dmfp6" -n "kube-system" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
evicting pod kube-system/coredns-5c44b6cf7d-dmfp6
I0811 02:14:14.829421    6606 instancegroups.go:660] Waiting for 5s for pods to stabilize after draining.
I0811 02:14:19.829551    6606 instancegroups.go:591] Stopping instance "i-0207f564b370d80bd", node "i-0207f564b370d80bd", in group "nodes-us-east-1a.e2e-ed4da97961-6b857.test-cncf-aws.k8s.io" (this may take a while).
I0811 02:14:20.027441    6606 instancegroups.go:436] waiting for 15s after terminating instance
I0811 02:14:35.027995    6606 instancegroups.go:470] Validating the cluster.
I0811 02:14:36.541982    6606 instancegroups.go:526] Cluster did not pass validation, will retry in "30s": InstanceGroup "nodes-us-east-1a" did not have enough nodes 3 vs 4.
... skipping 85 lines ...
I0811 02:27:02.181004    6654 app.go:64] RunDir for this run: "/home/prow/go/src/k8s.io/kops/_rundir/49f221cb-1918-11ed-b2a2-1215444f8a61"
I0811 02:27:02.185725    6654 app.go:128] ID for this run: "49f221cb-1918-11ed-b2a2-1215444f8a61"
I0811 02:27:02.185768    6654 local.go:42] ⚙️ /home/prow/go/bin/kubetest2-tester-kops --test-package-version=https://storage.googleapis.com/k8s-release-dev/ci/v1.26.0-alpha.0.12+f5956716e3a92f --parallel 25
I0811 02:27:02.205277    6672 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
I0811 02:27:02.212900    6672 kubectl.go:148] gsutil cp gs://kubernetes-release/release/https://storage.googleapis.com/k8s-release-dev/ci/v1.26.0-alpha.0.12+f5956716e3a92f/kubernetes-client-linux-amd64.tar.gz /root/.cache/kubernetes-client-linux-amd64.tar.gz
CommandException: No URLs matched: gs://kubernetes-release/release/https://storage.googleapis.com/k8s-release-dev/ci/v1.26.0-alpha.0.12+f5956716e3a92f/kubernetes-client-linux-amd64.tar.gz
F0811 02:27:03.953994    6672 tester.go:482] failed to run ginkgo tester: failed to get kubectl package from published releases: failed to download release tar kubernetes-client-linux-amd64.tar.gz for release https://storage.googleapis.com/k8s-release-dev/ci/v1.26.0-alpha.0.12+f5956716e3a92f: exit status 1
Error: exit status 255
+ kops-finish
+ kubetest2 kops -v=2 --cloud-provider=aws --cluster-name=e2e-ed4da97961-6b857.test-cncf-aws.k8s.io --kops-root=/home/prow/go/src/k8s.io/kops --admin-access= --env=KOPS_FEATURE_FLAGS=SpecOverrideFlag --kops-binary-path=/tmp/kops.CfrBkzusN --down
I0811 02:27:03.988189    6861 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
I0811 02:27:03.990559    6861 app.go:61] The files in RunDir shall not be part of Artifacts
I0811 02:27:03.990593    6861 app.go:62] pass rundir-in-artifacts flag True for RunDir to be part of Artifacts
I0811 02:27:03.990630    6861 app.go:64] RunDir for this run: "/home/prow/go/src/k8s.io/kops/_rundir/49f221cb-1918-11ed-b2a2-1215444f8a61"
... skipping 308 lines ...