Result: FAILURE
Tests: 0 failed / 0 succeeded
Started: 2021-10-16 14:25
Elapsed: 47m41s
Revision: master

No Test Failures!


Error lines from build-log.txt

... skipping 175 lines ...
I1016 14:26:14.071922    4979 dumplogs.go:40] /tmp/kops.GHW6xHmgu toolbox dump --name e2e-89d41a7532-e6156.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /etc/aws-ssh/aws-ssh-private --ssh-user ubuntu
I1016 14:26:14.096610    4990 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
I1016 14:26:14.096704    4990 featureflag.go:165] FeatureFlag "AlphaAllowGCE"=true
I1016 14:26:14.096709    4990 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true

Cluster.kops.k8s.io "e2e-89d41a7532-e6156.test-cncf-aws.k8s.io" not found
W1016 14:26:14.614193    4979 down.go:34] Dumping cluster logs at the start of Down() failed: exit status 1
I1016 14:26:14.614292    4979 down.go:48] /tmp/kops.GHW6xHmgu delete cluster --name e2e-89d41a7532-e6156.test-cncf-aws.k8s.io --yes
I1016 14:26:14.639355    5001 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
I1016 14:26:14.640395    5001 featureflag.go:165] FeatureFlag "AlphaAllowGCE"=true
I1016 14:26:14.640410    5001 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true

error reading cluster configuration: Cluster.kops.k8s.io "e2e-89d41a7532-e6156.test-cncf-aws.k8s.io" not found
I1016 14:26:15.151804    4979 http.go:37] curl http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip
2021/10/16 14:26:15 failed to get external ip from metadata service: http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip returned 404
I1016 14:26:15.160935    4979 http.go:37] curl https://ip.jsb.workers.dev
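
Aside: the runner first asks the GCE metadata service for an external IP; the 404 above just means this environment has no GCE external address, so it falls back to the public IP-echo endpoint, and that address appears to feed the --admin-access CIDR in the create-cluster call below. A rough manual equivalent, using only the two endpoints already shown in this log (the Metadata-Flavor header is the standard GCE requirement, not something this job printed):

  curl -H "Metadata-Flavor: Google" http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip
  curl https://ip.jsb.workers.dev
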
I1016 14:26:15.286518    4979 up.go:144] /tmp/kops.GHW6xHmgu create cluster --name e2e-89d41a7532-e6156.test-cncf-aws.k8s.io --cloud aws --kubernetes-version v1.21.0 --ssh-public-key /etc/aws-ssh/aws-ssh-public --override cluster.spec.nodePortAccess=0.0.0.0/0 --yes --networking calico --admin-access 104.154.21.16/32 --master-count 1 --master-volume-size 48 --node-count 4 --node-volume-size 48 --zones eu-central-1a --master-size c5.large
I1016 14:26:15.307983    5012 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
I1016 14:26:15.308089    5012 featureflag.go:165] FeatureFlag "AlphaAllowGCE"=true
I1016 14:26:15.308095    5012 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
I1016 14:26:15.356700    5012 create_cluster.go:728] Using SSH public key: /etc/aws-ssh/aws-ssh-public
... skipping 44 lines ...
I1016 14:26:42.427251    4979 up.go:181] /tmp/kops.GHW6xHmgu validate cluster --name e2e-89d41a7532-e6156.test-cncf-aws.k8s.io --count 10 --wait 15m0s
I1016 14:26:42.443559    5033 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
I1016 14:26:42.443664    5033 featureflag.go:165] FeatureFlag "AlphaAllowGCE"=true
I1016 14:26:42.443669    5033 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
Validating cluster e2e-89d41a7532-e6156.test-cncf-aws.k8s.io

W1016 14:26:43.731711    5033 validate_cluster.go:173] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-central-1a	Master	c5.large	1	1	eu-central-1a
nodes-eu-central-1a	Node	t3.medium	4	4	eu-central-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1016 14:26:53.764143    5033 validate_cluster.go:221] (will retry): cluster not yet healthy
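
Aside: at this point the validator cannot resolve api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io because dns-controller has not yet replaced the 203.0.113.123 placeholder record. A minimal sketch of the checks the message above suggests, assuming the control-plane node eventually becomes reachable (only the DNS name, SSH key path, and SSH user are taken from this run; the placeholder IP and the systemd-unit assumption for protokube are not):

  dig +short api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io
  kubectl -n kube-system logs deployment/dns-controller --tail=50
  ssh -i /etc/aws-ssh/aws-ssh-private ubuntu@<master-public-ip> 'sudo journalctl -u protokube --no-pager | tail -n 50'   # assumes protokube runs as a systemd unit on this image
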
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-central-1a	Master	c5.large	1	1	eu-central-1a
nodes-eu-central-1a	Node	t3.medium	4	4	eu-central-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1016 14:27:03.796837    5033 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-central-1a	Master	c5.large	1	1	eu-central-1a
nodes-eu-central-1a	Node	t3.medium	4	4	eu-central-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1016 14:27:13.841159    5033 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-central-1a	Master	c5.large	1	1	eu-central-1a
nodes-eu-central-1a	Node	t3.medium	4	4	eu-central-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1016 14:27:23.875012    5033 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-central-1a	Master	c5.large	1	1	eu-central-1a
nodes-eu-central-1a	Node	t3.medium	4	4	eu-central-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1016 14:27:33.910924    5033 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-central-1a	Master	c5.large	1	1	eu-central-1a
nodes-eu-central-1a	Node	t3.medium	4	4	eu-central-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1016 14:27:43.947156    5033 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-central-1a	Master	c5.large	1	1	eu-central-1a
nodes-eu-central-1a	Node	t3.medium	4	4	eu-central-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1016 14:27:53.980057    5033 validate_cluster.go:221] (will retry): cluster not yet healthy
W1016 14:28:04.013416    5033 validate_cluster.go:173] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-central-1a	Master	c5.large	1	1	eu-central-1a
nodes-eu-central-1a	Node	t3.medium	4	4	eu-central-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1016 14:28:14.052817    5033 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-central-1a	Master	c5.large	1	1	eu-central-1a
nodes-eu-central-1a	Node	t3.medium	4	4	eu-central-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1016 14:28:24.089896    5033 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-central-1a	Master	c5.large	1	1	eu-central-1a
nodes-eu-central-1a	Node	t3.medium	4	4	eu-central-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1016 14:28:34.123515    5033 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-central-1a	Master	c5.large	1	1	eu-central-1a
nodes-eu-central-1a	Node	t3.medium	4	4	eu-central-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1016 14:28:44.154759    5033 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-central-1a	Master	c5.large	1	1	eu-central-1a
nodes-eu-central-1a	Node	t3.medium	4	4	eu-central-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1016 14:28:54.188429    5033 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-central-1a	Master	c5.large	1	1	eu-central-1a
nodes-eu-central-1a	Node	t3.medium	4	4	eu-central-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1016 14:29:04.225939    5033 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-central-1a	Master	c5.large	1	1	eu-central-1a
nodes-eu-central-1a	Node	t3.medium	4	4	eu-central-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1016 14:29:14.263477    5033 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-central-1a	Master	c5.large	1	1	eu-central-1a
nodes-eu-central-1a	Node	t3.medium	4	4	eu-central-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1016 14:29:24.293114    5033 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-central-1a	Master	c5.large	1	1	eu-central-1a
nodes-eu-central-1a	Node	t3.medium	4	4	eu-central-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1016 14:29:34.332073    5033 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-central-1a	Master	c5.large	1	1	eu-central-1a
nodes-eu-central-1a	Node	t3.medium	4	4	eu-central-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1016 14:29:44.398984    5033 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-central-1a	Master	c5.large	1	1	eu-central-1a
nodes-eu-central-1a	Node	t3.medium	4	4	eu-central-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1016 14:29:54.429194    5033 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-central-1a	Master	c5.large	1	1	eu-central-1a
nodes-eu-central-1a	Node	t3.medium	4	4	eu-central-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1016 14:30:04.461238    5033 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-central-1a	Master	c5.large	1	1	eu-central-1a
nodes-eu-central-1a	Node	t3.medium	4	4	eu-central-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1016 14:30:14.495260    5033 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-central-1a	Master	c5.large	1	1	eu-central-1a
nodes-eu-central-1a	Node	t3.medium	4	4	eu-central-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1016 14:30:24.527240    5033 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-central-1a	Master	c5.large	1	1	eu-central-1a
nodes-eu-central-1a	Node	t3.medium	4	4	eu-central-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1016 14:30:34.562551    5033 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-central-1a	Master	c5.large	1	1	eu-central-1a
nodes-eu-central-1a	Node	t3.medium	4	4	eu-central-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1016 14:30:44.597490    5033 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-central-1a	Master	c5.large	1	1	eu-central-1a
nodes-eu-central-1a	Node	t3.medium	4	4	eu-central-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1016 14:30:54.629249    5033 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-central-1a	Master	c5.large	1	1	eu-central-1a
nodes-eu-central-1a	Node	t3.medium	4	4	eu-central-1a

... skipping 8 lines ...
Machine	i-09ced321282332d11					machine "i-09ced321282332d11" has not yet joined cluster
Machine	i-0c0b8515040c5b80e					machine "i-0c0b8515040c5b80e" has not yet joined cluster
Pod	kube-system/calico-kube-controllers-6b59cd85f8-tg29k	system-cluster-critical pod "calico-kube-controllers-6b59cd85f8-tg29k" is pending
Pod	kube-system/coredns-5dc785954d-7npc7			system-cluster-critical pod "coredns-5dc785954d-7npc7" is pending
Pod	kube-system/coredns-autoscaler-84d4cfd89c-7zqnv		system-cluster-critical pod "coredns-autoscaler-84d4cfd89c-7zqnv" is pending

Validation Failed
W1016 14:31:07.643903    5033 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-central-1a	Master	c5.large	1	1	eu-central-1a
nodes-eu-central-1a	Node	t3.medium	4	4	eu-central-1a

... skipping 12 lines ...
Pod	kube-system/calico-node-kf8d2						system-node-critical pod "calico-node-kf8d2" is pending
Pod	kube-system/calico-node-xl6jd						system-node-critical pod "calico-node-xl6jd" is pending
Pod	kube-system/coredns-5dc785954d-7npc7					system-cluster-critical pod "coredns-5dc785954d-7npc7" is pending
Pod	kube-system/coredns-autoscaler-84d4cfd89c-7zqnv				system-cluster-critical pod "coredns-autoscaler-84d4cfd89c-7zqnv" is pending
Pod	kube-system/kube-proxy-ip-172-20-32-25.eu-central-1.compute.internal	system-node-critical pod "kube-proxy-ip-172-20-32-25.eu-central-1.compute.internal" is pending

Validation Failed
W1016 14:31:19.683043    5033 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-central-1a	Master	c5.large	1	1	eu-central-1a
nodes-eu-central-1a	Node	t3.medium	4	4	eu-central-1a

... skipping 16 lines ...
Pod	kube-system/calico-node-kf8d2						system-node-critical pod "calico-node-kf8d2" is pending
Pod	kube-system/calico-node-xl6jd						system-node-critical pod "calico-node-xl6jd" is pending
Pod	kube-system/coredns-5dc785954d-7npc7					system-cluster-critical pod "coredns-5dc785954d-7npc7" is pending
Pod	kube-system/coredns-autoscaler-84d4cfd89c-7zqnv				system-cluster-critical pod "coredns-autoscaler-84d4cfd89c-7zqnv" is pending
Pod	kube-system/kube-proxy-ip-172-20-60-224.eu-central-1.compute.internal	system-node-critical pod "kube-proxy-ip-172-20-60-224.eu-central-1.compute.internal" is pending

Validation Failed
W1016 14:31:31.675763    5033 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-central-1a	Master	c5.large	1	1	eu-central-1a
nodes-eu-central-1a	Node	t3.medium	4	4	eu-central-1a

... skipping 12 lines ...
Pod	kube-system/calico-node-k928h			system-node-critical pod "calico-node-k928h" is pending
Pod	kube-system/calico-node-kf8d2			system-node-critical pod "calico-node-kf8d2" is not ready (calico-node)
Pod	kube-system/calico-node-xl6jd			system-node-critical pod "calico-node-xl6jd" is not ready (calico-node)
Pod	kube-system/coredns-5dc785954d-7npc7		system-cluster-critical pod "coredns-5dc785954d-7npc7" is pending
Pod	kube-system/coredns-autoscaler-84d4cfd89c-7zqnv	system-cluster-critical pod "coredns-autoscaler-84d4cfd89c-7zqnv" is pending

Validation Failed
W1016 14:31:43.733215    5033 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-central-1a	Master	c5.large	1	1	eu-central-1a
nodes-eu-central-1a	Node	t3.medium	4	4	eu-central-1a

... skipping 6 lines ...
ip-172-20-63-239.eu-central-1.compute.internal	node	True

VALIDATION ERRORS
KIND	NAME				MESSAGE
Pod	kube-system/calico-node-k928h	system-node-critical pod "calico-node-k928h" is not ready (calico-node)

Validation Failed
W1016 14:31:55.731270    5033 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-central-1a	Master	c5.large	1	1	eu-central-1a
nodes-eu-central-1a	Node	t3.medium	4	4	eu-central-1a

... skipping 799 lines ...
  	                    	        for cmd in "${commands[@]}"; do
  	                    	...
  	                    	            continue
  	                    	          fi
  	                    	+         if ! validate-hash "${file}" "${hash}"; then
  	                    	-         if [[ -n "${hash}" ]] && ! validate-hash "${file}" "${hash}"; then
  	                    	            echo "== Hash validation of ${url} failed. Retrying. =="
  	                    	            rm -f "${file}"
  	                    	          else
  	                    	-           if [[ -n "${hash}" ]]; then
  	                    	+           echo "== Downloaded ${url} (SHA256 = ${hash}) =="
  	                    	-             echo "== Downloaded ${url} (SHA1 = ${hash}) =="
  	                    	-           else
... skipping 294 lines ...
  	                    	        for cmd in "${commands[@]}"; do
  	                    	...
  	                    	            continue
  	                    	          fi
  	                    	+         if ! validate-hash "${file}" "${hash}"; then
  	                    	-         if [[ -n "${hash}" ]] && ! validate-hash "${file}" "${hash}"; then
  	                    	            echo "== Hash validation of ${url} failed. Retrying. =="
  	                    	            rm -f "${file}"
  	                    	          else
  	                    	-           if [[ -n "${hash}" ]]; then
  	                    	+           echo "== Downloaded ${url} (SHA256 = ${hash}) =="
  	                    	-             echo "== Downloaded ${url} (SHA1 = ${hash}) =="
  	                    	-           else
... skipping 8838 lines ...
evicting pod kube-system/dns-controller-7cf9d66d6d-bgr5h
I1016 14:36:17.777467    5158 instancegroups.go:658] Waiting for 5s for pods to stabilize after draining.
I1016 14:36:22.777867    5158 instancegroups.go:417] deleting node "ip-172-20-61-207.eu-central-1.compute.internal" from kubernetes
I1016 14:36:22.896264    5158 instancegroups.go:591] Stopping instance "i-0294039e1b923175b", node "ip-172-20-61-207.eu-central-1.compute.internal", in group "master-eu-central-1a.masters.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io" (this may take a while).
I1016 14:36:23.110619    5158 instancegroups.go:435] waiting for 15s after terminating instance
I1016 14:36:38.112595    5158 instancegroups.go:470] Validating the cluster.
I1016 14:37:08.143818    5158 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 35.157.86.235:443: i/o timeout.
I1016 14:38:08.176810    5158 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 35.157.86.235:443: i/o timeout.
I1016 14:39:08.211973    5158 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 35.157.86.235:443: i/o timeout.
I1016 14:40:08.248296    5158 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 35.157.86.235:443: i/o timeout.
I1016 14:41:08.288023    5158 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 35.157.86.235:443: i/o timeout.
I1016 14:42:08.327170    5158 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 35.157.86.235:443: i/o timeout.
I1016 14:43:08.364825    5158 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 35.157.86.235:443: i/o timeout.
I1016 14:44:08.434170    5158 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 35.157.86.235:443: i/o timeout.
I1016 14:45:08.486698    5158 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 35.157.86.235:443: i/o timeout.
I1016 14:46:08.533254    5158 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 35.157.86.235:443: i/o timeout.
I1016 14:47:08.570938    5158 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 35.157.86.235:443: i/o timeout.
I1016 14:48:08.610623    5158 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 35.157.86.235:443: i/o timeout.
I1016 14:49:08.646596    5158 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 35.157.86.235:443: i/o timeout.
I1016 14:50:08.680089    5158 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 35.157.86.235:443: i/o timeout.
I1016 14:51:08.716510    5158 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 35.157.86.235:443: i/o timeout.
I1016 14:52:08.756599    5158 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 35.157.86.235:443: i/o timeout.
I1016 14:53:08.803179    5158 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 35.157.86.235:443: i/o timeout.
I1016 14:54:08.844068    5158 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 35.157.86.235:443: i/o timeout.
I1016 14:55:08.877637    5158 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 35.157.86.235:443: i/o timeout.
I1016 14:56:08.918310    5158 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 35.157.86.235:443: i/o timeout.
I1016 14:57:08.962266    5158 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 35.157.86.235:443: i/o timeout.
I1016 14:58:08.996456    5158 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 35.157.86.235:443: i/o timeout.
I1016 14:59:09.047815    5158 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 35.157.86.235:443: i/o timeout.
I1016 15:00:09.100424    5158 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 35.157.86.235:443: i/o timeout.
I1016 15:01:09.151020    5158 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 35.157.86.235:443: i/o timeout.
I1016 15:02:09.188459    5158 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 35.157.86.235:443: i/o timeout.
I1016 15:03:09.222384    5158 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 35.157.86.235:443: i/o timeout.
I1016 15:04:09.256747    5158 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 35.157.86.235:443: i/o timeout.
I1016 15:04:39.550450    5158 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": x509: certificate has expired or is not yet valid: current time 2021-10-16T15:04:39Z is after 2019-10-12T05:24:07Z.
I1016 15:05:09.866246    5158 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": x509: certificate has expired or is not yet valid: current time 2021-10-16T15:05:09Z is after 2019-10-12T05:24:07Z.
I1016 15:05:40.137290    5158 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": x509: certificate has expired or is not yet valid: current time 2021-10-16T15:05:40Z is after 2019-10-12T05:24:07Z.
I1016 15:06:10.419153    5158 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": x509: certificate has expired or is not yet valid: current time 2021-10-16T15:06:10Z is after 2019-10-12T05:24:07Z.
I1016 15:06:40.694545    5158 instancegroups.go:513] Cluster did not validate within deadline: error listing nodes: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": x509: certificate has expired or is not yet valid: current time 2021-10-16T15:06:40Z is after 2019-10-12T05:24:07Z.
E1016 15:06:40.694603    5158 instancegroups.go:475] Cluster did not validate within 30m0s
Error: master not healthy after update, stopping rolling-update: "error validating cluster after terminating instance: cluster did not validate within a duration of \"30m0s\""
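
Aside: what finally stops the rolling update is not the earlier i/o timeouts but the x509 errors above: from 15:04 onward the API endpoint is presenting a certificate that expired on 2019-10-12, so every node-listing call fails TLS validation. One hedged way to confirm which certificate the endpoint is actually serving (an assumption about how to inspect it; this job did not run it):

  echo | openssl s_client -connect api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io:443 2>/dev/null | openssl x509 -noout -subject -dates
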
+ kops-finish
+ kubetest2 kops -v=2 --cloud-provider=aws --cluster-name=e2e-89d41a7532-e6156.test-cncf-aws.k8s.io --kops-root=/home/prow/go/src/k8s.io/kops --admin-access= --env=KOPS_FEATURE_FLAGS=SpecOverrideFlag --kops-binary-path=/tmp/kops.wjJ7JvQjO --down
I1016 15:06:40.728921    5179 app.go:59] RunDir for this run: "/logs/artifacts/cb5e02cc-2e8c-11ec-8a05-6acfde499713"
I1016 15:06:40.729878    5179 app.go:90] ID for this run: "cb5e02cc-2e8c-11ec-8a05-6acfde499713"
I1016 15:06:40.729925    5179 dumplogs.go:40] /tmp/kops.wjJ7JvQjO toolbox dump --name e2e-89d41a7532-e6156.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /etc/aws-ssh/aws-ssh-private --ssh-user ubuntu
I1016 15:06:40.750671    5189 featureflag.go:166] FeatureFlag "SpecOverrideFlag"=true
I1016 15:06:40.751649    5189 featureflag.go:166] FeatureFlag "AlphaAllowGCE"=true
I1016 15:06:40.751681    5189 featureflag.go:166] FeatureFlag "SpecOverrideFlag"=true
W1016 15:06:50.564462    5189 toolbox_dump.go:168] error listing nodes in cluster: Get "https://api.e2e-89d41a7532-e6156.test-cncf-aws.k8s.io/api/v1/nodes": x509: certificate has expired or is not yet valid: current time 2021-10-16T15:06:50Z is after 2019-10-12T05:24:07Z
2021/10/16 15:06:50 dumping node not registered in kubernetes: 3.120.183.226
2021/10/16 15:06:50 Dumping node 3.120.183.226
2021/10/16 15:06:57 dumping node not registered in kubernetes: 18.197.94.211
2021/10/16 15:06:57 Dumping node 18.197.94.211
2021/10/16 15:07:02 dumping node not registered in kubernetes: 3.70.157.123
2021/10/16 15:07:02 Dumping node 3.70.157.123
... skipping 1503 lines ...