PR hakman: Enable cross-subnet mode with Calico by default
Result	ABORTED
Tests	0 failed / 0 succeeded
Started	2021-06-21 05:24
Elapsed	41m52s
Revision	8211ab8f113f52d7c3eddaae0466763d55230a2e
Refs	11810

No Test Failures!


Error lines from build-log.txt

... skipping 487 lines ...
I0621 05:29:20.676968    4233 dumplogs.go:38] /home/prow/go/src/k8s.io/kops/bazel-bin/cmd/kops/linux-amd64/kops toolbox dump --name e2e-ebdc056de8-2a8bf.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /etc/aws-ssh/aws-ssh-private --ssh-user ubuntu
I0621 05:29:20.691434   11874 featureflag.go:167] FeatureFlag "SpecOverrideFlag"=true
I0621 05:29:20.691528   11874 featureflag.go:167] FeatureFlag "AlphaAllowGCE"=true
W0621 05:29:30.131801   11874 toolbox_dump.go:171] cannot load kubecfg settings for "e2e-ebdc056de8-2a8bf.test-cncf-aws.k8s.io": context "e2e-ebdc056de8-2a8bf.test-cncf-aws.k8s.io" does not exist
2021/06/21 05:29:30 dumping node not registered in kubernetes: 52.59.188.191
2021/06/21 05:29:30 Dumping node 52.59.188.191
2021/06/21 05:31:39 error dumping node 52.59.188.191: could not connect: unable to SSH to "52.59.188.191": dial tcp 52.59.188.191:22: connect: connection timed out
2021/06/21 05:31:39 dumping node not registered in kubernetes: 3.68.84.152
2021/06/21 05:31:39 Dumping node 3.68.84.152
2021/06/21 05:33:50 error dumping node 3.68.84.152: could not connect: unable to SSH to "3.68.84.152": dial tcp 3.68.84.152:22: connect: connection timed out
2021/06/21 05:33:50 dumping node not registered in kubernetes: 18.192.25.198
2021/06/21 05:33:50 Dumping node 18.192.25.198
2021/06/21 05:36:01 error dumping node 18.192.25.198: could not connect: unable to SSH to "18.192.25.198": dial tcp 18.192.25.198:22: connect: connection timed out
2021/06/21 05:36:01 dumping node not registered in kubernetes: 3.120.173.123
2021/06/21 05:36:01 Dumping node 3.120.173.123
2021/06/21 05:38:12 error dumping node 3.120.173.123: could not connect: unable to SSH to "3.120.173.123": dial tcp 3.120.173.123:22: connect: connection timed out
2021/06/21 05:38:12 dumping node not registered in kubernetes: 18.197.157.98
2021/06/21 05:38:12 Dumping node 18.197.157.98
2021/06/21 05:40:23 error dumping node 18.197.157.98: could not connect: unable to SSH to "18.197.157.98": dial tcp 18.197.157.98:22: connect: connection timed out
instances:
- name: i-02d9e2f9d26ba49d7
  publicAddresses:
  - 52.59.188.191
  roles:
  - node
... skipping 1042 lines ...
  zone: eu-central-1a
vpc:
  id: vpc-0f79c04c0a293fc77
I0621 05:40:23.656036    4233 dumplogs.go:70] /home/prow/go/src/k8s.io/kops/bazel-bin/cmd/kops/linux-amd64/kops get cluster --name e2e-ebdc056de8-2a8bf.test-cncf-aws.k8s.io -o yaml
I0621 05:40:24.279197    4233 dumplogs.go:70] /home/prow/go/src/k8s.io/kops/bazel-bin/cmd/kops/linux-amd64/kops get instancegroups --name e2e-ebdc056de8-2a8bf.test-cncf-aws.k8s.io -o yaml
I0621 05:40:25.177232    4233 dumplogs.go:89] kubectl cluster-info dump --all-namespaces -o yaml --output-directory /logs/artifacts/cluster-info
W0621 05:40:25.344160    4233 down.go:34] Dumping cluster logs at the start of Down() failed: exit status 1
I0621 05:40:25.344206    4233 down.go:48] /home/prow/go/src/k8s.io/kops/bazel-bin/cmd/kops/linux-amd64/kops delete cluster --name e2e-ebdc056de8-2a8bf.test-cncf-aws.k8s.io --yes
I0621 05:40:25.379292   11914 featureflag.go:167] FeatureFlag "SpecOverrideFlag"=true
I0621 05:40:25.379382   11914 featureflag.go:167] FeatureFlag "AlphaAllowGCE"=true
TYPE			NAME													ID
autoscaling-config	master-eu-central-1a.masters.e2e-ebdc056de8-2a8bf.test-cncf-aws.k8s.io					lt-034fbb487273a7d93
autoscaling-config	nodes-eu-central-1a.e2e-ebdc056de8-2a8bf.test-cncf-aws.k8s.io						lt-0e6be5bcf1bdf6af2
... skipping 752 lines ...
route-table:rtb-06bfc3d6c3bc4af2e	ok
vpc:vpc-0f79c04c0a293fc77	ok
dhcp-options:dopt-06af4a473e5dd881e	ok

Deleted cluster: "e2e-ebdc056de8-2a8bf.test-cncf-aws.k8s.io"
I0621 05:44:31.301095    4233 http.go:37] curl http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip
2021/06/21 05:44:31 failed to get external ip from metadata service: http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip returned 404
I0621 05:44:31.313184    4233 http.go:37] curl https://ip.jsb.workers.dev
I0621 05:44:31.404178    4233 up.go:144] /home/prow/go/src/k8s.io/kops/bazel-bin/cmd/kops/linux-amd64/kops create cluster --name e2e-ebdc056de8-2a8bf.test-cncf-aws.k8s.io --cloud aws --kubernetes-version https://storage.googleapis.com/kubernetes-release/release/v1.21.2 --ssh-public-key /etc/aws-ssh/aws-ssh-public --override cluster.spec.nodePortAccess=0.0.0.0/0 --yes --image=099720109477/ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-20210610 --channel=alpha --networking=weave --container-runtime=containerd --node-size=t3.large --admin-access 34.66.90.44/32 --master-count 1 --master-volume-size 48 --node-count 4 --node-volume-size 48 --zones eu-west-2a --master-size c5.large
I0621 05:44:31.420638   11924 featureflag.go:167] FeatureFlag "SpecOverrideFlag"=true
I0621 05:44:31.420748   11924 featureflag.go:167] FeatureFlag "AlphaAllowGCE"=true
I0621 05:44:31.470202   11924 create_cluster.go:748] Using SSH public key: /etc/aws-ssh/aws-ssh-public
I0621 05:44:32.038822   11924 new_cluster.go:1054]  Cloud Provider ID = aws
... skipping 41 lines ...

I0621 05:44:57.425069    4233 up.go:181] /home/prow/go/src/k8s.io/kops/bazel-bin/cmd/kops/linux-amd64/kops validate cluster --name e2e-ebdc056de8-2a8bf.test-cncf-aws.k8s.io --count 10 --wait 15m0s
I0621 05:44:57.441928   11943 featureflag.go:167] FeatureFlag "SpecOverrideFlag"=true
I0621 05:44:57.442043   11943 featureflag.go:167] FeatureFlag "AlphaAllowGCE"=true
Validating cluster e2e-ebdc056de8-2a8bf.test-cncf-aws.k8s.io

W0621 05:44:58.689627   11943 validate_cluster.go:173] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-ebdc056de8-2a8bf.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.large	4	4	eu-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0621 05:45:08.720776   11943 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.large	4	4	eu-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0621 05:45:18.757331   11943 validate_cluster.go:221] (will retry): cluster not yet healthy
W0621 05:45:28.787900   11943 validate_cluster.go:173] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-ebdc056de8-2a8bf.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
W0621 05:45:38.837075   11943 validate_cluster.go:173] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-ebdc056de8-2a8bf.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
W0621 05:45:48.871658   11943 validate_cluster.go:173] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-ebdc056de8-2a8bf.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.large	4	4	eu-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0621 05:45:58.935479   11943 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.large	4	4	eu-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0621 05:46:08.967402   11943 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.large	4	4	eu-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0621 05:46:19.000310   11943 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.large	4	4	eu-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0621 05:46:29.031590   11943 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.large	4	4	eu-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0621 05:46:39.064578   11943 validate_cluster.go:221] (will retry): cluster not yet healthy
W0621 05:46:49.084426   11943 validate_cluster.go:173] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-ebdc056de8-2a8bf.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.large	4	4	eu-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0621 05:46:59.114625   11943 validate_cluster.go:221] (will retry): cluster not yet healthy
W0621 05:47:09.135549   11943 validate_cluster.go:173] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-ebdc056de8-2a8bf.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.large	4	4	eu-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0621 05:47:19.174198   11943 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.large	4	4	eu-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0621 05:47:29.209829   11943 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.large	4	4	eu-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0621 05:47:39.262591   11943 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.large	4	4	eu-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0621 05:47:49.301845   11943 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.large	4	4	eu-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0621 05:47:59.347291   11943 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.large	4	4	eu-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0621 05:48:09.377226   11943 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.large	4	4	eu-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0621 05:48:19.423974   11943 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.large	4	4	eu-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0621 05:48:29.454798   11943 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.large	4	4	eu-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0621 05:48:39.486135   11943 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.large	4	4	eu-west-2a

... skipping 8 lines ...
Machine	i-0c2dc15c4b843a2f8				machine "i-0c2dc15c4b843a2f8" has not yet joined cluster
Machine	i-0f26edc466841e627				machine "i-0f26edc466841e627" has not yet joined cluster
Node	ip-172-20-55-136.eu-west-2.compute.internal	master "ip-172-20-55-136.eu-west-2.compute.internal" is missing kube-scheduler pod
Pod	kube-system/coredns-autoscaler-6f594f4c58-889qw	system-cluster-critical pod "coredns-autoscaler-6f594f4c58-889qw" is pending
Pod	kube-system/coredns-f45c4bf76-qmhlr		system-cluster-critical pod "coredns-f45c4bf76-qmhlr" is pending

Validation Failed
W0621 05:48:52.131356   11943 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.large	4	4	eu-west-2a

... skipping 8 lines ...
Machine	i-0c2dc15c4b843a2f8				machine "i-0c2dc15c4b843a2f8" has not yet joined cluster
Machine	i-0f26edc466841e627				machine "i-0f26edc466841e627" has not yet joined cluster
Node	ip-172-20-55-136.eu-west-2.compute.internal	master "ip-172-20-55-136.eu-west-2.compute.internal" is missing kube-scheduler pod
Pod	kube-system/coredns-autoscaler-6f594f4c58-889qw	system-cluster-critical pod "coredns-autoscaler-6f594f4c58-889qw" is pending
Pod	kube-system/coredns-f45c4bf76-qmhlr		system-cluster-critical pod "coredns-f45c4bf76-qmhlr" is pending

Validation Failed
W0621 05:49:03.912498   11943 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.large	4	4	eu-west-2a

... skipping 14 lines ...
Pod	kube-system/coredns-f45c4bf76-qmhlr					system-cluster-critical pod "coredns-f45c4bf76-qmhlr" is pending
Pod	kube-system/kube-scheduler-ip-172-20-55-136.eu-west-2.compute.internal	system-cluster-critical pod "kube-scheduler-ip-172-20-55-136.eu-west-2.compute.internal" is pending
Pod	kube-system/weave-net-7gbnh						system-node-critical pod "weave-net-7gbnh" is pending
Pod	kube-system/weave-net-hdvnl						system-node-critical pod "weave-net-hdvnl" is pending
Pod	kube-system/weave-net-hrnxb						system-node-critical pod "weave-net-hrnxb" is pending

Validation Failed
W0621 05:49:15.721470   11943 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.large	4	4	eu-west-2a

... skipping 8 lines ...
VALIDATION ERRORS
KIND	NAME						MESSAGE
Node	ip-172-20-37-202.eu-west-2.compute.internal	node "ip-172-20-37-202.eu-west-2.compute.internal" of role "node" is not ready
Pod	kube-system/coredns-f45c4bf76-9655t		system-cluster-critical pod "coredns-f45c4bf76-9655t" is pending
Pod	kube-system/weave-net-9hjps			system-node-critical pod "weave-net-9hjps" is pending

Validation Failed
W0621 05:49:27.492120   11943 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.large	4	4	eu-west-2a

... skipping 36 lines ...
ip-172-20-55-136.eu-west-2.compute.internal	master	True

VALIDATION ERRORS
KIND	NAME									MESSAGE
Pod	kube-system/kube-proxy-ip-172-20-45-203.eu-west-2.compute.internal	system-node-critical pod "kube-proxy-ip-172-20-45-203.eu-west-2.compute.internal" is pending

Validation Failed
W0621 05:50:03.020962   11943 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.large	4	4	eu-west-2a

... skipping 7 lines ...

VALIDATION ERRORS
KIND	NAME									MESSAGE
Pod	kube-system/kube-proxy-ip-172-20-40-246.eu-west-2.compute.internal	system-node-critical pod "kube-proxy-ip-172-20-40-246.eu-west-2.compute.internal" is pending
Pod	kube-system/kube-proxy-ip-172-20-47-88.eu-west-2.compute.internal	system-node-critical pod "kube-proxy-ip-172-20-47-88.eu-west-2.compute.internal" is pending

Validation Failed
W0621 05:50:14.801998   11943 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.large	4	4	eu-west-2a

... skipping 6 lines ...
ip-172-20-55-136.eu-west-2.compute.internal	master	True

VALIDATION ERRORS
KIND	NAME									MESSAGE
Pod	kube-system/kube-proxy-ip-172-20-37-202.eu-west-2.compute.internal	system-node-critical pod "kube-proxy-ip-172-20-37-202.eu-west-2.compute.internal" is pending

Validation Failed
W0621 05:50:26.672831   11943 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.large	4	4	eu-west-2a

... skipping 79 lines ...
ip-172-20-45-203.eu-west-2.compute.internal	node	True
ip-172-20-47-88.eu-west-2.compute.internal	node	True
ip-172-20-55-136.eu-west-2.compute.internal	master	True

Your cluster e2e-ebdc056de8-2a8bf.test-cncf-aws.k8s.io is ready
I0621 05:51:38.156965   11943 validate_cluster.go:209] (will retry): cluster passed validation 6 consecutive times
{"component":"entrypoint","file":"prow/entrypoint/run.go:169","func":"k8s.io/test-infra/prow/entrypoint.Options.ExecuteProcess","level":"error","msg":"Entrypoint received interrupt: terminated","severity":"error","time":"2021-06-21T05:51:42Z"}