Result: FAILURE
Tests: 0 failed / 0 succeeded
Started: 2021-06-16 16:11
Elapsed: 41m23s
Revision: master

No Test Failures!


Error lines from build-log.txt

... skipping 125 lines ...
I0616 16:12:31.592221    4057 up.go:43] Cleaning up any leaked resources from previous cluster
I0616 16:12:31.592262    4057 dumplogs.go:38] /logs/artifacts/80b42128-cebd-11eb-8a45-f2d1012e3238/kops toolbox dump --name e2e-9c20857a72-da63e.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /etc/aws-ssh/aws-ssh-private --ssh-user ubuntu
I0616 16:12:31.609023    4075 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
I0616 16:12:31.609147    4075 featureflag.go:165] FeatureFlag "AlphaAllowGCE"=true

Cluster.kops.k8s.io "e2e-9c20857a72-da63e.test-cncf-aws.k8s.io" not found
W0616 16:12:32.122369    4057 down.go:34] Dumping cluster logs at the start of Down() failed: exit status 1
I0616 16:12:32.122432    4057 down.go:48] /logs/artifacts/80b42128-cebd-11eb-8a45-f2d1012e3238/kops delete cluster --name e2e-9c20857a72-da63e.test-cncf-aws.k8s.io --yes
I0616 16:12:32.135664    4085 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
I0616 16:12:32.136288    4085 featureflag.go:165] FeatureFlag "AlphaAllowGCE"=true

error reading cluster configuration: Cluster.kops.k8s.io "e2e-9c20857a72-da63e.test-cncf-aws.k8s.io" not found
I0616 16:12:32.667859    4057 http.go:37] curl http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip
2021/06/16 16:12:32 failed to get external ip from metadata service: http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip returned 404
I0616 16:12:32.681301    4057 http.go:37] curl https://ip.jsb.workers.dev
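The two curl lines above show the harness's external-IP lookup: it asks the GCE metadata server first, gets a 404 (this runner is not on GCE), and falls back to a public IP echo service. A minimal sketch of that preference order, with the real network calls stubbed out; the helper name `pick_ip` is hypothetical and not part of the harness:

```shell
#!/bin/sh
# Sketch of the two-step external-IP lookup seen in the log: prefer the
# metadata-server answer, fall back to the echo-service answer. In the real
# harness each argument would come from a curl call against the URLs above.
pick_ip() {
  # $1 = metadata-server result ("" if it returned 404), $2 = fallback result
  if [ -n "$1" ]; then
    echo "$1"
  else
    echo "$2"
  fi
}

pick_ip "" "34.70.163.3"         # metadata 404 -> fallback wins
pick_ip "10.0.0.5" "34.70.163.3" # metadata answer preferred
```

The resulting address is what `kops create cluster` receives as `--admin-access` on the next line of the log.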
I0616 16:12:32.952260    4057 up.go:144] /logs/artifacts/80b42128-cebd-11eb-8a45-f2d1012e3238/kops create cluster --name e2e-9c20857a72-da63e.test-cncf-aws.k8s.io --cloud aws --kubernetes-version https://storage.googleapis.com/kubernetes-release/release/v1.21.1 --ssh-public-key /etc/aws-ssh/aws-ssh-public --override cluster.spec.nodePortAccess=0.0.0.0/0 --yes --image=099720109477/ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-20210610 --channel=alpha --networking=flannel --container-runtime=docker --admin-access 34.70.163.3/32 --master-count 1 --master-volume-size 48 --node-count 4 --node-volume-size 48 --zones eu-west-1a --master-size c5.large
I0616 16:12:32.966095    4095 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
I0616 16:12:32.966185    4095 featureflag.go:165] FeatureFlag "AlphaAllowGCE"=true
I0616 16:12:33.015718    4095 create_cluster.go:728] Using SSH public key: /etc/aws-ssh/aws-ssh-public
I0616 16:12:33.509526    4095 new_cluster.go:1011]  Cloud Provider ID = aws
... skipping 42 lines ...

I0616 16:13:00.630677    4057 up.go:181] /logs/artifacts/80b42128-cebd-11eb-8a45-f2d1012e3238/kops validate cluster --name e2e-9c20857a72-da63e.test-cncf-aws.k8s.io --count 10 --wait 15m0s
I0616 16:13:00.646621    4115 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
I0616 16:13:00.647001    4115 featureflag.go:165] FeatureFlag "AlphaAllowGCE"=true
Validating cluster e2e-9c20857a72-da63e.test-cncf-aws.k8s.io

W0616 16:13:02.141070    4115 validate_cluster.go:173] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-1a	Master	c5.large	1	1	eu-west-1a
nodes-eu-west-1a	Node	t3.medium	4	4	eu-west-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.
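As the message explains, validation fails while the API DNS record still resolves to the kops placeholder address. A small diagnostic sketch (not taken from the log) of that check; in a live cluster the input would be the output of `dig +short api.<cluster-name>`:

```shell
#!/bin/sh
# 203.0.113.123 is the placeholder the kops message above names (a TEST-NET-3
# address). dns-controller replaces it with a real master IP once the control
# plane is up.
PLACEHOLDER="203.0.113.123"

is_placeholder() {
  if [ "$1" = "$PLACEHOLDER" ]; then
    echo "still placeholder"
  else
    echo "record updated"
  fi
}

is_placeholder "203.0.113.123" # -> still placeholder
is_placeholder "34.243.197.33" # -> record updated
```

In this run the record stays on the placeholder for roughly three minutes of retries before the master comes up.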

Validation Failed
W0616 16:13:12.181716    4115 validate_cluster.go:221] (will retry): cluster not yet healthy
... skipping 19 identical validation-retry blocks (same INSTANCE GROUPS table and "dns	apiserver	Validation Failed" error, retried every ~10s through 16:16:28) ...
W0616 16:16:38.995956    4115 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-1a	Master	c5.large	1	1	eu-west-1a
nodes-eu-west-1a	Node	t3.medium	4	4	eu-west-1a

... skipping 8 lines ...
Machine	i-0dad3db7ab8fffc94				machine "i-0dad3db7ab8fffc94" has not yet joined cluster
Machine	i-0fc5e8c67e9c9c510				machine "i-0fc5e8c67e9c9c510" has not yet joined cluster
Node	ip-172-20-37-218.eu-west-1.compute.internal	master "ip-172-20-37-218.eu-west-1.compute.internal" is missing kube-scheduler pod
Pod	kube-system/coredns-autoscaler-6f594f4c58-cds4r	system-cluster-critical pod "coredns-autoscaler-6f594f4c58-cds4r" is pending
Pod	kube-system/coredns-f45c4bf76-45v5f		system-cluster-critical pod "coredns-f45c4bf76-45v5f" is pending

Validation Failed
W0616 16:16:51.819461    4115 validate_cluster.go:221] (will retry): cluster not yet healthy
... skipping 1 identical validation-retry block (same machine/pod errors as above) ...
W0616 16:17:03.697855    4115 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-1a	Master	c5.large	1	1	eu-west-1a
nodes-eu-west-1a	Node	t3.medium	4	4	eu-west-1a

... skipping 7 lines ...
Machine	i-0af2940a64fcdbebc				machine "i-0af2940a64fcdbebc" has not yet joined cluster
Machine	i-0dad3db7ab8fffc94				machine "i-0dad3db7ab8fffc94" has not yet joined cluster
Machine	i-0fc5e8c67e9c9c510				machine "i-0fc5e8c67e9c9c510" has not yet joined cluster
Pod	kube-system/coredns-autoscaler-6f594f4c58-cds4r	system-cluster-critical pod "coredns-autoscaler-6f594f4c58-cds4r" is pending
Pod	kube-system/coredns-f45c4bf76-45v5f		system-cluster-critical pod "coredns-f45c4bf76-45v5f" is pending

Validation Failed
W0616 16:17:15.706301    4115 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-1a	Master	c5.large	1	1	eu-west-1a
nodes-eu-west-1a	Node	t3.medium	4	4	eu-west-1a

... skipping 13 lines ...
Node	ip-172-20-62-139.eu-west-1.compute.internal	node "ip-172-20-62-139.eu-west-1.compute.internal" of role "node" is not ready
Pod	kube-system/coredns-autoscaler-6f594f4c58-cds4r	system-cluster-critical pod "coredns-autoscaler-6f594f4c58-cds4r" is pending
Pod	kube-system/coredns-f45c4bf76-45v5f		system-cluster-critical pod "coredns-f45c4bf76-45v5f" is pending
Pod	kube-system/kube-flannel-ds-mchp6		system-node-critical pod "kube-flannel-ds-mchp6" is pending
Pod	kube-system/kube-flannel-ds-xktxj		system-node-critical pod "kube-flannel-ds-xktxj" is pending

Validation Failed
W0616 16:17:27.601475    4115 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-1a	Master	c5.large	1	1	eu-west-1a
nodes-eu-west-1a	Node	t3.medium	4	4	eu-west-1a

... skipping 37 lines ...

VALIDATION ERRORS
KIND	NAME									MESSAGE
Pod	kube-system/kube-proxy-ip-172-20-52-203.eu-west-1.compute.internal	system-node-critical pod "kube-proxy-ip-172-20-52-203.eu-west-1.compute.internal" is pending
Pod	kube-system/kube-proxy-ip-172-20-58-250.eu-west-1.compute.internal	system-node-critical pod "kube-proxy-ip-172-20-58-250.eu-west-1.compute.internal" is pending

Validation Failed
W0616 16:18:03.520386    4115 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-1a	Master	c5.large	1	1	eu-west-1a
nodes-eu-west-1a	Node	t3.medium	4	4	eu-west-1a

... skipping 7 lines ...

VALIDATION ERRORS
KIND	NAME									MESSAGE
Pod	kube-system/kube-proxy-ip-172-20-57-162.eu-west-1.compute.internal	system-node-critical pod "kube-proxy-ip-172-20-57-162.eu-west-1.compute.internal" is pending
Pod	kube-system/kube-proxy-ip-172-20-62-139.eu-west-1.compute.internal	system-node-critical pod "kube-proxy-ip-172-20-62-139.eu-west-1.compute.internal" is pending

Validation Failed
W0616 16:18:15.475860    4115 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-1a	Master	c5.large	1	1	eu-west-1a
nodes-eu-west-1a	Node	t3.medium	4	4	eu-west-1a

... skipping 774 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: blockfs]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 190 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 16 16:20:47.924: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8565" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","total":-1,"completed":1,"skipped":0,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 16 16:20:48.291: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 39 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  When creating a container with runAsNonRoot
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:104
    should not run with an explicit root user ID [LinuxOnly]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:139
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should not run with an explicit root user ID [LinuxOnly]","total":-1,"completed":1,"skipped":5,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 7 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241
[It] should check if cluster-info dump succeeds
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1078
STEP: running cluster-info dump
Jun 16 16:20:44.792: INFO: Running '/tmp/kubectl3482264644/kubectl --server=https://api.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-8531 cluster-info dump'
Jun 16 16:20:50.300: INFO: stderr: ""
Jun 16 16:20:50.301: INFO: stdout: "{\n    \"kind\": \"NodeList\",\n    \"apiVersion\": \"v1\",\n    \"metadata\": {\n        \"resourceVersion\": \"1418\"\n    },\n    \"items\": [\n        {\n            \"metadata\": {\n                \"name\": \"ip-172-20-37-218.eu-west-1.compute.internal\",\n                \"uid\": \"0a98987f-4d9c-452e-a692-5a74200a8951\",\n                \"resourceVersion\": \"552\",\n                \"creationTimestamp\": \"2021-06-16T16:15:26Z\",\n                \"labels\": {\n                    \"beta.kubernetes.io/arch\": \"amd64\",\n                    \"beta.kubernetes.io/instance-type\": \"c5.large\",\n                    \"beta.kubernetes.io/os\": \"linux\",\n                    \"failure-domain.beta.kubernetes.io/region\": \"eu-west-1\",\n                    \"failure-domain.beta.kubernetes.io/zone\": \"eu-west-1a\",\n                    \"kops.k8s.io/instancegroup\": \"master-eu-west-1a\",\n                    \"kops.k8s.io/kops-controller-pki\": \"\",\n                    \"kubernetes.io/arch\": \"amd64\",\n                    \"kubernetes.io/hostname\": \"ip-172-20-37-218.eu-west-1.compute.internal\",\n                    \"kubernetes.io/os\": \"linux\",\n                    \"kubernetes.io/role\": \"master\",\n                    \"node-role.kubernetes.io/control-plane\": \"\",\n                    \"node-role.kubernetes.io/master\": \"\",\n                    \"node.kubernetes.io/exclude-from-external-load-balancers\": \"\",\n                    \"node.kubernetes.io/instance-type\": \"c5.large\",\n                    \"topology.kubernetes.io/region\": \"eu-west-1\",\n                    \"topology.kubernetes.io/zone\": \"eu-west-1a\"\n                },\n                \"annotations\": {\n                    \"flannel.alpha.coreos.com/backend-data\": \"{\\\"VtepMAC\\\":\\\"a6:76:49:29:1a:cb\\\"}\",\n                    \"flannel.alpha.coreos.com/backend-type\": \"vxlan\",\n                    
\"flannel.alpha.coreos.com/kube-subnet-manager\": \"true\",\n                    \"flannel.alpha.coreos.com/public-ip\": \"172.20.37.218\",\n                    \"node.alpha.kubernetes.io/ttl\": \"0\",\n                    \"volumes.kubernetes.io/controller-managed-attach-detach\": \"true\"\n                }\n            },\n            \"spec\": {\n                \"podCIDR\": \"100.96.0.0/24\",\n                \"podCIDRs\": [\n                    \"100.96.0.0/24\"\n                ],\n                \"providerID\": \"aws:///eu-west-1a/i-0a4ab1b5032c4d5e7\",\n                \"taints\": [\n                    {\n                        \"key\": \"node-role.kubernetes.io/master\",\n                        \"effect\": \"NoSchedule\"\n                    }\n                ]\n            },\n            \"status\": {\n                \"capacity\": {\n                    \"attachable-volumes-aws-ebs\": \"25\",\n                    \"cpu\": \"2\",\n                    \"ephemeral-storage\": \"48725632Ki\",\n                    \"hugepages-1Gi\": \"0\",\n                    \"hugepages-2Mi\": \"0\",\n                    \"memory\": \"3784328Ki\",\n                    \"pods\": \"110\"\n                },\n                \"allocatable\": {\n                    \"attachable-volumes-aws-ebs\": \"25\",\n                    \"cpu\": \"2\",\n                    \"ephemeral-storage\": \"44905542377\",\n                    \"hugepages-1Gi\": \"0\",\n                    \"hugepages-2Mi\": \"0\",\n                    \"memory\": \"3681928Ki\",\n                    \"pods\": \"110\"\n                },\n                \"conditions\": [\n                    {\n                        \"type\": \"NetworkUnavailable\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-06-16T16:16:19Z\",\n                        \"lastTransitionTime\": \"2021-06-16T16:16:19Z\",\n                        \"reason\": \"FlannelIsUp\",\n             
           \"message\": \"Flannel is running on this node\"\n                    },\n                    {\n                        \"type\": \"MemoryPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-06-16T16:16:36Z\",\n                        \"lastTransitionTime\": \"2021-06-16T16:15:18Z\",\n                        \"reason\": \"KubeletHasSufficientMemory\",\n                        \"message\": \"kubelet has sufficient memory available\"\n                    },\n                    {\n                        \"type\": \"DiskPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-06-16T16:16:36Z\",\n                        \"lastTransitionTime\": \"2021-06-16T16:15:18Z\",\n                        \"reason\": \"KubeletHasNoDiskPressure\",\n                        \"message\": \"kubelet has no disk pressure\"\n                    },\n                    {\n                        \"type\": \"PIDPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-06-16T16:16:36Z\",\n                        \"lastTransitionTime\": \"2021-06-16T16:15:18Z\",\n                        \"reason\": \"KubeletHasSufficientPID\",\n                        \"message\": \"kubelet has sufficient PID available\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastHeartbeatTime\": \"2021-06-16T16:16:36Z\",\n                        \"lastTransitionTime\": \"2021-06-16T16:16:26Z\",\n                        \"reason\": \"KubeletReady\",\n                        \"message\": \"kubelet is posting ready status. 
AppArmor enabled\"\n                    }\n                ],\n                \"addresses\": [\n                    {\n                        \"type\": \"InternalIP\",\n                        \"address\": \"172.20.37.218\"\n                    },\n                    {\n                        \"type\": \"ExternalIP\",\n                        \"address\": \"34.243.197.33\"\n                    },\n                    {\n                        \"type\": \"Hostname\",\n                        \"address\": \"ip-172-20-37-218.eu-west-1.compute.internal\"\n                    },\n                    {\n                        \"type\": \"InternalDNS\",\n                        \"address\": \"ip-172-20-37-218.eu-west-1.compute.internal\"\n                    },\n                    {\n                        \"type\": \"ExternalDNS\",\n                        \"address\": \"ec2-34-243-197-33.eu-west-1.compute.amazonaws.com\"\n                    }\n                ],\n                \"daemonEndpoints\": {\n                    \"kubeletEndpoint\": {\n                        \"Port\": 10250\n                    }\n                },\n                \"nodeInfo\": {\n                    \"machineID\": \"ec2fb2a9268e6659a9696455e6509bc7\",\n                    \"systemUUID\": \"ec2fb2a9-268e-6659-a969-6455e6509bc7\",\n                    \"bootID\": \"81f3d493-0900-4ca8-8643-f1aab08c48e4\",\n                    \"kernelVersion\": \"5.8.0-1035-aws\",\n                    \"osImage\": \"Ubuntu 20.04.2 LTS\",\n                    \"containerRuntimeVersion\": \"docker://20.10.5\",\n                    \"kubeletVersion\": \"v1.21.1\",\n                    \"kubeProxyVersion\": \"v1.21.1\",\n                    \"operatingSystem\": \"linux\",\n                    \"architecture\": \"amd64\"\n                },\n                \"images\": [\n                    {\n                        \"names\": [\n                            
\"k8s.gcr.io/etcdadm/etcd-manager@sha256:ebb73d3d4a99da609f9e01c556cd9f9aa7a0aecba8f5bc5588d7c45eb38e3a7e\",\n                            \"k8s.gcr.io/etcdadm/etcd-manager:3.0.20210430\"\n                        ],\n                        \"sizeBytes\": 492748624\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kube-proxy-amd64:v1.21.1\"\n                        ],\n                        \"sizeBytes\": 130788187\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kube-apiserver-amd64:v1.21.1\"\n                        ],\n                        \"sizeBytes\": 125612423\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kube-controller-manager-amd64:v1.21.1\"\n                        ],\n                        \"sizeBytes\": 119825302\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kops/dns-controller:1.21.0-beta.3\"\n                        ],\n                        \"sizeBytes\": 112242860\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kops/kops-controller:1.21.0-beta.3\"\n                        ],\n                        \"sizeBytes\": 110449752\n                    },\n                    {\n                        \"names\": [\n                            \"quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8\",\n                            \"quay.io/coreos/flannel:v0.13.0\"\n                        ],\n                        \"sizeBytes\": 57156911\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kube-scheduler-amd64:v1.21.1\"\n                        
],\n                        \"sizeBytes\": 50635642\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kops/kube-apiserver-healthcheck:1.21.0-beta.3\"\n                        ],\n                        \"sizeBytes\": 24015926\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f\",\n                            \"k8s.gcr.io/pause:3.2\"\n                        ],\n                        \"sizeBytes\": 682696\n                    }\n                ]\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"ip-172-20-52-203.eu-west-1.compute.internal\",\n                \"uid\": \"84635025-912e-4be1-8cfc-71f4d38e0da1\",\n                \"resourceVersion\": \"846\",\n                \"creationTimestamp\": \"2021-06-16T16:17:19Z\",\n                \"labels\": {\n                    \"beta.kubernetes.io/arch\": \"amd64\",\n                    \"beta.kubernetes.io/instance-type\": \"t3.medium\",\n                    \"beta.kubernetes.io/os\": \"linux\",\n                    \"failure-domain.beta.kubernetes.io/region\": \"eu-west-1\",\n                    \"failure-domain.beta.kubernetes.io/zone\": \"eu-west-1a\",\n                    \"kops.k8s.io/instancegroup\": \"nodes-eu-west-1a\",\n                    \"kubernetes.io/arch\": \"amd64\",\n                    \"kubernetes.io/hostname\": \"ip-172-20-52-203.eu-west-1.compute.internal\",\n                    \"kubernetes.io/os\": \"linux\",\n                    \"kubernetes.io/role\": \"node\",\n                    \"node-role.kubernetes.io/node\": \"\",\n                    \"node.kubernetes.io/instance-type\": \"t3.medium\",\n                    \"topology.kubernetes.io/region\": \"eu-west-1\",\n                    \"topology.kubernetes.io/zone\": 
\"eu-west-1a\"\n                },\n                \"annotations\": {\n                    \"flannel.alpha.coreos.com/backend-data\": \"{\\\"VtepMAC\\\":\\\"96:4e:44:ea:b3:48\\\"}\",\n                    \"flannel.alpha.coreos.com/backend-type\": \"vxlan\",\n                    \"flannel.alpha.coreos.com/kube-subnet-manager\": \"true\",\n                    \"flannel.alpha.coreos.com/public-ip\": \"172.20.52.203\",\n                    \"node.alpha.kubernetes.io/ttl\": \"0\",\n                    \"volumes.kubernetes.io/controller-managed-attach-detach\": \"true\"\n                }\n            },\n            \"spec\": {\n                \"podCIDR\": \"100.96.3.0/24\",\n                \"podCIDRs\": [\n                    \"100.96.3.0/24\"\n                ],\n                \"providerID\": \"aws:///eu-west-1a/i-014525e071afabf89\"\n            },\n            \"status\": {\n                \"capacity\": {\n                    \"attachable-volumes-aws-ebs\": \"25\",\n                    \"cpu\": \"2\",\n                    \"ephemeral-storage\": \"48725632Ki\",\n                    \"hugepages-1Gi\": \"0\",\n                    \"hugepages-2Mi\": \"0\",\n                    \"memory\": \"3968644Ki\",\n                    \"pods\": \"110\"\n                },\n                \"allocatable\": {\n                    \"attachable-volumes-aws-ebs\": \"25\",\n                    \"cpu\": \"2\",\n                    \"ephemeral-storage\": \"44905542377\",\n                    \"hugepages-1Gi\": \"0\",\n                    \"hugepages-2Mi\": \"0\",\n                    \"memory\": \"3866244Ki\",\n                    \"pods\": \"110\"\n                },\n                \"conditions\": [\n                    {\n                        \"type\": \"NetworkUnavailable\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-06-16T16:17:27Z\",\n                        \"lastTransitionTime\": \"2021-06-16T16:17:27Z\",\n      
                  \"reason\": \"FlannelIsUp\",\n                        \"message\": \"Flannel is running on this node\"\n                    },\n                    {\n                        \"type\": \"MemoryPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-06-16T16:17:49Z\",\n                        \"lastTransitionTime\": \"2021-06-16T16:17:19Z\",\n                        \"reason\": \"KubeletHasSufficientMemory\",\n                        \"message\": \"kubelet has sufficient memory available\"\n                    },\n                    {\n                        \"type\": \"DiskPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-06-16T16:17:49Z\",\n                        \"lastTransitionTime\": \"2021-06-16T16:17:19Z\",\n                        \"reason\": \"KubeletHasNoDiskPressure\",\n                        \"message\": \"kubelet has no disk pressure\"\n                    },\n                    {\n                        \"type\": \"PIDPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-06-16T16:17:49Z\",\n                        \"lastTransitionTime\": \"2021-06-16T16:17:19Z\",\n                        \"reason\": \"KubeletHasSufficientPID\",\n                        \"message\": \"kubelet has sufficient PID available\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastHeartbeatTime\": \"2021-06-16T16:17:49Z\",\n                        \"lastTransitionTime\": \"2021-06-16T16:17:29Z\",\n                        \"reason\": \"KubeletReady\",\n                        \"message\": \"kubelet is posting ready status. 
AppArmor enabled\"\n                    }\n                ],\n                \"addresses\": [\n                    {\n                        \"type\": \"InternalIP\",\n                        \"address\": \"172.20.52.203\"\n                    },\n                    {\n                        \"type\": \"ExternalIP\",\n                        \"address\": \"52.51.66.205\"\n                    },\n                    {\n                        \"type\": \"Hostname\",\n                        \"address\": \"ip-172-20-52-203.eu-west-1.compute.internal\"\n                    },\n                    {\n                        \"type\": \"InternalDNS\",\n                        \"address\": \"ip-172-20-52-203.eu-west-1.compute.internal\"\n                    },\n                    {\n                        \"type\": \"ExternalDNS\",\n                        \"address\": \"ec2-52-51-66-205.eu-west-1.compute.amazonaws.com\"\n                    }\n                ],\n                \"daemonEndpoints\": {\n                    \"kubeletEndpoint\": {\n                        \"Port\": 10250\n                    }\n                },\n                \"nodeInfo\": {\n                    \"machineID\": \"ec2f4850881de5cd15ce1a83542727a6\",\n                    \"systemUUID\": \"ec2f4850-881d-e5cd-15ce-1a83542727a6\",\n                    \"bootID\": \"5fce2a53-9135-4f00-a7a7-85844e34314f\",\n                    \"kernelVersion\": \"5.8.0-1035-aws\",\n                    \"osImage\": \"Ubuntu 20.04.2 LTS\",\n                    \"containerRuntimeVersion\": \"docker://20.10.5\",\n                    \"kubeletVersion\": \"v1.21.1\",\n                    \"kubeProxyVersion\": \"v1.21.1\",\n                    \"operatingSystem\": \"linux\",\n                    \"architecture\": \"amd64\"\n                },\n                \"images\": [\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kube-proxy-amd64:v1.21.1\"\n        
                ],\n                        \"sizeBytes\": 130788187\n                    },\n                    {\n                        \"names\": [\n                            \"quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8\",\n                            \"quay.io/coreos/flannel:v0.13.0\"\n                        ],\n                        \"sizeBytes\": 57156911\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f\",\n                            \"k8s.gcr.io/pause:3.2\"\n                        ],\n                        \"sizeBytes\": 682696\n                    }\n                ]\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"ip-172-20-57-162.eu-west-1.compute.internal\",\n                \"uid\": \"04578755-f5c3-4c9b-8788-79a1117f5bd2\",\n                \"resourceVersion\": \"831\",\n                \"creationTimestamp\": \"2021-06-16T16:17:16Z\",\n                \"labels\": {\n                    \"beta.kubernetes.io/arch\": \"amd64\",\n                    \"beta.kubernetes.io/instance-type\": \"t3.medium\",\n                    \"beta.kubernetes.io/os\": \"linux\",\n                    \"failure-domain.beta.kubernetes.io/region\": \"eu-west-1\",\n                    \"failure-domain.beta.kubernetes.io/zone\": \"eu-west-1a\",\n                    \"kops.k8s.io/instancegroup\": \"nodes-eu-west-1a\",\n                    \"kubernetes.io/arch\": \"amd64\",\n                    \"kubernetes.io/hostname\": \"ip-172-20-57-162.eu-west-1.compute.internal\",\n                    \"kubernetes.io/os\": \"linux\",\n                    \"kubernetes.io/role\": \"node\",\n                    \"node-role.kubernetes.io/node\": \"\",\n                    \"node.kubernetes.io/instance-type\": \"t3.medium\",\n            
        \"topology.kubernetes.io/region\": \"eu-west-1\",\n                    \"topology.kubernetes.io/zone\": \"eu-west-1a\"\n                },\n                \"annotations\": {\n                    \"flannel.alpha.coreos.com/backend-data\": \"{\\\"VtepMAC\\\":\\\"5e:5b:8b:4e:e6:8a\\\"}\",\n                    \"flannel.alpha.coreos.com/backend-type\": \"vxlan\",\n                    \"flannel.alpha.coreos.com/kube-subnet-manager\": \"true\",\n                    \"flannel.alpha.coreos.com/public-ip\": \"172.20.57.162\",\n                    \"node.alpha.kubernetes.io/ttl\": \"0\",\n                    \"volumes.kubernetes.io/controller-managed-attach-detach\": \"true\"\n                }\n            },\n            \"spec\": {\n                \"podCIDR\": \"100.96.1.0/24\",\n                \"podCIDRs\": [\n                    \"100.96.1.0/24\"\n                ],\n                \"providerID\": \"aws:///eu-west-1a/i-0fc5e8c67e9c9c510\"\n            },\n            \"status\": {\n                \"capacity\": {\n                    \"attachable-volumes-aws-ebs\": \"25\",\n                    \"cpu\": \"2\",\n                    \"ephemeral-storage\": \"48725632Ki\",\n                    \"hugepages-1Gi\": \"0\",\n                    \"hugepages-2Mi\": \"0\",\n                    \"memory\": \"3968652Ki\",\n                    \"pods\": \"110\"\n                },\n                \"allocatable\": {\n                    \"attachable-volumes-aws-ebs\": \"25\",\n                    \"cpu\": \"2\",\n                    \"ephemeral-storage\": \"44905542377\",\n                    \"hugepages-1Gi\": \"0\",\n                    \"hugepages-2Mi\": \"0\",\n                    \"memory\": \"3866252Ki\",\n                    \"pods\": \"110\"\n                },\n                \"conditions\": [\n                    {\n                        \"type\": \"NetworkUnavailable\",\n                        \"status\": \"False\",\n                        
\"lastHeartbeatTime\": \"2021-06-16T16:17:25Z\",\n                        \"lastTransitionTime\": \"2021-06-16T16:17:25Z\",\n                        \"reason\": \"FlannelIsUp\",\n                        \"message\": \"Flannel is running on this node\"\n                    },\n                    {\n                        \"type\": \"MemoryPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-06-16T16:17:46Z\",\n                        \"lastTransitionTime\": \"2021-06-16T16:17:16Z\",\n                        \"reason\": \"KubeletHasSufficientMemory\",\n                        \"message\": \"kubelet has sufficient memory available\"\n                    },\n                    {\n                        \"type\": \"DiskPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-06-16T16:17:46Z\",\n                        \"lastTransitionTime\": \"2021-06-16T16:17:16Z\",\n                        \"reason\": \"KubeletHasNoDiskPressure\",\n                        \"message\": \"kubelet has no disk pressure\"\n                    },\n                    {\n                        \"type\": \"PIDPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-06-16T16:17:46Z\",\n                        \"lastTransitionTime\": \"2021-06-16T16:17:16Z\",\n                        \"reason\": \"KubeletHasSufficientPID\",\n                        \"message\": \"kubelet has sufficient PID available\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastHeartbeatTime\": \"2021-06-16T16:17:46Z\",\n                        \"lastTransitionTime\": \"2021-06-16T16:17:26Z\",\n                        \"reason\": \"KubeletReady\",\n                        \"message\": \"kubelet is posting ready status. 
AppArmor enabled\"\n                    }\n                ],\n                \"addresses\": [\n                    {\n                        \"type\": \"InternalIP\",\n                        \"address\": \"172.20.57.162\"\n                    },\n                    {\n                        \"type\": \"ExternalIP\",\n                        \"address\": \"3.249.232.131\"\n                    },\n                    {\n                        \"type\": \"Hostname\",\n                        \"address\": \"ip-172-20-57-162.eu-west-1.compute.internal\"\n                    },\n                    {\n                        \"type\": \"InternalDNS\",\n                        \"address\": \"ip-172-20-57-162.eu-west-1.compute.internal\"\n                    },\n                    {\n                        \"type\": \"ExternalDNS\",\n                        \"address\": \"ec2-3-249-232-131.eu-west-1.compute.amazonaws.com\"\n                    }\n                ],\n                \"daemonEndpoints\": {\n                    \"kubeletEndpoint\": {\n                        \"Port\": 10250\n                    }\n                },\n                \"nodeInfo\": {\n                    \"machineID\": \"ec22d29048c3d3b9e7669214a34c9d66\",\n                    \"systemUUID\": \"ec22d290-48c3-d3b9-e766-9214a34c9d66\",\n                    \"bootID\": \"c68fd3fc-f9b8-4d5b-bd6f-5ed69aca1794\",\n                    \"kernelVersion\": \"5.8.0-1035-aws\",\n                    \"osImage\": \"Ubuntu 20.04.2 LTS\",\n                    \"containerRuntimeVersion\": \"docker://20.10.5\",\n                    \"kubeletVersion\": \"v1.21.1\",\n                    \"kubeProxyVersion\": \"v1.21.1\",\n                    \"operatingSystem\": \"linux\",\n                    \"architecture\": \"amd64\"\n                },\n                \"images\": [\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kube-proxy-amd64:v1.21.1\"\n      
                  ],\n                        \"sizeBytes\": 130788187\n                    },\n                    {\n                        \"names\": [\n                            \"quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8\",\n                            \"quay.io/coreos/flannel:v0.13.0\"\n                        ],\n                        \"sizeBytes\": 57156911\n                    },\n                    {\n                        \"names\": [\n                            \"coredns/coredns@sha256:642ff9910da6ea9a8624b0234eef52af9ca75ecbec474c5507cb096bdfbae4e5\",\n                            \"coredns/coredns:1.8.3\"\n                        ],\n                        \"sizeBytes\": 43499235\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f\",\n                            \"k8s.gcr.io/pause:3.2\"\n                        ],\n                        \"sizeBytes\": 682696\n                    }\n                ]\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"ip-172-20-58-250.eu-west-1.compute.internal\",\n                \"uid\": \"7300abdb-83fe-49b3-83a9-eb4c8a1fdeda\",\n                \"resourceVersion\": \"835\",\n                \"creationTimestamp\": \"2021-06-16T16:17:16Z\",\n                \"labels\": {\n                    \"beta.kubernetes.io/arch\": \"amd64\",\n                    \"beta.kubernetes.io/instance-type\": \"t3.medium\",\n                    \"beta.kubernetes.io/os\": \"linux\",\n                    \"failure-domain.beta.kubernetes.io/region\": \"eu-west-1\",\n                    \"failure-domain.beta.kubernetes.io/zone\": \"eu-west-1a\",\n                    \"kops.k8s.io/instancegroup\": \"nodes-eu-west-1a\",\n                    \"kubernetes.io/arch\": \"amd64\",\n             
       \"kubernetes.io/hostname\": \"ip-172-20-58-250.eu-west-1.compute.internal\",\n                    \"kubernetes.io/os\": \"linux\",\n                    \"kubernetes.io/role\": \"node\",\n                    \"node-role.kubernetes.io/node\": \"\",\n                    \"node.kubernetes.io/instance-type\": \"t3.medium\",\n                    \"topology.kubernetes.io/region\": \"eu-west-1\",\n                    \"topology.kubernetes.io/zone\": \"eu-west-1a\"\n                },\n                \"annotations\": {\n                    \"flannel.alpha.coreos.com/backend-data\": \"{\\\"VtepMAC\\\":\\\"ce:cc:f6:a3:12:00\\\"}\",\n                    \"flannel.alpha.coreos.com/backend-type\": \"vxlan\",\n                    \"flannel.alpha.coreos.com/kube-subnet-manager\": \"true\",\n                    \"flannel.alpha.coreos.com/public-ip\": \"172.20.58.250\",\n                    \"node.alpha.kubernetes.io/ttl\": \"0\",\n                    \"volumes.kubernetes.io/controller-managed-attach-detach\": \"true\"\n                }\n            },\n            \"spec\": {\n                \"podCIDR\": \"100.96.2.0/24\",\n                \"podCIDRs\": [\n                    \"100.96.2.0/24\"\n                ],\n                \"providerID\": \"aws:///eu-west-1a/i-0af2940a64fcdbebc\"\n            },\n            \"status\": {\n                \"capacity\": {\n                    \"attachable-volumes-aws-ebs\": \"25\",\n                    \"cpu\": \"2\",\n                    \"ephemeral-storage\": \"48725632Ki\",\n                    \"hugepages-1Gi\": \"0\",\n                    \"hugepages-2Mi\": \"0\",\n                    \"memory\": \"3968644Ki\",\n                    \"pods\": \"110\"\n                },\n                \"allocatable\": {\n                    \"attachable-volumes-aws-ebs\": \"25\",\n                    \"cpu\": \"2\",\n                    \"ephemeral-storage\": \"44905542377\",\n                    \"hugepages-1Gi\": \"0\",\n                    
\"hugepages-2Mi\": \"0\",\n                    \"memory\": \"3866244Ki\",\n                    \"pods\": \"110\"\n                },\n                \"conditions\": [\n                    {\n                        \"type\": \"NetworkUnavailable\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-06-16T16:17:24Z\",\n                        \"lastTransitionTime\": \"2021-06-16T16:17:24Z\",\n                        \"reason\": \"FlannelIsUp\",\n                        \"message\": \"Flannel is running on this node\"\n                    },\n                    {\n                        \"type\": \"MemoryPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-06-16T16:17:46Z\",\n                        \"lastTransitionTime\": \"2021-06-16T16:17:16Z\",\n                        \"reason\": \"KubeletHasSufficientMemory\",\n                        \"message\": \"kubelet has sufficient memory available\"\n                    },\n                    {\n                        \"type\": \"DiskPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-06-16T16:17:46Z\",\n                        \"lastTransitionTime\": \"2021-06-16T16:17:16Z\",\n                        \"reason\": \"KubeletHasNoDiskPressure\",\n                        \"message\": \"kubelet has no disk pressure\"\n                    },\n                    {\n                        \"type\": \"PIDPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-06-16T16:17:46Z\",\n                        \"lastTransitionTime\": \"2021-06-16T16:17:16Z\",\n                        \"reason\": \"KubeletHasSufficientPID\",\n                        \"message\": \"kubelet has sufficient PID available\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n     
                   \"status\": \"True\",\n                        \"lastHeartbeatTime\": \"2021-06-16T16:17:46Z\",\n                        \"lastTransitionTime\": \"2021-06-16T16:17:26Z\",\n                        \"reason\": \"KubeletReady\",\n                        \"message\": \"kubelet is posting ready status. AppArmor enabled\"\n                    }\n                ],\n                \"addresses\": [\n                    {\n                        \"type\": \"InternalIP\",\n                        \"address\": \"172.20.58.250\"\n                    },\n                    {\n                        \"type\": \"ExternalIP\",\n                        \"address\": \"3.249.4.52\"\n                    },\n                    {\n                        \"type\": \"Hostname\",\n                        \"address\": \"ip-172-20-58-250.eu-west-1.compute.internal\"\n                    },\n                    {\n                        \"type\": \"InternalDNS\",\n                        \"address\": \"ip-172-20-58-250.eu-west-1.compute.internal\"\n                    },\n                    {\n                        \"type\": \"ExternalDNS\",\n                        \"address\": \"ec2-3-249-4-52.eu-west-1.compute.amazonaws.com\"\n                    }\n                ],\n                \"daemonEndpoints\": {\n                    \"kubeletEndpoint\": {\n                        \"Port\": 10250\n                    }\n                },\n                \"nodeInfo\": {\n                    \"machineID\": \"ec283505842cee716f7906839b69833a\",\n                    \"systemUUID\": \"ec283505-842c-ee71-6f79-06839b69833a\",\n                    \"bootID\": \"6b724a5a-b8d8-4a83-aa05-2b71d8c2d00d\",\n                    \"kernelVersion\": \"5.8.0-1035-aws\",\n                    \"osImage\": \"Ubuntu 20.04.2 LTS\",\n                    \"containerRuntimeVersion\": \"docker://20.10.5\",\n                    \"kubeletVersion\": \"v1.21.1\",\n                    
\"kubeProxyVersion\": \"v1.21.1\",\n                    \"operatingSystem\": \"linux\",\n                    \"architecture\": \"amd64\"\n                },\n                \"images\": [\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kube-proxy-amd64:v1.21.1\"\n                        ],\n                        \"sizeBytes\": 130788187\n                    },\n                    {\n                        \"names\": [\n                            \"quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8\",\n                            \"quay.io/coreos/flannel:v0.13.0\"\n                        ],\n                        \"sizeBytes\": 57156911\n                    },\n                    {\n                        \"names\": [\n                            \"coredns/coredns@sha256:642ff9910da6ea9a8624b0234eef52af9ca75ecbec474c5507cb096bdfbae4e5\",\n                            \"coredns/coredns:1.8.3\"\n                        ],\n                        \"sizeBytes\": 43499235\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/cpa/cluster-proportional-autoscaler@sha256:67640771ad9fc56f109d5b01e020f0c858e7c890bb0eb15ba0ebd325df3285e7\",\n                            \"k8s.gcr.io/cpa/cluster-proportional-autoscaler:1.8.3\"\n                        ],\n                        \"sizeBytes\": 40647382\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f\",\n                            \"k8s.gcr.io/pause:3.2\"\n                        ],\n                        \"sizeBytes\": 682696\n                    }\n                ]\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": 
\"ip-172-20-62-139.eu-west-1.compute.internal\",\n                \"uid\": \"0c7ebb44-f33c-43ea-a856-11a5ed1b8f99\",\n                \"resourceVersion\": \"854\",\n                \"creationTimestamp\": \"2021-06-16T16:17:22Z\",\n                \"labels\": {\n                    \"beta.kubernetes.io/arch\": \"amd64\",\n                    \"beta.kubernetes.io/instance-type\": \"t3.medium\",\n                    \"beta.kubernetes.io/os\": \"linux\",\n                    \"failure-domain.beta.kubernetes.io/region\": \"eu-west-1\",\n                    \"failure-domain.beta.kubernetes.io/zone\": \"eu-west-1a\",\n                    \"kops.k8s.io/instancegroup\": \"nodes-eu-west-1a\",\n                    \"kubernetes.io/arch\": \"amd64\",\n                    \"kubernetes.io/hostname\": \"ip-172-20-62-139.eu-west-1.compute.internal\",\n                    \"kubernetes.io/os\": \"linux\",\n                    \"kubernetes.io/role\": \"node\",\n                    \"node-role.kubernetes.io/node\": \"\",\n                    \"node.kubernetes.io/instance-type\": \"t3.medium\",\n                    \"topology.kubernetes.io/region\": \"eu-west-1\",\n                    \"topology.kubernetes.io/zone\": \"eu-west-1a\"\n                },\n                \"annotations\": {\n                    \"flannel.alpha.coreos.com/backend-data\": \"{\\\"VtepMAC\\\":\\\"fe:49:7b:39:ac:8d\\\"}\",\n                    \"flannel.alpha.coreos.com/backend-type\": \"vxlan\",\n                    \"flannel.alpha.coreos.com/kube-subnet-manager\": \"true\",\n                    \"flannel.alpha.coreos.com/public-ip\": \"172.20.62.139\",\n                    \"node.alpha.kubernetes.io/ttl\": \"0\",\n                    \"volumes.kubernetes.io/controller-managed-attach-detach\": \"true\"\n                }\n            },\n            \"spec\": {\n                \"podCIDR\": \"100.96.4.0/24\",\n                \"podCIDRs\": [\n                    \"100.96.4.0/24\"\n                ],\n           
     \"providerID\": \"aws:///eu-west-1a/i-0dad3db7ab8fffc94\"\n            },\n            \"status\": {\n                \"capacity\": {\n                    \"attachable-volumes-aws-ebs\": \"25\",\n                    \"cpu\": \"2\",\n                    \"ephemeral-storage\": \"48725632Ki\",\n                    \"hugepages-1Gi\": \"0\",\n                    \"hugepages-2Mi\": \"0\",\n                    \"memory\": \"3968644Ki\",\n                    \"pods\": \"110\"\n                },\n                \"allocatable\": {\n                    \"attachable-volumes-aws-ebs\": \"25\",\n                    \"cpu\": \"2\",\n                    \"ephemeral-storage\": \"44905542377\",\n                    \"hugepages-1Gi\": \"0\",\n                    \"hugepages-2Mi\": \"0\",\n                    \"memory\": \"3866244Ki\",\n                    \"pods\": \"110\"\n                },\n                \"conditions\": [\n                    {\n                        \"type\": \"NetworkUnavailable\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-06-16T16:17:30Z\",\n                        \"lastTransitionTime\": \"2021-06-16T16:17:30Z\",\n                        \"reason\": \"FlannelIsUp\",\n                        \"message\": \"Flannel is running on this node\"\n                    },\n                    {\n                        \"type\": \"MemoryPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-06-16T16:17:52Z\",\n                        \"lastTransitionTime\": \"2021-06-16T16:17:22Z\",\n                        \"reason\": \"KubeletHasSufficientMemory\",\n                        \"message\": \"kubelet has sufficient memory available\"\n                    },\n                    {\n                        \"type\": \"DiskPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": 
\"2021-06-16T16:17:52Z\",\n                        \"lastTransitionTime\": \"2021-06-16T16:17:22Z\",\n                        \"reason\": \"KubeletHasNoDiskPressure\",\n                        \"message\": \"kubelet has no disk pressure\"\n                    },\n                    {\n                        \"type\": \"PIDPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-06-16T16:17:52Z\",\n                        \"lastTransitionTime\": \"2021-06-16T16:17:22Z\",\n                        \"reason\": \"KubeletHasSufficientPID\",\n                        \"message\": \"kubelet has sufficient PID available\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastHeartbeatTime\": \"2021-06-16T16:17:52Z\",\n                        \"lastTransitionTime\": \"2021-06-16T16:17:32Z\",\n                        \"reason\": \"KubeletReady\",\n                        \"message\": \"kubelet is posting ready status. 
AppArmor enabled\"\n                    }\n                ],\n                \"addresses\": [\n                    {\n                        \"type\": \"InternalIP\",\n                        \"address\": \"172.20.62.139\"\n                    },\n                    {\n                        \"type\": \"ExternalIP\",\n                        \"address\": \"34.245.105.52\"\n                    },\n                    {\n                        \"type\": \"Hostname\",\n                        \"address\": \"ip-172-20-62-139.eu-west-1.compute.internal\"\n                    },\n                    {\n                        \"type\": \"InternalDNS\",\n                        \"address\": \"ip-172-20-62-139.eu-west-1.compute.internal\"\n                    },\n                    {\n                        \"type\": \"ExternalDNS\",\n                        \"address\": \"ec2-34-245-105-52.eu-west-1.compute.amazonaws.com\"\n                    }\n                ],\n                \"daemonEndpoints\": {\n                    \"kubeletEndpoint\": {\n                        \"Port\": 10250\n                    }\n                },\n                \"nodeInfo\": {\n                    \"machineID\": \"ec24a748ab079ed131419b8b6cda6814\",\n                    \"systemUUID\": \"ec24a748-ab07-9ed1-3141-9b8b6cda6814\",\n                    \"bootID\": \"7389e575-846a-434a-9f91-232f83db7f29\",\n                    \"kernelVersion\": \"5.8.0-1035-aws\",\n                    \"osImage\": \"Ubuntu 20.04.2 LTS\",\n                    \"containerRuntimeVersion\": \"docker://20.10.5\",\n                    \"kubeletVersion\": \"v1.21.1\",\n                    \"kubeProxyVersion\": \"v1.21.1\",\n                    \"operatingSystem\": \"linux\",\n                    \"architecture\": \"amd64\"\n                },\n                \"images\": [\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kube-proxy-amd64:v1.21.1\"\n      
                  ],\n                        \"sizeBytes\": 130788187\n                    },\n                    {\n                        \"names\": [\n                            \"quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8\",\n                            \"quay.io/coreos/flannel:v0.13.0\"\n                        ],\n                        \"sizeBytes\": 57156911\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f\",\n                            \"k8s.gcr.io/pause:3.2\"\n                        ],\n                        \"sizeBytes\": 682696\n                    }\n                ]\n            }\n        }\n    ]\n}\n{\n    \"kind\": \"EventList\",\n    \"apiVersion\": \"v1\",\n    \"metadata\": {\n        \"resourceVersion\": \"258\"\n    },\n    \"items\": [\n        {\n            \"metadata\": {\n                \"name\": \"coredns-autoscaler-6f594f4c58-cds4r.16891c329dabdfda\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"f035a8f8-47c8-4049-9be4-9334b47e1b07\",\n                \"resourceVersion\": \"81\",\n                \"creationTimestamp\": \"2021-06-16T16:15:44Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-autoscaler-6f594f4c58-cds4r\",\n                \"uid\": \"611ad708-344a-4a9f-9ba5-c3e0a042b01c\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"409\"\n            },\n            \"reason\": \"FailedScheduling\",\n            \"message\": \"0/1 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            
},\n            \"firstTimestamp\": \"2021-06-16T16:15:44Z\",\n            \"lastTimestamp\": \"2021-06-16T16:16:35Z\",\n            \"count\": 5,\n            \"type\": \"Warning\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-autoscaler-6f594f4c58-cds4r.16891c47fb1bd841\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"bd100e2e-4db8-4a01-ba8d-0d271c75adaf\",\n                \"resourceVersion\": \"84\",\n                \"creationTimestamp\": \"2021-06-16T16:17:16Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-autoscaler-6f594f4c58-cds4r\",\n                \"uid\": \"611ad708-344a-4a9f-9ba5-c3e0a042b01c\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"435\"\n            },\n            \"reason\": \"FailedScheduling\",\n            \"message\": \"0/2 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2021-06-16T16:17:16Z\",\n            \"lastTimestamp\": \"2021-06-16T16:17:16Z\",\n            \"count\": 1,\n            \"type\": \"Warning\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-autoscaler-6f594f4c58-cds4r.16891c4a76a73f1a\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"14754083-a53f-4d35-8508-6b159fa01f32\",\n                
\"resourceVersion\": \"214\",\n                \"creationTimestamp\": \"2021-06-16T16:17:26Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-autoscaler-6f594f4c58-cds4r\",\n                \"uid\": \"611ad708-344a-4a9f-9ba5-c3e0a042b01c\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"647\"\n            },\n            \"reason\": \"Scheduled\",\n            \"message\": \"Successfully assigned kube-system/coredns-autoscaler-6f594f4c58-cds4r to ip-172-20-58-250.eu-west-1.compute.internal\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2021-06-16T16:17:26Z\",\n            \"lastTimestamp\": \"2021-06-16T16:17:26Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-autoscaler-6f594f4c58-cds4r.16891c4a9e7b6f0a\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"3677da28-34df-4e15-abe6-400e82e06acd\",\n                \"resourceVersion\": \"231\",\n                \"creationTimestamp\": \"2021-06-16T16:17:28Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-autoscaler-6f594f4c58-cds4r\",\n                \"uid\": \"611ad708-344a-4a9f-9ba5-c3e0a042b01c\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"731\",\n                \"fieldPath\": \"spec.containers{autoscaler}\"\n            },\n            \"reason\": \"Pulling\",\n            \"message\": \"Pulling image 
\\\"k8s.gcr.io/cpa/cluster-proportional-autoscaler:1.8.3\\\"\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-58-250.eu-west-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-06-16T16:17:27Z\",\n            \"lastTimestamp\": \"2021-06-16T16:17:27Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-autoscaler-6f594f4c58-cds4r.16891c4b02686238\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"02e26abd-988b-417f-abd7-36c83d82ef7e\",\n                \"resourceVersion\": \"236\",\n                \"creationTimestamp\": \"2021-06-16T16:17:29Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-autoscaler-6f594f4c58-cds4r\",\n                \"uid\": \"611ad708-344a-4a9f-9ba5-c3e0a042b01c\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"731\",\n                \"fieldPath\": \"spec.containers{autoscaler}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Successfully pulled image \\\"k8s.gcr.io/cpa/cluster-proportional-autoscaler:1.8.3\\\" in 1.676454099s\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-58-250.eu-west-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-06-16T16:17:29Z\",\n            \"lastTimestamp\": \"2021-06-16T16:17:29Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            
\"metadata\": {\n                \"name\": \"coredns-autoscaler-6f594f4c58-cds4r.16891c4b095e344a\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"d98cf103-5711-420a-80d9-eed9cbf07d1d\",\n                \"resourceVersion\": \"239\",\n                \"creationTimestamp\": \"2021-06-16T16:17:29Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-autoscaler-6f594f4c58-cds4r\",\n                \"uid\": \"611ad708-344a-4a9f-9ba5-c3e0a042b01c\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"731\",\n                \"fieldPath\": \"spec.containers{autoscaler}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container autoscaler\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-58-250.eu-west-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-06-16T16:17:29Z\",\n            \"lastTimestamp\": \"2021-06-16T16:17:29Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-autoscaler-6f594f4c58-cds4r.16891c4b0f9d34fc\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"a1b61177-377f-405b-9c84-347356eebaff\",\n                \"resourceVersion\": \"241\",\n                \"creationTimestamp\": \"2021-06-16T16:17:29Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-autoscaler-6f594f4c58-cds4r\",\n                \"uid\": \"611ad708-344a-4a9f-9ba5-c3e0a042b01c\",\n                \"apiVersion\": 
\"v1\",\n                \"resourceVersion\": \"731\",\n                \"fieldPath\": \"spec.containers{autoscaler}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container autoscaler\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-58-250.eu-west-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-06-16T16:17:29Z\",\n            \"lastTimestamp\": \"2021-06-16T16:17:29Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-autoscaler-6f594f4c58.16891c32959997ea\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"dd3fdff6-cd2f-4b20-9de7-5f243cd52bb0\",\n                \"resourceVersion\": \"50\",\n                \"creationTimestamp\": \"2021-06-16T16:15:44Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"ReplicaSet\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-autoscaler-6f594f4c58\",\n                \"uid\": \"25def703-f6f8-44cd-be46-681181148c6f\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"394\"\n            },\n            \"reason\": \"SuccessfulCreate\",\n            \"message\": \"Created pod: coredns-autoscaler-6f594f4c58-cds4r\",\n            \"source\": {\n                \"component\": \"replicaset-controller\"\n            },\n            \"firstTimestamp\": \"2021-06-16T16:15:44Z\",\n            \"lastTimestamp\": \"2021-06-16T16:15:44Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            
\"metadata\": {\n                \"name\": \"coredns-autoscaler.16891c328ed3c684\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"7c9fff56-ce46-4750-9f37-16e22ea37f85\",\n                \"resourceVersion\": \"48\",\n                \"creationTimestamp\": \"2021-06-16T16:15:44Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Deployment\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-autoscaler\",\n                \"uid\": \"7e20abdb-1622-43ff-9a14-1b0eb5c2a304\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"320\"\n            },\n            \"reason\": \"ScalingReplicaSet\",\n            \"message\": \"Scaled up replica set coredns-autoscaler-6f594f4c58 to 1\",\n            \"source\": {\n                \"component\": \"deployment-controller\"\n            },\n            \"firstTimestamp\": \"2021-06-16T16:15:44Z\",\n            \"lastTimestamp\": \"2021-06-16T16:15:44Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-f45c4bf76-45v5f.16891c3296a20003\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"be423f70-8ccc-4e6c-bae5-560bf10e4fe4\",\n                \"resourceVersion\": \"80\",\n                \"creationTimestamp\": \"2021-06-16T16:15:44Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-f45c4bf76-45v5f\",\n                \"uid\": \"8ed76d88-825f-4f49-90e4-079f78e2318f\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"408\"\n            },\n            \"reason\": \"FailedScheduling\",\n            
\"message\": \"0/1 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2021-06-16T16:15:44Z\",\n            \"lastTimestamp\": \"2021-06-16T16:16:35Z\",\n            \"count\": 5,\n            \"type\": \"Warning\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-f45c4bf76-45v5f.16891c47f9db41c3\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"3ee87786-d99e-434d-a8ea-76b187b7df44\",\n                \"resourceVersion\": \"82\",\n                \"creationTimestamp\": \"2021-06-16T16:17:16Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-f45c4bf76-45v5f\",\n                \"uid\": \"8ed76d88-825f-4f49-90e4-079f78e2318f\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"427\"\n            },\n            \"reason\": \"FailedScheduling\",\n            \"message\": \"0/2 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2021-06-16T16:17:16Z\",\n            \"lastTimestamp\": \"2021-06-16T16:17:16Z\",\n            \"count\": 1,\n            \"type\": \"Warning\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": 
\"coredns-f45c4bf76-45v5f.16891c4a76bf5451\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"e4a2f6ec-3b5c-4367-bf33-fc8a9d0b3f2c\",\n                \"resourceVersion\": \"216\",\n                \"creationTimestamp\": \"2021-06-16T16:17:26Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-f45c4bf76-45v5f\",\n                \"uid\": \"8ed76d88-825f-4f49-90e4-079f78e2318f\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"643\"\n            },\n            \"reason\": \"Scheduled\",\n            \"message\": \"Successfully assigned kube-system/coredns-f45c4bf76-45v5f to ip-172-20-57-162.eu-west-1.compute.internal\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2021-06-16T16:17:26Z\",\n            \"lastTimestamp\": \"2021-06-16T16:17:26Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-f45c4bf76-45v5f.16891c4aa0c6f81d\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"7d077941-ce87-4f07-8218-81dfbcd0628b\",\n                \"resourceVersion\": \"223\",\n                \"creationTimestamp\": \"2021-06-16T16:17:27Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-f45c4bf76-45v5f\",\n                \"uid\": \"8ed76d88-825f-4f49-90e4-079f78e2318f\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"732\",\n                \"fieldPath\": \"spec.containers{coredns}\"\n            },\n            
\"reason\": \"Pulling\",\n            \"message\": \"Pulling image \\\"coredns/coredns:1.8.3\\\"\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-57-162.eu-west-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-06-16T16:17:27Z\",\n            \"lastTimestamp\": \"2021-06-16T16:17:27Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-f45c4bf76-45v5f.16891c4aff726a5f\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"7b18a5d7-a905-4a0a-a6c3-5f02e99e326e\",\n                \"resourceVersion\": \"235\",\n                \"creationTimestamp\": \"2021-06-16T16:17:29Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-f45c4bf76-45v5f\"\n            },\n            \"reason\": \"TaintManagerEviction\",\n            \"message\": \"Cancelling deletion of Pod kube-system/coredns-f45c4bf76-45v5f\",\n            \"source\": {\n                \"component\": \"taint-controller\"\n            },\n            \"firstTimestamp\": \"2021-06-16T16:17:29Z\",\n            \"lastTimestamp\": \"2021-06-16T16:17:29Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-f45c4bf76-45v5f.16891c4b4a7780cb\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"b3c7bbdf-8fdd-478c-a6cf-a2e50b86e445\",\n                \"resourceVersion\": \"247\",\n                \"creationTimestamp\": 
\"2021-06-16T16:17:30Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-f45c4bf76-45v5f\",\n                \"uid\": \"8ed76d88-825f-4f49-90e4-079f78e2318f\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"732\",\n                \"fieldPath\": \"spec.containers{coredns}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Successfully pulled image \\\"coredns/coredns:1.8.3\\\" in 2.846899624s\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-57-162.eu-west-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-06-16T16:17:30Z\",\n            \"lastTimestamp\": \"2021-06-16T16:17:30Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-f45c4bf76-45v5f.16891c4b516ec776\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"14532cfd-e703-470f-b568-b76259e4560b\",\n                \"resourceVersion\": \"248\",\n                \"creationTimestamp\": \"2021-06-16T16:17:30Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-f45c4bf76-45v5f\",\n                \"uid\": \"8ed76d88-825f-4f49-90e4-079f78e2318f\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"732\",\n                \"fieldPath\": \"spec.containers{coredns}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container coredns\",\n            \"source\": {\n                \"component\": \"kubelet\",\n   
             \"host\": \"ip-172-20-57-162.eu-west-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-06-16T16:17:30Z\",\n            \"lastTimestamp\": \"2021-06-16T16:17:30Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-f45c4bf76-45v5f.16891c4b5730458e\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"46b04377-e9b8-4459-b065-1bef3c986c92\",\n                \"resourceVersion\": \"249\",\n                \"creationTimestamp\": \"2021-06-16T16:17:30Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-f45c4bf76-45v5f\",\n                \"uid\": \"8ed76d88-825f-4f49-90e4-079f78e2318f\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"732\",\n                \"fieldPath\": \"spec.containers{coredns}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container coredns\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-57-162.eu-west-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-06-16T16:17:30Z\",\n            \"lastTimestamp\": \"2021-06-16T16:17:30Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-f45c4bf76-hf5wf.16891c4b20ab6128\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"a51b5628-aee4-4da3-a5a8-6c5118a3411a\",\n                \"resourceVersion\": 
\"243\",\n                \"creationTimestamp\": \"2021-06-16T16:17:29Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-f45c4bf76-hf5wf\",\n                \"uid\": \"987c7d49-1b9e-4048-a3ff-3cd241bcc6d6\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"756\"\n            },\n            \"reason\": \"Scheduled\",\n            \"message\": \"Successfully assigned kube-system/coredns-f45c4bf76-hf5wf to ip-172-20-58-250.eu-west-1.compute.internal\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2021-06-16T16:17:29Z\",\n            \"lastTimestamp\": \"2021-06-16T16:17:29Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-f45c4bf76-hf5wf.16891c4b4711fa1d\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"f1117cbd-2471-4c99-b25a-23d4bd2d0123\",\n                \"resourceVersion\": \"246\",\n                \"creationTimestamp\": \"2021-06-16T16:17:30Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-f45c4bf76-hf5wf\",\n                \"uid\": \"987c7d49-1b9e-4048-a3ff-3cd241bcc6d6\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"760\",\n                \"fieldPath\": \"spec.containers{coredns}\"\n            },\n            \"reason\": \"Pulling\",\n            \"message\": \"Pulling image \\\"coredns/coredns:1.8.3\\\"\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": 
\"ip-172-20-58-250.eu-west-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-06-16T16:17:30Z\",\n            \"lastTimestamp\": \"2021-06-16T16:17:30Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-f45c4bf76-hf5wf.16891c4bdd6c7f5e\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"20092b7c-723b-4161-84ed-173c28d19aca\",\n                \"resourceVersion\": \"251\",\n                \"creationTimestamp\": \"2021-06-16T16:17:32Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-f45c4bf76-hf5wf\",\n                \"uid\": \"987c7d49-1b9e-4048-a3ff-3cd241bcc6d6\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"760\",\n                \"fieldPath\": \"spec.containers{coredns}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Successfully pulled image \\\"coredns/coredns:1.8.3\\\" in 2.522492727s\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-58-250.eu-west-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-06-16T16:17:32Z\",\n            \"lastTimestamp\": \"2021-06-16T16:17:32Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-f45c4bf76-hf5wf.16891c4be4ee6f1c\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"8c793962-f9b2-4d40-a405-2cc80fcc4983\",\n                
\"resourceVersion\": \"252\",\n                \"creationTimestamp\": \"2021-06-16T16:17:32Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-f45c4bf76-hf5wf\",\n                \"uid\": \"987c7d49-1b9e-4048-a3ff-3cd241bcc6d6\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"760\",\n                \"fieldPath\": \"spec.containers{coredns}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container coredns\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-58-250.eu-west-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-06-16T16:17:32Z\",\n            \"lastTimestamp\": \"2021-06-16T16:17:32Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-f45c4bf76-hf5wf.16891c4bea870e52\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"7114bdbc-a94e-4be7-af91-a0ee88d43323\",\n                \"resourceVersion\": \"253\",\n                \"creationTimestamp\": \"2021-06-16T16:17:33Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-f45c4bf76-hf5wf\",\n                \"uid\": \"987c7d49-1b9e-4048-a3ff-3cd241bcc6d6\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"760\",\n                \"fieldPath\": \"spec.containers{coredns}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container coredns\",\n            \"source\": {\n                
\"component\": \"kubelet\",\n                \"host\": \"ip-172-20-58-250.eu-west-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-06-16T16:17:33Z\",\n            \"lastTimestamp\": \"2021-06-16T16:17:33Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-f45c4bf76.16891c329a74faea\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"957d0e5c-8ab7-4dc1-836b-1f87406e9604\",\n                \"resourceVersion\": \"56\",\n                \"creationTimestamp\": \"2021-06-16T16:15:44Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"ReplicaSet\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-f45c4bf76\",\n                \"uid\": \"5dccc43b-c547-4563-9b9d-1f2d22c48fe7\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"396\"\n            },\n            \"reason\": \"SuccessfulCreate\",\n            \"message\": \"Created pod: coredns-f45c4bf76-45v5f\",\n            \"source\": {\n                \"component\": \"replicaset-controller\"\n            },\n            \"firstTimestamp\": \"2021-06-16T16:15:44Z\",\n            \"lastTimestamp\": \"2021-06-16T16:15:44Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-f45c4bf76.16891c4b202ffcc4\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"76006ffe-f3a9-465f-a183-b3a2bb68f1fd\",\n                \"resourceVersion\": \"244\",\n                \"creationTimestamp\": \"2021-06-16T16:17:29Z\"\n      
      },\n            \"involvedObject\": {\n                \"kind\": \"ReplicaSet\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-f45c4bf76\",\n                \"uid\": \"5dccc43b-c547-4563-9b9d-1f2d22c48fe7\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"755\"\n            },\n            \"reason\": \"SuccessfulCreate\",\n            \"message\": \"Created pod: coredns-f45c4bf76-hf5wf\",\n            \"source\": {\n                \"component\": \"replicaset-controller\"\n            },\n            \"firstTimestamp\": \"2021-06-16T16:17:29Z\",\n            \"lastTimestamp\": \"2021-06-16T16:17:29Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns.16891c328eb100b1\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"63a387a2-1e99-4fb9-8a8f-b2e55c97d70b\",\n                \"resourceVersion\": \"47\",\n                \"creationTimestamp\": \"2021-06-16T16:15:44Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Deployment\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns\",\n                \"uid\": \"6a72b885-f8eb-4dd7-89fb-bdb03dda727d\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"312\"\n            },\n            \"reason\": \"ScalingReplicaSet\",\n            \"message\": \"Scaled up replica set coredns-f45c4bf76 to 1\",\n            \"source\": {\n                \"component\": \"deployment-controller\"\n            },\n            \"firstTimestamp\": \"2021-06-16T16:15:44Z\",\n            \"lastTimestamp\": \"2021-06-16T16:15:44Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            
\"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns.16891c4b1fc02c55\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"c745d54e-66cf-4e93-8386-b1671a60af19\",\n                \"resourceVersion\": \"242\",\n                \"creationTimestamp\": \"2021-06-16T16:17:29Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Deployment\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns\",\n                \"uid\": \"6a72b885-f8eb-4dd7-89fb-bdb03dda727d\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"754\"\n            },\n            \"reason\": \"ScalingReplicaSet\",\n            \"message\": \"Scaled up replica set coredns-f45c4bf76 to 2\",\n            \"source\": {\n                \"component\": \"deployment-controller\"\n            },\n            \"firstTimestamp\": \"2021-06-16T16:17:29Z\",\n            \"lastTimestamp\": \"2021-06-16T16:17:29Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"dns-controller-76c6c8f758-gn5kz.16891c329a8af49e\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"21412547-6356-4304-9c4c-a7c7695aea4e\",\n                \"resourceVersion\": \"54\",\n                \"creationTimestamp\": \"2021-06-16T16:15:44Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"dns-controller-76c6c8f758-gn5kz\",\n                \"uid\": \"ccceda86-a9ea-4ce9-8d16-9e9c17aff5e3\",\n                \"apiVersion\": \"v1\",\n     
           \"resourceVersion\": \"407\"\n            },\n            \"reason\": \"Scheduled\",\n            \"message\": \"Successfully assigned kube-system/dns-controller-76c6c8f758-gn5kz to ip-172-20-37-218.eu-west-1.compute.internal\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2021-06-16T16:15:44Z\",\n            \"lastTimestamp\": \"2021-06-16T16:15:44Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"dns-controller-76c6c8f758-gn5kz.16891c3964df36a0\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"7a62f02b-653c-4bc9-b740-4218619fcb54\",\n                \"resourceVersion\": \"60\",\n                \"creationTimestamp\": \"2021-06-16T16:16:13Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"dns-controller-76c6c8f758-gn5kz\",\n                \"uid\": \"ccceda86-a9ea-4ce9-8d16-9e9c17aff5e3\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"414\",\n                \"fieldPath\": \"spec.containers{dns-controller}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"k8s.gcr.io/kops/dns-controller:1.21.0-beta.3\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-37-218.eu-west-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-06-16T16:16:13Z\",\n            \"lastTimestamp\": \"2021-06-16T16:16:13Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            
\"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"dns-controller-76c6c8f758-gn5kz.16891c3969358e5d\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"d474894b-189e-4486-9b24-02b5299deb00\",\n                \"resourceVersion\": \"62\",\n                \"creationTimestamp\": \"2021-06-16T16:16:13Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"dns-controller-76c6c8f758-gn5kz\",\n                \"uid\": \"ccceda86-a9ea-4ce9-8d16-9e9c17aff5e3\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"414\",\n                \"fieldPath\": \"spec.containers{dns-controller}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container dns-controller\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-37-218.eu-west-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-06-16T16:16:13Z\",\n            \"lastTimestamp\": \"2021-06-16T16:16:13Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"dns-controller-76c6c8f758-gn5kz.16891c396ebfdbb2\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"99bbc97e-5383-41d1-9f20-948889e739e8\",\n                \"resourceVersion\": \"63\",\n                \"creationTimestamp\": \"2021-06-16T16:16:13Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"dns-controller-76c6c8f758-gn5kz\",\n    
            \"uid\": \"ccceda86-a9ea-4ce9-8d16-9e9c17aff5e3\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"414\",\n                \"fieldPath\": \"spec.containers{dns-controller}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container dns-controller\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-37-218.eu-west-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-06-16T16:16:13Z\",\n            \"lastTimestamp\": \"2021-06-16T16:16:13Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"dns-controller-76c6c8f758.16891c32959f819a\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"948c8261-f42c-4b17-a087-2800d05f3c5b\",\n                \"resourceVersion\": \"53\",\n                \"creationTimestamp\": \"2021-06-16T16:15:44Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"ReplicaSet\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"dns-controller-76c6c8f758\",\n                \"uid\": \"c15c62a1-08bd-4cee-ae90-8a0ef44ba8b8\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"395\"\n            },\n            \"reason\": \"SuccessfulCreate\",\n            \"message\": \"Created pod: dns-controller-76c6c8f758-gn5kz\",\n            \"source\": {\n                \"component\": \"replicaset-controller\"\n            },\n            \"firstTimestamp\": \"2021-06-16T16:15:44Z\",\n            \"lastTimestamp\": \"2021-06-16T16:15:44Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            
\"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"dns-controller.16891c328ed7a5db\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"ed9ae815-c5b2-4cc1-b4ee-e67a1a55e287\",\n                \"resourceVersion\": \"49\",\n                \"creationTimestamp\": \"2021-06-16T16:15:44Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Deployment\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"dns-controller\",\n                \"uid\": \"a163a320-f1d0-4762-845c-ffa24ac2aad3\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"223\"\n            },\n            \"reason\": \"ScalingReplicaSet\",\n            \"message\": \"Scaled up replica set dns-controller-76c6c8f758 to 1\",\n            \"source\": {\n                \"component\": \"deployment-controller\"\n            },\n            \"firstTimestamp\": \"2021-06-16T16:15:44Z\",\n            \"lastTimestamp\": \"2021-06-16T16:15:44Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"etcd-manager-events-ip-172-20-37-218.eu-west-1.compute.internal.16891c2635648307\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"d5286cf6-bb8e-497c-a48f-35e906425118\",\n                \"resourceVersion\": \"22\",\n                \"creationTimestamp\": \"2021-06-16T16:15:31Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"etcd-manager-events-ip-172-20-37-218.eu-west-1.compute.internal\",\n                \"uid\": 
\"88a348369e7d4a47ee4b37c7442d9712\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{etcd-manager}\"\n            },\n            \"reason\": \"Pulling\",\n            \"message\": \"Pulling image \\\"k8s.gcr.io/etcdadm/etcd-manager:3.0.20210430\\\"\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-37-218.eu-west-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-06-16T16:14:51Z\",\n            \"lastTimestamp\": \"2021-06-16T16:14:51Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"etcd-manager-events-ip-172-20-37-218.eu-west-1.compute.internal.16891c283a1445f9\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"c451ad40-9913-4d0b-b87a-ff9d1b4b4f55\",\n                \"resourceVersion\": \"37\",\n                \"creationTimestamp\": \"2021-06-16T16:15:34Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"etcd-manager-events-ip-172-20-37-218.eu-west-1.compute.internal\",\n                \"uid\": \"88a348369e7d4a47ee4b37c7442d9712\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{etcd-manager}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Successfully pulled image \\\"k8s.gcr.io/etcdadm/etcd-manager:3.0.20210430\\\" in 8.66854869s\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-37-218.eu-west-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-06-16T16:14:59Z\",\n            \"lastTimestamp\": 
\"2021-06-16T16:14:59Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"etcd-manager-events-ip-172-20-37-218.eu-west-1.compute.internal.16891c287788d8df\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"9ebaba38-c49f-46b3-81ff-554649a5cfe3\",\n                \"resourceVersion\": \"39\",\n                \"creationTimestamp\": \"2021-06-16T16:15:35Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"etcd-manager-events-ip-172-20-37-218.eu-west-1.compute.internal\",\n                \"uid\": \"88a348369e7d4a47ee4b37c7442d9712\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{etcd-manager}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container etcd-manager\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-37-218.eu-west-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-06-16T16:15:00Z\",\n            \"lastTimestamp\": \"2021-06-16T16:15:00Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"etcd-manager-events-ip-172-20-37-218.eu-west-1.compute.internal.16891c288033b596\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"28f95277-654d-4ba3-a6ec-0b66e64ffaa0\",\n                \"resourceVersion\": \"41\",\n                \"creationTimestamp\": \"2021-06-16T16:15:35Z\"\n            },\n            
\"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"etcd-manager-events-ip-172-20-37-218.eu-west-1.compute.internal\",\n                \"uid\": \"88a348369e7d4a47ee4b37c7442d9712\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{etcd-manager}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container etcd-manager\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-37-218.eu-west-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-06-16T16:15:00Z\",\n            \"lastTimestamp\": \"2021-06-16T16:15:00Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"etcd-manager-main-ip-172-20-37-218.eu-west-1.compute.internal.16891c26363c9057\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"c0ac722c-6097-42e7-affa-23f93a84a2a4\",\n                \"resourceVersion\": \"24\",\n                \"creationTimestamp\": \"2021-06-16T16:15:32Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"etcd-manager-main-ip-172-20-37-218.eu-west-1.compute.internal\",\n                \"uid\": \"7a69b887c0307b3c160a98aedb1d44bd\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{etcd-manager}\"\n            },\n            \"reason\": \"Pulling\",\n            \"message\": \"Pulling image \\\"k8s.gcr.io/etcdadm/etcd-manager:3.0.20210430\\\"\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": 
\"ip-172-20-37-218.eu-west-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-06-16T16:14:51Z\",\n            \"lastTimestamp\": \"2021-06-16T16:14:51Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"etcd-manager-main-ip-172-20-37-218.eu-west-1.compute.internal.16891c285c730bc2\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"144af7d4-4761-4afb-92a3-c6a4ea79f383\",\n                \"resourceVersion\": \"38\",\n                \"creationTimestamp\": \"2021-06-16T16:15:34Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"etcd-manager-main-ip-172-20-37-218.eu-west-1.compute.internal\",\n                \"uid\": \"7a69b887c0307b3c160a98aedb1d44bd\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{etcd-manager}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Successfully pulled image \\\"k8s.gcr.io/etcdadm/etcd-manager:3.0.20210430\\\" in 9.231017632s\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-37-218.eu-west-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-06-16T16:15:00Z\",\n            \"lastTimestamp\": \"2021-06-16T16:15:00Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"etcd-manager-main-ip-172-20-37-218.eu-west-1.compute.internal.16891c287793c117\",\n                \"namespace\": 
\"kube-system\",\n                \"uid\": \"c088d614-9525-424f-96d7-d7e6a38185ea\",\n                \"resourceVersion\": \"40\",\n                \"creationTimestamp\": \"2021-06-16T16:15:35Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"etcd-manager-main-ip-172-20-37-218.eu-west-1.compute.internal\",\n                \"uid\": \"7a69b887c0307b3c160a98aedb1d44bd\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{etcd-manager}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container etcd-manager\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-37-218.eu-west-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-06-16T16:15:00Z\",\n            \"lastTimestamp\": \"2021-06-16T16:15:00Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"etcd-manager-main-ip-172-20-37-218.eu-west-1.compute.internal.16891c288078d8ec\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"4cdc823a-a232-4dca-a9fe-241314267776\",\n                \"resourceVersion\": \"42\",\n                \"creationTimestamp\": \"2021-06-16T16:15:35Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"etcd-manager-main-ip-172-20-37-218.eu-west-1.compute.internal\",\n                \"uid\": \"7a69b887c0307b3c160a98aedb1d44bd\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{etcd-manager}\"\n            },\n            
\"reason\": \"Started\",\n            \"message\": \"Started container etcd-manager\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-37-218.eu-west-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-06-16T16:15:00Z\",\n            \"lastTimestamp\": \"2021-06-16T16:15:00Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kops-controller-leader.16891c3cec22b39d\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"9f2b1a02-9694-4f6d-97b1-de2531f5d10b\",\n                \"resourceVersion\": \"79\",\n                \"creationTimestamp\": \"2021-06-16T16:16:28Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"ConfigMap\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kops-controller-leader\",\n                \"uid\": \"71db8333-1d74-4f7d-9a24-66e1c331cd3d\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"531\"\n            },\n            \"reason\": \"LeaderElection\",\n            \"message\": \"ip-172-20-37-218_4f13ad1f-f04a-4879-ab86-e9e99514eaf2 became leader\",\n            \"source\": {\n                \"component\": \"ip-172-20-37-218_4f13ad1f-f04a-4879-ab86-e9e99514eaf2\"\n            },\n            \"firstTimestamp\": \"2021-06-16T16:16:28Z\",\n            \"lastTimestamp\": \"2021-06-16T16:16:28Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kops-controller-qtzrq.16891c3c86dbc3fa\",\n                \"namespace\": 
\"kube-system\",\n                \"uid\": \"77134838-58e8-476a-96ee-1f3fcc244d0c\",\n                \"resourceVersion\": \"75\",\n                \"creationTimestamp\": \"2021-06-16T16:16:26Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kops-controller-qtzrq\",\n                \"uid\": \"33e24b26-5adf-4de9-b4ae-5fe89e5ec085\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"520\"\n            },\n            \"reason\": \"Scheduled\",\n            \"message\": \"Successfully assigned kube-system/kops-controller-qtzrq to ip-172-20-37-218.eu-west-1.compute.internal\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2021-06-16T16:16:26Z\",\n            \"lastTimestamp\": \"2021-06-16T16:16:26Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kops-controller-qtzrq.16891c3ca48fc659\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"8aaf0c82-6bac-4f6e-9dc8-9133da11b46e\",\n                \"resourceVersion\": \"76\",\n                \"creationTimestamp\": \"2021-06-16T16:16:27Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kops-controller-qtzrq\",\n                \"uid\": \"33e24b26-5adf-4de9-b4ae-5fe89e5ec085\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"521\",\n                \"fieldPath\": \"spec.containers{kops-controller}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image 
\\\"k8s.gcr.io/kops/kops-controller:1.21.0-beta.3\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-37-218.eu-west-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-06-16T16:16:27Z\",\n            \"lastTimestamp\": \"2021-06-16T16:16:27Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kops-controller-qtzrq.16891c3ca6c951c0\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"d495fd5d-5b8c-4d95-b025-5f8aa805acd2\",\n                \"resourceVersion\": \"77\",\n                \"creationTimestamp\": \"2021-06-16T16:16:27Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kops-controller-qtzrq\",\n                \"uid\": \"33e24b26-5adf-4de9-b4ae-5fe89e5ec085\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"521\",\n                \"fieldPath\": \"spec.containers{kops-controller}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container kops-controller\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-37-218.eu-west-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-06-16T16:16:27Z\",\n            \"lastTimestamp\": \"2021-06-16T16:16:27Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": 
\"kops-controller-qtzrq.16891c3cabda4f94\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"f7618f04-4b98-451b-aa9b-808465d1a33a\",\n                \"resourceVersion\": \"78\",\n                \"creationTimestamp\": \"2021-06-16T16:16:27Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kops-controller-qtzrq\",\n                \"uid\": \"33e24b26-5adf-4de9-b4ae-5fe89e5ec085\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"521\",\n                \"fieldPath\": \"spec.containers{kops-controller}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container kops-controller\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-37-218.eu-west-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-06-16T16:16:27Z\",\n            \"lastTimestamp\": \"2021-06-16T16:16:27Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kops-controller.16891c3c86a419e3\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"e1903474-1250-4c2f-809b-4ce79ae2674f\",\n                \"resourceVersion\": \"74\",\n                \"creationTimestamp\": \"2021-06-16T16:16:26Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"DaemonSet\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kops-controller\",\n                \"uid\": \"7e41d1c7-bbb2-466d-a4e9-1813ace7d6e3\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"413\"\n            },\n            
\"reason\": \"SuccessfulCreate\",\n            \"message\": \"Created pod: kops-controller-qtzrq\",\n            \"source\": {\n                \"component\": \"daemonset-controller\"\n            },\n            \"firstTimestamp\": \"2021-06-16T16:16:26Z\",\n            \"lastTimestamp\": \"2021-06-16T16:16:26Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-apiserver-ip-172-20-37-218.eu-west-1.compute.internal.16891c263378694a\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"cf009764-3449-461c-b945-e7123329a1ee\",\n                \"resourceVersion\": \"43\",\n                \"creationTimestamp\": \"2021-06-16T16:15:31Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-apiserver-ip-172-20-37-218.eu-west-1.compute.internal\",\n                \"uid\": \"82ff5ee49e1c7fab4ef4824ce5f23dbe\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-apiserver}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"k8s.gcr.io/kube-apiserver-amd64:v1.21.1\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-37-218.eu-west-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-06-16T16:14:51Z\",\n            \"lastTimestamp\": \"2021-06-16T16:15:12Z\",\n            \"count\": 2,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": 
\"kube-apiserver-ip-172-20-37-218.eu-west-1.compute.internal.16891c26389d6db0\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"a3d8c440-4613-4f7b-a807-516b180f8ab9\",\n                \"resourceVersion\": \"44\",\n                \"creationTimestamp\": \"2021-06-16T16:15:32Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-apiserver-ip-172-20-37-218.eu-west-1.compute.internal\",\n                \"uid\": \"82ff5ee49e1c7fab4ef4824ce5f23dbe\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-apiserver}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container kube-apiserver\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-37-218.eu-west-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-06-16T16:14:51Z\",\n            \"lastTimestamp\": \"2021-06-16T16:15:12Z\",\n            \"count\": 2,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-apiserver-ip-172-20-37-218.eu-west-1.compute.internal.16891c264395331b\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"59eacc74-50f1-40fa-84ec-c5e99ec29ded\",\n                \"resourceVersion\": \"45\",\n                \"creationTimestamp\": \"2021-06-16T16:15:33Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-apiserver-ip-172-20-37-218.eu-west-1.compute.internal\",\n                \"uid\": \"82ff5ee49e1c7fab4ef4824ce5f23dbe\",\n                \"apiVersion\": 
\"v1\",\n                \"fieldPath\": \"spec.containers{kube-apiserver}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container kube-apiserver\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-37-218.eu-west-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-06-16T16:14:51Z\",\n            \"lastTimestamp\": \"2021-06-16T16:15:12Z\",\n            \"count\": 2,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-apiserver-ip-172-20-37-218.eu-west-1.compute.internal.16891c2643fb2a08\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"1cfbe1f1-1df8-40d1-b9ed-c2a063cdfeda\",\n                \"resourceVersion\": \"32\",\n                \"creationTimestamp\": \"2021-06-16T16:15:33Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-apiserver-ip-172-20-37-218.eu-west-1.compute.internal\",\n                \"uid\": \"82ff5ee49e1c7fab4ef4824ce5f23dbe\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{healthcheck}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"k8s.gcr.io/kops/kube-apiserver-healthcheck:1.21.0-beta.3\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-37-218.eu-west-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-06-16T16:14:51Z\",\n            \"lastTimestamp\": \"2021-06-16T16:14:51Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            
\"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-apiserver-ip-172-20-37-218.eu-west-1.compute.internal.16891c2648ec367e\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"0af00faa-2061-4410-97bc-01f9246289e0\",\n                \"resourceVersion\": \"33\",\n                \"creationTimestamp\": \"2021-06-16T16:15:33Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-apiserver-ip-172-20-37-218.eu-west-1.compute.internal\",\n                \"uid\": \"82ff5ee49e1c7fab4ef4824ce5f23dbe\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{healthcheck}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container healthcheck\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-37-218.eu-west-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-06-16T16:14:51Z\",\n            \"lastTimestamp\": \"2021-06-16T16:14:51Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-apiserver-ip-172-20-37-218.eu-west-1.compute.internal.16891c26643695f1\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"3321a0e2-2065-4cfc-a1e7-e8f4baad404e\",\n                \"resourceVersion\": \"36\",\n                \"creationTimestamp\": \"2021-06-16T16:15:34Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n          
      \"name\": \"kube-apiserver-ip-172-20-37-218.eu-west-1.compute.internal\",\n                \"uid\": \"82ff5ee49e1c7fab4ef4824ce5f23dbe\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{healthcheck}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container healthcheck\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-37-218.eu-west-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-06-16T16:14:51Z\",\n            \"lastTimestamp\": \"2021-06-16T16:14:51Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-controller-manager-ip-172-20-37-218.eu-west-1.compute.internal.16891c263698d395\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"ffee39f0-d349-418c-b409-e3c4e8bb043e\",\n                \"resourceVersion\": \"25\",\n                \"creationTimestamp\": \"2021-06-16T16:15:32Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-controller-manager-ip-172-20-37-218.eu-west-1.compute.internal\",\n                \"uid\": \"c2279bd798151f0625a3e9b9c56d097d\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-controller-manager}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"k8s.gcr.io/kube-controller-manager-amd64:v1.21.1\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-37-218.eu-west-1.compute.internal\"\n            },\n            
\"firstTimestamp\": \"2021-06-16T16:14:51Z\",\n            \"lastTimestamp\": \"2021-06-16T16:14:51Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-controller-manager-ip-172-20-37-218.eu-west-1.compute.internal.16891c263af6fe99\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"92787751-500c-4c61-a1ab-422fe052c600\",\n                \"resourceVersion\": \"29\",\n                \"creationTimestamp\": \"2021-06-16T16:15:33Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-controller-manager-ip-172-20-37-218.eu-west-1.compute.internal\",\n                \"uid\": \"c2279bd798151f0625a3e9b9c56d097d\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-controller-manager}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container kube-controller-manager\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-37-218.eu-west-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-06-16T16:14:51Z\",\n            \"lastTimestamp\": \"2021-06-16T16:14:51Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-controller-manager-ip-172-20-37-218.eu-west-1.compute.internal.16891c264e7326bd\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"f3e147d6-b253-49e3-a03c-184e7d0e0e09\",\n                
\"resourceVersion\": \"34\",\n                \"creationTimestamp\": \"2021-06-16T16:15:34Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-controller-manager-ip-172-20-37-218.eu-west-1.compute.internal\",\n                \"uid\": \"c2279bd798151f0625a3e9b9c56d097d\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-controller-manager}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container kube-controller-manager\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-37-218.eu-west-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-06-16T16:14:51Z\",\n            \"lastTimestamp\": \"2021-06-16T16:14:51Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-controller-manager.16891c2f30b385cb\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"43a4726f-4ebb-4656-960b-03a806eeef33\",\n                \"resourceVersion\": \"9\",\n                \"creationTimestamp\": \"2021-06-16T16:15:29Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Lease\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-controller-manager\",\n                \"uid\": \"48f276f1-96d5-42ea-a023-7f7521c81ad6\",\n                \"apiVersion\": \"coordination.k8s.io/v1\",\n                \"resourceVersion\": \"219\"\n            },\n            \"reason\": \"LeaderElection\",\n            \"message\": \"ip-172-20-37-218_e5320c21-c582-4bd7-9bd9-8c78f25293e5 became leader\",\n            
\"source\": {\n                \"component\": \"kube-controller-manager\"\n            },\n            \"firstTimestamp\": \"2021-06-16T16:15:29Z\",\n            \"lastTimestamp\": \"2021-06-16T16:15:29Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds-kg5tn.16891c4812fad47e\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"3410a7b0-d427-4acc-b42f-ef30761b0d41\",\n                \"resourceVersion\": \"87\",\n                \"creationTimestamp\": \"2021-06-16T16:17:16Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-flannel-ds-kg5tn\",\n                \"uid\": \"7a669429-6ad6-4a43-95c2-fd00a01253a0\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"656\"\n            },\n            \"reason\": \"Scheduled\",\n            \"message\": \"Successfully assigned kube-system/kube-flannel-ds-kg5tn to ip-172-20-58-250.eu-west-1.compute.internal\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2021-06-16T16:17:16Z\",\n            \"lastTimestamp\": \"2021-06-16T16:17:16Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds-kg5tn.16891c488e1bbde3\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"f471904d-8087-48b8-a790-1e1fcd9370c4\",\n                \"resourceVersion\": \"218\",\n                \"creationTimestamp\": 
\"2021-06-16T16:17:26Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-flannel-ds-kg5tn\",\n                \"uid\": \"7a669429-6ad6-4a43-95c2-fd00a01253a0\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"658\",\n                \"fieldPath\": \"spec.initContainers{install-cni}\"\n            },\n            \"reason\": \"Pulling\",\n            \"message\": \"Pulling image \\\"quay.io/coreos/flannel:v0.13.0\\\"\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-58-250.eu-west-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-06-16T16:17:18Z\",\n            \"lastTimestamp\": \"2021-06-16T16:17:18Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds-kg5tn.16891c4990d64088\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"ac664352-69d4-4ff9-bce0-a796b64f41f0\",\n                \"resourceVersion\": \"220\",\n                \"creationTimestamp\": \"2021-06-16T16:17:27Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-flannel-ds-kg5tn\",\n                \"uid\": \"7a669429-6ad6-4a43-95c2-fd00a01253a0\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"658\",\n                \"fieldPath\": \"spec.initContainers{install-cni}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Successfully pulled image \\\"quay.io/coreos/flannel:v0.13.0\\\" in 4.340731323s\",\n            \"source\": {\n  
              \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-58-250.eu-west-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-06-16T16:17:22Z\",\n            \"lastTimestamp\": \"2021-06-16T16:17:22Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds-kg5tn.16891c49988f84a1\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"1f9866b7-f9e5-4307-b2f8-12473e322b27\",\n                \"resourceVersion\": \"222\",\n                \"creationTimestamp\": \"2021-06-16T16:17:27Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-flannel-ds-kg5tn\",\n                \"uid\": \"7a669429-6ad6-4a43-95c2-fd00a01253a0\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"658\",\n                \"fieldPath\": \"spec.initContainers{install-cni}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container install-cni\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-58-250.eu-west-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-06-16T16:17:23Z\",\n            \"lastTimestamp\": \"2021-06-16T16:17:23Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds-kg5tn.16891c49a0454e6e\",\n                \"namespace\": \"kube-system\",\n                \"uid\": 
\"36980409-26e6-49be-bf43-e8021fd2952c\",\n                \"resourceVersion\": \"225\",\n                \"creationTimestamp\": \"2021-06-16T16:17:27Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-flannel-ds-kg5tn\",\n                \"uid\": \"7a669429-6ad6-4a43-95c2-fd00a01253a0\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"658\",\n                \"fieldPath\": \"spec.initContainers{install-cni}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container install-cni\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-58-250.eu-west-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-06-16T16:17:23Z\",\n            \"lastTimestamp\": \"2021-06-16T16:17:23Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds-kg5tn.16891c49aaf2708d\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"176b52d8-3902-4d6b-916a-55d912367db1\",\n                \"resourceVersion\": \"227\",\n                \"creationTimestamp\": \"2021-06-16T16:17:27Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-flannel-ds-kg5tn\",\n                \"uid\": \"7a669429-6ad6-4a43-95c2-fd00a01253a0\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"658\",\n                \"fieldPath\": \"spec.containers{kube-flannel}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container 
image \\\"quay.io/coreos/flannel:v0.13.0\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-58-250.eu-west-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-06-16T16:17:23Z\",\n            \"lastTimestamp\": \"2021-06-16T16:17:23Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds-kg5tn.16891c49ad18b399\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"29396090-60ba-4396-a283-fb1501caa4ff\",\n                \"resourceVersion\": \"228\",\n                \"creationTimestamp\": \"2021-06-16T16:17:27Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-flannel-ds-kg5tn\",\n                \"uid\": \"7a669429-6ad6-4a43-95c2-fd00a01253a0\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"658\",\n                \"fieldPath\": \"spec.containers{kube-flannel}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container kube-flannel\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-58-250.eu-west-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-06-16T16:17:23Z\",\n            \"lastTimestamp\": \"2021-06-16T16:17:23Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds-kg5tn.16891c49b253a0f2\",\n 
               \"namespace\": \"kube-system\",\n                \"uid\": \"95b7da38-11a5-40ad-b18e-a9b4d99fef3b\",\n                \"resourceVersion\": \"229\",\n                \"creationTimestamp\": \"2021-06-16T16:17:28Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-flannel-ds-kg5tn\",\n                \"uid\": \"7a669429-6ad6-4a43-95c2-fd00a01253a0\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"658\",\n                \"fieldPath\": \"spec.containers{kube-flannel}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container kube-flannel\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-58-250.eu-west-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-06-16T16:17:23Z\",\n            \"lastTimestamp\": \"2021-06-16T16:17:23Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds-lxtlx.16891c32a002a8d4\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"a7a27fcc-670f-4f95-b5f2-5b81a93bb563\",\n                \"resourceVersion\": \"57\",\n                \"creationTimestamp\": \"2021-06-16T16:15:44Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-flannel-ds-lxtlx\",\n                \"uid\": \"93b940f5-90f9-4bc9-be1c-8fa8daa6ba41\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"415\"\n            },\n            \"reason\": \"Scheduled\",\n            \"message\": 
\"Successfully assigned kube-system/kube-flannel-ds-lxtlx to ip-172-20-37-218.eu-west-1.compute.internal\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2021-06-16T16:15:44Z\",\n            \"lastTimestamp\": \"2021-06-16T16:15:44Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds-lxtlx.16891c396650d20d\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"90a54b98-ed32-4c29-99f3-e8e950db49e8\",\n                \"resourceVersion\": \"61\",\n                \"creationTimestamp\": \"2021-06-16T16:16:13Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-flannel-ds-lxtlx\",\n                \"uid\": \"93b940f5-90f9-4bc9-be1c-8fa8daa6ba41\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"436\",\n                \"fieldPath\": \"spec.initContainers{install-cni}\"\n            },\n            \"reason\": \"Pulling\",\n            \"message\": \"Pulling image \\\"quay.io/coreos/flannel:v0.13.0\\\"\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-37-218.eu-west-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-06-16T16:16:13Z\",\n            \"lastTimestamp\": \"2021-06-16T16:16:13Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds-lxtlx.16891c3a66944ab7\",\n          
      \"namespace\": \"kube-system\",\n                \"uid\": \"7cc2382a-058f-47c5-a96e-5af1f05e7017\",\n                \"resourceVersion\": \"64\",\n                \"creationTimestamp\": \"2021-06-16T16:16:17Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-flannel-ds-lxtlx\",\n                \"uid\": \"93b940f5-90f9-4bc9-be1c-8fa8daa6ba41\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"436\",\n                \"fieldPath\": \"spec.initContainers{install-cni}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Successfully pulled image \\\"quay.io/coreos/flannel:v0.13.0\\\" in 4.299378923s\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-37-218.eu-west-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-06-16T16:16:17Z\",\n            \"lastTimestamp\": \"2021-06-16T16:16:17Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds-lxtlx.16891c3a6fefaecf\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"080cb794-a4af-492e-9ecf-c025dd00ce0d\",\n                \"resourceVersion\": \"65\",\n                \"creationTimestamp\": \"2021-06-16T16:16:17Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-flannel-ds-lxtlx\",\n                \"uid\": \"93b940f5-90f9-4bc9-be1c-8fa8daa6ba41\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"436\",\n                \"fieldPath\": 
\"spec.initContainers{install-cni}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container install-cni\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-37-218.eu-west-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-06-16T16:16:17Z\",\n            \"lastTimestamp\": \"2021-06-16T16:16:17Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds-lxtlx.16891c3a7854a4eb\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"31512afe-f3c5-4ebb-972c-2211a5eb565e\",\n                \"resourceVersion\": \"66\",\n                \"creationTimestamp\": \"2021-06-16T16:16:18Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-flannel-ds-lxtlx\",\n                \"uid\": \"93b940f5-90f9-4bc9-be1c-8fa8daa6ba41\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"436\",\n                \"fieldPath\": \"spec.initContainers{install-cni}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container install-cni\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-37-218.eu-west-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-06-16T16:16:18Z\",\n            \"lastTimestamp\": \"2021-06-16T16:16:18Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            
\"metadata\": {\n                \"name\": \"kube-flannel-ds-lxtlx.16891c3a942a3266\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"7fbb5ab8-e15b-46e4-bdfe-a8358249a68f\",\n                \"resourceVersion\": \"67\",\n                \"creationTimestamp\": \"2021-06-16T16:16:18Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-flannel-ds-lxtlx\",\n                \"uid\": \"93b940f5-90f9-4bc9-be1c-8fa8daa6ba41\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"436\",\n                \"fieldPath\": \"spec.containers{kube-flannel}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"quay.io/coreos/flannel:v0.13.0\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-37-218.eu-west-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-06-16T16:16:18Z\",\n            \"lastTimestamp\": \"2021-06-16T16:16:18Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds-lxtlx.16891c3a973dfe05\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"d383e7cf-de0c-4cd9-a7b7-623a82379fce\",\n                \"resourceVersion\": \"68\",\n                \"creationTimestamp\": \"2021-06-16T16:16:18Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-flannel-ds-lxtlx\",\n                \"uid\": \"93b940f5-90f9-4bc9-be1c-8fa8daa6ba41\",\n                \"apiVersion\": 
\"v1\",\n                \"resourceVersion\": \"436\",\n                \"fieldPath\": \"spec.containers{kube-flannel}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container kube-flannel\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-37-218.eu-west-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-06-16T16:16:18Z\",\n            \"lastTimestamp\": \"2021-06-16T16:16:18Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds-lxtlx.16891c3a9bc721b0\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"36b1462c-5f08-45c9-b72e-0520c2b6d033\",\n                \"resourceVersion\": \"69\",\n                \"creationTimestamp\": \"2021-06-16T16:16:18Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-flannel-ds-lxtlx\",\n                \"uid\": \"93b940f5-90f9-4bc9-be1c-8fa8daa6ba41\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"436\",\n                \"fieldPath\": \"spec.containers{kube-flannel}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container kube-flannel\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-37-218.eu-west-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-06-16T16:16:18Z\",\n            \"lastTimestamp\": \"2021-06-16T16:16:18Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n    
        \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds-mchp6.16891c496a57a964\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"e1fb7059-9e8c-42cd-891c-2740ab990dd4\",\n                \"resourceVersion\": \"132\",\n                \"creationTimestamp\": \"2021-06-16T16:17:22Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-flannel-ds-mchp6\",\n                \"uid\": \"58632562-c539-4657-adcd-16495b1bb536\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"695\"\n            },\n            \"reason\": \"Scheduled\",\n            \"message\": \"Successfully assigned kube-system/kube-flannel-ds-mchp6 to ip-172-20-62-139.eu-west-1.compute.internal\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2021-06-16T16:17:22Z\",\n            \"lastTimestamp\": \"2021-06-16T16:17:22Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds-mchp6.16891c49e30ae730\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"b54746f3-e067-4a60-bfab-b73ceba8ab8e\",\n                \"resourceVersion\": \"197\",\n                \"creationTimestamp\": \"2021-06-16T16:17:25Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-flannel-ds-mchp6\",\n                \"uid\": \"58632562-c539-4657-adcd-16495b1bb536\",\n                \"apiVersion\": \"v1\",\n                
\"resourceVersion\": \"697\",\n                \"fieldPath\": \"spec.initContainers{install-cni}\"\n            },\n            \"reason\": \"Pulling\",\n            \"message\": \"Pulling image \\\"quay.io/coreos/flannel:v0.13.0\\\"\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-62-139.eu-west-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-06-16T16:17:24Z\",\n            \"lastTimestamp\": \"2021-06-16T16:17:24Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds-mchp6.16891c4ae6ac1c28\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"6f4bcda3-d2bd-4fef-9b54-369677913695\",\n                \"resourceVersion\": \"232\",\n                \"creationTimestamp\": \"2021-06-16T16:17:28Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-flannel-ds-mchp6\",\n                \"uid\": \"58632562-c539-4657-adcd-16495b1bb536\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"697\",\n                \"fieldPath\": \"spec.initContainers{install-cni}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Successfully pulled image \\\"quay.io/coreos/flannel:v0.13.0\\\" in 4.355849783s\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-62-139.eu-west-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-06-16T16:17:28Z\",\n            \"lastTimestamp\": \"2021-06-16T16:17:28Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": 
null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds-mchp6.16891c4aefa04122\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"72673f16-ba47-45ab-8d4d-79dcc6a922d7\",\n                \"resourceVersion\": \"233\",\n                \"creationTimestamp\": \"2021-06-16T16:17:28Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-flannel-ds-mchp6\",\n                \"uid\": \"58632562-c539-4657-adcd-16495b1bb536\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"697\",\n                \"fieldPath\": \"spec.initContainers{install-cni}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container install-cni\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-62-139.eu-west-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-06-16T16:17:28Z\",\n            \"lastTimestamp\": \"2021-06-16T16:17:28Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds-mchp6.16891c4af7b4dbbb\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"cab1642e-5817-41c3-9e21-3236aa915ccb\",\n                \"resourceVersion\": \"234\",\n                \"creationTimestamp\": \"2021-06-16T16:17:28Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-flannel-ds-mchp6\",\n                \"uid\": 
\"58632562-c539-4657-adcd-16495b1bb536\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"697\",\n                \"fieldPath\": \"spec.initContainers{install-cni}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container install-cni\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-62-139.eu-west-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-06-16T16:17:28Z\",\n            \"lastTimestamp\": \"2021-06-16T16:17:28Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds-mchp6.16891c4b04c244ef\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"5c0e521e-a7a7-4588-8989-0f205d9ebbaa\",\n                \"resourceVersion\": \"237\",\n                \"creationTimestamp\": \"2021-06-16T16:17:29Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-flannel-ds-mchp6\",\n                \"uid\": \"58632562-c539-4657-adcd-16495b1bb536\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"697\",\n                \"fieldPath\": \"spec.containers{kube-flannel}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"quay.io/coreos/flannel:v0.13.0\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-62-139.eu-west-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-06-16T16:17:29Z\",\n            \"lastTimestamp\": \"2021-06-16T16:17:29Z\",\n            
\"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds-mchp6.16891c4b071a4350\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"9387dc80-e5d6-4929-83cf-239d579ff365\",\n                \"resourceVersion\": \"238\",\n                \"creationTimestamp\": \"2021-06-16T16:17:29Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-flannel-ds-mchp6\",\n                \"uid\": \"58632562-c539-4657-adcd-16495b1bb536\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"697\",\n                \"fieldPath\": \"spec.containers{kube-flannel}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container kube-flannel\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-62-139.eu-west-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-06-16T16:17:29Z\",\n            \"lastTimestamp\": \"2021-06-16T16:17:29Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds-mchp6.16891c4b0c05f5ff\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"552da38b-b9b8-4a1f-bbca-732c9a66d787\",\n                \"resourceVersion\": \"240\",\n                \"creationTimestamp\": \"2021-06-16T16:17:29Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n    
            \"name\": \"kube-flannel-ds-mchp6\",\n                \"uid\": \"58632562-c539-4657-adcd-16495b1bb536\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"697\",\n                \"fieldPath\": \"spec.containers{kube-flannel}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container kube-flannel\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-62-139.eu-west-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-06-16T16:17:29Z\",\n            \"lastTimestamp\": \"2021-06-16T16:17:29Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds-qfz56.16891c47fbf71b2c\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"edae2676-6a2b-4464-96e1-1a144102970c\",\n                \"resourceVersion\": \"85\",\n                \"creationTimestamp\": \"2021-06-16T16:17:16Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-flannel-ds-qfz56\",\n                \"uid\": \"2cfd9685-5599-4f46-9d38-33e793746dd8\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"642\"\n            },\n            \"reason\": \"Scheduled\",\n            \"message\": \"Successfully assigned kube-system/kube-flannel-ds-qfz56 to ip-172-20-57-162.eu-west-1.compute.internal\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2021-06-16T16:17:16Z\",\n            \"lastTimestamp\": \"2021-06-16T16:17:16Z\",\n            \"count\": 1,\n            \"type\": 
\"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds-qfz56.16891c4879362ede\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"de470fa8-a225-48d6-bb83-a4f52d223457\",\n                \"resourceVersion\": \"135\",\n                \"creationTimestamp\": \"2021-06-16T16:17:22Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-flannel-ds-qfz56\",\n                \"uid\": \"2cfd9685-5599-4f46-9d38-33e793746dd8\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"646\",\n                \"fieldPath\": \"spec.initContainers{install-cni}\"\n            },\n            \"reason\": \"Pulling\",\n            \"message\": \"Pulling image \\\"quay.io/coreos/flannel:v0.13.0\\\"\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-57-162.eu-west-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-06-16T16:17:18Z\",\n            \"lastTimestamp\": \"2021-06-16T16:17:18Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds-qfz56.16891c498910c3d1\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"78e1b04b-6cb6-47e9-b7db-ebf681643a53\",\n                \"resourceVersion\": \"138\",\n                \"creationTimestamp\": \"2021-06-16T16:17:22Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                
\"name\": \"kube-flannel-ds-qfz56\",\n                \"uid\": \"2cfd9685-5599-4f46-9d38-33e793746dd8\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"646\",\n                \"fieldPath\": \"spec.initContainers{install-cni}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Successfully pulled image \\\"quay.io/coreos/flannel:v0.13.0\\\" in 4.560924531s\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-57-162.eu-west-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-06-16T16:17:22Z\",\n            \"lastTimestamp\": \"2021-06-16T16:17:22Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds-qfz56.16891c499184cc9a\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"d84aeb92-41e5-480b-82db-5294a58f517b\",\n                \"resourceVersion\": \"140\",\n                \"creationTimestamp\": \"2021-06-16T16:17:22Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-flannel-ds-qfz56\",\n                \"uid\": \"2cfd9685-5599-4f46-9d38-33e793746dd8\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"646\",\n                \"fieldPath\": \"spec.initContainers{install-cni}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container install-cni\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-57-162.eu-west-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-06-16T16:17:22Z\",\n        
    \"lastTimestamp\": \"2021-06-16T16:17:22Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds-qfz56.16891c4999d226d9\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"4789368e-3625-438a-92f3-b7eb8ae6add6\",\n                \"resourceVersion\": \"143\",\n                \"creationTimestamp\": \"2021-06-16T16:17:23Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-flannel-ds-qfz56\",\n                \"uid\": \"2cfd9685-5599-4f46-9d38-33e793746dd8\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"646\",\n                \"fieldPath\": \"spec.initContainers{install-cni}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container install-cni\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-57-162.eu-west-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-06-16T16:17:23Z\",\n            \"lastTimestamp\": \"2021-06-16T16:17:23Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds-qfz56.16891c49d389bdf0\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"e1c59127-2c21-4bf9-b7c1-87d44ca34cdd\",\n                \"resourceVersion\": \"173\",\n                \"creationTimestamp\": \"2021-06-16T16:17:24Z\"\n            },\n            \"involvedObject\": {\n                
\"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-flannel-ds-qfz56\",\n                \"uid\": \"2cfd9685-5599-4f46-9d38-33e793746dd8\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"646\",\n                \"fieldPath\": \"spec.containers{kube-flannel}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"quay.io/coreos/flannel:v0.13.0\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-57-162.eu-west-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-06-16T16:17:24Z\",\n            \"lastTimestamp\": \"2021-06-16T16:17:24Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds-qfz56.16891c49d6bcf60c\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"534c100c-eb42-40fc-97fe-42008af19708\",\n                \"resourceVersion\": \"177\",\n                \"creationTimestamp\": \"2021-06-16T16:17:24Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-flannel-ds-qfz56\",\n                \"uid\": \"2cfd9685-5599-4f46-9d38-33e793746dd8\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"646\",\n                \"fieldPath\": \"spec.containers{kube-flannel}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container kube-flannel\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-57-162.eu-west-1.compute.internal\"\n  
          },\n            \"firstTimestamp\": \"2021-06-16T16:17:24Z\",\n            \"lastTimestamp\": \"2021-06-16T16:17:24Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds-qfz56.16891c49dc9d18fa\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"27ce9f40-097a-4ffe-a29e-8eb6a551d1f4\",\n                \"resourceVersion\": \"180\",\n                \"creationTimestamp\": \"2021-06-16T16:17:24Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-flannel-ds-qfz56\",\n                \"uid\": \"2cfd9685-5599-4f46-9d38-33e793746dd8\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"646\",\n                \"fieldPath\": \"spec.containers{kube-flannel}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container kube-flannel\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-57-162.eu-west-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-06-16T16:17:24Z\",\n            \"lastTimestamp\": \"2021-06-16T16:17:24Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds-xktxj.16891c48d34f4cc4\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"e8895192-0732-4d34-a197-7d36102d9bc4\",\n                \"resourceVersion\": \"106\",\n                \"creationTimestamp\": 
\"2021-06-16T16:17:19Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-flannel-ds-xktxj\",\n                \"uid\": \"7df3cbe6-6fe9-43d6-9d11-a6c7f4211e09\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"679\"\n            },\n            \"reason\": \"Scheduled\",\n            \"message\": \"Successfully assigned kube-system/kube-flannel-ds-xktxj to ip-172-20-52-203.eu-west-1.compute.internal\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2021-06-16T16:17:19Z\",\n            \"lastTimestamp\": \"2021-06-16T16:17:19Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds-xktxj.16891c4953d6503d\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"1c40bba0-b92c-4b06-bd8e-1fcc1923fe06\",\n                \"resourceVersion\": \"211\",\n                \"creationTimestamp\": \"2021-06-16T16:17:26Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-flannel-ds-xktxj\",\n                \"uid\": \"7df3cbe6-6fe9-43d6-9d11-a6c7f4211e09\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"681\",\n                \"fieldPath\": \"spec.initContainers{install-cni}\"\n            },\n            \"reason\": \"Pulling\",\n            \"message\": \"Pulling image \\\"quay.io/coreos/flannel:v0.13.0\\\"\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": 
\"ip-172-20-52-203.eu-west-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-06-16T16:17:21Z\",\n            \"lastTimestamp\": \"2021-06-16T16:17:21Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds-xktxj.16891c4a5c7219af\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"c9a1782f-f7e7-47b6-8ffa-6abfaa3ed1ea\",\n                \"resourceVersion\": \"213\",\n                \"creationTimestamp\": \"2021-06-16T16:17:26Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-flannel-ds-xktxj\",\n                \"uid\": \"7df3cbe6-6fe9-43d6-9d11-a6c7f4211e09\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"681\",\n                \"fieldPath\": \"spec.initContainers{install-cni}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Successfully pulled image \\\"quay.io/coreos/flannel:v0.13.0\\\" in 4.439374528s\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-52-203.eu-west-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-06-16T16:17:26Z\",\n            \"lastTimestamp\": \"2021-06-16T16:17:26Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds-xktxj.16891c4a657438be\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"9a1d1e4e-315a-4fc3-af0c-7de8c64c52ef\",\n      
          \"resourceVersion\": \"217\",\n                \"creationTimestamp\": \"2021-06-16T16:17:26Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-flannel-ds-xktxj\",\n                \"uid\": \"7df3cbe6-6fe9-43d6-9d11-a6c7f4211e09\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"681\",\n                \"fieldPath\": \"spec.initContainers{install-cni}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container install-cni\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-52-203.eu-west-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-06-16T16:17:26Z\",\n            \"lastTimestamp\": \"2021-06-16T16:17:26Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds-xktxj.16891c4a6d13008e\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"231bac6f-cc53-4017-9194-7823ebe5a520\",\n                \"resourceVersion\": \"219\",\n                \"creationTimestamp\": \"2021-06-16T16:17:27Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-flannel-ds-xktxj\",\n                \"uid\": \"7df3cbe6-6fe9-43d6-9d11-a6c7f4211e09\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"681\",\n                \"fieldPath\": \"spec.initContainers{install-cni}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container install-cni\",\n            
\"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-52-203.eu-west-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-06-16T16:17:26Z\",\n            \"lastTimestamp\": \"2021-06-16T16:17:26Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds-xktxj.16891c4a729f3a41\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"1cfd5fd3-a287-438d-98ad-e31220c0f15d\",\n                \"resourceVersion\": \"221\",\n                \"creationTimestamp\": \"2021-06-16T16:17:27Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-flannel-ds-xktxj\",\n                \"uid\": \"7df3cbe6-6fe9-43d6-9d11-a6c7f4211e09\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"681\",\n                \"fieldPath\": \"spec.containers{kube-flannel}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"quay.io/coreos/flannel:v0.13.0\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-52-203.eu-west-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-06-16T16:17:26Z\",\n            \"lastTimestamp\": \"2021-06-16T16:17:26Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds-xktxj.16891c4a74dd65f9\",\n                \"namespace\": 
\"kube-system\",\n                \"uid\": \"032de3ba-5374-45c2-99c4-d5b91e7cac01\",\n                \"resourceVersion\": \"224\",\n                \"creationTimestamp\": \"2021-06-16T16:17:27Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-flannel-ds-xktxj\",\n                \"uid\": \"7df3cbe6-6fe9-43d6-9d11-a6c7f4211e09\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"681\",\n                \"fieldPath\": \"spec.containers{kube-flannel}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container kube-flannel\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-52-203.eu-west-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-06-16T16:17:26Z\",\n            \"lastTimestamp\": \"2021-06-16T16:17:26Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds-xktxj.16891c4a7a985587\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"46df8bff-8850-4951-ad74-33f4c8505cda\",\n                \"resourceVersion\": \"226\",\n                \"creationTimestamp\": \"2021-06-16T16:17:27Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-flannel-ds-xktxj\",\n                \"uid\": \"7df3cbe6-6fe9-43d6-9d11-a6c7f4211e09\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"681\",\n                \"fieldPath\": \"spec.containers{kube-flannel}\"\n            },\n            \"reason\": 
\"Started\",\n            \"message\": \"Started container kube-flannel\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-52-203.eu-west-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-06-16T16:17:26Z\",\n            \"lastTimestamp\": \"2021-06-16T16:17:26Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds.16891c329ad85434\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"4830d6b9-c735-4ef9-a6a0-af6ae661842d\",\n                \"resourceVersion\": \"52\",\n                \"creationTimestamp\": \"2021-06-16T16:15:44Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"DaemonSet\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-flannel-ds\",\n                \"uid\": \"25ee1840-c0aa-4ca2-bee8-7cce7cd2b5e1\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"239\"\n            },\n            \"reason\": \"SuccessfulCreate\",\n            \"message\": \"Created pod: kube-flannel-ds-lxtlx\",\n            \"source\": {\n                \"component\": \"daemonset-controller\"\n            },\n            \"firstTimestamp\": \"2021-06-16T16:15:44Z\",\n            \"lastTimestamp\": \"2021-06-16T16:15:44Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds.16891c47fac8a238\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"d2fead6a-52e8-480e-b703-9e6cf91c8800\",\n         
       \"resourceVersion\": \"83\",\n                \"creationTimestamp\": \"2021-06-16T16:17:16Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"DaemonSet\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-flannel-ds\",\n                \"uid\": \"25ee1840-c0aa-4ca2-bee8-7cce7cd2b5e1\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"503\"\n            },\n            \"reason\": \"SuccessfulCreate\",\n            \"message\": \"Created pod: kube-flannel-ds-qfz56\",\n            \"source\": {\n                \"component\": \"daemonset-controller\"\n            },\n            \"firstTimestamp\": \"2021-06-16T16:17:16Z\",\n            \"lastTimestamp\": \"2021-06-16T16:17:16Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds.16891c481271dba1\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"48a308a1-1983-46f4-b406-0394f0934aca\",\n                \"resourceVersion\": \"86\",\n                \"creationTimestamp\": \"2021-06-16T16:17:16Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"DaemonSet\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-flannel-ds\",\n                \"uid\": \"25ee1840-c0aa-4ca2-bee8-7cce7cd2b5e1\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"648\"\n            },\n            \"reason\": \"SuccessfulCreate\",\n            \"message\": \"Created pod: kube-flannel-ds-kg5tn\",\n            \"source\": {\n                \"component\": \"daemonset-controller\"\n            },\n            \"firstTimestamp\": \"2021-06-16T16:17:16Z\",\n            \"lastTimestamp\": 
\"2021-06-16T16:17:16Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds.16891c48d2a958eb\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"ffc77870-5a1b-4c00-8d53-413031a9d245\",\n                \"resourceVersion\": \"105\",\n                \"creationTimestamp\": \"2021-06-16T16:17:19Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"DaemonSet\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-flannel-ds\",\n                \"uid\": \"25ee1840-c0aa-4ca2-bee8-7cce7cd2b5e1\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"659\"\n            },\n            \"reason\": \"SuccessfulCreate\",\n            \"message\": \"Created pod: kube-flannel-ds-xktxj\",\n            \"source\": {\n                \"component\": \"daemonset-controller\"\n            },\n            \"firstTimestamp\": \"2021-06-16T16:17:19Z\",\n            \"lastTimestamp\": \"2021-06-16T16:17:19Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds.16891c4969ad4c27\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"0533ae79-67d8-4db7-a6f5-acdbc71036ba\",\n                \"resourceVersion\": \"130\",\n                \"creationTimestamp\": \"2021-06-16T16:17:22Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"DaemonSet\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-flannel-ds\",\n                \"uid\": 
\"25ee1840-c0aa-4ca2-bee8-7cce7cd2b5e1\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"683\"\n            },\n            \"reason\": \"SuccessfulCreate\",\n            \"message\": \"Created pod: kube-flannel-ds-mchp6\",\n            \"source\": {\n                \"component\": \"daemonset-controller\"\n            },\n            \"firstTimestamp\": \"2021-06-16T16:17:22Z\",\n            \"lastTimestamp\": \"2021-06-16T16:17:22Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-ip-172-20-37-218.eu-west-1.compute.internal.16891c263145db12\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"2284b473-0497-4ee8-8655-c0fa27db4d82\",\n                \"resourceVersion\": \"20\",\n                \"creationTimestamp\": \"2021-06-16T16:15:31Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-ip-172-20-37-218.eu-west-1.compute.internal\",\n                \"uid\": \"83157464d427c3175d51c8345b15bf41\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"k8s.gcr.io/kube-proxy-amd64:v1.21.1\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-37-218.eu-west-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-06-16T16:14:51Z\",\n            \"lastTimestamp\": \"2021-06-16T16:14:51Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            
\"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-ip-172-20-37-218.eu-west-1.compute.internal.16891c26384a3381\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"3191271e-b5cb-48b1-b21d-71dfe1e03765\",\n                \"resourceVersion\": \"26\",\n                \"creationTimestamp\": \"2021-06-16T16:15:32Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-ip-172-20-37-218.eu-west-1.compute.internal\",\n                \"uid\": \"83157464d427c3175d51c8345b15bf41\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container kube-proxy\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-37-218.eu-west-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-06-16T16:14:51Z\",\n            \"lastTimestamp\": \"2021-06-16T16:14:51Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-ip-172-20-37-218.eu-west-1.compute.internal.16891c2643ca760b\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"f52c2b5a-62c7-4649-9bb3-7a1ea5e4128c\",\n                \"resourceVersion\": \"31\",\n                \"creationTimestamp\": \"2021-06-16T16:15:33Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": 
\"kube-proxy-ip-172-20-37-218.eu-west-1.compute.internal\",\n                \"uid\": \"83157464d427c3175d51c8345b15bf41\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container kube-proxy\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-37-218.eu-west-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-06-16T16:14:51Z\",\n            \"lastTimestamp\": \"2021-06-16T16:14:51Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-ip-172-20-52-203.eu-west-1.compute.internal.16891c42304adae5\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"08620d6b-3530-4423-a92b-e7b06e1f61a9\",\n                \"resourceVersion\": \"178\",\n                \"creationTimestamp\": \"2021-06-16T16:17:24Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-ip-172-20-52-203.eu-west-1.compute.internal\",\n                \"uid\": \"2980361c0f60763142d714f0a98407f4\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"k8s.gcr.io/kube-proxy-amd64:v1.21.1\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-52-203.eu-west-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-06-16T16:16:51Z\",\n            
\"lastTimestamp\": \"2021-06-16T16:16:51Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-ip-172-20-52-203.eu-west-1.compute.internal.16891c42334fae10\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"c996f52d-4c47-4799-a27d-c7efa954a36a\",\n                \"resourceVersion\": \"182\",\n                \"creationTimestamp\": \"2021-06-16T16:17:24Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-ip-172-20-52-203.eu-west-1.compute.internal\",\n                \"uid\": \"2980361c0f60763142d714f0a98407f4\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container kube-proxy\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-52-203.eu-west-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-06-16T16:16:51Z\",\n            \"lastTimestamp\": \"2021-06-16T16:16:51Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-ip-172-20-52-203.eu-west-1.compute.internal.16891c423914ca88\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"2058d21f-80d8-451f-bf50-32f66c5d920a\",\n                \"resourceVersion\": \"185\",\n                \"creationTimestamp\": \"2021-06-16T16:17:24Z\"\n            },\n            
\"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-ip-172-20-52-203.eu-west-1.compute.internal\",\n                \"uid\": \"2980361c0f60763142d714f0a98407f4\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container kube-proxy\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-52-203.eu-west-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-06-16T16:16:51Z\",\n            \"lastTimestamp\": \"2021-06-16T16:16:51Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-ip-172-20-57-162.eu-west-1.compute.internal.16891c414d40b855\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"959d6f2c-59da-4d05-81bd-5bc427fbf22c\",\n                \"resourceVersion\": \"108\",\n                \"creationTimestamp\": \"2021-06-16T16:17:20Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-ip-172-20-57-162.eu-west-1.compute.internal\",\n                \"uid\": \"cb7e35c358a635a1b30f77640ce20bbf\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"k8s.gcr.io/kube-proxy-amd64:v1.21.1\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": 
\"ip-172-20-57-162.eu-west-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-06-16T16:16:47Z\",\n            \"lastTimestamp\": \"2021-06-16T16:16:47Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-ip-172-20-57-162.eu-west-1.compute.internal.16891c4150177c0b\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"e3b63912-eb36-4d77-b16d-37e7fde932c4\",\n                \"resourceVersion\": \"109\",\n                \"creationTimestamp\": \"2021-06-16T16:17:20Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-ip-172-20-57-162.eu-west-1.compute.internal\",\n                \"uid\": \"cb7e35c358a635a1b30f77640ce20bbf\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container kube-proxy\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-57-162.eu-west-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-06-16T16:16:47Z\",\n            \"lastTimestamp\": \"2021-06-16T16:16:47Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-ip-172-20-57-162.eu-west-1.compute.internal.16891c4155f9a279\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"9f3d056c-06f4-4905-ae78-0711d6c9365f\",\n              
  \"resourceVersion\": \"110\",\n                \"creationTimestamp\": \"2021-06-16T16:17:20Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-ip-172-20-57-162.eu-west-1.compute.internal\",\n                \"uid\": \"cb7e35c358a635a1b30f77640ce20bbf\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container kube-proxy\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-57-162.eu-west-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-06-16T16:16:47Z\",\n            \"lastTimestamp\": \"2021-06-16T16:16:47Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-ip-172-20-58-250.eu-west-1.compute.internal.16891c4163dd426c\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"3059f2f9-3ed6-42d6-85dd-94a0d9af2c49\",\n                \"resourceVersion\": \"186\",\n                \"creationTimestamp\": \"2021-06-16T16:17:24Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-ip-172-20-58-250.eu-west-1.compute.internal\",\n                \"uid\": \"d33f0efdf98c6ca115c46288e0750066\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"k8s.gcr.io/kube-proxy-amd64:v1.21.1\\\" already present on 
machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-58-250.eu-west-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-06-16T16:16:47Z\",\n            \"lastTimestamp\": \"2021-06-16T16:16:47Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-ip-172-20-58-250.eu-west-1.compute.internal.16891c4166ba7e88\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"838eaaa5-8811-45c4-a3c2-605f37045715\",\n                \"resourceVersion\": \"189\",\n                \"creationTimestamp\": \"2021-06-16T16:17:24Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-ip-172-20-58-250.eu-west-1.compute.internal\",\n                \"uid\": \"d33f0efdf98c6ca115c46288e0750066\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container kube-proxy\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-58-250.eu-west-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-06-16T16:16:47Z\",\n            \"lastTimestamp\": \"2021-06-16T16:16:47Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-ip-172-20-58-250.eu-west-1.compute.internal.16891c416d17599f\",\n                
\"namespace\": \"kube-system\",\n                \"uid\": \"a83a19c3-c0a8-4872-9ef3-7fa8647184d7\",\n                \"resourceVersion\": \"192\",\n                \"creationTimestamp\": \"2021-06-16T16:17:24Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-ip-172-20-58-250.eu-west-1.compute.internal\",\n                \"uid\": \"d33f0efdf98c6ca115c46288e0750066\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container kube-proxy\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-58-250.eu-west-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-06-16T16:16:47Z\",\n            \"lastTimestamp\": \"2021-06-16T16:16:47Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-ip-172-20-62-139.eu-west-1.compute.internal.16891c42bfe6b187\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"449630b3-23b1-48ea-990f-b4166173f244\",\n                \"resourceVersion\": \"139\",\n                \"creationTimestamp\": \"2021-06-16T16:17:22Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-ip-172-20-62-139.eu-west-1.compute.internal\",\n                \"uid\": \"1bc1fbac25e963cd5dd5f0b8d6f0167f\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": 
\"Pulled\",\n            \"message\": \"Container image \\\"k8s.gcr.io/kube-proxy-amd64:v1.21.1\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-62-139.eu-west-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-06-16T16:16:53Z\",\n            \"lastTimestamp\": \"2021-06-16T16:16:53Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-ip-172-20-62-139.eu-west-1.compute.internal.16891c42c2a7dc18\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"71f9fe60-e51c-4e12-a546-fa0db395bacd\",\n                \"resourceVersion\": \"141\",\n                \"creationTimestamp\": \"2021-06-16T16:17:23Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-ip-172-20-62-139.eu-west-1.compute.internal\",\n                \"uid\": \"1bc1fbac25e963cd5dd5f0b8d6f0167f\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container kube-proxy\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-62-139.eu-west-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-06-16T16:16:53Z\",\n            \"lastTimestamp\": \"2021-06-16T16:16:53Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n      
          \"name\": \"kube-proxy-ip-172-20-62-139.eu-west-1.compute.internal.16891c42ca729e2f\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"77f207c2-d25c-49de-b4b9-a64eb6869839\",\n                \"resourceVersion\": \"153\",\n                \"creationTimestamp\": \"2021-06-16T16:17:23Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-ip-172-20-62-139.eu-west-1.compute.internal\",\n                \"uid\": \"1bc1fbac25e963cd5dd5f0b8d6f0167f\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container kube-proxy\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-62-139.eu-west-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-06-16T16:16:53Z\",\n            \"lastTimestamp\": \"2021-06-16T16:16:53Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-scheduler-ip-172-20-37-218.eu-west-1.compute.internal.16891c2635f1bd17\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"c2e64c2f-22f8-41e8-b25c-7fc641a3a916\",\n                \"resourceVersion\": \"23\",\n                \"creationTimestamp\": \"2021-06-16T16:15:31Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-scheduler-ip-172-20-37-218.eu-west-1.compute.internal\",\n                \"uid\": \"c778eab5c11b7223d0171dbad8b6646f\",\n                
\"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-scheduler}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"k8s.gcr.io/kube-scheduler-amd64:v1.21.1\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-37-218.eu-west-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-06-16T16:14:51Z\",\n            \"lastTimestamp\": \"2021-06-16T16:14:51Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-scheduler-ip-172-20-37-218.eu-west-1.compute.internal.16891c263af0be5e\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"40ef2e3d-c044-499c-8710-cefca403c62d\",\n                \"resourceVersion\": \"28\",\n                \"creationTimestamp\": \"2021-06-16T16:15:32Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-scheduler-ip-172-20-37-218.eu-west-1.compute.internal\",\n                \"uid\": \"c778eab5c11b7223d0171dbad8b6646f\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-scheduler}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container kube-scheduler\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-37-218.eu-west-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-06-16T16:14:51Z\",\n            \"lastTimestamp\": \"2021-06-16T16:14:51Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            
\"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-scheduler-ip-172-20-37-218.eu-west-1.compute.internal.16891c2654e242c8\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"68a8edee-e493-4c76-9380-7b44b5468d33\",\n                \"resourceVersion\": \"35\",\n                \"creationTimestamp\": \"2021-06-16T16:15:34Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-scheduler-ip-172-20-37-218.eu-west-1.compute.internal\",\n                \"uid\": \"c778eab5c11b7223d0171dbad8b6646f\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-scheduler}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container kube-scheduler\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-37-218.eu-west-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-06-16T16:14:51Z\",\n            \"lastTimestamp\": \"2021-06-16T16:14:51Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-scheduler.16891c2f342dcc1e\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"b22b32e4-b39c-4d3c-ab54-427531c4514a\",\n                \"resourceVersion\": \"10\",\n                \"creationTimestamp\": \"2021-06-16T16:15:29Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Lease\",\n                \"namespace\": \"kube-system\",\n                \"name\": 
\"kube-scheduler\",\n                \"uid\": \"e6e19c5c-0675-4990-8547-cfe2506859ef\",\n                \"apiVersion\": \"coordination.k8s.io/v1\",\n                \"resourceVersion\": \"221\"\n            },\n            \"reason\": \"LeaderElection\",\n            \"message\": \"ip-172-20-37-218_67a98b45-affa-4320-b577-90774db567b5 became leader\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2021-06-16T16:15:29Z\",\n            \"lastTimestamp\": \"2021-06-16T16:15:29Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        }\n    ]\n}\n{\n    \"kind\": \"ReplicationControllerList\",\n    \"apiVersion\": \"v1\",\n    \"metadata\": {\n        \"resourceVersion\": \"1448\"\n    },\n    \"items\": []\n}\n{\n    \"kind\": \"ServiceList\",\n    \"apiVersion\": \"v1\",\n    \"metadata\": {\n        \"resourceVersion\": \"1455\"\n    },\n    \"items\": [\n        {\n            \"metadata\": {\n                \"name\": \"kube-dns\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"e7ff688b-7547-4ea5-9a05-f001a69dc297\",\n                \"resourceVersion\": \"314\",\n                \"creationTimestamp\": \"2021-06-16T16:15:32Z\",\n                \"labels\": {\n                    \"addon.kops.k8s.io/name\": \"coredns.addons.k8s.io\",\n                    \"addon.kops.k8s.io/version\": \"1.8.3-kops.3\",\n                    \"app.kubernetes.io/managed-by\": \"kops\",\n                    \"k8s-addon\": \"coredns.addons.k8s.io\",\n                    \"k8s-app\": \"kube-dns\",\n                    \"kubernetes.io/cluster-service\": \"true\",\n                    \"kubernetes.io/name\": \"CoreDNS\"\n                },\n                \"annotations\": {\n                    
\"kubectl.kubernetes.io/last-applied-configuration\": \"{\\\"apiVersion\\\":\\\"v1\\\",\\\"kind\\\":\\\"Service\\\",\\\"metadata\\\":{\\\"annotations\\\":{\\\"prometheus.io/port\\\":\\\"9153\\\",\\\"prometheus.io/scrape\\\":\\\"true\\\"},\\\"creationTimestamp\\\":null,\\\"labels\\\":{\\\"addon.kops.k8s.io/name\\\":\\\"coredns.addons.k8s.io\\\",\\\"addon.kops.k8s.io/version\\\":\\\"1.8.3-kops.3\\\",\\\"app.kubernetes.io/managed-by\\\":\\\"kops\\\",\\\"k8s-addon\\\":\\\"coredns.addons.k8s.io\\\",\\\"k8s-app\\\":\\\"kube-dns\\\",\\\"kubernetes.io/cluster-service\\\":\\\"true\\\",\\\"kubernetes.io/name\\\":\\\"CoreDNS\\\"},\\\"name\\\":\\\"kube-dns\\\",\\\"namespace\\\":\\\"kube-system\\\",\\\"resourceVersion\\\":\\\"0\\\"},\\\"spec\\\":{\\\"clusterIP\\\":\\\"100.64.0.10\\\",\\\"ports\\\":[{\\\"name\\\":\\\"dns\\\",\\\"port\\\":53,\\\"protocol\\\":\\\"UDP\\\"},{\\\"name\\\":\\\"dns-tcp\\\",\\\"port\\\":53,\\\"protocol\\\":\\\"TCP\\\"},{\\\"name\\\":\\\"metrics\\\",\\\"port\\\":9153,\\\"protocol\\\":\\\"TCP\\\"}],\\\"selector\\\":{\\\"k8s-app\\\":\\\"kube-dns\\\"}}}\\n\",\n                    \"prometheus.io/port\": \"9153\",\n                    \"prometheus.io/scrape\": \"true\"\n                }\n            },\n            \"spec\": {\n                \"ports\": [\n                    {\n                        \"name\": \"dns\",\n                        \"protocol\": \"UDP\",\n                        \"port\": 53,\n                        \"targetPort\": 53\n                    },\n                    {\n                        \"name\": \"dns-tcp\",\n                        \"protocol\": \"TCP\",\n                        \"port\": 53,\n                        \"targetPort\": 53\n                    },\n                    {\n                        \"name\": \"metrics\",\n                        \"protocol\": \"TCP\",\n                        \"port\": 9153,\n                        \"targetPort\": 9153\n                    }\n                ],\n                
\"selector\": {\n                    \"k8s-app\": \"kube-dns\"\n                },\n                \"clusterIP\": \"100.64.0.10\",\n                \"clusterIPs\": [\n                    \"100.64.0.10\"\n                ],\n                \"type\": \"ClusterIP\",\n                \"sessionAffinity\": \"None\",\n                \"ipFamilies\": [\n                    \"IPv4\"\n                ],\n                \"ipFamilyPolicy\": \"SingleStack\"\n            },\n            \"status\": {\n                \"loadBalancer\": {}\n            }\n        }\n    ]\n}\n{\n    \"kind\": \"DaemonSetList\",\n    \"apiVersion\": \"apps/v1\",\n    \"metadata\": {\n        \"resourceVersion\": \"1459\"\n    },\n    \"items\": [\n        {\n            \"metadata\": {\n                \"name\": \"kops-controller\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"7e41d1c7-bbb2-466d-a4e9-1813ace7d6e3\",\n                \"resourceVersion\": \"526\",\n                \"generation\": 1,\n                \"creationTimestamp\": \"2021-06-16T16:15:30Z\",\n                \"labels\": {\n                    \"addon.kops.k8s.io/name\": \"kops-controller.addons.k8s.io\",\n                    \"addon.kops.k8s.io/version\": \"1.21.0-beta.3\",\n                    \"app.kubernetes.io/managed-by\": \"kops\",\n                    \"k8s-addon\": \"kops-controller.addons.k8s.io\",\n                    \"k8s-app\": \"kops-controller\",\n                    \"version\": \"v1.21.0-beta.3\"\n                },\n                \"annotations\": {\n                    \"deprecated.daemonset.template.generation\": \"1\",\n                    \"kubectl.kubernetes.io/last-applied-configuration\": 
\"{\\\"apiVersion\\\":\\\"apps/v1\\\",\\\"kind\\\":\\\"DaemonSet\\\",\\\"metadata\\\":{\\\"annotations\\\":{},\\\"creationTimestamp\\\":null,\\\"labels\\\":{\\\"addon.kops.k8s.io/name\\\":\\\"kops-controller.addons.k8s.io\\\",\\\"addon.kops.k8s.io/version\\\":\\\"1.21.0-beta.3\\\",\\\"app.kubernetes.io/managed-by\\\":\\\"kops\\\",\\\"k8s-addon\\\":\\\"kops-controller.addons.k8s.io\\\",\\\"k8s-app\\\":\\\"kops-controller\\\",\\\"version\\\":\\\"v1.21.0-beta.3\\\"},\\\"name\\\":\\\"kops-controller\\\",\\\"namespace\\\":\\\"kube-system\\\"},\\\"spec\\\":{\\\"selector\\\":{\\\"matchLabels\\\":{\\\"k8s-app\\\":\\\"kops-controller\\\"}},\\\"template\\\":{\\\"metadata\\\":{\\\"annotations\\\":{\\\"dns.alpha.kubernetes.io/internal\\\":\\\"kops-controller.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io\\\"},\\\"labels\\\":{\\\"k8s-addon\\\":\\\"kops-controller.addons.k8s.io\\\",\\\"k8s-app\\\":\\\"kops-controller\\\",\\\"version\\\":\\\"v1.21.0-beta.3\\\"}},\\\"spec\\\":{\\\"containers\\\":[{\\\"command\\\":[\\\"/kops-controller\\\",\\\"--v=2\\\",\\\"--conf=/etc/kubernetes/kops-controller/config/config.yaml\\\"],\\\"env\\\":[{\\\"name\\\":\\\"KUBERNETES_SERVICE_HOST\\\",\\\"value\\\":\\\"127.0.0.1\\\"}],\\\"image\\\":\\\"k8s.gcr.io/kops/kops-controller:1.21.0-beta.3\\\",\\\"name\\\":\\\"kops-controller\\\",\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"50m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"securityContext\\\":{\\\"runAsNonRoot\\\":true},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/kops-controller/config/\\\",\\\"name\\\":\\\"kops-controller-config\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/kops-controller/pki/\\\",\\\"name\\\":\\\"kops-controller-pki\\\"}]}],\\\"dnsPolicy\\\":\\\"Default\\\",\\\"hostNetwork\\\":true,\\\"nodeSelector\\\":{\\\"kops.k8s.io/kops-controller-pki\\\":\\\"\\\",\\\"node-role.kubernetes.io/master\\\":\\\"\\\"},\\\"priorityClassName\\\":\\\"system-node-critical\\\",\\\"serviceAccount\\\":\\\"kops-controller\\\",\\\"tolerat
ions\\\":[{\\\"key\\\":\\\"node-role.kubernetes.io/master\\\",\\\"operator\\\":\\\"Exists\\\"}],\\\"volumes\\\":[{\\\"configMap\\\":{\\\"name\\\":\\\"kops-controller\\\"},\\\"name\\\":\\\"kops-controller-config\\\"},{\\\"hostPath\\\":{\\\"path\\\":\\\"/etc/kubernetes/kops-controller/\\\",\\\"type\\\":\\\"Directory\\\"},\\\"name\\\":\\\"kops-controller-pki\\\"}]}},\\\"updateStrategy\\\":{\\\"type\\\":\\\"OnDelete\\\"}}}\\n\"\n                }\n            },\n            \"spec\": {\n                \"selector\": {\n                    \"matchLabels\": {\n                        \"k8s-app\": \"kops-controller\"\n                    }\n                },\n                \"template\": {\n                    \"metadata\": {\n                        \"creationTimestamp\": null,\n                        \"labels\": {\n                            \"k8s-addon\": \"kops-controller.addons.k8s.io\",\n                            \"k8s-app\": \"kops-controller\",\n                            \"version\": \"v1.21.0-beta.3\"\n                        },\n                        \"annotations\": {\n                            \"dns.alpha.kubernetes.io/internal\": \"kops-controller.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io\"\n                        }\n                    },\n                    \"spec\": {\n                        \"volumes\": [\n                            {\n                                \"name\": \"kops-controller-config\",\n                                \"configMap\": {\n                                    \"name\": \"kops-controller\",\n                                    \"defaultMode\": 420\n                                }\n                            },\n                            {\n                                \"name\": \"kops-controller-pki\",\n                                \"hostPath\": {\n                                    \"path\": \"/etc/kubernetes/kops-controller/\",\n                                    \"type\": 
\"Directory\"\n                                }\n                            }\n                        ],\n                        \"containers\": [\n                            {\n                                \"name\": \"kops-controller\",\n                                \"image\": \"k8s.gcr.io/kops/kops-controller:1.21.0-beta.3\",\n                                \"command\": [\n                                    \"/kops-controller\",\n                                    \"--v=2\",\n                                    \"--conf=/etc/kubernetes/kops-controller/config/config.yaml\"\n                                ],\n                                \"env\": [\n                                    {\n                                        \"name\": \"KUBERNETES_SERVICE_HOST\",\n                                        \"value\": \"127.0.0.1\"\n                                    }\n                                ],\n                                \"resources\": {\n                                    \"requests\": {\n                                        \"cpu\": \"50m\",\n                                        \"memory\": \"50Mi\"\n                                    }\n                                },\n                                \"volumeMounts\": [\n                                    {\n                                        \"name\": \"kops-controller-config\",\n                                        \"mountPath\": \"/etc/kubernetes/kops-controller/config/\"\n                                    },\n                                    {\n                                        \"name\": \"kops-controller-pki\",\n                                        \"mountPath\": \"/etc/kubernetes/kops-controller/pki/\"\n                                    }\n                                ],\n                                \"terminationMessagePath\": \"/dev/termination-log\",\n                                \"terminationMessagePolicy\": \"File\",\n       
                         \"imagePullPolicy\": \"IfNotPresent\",\n                                \"securityContext\": {\n                                    \"runAsNonRoot\": true\n                                }\n                            }\n                        ],\n                        \"restartPolicy\": \"Always\",\n                        \"terminationGracePeriodSeconds\": 30,\n                        \"dnsPolicy\": \"Default\",\n                        \"nodeSelector\": {\n                            \"kops.k8s.io/kops-controller-pki\": \"\",\n                            \"node-role.kubernetes.io/master\": \"\"\n                        },\n                        \"serviceAccountName\": \"kops-controller\",\n                        \"serviceAccount\": \"kops-controller\",\n                        \"hostNetwork\": true,\n                        \"securityContext\": {},\n                        \"schedulerName\": \"default-scheduler\",\n                        \"tolerations\": [\n                            {\n                                \"key\": \"node-role.kubernetes.io/master\",\n                                \"operator\": \"Exists\"\n                            }\n                        ],\n                        \"priorityClassName\": \"system-node-critical\"\n                    }\n                },\n                \"updateStrategy\": {\n                    \"type\": \"OnDelete\"\n                },\n                \"revisionHistoryLimit\": 10\n            },\n            \"status\": {\n                \"currentNumberScheduled\": 1,\n                \"numberMisscheduled\": 0,\n                \"desiredNumberScheduled\": 1,\n                \"numberReady\": 1,\n                \"observedGeneration\": 1,\n                \"updatedNumberScheduled\": 1,\n                \"numberAvailable\": 1\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds\",\n                \"namespace\": 
\"kube-system\",\n                \"uid\": \"25ee1840-c0aa-4ca2-bee8-7cce7cd2b5e1\",\n                \"resourceVersion\": \"769\",\n                \"generation\": 1,\n                \"creationTimestamp\": \"2021-06-16T16:15:30Z\",\n                \"labels\": {\n                    \"addon.kops.k8s.io/name\": \"networking.flannel\",\n                    \"addon.kops.k8s.io/version\": \"0.13.0-kops.1\",\n                    \"app\": \"flannel\",\n                    \"app.kubernetes.io/managed-by\": \"kops\",\n                    \"k8s-app\": \"flannel\",\n                    \"role.kubernetes.io/networking\": \"1\",\n                    \"tier\": \"node\"\n                },\n                \"annotations\": {\n                    \"deprecated.daemonset.template.generation\": \"1\",\n                    \"kubectl.kubernetes.io/last-applied-configuration\": \"{\\\"apiVersion\\\":\\\"apps/v1\\\",\\\"kind\\\":\\\"DaemonSet\\\",\\\"metadata\\\":{\\\"annotations\\\":{},\\\"creationTimestamp\\\":null,\\\"labels\\\":{\\\"addon.kops.k8s.io/name\\\":\\\"networking.flannel\\\",\\\"addon.kops.k8s.io/version\\\":\\\"0.13.0-kops.1\\\",\\\"app\\\":\\\"flannel\\\",\\\"app.kubernetes.io/managed-by\\\":\\\"kops\\\",\\\"k8s-app\\\":\\\"flannel\\\",\\\"role.kubernetes.io/networking\\\":\\\"1\\\",\\\"tier\\\":\\\"node\\\"},\\\"name\\\":\\\"kube-flannel-ds\\\",\\\"namespace\\\":\\\"kube-system\\\"},\\\"spec\\\":{\\\"selector\\\":{\\\"matchLabels\\\":{\\\"app\\\":\\\"flannel\\\",\\\"tier\\\":\\\"node\\\"}},\\\"template\\\":{\\\"metadata\\\":{\\\"labels\\\":{\\\"app\\\":\\\"flannel\\\",\\\"tier\\\":\\\"node\\\"}},\\\"spec\\\":{\\\"affinity\\\":{\\\"nodeAffinity\\\":{\\\"requiredDuringSchedulingIgnoredDuringExecution\\\":{\\\"nodeSelectorTerms\\\":[{\\\"matchExpressions\\\":[{\\\"key\\\":\\\"kubernetes.io/os\\\",\\\"operator\\\":\\\"In\\\",\\\"values\\\":[\\\"linux\\\"]}]}]}}},\\\"containers\\\":[{\\\"args\\\":[\\\"--ip-masq\\\",\\\"--kube-subnet-mgr\\\",\\\"--iptables-resync=5\\\"],\\\
"command\\\":[\\\"/opt/bin/flanneld\\\"],\\\"env\\\":[{\\\"name\\\":\\\"POD_NAME\\\",\\\"valueFrom\\\":{\\\"fieldRef\\\":{\\\"fieldPath\\\":\\\"metadata.name\\\"}}},{\\\"name\\\":\\\"POD_NAMESPACE\\\",\\\"valueFrom\\\":{\\\"fieldRef\\\":{\\\"fieldPath\\\":\\\"metadata.namespace\\\"}}}],\\\"image\\\":\\\"quay.io/coreos/flannel:v0.13.0\\\",\\\"name\\\":\\\"kube-flannel\\\",\\\"resources\\\":{\\\"limits\\\":{\\\"memory\\\":\\\"100Mi\\\"},\\\"requests\\\":{\\\"cpu\\\":\\\"100m\\\",\\\"memory\\\":\\\"100Mi\\\"}},\\\"securityContext\\\":{\\\"capabilities\\\":{\\\"add\\\":[\\\"NET_ADMIN\\\",\\\"NET_RAW\\\"]},\\\"privileged\\\":false},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/flannel\\\",\\\"name\\\":\\\"run\\\"},{\\\"mountPath\\\":\\\"/dev/net\\\",\\\"name\\\":\\\"dev-net\\\"},{\\\"mountPath\\\":\\\"/etc/kube-flannel/\\\",\\\"name\\\":\\\"flannel-cfg\\\"}]}],\\\"hostNetwork\\\":true,\\\"initContainers\\\":[{\\\"args\\\":[\\\"-f\\\",\\\"/etc/kube-flannel/cni-conf.json\\\",\\\"/etc/cni/net.d/10-flannel.conflist\\\"],\\\"command\\\":[\\\"cp\\\"],\\\"image\\\":\\\"quay.io/coreos/flannel:v0.13.0\\\",\\\"name\\\":\\\"install-cni\\\",\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"cni\\\"},{\\\"mountPath\\\":\\\"/etc/kube-flannel/\\\",\\\"name\\\":\\\"flannel-cfg\\\"}]}],\\\"priorityClassName\\\":\\\"system-node-critical\\\",\\\"serviceAccountName\\\":\\\"flannel\\\",\\\"tolerations\\\":[{\\\"operator\\\":\\\"Exists\\\"}],\\\"volumes\\\":[{\\\"hostPath\\\":{\\\"path\\\":\\\"/run/flannel\\\"},\\\"name\\\":\\\"run\\\"},{\\\"hostPath\\\":{\\\"path\\\":\\\"/dev/net\\\"},\\\"name\\\":\\\"dev-net\\\"},{\\\"hostPath\\\":{\\\"path\\\":\\\"/etc/cni/net.d\\\"},\\\"name\\\":\\\"cni\\\"},{\\\"configMap\\\":{\\\"name\\\":\\\"kube-flannel-cfg\\\"},\\\"name\\\":\\\"flannel-cfg\\\"}]}}}}\\n\"\n                }\n            },\n            \"spec\": {\n                \"selector\": {\n                    \"matchLabels\": {\n                        
\"app\": \"flannel\",\n                        \"tier\": \"node\"\n                    }\n                },\n                \"template\": {\n                    \"metadata\": {\n                        \"creationTimestamp\": null,\n                        \"labels\": {\n                            \"app\": \"flannel\",\n                            \"tier\": \"node\"\n                        }\n                    },\n                    \"spec\": {\n                        \"volumes\": [\n                            {\n                                \"name\": \"run\",\n                                \"hostPath\": {\n                                    \"path\": \"/run/flannel\",\n                                    \"type\": \"\"\n                                }\n                            },\n                            {\n                                \"name\": \"dev-net\",\n                                \"hostPath\": {\n                                    \"path\": \"/dev/net\",\n                                    \"type\": \"\"\n                                }\n                            },\n                            {\n                                \"name\": \"cni\",\n                                \"hostPath\": {\n                                    \"path\": \"/etc/cni/net.d\",\n                                    \"type\": \"\"\n                                }\n                            },\n                            {\n                                \"name\": \"flannel-cfg\",\n                                \"configMap\": {\n                                    \"name\": \"kube-flannel-cfg\",\n                                    \"defaultMode\": 420\n                                }\n                            }\n                        ],\n                        \"initContainers\": [\n                            {\n                                \"name\": \"install-cni\",\n                                \"image\": 
\"quay.io/coreos/flannel:v0.13.0\",\n                                \"command\": [\n                                    \"cp\"\n                                ],\n                                \"args\": [\n                                    \"-f\",\n                                    \"/etc/kube-flannel/cni-conf.json\",\n                                    \"/etc/cni/net.d/10-flannel.conflist\"\n                                ],\n                                \"resources\": {},\n                                \"volumeMounts\": [\n                                    {\n                                        \"name\": \"cni\",\n                                        \"mountPath\": \"/etc/cni/net.d\"\n                                    },\n                                    {\n                                        \"name\": \"flannel-cfg\",\n                                        \"mountPath\": \"/etc/kube-flannel/\"\n                                    }\n                                ],\n                                \"terminationMessagePath\": \"/dev/termination-log\",\n                                \"terminationMessagePolicy\": \"File\",\n                                \"imagePullPolicy\": \"IfNotPresent\"\n                            }\n                        ],\n                        \"containers\": [\n                            {\n                                \"name\": \"kube-flannel\",\n                                \"image\": \"quay.io/coreos/flannel:v0.13.0\",\n                                \"command\": [\n                                    \"/opt/bin/flanneld\"\n                                ],\n                                \"args\": [\n                                    \"--ip-masq\",\n                                    \"--kube-subnet-mgr\",\n                                    \"--iptables-resync=5\"\n                                ],\n                                \"env\": [\n                                 
   {\n                                        \"name\": \"POD_NAME\",\n                                        \"valueFrom\": {\n                                            \"fieldRef\": {\n                                                \"apiVersion\": \"v1\",\n                                                \"fieldPath\": \"metadata.name\"\n                                            }\n                                        }\n                                    },\n                                    {\n                                        \"name\": \"POD_NAMESPACE\",\n                                        \"valueFrom\": {\n                                            \"fieldRef\": {\n                                                \"apiVersion\": \"v1\",\n                                                \"fieldPath\": \"metadata.namespace\"\n                                            }\n                                        }\n                                    }\n                                ],\n                                \"resources\": {\n                                    \"limits\": {\n                                        \"memory\": \"100Mi\"\n                                    },\n                                    \"requests\": {\n                                        \"cpu\": \"100m\",\n                                        \"memory\": \"100Mi\"\n                                    }\n                                },\n                                \"volumeMounts\": [\n                                    {\n                                        \"name\": \"run\",\n                                        \"mountPath\": \"/run/flannel\"\n                                    },\n                                    {\n                                        \"name\": \"dev-net\",\n                                        \"mountPath\": \"/dev/net\"\n                                    },\n                                    {\n 
                                       \"name\": \"flannel-cfg\",\n                                        \"mountPath\": \"/etc/kube-flannel/\"\n                                    }\n                                ],\n                                \"terminationMessagePath\": \"/dev/termination-log\",\n                                \"terminationMessagePolicy\": \"File\",\n                                \"imagePullPolicy\": \"IfNotPresent\",\n                                \"securityContext\": {\n                                    \"capabilities\": {\n                                        \"add\": [\n                                            \"NET_ADMIN\",\n                                            \"NET_RAW\"\n                                        ]\n                                    },\n                                    \"privileged\": false\n                                }\n                            }\n                        ],\n                        \"restartPolicy\": \"Always\",\n                        \"terminationGracePeriodSeconds\": 30,\n                        \"dnsPolicy\": \"ClusterFirst\",\n                        \"serviceAccountName\": \"flannel\",\n                        \"serviceAccount\": \"flannel\",\n                        \"hostNetwork\": true,\n                        \"securityContext\": {},\n                        \"affinity\": {\n                            \"nodeAffinity\": {\n                                \"requiredDuringSchedulingIgnoredDuringExecution\": {\n                                    \"nodeSelectorTerms\": [\n                                        {\n                                            \"matchExpressions\": [\n                                                {\n                                                    \"key\": \"kubernetes.io/os\",\n                                                    \"operator\": \"In\",\n                                                    \"values\": [\n     
                                                   \"linux\"\n                                                    ]\n                                                }\n                                            ]\n                                        }\n                                    ]\n                                }\n                            }\n                        },\n                        \"schedulerName\": \"default-scheduler\",\n                        \"tolerations\": [\n                            {\n                                \"operator\": \"Exists\"\n                            }\n                        ],\n                        \"priorityClassName\": \"system-node-critical\"\n                    }\n                },\n                \"updateStrategy\": {\n                    \"type\": \"RollingUpdate\",\n                    \"rollingUpdate\": {\n                        \"maxUnavailable\": 1,\n                        \"maxSurge\": 0\n                    }\n                },\n                \"revisionHistoryLimit\": 10\n            },\n            \"status\": {\n                \"currentNumberScheduled\": 5,\n                \"numberMisscheduled\": 0,\n                \"desiredNumberScheduled\": 5,\n                \"numberReady\": 5,\n                \"observedGeneration\": 1,\n                \"updatedNumberScheduled\": 5,\n                \"numberAvailable\": 5\n            }\n        }\n    ]\n}\n{\n    \"kind\": \"DeploymentList\",\n    \"apiVersion\": \"apps/v1\",\n    \"metadata\": {\n        \"resourceVersion\": \"1461\"\n    },\n    \"items\": [\n        {\n            \"metadata\": {\n                \"name\": \"coredns\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"6a72b885-f8eb-4dd7-89fb-bdb03dda727d\",\n                \"resourceVersion\": \"802\",\n                \"generation\": 2,\n                \"creationTimestamp\": \"2021-06-16T16:15:32Z\",\n                \"labels\": {\n  
                  \"addon.kops.k8s.io/name\": \"coredns.addons.k8s.io\",\n                    \"addon.kops.k8s.io/version\": \"1.8.3-kops.3\",\n                    \"app.kubernetes.io/managed-by\": \"kops\",\n                    \"k8s-addon\": \"coredns.addons.k8s.io\",\n                    \"k8s-app\": \"kube-dns\",\n                    \"kubernetes.io/cluster-service\": \"true\",\n                    \"kubernetes.io/name\": \"CoreDNS\"\n                },\n                \"annotations\": {\n                    \"deployment.kubernetes.io/revision\": \"1\",\n                    \"kubectl.kubernetes.io/last-applied-configuration\": \"{\\\"apiVersion\\\":\\\"apps/v1\\\",\\\"kind\\\":\\\"Deployment\\\",\\\"metadata\\\":{\\\"annotations\\\":{},\\\"creationTimestamp\\\":null,\\\"labels\\\":{\\\"addon.kops.k8s.io/name\\\":\\\"coredns.addons.k8s.io\\\",\\\"addon.kops.k8s.io/version\\\":\\\"1.8.3-kops.3\\\",\\\"app.kubernetes.io/managed-by\\\":\\\"kops\\\",\\\"k8s-addon\\\":\\\"coredns.addons.k8s.io\\\",\\\"k8s-app\\\":\\\"kube-dns\\\",\\\"kubernetes.io/cluster-service\\\":\\\"true\\\",\\\"kubernetes.io/name\\\":\\\"CoreDNS\\\"},\\\"name\\\":\\\"coredns\\\",\\\"namespace\\\":\\\"kube-system\\\"},\\\"spec\\\":{\\\"selector\\\":{\\\"matchLabels\\\":{\\\"k8s-app\\\":\\\"kube-dns\\\"}},\\\"strategy\\\":{\\\"rollingUpdate\\\":{\\\"maxSurge\\\":\\\"10%\\\",\\\"maxUnavailable\\\":1},\\\"type\\\":\\\"RollingUpdate\\\"},\\\"template\\\":{\\\"metadata\\\":{\\\"labels\\\":{\\\"k8s-app\\\":\\\"kube-dns\\\"}},\\\"spec\\\":{\\\"affinity\\\":{\\\"podAntiAffinity\\\":{\\\"preferredDuringSchedulingIgnoredDuringExecution\\\":[{\\\"podAffinityTerm\\\":{\\\"labelSelector\\\":{\\\"matchExpressions\\\":[{\\\"key\\\":\\\"k8s-app\\\",\\\"operator\\\":\\\"In\\\",\\\"values\\\":[\\\"kube-dns\\\"]}]},\\\"topologyKey\\\":\\\"kubernetes.io/hostname\\\"},\\\"weight\\\":100}]}},\\\"containers\\\":[{\\\"args\\\":[\\\"-conf\\\",\\\"/etc/coredns/Corefile\\\"],\\\"image\\\":\\\"coredns/coredns:1.8.3\\\",\\\
"imagePullPolicy\\\":\\\"IfNotPresent\\\",\\\"livenessProbe\\\":{\\\"failureThreshold\\\":5,\\\"httpGet\\\":{\\\"path\\\":\\\"/health\\\",\\\"port\\\":8080,\\\"scheme\\\":\\\"HTTP\\\"},\\\"initialDelaySeconds\\\":60,\\\"successThreshold\\\":1,\\\"timeoutSeconds\\\":5},\\\"name\\\":\\\"coredns\\\",\\\"ports\\\":[{\\\"containerPort\\\":53,\\\"name\\\":\\\"dns\\\",\\\"protocol\\\":\\\"UDP\\\"},{\\\"containerPort\\\":53,\\\"name\\\":\\\"dns-tcp\\\",\\\"protocol\\\":\\\"TCP\\\"},{\\\"containerPort\\\":9153,\\\"name\\\":\\\"metrics\\\",\\\"protocol\\\":\\\"TCP\\\"}],\\\"readinessProbe\\\":{\\\"httpGet\\\":{\\\"path\\\":\\\"/ready\\\",\\\"port\\\":8181,\\\"scheme\\\":\\\"HTTP\\\"}},\\\"resources\\\":{\\\"limits\\\":{\\\"memory\\\":\\\"170Mi\\\"},\\\"requests\\\":{\\\"cpu\\\":\\\"100m\\\",\\\"memory\\\":\\\"70Mi\\\"}},\\\"securityContext\\\":{\\\"allowPrivilegeEscalation\\\":false,\\\"capabilities\\\":{\\\"add\\\":[\\\"NET_BIND_SERVICE\\\"],\\\"drop\\\":[\\\"all\\\"]},\\\"readOnlyRootFilesystem\\\":true},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/coredns\\\",\\\"name\\\":\\\"config-volume\\\",\\\"readOnly\\\":true}]}],\\\"dnsPolicy\\\":\\\"Default\\\",\\\"nodeSelector\\\":{\\\"beta.kubernetes.io/os\\\":\\\"linux\\\"},\\\"priorityClassName\\\":\\\"system-cluster-critical\\\",\\\"serviceAccountName\\\":\\\"coredns\\\",\\\"tolerations\\\":[{\\\"key\\\":\\\"CriticalAddonsOnly\\\",\\\"operator\\\":\\\"Exists\\\"}],\\\"volumes\\\":[{\\\"configMap\\\":{\\\"items\\\":[{\\\"key\\\":\\\"Corefile\\\",\\\"path\\\":\\\"Corefile\\\"}],\\\"name\\\":\\\"coredns\\\"},\\\"name\\\":\\\"config-volume\\\"}]}}}}\\n\"\n                }\n            },\n            \"spec\": {\n                \"replicas\": 2,\n                \"selector\": {\n                    \"matchLabels\": {\n                        \"k8s-app\": \"kube-dns\"\n                    }\n                },\n                \"template\": {\n                    \"metadata\": {\n                        
\"creationTimestamp\": null,\n                        \"labels\": {\n                            \"k8s-app\": \"kube-dns\"\n                        }\n                    },\n                    \"spec\": {\n                        \"volumes\": [\n                            {\n                                \"name\": \"config-volume\",\n                                \"configMap\": {\n                                    \"name\": \"coredns\",\n                                    \"items\": [\n                                        {\n                                            \"key\": \"Corefile\",\n                                            \"path\": \"Corefile\"\n                                        }\n                                    ],\n                                    \"defaultMode\": 420\n                                }\n                            }\n                        ],\n                        \"containers\": [\n                            {\n                                \"name\": \"coredns\",\n                                \"image\": \"coredns/coredns:1.8.3\",\n                                \"args\": [\n                                    \"-conf\",\n                                    \"/etc/coredns/Corefile\"\n                                ],\n                                \"ports\": [\n                                    {\n                                        \"name\": \"dns\",\n                                        \"containerPort\": 53,\n                                        \"protocol\": \"UDP\"\n                                    },\n                                    {\n                                        \"name\": \"dns-tcp\",\n                                        \"containerPort\": 53,\n                                        \"protocol\": \"TCP\"\n                                    },\n                                    {\n                                        \"name\": \"metrics\",\n       
                                 \"containerPort\": 9153,\n                                        \"protocol\": \"TCP\"\n                                    }\n                                ],\n                                \"resources\": {\n                                    \"limits\": {\n                                        \"memory\": \"170Mi\"\n                                    },\n                                    \"requests\": {\n                                        \"cpu\": \"100m\",\n                                        \"memory\": \"70Mi\"\n                                    }\n                                },\n                                \"volumeMounts\": [\n                                    {\n                                        \"name\": \"config-volume\",\n                                        \"readOnly\": true,\n                                        \"mountPath\": \"/etc/coredns\"\n                                    }\n                                ],\n                                \"livenessProbe\": {\n                                    \"httpGet\": {\n                                        \"path\": \"/health\",\n                                        \"port\": 8080,\n                                        \"scheme\": \"HTTP\"\n                                    },\n                                    \"initialDelaySeconds\": 60,\n                                    \"timeoutSeconds\": 5,\n                                    \"periodSeconds\": 10,\n                                    \"successThreshold\": 1,\n                                    \"failureThreshold\": 5\n                                },\n                                \"readinessProbe\": {\n                                    \"httpGet\": {\n                                        \"path\": \"/ready\",\n                                        \"port\": 8181,\n                                        \"scheme\": \"HTTP\"\n               
                     },\n                                    \"timeoutSeconds\": 1,\n                                    \"periodSeconds\": 10,\n                                    \"successThreshold\": 1,\n                                    \"failureThreshold\": 3\n                                },\n                                \"terminationMessagePath\": \"/dev/termination-log\",\n                                \"terminationMessagePolicy\": \"File\",\n                                \"imagePullPolicy\": \"IfNotPresent\",\n                                \"securityContext\": {\n                                    \"capabilities\": {\n                                        \"add\": [\n                                            \"NET_BIND_SERVICE\"\n                                        ],\n                                        \"drop\": [\n                                            \"all\"\n                                        ]\n                                    },\n                                    \"readOnlyRootFilesystem\": true,\n                                    \"allowPrivilegeEscalation\": false\n                                }\n                            }\n                        ],\n                        \"restartPolicy\": \"Always\",\n                        \"terminationGracePeriodSeconds\": 30,\n                        \"dnsPolicy\": \"Default\",\n                        \"nodeSelector\": {\n                            \"beta.kubernetes.io/os\": \"linux\"\n                        },\n                        \"serviceAccountName\": \"coredns\",\n                        \"serviceAccount\": \"coredns\",\n                        \"securityContext\": {},\n                        \"affinity\": {\n                            \"podAntiAffinity\": {\n                                \"preferredDuringSchedulingIgnoredDuringExecution\": [\n                                    {\n                                        \"weight\": 100,\n  
                                      \"podAffinityTerm\": {\n                                            \"labelSelector\": {\n                                                \"matchExpressions\": [\n                                                    {\n                                                        \"key\": \"k8s-app\",\n                                                        \"operator\": \"In\",\n                                                        \"values\": [\n                                                            \"kube-dns\"\n                                                        ]\n                                                    }\n                                                ]\n                                            },\n                                            \"topologyKey\": \"kubernetes.io/hostname\"\n                                        }\n                                    }\n                                ]\n                            }\n                        },\n                        \"schedulerName\": \"default-scheduler\",\n                        \"tolerations\": [\n                            {\n                                \"key\": \"CriticalAddonsOnly\",\n                                \"operator\": \"Exists\"\n                            }\n                        ],\n                        \"priorityClassName\": \"system-cluster-critical\"\n                    }\n                },\n                \"strategy\": {\n                    \"type\": \"RollingUpdate\",\n                    \"rollingUpdate\": {\n                        \"maxUnavailable\": 1,\n                        \"maxSurge\": \"10%\"\n                    }\n                },\n                \"revisionHistoryLimit\": 10,\n                \"progressDeadlineSeconds\": 600\n            },\n            \"status\": {\n                \"observedGeneration\": 2,\n                \"replicas\": 2,\n                \"updatedReplicas\": 
2,\n                \"readyReplicas\": 2,\n                \"availableReplicas\": 2,\n                \"conditions\": [\n                    {\n                        \"type\": \"Available\",\n                        \"status\": \"True\",\n                        \"lastUpdateTime\": \"2021-06-16T16:17:32Z\",\n                        \"lastTransitionTime\": \"2021-06-16T16:17:32Z\",\n                        \"reason\": \"MinimumReplicasAvailable\",\n                        \"message\": \"Deployment has minimum availability.\"\n                    },\n                    {\n                        \"type\": \"Progressing\",\n                        \"status\": \"True\",\n                        \"lastUpdateTime\": \"2021-06-16T16:17:34Z\",\n                        \"lastTransitionTime\": \"2021-06-16T16:15:44Z\",\n                        \"reason\": \"NewReplicaSetAvailable\",\n                        \"message\": \"ReplicaSet \\\"coredns-f45c4bf76\\\" has successfully progressed.\"\n                    }\n                ]\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-autoscaler\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"7e20abdb-1622-43ff-9a14-1b0eb5c2a304\",\n                \"resourceVersion\": \"751\",\n                \"generation\": 1,\n                \"creationTimestamp\": \"2021-06-16T16:15:32Z\",\n                \"labels\": {\n                    \"addon.kops.k8s.io/name\": \"coredns.addons.k8s.io\",\n                    \"addon.kops.k8s.io/version\": \"1.8.3-kops.3\",\n                    \"app.kubernetes.io/managed-by\": \"kops\",\n                    \"k8s-addon\": \"coredns.addons.k8s.io\",\n                    \"k8s-app\": \"coredns-autoscaler\",\n                    \"kubernetes.io/cluster-service\": \"true\"\n                },\n                \"annotations\": {\n                    \"deployment.kubernetes.io/revision\": \"1\",\n                    
\"kubectl.kubernetes.io/last-applied-configuration\": \"{\\\"apiVersion\\\":\\\"apps/v1\\\",\\\"kind\\\":\\\"Deployment\\\",\\\"metadata\\\":{\\\"annotations\\\":{},\\\"creationTimestamp\\\":null,\\\"labels\\\":{\\\"addon.kops.k8s.io/name\\\":\\\"coredns.addons.k8s.io\\\",\\\"addon.kops.k8s.io/version\\\":\\\"1.8.3-kops.3\\\",\\\"app.kubernetes.io/managed-by\\\":\\\"kops\\\",\\\"k8s-addon\\\":\\\"coredns.addons.k8s.io\\\",\\\"k8s-app\\\":\\\"coredns-autoscaler\\\",\\\"kubernetes.io/cluster-service\\\":\\\"true\\\"},\\\"name\\\":\\\"coredns-autoscaler\\\",\\\"namespace\\\":\\\"kube-system\\\"},\\\"spec\\\":{\\\"selector\\\":{\\\"matchLabels\\\":{\\\"k8s-app\\\":\\\"coredns-autoscaler\\\"}},\\\"template\\\":{\\\"metadata\\\":{\\\"annotations\\\":{\\\"scheduler.alpha.kubernetes.io/critical-pod\\\":\\\"\\\"},\\\"labels\\\":{\\\"k8s-app\\\":\\\"coredns-autoscaler\\\"}},\\\"spec\\\":{\\\"containers\\\":[{\\\"command\\\":[\\\"/cluster-proportional-autoscaler\\\",\\\"--namespace=kube-system\\\",\\\"--configmap=coredns-autoscaler\\\",\\\"--target=Deployment/coredns\\\",\\\"--default-params={\\\\\\\"linear\\\\\\\":{\\\\\\\"coresPerReplica\\\\\\\":256,\\\\\\\"nodesPerReplica\\\\\\\":16,\\\\\\\"preventSinglePointFailure\\\\\\\":true}}\\\",\\\"--logtostderr=true\\\",\\\"--v=2\\\"],\\\"image\\\":\\\"k8s.gcr.io/cpa/cluster-proportional-autoscaler:1.8.3\\\",\\\"name\\\":\\\"autoscaler\\\",\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"10Mi\\\"}}}],\\\"priorityClassName\\\":\\\"system-cluster-critical\\\",\\\"serviceAccountName\\\":\\\"coredns-autoscaler\\\",\\\"tolerations\\\":[{\\\"key\\\":\\\"CriticalAddonsOnly\\\",\\\"operator\\\":\\\"Exists\\\"}]}}}}\\n\"\n                }\n            },\n            \"spec\": {\n                \"replicas\": 1,\n                \"selector\": {\n                    \"matchLabels\": {\n                        \"k8s-app\": \"coredns-autoscaler\"\n                    }\n                },\n                
\"template\": {\n                    \"metadata\": {\n                        \"creationTimestamp\": null,\n                        \"labels\": {\n                            \"k8s-app\": \"coredns-autoscaler\"\n                        },\n                        \"annotations\": {\n                            \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n                        }\n                    },\n                    \"spec\": {\n                        \"containers\": [\n                            {\n                                \"name\": \"autoscaler\",\n                                \"image\": \"k8s.gcr.io/cpa/cluster-proportional-autoscaler:1.8.3\",\n                                \"command\": [\n                                    \"/cluster-proportional-autoscaler\",\n                                    \"--namespace=kube-system\",\n                                    \"--configmap=coredns-autoscaler\",\n                                    \"--target=Deployment/coredns\",\n                                    \"--default-params={\\\"linear\\\":{\\\"coresPerReplica\\\":256,\\\"nodesPerReplica\\\":16,\\\"preventSinglePointFailure\\\":true}}\",\n                                    \"--logtostderr=true\",\n                                    \"--v=2\"\n                                ],\n                                \"resources\": {\n                                    \"requests\": {\n                                        \"cpu\": \"20m\",\n                                        \"memory\": \"10Mi\"\n                                    }\n                                },\n                                \"terminationMessagePath\": \"/dev/termination-log\",\n                                \"terminationMessagePolicy\": \"File\",\n                                \"imagePullPolicy\": \"IfNotPresent\"\n                            }\n                        ],\n                        \"restartPolicy\": \"Always\",\n                      
  \"terminationGracePeriodSeconds\": 30,\n                        \"dnsPolicy\": \"ClusterFirst\",\n                        \"serviceAccountName\": \"coredns-autoscaler\",\n                        \"serviceAccount\": \"coredns-autoscaler\",\n                        \"securityContext\": {},\n                        \"schedulerName\": \"default-scheduler\",\n                        \"tolerations\": [\n                            {\n                                \"key\": \"CriticalAddonsOnly\",\n                                \"operator\": \"Exists\"\n                            }\n                        ],\n                        \"priorityClassName\": \"system-cluster-critical\"\n                    }\n                },\n                \"strategy\": {\n                    \"type\": \"RollingUpdate\",\n                    \"rollingUpdate\": {\n                        \"maxUnavailable\": \"25%\",\n                        \"maxSurge\": \"25%\"\n                    }\n                },\n                \"revisionHistoryLimit\": 10,\n                \"progressDeadlineSeconds\": 600\n            },\n            \"status\": {\n                \"observedGeneration\": 1,\n                \"replicas\": 1,\n                \"updatedReplicas\": 1,\n                \"readyReplicas\": 1,\n                \"availableReplicas\": 1,\n                \"conditions\": [\n                    {\n                        \"type\": \"Available\",\n                        \"status\": \"True\",\n                        \"lastUpdateTime\": \"2021-06-16T16:17:29Z\",\n                        \"lastTransitionTime\": \"2021-06-16T16:17:29Z\",\n                        \"reason\": \"MinimumReplicasAvailable\",\n                        \"message\": \"Deployment has minimum availability.\"\n                    },\n                    {\n                        \"type\": \"Progressing\",\n                        \"status\": \"True\",\n                        \"lastUpdateTime\": 
\"2021-06-16T16:17:29Z\",\n                        \"lastTransitionTime\": \"2021-06-16T16:15:44Z\",\n                        \"reason\": \"NewReplicaSetAvailable\",\n                        \"message\": \"ReplicaSet \\\"coredns-autoscaler-6f594f4c58\\\" has successfully progressed.\"\n                    }\n                ]\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"dns-controller\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"a163a320-f1d0-4762-845c-ffa24ac2aad3\",\n                \"resourceVersion\": \"493\",\n                \"generation\": 1,\n                \"creationTimestamp\": \"2021-06-16T16:15:29Z\",\n                \"labels\": {\n                    \"addon.kops.k8s.io/name\": \"dns-controller.addons.k8s.io\",\n                    \"addon.kops.k8s.io/version\": \"1.21.0-beta.3\",\n                    \"app.kubernetes.io/managed-by\": \"kops\",\n                    \"k8s-addon\": \"dns-controller.addons.k8s.io\",\n                    \"k8s-app\": \"dns-controller\",\n                    \"version\": \"v1.21.0-beta.3\"\n                },\n                \"annotations\": {\n                    \"deployment.kubernetes.io/revision\": \"1\",\n                    \"kubectl.kubernetes.io/last-applied-configuration\": 
\"{\\\"apiVersion\\\":\\\"apps/v1\\\",\\\"kind\\\":\\\"Deployment\\\",\\\"metadata\\\":{\\\"annotations\\\":{},\\\"creationTimestamp\\\":null,\\\"labels\\\":{\\\"addon.kops.k8s.io/name\\\":\\\"dns-controller.addons.k8s.io\\\",\\\"addon.kops.k8s.io/version\\\":\\\"1.21.0-beta.3\\\",\\\"app.kubernetes.io/managed-by\\\":\\\"kops\\\",\\\"k8s-addon\\\":\\\"dns-controller.addons.k8s.io\\\",\\\"k8s-app\\\":\\\"dns-controller\\\",\\\"version\\\":\\\"v1.21.0-beta.3\\\"},\\\"name\\\":\\\"dns-controller\\\",\\\"namespace\\\":\\\"kube-system\\\"},\\\"spec\\\":{\\\"replicas\\\":1,\\\"selector\\\":{\\\"matchLabels\\\":{\\\"k8s-app\\\":\\\"dns-controller\\\"}},\\\"strategy\\\":{\\\"type\\\":\\\"Recreate\\\"},\\\"template\\\":{\\\"metadata\\\":{\\\"annotations\\\":{\\\"scheduler.alpha.kubernetes.io/critical-pod\\\":\\\"\\\"},\\\"labels\\\":{\\\"k8s-addon\\\":\\\"dns-controller.addons.k8s.io\\\",\\\"k8s-app\\\":\\\"dns-controller\\\",\\\"version\\\":\\\"v1.21.0-beta.3\\\"}},\\\"spec\\\":{\\\"containers\\\":[{\\\"command\\\":[\\\"/dns-controller\\\",\\\"--watch-ingress=false\\\",\\\"--dns=aws-route53\\\",\\\"--zone=*/ZEMLNXIIWQ0RV\\\",\\\"--zone=*/*\\\",\\\"-v=2\\\"],\\\"env\\\":[{\\\"name\\\":\\\"KUBERNETES_SERVICE_HOST\\\",\\\"value\\\":\\\"127.0.0.1\\\"}],\\\"image\\\":\\\"k8s.gcr.io/kops/dns-controller:1.21.0-beta.3\\\",\\\"name\\\":\\\"dns-controller\\\",\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"50m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"securityContext\\\":{\\\"runAsNonRoot\\\":true}}],\\\"dnsPolicy\\\":\\\"Default\\\",\\\"hostNetwork\\\":true,\\\"nodeSelector\\\":{\\\"node-role.kubernetes.io/master\\\":\\\"\\\"},\\\"priorityClassName\\\":\\\"system-cluster-critical\\\",\\\"serviceAccount\\\":\\\"dns-controller\\\",\\\"tolerations\\\":[{\\\"operator\\\":\\\"Exists\\\"}]}}}}\\n\"\n                }\n            },\n            \"spec\": {\n                \"replicas\": 1,\n                \"selector\": {\n                    \"matchLabels\": {\n                        
\"k8s-app\": \"dns-controller\"\n                    }\n                },\n                \"template\": {\n                    \"metadata\": {\n                        \"creationTimestamp\": null,\n                        \"labels\": {\n                            \"k8s-addon\": \"dns-controller.addons.k8s.io\",\n                            \"k8s-app\": \"dns-controller\",\n                            \"version\": \"v1.21.0-beta.3\"\n                        },\n                        \"annotations\": {\n                            \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n                        }\n                    },\n                    \"spec\": {\n                        \"containers\": [\n                            {\n                                \"name\": \"dns-controller\",\n                                \"image\": \"k8s.gcr.io/kops/dns-controller:1.21.0-beta.3\",\n                                \"command\": [\n                                    \"/dns-controller\",\n                                    \"--watch-ingress=false\",\n                                    \"--dns=aws-route53\",\n                                    \"--zone=*/ZEMLNXIIWQ0RV\",\n                                    \"--zone=*/*\",\n                                    \"-v=2\"\n                                ],\n                                \"env\": [\n                                    {\n                                        \"name\": \"KUBERNETES_SERVICE_HOST\",\n                                        \"value\": \"127.0.0.1\"\n                                    }\n                                ],\n                                \"resources\": {\n                                    \"requests\": {\n                                        \"cpu\": \"50m\",\n                                        \"memory\": \"50Mi\"\n                                    }\n                                },\n                                
\"terminationMessagePath\": \"/dev/termination-log\",\n                                \"terminationMessagePolicy\": \"File\",\n                                \"imagePullPolicy\": \"IfNotPresent\",\n                                \"securityContext\": {\n                                    \"runAsNonRoot\": true\n                                }\n                            }\n                        ],\n                        \"restartPolicy\": \"Always\",\n                        \"terminationGracePeriodSeconds\": 30,\n                        \"dnsPolicy\": \"Default\",\n                        \"nodeSelector\": {\n                            \"node-role.kubernetes.io/master\": \"\"\n                        },\n                        \"serviceAccountName\": \"dns-controller\",\n                        \"serviceAccount\": \"dns-controller\",\n                        \"hostNetwork\": true,\n                        \"securityContext\": {},\n                        \"schedulerName\": \"default-scheduler\",\n                        \"tolerations\": [\n                            {\n                                \"operator\": \"Exists\"\n                            }\n                        ],\n                        \"priorityClassName\": \"system-cluster-critical\"\n                    }\n                },\n                \"strategy\": {\n                    \"type\": \"Recreate\"\n                },\n                \"revisionHistoryLimit\": 10,\n                \"progressDeadlineSeconds\": 600\n            },\n            \"status\": {\n                \"observedGeneration\": 1,\n                \"replicas\": 1,\n                \"updatedReplicas\": 1,\n                \"readyReplicas\": 1,\n                \"availableReplicas\": 1,\n                \"conditions\": [\n                    {\n                        \"type\": \"Available\",\n                        \"status\": \"True\",\n                        \"lastUpdateTime\": 
\"2021-06-16T16:16:14Z\",\n                        \"lastTransitionTime\": \"2021-06-16T16:16:14Z\",\n                        \"reason\": \"MinimumReplicasAvailable\",\n                        \"message\": \"Deployment has minimum availability.\"\n                    },\n                    {\n                        \"type\": \"Progressing\",\n                        \"status\": \"True\",\n                        \"lastUpdateTime\": \"2021-06-16T16:16:14Z\",\n                        \"lastTransitionTime\": \"2021-06-16T16:15:44Z\",\n                        \"reason\": \"NewReplicaSetAvailable\",\n                        \"message\": \"ReplicaSet \\\"dns-controller-76c6c8f758\\\" has successfully progressed.\"\n                    }\n                ]\n            }\n        }\n    ]\n}\n{\n    \"kind\": \"ReplicaSetList\",\n    \"apiVersion\": \"apps/v1\",\n    \"metadata\": {\n        \"resourceVersion\": \"1463\"\n    },\n    \"items\": [\n        {\n            \"metadata\": {\n                \"name\": \"coredns-autoscaler-6f594f4c58\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"25def703-f6f8-44cd-be46-681181148c6f\",\n                \"resourceVersion\": \"750\",\n                \"generation\": 1,\n                \"creationTimestamp\": \"2021-06-16T16:15:44Z\",\n                \"labels\": {\n                    \"k8s-app\": \"coredns-autoscaler\",\n                    \"pod-template-hash\": \"6f594f4c58\"\n                },\n                \"annotations\": {\n                    \"deployment.kubernetes.io/desired-replicas\": \"1\",\n                    \"deployment.kubernetes.io/max-replicas\": \"2\",\n                    \"deployment.kubernetes.io/revision\": \"1\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"apps/v1\",\n                        \"kind\": \"Deployment\",\n                        \"name\": \"coredns-autoscaler\",\n            
            \"uid\": \"7e20abdb-1622-43ff-9a14-1b0eb5c2a304\",\n                        \"controller\": true,\n                        \"blockOwnerDeletion\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"replicas\": 1,\n                \"selector\": {\n                    \"matchLabels\": {\n                        \"k8s-app\": \"coredns-autoscaler\",\n                        \"pod-template-hash\": \"6f594f4c58\"\n                    }\n                },\n                \"template\": {\n                    \"metadata\": {\n                        \"creationTimestamp\": null,\n                        \"labels\": {\n                            \"k8s-app\": \"coredns-autoscaler\",\n                            \"pod-template-hash\": \"6f594f4c58\"\n                        },\n                        \"annotations\": {\n                            \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n                        }\n                    },\n                    \"spec\": {\n                        \"containers\": [\n                            {\n                                \"name\": \"autoscaler\",\n                                \"image\": \"k8s.gcr.io/cpa/cluster-proportional-autoscaler:1.8.3\",\n                                \"command\": [\n                                    \"/cluster-proportional-autoscaler\",\n                                    \"--namespace=kube-system\",\n                                    \"--configmap=coredns-autoscaler\",\n                                    \"--target=Deployment/coredns\",\n                                    \"--default-params={\\\"linear\\\":{\\\"coresPerReplica\\\":256,\\\"nodesPerReplica\\\":16,\\\"preventSinglePointFailure\\\":true}}\",\n                                    \"--logtostderr=true\",\n                                    \"--v=2\"\n                                ],\n                                \"resources\": {\n          
                          \"requests\": {\n                                        \"cpu\": \"20m\",\n                                        \"memory\": \"10Mi\"\n                                    }\n                                },\n                                \"terminationMessagePath\": \"/dev/termination-log\",\n                                \"terminationMessagePolicy\": \"File\",\n                                \"imagePullPolicy\": \"IfNotPresent\"\n                            }\n                        ],\n                        \"restartPolicy\": \"Always\",\n                        \"terminationGracePeriodSeconds\": 30,\n                        \"dnsPolicy\": \"ClusterFirst\",\n                        \"serviceAccountName\": \"coredns-autoscaler\",\n                        \"serviceAccount\": \"coredns-autoscaler\",\n                        \"securityContext\": {},\n                        \"schedulerName\": \"default-scheduler\",\n                        \"tolerations\": [\n                            {\n                                \"key\": \"CriticalAddonsOnly\",\n                                \"operator\": \"Exists\"\n                            }\n                        ],\n                        \"priorityClassName\": \"system-cluster-critical\"\n                    }\n                }\n            },\n            \"status\": {\n                \"replicas\": 1,\n                \"fullyLabeledReplicas\": 1,\n                \"readyReplicas\": 1,\n                \"availableReplicas\": 1,\n                \"observedGeneration\": 1\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-f45c4bf76\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"5dccc43b-c547-4563-9b9d-1f2d22c48fe7\",\n                \"resourceVersion\": \"798\",\n                \"generation\": 2,\n                \"creationTimestamp\": \"2021-06-16T16:15:44Z\",\n                
\"labels\": {\n                    \"k8s-app\": \"kube-dns\",\n                    \"pod-template-hash\": \"f45c4bf76\"\n                },\n                \"annotations\": {\n                    \"deployment.kubernetes.io/desired-replicas\": \"2\",\n                    \"deployment.kubernetes.io/max-replicas\": \"3\",\n                    \"deployment.kubernetes.io/revision\": \"1\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"apps/v1\",\n                        \"kind\": \"Deployment\",\n                        \"name\": \"coredns\",\n                        \"uid\": \"6a72b885-f8eb-4dd7-89fb-bdb03dda727d\",\n                        \"controller\": true,\n                        \"blockOwnerDeletion\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"replicas\": 2,\n                \"selector\": {\n                    \"matchLabels\": {\n                        \"k8s-app\": \"kube-dns\",\n                        \"pod-template-hash\": \"f45c4bf76\"\n                    }\n                },\n                \"template\": {\n                    \"metadata\": {\n                        \"creationTimestamp\": null,\n                        \"labels\": {\n                            \"k8s-app\": \"kube-dns\",\n                            \"pod-template-hash\": \"f45c4bf76\"\n                        }\n                    },\n                    \"spec\": {\n                        \"volumes\": [\n                            {\n                                \"name\": \"config-volume\",\n                                \"configMap\": {\n                                    \"name\": \"coredns\",\n                                    \"items\": [\n                                        {\n                                            \"key\": \"Corefile\",\n                                            \"path\": \"Corefile\"\n          
                              }\n                                    ],\n                                    \"defaultMode\": 420\n                                }\n                            }\n                        ],\n                        \"containers\": [\n                            {\n                                \"name\": \"coredns\",\n                                \"image\": \"coredns/coredns:1.8.3\",\n                                \"args\": [\n                                    \"-conf\",\n                                    \"/etc/coredns/Corefile\"\n                                ],\n                                \"ports\": [\n                                    {\n                                        \"name\": \"dns\",\n                                        \"containerPort\": 53,\n                                        \"protocol\": \"UDP\"\n                                    },\n                                    {\n                                        \"name\": \"dns-tcp\",\n                                        \"containerPort\": 53,\n                                        \"protocol\": \"TCP\"\n                                    },\n                                    {\n                                        \"name\": \"metrics\",\n                                        \"containerPort\": 9153,\n                                        \"protocol\": \"TCP\"\n                                    }\n                                ],\n                                \"resources\": {\n                                    \"limits\": {\n                                        \"memory\": \"170Mi\"\n                                    },\n                                    \"requests\": {\n                                        \"cpu\": \"100m\",\n                                        \"memory\": \"70Mi\"\n                                    }\n                                },\n                                
\"volumeMounts\": [\n                                    {\n                                        \"name\": \"config-volume\",\n                                        \"readOnly\": true,\n                                        \"mountPath\": \"/etc/coredns\"\n                                    }\n                                ],\n                                \"livenessProbe\": {\n                                    \"httpGet\": {\n                                        \"path\": \"/health\",\n                                        \"port\": 8080,\n                                        \"scheme\": \"HTTP\"\n                                    },\n                                    \"initialDelaySeconds\": 60,\n                                    \"timeoutSeconds\": 5,\n                                    \"periodSeconds\": 10,\n                                    \"successThreshold\": 1,\n                                    \"failureThreshold\": 5\n                                },\n                                \"readinessProbe\": {\n                                    \"httpGet\": {\n                                        \"path\": \"/ready\",\n                                        \"port\": 8181,\n                                        \"scheme\": \"HTTP\"\n                                    },\n                                    \"timeoutSeconds\": 1,\n                                    \"periodSeconds\": 10,\n                                    \"successThreshold\": 1,\n                                    \"failureThreshold\": 3\n                                },\n                                \"terminationMessagePath\": \"/dev/termination-log\",\n                                \"terminationMessagePolicy\": \"File\",\n                                \"imagePullPolicy\": \"IfNotPresent\",\n                                \"securityContext\": {\n                                    \"capabilities\": {\n                                  
      \"add\": [\n                                            \"NET_BIND_SERVICE\"\n                                        ],\n                                        \"drop\": [\n                                            \"all\"\n                                        ]\n                                    },\n                                    \"readOnlyRootFilesystem\": true,\n                                    \"allowPrivilegeEscalation\": false\n                                }\n                            }\n                        ],\n                        \"restartPolicy\": \"Always\",\n                        \"terminationGracePeriodSeconds\": 30,\n                        \"dnsPolicy\": \"Default\",\n                        \"nodeSelector\": {\n                            \"beta.kubernetes.io/os\": \"linux\"\n                        },\n                        \"serviceAccountName\": \"coredns\",\n                        \"serviceAccount\": \"coredns\",\n                        \"securityContext\": {},\n                        \"affinity\": {\n                            \"podAntiAffinity\": {\n                                \"preferredDuringSchedulingIgnoredDuringExecution\": [\n                                    {\n                                        \"weight\": 100,\n                                        \"podAffinityTerm\": {\n                                            \"labelSelector\": {\n                                                \"matchExpressions\": [\n                                                    {\n                                                        \"key\": \"k8s-app\",\n                                                        \"operator\": \"In\",\n                                                        \"values\": [\n                                                            \"kube-dns\"\n                                                        ]\n                                                    }\n          
                                      ]\n                                            },\n                                            \"topologyKey\": \"kubernetes.io/hostname\"\n                                        }\n                                    }\n                                ]\n                            }\n                        },\n                        \"schedulerName\": \"default-scheduler\",\n                        \"tolerations\": [\n                            {\n                                \"key\": \"CriticalAddonsOnly\",\n                                \"operator\": \"Exists\"\n                            }\n                        ],\n                        \"priorityClassName\": \"system-cluster-critical\"\n                    }\n                }\n            },\n            \"status\": {\n                \"replicas\": 2,\n                \"fullyLabeledReplicas\": 2,\n                \"readyReplicas\": 2,\n                \"availableReplicas\": 2,\n                \"observedGeneration\": 2\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"dns-controller-76c6c8f758\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"c15c62a1-08bd-4cee-ae90-8a0ef44ba8b8\",\n                \"resourceVersion\": \"492\",\n                \"generation\": 1,\n                \"creationTimestamp\": \"2021-06-16T16:15:44Z\",\n                \"labels\": {\n                    \"k8s-addon\": \"dns-controller.addons.k8s.io\",\n                    \"k8s-app\": \"dns-controller\",\n                    \"pod-template-hash\": \"76c6c8f758\",\n                    \"version\": \"v1.21.0-beta.3\"\n                },\n                \"annotations\": {\n                    \"deployment.kubernetes.io/desired-replicas\": \"1\",\n                    \"deployment.kubernetes.io/max-replicas\": \"1\",\n                    \"deployment.kubernetes.io/revision\": \"1\"\n                },\n        
        \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"apps/v1\",\n                        \"kind\": \"Deployment\",\n                        \"name\": \"dns-controller\",\n                        \"uid\": \"a163a320-f1d0-4762-845c-ffa24ac2aad3\",\n                        \"controller\": true,\n                        \"blockOwnerDeletion\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"replicas\": 1,\n                \"selector\": {\n                    \"matchLabels\": {\n                        \"k8s-app\": \"dns-controller\",\n                        \"pod-template-hash\": \"76c6c8f758\"\n                    }\n                },\n                \"template\": {\n                    \"metadata\": {\n                        \"creationTimestamp\": null,\n                        \"labels\": {\n                            \"k8s-addon\": \"dns-controller.addons.k8s.io\",\n                            \"k8s-app\": \"dns-controller\",\n                            \"pod-template-hash\": \"76c6c8f758\",\n                            \"version\": \"v1.21.0-beta.3\"\n                        },\n                        \"annotations\": {\n                            \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n                        }\n                    },\n                    \"spec\": {\n                        \"containers\": [\n                            {\n                                \"name\": \"dns-controller\",\n                                \"image\": \"k8s.gcr.io/kops/dns-controller:1.21.0-beta.3\",\n                                \"command\": [\n                                    \"/dns-controller\",\n                                    \"--watch-ingress=false\",\n                                    \"--dns=aws-route53\",\n                                    \"--zone=*/ZEMLNXIIWQ0RV\",\n                                    \"--zone=*/*\",\n     
                               \"-v=2\"\n                                ],\n                                \"env\": [\n                                    {\n                                        \"name\": \"KUBERNETES_SERVICE_HOST\",\n                                        \"value\": \"127.0.0.1\"\n                                    }\n                                ],\n                                \"resources\": {\n                                    \"requests\": {\n                                        \"cpu\": \"50m\",\n                                        \"memory\": \"50Mi\"\n                                    }\n                                },\n                                \"terminationMessagePath\": \"/dev/termination-log\",\n                                \"terminationMessagePolicy\": \"File\",\n                                \"imagePullPolicy\": \"IfNotPresent\",\n                                \"securityContext\": {\n                                    \"runAsNonRoot\": true\n                                }\n                            }\n                        ],\n                        \"restartPolicy\": \"Always\",\n                        \"terminationGracePeriodSeconds\": 30,\n                        \"dnsPolicy\": \"Default\",\n                        \"nodeSelector\": {\n                            \"node-role.kubernetes.io/master\": \"\"\n                        },\n                        \"serviceAccountName\": \"dns-controller\",\n                        \"serviceAccount\": \"dns-controller\",\n                        \"hostNetwork\": true,\n                        \"securityContext\": {},\n                        \"schedulerName\": \"default-scheduler\",\n                        \"tolerations\": [\n                            {\n                                \"operator\": \"Exists\"\n                            }\n                        ],\n                        \"priorityClassName\": 
\"system-cluster-critical\"\n                    }\n                }\n            },\n            \"status\": {\n                \"replicas\": 1,\n                \"fullyLabeledReplicas\": 1,\n                \"readyReplicas\": 1,\n                \"availableReplicas\": 1,\n                \"observedGeneration\": 1\n            }\n        }\n    ]\n}\n{\n    \"kind\": \"PodList\",\n    \"apiVersion\": \"v1\",\n    \"metadata\": {\n        \"resourceVersion\": \"1469\"\n    },\n    \"items\": [\n        {\n            \"metadata\": {\n                \"name\": \"coredns-autoscaler-6f594f4c58-cds4r\",\n                \"generateName\": \"coredns-autoscaler-6f594f4c58-\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"611ad708-344a-4a9f-9ba5-c3e0a042b01c\",\n                \"resourceVersion\": \"749\",\n                \"creationTimestamp\": \"2021-06-16T16:15:44Z\",\n                \"labels\": {\n                    \"k8s-app\": \"coredns-autoscaler\",\n                    \"pod-template-hash\": \"6f594f4c58\"\n                },\n                \"annotations\": {\n                    \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"apps/v1\",\n                        \"kind\": \"ReplicaSet\",\n                        \"name\": \"coredns-autoscaler-6f594f4c58\",\n                        \"uid\": \"25def703-f6f8-44cd-be46-681181148c6f\",\n                        \"controller\": true,\n                        \"blockOwnerDeletion\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"kube-api-access-xmnrd\",\n                        \"projected\": {\n                            \"sources\": [\n                                {\n                                    
\"serviceAccountToken\": {\n                                        \"expirationSeconds\": 3607,\n                                        \"path\": \"token\"\n                                    }\n                                },\n                                {\n                                    \"configMap\": {\n                                        \"name\": \"kube-root-ca.crt\",\n                                        \"items\": [\n                                            {\n                                                \"key\": \"ca.crt\",\n                                                \"path\": \"ca.crt\"\n                                            }\n                                        ]\n                                    }\n                                },\n                                {\n                                    \"downwardAPI\": {\n                                        \"items\": [\n                                            {\n                                                \"path\": \"namespace\",\n                                                \"fieldRef\": {\n                                                    \"apiVersion\": \"v1\",\n                                                    \"fieldPath\": \"metadata.namespace\"\n                                                }\n                                            }\n                                        ]\n                                    }\n                                }\n                            ],\n                            \"defaultMode\": 420\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"autoscaler\",\n                        \"image\": \"k8s.gcr.io/cpa/cluster-proportional-autoscaler:1.8.3\",\n                        \"command\": [\n                            \"/cluster-proportional-autoscaler\",\n                      
      \"--namespace=kube-system\",\n                            \"--configmap=coredns-autoscaler\",\n                            \"--target=Deployment/coredns\",\n                            \"--default-params={\\\"linear\\\":{\\\"coresPerReplica\\\":256,\\\"nodesPerReplica\\\":16,\\\"preventSinglePointFailure\\\":true}}\",\n                            \"--logtostderr=true\",\n                            \"--v=2\"\n                        ],\n                        \"resources\": {\n                            \"requests\": {\n                                \"cpu\": \"20m\",\n                                \"memory\": \"10Mi\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"kube-api-access-xmnrd\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n                            }\n                        ],\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\"\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"ClusterFirst\",\n                \"serviceAccountName\": \"coredns-autoscaler\",\n                \"serviceAccount\": \"coredns-autoscaler\",\n                \"nodeName\": \"ip-172-20-58-250.eu-west-1.compute.internal\",\n                \"securityContext\": {},\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"key\": \"CriticalAddonsOnly\",\n                        \"operator\": \"Exists\"\n                    },\n                    {\n                     
   \"key\": \"node.kubernetes.io/not-ready\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\",\n                        \"tolerationSeconds\": 300\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/unreachable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\",\n                        \"tolerationSeconds\": 300\n                    }\n                ],\n                \"priorityClassName\": \"system-cluster-critical\",\n                \"priority\": 2000000000,\n                \"enableServiceLinks\": true,\n                \"preemptionPolicy\": \"PreemptLowerPriority\"\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-06-16T16:17:26Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-06-16T16:17:29Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-06-16T16:17:29Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-06-16T16:17:26Z\"\n                    }\n                ],\n                \"hostIP\": 
\"172.20.58.250\",\n                \"podIP\": \"100.96.2.2\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"100.96.2.2\"\n                    }\n                ],\n                \"startTime\": \"2021-06-16T16:17:26Z\",\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"autoscaler\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2021-06-16T16:17:29Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"k8s.gcr.io/cpa/cluster-proportional-autoscaler:1.8.3\",\n                        \"imageID\": \"docker-pullable://k8s.gcr.io/cpa/cluster-proportional-autoscaler@sha256:67640771ad9fc56f109d5b01e020f0c858e7c890bb0eb15ba0ebd325df3285e7\",\n                        \"containerID\": \"docker://69f1a717021e73e0a1724690dc21c8d4fcffdae161a36afe10a5f053fb619cc7\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Burstable\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-f45c4bf76-45v5f\",\n                \"generateName\": \"coredns-f45c4bf76-\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"8ed76d88-825f-4f49-90e4-079f78e2318f\",\n                \"resourceVersion\": \"780\",\n                \"creationTimestamp\": \"2021-06-16T16:15:44Z\",\n                \"labels\": {\n                    \"k8s-app\": \"kube-dns\",\n                    \"pod-template-hash\": \"f45c4bf76\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"apps/v1\",\n                        \"kind\": \"ReplicaSet\",\n        
                \"name\": \"coredns-f45c4bf76\",\n                        \"uid\": \"5dccc43b-c547-4563-9b9d-1f2d22c48fe7\",\n                        \"controller\": true,\n                        \"blockOwnerDeletion\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"config-volume\",\n                        \"configMap\": {\n                            \"name\": \"coredns\",\n                            \"items\": [\n                                {\n                                    \"key\": \"Corefile\",\n                                    \"path\": \"Corefile\"\n                                }\n                            ],\n                            \"defaultMode\": 420\n                        }\n                    },\n                    {\n                        \"name\": \"kube-api-access-wx76c\",\n                        \"projected\": {\n                            \"sources\": [\n                                {\n                                    \"serviceAccountToken\": {\n                                        \"expirationSeconds\": 3607,\n                                        \"path\": \"token\"\n                                    }\n                                },\n                                {\n                                    \"configMap\": {\n                                        \"name\": \"kube-root-ca.crt\",\n                                        \"items\": [\n                                            {\n                                                \"key\": \"ca.crt\",\n                                                \"path\": \"ca.crt\"\n                                            }\n                                        ]\n                                    }\n                                },\n                                {\n                                    
\"downwardAPI\": {\n                                        \"items\": [\n                                            {\n                                                \"path\": \"namespace\",\n                                                \"fieldRef\": {\n                                                    \"apiVersion\": \"v1\",\n                                                    \"fieldPath\": \"metadata.namespace\"\n                                                }\n                                            }\n                                        ]\n                                    }\n                                }\n                            ],\n                            \"defaultMode\": 420\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"coredns\",\n                        \"image\": \"coredns/coredns:1.8.3\",\n                        \"args\": [\n                            \"-conf\",\n                            \"/etc/coredns/Corefile\"\n                        ],\n                        \"ports\": [\n                            {\n                                \"name\": \"dns\",\n                                \"containerPort\": 53,\n                                \"protocol\": \"UDP\"\n                            },\n                            {\n                                \"name\": \"dns-tcp\",\n                                \"containerPort\": 53,\n                                \"protocol\": \"TCP\"\n                            },\n                            {\n                                \"name\": \"metrics\",\n                                \"containerPort\": 9153,\n                                \"protocol\": \"TCP\"\n                            }\n                        ],\n                        \"resources\": {\n                            \"limits\": {\n                                
\"memory\": \"170Mi\"\n                            },\n                            \"requests\": {\n                                \"cpu\": \"100m\",\n                                \"memory\": \"70Mi\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"config-volume\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/etc/coredns\"\n                            },\n                            {\n                                \"name\": \"kube-api-access-wx76c\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n                            }\n                        ],\n                        \"livenessProbe\": {\n                            \"httpGet\": {\n                                \"path\": \"/health\",\n                                \"port\": 8080,\n                                \"scheme\": \"HTTP\"\n                            },\n                            \"initialDelaySeconds\": 60,\n                            \"timeoutSeconds\": 5,\n                            \"periodSeconds\": 10,\n                            \"successThreshold\": 1,\n                            \"failureThreshold\": 5\n                        },\n                        \"readinessProbe\": {\n                            \"httpGet\": {\n                                \"path\": \"/ready\",\n                                \"port\": 8181,\n                                \"scheme\": \"HTTP\"\n                            },\n                            \"timeoutSeconds\": 1,\n                            \"periodSeconds\": 10,\n                            \"successThreshold\": 1,\n                            \"failureThreshold\": 3\n                        },\n                        
\"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\",\n                        \"securityContext\": {\n                            \"capabilities\": {\n                                \"add\": [\n                                    \"NET_BIND_SERVICE\"\n                                ],\n                                \"drop\": [\n                                    \"all\"\n                                ]\n                            },\n                            \"readOnlyRootFilesystem\": true,\n                            \"allowPrivilegeEscalation\": false\n                        }\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"Default\",\n                \"nodeSelector\": {\n                    \"beta.kubernetes.io/os\": \"linux\"\n                },\n                \"serviceAccountName\": \"coredns\",\n                \"serviceAccount\": \"coredns\",\n                \"nodeName\": \"ip-172-20-57-162.eu-west-1.compute.internal\",\n                \"securityContext\": {},\n                \"affinity\": {\n                    \"podAntiAffinity\": {\n                        \"preferredDuringSchedulingIgnoredDuringExecution\": [\n                            {\n                                \"weight\": 100,\n                                \"podAffinityTerm\": {\n                                    \"labelSelector\": {\n                                        \"matchExpressions\": [\n                                            {\n                                                \"key\": \"k8s-app\",\n                                                \"operator\": \"In\",\n                                                \"values\": [\n                                                    \"kube-dns\"\n 
                                               ]\n                                            }\n                                        ]\n                                    },\n                                    \"topologyKey\": \"kubernetes.io/hostname\"\n                                }\n                            }\n                        ]\n                    }\n                },\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"key\": \"CriticalAddonsOnly\",\n                        \"operator\": \"Exists\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/not-ready\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\",\n                        \"tolerationSeconds\": 300\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/unreachable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\",\n                        \"tolerationSeconds\": 300\n                    }\n                ],\n                \"priorityClassName\": \"system-cluster-critical\",\n                \"priority\": 2000000000,\n                \"enableServiceLinks\": true,\n                \"preemptionPolicy\": \"PreemptLowerPriority\"\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-06-16T16:17:26Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n              
          \"lastTransitionTime\": \"2021-06-16T16:17:32Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-06-16T16:17:32Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-06-16T16:17:26Z\"\n                    }\n                ],\n                \"hostIP\": \"172.20.57.162\",\n                \"podIP\": \"100.96.1.2\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"100.96.1.2\"\n                    }\n                ],\n                \"startTime\": \"2021-06-16T16:17:26Z\",\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"coredns\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2021-06-16T16:17:30Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"coredns/coredns:1.8.3\",\n                        \"imageID\": \"docker-pullable://coredns/coredns@sha256:642ff9910da6ea9a8624b0234eef52af9ca75ecbec474c5507cb096bdfbae4e5\",\n                        \"containerID\": \"docker://7665ec66c3dbc925ce8a51c1fc81cc2ddb8b5873689e9093ed7bb134cbb74e40\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Burstable\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-f45c4bf76-hf5wf\",\n           
     \"generateName\": \"coredns-f45c4bf76-\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"987c7d49-1b9e-4048-a3ff-3cd241bcc6d6\",\n                \"resourceVersion\": \"797\",\n                \"creationTimestamp\": \"2021-06-16T16:17:29Z\",\n                \"labels\": {\n                    \"k8s-app\": \"kube-dns\",\n                    \"pod-template-hash\": \"f45c4bf76\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"apps/v1\",\n                        \"kind\": \"ReplicaSet\",\n                        \"name\": \"coredns-f45c4bf76\",\n                        \"uid\": \"5dccc43b-c547-4563-9b9d-1f2d22c48fe7\",\n                        \"controller\": true,\n                        \"blockOwnerDeletion\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"config-volume\",\n                        \"configMap\": {\n                            \"name\": \"coredns\",\n                            \"items\": [\n                                {\n                                    \"key\": \"Corefile\",\n                                    \"path\": \"Corefile\"\n                                }\n                            ],\n                            \"defaultMode\": 420\n                        }\n                    },\n                    {\n                        \"name\": \"kube-api-access-2vpqn\",\n                        \"projected\": {\n                            \"sources\": [\n                                {\n                                    \"serviceAccountToken\": {\n                                        \"expirationSeconds\": 3607,\n                                        \"path\": \"token\"\n                                    }\n                                },\n                                
{\n                                    \"configMap\": {\n                                        \"name\": \"kube-root-ca.crt\",\n                                        \"items\": [\n                                            {\n                                                \"key\": \"ca.crt\",\n                                                \"path\": \"ca.crt\"\n                                            }\n                                        ]\n                                    }\n                                },\n                                {\n                                    \"downwardAPI\": {\n                                        \"items\": [\n                                            {\n                                                \"path\": \"namespace\",\n                                                \"fieldRef\": {\n                                                    \"apiVersion\": \"v1\",\n                                                    \"fieldPath\": \"metadata.namespace\"\n                                                }\n                                            }\n                                        ]\n                                    }\n                                }\n                            ],\n                            \"defaultMode\": 420\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"coredns\",\n                        \"image\": \"coredns/coredns:1.8.3\",\n                        \"args\": [\n                            \"-conf\",\n                            \"/etc/coredns/Corefile\"\n                        ],\n                        \"ports\": [\n                            {\n                                \"name\": \"dns\",\n                                \"containerPort\": 53,\n                                \"protocol\": \"UDP\"\n                            },\n     
                       {\n                                \"name\": \"dns-tcp\",\n                                \"containerPort\": 53,\n                                \"protocol\": \"TCP\"\n                            },\n                            {\n                                \"name\": \"metrics\",\n                                \"containerPort\": 9153,\n                                \"protocol\": \"TCP\"\n                            }\n                        ],\n                        \"resources\": {\n                            \"limits\": {\n                                \"memory\": \"170Mi\"\n                            },\n                            \"requests\": {\n                                \"cpu\": \"100m\",\n                                \"memory\": \"70Mi\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"config-volume\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/etc/coredns\"\n                            },\n                            {\n                                \"name\": \"kube-api-access-2vpqn\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n                            }\n                        ],\n                        \"livenessProbe\": {\n                            \"httpGet\": {\n                                \"path\": \"/health\",\n                                \"port\": 8080,\n                                \"scheme\": \"HTTP\"\n                            },\n                            \"initialDelaySeconds\": 60,\n                            \"timeoutSeconds\": 5,\n                            \"periodSeconds\": 10,\n                            \"successThreshold\": 1,\n                            
\"failureThreshold\": 5\n                        },\n                        \"readinessProbe\": {\n                            \"httpGet\": {\n                                \"path\": \"/ready\",\n                                \"port\": 8181,\n                                \"scheme\": \"HTTP\"\n                            },\n                            \"timeoutSeconds\": 1,\n                            \"periodSeconds\": 10,\n                            \"successThreshold\": 1,\n                            \"failureThreshold\": 3\n                        },\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\",\n                        \"securityContext\": {\n                            \"capabilities\": {\n                                \"add\": [\n                                    \"NET_BIND_SERVICE\"\n                                ],\n                                \"drop\": [\n                                    \"all\"\n                                ]\n                            },\n                            \"readOnlyRootFilesystem\": true,\n                            \"allowPrivilegeEscalation\": false\n                        }\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"Default\",\n                \"nodeSelector\": {\n                    \"beta.kubernetes.io/os\": \"linux\"\n                },\n                \"serviceAccountName\": \"coredns\",\n                \"serviceAccount\": \"coredns\",\n                \"nodeName\": \"ip-172-20-58-250.eu-west-1.compute.internal\",\n                \"securityContext\": {},\n                \"affinity\": {\n                    \"podAntiAffinity\": {\n                        
\"preferredDuringSchedulingIgnoredDuringExecution\": [\n                            {\n                                \"weight\": 100,\n                                \"podAffinityTerm\": {\n                                    \"labelSelector\": {\n                                        \"matchExpressions\": [\n                                            {\n                                                \"key\": \"k8s-app\",\n                                                \"operator\": \"In\",\n                                                \"values\": [\n                                                    \"kube-dns\"\n                                                ]\n                                            }\n                                        ]\n                                    },\n                                    \"topologyKey\": \"kubernetes.io/hostname\"\n                                }\n                            }\n                        ]\n                    }\n                },\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"key\": \"CriticalAddonsOnly\",\n                        \"operator\": \"Exists\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/not-ready\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\",\n                        \"tolerationSeconds\": 300\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/unreachable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\",\n                        \"tolerationSeconds\": 300\n                    }\n                ],\n                \"priorityClassName\": \"system-cluster-critical\",\n                \"priority\": 2000000000,\n                \"enableServiceLinks\": 
true,\n                \"preemptionPolicy\": \"PreemptLowerPriority\"\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-06-16T16:17:29Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-06-16T16:17:34Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-06-16T16:17:34Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-06-16T16:17:29Z\"\n                    }\n                ],\n                \"hostIP\": \"172.20.58.250\",\n                \"podIP\": \"100.96.2.3\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"100.96.2.3\"\n                    }\n                ],\n                \"startTime\": \"2021-06-16T16:17:29Z\",\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"coredns\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2021-06-16T16:17:33Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n    
                    \"restartCount\": 0,\n                        \"image\": \"coredns/coredns:1.8.3\",\n                        \"imageID\": \"docker-pullable://coredns/coredns@sha256:642ff9910da6ea9a8624b0234eef52af9ca75ecbec474c5507cb096bdfbae4e5\",\n                        \"containerID\": \"docker://fa4cfe883ac437870adaddf474afe21fe40ff5df3b15195370eb37fd99a31bf6\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Burstable\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"dns-controller-76c6c8f758-gn5kz\",\n                \"generateName\": \"dns-controller-76c6c8f758-\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"ccceda86-a9ea-4ce9-8d16-9e9c17aff5e3\",\n                \"resourceVersion\": \"491\",\n                \"creationTimestamp\": \"2021-06-16T16:15:44Z\",\n                \"labels\": {\n                    \"k8s-addon\": \"dns-controller.addons.k8s.io\",\n                    \"k8s-app\": \"dns-controller\",\n                    \"pod-template-hash\": \"76c6c8f758\",\n                    \"version\": \"v1.21.0-beta.3\"\n                },\n                \"annotations\": {\n                    \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"apps/v1\",\n                        \"kind\": \"ReplicaSet\",\n                        \"name\": \"dns-controller-76c6c8f758\",\n                        \"uid\": \"c15c62a1-08bd-4cee-ae90-8a0ef44ba8b8\",\n                        \"controller\": true,\n                        \"blockOwnerDeletion\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"kube-api-access-tdppj\",\n                        
\"projected\": {\n                            \"sources\": [\n                                {\n                                    \"serviceAccountToken\": {\n                                        \"expirationSeconds\": 3607,\n                                        \"path\": \"token\"\n                                    }\n                                },\n                                {\n                                    \"configMap\": {\n                                        \"name\": \"kube-root-ca.crt\",\n                                        \"items\": [\n                                            {\n                                                \"key\": \"ca.crt\",\n                                                \"path\": \"ca.crt\"\n                                            }\n                                        ]\n                                    }\n                                },\n                                {\n                                    \"downwardAPI\": {\n                                        \"items\": [\n                                            {\n                                                \"path\": \"namespace\",\n                                                \"fieldRef\": {\n                                                    \"apiVersion\": \"v1\",\n                                                    \"fieldPath\": \"metadata.namespace\"\n                                                }\n                                            }\n                                        ]\n                                    }\n                                }\n                            ],\n                            \"defaultMode\": 420\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"dns-controller\",\n                        \"image\": 
\"k8s.gcr.io/kops/dns-controller:1.21.0-beta.3\",\n                        \"command\": [\n                            \"/dns-controller\",\n                            \"--watch-ingress=false\",\n                            \"--dns=aws-route53\",\n                            \"--zone=*/ZEMLNXIIWQ0RV\",\n                            \"--zone=*/*\",\n                            \"-v=2\"\n                        ],\n                        \"env\": [\n                            {\n                                \"name\": \"KUBERNETES_SERVICE_HOST\",\n                                \"value\": \"127.0.0.1\"\n                            }\n                        ],\n                        \"resources\": {\n                            \"requests\": {\n                                \"cpu\": \"50m\",\n                                \"memory\": \"50Mi\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"kube-api-access-tdppj\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n                            }\n                        ],\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\",\n                        \"securityContext\": {\n                            \"runAsNonRoot\": true\n                        }\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"Default\",\n                \"nodeSelector\": {\n                    \"node-role.kubernetes.io/master\": \"\"\n                },\n                \"serviceAccountName\": \"dns-controller\",\n             
   \"serviceAccount\": \"dns-controller\",\n                \"nodeName\": \"ip-172-20-37-218.eu-west-1.compute.internal\",\n                \"hostNetwork\": true,\n                \"securityContext\": {},\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"operator\": \"Exists\"\n                    }\n                ],\n                \"priorityClassName\": \"system-cluster-critical\",\n                \"priority\": 2000000000,\n                \"enableServiceLinks\": true,\n                \"preemptionPolicy\": \"PreemptLowerPriority\"\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-06-16T16:16:12Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-06-16T16:16:14Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-06-16T16:16:14Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-06-16T16:15:44Z\"\n                    }\n                ],\n                \"hostIP\": \"172.20.37.218\",\n                \"podIP\": \"172.20.37.218\",\n                \"podIPs\": [\n                 
   {\n                        \"ip\": \"172.20.37.218\"\n                    }\n                ],\n                \"startTime\": \"2021-06-16T16:16:12Z\",\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"dns-controller\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2021-06-16T16:16:13Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"k8s.gcr.io/kops/dns-controller:1.21.0-beta.3\",\n                        \"imageID\": \"docker://sha256:bebf7534f80433bb59256899b08a5aa4c0b7207ba5a2fb5f89f6918765a441ab\",\n                        \"containerID\": \"docker://31c1d80044f6a0f7d46c2cb2bef96d02b9f9f07b61155bbc26be79457e8224f1\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Burstable\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"etcd-manager-events-ip-172-20-37-218.eu-west-1.compute.internal\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"f95b0ec9-1836-4bde-b660-de652bc3c928\",\n                \"resourceVersion\": \"513\",\n                \"creationTimestamp\": \"2021-06-16T16:16:14Z\",\n                \"labels\": {\n                    \"k8s-app\": \"etcd-manager-events\"\n                },\n                \"annotations\": {\n                    \"kubernetes.io/config.hash\": \"88a348369e7d4a47ee4b37c7442d9712\",\n                    \"kubernetes.io/config.mirror\": \"88a348369e7d4a47ee4b37c7442d9712\",\n                    \"kubernetes.io/config.seen\": \"2021-06-16T16:14:33.947328704Z\",\n                    \"kubernetes.io/config.source\": \"file\",\n                    
\"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"v1\",\n                        \"kind\": \"Node\",\n                        \"name\": \"ip-172-20-37-218.eu-west-1.compute.internal\",\n                        \"uid\": \"0a98987f-4d9c-452e-a692-5a74200a8951\",\n                        \"controller\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"rootfs\",\n                        \"hostPath\": {\n                            \"path\": \"/\",\n                            \"type\": \"Directory\"\n                        }\n                    },\n                    {\n                        \"name\": \"run\",\n                        \"hostPath\": {\n                            \"path\": \"/run\",\n                            \"type\": \"DirectoryOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": \"pki\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/kubernetes/pki/etcd-manager-events\",\n                            \"type\": \"DirectoryOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": \"varlogetcd\",\n                        \"hostPath\": {\n                            \"path\": \"/var/log/etcd-events.log\",\n                            \"type\": \"FileOrCreate\"\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"etcd-manager\",\n                        \"image\": \"k8s.gcr.io/etcdadm/etcd-manager:3.0.20210430\",\n                        \"command\": [\n                            \"/bin/sh\",\n                         
   \"-c\",\n                            \"mkfifo /tmp/pipe; (tee -a /var/log/etcd.log \\u003c /tmp/pipe \\u0026 ) ; exec /etcd-manager --backup-store=s3://k8s-kops-prow/e2e-9c20857a72-da63e.test-cncf-aws.k8s.io/backups/etcd/events --client-urls=https://__name__:4002 --cluster-name=etcd-events --containerized=true --dns-suffix=.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io --etcd-insecure=false --grpc-port=3997 --insecure=false --peer-urls=https://__name__:2381 --quarantine-client-urls=https://__name__:3995 --v=6 --volume-name-tag=k8s.io/etcd/events --volume-provider=aws --volume-tag=k8s.io/etcd/events --volume-tag=k8s.io/role/master=1 --volume-tag=kubernetes.io/cluster/e2e-9c20857a72-da63e.test-cncf-aws.k8s.io=owned \\u003e /tmp/pipe 2\\u003e\\u00261\"\n                        ],\n                        \"resources\": {\n                            \"requests\": {\n                                \"cpu\": \"100m\",\n                                \"memory\": \"100Mi\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"rootfs\",\n                                \"mountPath\": \"/rootfs\"\n                            },\n                            {\n                                \"name\": \"run\",\n                                \"mountPath\": \"/run\"\n                            },\n                            {\n                                \"name\": \"pki\",\n                                \"mountPath\": \"/etc/kubernetes/pki/etcd-manager\"\n                            },\n                            {\n                                \"name\": \"varlogetcd\",\n                                \"mountPath\": \"/var/log/etcd.log\"\n                            }\n                        ],\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": 
\"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\",\n                        \"securityContext\": {\n                            \"privileged\": true\n                        }\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"ClusterFirst\",\n                \"nodeName\": \"ip-172-20-37-218.eu-west-1.compute.internal\",\n                \"hostNetwork\": true,\n                \"hostPID\": true,\n                \"securityContext\": {},\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"key\": \"CriticalAddonsOnly\",\n                        \"operator\": \"Exists\"\n                    },\n                    {\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    }\n                ],\n                \"priorityClassName\": \"system-cluster-critical\",\n                \"priority\": 2000000000,\n                \"enableServiceLinks\": true,\n                \"preemptionPolicy\": \"PreemptLowerPriority\"\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-06-16T16:14:34Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-06-16T16:15:01Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        
\"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-06-16T16:15:01Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-06-16T16:14:34Z\"\n                    }\n                ],\n                \"hostIP\": \"172.20.37.218\",\n                \"podIP\": \"172.20.37.218\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"172.20.37.218\"\n                    }\n                ],\n                \"startTime\": \"2021-06-16T16:14:34Z\",\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"etcd-manager\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2021-06-16T16:15:00Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"k8s.gcr.io/etcdadm/etcd-manager:3.0.20210430\",\n                        \"imageID\": \"docker-pullable://k8s.gcr.io/etcdadm/etcd-manager@sha256:ebb73d3d4a99da609f9e01c556cd9f9aa7a0aecba8f5bc5588d7c45eb38e3a7e\",\n                        \"containerID\": \"docker://8fcc0f4053d553050903de41887750934896fc620ace63d89f78b476d7fa8b20\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Burstable\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"etcd-manager-main-ip-172-20-37-218.eu-west-1.compute.internal\",\n                \"namespace\": \"kube-system\",\n                \"uid\": 
\"9f109656-c3ac-4b39-95cd-1acb1f8a4d70\",\n                \"resourceVersion\": \"514\",\n                \"creationTimestamp\": \"2021-06-16T16:16:23Z\",\n                \"labels\": {\n                    \"k8s-app\": \"etcd-manager-main\"\n                },\n                \"annotations\": {\n                    \"kubernetes.io/config.hash\": \"7a69b887c0307b3c160a98aedb1d44bd\",\n                    \"kubernetes.io/config.mirror\": \"7a69b887c0307b3c160a98aedb1d44bd\",\n                    \"kubernetes.io/config.seen\": \"2021-06-16T16:14:33.947356809Z\",\n                    \"kubernetes.io/config.source\": \"file\",\n                    \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"v1\",\n                        \"kind\": \"Node\",\n                        \"name\": \"ip-172-20-37-218.eu-west-1.compute.internal\",\n                        \"uid\": \"0a98987f-4d9c-452e-a692-5a74200a8951\",\n                        \"controller\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"rootfs\",\n                        \"hostPath\": {\n                            \"path\": \"/\",\n                            \"type\": \"Directory\"\n                        }\n                    },\n                    {\n                        \"name\": \"run\",\n                        \"hostPath\": {\n                            \"path\": \"/run\",\n                            \"type\": \"DirectoryOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": \"pki\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/kubernetes/pki/etcd-manager-main\",\n                            \"type\": \"DirectoryOrCreate\"\n              
          }\n                    },\n                    {\n                        \"name\": \"varlogetcd\",\n                        \"hostPath\": {\n                            \"path\": \"/var/log/etcd.log\",\n                            \"type\": \"FileOrCreate\"\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"etcd-manager\",\n                        \"image\": \"k8s.gcr.io/etcdadm/etcd-manager:3.0.20210430\",\n                        \"command\": [\n                            \"/bin/sh\",\n                            \"-c\",\n                            \"mkfifo /tmp/pipe; (tee -a /var/log/etcd.log \\u003c /tmp/pipe \\u0026 ) ; exec /etcd-manager --backup-store=s3://k8s-kops-prow/e2e-9c20857a72-da63e.test-cncf-aws.k8s.io/backups/etcd/main --client-urls=https://__name__:4001 --cluster-name=etcd --containerized=true --dns-suffix=.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io --etcd-insecure=false --grpc-port=3996 --insecure=false --peer-urls=https://__name__:2380 --quarantine-client-urls=https://__name__:3994 --v=6 --volume-name-tag=k8s.io/etcd/main --volume-provider=aws --volume-tag=k8s.io/etcd/main --volume-tag=k8s.io/role/master=1 --volume-tag=kubernetes.io/cluster/e2e-9c20857a72-da63e.test-cncf-aws.k8s.io=owned \\u003e /tmp/pipe 2\\u003e\\u00261\"\n                        ],\n                        \"resources\": {\n                            \"requests\": {\n                                \"cpu\": \"200m\",\n                                \"memory\": \"100Mi\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"rootfs\",\n                                \"mountPath\": \"/rootfs\"\n                            },\n                            {\n                                \"name\": \"run\",\n             
                   \"mountPath\": \"/run\"\n                            },\n                            {\n                                \"name\": \"pki\",\n                                \"mountPath\": \"/etc/kubernetes/pki/etcd-manager\"\n                            },\n                            {\n                                \"name\": \"varlogetcd\",\n                                \"mountPath\": \"/var/log/etcd.log\"\n                            }\n                        ],\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\",\n                        \"securityContext\": {\n                            \"privileged\": true\n                        }\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"ClusterFirst\",\n                \"nodeName\": \"ip-172-20-37-218.eu-west-1.compute.internal\",\n                \"hostNetwork\": true,\n                \"hostPID\": true,\n                \"securityContext\": {},\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"key\": \"CriticalAddonsOnly\",\n                        \"operator\": \"Exists\"\n                    },\n                    {\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    }\n                ],\n                \"priorityClassName\": \"system-cluster-critical\",\n                \"priority\": 2000000000,\n                \"enableServiceLinks\": true,\n                \"preemptionPolicy\": \"PreemptLowerPriority\"\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                 
   {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-06-16T16:14:34Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-06-16T16:15:01Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-06-16T16:15:01Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-06-16T16:14:34Z\"\n                    }\n                ],\n                \"hostIP\": \"172.20.37.218\",\n                \"podIP\": \"172.20.37.218\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"172.20.37.218\"\n                    }\n                ],\n                \"startTime\": \"2021-06-16T16:14:34Z\",\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"etcd-manager\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2021-06-16T16:15:00Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"k8s.gcr.io/etcdadm/etcd-manager:3.0.20210430\",\n                        \"imageID\": 
\"docker-pullable://k8s.gcr.io/etcdadm/etcd-manager@sha256:ebb73d3d4a99da609f9e01c556cd9f9aa7a0aecba8f5bc5588d7c45eb38e3a7e\",\n                        \"containerID\": \"docker://b8ab8a1133f944ffc250d05653921b472dd6810aedfce767fd783e1a454f0823\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Burstable\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kops-controller-qtzrq\",\n                \"generateName\": \"kops-controller-\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"33e24b26-5adf-4de9-b4ae-5fe89e5ec085\",\n                \"resourceVersion\": \"525\",\n                \"creationTimestamp\": \"2021-06-16T16:16:26Z\",\n                \"labels\": {\n                    \"controller-revision-hash\": \"567cbd5fc\",\n                    \"k8s-addon\": \"kops-controller.addons.k8s.io\",\n                    \"k8s-app\": \"kops-controller\",\n                    \"pod-template-generation\": \"1\",\n                    \"version\": \"v1.21.0-beta.3\"\n                },\n                \"annotations\": {\n                    \"dns.alpha.kubernetes.io/internal\": \"kops-controller.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"apps/v1\",\n                        \"kind\": \"DaemonSet\",\n                        \"name\": \"kops-controller\",\n                        \"uid\": \"7e41d1c7-bbb2-466d-a4e9-1813ace7d6e3\",\n                        \"controller\": true,\n                        \"blockOwnerDeletion\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"kops-controller-config\",\n                        \"configMap\": {\n                            
\"name\": \"kops-controller\",\n                            \"defaultMode\": 420\n                        }\n                    },\n                    {\n                        \"name\": \"kops-controller-pki\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/kubernetes/kops-controller/\",\n                            \"type\": \"Directory\"\n                        }\n                    },\n                    {\n                        \"name\": \"kube-api-access-6ngbp\",\n                        \"projected\": {\n                            \"sources\": [\n                                {\n                                    \"serviceAccountToken\": {\n                                        \"expirationSeconds\": 3607,\n                                        \"path\": \"token\"\n                                    }\n                                },\n                                {\n                                    \"configMap\": {\n                                        \"name\": \"kube-root-ca.crt\",\n                                        \"items\": [\n                                            {\n                                                \"key\": \"ca.crt\",\n                                                \"path\": \"ca.crt\"\n                                            }\n                                        ]\n                                    }\n                                },\n                                {\n                                    \"downwardAPI\": {\n                                        \"items\": [\n                                            {\n                                                \"path\": \"namespace\",\n                                                \"fieldRef\": {\n                                                    \"apiVersion\": \"v1\",\n                                                    \"fieldPath\": \"metadata.namespace\"\n                   
                             }\n                                            }\n                                        ]\n                                    }\n                                }\n                            ],\n                            \"defaultMode\": 420\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"kops-controller\",\n                        \"image\": \"k8s.gcr.io/kops/kops-controller:1.21.0-beta.3\",\n                        \"command\": [\n                            \"/kops-controller\",\n                            \"--v=2\",\n                            \"--conf=/etc/kubernetes/kops-controller/config/config.yaml\"\n                        ],\n                        \"env\": [\n                            {\n                                \"name\": \"KUBERNETES_SERVICE_HOST\",\n                                \"value\": \"127.0.0.1\"\n                            }\n                        ],\n                        \"resources\": {\n                            \"requests\": {\n                                \"cpu\": \"50m\",\n                                \"memory\": \"50Mi\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"kops-controller-config\",\n                                \"mountPath\": \"/etc/kubernetes/kops-controller/config/\"\n                            },\n                            {\n                                \"name\": \"kops-controller-pki\",\n                                \"mountPath\": \"/etc/kubernetes/kops-controller/pki/\"\n                            },\n                            {\n                                \"name\": \"kube-api-access-6ngbp\",\n                                \"readOnly\": true,\n                                \"mountPath\": 
\"/var/run/secrets/kubernetes.io/serviceaccount\"\n                            }\n                        ],\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\",\n                        \"securityContext\": {\n                            \"runAsNonRoot\": true\n                        }\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"Default\",\n                \"nodeSelector\": {\n                    \"kops.k8s.io/kops-controller-pki\": \"\",\n                    \"node-role.kubernetes.io/master\": \"\"\n                },\n                \"serviceAccountName\": \"kops-controller\",\n                \"serviceAccount\": \"kops-controller\",\n                \"nodeName\": \"ip-172-20-37-218.eu-west-1.compute.internal\",\n                \"hostNetwork\": true,\n                \"securityContext\": {},\n                \"affinity\": {\n                    \"nodeAffinity\": {\n                        \"requiredDuringSchedulingIgnoredDuringExecution\": {\n                            \"nodeSelectorTerms\": [\n                                {\n                                    \"matchFields\": [\n                                        {\n                                            \"key\": \"metadata.name\",\n                                            \"operator\": \"In\",\n                                            \"values\": [\n                                                \"ip-172-20-37-218.eu-west-1.compute.internal\"\n                                            ]\n                                        }\n                                    ]\n                                }\n                            ]\n                        }\n                    }\n                
},\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"key\": \"node-role.kubernetes.io/master\",\n                        \"operator\": \"Exists\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/not-ready\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/unreachable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/disk-pressure\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/memory-pressure\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/pid-pressure\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/unschedulable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/network-unavailable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    }\n                ],\n                \"priorityClassName\": \"system-node-critical\",\n                \"priority\": 2000001000,\n                \"enableServiceLinks\": true,\n      
          \"preemptionPolicy\": \"PreemptLowerPriority\"\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-06-16T16:16:26Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-06-16T16:16:27Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-06-16T16:16:27Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-06-16T16:16:26Z\"\n                    }\n                ],\n                \"hostIP\": \"172.20.37.218\",\n                \"podIP\": \"172.20.37.218\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"172.20.37.218\"\n                    }\n                ],\n                \"startTime\": \"2021-06-16T16:16:26Z\",\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"kops-controller\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2021-06-16T16:16:27Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n   
                     \"restartCount\": 0,\n                        \"image\": \"k8s.gcr.io/kops/kops-controller:1.21.0-beta.3\",\n                        \"imageID\": \"docker://sha256:03e664f1a2b283bf0a6dec7c648ca64d0142e1d9bde86a4aa38cb6c0e569a04c\",\n                        \"containerID\": \"docker://fe3fe3428cb06e447377c8f77ae3795f25a4535fefa20e1cc97ecfe733220aba\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Burstable\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-apiserver-ip-172-20-37-218.eu-west-1.compute.internal\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"18d4c491-3c06-4cd9-8587-e51e84e3619f\",\n                \"resourceVersion\": \"569\",\n                \"creationTimestamp\": \"2021-06-16T16:16:34Z\",\n                \"labels\": {\n                    \"k8s-app\": \"kube-apiserver\"\n                },\n                \"annotations\": {\n                    \"dns.alpha.kubernetes.io/external\": \"api.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io\",\n                    \"dns.alpha.kubernetes.io/internal\": \"api.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io\",\n                    \"kubernetes.io/config.hash\": \"82ff5ee49e1c7fab4ef4824ce5f23dbe\",\n                    \"kubernetes.io/config.mirror\": \"82ff5ee49e1c7fab4ef4824ce5f23dbe\",\n                    \"kubernetes.io/config.seen\": \"2021-06-16T16:14:33.947358856Z\",\n                    \"kubernetes.io/config.source\": \"file\",\n                    \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"v1\",\n                        \"kind\": \"Node\",\n                        \"name\": \"ip-172-20-37-218.eu-west-1.compute.internal\",\n                        \"uid\": 
\"0a98987f-4d9c-452e-a692-5a74200a8951\",\n                        \"controller\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"logfile\",\n                        \"hostPath\": {\n                            \"path\": \"/var/log/kube-apiserver.log\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"etcssl\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/ssl\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"etcpkitls\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/pki/tls\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"etcpkica-trust\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/pki/ca-trust\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"usrsharessl\",\n                        \"hostPath\": {\n                            \"path\": \"/usr/share/ssl\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"usrssl\",\n                        \"hostPath\": {\n                            \"path\": \"/usr/ssl\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"usrlibssl\",\n                        \"hostPath\": {\n                            \"path\": \"/usr/lib/ssl\",\n                
            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"usrlocalopenssl\",\n                        \"hostPath\": {\n                            \"path\": \"/usr/local/openssl\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"varssl\",\n                        \"hostPath\": {\n                            \"path\": \"/var/ssl\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"etcopenssl\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/openssl\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"pki\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/kubernetes/pki/kube-apiserver\",\n                            \"type\": \"DirectoryOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": \"cloudconfig\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/kubernetes/cloud.config\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"srvkube\",\n                        \"hostPath\": {\n                            \"path\": \"/srv/kubernetes\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"srvsshproxy\",\n                        \"hostPath\": {\n                            \"path\": \"/srv/sshproxy\",\n                            \"type\": \"\"\n                        }\n                    
},\n                    {\n                        \"name\": \"healthcheck-secrets\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/kubernetes/kube-apiserver-healthcheck/secrets\",\n                            \"type\": \"Directory\"\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"kube-apiserver\",\n                        \"image\": \"k8s.gcr.io/kube-apiserver-amd64:v1.21.1\",\n                        \"command\": [\n                            \"/usr/local/bin/kube-apiserver\"\n                        ],\n                        \"args\": [\n                            \"--allow-privileged=true\",\n                            \"--anonymous-auth=false\",\n                            \"--api-audiences=kubernetes.svc.default\",\n                            \"--apiserver-count=1\",\n                            \"--authorization-mode=Node,RBAC\",\n                            \"--bind-address=0.0.0.0\",\n                            \"--client-ca-file=/srv/kubernetes/ca.crt\",\n                            \"--cloud-config=/etc/kubernetes/cloud.config\",\n                            \"--cloud-provider=aws\",\n                            \"--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,NodeRestriction,ResourceQuota\",\n                            \"--etcd-cafile=/etc/kubernetes/pki/kube-apiserver/etcd-ca.crt\",\n                            \"--etcd-certfile=/etc/kubernetes/pki/kube-apiserver/etcd-client.crt\",\n                            \"--etcd-keyfile=/etc/kubernetes/pki/kube-apiserver/etcd-client.key\",\n                            \"--etcd-servers-overrides=/events#https://127.0.0.1:4002\",\n                            \"--etcd-servers=https://127.0.0.1:4001\",\n       
                     \"--insecure-port=0\",\n                            \"--kubelet-client-certificate=/srv/kubernetes/kubelet-api.crt\",\n                            \"--kubelet-client-key=/srv/kubernetes/kubelet-api.key\",\n                            \"--kubelet-preferred-address-types=InternalIP,Hostname,ExternalIP\",\n                            \"--proxy-client-cert-file=/srv/kubernetes/apiserver-aggregator.crt\",\n                            \"--proxy-client-key-file=/srv/kubernetes/apiserver-aggregator.key\",\n                            \"--requestheader-allowed-names=aggregator\",\n                            \"--requestheader-client-ca-file=/srv/kubernetes/apiserver-aggregator-ca.crt\",\n                            \"--requestheader-extra-headers-prefix=X-Remote-Extra-\",\n                            \"--requestheader-group-headers=X-Remote-Group\",\n                            \"--requestheader-username-headers=X-Remote-User\",\n                            \"--secure-port=443\",\n                            \"--service-account-issuer=https://api.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io\",\n                            \"--service-account-jwks-uri=https://api.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io/openid/v1/jwks\",\n                            \"--service-account-key-file=/srv/kubernetes/service-account.key\",\n                            \"--service-account-signing-key-file=/srv/kubernetes/service-account.key\",\n                            \"--service-cluster-ip-range=100.64.0.0/13\",\n                            \"--storage-backend=etcd3\",\n                            \"--tls-cert-file=/srv/kubernetes/server.crt\",\n                            \"--tls-private-key-file=/srv/kubernetes/server.key\",\n                            \"--v=2\",\n                            \"--logtostderr=false\",\n                            \"--alsologtostderr\",\n                            \"--log-file=/var/log/kube-apiserver.log\"\n                 
       ],\n                        \"ports\": [\n                            {\n                                \"name\": \"https\",\n                                \"hostPort\": 443,\n                                \"containerPort\": 443,\n                                \"protocol\": \"TCP\"\n                            }\n                        ],\n                        \"resources\": {\n                            \"requests\": {\n                                \"cpu\": \"150m\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"logfile\",\n                                \"mountPath\": \"/var/log/kube-apiserver.log\"\n                            },\n                            {\n                                \"name\": \"etcssl\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/etc/ssl\"\n                            },\n                            {\n                                \"name\": \"etcpkitls\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/etc/pki/tls\"\n                            },\n                            {\n                                \"name\": \"etcpkica-trust\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/etc/pki/ca-trust\"\n                            },\n                            {\n                                \"name\": \"usrsharessl\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/usr/share/ssl\"\n                            },\n                            {\n                                \"name\": \"usrssl\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/usr/ssl\"\n                            },\n       
                     {\n                                \"name\": \"usrlibssl\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/usr/lib/ssl\"\n                            },\n                            {\n                                \"name\": \"usrlocalopenssl\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/usr/local/openssl\"\n                            },\n                            {\n                                \"name\": \"varssl\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/ssl\"\n                            },\n                            {\n                                \"name\": \"etcopenssl\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/etc/openssl\"\n                            },\n                            {\n                                \"name\": \"pki\",\n                                \"mountPath\": \"/etc/kubernetes/pki/kube-apiserver\"\n                            },\n                            {\n                                \"name\": \"cloudconfig\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/etc/kubernetes/cloud.config\"\n                            },\n                            {\n                                \"name\": \"srvkube\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/srv/kubernetes\"\n                            },\n                            {\n                                \"name\": \"srvsshproxy\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/srv/sshproxy\"\n                            }\n                        ],\n                        \"livenessProbe\": {\n                            
\"httpGet\": {\n                                \"path\": \"/healthz\",\n                                \"port\": 3990,\n                                \"host\": \"127.0.0.1\",\n                                \"scheme\": \"HTTP\"\n                            },\n                            \"initialDelaySeconds\": 45,\n                            \"timeoutSeconds\": 15,\n                            \"periodSeconds\": 10,\n                            \"successThreshold\": 1,\n                            \"failureThreshold\": 3\n                        },\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\"\n                    },\n                    {\n                        \"name\": \"healthcheck\",\n                        \"image\": \"k8s.gcr.io/kops/kube-apiserver-healthcheck:1.21.0-beta.3\",\n                        \"command\": [\n                            \"/kube-apiserver-healthcheck\"\n                        ],\n                        \"args\": [\n                            \"--ca-cert=/secrets/ca.crt\",\n                            \"--client-cert=/secrets/client.crt\",\n                            \"--client-key=/secrets/client.key\"\n                        ],\n                        \"resources\": {},\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"healthcheck-secrets\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/secrets\"\n                            }\n                        ],\n                        \"livenessProbe\": {\n                            \"httpGet\": {\n                                \"path\": \"/.kube-apiserver-healthcheck/healthz\",\n                                \"port\": 3990,\n                                \"host\": \"127.0.0.1\",\n 
                               \"scheme\": \"HTTP\"\n                            },\n                            \"initialDelaySeconds\": 5,\n                            \"timeoutSeconds\": 5,\n                            \"periodSeconds\": 10,\n                            \"successThreshold\": 1,\n                            \"failureThreshold\": 3\n                        },\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\"\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"ClusterFirst\",\n                \"nodeName\": \"ip-172-20-37-218.eu-west-1.compute.internal\",\n                \"hostNetwork\": true,\n                \"securityContext\": {},\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"key\": \"CriticalAddonsOnly\",\n                        \"operator\": \"Exists\"\n                    },\n                    {\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    }\n                ],\n                \"priorityClassName\": \"system-cluster-critical\",\n                \"priority\": 2000000000,\n                \"enableServiceLinks\": true,\n                \"preemptionPolicy\": \"PreemptLowerPriority\"\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-06-16T16:14:34Z\"\n                    },\n                    
{\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-06-16T16:15:13Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-06-16T16:15:13Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-06-16T16:14:34Z\"\n                    }\n                ],\n                \"hostIP\": \"172.20.37.218\",\n                \"podIP\": \"172.20.37.218\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"172.20.37.218\"\n                    }\n                ],\n                \"startTime\": \"2021-06-16T16:14:34Z\",\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"healthcheck\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2021-06-16T16:14:51Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"k8s.gcr.io/kops/kube-apiserver-healthcheck:1.21.0-beta.3\",\n                        \"imageID\": \"docker://sha256:8f1b6b9c8119e7ae9ef5bfd069a32ae146aa1805298486542f80a31207712a4e\",\n                        \"containerID\": \"docker://5da83b0991b1c458d12e6cbf397140998c662367c5c5e78adc62971547136294\",\n                        \"started\": true\n                    },\n                  
  {
                        "name": "kube-apiserver",
                        "state": {
                            "running": {
                                "startedAt": "2021-06-16T16:15:12Z"
                            }
                        },
                        "lastState": {
                            "terminated": {
                                "exitCode": 1,
                                "reason": "Error",
                                "startedAt": "2021-06-16T16:14:51Z",
                                "finishedAt": "2021-06-16T16:15:11Z",
                                "containerID": "docker://fb0e0f82080f60bcffcf587287e515aea5037acaa782dc67293e30dda919abcf"
                            }
                        },
                        "ready": true,
                        "restartCount": 1,
                        "image": "k8s.gcr.io/kube-apiserver-amd64:v1.21.1",
                        "imageID": "docker://sha256:771ffcf9ca634e37cbd3202fd86bd7e2df48ecba4067d1992541bfa00e88a9bb",
                        "containerID": "docker://980237ab91442c53e887743fa1f48a82a0b374d7ee92883d8b48b3698d7ccf59",
                        "started": true
                    }
                ],
                "qosClass": "Burstable"
            }
        },
        {
            "metadata": {
                "name": "kube-controller-manager-ip-172-20-37-218.eu-west-1.compute.internal",
                "namespace": "kube-system",
                "uid": "3b9dd4fd-81f0-4078-b9d0-b79ad6bd6997",
                "resourceVersion": "515",
                "creationTimestamp": "2021-06-16T16:16:19Z",
                "labels": {
                    "k8s-app": "kube-controller-manager"
                },
                "annotations": {
                    "kubernetes.io/config.hash": "c2279bd798151f0625a3e9b9c56d097d",
                    "kubernetes.io/config.mirror": "c2279bd798151f0625a3e9b9c56d097d",
                    "kubernetes.io/config.seen": "2021-06-16T16:14:33.947360524Z",
                    "kubernetes.io/config.source": "file",
                    "scheduler.alpha.kubernetes.io/critical-pod": ""
                },
                "ownerReferences": [
                    {
                        "apiVersion": "v1",
                        "kind": "Node",
                        "name": "ip-172-20-37-218.eu-west-1.compute.internal",
                        "uid": "0a98987f-4d9c-452e-a692-5a74200a8951",
                        "controller": true
                    }
                ]
            },
            "spec": {
                "volumes": [
                    {
                        "name": "logfile",
                        "hostPath": {
                            "path": "/var/log/kube-controller-manager.log",
                            "type": ""
                        }
                    },
                    {
                        "name": "etcssl",
                        "hostPath": {
                            "path": "/etc/ssl",
                            "type": ""
                        }
                    },
                    {
                        "name": "etcpkitls",
                        "hostPath": {
                            "path": "/etc/pki/tls",
                            "type": ""
                        }
                    },
                    {
                        "name": "etcpkica-trust",
                        "hostPath": {
                            "path": "/etc/pki/ca-trust",
                            "type": ""
                        }
                    },
                    {
                        "name": "usrsharessl",
                        "hostPath": {
                            "path": "/usr/share/ssl",
                            "type": ""
                        }
                    },
                    {
                        "name": "usrssl",
                        "hostPath": {
                            "path": "/usr/ssl",
                            "type": ""
                        }
                    },
                    {
                        "name": "usrlibssl",
                        "hostPath": {
                            "path": "/usr/lib/ssl",
                            "type": ""
                        }
                    },
                    {
                        "name": "usrlocalopenssl",
                        "hostPath": {
                            "path": "/usr/local/openssl",
                            "type": ""
                        }
                    },
                    {
                        "name": "varssl",
                        "hostPath": {
                            "path": "/var/ssl",
                            "type": ""
                        }
                    },
                    {
                        "name": "etcopenssl",
                        "hostPath": {
                            "path": "/etc/openssl",
                            "type": ""
                        }
                    },
                    {
                        "name": "cloudconfig",
                        "hostPath": {
                            "path": "/etc/kubernetes/cloud.config",
                            "type": ""
                        }
                    },
                    {
                        "name": "srvkube",
                        "hostPath": {
                            "path": "/srv/kubernetes",
                            "type": ""
                        }
                    },
                    {
                        "name": "varlibkcm",
                        "hostPath": {
                            "path": "/var/lib/kube-controller-manager",
                            "type": ""
                        }
                    },
                    {
                        "name": "volplugins",
                        "hostPath": {
                            "path": "/usr/libexec/kubernetes/kubelet-plugins/volume/exec/",
                            "type": ""
                        }
                    }
                ],
                "containers": [
                    {
                        "name": "kube-controller-manager",
                        "image": "k8s.gcr.io/kube-controller-manager-amd64:v1.21.1",
                        "command": [
                            "/usr/local/bin/kube-controller-manager"
                        ],
                        "args": [
                            "--allocate-node-cidrs=true",
                            "--attach-detach-reconcile-sync-period=1m0s",
                            "--cloud-config=/etc/kubernetes/cloud.config",
                            "--cloud-provider=aws",
                            "--cluster-cidr=100.96.0.0/11",
                            "--cluster-name=e2e-9c20857a72-da63e.test-cncf-aws.k8s.io",
                            "--cluster-signing-cert-file=/srv/kubernetes/ca.crt",
                            "--cluster-signing-key-file=/srv/kubernetes/ca.key",
                            "--configure-cloud-routes=false",
                            "--flex-volume-plugin-dir=/usr/libexec/kubernetes/kubelet-plugins/volume/exec/",
                            "--kubeconfig=/var/lib/kube-controller-manager/kubeconfig",
                            "--leader-elect=true",
                            "--root-ca-file=/srv/kubernetes/ca.crt",
                            "--service-account-private-key-file=/srv/kubernetes/service-account.key",
                            "--use-service-account-credentials=true",
                            "--v=2",
                            "--logtostderr=false",
                            "--alsologtostderr",
                            "--log-file=/var/log/kube-controller-manager.log"
                        ],
                        "resources": {
                            "requests": {
                                "cpu": "100m"
                            }
                        },
                        "volumeMounts": [
                            {
                                "name": "logfile",
                                "mountPath": "/var/log/kube-controller-manager.log"
                            },
                            {
                                "name": "etcssl",
                                "readOnly": true,
                                "mountPath": "/etc/ssl"
                            },
                            {
                                "name": "etcpkitls",
                                "readOnly": true,
                                "mountPath": "/etc/pki/tls"
                            },
                            {
                                "name": "etcpkica-trust",
                                "readOnly": true,
                                "mountPath": "/etc/pki/ca-trust"
                            },
                            {
                                "name": "usrsharessl",
                                "readOnly": true,
                                "mountPath": "/usr/share/ssl"
                            },
                            {
                                "name": "usrssl",
                                "readOnly": true,
                                "mountPath": "/usr/ssl"
                            },
                            {
                                "name": "usrlibssl",
                                "readOnly": true,
                                "mountPath": "/usr/lib/ssl"
                            },
                            {
                                "name": "usrlocalopenssl",
                                "readOnly": true,
                                "mountPath": "/usr/local/openssl"
                            },
                            {
                                "name": "varssl",
                                "readOnly": true,
                                "mountPath": "/var/ssl"
                            },
                            {
                                "name": "etcopenssl",
                                "readOnly": true,
                                "mountPath": "/etc/openssl"
                            },
                            {
                                "name": "cloudconfig",
                                "readOnly": true,
                                "mountPath": "/etc/kubernetes/cloud.config"
                            },
                            {
                                "name": "srvkube",
                                "readOnly": true,
                                "mountPath": "/srv/kubernetes"
                            },
                            {
                                "name": "varlibkcm",
                                "readOnly": true,
                                "mountPath": "/var/lib/kube-controller-manager"
                            },
                            {
                                "name": "volplugins",
                                "mountPath": "/usr/libexec/kubernetes/kubelet-plugins/volume/exec/"
                            }
                        ],
                        "livenessProbe": {
                            "httpGet": {
                                "path": "/healthz",
                                "port": 10252,
                                "host": "127.0.0.1",
                                "scheme": "HTTP"
                            },
                            "initialDelaySeconds": 15,
                            "timeoutSeconds": 15,
                            "periodSeconds": 10,
                            "successThreshold": 1,
                            "failureThreshold": 3
                        },
                        "terminationMessagePath": "/dev/termination-log",
                        "terminationMessagePolicy": "File",
                        "imagePullPolicy": "IfNotPresent"
                    }
                ],
                "restartPolicy": "Always",
                "terminationGracePeriodSeconds": 30,
                "dnsPolicy": "ClusterFirst",
                "nodeName": "ip-172-20-37-218.eu-west-1.compute.internal",
                "hostNetwork": true,
                "securityContext": {},
                "schedulerName": "default-scheduler",
                "tolerations": [
                    {
                        "key": "CriticalAddonsOnly",
                        "operator": "Exists"
                    },
                    {
                        "operator": "Exists",
                        "effect": "NoExecute"
                    }
                ],
                "priorityClassName": "system-cluster-critical",
                "priority": 2000000000,
                "enableServiceLinks": true,
                "preemptionPolicy": "PreemptLowerPriority"
            },
            "status": {
                "phase": "Running",
                "conditions": [
                    {
                        "type": "Initialized",
                        "status": "True",
                        "lastProbeTime": null,
                        "lastTransitionTime": "2021-06-16T16:14:34Z"
                    },
                    {
                        "type": "Ready",
                        "status": "True",
                        "lastProbeTime": null,
                        "lastTransitionTime": "2021-06-16T16:14:52Z"
                    },
                    {
                        "type": "ContainersReady",
                        "status": "True",
                        "lastProbeTime": null,
                        "lastTransitionTime": "2021-06-16T16:14:52Z"
                    },
                    {
                        "type": "PodScheduled",
                        "status": "True",
                        "lastProbeTime": null,
                        "lastTransitionTime": "2021-06-16T16:14:34Z"
                    }
                ],
                "hostIP": "172.20.37.218",
                "podIP": "172.20.37.218",
                "podIPs": [
                    {
                        "ip": "172.20.37.218"
                    }
                ],
                "startTime": "2021-06-16T16:14:34Z",
                "containerStatuses": [
                    {
                        "name": "kube-controller-manager",
                        "state": {
                            "running": {
                                "startedAt": "2021-06-16T16:14:51Z"
                            }
                        },
                        "lastState": {},
                        "ready": true,
                        "restartCount": 0,
                        "image": "k8s.gcr.io/kube-controller-manager-amd64:v1.21.1",
                        "imageID": "docker://sha256:e16544fd47b02fea6201a1c39f0ffae170968b6dd48ac2643c4db3cab0011ed4",
                        "containerID": "docker://272134961e17fff2e48861d214aab7009bc7ab9a080badcf721c5d66bc0b4d04",
                        "started": true
                    }
                ],
                "qosClass": "Burstable"
            }
        },
        {
            "metadata": {
                "name": "kube-flannel-ds-kg5tn",
                "generateName": "kube-flannel-ds-",
                "namespace": "kube-system",
                "uid": "7a669429-6ad6-4a43-95c2-fd00a01253a0",
                "resourceVersion": "711",
                "creationTimestamp": "2021-06-16T16:17:16Z",
                "labels": {
                    "app": "flannel",
                    "controller-revision-hash": "7f578449d6",
                    "pod-template-generation": "1",
                    "tier": "node"
                },
                "ownerReferences": [
                    {
                        "apiVersion": "apps/v1",
                        "kind": "DaemonSet",
                        "name": "kube-flannel-ds",
                        "uid": "25ee1840-c0aa-4ca2-bee8-7cce7cd2b5e1",
                        "controller": true,
                        "blockOwnerDeletion": true
                    }
                ]
            },
            "spec": {
                "volumes": [
                    {
                        "name": "run",
                        "hostPath": {
                            "path": "/run/flannel",
                            "type": ""
                        }
                    },
                    {
                        "name": "dev-net",
                        "hostPath": {
                            "path": "/dev/net",
                            "type": ""
                        }
                    },
                    {
                        "name": "cni",
                        "hostPath": {
                            "path": "/etc/cni/net.d",
                            "type": ""
                        }
                    },
                    {
                        "name": "flannel-cfg",
                        "configMap": {
                            "name": "kube-flannel-cfg",
                            "defaultMode": 420
                        }
                    },
                    {
                        "name": "kube-api-access-dnp8m",
                        "projected": {
                            "sources": [
                                {
                                    "serviceAccountToken": {
                                        "expirationSeconds": 3607,
                                        "path": "token"
                                    }
                                },
                                {
                                    "configMap": {
                                        "name": "kube-root-ca.crt",
                                        "items": [
                                            {
                                                "key": "ca.crt",
                                                "path": "ca.crt"
                                            }
                                        ]
                                    }
                                },
                                {
                                    "downwardAPI": {
                                        "items": [
                                            {
                                                "path": "namespace",
                                                "fieldRef": {
                                                    "apiVersion": "v1",
                                                    "fieldPath": "metadata.namespace"
                                                }
                                            }
                                        ]
                                    }
                                }
                            ],
                            "defaultMode": 420
                        }
                    }
                ],
                "initContainers": [
                    {
                        "name": "install-cni",
                        "image": "quay.io/coreos/flannel:v0.13.0",
                        "command": [
                            "cp"
                        ],
                        "args": [
                            "-f",
                            "/etc/kube-flannel/cni-conf.json",
                            "/etc/cni/net.d/10-flannel.conflist"
                        ],
                        "resources": {},
                        "volumeMounts": [
                            {
                                "name": "cni",
                                "mountPath": "/etc/cni/net.d"
                            },
                            {
                                "name": "flannel-cfg",
                                "mountPath": "/etc/kube-flannel/"
                            },
                            {
                                "name": "kube-api-access-dnp8m",
                                "readOnly": true,
                                "mountPath": "/var/run/secrets/kubernetes.io/serviceaccount"
                            }
                        ],
                        "terminationMessagePath": "/dev/termination-log",
                        "terminationMessagePolicy": "File",
                        "imagePullPolicy": "IfNotPresent"
                    }
                ],
                "containers": [
                    {
                        "name": "kube-flannel",
                        "image": "quay.io/coreos/flannel:v0.13.0",
                        "command": [
                            "/opt/bin/flanneld"
                        ],
                        "args": [
                            "--ip-masq",
                            "--kube-subnet-mgr",
                            "--iptables-resync=5"
                        ],
                        "env": [
                            {
                                "name": "POD_NAME",
                                "valueFrom": {
                                    "fieldRef": {
                                        "apiVersion": "v1",
                                        "fieldPath": "metadata.name"
                                    }
                                }
                            },
                            {
                                "name": "POD_NAMESPACE",
                                "valueFrom": {
                                    "fieldRef": {
                                        "apiVersion": "v1",
                                        "fieldPath": "metadata.namespace"
                                    }
                                }
                            }
                        ],
                        "resources": {
                            "limits": {
                                "memory": "100Mi"
                            },
                            "requests": {
                                "cpu": "100m",
                                "memory": "100Mi"
                            }
                        },
                        "volumeMounts": [
                            {
                                "name": "run",
                                "mountPath": "/run/flannel"
                            },
                            {
                                "name": "dev-net",
                                "mountPath": "/dev/net"
                            },
                            {
                                "name": "flannel-cfg",
                                "mountPath": "/etc/kube-flannel/"
                            },
                            {
                                "name": "kube-api-access-dnp8m",
                                "readOnly": true,
                                "mountPath": "/var/run/secrets/kubernetes.io/serviceaccount"
                            }
                        ],
                        "terminationMessagePath": "/dev/termination-log",
                        "terminationMessagePolicy": "File",
                        "imagePullPolicy": "IfNotPresent",
                        "securityContext": {
                            "capabilities": {
                                "add": [
                                    "NET_ADMIN",
                                    "NET_RAW"
                                ]
                            },
                            "privileged": false
                        }
                    }
                ],
                "restartPolicy": "Always",
                "terminationGracePeriodSeconds": 30,
                "dnsPolicy": "ClusterFirst",
                "serviceAccountName": "flannel",
                "serviceAccount": "flannel",
                "nodeName": "ip-172-20-58-250.eu-west-1.compute.internal",
                "hostNetwork": true,
                "securityContext": {},
                "affinity": {
                    "nodeAffinity": {
                        "requiredDuringSchedulingIgnoredDuringExecution": {
                            "nodeSelectorTerms": [
                                {
                                    "matchFields": [
                                        {
                                            "key": "metadata.name",
                                            "operator": "In",
                                            "values": [
                                                "ip-172-20-58-250.eu-west-1.compute.internal"
                                            ]
                                        }
                                    ]
                                }
                            ]
                        }
                    }
                },
                "schedulerName": "default-scheduler",
                "tolerations": [
                    {
                        "operator": "Exists"
                    },
                    {
                        "key": "node.kubernetes.io/not-ready",
                        "operator": "Exists",
                        "effect": "NoExecute"
                    },
                    {
                        "key": "node.kubernetes.io/unreachable",
                        "operator": "Exists",
                        "effect": "NoExecute"
                    },
                    {
                        "key": "node.kubernetes.io/disk-pressure",
                        "operator": "Exists",
                        "effect": "NoSchedule"
                    },
                    {
                        "key": "node.kubernetes.io/memory-pressure",
                        "operator": "Exists",
                        "effect": "NoSchedule"
                    },
                    {
                        "key": "node.kubernetes.io/pid-pressure",
                        "operator": "Exists",
                        "effect": "NoSchedule"
                    },
                    {
                        "key": "node.kubernetes.io/unschedulable",
                        "operator": "Exists",
                        "effect": "NoSchedule"
                    },
                    {
                        "key": "node.kubernetes.io/network-unavailable",
                        "operator": "Exists",
                        "effect": "NoSchedule"
                    }
                ],
                "priorityClassName": "system-node-critical",
                "priority": 2000001000,
                "enableServiceLinks": true,
                "preemptionPolicy": "PreemptLowerPriority"
            },
            "status": {
                "phase": "Running",
                "conditions": [
                    {
                        "type": "Initialized",
                        "status": "True",
                        "lastProbeTime": null,
                        "lastTransitionTime": "2021-06-16T16:17:23Z"
                    },
                    {
                        "type": "Ready",
                        "status": "True",
                        "lastProbeTime": null,
                        "lastTransitionTime": "2021-06-16T16:17:24Z"
                    },
                    {
                        "type": "ContainersReady",
                        "status": "True",
                        "lastProbeTime": null,
                        "lastTransitionTime": "2021-06-16T16:17:24Z"
                    },
                    {
                        "type": "PodScheduled",
                        "status": "True",
                        "lastProbeTime": null,
                        "lastTransitionTime": "2021-06-16T16:17:16Z"
                    }
                ],
                "hostIP": "172.20.58.250",
                "podIP": "172.20.58.250",
                "podIPs": [
                    {
                        "ip": "172.20.58.250"
                    }
                ],
                "startTime": "2021-06-16T16:17:18Z",
                "initContainerStatuses": [
                    {
                        "name": "install-cni",
                        "state": {
                            "terminated": {
                                "exitCode": 0,
                                "reason": "Completed",
                                "startedAt": "2021-06-16T16:17:23Z",
                                "finishedAt": "2021-06-16T16:17:23Z",
                                "containerID": "docker://20cd273afe8146a28663e1a2e95df2ceb46069bc0383d74fa0c83ec447635918"
                            }
                        },
                        "lastState": {},
                        "ready": true,
                        "restartCount": 0,
                        "image": "quay.io/coreos/flannel:v0.13.0",
                        "imageID": "docker-pullable://quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8",
                        "containerID": "docker://20cd273afe8146a28663e1a2e95df2ceb46069bc0383d74fa0c83ec447635918"
                    }
                ],
                "containerStatuses": [
                    {
                        "name": "kube-flannel",
                        "state": {
                            "running": {
                                "startedAt": "2021-06-16T16:17:23Z"
                            }
                        },
                        "lastState": {},
                        "ready": true,
                        "restartCount": 0,
                        "image": "quay.io/coreos/flannel:v0.13.0",
                        "imageID": "docker-pullable://quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8",
                        "containerID": "docker://d7d41bdf130dcae5c670f8e13f79c12679d861db5ca9bfbc3a86cccb28cfb643",
                        "started": true
                    }
                ],
                "qosClass": "Burstable"
            }
        },
        {
            "metadata": {
                "name": "kube-flannel-ds-lxtlx",
                "generateName": "kube-flannel-ds-",
                "namespace": "kube-system",
                "uid": "93b940f5-90f9-4bc9-be1c-8fa8daa6ba41",
                "resourceVersion": "502",
                "creationTimestamp": "2021-06-16T16:15:44Z",
                "labels": {
                    "app": "flannel",
                    "controller-revision-hash": "7f578449d6",
                    "pod-template-generation": "1",
                    "tier": "node"
                },
                "ownerReferences": [
                    {
                        "apiVersion": "apps/v1",
                        "kind": "DaemonSet",
                        "name": "kube-flannel-ds",
                        "uid": "25ee1840-c0aa-4ca2-bee8-7cce7cd2b5e1",
                        "controller": true,
                        "blockOwnerDeletion": true
                    }
                ]
            },
            "spec": {
                "volumes": [
                    {
                        "name": "run",
                        "hostPath": {
                            "path": "/run/flannel",
                            "type": ""
                        }
                    },
                    {
                        "name": "dev-net",
                        "hostPath": {
                            "path": "/dev/net",
                            "type": ""
                        }
                    },
                    {
                        "name": "cni",
                        "hostPath": {
                            "path": "/etc/cni/net.d",
                            "type": ""
                        }
                    },
                    {
                        "name": "flannel-cfg",
                        "configMap": {
                            "name": "kube-flannel-cfg",
                            "defaultMode": 420
                        }
                    },
                    {
                        "name": "kube-api-access-gc9gj",
                        "projected": {
                            "sources": [
                                {
                                    "serviceAccountToken": {
                                        "expirationSeconds": 3607,
                                        "path": "token"
                                    }
                                },
                                {
                                    "configMap": {
                                        "name": "kube-root-ca.crt",
                                        "items": [
                                            {
                                                "key": "ca.crt",
                                                "path": "ca.crt"
                                            }
                                        ]
                                    }
                                },
                                {
                                    "downwardAPI": {
                                        "items": [
                                            {
                                                "path": "namespace",
                                                "fieldRef": {
                                                    "apiVersion": "v1",
                                                    "fieldPath": "metadata.namespace"
                                                }
                                            }
                                        ]
                                    }
                                }
                            ],
                            "defaultMode": 420
                        }
                    }
                ],
                "initContainers": [
                    {
                        "name": "install-cni",
                        "image": "quay.io/coreos/flannel:v0.13.0",
                        "command": [
                            "cp"
                        ],
                        "args": [
                            "-f",
                            "/etc/kube-flannel/cni-conf.json",
                            "/etc/cni/net.d/10-flannel.conflist"
                        ],
                        "resources": {},
                        "volumeMounts": [
                            {
                                "name": "cni",
                                "mountPath": "/etc/cni/net.d"
                            },
                            {
                                "name": "flannel-cfg",
                                "mountPath": "/etc/kube-flannel/"
                            },
                            {
                                "name": "kube-api-access-gc9gj",
                                "readOnly": true,
                                "mountPath": "/var/run/secrets/kubernetes.io/serviceaccount"
                            }
                        ],
                        "terminationMessagePath": "/dev/termination-log",
                        "terminationMessagePolicy": "File",
                        "imagePullPolicy": "IfNotPresent"
                    }
                ],
                "containers": [
                    {
                        "name": "kube-flannel",
                        "image": "quay.io/coreos/flannel:v0.13.0",
                        "command": [
                            "/opt/bin/flanneld"
                        ],
                        "args": [
                            "--ip-masq",
                            "--kube-subnet-mgr",
                            "--iptables-resync=5"
                        ],
                        "env": [
                            {
                                "name": "POD_NAME",
                                "valueFrom": {
                                    "fieldRef": {
                                        "apiVersion": "v1",
                                        "fieldPath": "metadata.name"
                                    }
                                }
                            },
                            {
                                "name": "POD_NAMESPACE",
                                "valueFrom": {
                                    "fieldRef": {
                                        "apiVersion": "v1",
                                        "fieldPath": "metadata.namespace"
                                    }
                                }
                            }
                        ],
                        "resources": {
                            "limits": {
                                "memory": "100Mi"
                            },
                            "requests": {
                                "cpu": "100m",
                                "memory": 
\"100Mi\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"run\",\n                                \"mountPath\": \"/run/flannel\"\n                            },\n                            {\n                                \"name\": \"dev-net\",\n                                \"mountPath\": \"/dev/net\"\n                            },\n                            {\n                                \"name\": \"flannel-cfg\",\n                                \"mountPath\": \"/etc/kube-flannel/\"\n                            },\n                            {\n                                \"name\": \"kube-api-access-gc9gj\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n                            }\n                        ],\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\",\n                        \"securityContext\": {\n                            \"capabilities\": {\n                                \"add\": [\n                                    \"NET_ADMIN\",\n                                    \"NET_RAW\"\n                                ]\n                            },\n                            \"privileged\": false\n                        }\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"ClusterFirst\",\n                \"serviceAccountName\": \"flannel\",\n                \"serviceAccount\": \"flannel\",\n                \"nodeName\": \"ip-172-20-37-218.eu-west-1.compute.internal\",\n                \"hostNetwork\": true,\n    
            \"securityContext\": {},\n                \"affinity\": {\n                    \"nodeAffinity\": {\n                        \"requiredDuringSchedulingIgnoredDuringExecution\": {\n                            \"nodeSelectorTerms\": [\n                                {\n                                    \"matchFields\": [\n                                        {\n                                            \"key\": \"metadata.name\",\n                                            \"operator\": \"In\",\n                                            \"values\": [\n                                                \"ip-172-20-37-218.eu-west-1.compute.internal\"\n                                            ]\n                                        }\n                                    ]\n                                }\n                            ]\n                        }\n                    }\n                },\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"operator\": \"Exists\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/not-ready\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/unreachable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/disk-pressure\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/memory-pressure\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n          
          },\n                    {\n                        \"key\": \"node.kubernetes.io/pid-pressure\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/unschedulable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/network-unavailable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    }\n                ],\n                \"priorityClassName\": \"system-node-critical\",\n                \"priority\": 2000001000,\n                \"enableServiceLinks\": true,\n                \"preemptionPolicy\": \"PreemptLowerPriority\"\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-06-16T16:16:18Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-06-16T16:16:19Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-06-16T16:16:19Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        
\"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-06-16T16:15:44Z\"\n                    }\n                ],\n                \"hostIP\": \"172.20.37.218\",\n                \"podIP\": \"172.20.37.218\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"172.20.37.218\"\n                    }\n                ],\n                \"startTime\": \"2021-06-16T16:16:12Z\",\n                \"initContainerStatuses\": [\n                    {\n                        \"name\": \"install-cni\",\n                        \"state\": {\n                            \"terminated\": {\n                                \"exitCode\": 0,\n                                \"reason\": \"Completed\",\n                                \"startedAt\": \"2021-06-16T16:16:18Z\",\n                                \"finishedAt\": \"2021-06-16T16:16:18Z\",\n                                \"containerID\": \"docker://06e6697da57b40063e3db0c296c697c0d59cf4fe995c778e646abfbc22f0b5fd\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"quay.io/coreos/flannel:v0.13.0\",\n                        \"imageID\": \"docker-pullable://quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8\",\n                        \"containerID\": \"docker://06e6697da57b40063e3db0c296c697c0d59cf4fe995c778e646abfbc22f0b5fd\"\n                    }\n                ],\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"kube-flannel\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2021-06-16T16:16:18Z\"\n                            }\n                        },\n                        \"lastState\": {},\n     
                   \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"quay.io/coreos/flannel:v0.13.0\",\n                        \"imageID\": \"docker-pullable://quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8\",\n                        \"containerID\": \"docker://fa18b783a46e27c79f72899ec5522823354bca6d4a29bf5e2287b548f317aaf7\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Burstable\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds-mchp6\",\n                \"generateName\": \"kube-flannel-ds-\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"58632562-c539-4657-adcd-16495b1bb536\",\n                \"resourceVersion\": \"768\",\n                \"creationTimestamp\": \"2021-06-16T16:17:22Z\",\n                \"labels\": {\n                    \"app\": \"flannel\",\n                    \"controller-revision-hash\": \"7f578449d6\",\n                    \"pod-template-generation\": \"1\",\n                    \"tier\": \"node\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"apps/v1\",\n                        \"kind\": \"DaemonSet\",\n                        \"name\": \"kube-flannel-ds\",\n                        \"uid\": \"25ee1840-c0aa-4ca2-bee8-7cce7cd2b5e1\",\n                        \"controller\": true,\n                        \"blockOwnerDeletion\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"run\",\n                        \"hostPath\": {\n                            \"path\": \"/run/flannel\",\n                            \"type\": \"\"\n                        }\n                    
},\n                    {\n                        \"name\": \"dev-net\",\n                        \"hostPath\": {\n                            \"path\": \"/dev/net\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"cni\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/cni/net.d\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"flannel-cfg\",\n                        \"configMap\": {\n                            \"name\": \"kube-flannel-cfg\",\n                            \"defaultMode\": 420\n                        }\n                    },\n                    {\n                        \"name\": \"kube-api-access-f9lb8\",\n                        \"projected\": {\n                            \"sources\": [\n                                {\n                                    \"serviceAccountToken\": {\n                                        \"expirationSeconds\": 3607,\n                                        \"path\": \"token\"\n                                    }\n                                },\n                                {\n                                    \"configMap\": {\n                                        \"name\": \"kube-root-ca.crt\",\n                                        \"items\": [\n                                            {\n                                                \"key\": \"ca.crt\",\n                                                \"path\": \"ca.crt\"\n                                            }\n                                        ]\n                                    }\n                                },\n                                {\n                                    \"downwardAPI\": {\n                                        \"items\": [\n      
                                      {\n                                                \"path\": \"namespace\",\n                                                \"fieldRef\": {\n                                                    \"apiVersion\": \"v1\",\n                                                    \"fieldPath\": \"metadata.namespace\"\n                                                }\n                                            }\n                                        ]\n                                    }\n                                }\n                            ],\n                            \"defaultMode\": 420\n                        }\n                    }\n                ],\n                \"initContainers\": [\n                    {\n                        \"name\": \"install-cni\",\n                        \"image\": \"quay.io/coreos/flannel:v0.13.0\",\n                        \"command\": [\n                            \"cp\"\n                        ],\n                        \"args\": [\n                            \"-f\",\n                            \"/etc/kube-flannel/cni-conf.json\",\n                            \"/etc/cni/net.d/10-flannel.conflist\"\n                        ],\n                        \"resources\": {},\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"cni\",\n                                \"mountPath\": \"/etc/cni/net.d\"\n                            },\n                            {\n                                \"name\": \"flannel-cfg\",\n                                \"mountPath\": \"/etc/kube-flannel/\"\n                            },\n                            {\n                                \"name\": \"kube-api-access-f9lb8\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n                            }\n           
             ],\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\"\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"kube-flannel\",\n                        \"image\": \"quay.io/coreos/flannel:v0.13.0\",\n                        \"command\": [\n                            \"/opt/bin/flanneld\"\n                        ],\n                        \"args\": [\n                            \"--ip-masq\",\n                            \"--kube-subnet-mgr\",\n                            \"--iptables-resync=5\"\n                        ],\n                        \"env\": [\n                            {\n                                \"name\": \"POD_NAME\",\n                                \"valueFrom\": {\n                                    \"fieldRef\": {\n                                        \"apiVersion\": \"v1\",\n                                        \"fieldPath\": \"metadata.name\"\n                                    }\n                                }\n                            },\n                            {\n                                \"name\": \"POD_NAMESPACE\",\n                                \"valueFrom\": {\n                                    \"fieldRef\": {\n                                        \"apiVersion\": \"v1\",\n                                        \"fieldPath\": \"metadata.namespace\"\n                                    }\n                                }\n                            }\n                        ],\n                        \"resources\": {\n                            \"limits\": {\n                                \"memory\": \"100Mi\"\n                            },\n                            \"requests\": {\n                                \"cpu\": \"100m\",\n   
                             \"memory\": \"100Mi\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"run\",\n                                \"mountPath\": \"/run/flannel\"\n                            },\n                            {\n                                \"name\": \"dev-net\",\n                                \"mountPath\": \"/dev/net\"\n                            },\n                            {\n                                \"name\": \"flannel-cfg\",\n                                \"mountPath\": \"/etc/kube-flannel/\"\n                            },\n                            {\n                                \"name\": \"kube-api-access-f9lb8\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n                            }\n                        ],\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\",\n                        \"securityContext\": {\n                            \"capabilities\": {\n                                \"add\": [\n                                    \"NET_ADMIN\",\n                                    \"NET_RAW\"\n                                ]\n                            },\n                            \"privileged\": false\n                        }\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"ClusterFirst\",\n                \"serviceAccountName\": \"flannel\",\n                \"serviceAccount\": \"flannel\",\n                \"nodeName\": \"ip-172-20-62-139.eu-west-1.compute.internal\",\n   
             \"hostNetwork\": true,\n                \"securityContext\": {},\n                \"affinity\": {\n                    \"nodeAffinity\": {\n                        \"requiredDuringSchedulingIgnoredDuringExecution\": {\n                            \"nodeSelectorTerms\": [\n                                {\n                                    \"matchFields\": [\n                                        {\n                                            \"key\": \"metadata.name\",\n                                            \"operator\": \"In\",\n                                            \"values\": [\n                                                \"ip-172-20-62-139.eu-west-1.compute.internal\"\n                                            ]\n                                        }\n                                    ]\n                                }\n                            ]\n                        }\n                    }\n                },\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"operator\": \"Exists\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/not-ready\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/unreachable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/disk-pressure\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/memory-pressure\",\n                        \"operator\": \"Exists\",\n                     
   \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/pid-pressure\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/unschedulable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/network-unavailable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    }\n                ],\n                \"priorityClassName\": \"system-node-critical\",\n                \"priority\": 2000001000,\n                \"enableServiceLinks\": true,\n                \"preemptionPolicy\": \"PreemptLowerPriority\"\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-06-16T16:17:29Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-06-16T16:17:30Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-06-16T16:17:30Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        
\"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-06-16T16:17:22Z\"\n                    }\n                ],\n                \"hostIP\": \"172.20.62.139\",\n                \"podIP\": \"172.20.62.139\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"172.20.62.139\"\n                    }\n                ],\n                \"startTime\": \"2021-06-16T16:17:23Z\",\n                \"initContainerStatuses\": [\n                    {\n                        \"name\": \"install-cni\",\n                        \"state\": {\n                            \"terminated\": {\n                                \"exitCode\": 0,\n                                \"reason\": \"Completed\",\n                                \"startedAt\": \"2021-06-16T16:17:28Z\",\n                                \"finishedAt\": \"2021-06-16T16:17:28Z\",\n                                \"containerID\": \"docker://2bb31b8d31ccf8b05e84024a8bdc1a82b02677705557654e5cc9eb8211177d79\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"quay.io/coreos/flannel:v0.13.0\",\n                        \"imageID\": \"docker-pullable://quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8\",\n                        \"containerID\": \"docker://2bb31b8d31ccf8b05e84024a8bdc1a82b02677705557654e5cc9eb8211177d79\"\n                    }\n                ],\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"kube-flannel\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2021-06-16T16:17:29Z\"\n                            }\n                        },\n  
                      \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"quay.io/coreos/flannel:v0.13.0\",\n                        \"imageID\": \"docker-pullable://quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8\",\n                        \"containerID\": \"docker://e6580c9dbc3f0f85d4ed915662ff106d8123e081e171d08e0ffa0965661d5864\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Burstable\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds-qfz56\",\n                \"generateName\": \"kube-flannel-ds-\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"2cfd9685-5599-4f46-9d38-33e793746dd8\",\n                \"resourceVersion\": \"718\",\n                \"creationTimestamp\": \"2021-06-16T16:17:16Z\",\n                \"labels\": {\n                    \"app\": \"flannel\",\n                    \"controller-revision-hash\": \"7f578449d6\",\n                    \"pod-template-generation\": \"1\",\n                    \"tier\": \"node\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"apps/v1\",\n                        \"kind\": \"DaemonSet\",\n                        \"name\": \"kube-flannel-ds\",\n                        \"uid\": \"25ee1840-c0aa-4ca2-bee8-7cce7cd2b5e1\",\n                        \"controller\": true,\n                        \"blockOwnerDeletion\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"run\",\n                        \"hostPath\": {\n                            \"path\": \"/run/flannel\",\n                            \"type\": 
\"\"\n                        }\n                    },\n                    {\n                        \"name\": \"dev-net\",\n                        \"hostPath\": {\n                            \"path\": \"/dev/net\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"cni\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/cni/net.d\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"flannel-cfg\",\n                        \"configMap\": {\n                            \"name\": \"kube-flannel-cfg\",\n                            \"defaultMode\": 420\n                        }\n                    },\n                    {\n                        \"name\": \"kube-api-access-5jqvx\",\n                        \"projected\": {\n                            \"sources\": [\n                                {\n                                    \"serviceAccountToken\": {\n                                        \"expirationSeconds\": 3607,\n                                        \"path\": \"token\"\n                                    }\n                                },\n                                {\n                                    \"configMap\": {\n                                        \"name\": \"kube-root-ca.crt\",\n                                        \"items\": [\n                                            {\n                                                \"key\": \"ca.crt\",\n                                                \"path\": \"ca.crt\"\n                                            }\n                                        ]\n                                    }\n                                },\n                                {\n                                    \"downwardAPI\": {\n       
                                 \"items\": [\n                                            {\n                                                \"path\": \"namespace\",\n                                                \"fieldRef\": {\n                                                    \"apiVersion\": \"v1\",\n                                                    \"fieldPath\": \"metadata.namespace\"\n                                                }\n                                            }\n                                        ]\n                                    }\n                                }\n                            ],\n                            \"defaultMode\": 420\n                        }\n                    }\n                ],\n                \"initContainers\": [\n                    {\n                        \"name\": \"install-cni\",\n                        \"image\": \"quay.io/coreos/flannel:v0.13.0\",\n                        \"command\": [\n                            \"cp\"\n                        ],\n                        \"args\": [\n                            \"-f\",\n                            \"/etc/kube-flannel/cni-conf.json\",\n                            \"/etc/cni/net.d/10-flannel.conflist\"\n                        ],\n                        \"resources\": {},\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"cni\",\n                                \"mountPath\": \"/etc/cni/net.d\"\n                            },\n                            {\n                                \"name\": \"flannel-cfg\",\n                                \"mountPath\": \"/etc/kube-flannel/\"\n                            },\n                            {\n                                \"name\": \"kube-api-access-5jqvx\",\n                                \"readOnly\": true,\n                                \"mountPath\": 
\"/var/run/secrets/kubernetes.io/serviceaccount\"\n                            }\n                        ],\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\"\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"kube-flannel\",\n                        \"image\": \"quay.io/coreos/flannel:v0.13.0\",\n                        \"command\": [\n                            \"/opt/bin/flanneld\"\n                        ],\n                        \"args\": [\n                            \"--ip-masq\",\n                            \"--kube-subnet-mgr\",\n                            \"--iptables-resync=5\"\n                        ],\n                        \"env\": [\n                            {\n                                \"name\": \"POD_NAME\",\n                                \"valueFrom\": {\n                                    \"fieldRef\": {\n                                        \"apiVersion\": \"v1\",\n                                        \"fieldPath\": \"metadata.name\"\n                                    }\n                                }\n                            },\n                            {\n                                \"name\": \"POD_NAMESPACE\",\n                                \"valueFrom\": {\n                                    \"fieldRef\": {\n                                        \"apiVersion\": \"v1\",\n                                        \"fieldPath\": \"metadata.namespace\"\n                                    }\n                                }\n                            }\n                        ],\n                        \"resources\": {\n                            \"limits\": {\n                                \"memory\": \"100Mi\"\n                            },\n       
                     \"requests\": {\n                                \"cpu\": \"100m\",\n                                \"memory\": \"100Mi\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"run\",\n                                \"mountPath\": \"/run/flannel\"\n                            },\n                            {\n                                \"name\": \"dev-net\",\n                                \"mountPath\": \"/dev/net\"\n                            },\n                            {\n                                \"name\": \"flannel-cfg\",\n                                \"mountPath\": \"/etc/kube-flannel/\"\n                            },\n                            {\n                                \"name\": \"kube-api-access-5jqvx\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n                            }\n                        ],\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\",\n                        \"securityContext\": {\n                            \"capabilities\": {\n                                \"add\": [\n                                    \"NET_ADMIN\",\n                                    \"NET_RAW\"\n                                ]\n                            },\n                            \"privileged\": false\n                        }\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"ClusterFirst\",\n                \"serviceAccountName\": \"flannel\",\n                \"serviceAccount\": 
\"flannel\",\n                \"nodeName\": \"ip-172-20-57-162.eu-west-1.compute.internal\",\n                \"hostNetwork\": true,\n                \"securityContext\": {},\n                \"affinity\": {\n                    \"nodeAffinity\": {\n                        \"requiredDuringSchedulingIgnoredDuringExecution\": {\n                            \"nodeSelectorTerms\": [\n                                {\n                                    \"matchFields\": [\n                                        {\n                                            \"key\": \"metadata.name\",\n                                            \"operator\": \"In\",\n                                            \"values\": [\n                                                \"ip-172-20-57-162.eu-west-1.compute.internal\"\n                                            ]\n                                        }\n                                    ]\n                                }\n                            ]\n                        }\n                    }\n                },\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"operator\": \"Exists\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/not-ready\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/unreachable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/disk-pressure\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": 
\"node.kubernetes.io/memory-pressure\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/pid-pressure\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/unschedulable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/network-unavailable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    }\n                ],\n                \"priorityClassName\": \"system-node-critical\",\n                \"priority\": 2000001000,\n                \"enableServiceLinks\": true,\n                \"preemptionPolicy\": \"PreemptLowerPriority\"\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-06-16T16:17:24Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-06-16T16:17:25Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-06-16T16:17:25Z\"\n                   
 },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-06-16T16:17:16Z\"\n                    }\n                ],\n                \"hostIP\": \"172.20.57.162\",\n                \"podIP\": \"172.20.57.162\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"172.20.57.162\"\n                    }\n                ],\n                \"startTime\": \"2021-06-16T16:17:17Z\",\n                \"initContainerStatuses\": [\n                    {\n                        \"name\": \"install-cni\",\n                        \"state\": {\n                            \"terminated\": {\n                                \"exitCode\": 0,\n                                \"reason\": \"Completed\",\n                                \"startedAt\": \"2021-06-16T16:17:23Z\",\n                                \"finishedAt\": \"2021-06-16T16:17:23Z\",\n                                \"containerID\": \"docker://5255a59a71536d727ac984b1223c884895d4b3dfe52bc02ffc250c2031db4380\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"quay.io/coreos/flannel:v0.13.0\",\n                        \"imageID\": \"docker-pullable://quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8\",\n                        \"containerID\": \"docker://5255a59a71536d727ac984b1223c884895d4b3dfe52bc02ffc250c2031db4380\"\n                    }\n                ],\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"kube-flannel\",\n                        \"state\": {\n                            \"running\": {\n                             
   \"startedAt\": \"2021-06-16T16:17:24Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"quay.io/coreos/flannel:v0.13.0\",\n                        \"imageID\": \"docker-pullable://quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8\",\n                        \"containerID\": \"docker://7668dabf211260dfe693b9c16fe29c77bf91d3cc301ed0edeb74a7f42fb7ec71\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Burstable\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds-xktxj\",\n                \"generateName\": \"kube-flannel-ds-\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"7df3cbe6-6fe9-43d6-9d11-a6c7f4211e09\",\n                \"resourceVersion\": \"738\",\n                \"creationTimestamp\": \"2021-06-16T16:17:19Z\",\n                \"labels\": {\n                    \"app\": \"flannel\",\n                    \"controller-revision-hash\": \"7f578449d6\",\n                    \"pod-template-generation\": \"1\",\n                    \"tier\": \"node\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"apps/v1\",\n                        \"kind\": \"DaemonSet\",\n                        \"name\": \"kube-flannel-ds\",\n                        \"uid\": \"25ee1840-c0aa-4ca2-bee8-7cce7cd2b5e1\",\n                        \"controller\": true,\n                        \"blockOwnerDeletion\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"run\",\n                        
\"hostPath\": {\n                            \"path\": \"/run/flannel\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"dev-net\",\n                        \"hostPath\": {\n                            \"path\": \"/dev/net\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"cni\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/cni/net.d\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"flannel-cfg\",\n                        \"configMap\": {\n                            \"name\": \"kube-flannel-cfg\",\n                            \"defaultMode\": 420\n                        }\n                    },\n                    {\n                        \"name\": \"kube-api-access-l52fb\",\n                        \"projected\": {\n                            \"sources\": [\n                                {\n                                    \"serviceAccountToken\": {\n                                        \"expirationSeconds\": 3607,\n                                        \"path\": \"token\"\n                                    }\n                                },\n                                {\n                                    \"configMap\": {\n                                        \"name\": \"kube-root-ca.crt\",\n                                        \"items\": [\n                                            {\n                                                \"key\": \"ca.crt\",\n                                                \"path\": \"ca.crt\"\n                                            }\n                                        ]\n                                    }\n                      
          },\n                                {\n                                    \"downwardAPI\": {\n                                        \"items\": [\n                                            {\n                                                \"path\": \"namespace\",\n                                                \"fieldRef\": {\n                                                    \"apiVersion\": \"v1\",\n                                                    \"fieldPath\": \"metadata.namespace\"\n                                                }\n                                            }\n                                        ]\n                                    }\n                                }\n                            ],\n                            \"defaultMode\": 420\n                        }\n                    }\n                ],\n                \"initContainers\": [\n                    {\n                        \"name\": \"install-cni\",\n                        \"image\": \"quay.io/coreos/flannel:v0.13.0\",\n                        \"command\": [\n                            \"cp\"\n                        ],\n                        \"args\": [\n                            \"-f\",\n                            \"/etc/kube-flannel/cni-conf.json\",\n                            \"/etc/cni/net.d/10-flannel.conflist\"\n                        ],\n                        \"resources\": {},\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"cni\",\n                                \"mountPath\": \"/etc/cni/net.d\"\n                            },\n                            {\n                                \"name\": \"flannel-cfg\",\n                                \"mountPath\": \"/etc/kube-flannel/\"\n                            },\n                            {\n                                \"name\": \"kube-api-access-l52fb\",\n                            
    \"readOnly\": true,\n                                \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n                            }\n                        ],\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\"\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"kube-flannel\",\n                        \"image\": \"quay.io/coreos/flannel:v0.13.0\",\n                        \"command\": [\n                            \"/opt/bin/flanneld\"\n                        ],\n                        \"args\": [\n                            \"--ip-masq\",\n                            \"--kube-subnet-mgr\",\n                            \"--iptables-resync=5\"\n                        ],\n                        \"env\": [\n                            {\n                                \"name\": \"POD_NAME\",\n                                \"valueFrom\": {\n                                    \"fieldRef\": {\n                                        \"apiVersion\": \"v1\",\n                                        \"fieldPath\": \"metadata.name\"\n                                    }\n                                }\n                            },\n                            {\n                                \"name\": \"POD_NAMESPACE\",\n                                \"valueFrom\": {\n                                    \"fieldRef\": {\n                                        \"apiVersion\": \"v1\",\n                                        \"fieldPath\": \"metadata.namespace\"\n                                    }\n                                }\n                            }\n                        ],\n                        \"resources\": {\n                            \"limits\": {\n                      
          \"memory\": \"100Mi\"\n                            },\n                            \"requests\": {\n                                \"cpu\": \"100m\",\n                                \"memory\": \"100Mi\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"run\",\n                                \"mountPath\": \"/run/flannel\"\n                            },\n                            {\n                                \"name\": \"dev-net\",\n                                \"mountPath\": \"/dev/net\"\n                            },\n                            {\n                                \"name\": \"flannel-cfg\",\n                                \"mountPath\": \"/etc/kube-flannel/\"\n                            },\n                            {\n                                \"name\": \"kube-api-access-l52fb\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n                            }\n                        ],\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\",\n                        \"securityContext\": {\n                            \"capabilities\": {\n                                \"add\": [\n                                    \"NET_ADMIN\",\n                                    \"NET_RAW\"\n                                ]\n                            },\n                            \"privileged\": false\n                        }\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"ClusterFirst\",\n                
\"serviceAccountName\": \"flannel\",\n                \"serviceAccount\": \"flannel\",\n                \"nodeName\": \"ip-172-20-52-203.eu-west-1.compute.internal\",\n                \"hostNetwork\": true,\n                \"securityContext\": {},\n                \"affinity\": {\n                    \"nodeAffinity\": {\n                        \"requiredDuringSchedulingIgnoredDuringExecution\": {\n                            \"nodeSelectorTerms\": [\n                                {\n                                    \"matchFields\": [\n                                        {\n                                            \"key\": \"metadata.name\",\n                                            \"operator\": \"In\",\n                                            \"values\": [\n                                                \"ip-172-20-52-203.eu-west-1.compute.internal\"\n                                            ]\n                                        }\n                                    ]\n                                }\n                            ]\n                        }\n                    }\n                },\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"operator\": \"Exists\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/not-ready\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/unreachable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/disk-pressure\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    
},\n                    {\n                        \"key\": \"node.kubernetes.io/memory-pressure\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/pid-pressure\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/unschedulable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/network-unavailable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    }\n                ],\n                \"priorityClassName\": \"system-node-critical\",\n                \"priority\": 2000001000,\n                \"enableServiceLinks\": true,\n                \"preemptionPolicy\": \"PreemptLowerPriority\"\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-06-16T16:17:26Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-06-16T16:17:27Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        
\"lastTransitionTime\": \"2021-06-16T16:17:27Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-06-16T16:17:19Z\"\n                    }\n                ],\n                \"hostIP\": \"172.20.52.203\",\n                \"podIP\": \"172.20.52.203\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"172.20.52.203\"\n                    }\n                ],\n                \"startTime\": \"2021-06-16T16:17:21Z\",\n                \"initContainerStatuses\": [\n                    {\n                        \"name\": \"install-cni\",\n                        \"state\": {\n                            \"terminated\": {\n                                \"exitCode\": 0,\n                                \"reason\": \"Completed\",\n                                \"startedAt\": \"2021-06-16T16:17:26Z\",\n                                \"finishedAt\": \"2021-06-16T16:17:26Z\",\n                                \"containerID\": \"docker://ecb749ab2a526e922becf75eeb927e7428e0e687c4da76f79c1da22dac74da0d\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"quay.io/coreos/flannel:v0.13.0\",\n                        \"imageID\": \"docker-pullable://quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8\",\n                        \"containerID\": \"docker://ecb749ab2a526e922becf75eeb927e7428e0e687c4da76f79c1da22dac74da0d\"\n                    }\n                ],\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"kube-flannel\",\n                        \"state\": {\n    
                        \"running\": {\n                                \"startedAt\": \"2021-06-16T16:17:26Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"quay.io/coreos/flannel:v0.13.0\",\n                        \"imageID\": \"docker-pullable://quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8\",\n                        \"containerID\": \"docker://8962e24555205d26f9f25f8d163134a3b82408cb592603ca759f590bc5fd81d7\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Burstable\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-ip-172-20-37-218.eu-west-1.compute.internal\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"1cc46c2f-fc70-414b-bdc3-0865c78770e4\",\n                \"resourceVersion\": \"713\",\n                \"creationTimestamp\": \"2021-06-16T16:17:16Z\",\n                \"labels\": {\n                    \"k8s-app\": \"kube-proxy\",\n                    \"tier\": \"node\"\n                },\n                \"annotations\": {\n                    \"kubernetes.io/config.hash\": \"83157464d427c3175d51c8345b15bf41\",\n                    \"kubernetes.io/config.mirror\": \"83157464d427c3175d51c8345b15bf41\",\n                    \"kubernetes.io/config.seen\": \"2021-06-16T16:14:33.947361988Z\",\n                    \"kubernetes.io/config.source\": \"file\",\n                    \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"v1\",\n                        \"kind\": \"Node\",\n                        \"name\": 
\"ip-172-20-37-218.eu-west-1.compute.internal\",\n                        \"uid\": \"0a98987f-4d9c-452e-a692-5a74200a8951\",\n                        \"controller\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"logfile\",\n                        \"hostPath\": {\n                            \"path\": \"/var/log/kube-proxy.log\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"kubeconfig\",\n                        \"hostPath\": {\n                            \"path\": \"/var/lib/kube-proxy/kubeconfig\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"modules\",\n                        \"hostPath\": {\n                            \"path\": \"/lib/modules\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"ssl-certs-hosts\",\n                        \"hostPath\": {\n                            \"path\": \"/usr/share/ca-certificates\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"iptableslock\",\n                        \"hostPath\": {\n                            \"path\": \"/run/xtables.lock\",\n                            \"type\": \"FileOrCreate\"\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"kube-proxy\",\n                        \"image\": \"k8s.gcr.io/kube-proxy-amd64:v1.21.1\",\n                        \"command\": [\n                            \"/usr/local/bin/kube-proxy\"\n             
           ],\n                        \"args\": [\n                            \"--cluster-cidr=100.96.0.0/11\",\n                            \"--conntrack-max-per-core=131072\",\n                            \"--hostname-override=ip-172-20-37-218.eu-west-1.compute.internal\",\n                            \"--kubeconfig=/var/lib/kube-proxy/kubeconfig\",\n                            \"--master=https://127.0.0.1\",\n                            \"--oom-score-adj=-998\",\n                            \"--v=2\",\n                            \"--logtostderr=false\",\n                            \"--alsologtostderr\",\n                            \"--log-file=/var/log/kube-proxy.log\"\n                        ],\n                        \"resources\": {\n                            \"requests\": {\n                                \"cpu\": \"100m\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"logfile\",\n                                \"mountPath\": \"/var/log/kube-proxy.log\"\n                            },\n                            {\n                                \"name\": \"kubeconfig\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/lib/kube-proxy/kubeconfig\"\n                            },\n                            {\n                                \"name\": \"modules\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/lib/modules\"\n                            },\n                            {\n                                \"name\": \"ssl-certs-hosts\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/etc/ssl/certs\"\n                            },\n                            {\n                                \"name\": \"iptableslock\",\n              
                  \"mountPath\": \"/run/xtables.lock\"\n                            }\n                        ],\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\",\n                        \"securityContext\": {\n                            \"privileged\": true\n                        }\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"ClusterFirst\",\n                \"nodeName\": \"ip-172-20-37-218.eu-west-1.compute.internal\",\n                \"hostNetwork\": true,\n                \"securityContext\": {},\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"key\": \"CriticalAddonsOnly\",\n                        \"operator\": \"Exists\"\n                    },\n                    {\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    }\n                ],\n                \"priorityClassName\": \"system-node-critical\",\n                \"priority\": 2000001000,\n                \"enableServiceLinks\": true,\n                \"preemptionPolicy\": \"PreemptLowerPriority\"\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-06-16T16:14:35Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": 
null,\n                        \"lastTransitionTime\": \"2021-06-16T16:14:52Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-06-16T16:14:52Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-06-16T16:14:35Z\"\n                    }\n                ],\n                \"hostIP\": \"172.20.37.218\",\n                \"podIP\": \"172.20.37.218\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"172.20.37.218\"\n                    }\n                ],\n                \"startTime\": \"2021-06-16T16:14:35Z\",\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"kube-proxy\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2021-06-16T16:14:51Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"k8s.gcr.io/kube-proxy-amd64:v1.21.1\",\n                        \"imageID\": \"docker://sha256:4359e752b5961a66ad2e9cdb4aaa57e2782d2dd2f05a0cf76946f5e5caa0fd88\",\n                        \"containerID\": \"docker://fcb177e1e7da6de2c88794b043c24f4fc799079a85cb5f321838ce07b480d290\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Burstable\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": 
\"kube-proxy-ip-172-20-52-203.eu-west-1.compute.internal\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"0bd96b2f-1baf-40b4-ae4c-544ff44fe488\",\n                \"resourceVersion\": \"902\",\n                \"creationTimestamp\": \"2021-06-16T16:18:01Z\",\n                \"labels\": {\n                    \"k8s-app\": \"kube-proxy\",\n                    \"tier\": \"node\"\n                },\n                \"annotations\": {\n                    \"kubernetes.io/config.hash\": \"2980361c0f60763142d714f0a98407f4\",\n                    \"kubernetes.io/config.mirror\": \"2980361c0f60763142d714f0a98407f4\",\n                    \"kubernetes.io/config.seen\": \"2021-06-16T16:16:49.300075228Z\",\n                    \"kubernetes.io/config.source\": \"file\",\n                    \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"v1\",\n                        \"kind\": \"Node\",\n                        \"name\": \"ip-172-20-52-203.eu-west-1.compute.internal\",\n                        \"uid\": \"84635025-912e-4be1-8cfc-71f4d38e0da1\",\n                        \"controller\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"logfile\",\n                        \"hostPath\": {\n                            \"path\": \"/var/log/kube-proxy.log\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"kubeconfig\",\n                        \"hostPath\": {\n                            \"path\": \"/var/lib/kube-proxy/kubeconfig\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": 
\"modules\",\n                        \"hostPath\": {\n                            \"path\": \"/lib/modules\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"ssl-certs-hosts\",\n                        \"hostPath\": {\n                            \"path\": \"/usr/share/ca-certificates\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"iptableslock\",\n                        \"hostPath\": {\n                            \"path\": \"/run/xtables.lock\",\n                            \"type\": \"FileOrCreate\"\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"kube-proxy\",\n                        \"image\": \"k8s.gcr.io/kube-proxy-amd64:v1.21.1\",\n                        \"command\": [\n                            \"/usr/local/bin/kube-proxy\"\n                        ],\n                        \"args\": [\n                            \"--cluster-cidr=100.96.0.0/11\",\n                            \"--conntrack-max-per-core=131072\",\n                            \"--hostname-override=ip-172-20-52-203.eu-west-1.compute.internal\",\n                            \"--kubeconfig=/var/lib/kube-proxy/kubeconfig\",\n                            \"--master=https://api.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io\",\n                            \"--oom-score-adj=-998\",\n                            \"--v=2\",\n                            \"--logtostderr=false\",\n                            \"--alsologtostderr\",\n                            \"--log-file=/var/log/kube-proxy.log\"\n                        ],\n                        \"resources\": {\n                            \"requests\": {\n                                \"cpu\": \"100m\"\n      
                      }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"logfile\",\n                                \"mountPath\": \"/var/log/kube-proxy.log\"\n                            },\n                            {\n                                \"name\": \"kubeconfig\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/lib/kube-proxy/kubeconfig\"\n                            },\n                            {\n                                \"name\": \"modules\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/lib/modules\"\n                            },\n                            {\n                                \"name\": \"ssl-certs-hosts\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/etc/ssl/certs\"\n                            },\n                            {\n                                \"name\": \"iptableslock\",\n                                \"mountPath\": \"/run/xtables.lock\"\n                            }\n                        ],\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\",\n                        \"securityContext\": {\n                            \"privileged\": true\n                        }\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"ClusterFirst\",\n                \"nodeName\": \"ip-172-20-52-203.eu-west-1.compute.internal\",\n                \"hostNetwork\": true,\n                \"securityContext\": {},\n                \"schedulerName\": 
\"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"key\": \"CriticalAddonsOnly\",\n                        \"operator\": \"Exists\"\n                    },\n                    {\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    }\n                ],\n                \"priorityClassName\": \"system-node-critical\",\n                \"priority\": 2000001000,\n                \"enableServiceLinks\": true,\n                \"preemptionPolicy\": \"PreemptLowerPriority\"\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-06-16T16:16:49Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-06-16T16:16:51Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-06-16T16:16:51Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-06-16T16:16:49Z\"\n                    }\n                ],\n                \"hostIP\": \"172.20.52.203\",\n                \"podIP\": \"172.20.52.203\",\n                \"podIPs\": [\n                    {\n                        
\"ip\": \"172.20.52.203\"\n                    }\n                ],\n                \"startTime\": \"2021-06-16T16:16:49Z\",\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"kube-proxy\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2021-06-16T16:16:51Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"k8s.gcr.io/kube-proxy-amd64:v1.21.1\",\n                        \"imageID\": \"docker://sha256:4359e752b5961a66ad2e9cdb4aaa57e2782d2dd2f05a0cf76946f5e5caa0fd88\",\n                        \"containerID\": \"docker://b4f48517320f57249d8a1ae2988374d42a3aea6ee8e04b89f05a451a6a251abb\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Burstable\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-ip-172-20-57-162.eu-west-1.compute.internal\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"e481cff0-be57-4455-8c75-db12b08b62a4\",\n                \"resourceVersion\": \"919\",\n                \"creationTimestamp\": \"2021-06-16T16:18:10Z\",\n                \"labels\": {\n                    \"k8s-app\": \"kube-proxy\",\n                    \"tier\": \"node\"\n                },\n                \"annotations\": {\n                    \"kubernetes.io/config.hash\": \"cb7e35c358a635a1b30f77640ce20bbf\",\n                    \"kubernetes.io/config.mirror\": \"cb7e35c358a635a1b30f77640ce20bbf\",\n                    \"kubernetes.io/config.seen\": \"2021-06-16T16:16:45.678196103Z\",\n                    \"kubernetes.io/config.source\": \"file\",\n                    
\"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"v1\",\n                        \"kind\": \"Node\",\n                        \"name\": \"ip-172-20-57-162.eu-west-1.compute.internal\",\n                        \"uid\": \"04578755-f5c3-4c9b-8788-79a1117f5bd2\",\n                        \"controller\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"logfile\",\n                        \"hostPath\": {\n                            \"path\": \"/var/log/kube-proxy.log\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"kubeconfig\",\n                        \"hostPath\": {\n                            \"path\": \"/var/lib/kube-proxy/kubeconfig\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"modules\",\n                        \"hostPath\": {\n                            \"path\": \"/lib/modules\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"ssl-certs-hosts\",\n                        \"hostPath\": {\n                            \"path\": \"/usr/share/ca-certificates\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"iptableslock\",\n                        \"hostPath\": {\n                            \"path\": \"/run/xtables.lock\",\n                            \"type\": \"FileOrCreate\"\n                        }\n                    }\n                ],\n                \"containers\": 
[\n                    {\n                        \"name\": \"kube-proxy\",\n                        \"image\": \"k8s.gcr.io/kube-proxy-amd64:v1.21.1\",\n                        \"command\": [\n                            \"/usr/local/bin/kube-proxy\"\n                        ],\n                        \"args\": [\n                            \"--cluster-cidr=100.96.0.0/11\",\n                            \"--conntrack-max-per-core=131072\",\n                            \"--hostname-override=ip-172-20-57-162.eu-west-1.compute.internal\",\n                            \"--kubeconfig=/var/lib/kube-proxy/kubeconfig\",\n                            \"--master=https://api.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io\",\n                            \"--oom-score-adj=-998\",\n                            \"--v=2\",\n                            \"--logtostderr=false\",\n                            \"--alsologtostderr\",\n                            \"--log-file=/var/log/kube-proxy.log\"\n                        ],\n                        \"resources\": {\n                            \"requests\": {\n                                \"cpu\": \"100m\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"logfile\",\n                                \"mountPath\": \"/var/log/kube-proxy.log\"\n                            },\n                            {\n                                \"name\": \"kubeconfig\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/lib/kube-proxy/kubeconfig\"\n                            },\n                            {\n                                \"name\": \"modules\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/lib/modules\"\n                            },\n                            {\n           
                     \"name\": \"ssl-certs-hosts\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/etc/ssl/certs\"\n                            },\n                            {\n                                \"name\": \"iptableslock\",\n                                \"mountPath\": \"/run/xtables.lock\"\n                            }\n                        ],\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\",\n                        \"securityContext\": {\n                            \"privileged\": true\n                        }\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"ClusterFirst\",\n                \"nodeName\": \"ip-172-20-57-162.eu-west-1.compute.internal\",\n                \"hostNetwork\": true,\n                \"securityContext\": {},\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"key\": \"CriticalAddonsOnly\",\n                        \"operator\": \"Exists\"\n                    },\n                    {\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    }\n                ],\n                \"priorityClassName\": \"system-node-critical\",\n                \"priority\": 2000001000,\n                \"enableServiceLinks\": true,\n                \"preemptionPolicy\": \"PreemptLowerPriority\"\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n   
                     \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-06-16T16:16:46Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-06-16T16:16:47Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-06-16T16:16:47Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-06-16T16:16:46Z\"\n                    }\n                ],\n                \"hostIP\": \"172.20.57.162\",\n                \"podIP\": \"172.20.57.162\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"172.20.57.162\"\n                    }\n                ],\n                \"startTime\": \"2021-06-16T16:16:46Z\",\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"kube-proxy\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2021-06-16T16:16:47Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"k8s.gcr.io/kube-proxy-amd64:v1.21.1\",\n                        \"imageID\": \"docker://sha256:4359e752b5961a66ad2e9cdb4aaa57e2782d2dd2f05a0cf76946f5e5caa0fd88\",\n                        \"containerID\": 
\"docker://8f61a9ec246b9799873bb1b4788df47f9164cb38587d11ae8ba7356efe1e8b37\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Burstable\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-ip-172-20-58-250.eu-west-1.compute.internal\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"5401af20-55cb-4631-8ad7-4b6524bd8410\",\n                \"resourceVersion\": \"889\",\n                \"creationTimestamp\": \"2021-06-16T16:17:57Z\",\n                \"labels\": {\n                    \"k8s-app\": \"kube-proxy\",\n                    \"tier\": \"node\"\n                },\n                \"annotations\": {\n                    \"kubernetes.io/config.hash\": \"d33f0efdf98c6ca115c46288e0750066\",\n                    \"kubernetes.io/config.mirror\": \"d33f0efdf98c6ca115c46288e0750066\",\n                    \"kubernetes.io/config.seen\": \"2021-06-16T16:16:46.062515021Z\",\n                    \"kubernetes.io/config.source\": \"file\",\n                    \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"v1\",\n                        \"kind\": \"Node\",\n                        \"name\": \"ip-172-20-58-250.eu-west-1.compute.internal\",\n                        \"uid\": \"7300abdb-83fe-49b3-83a9-eb4c8a1fdeda\",\n                        \"controller\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"logfile\",\n                        \"hostPath\": {\n                            \"path\": \"/var/log/kube-proxy.log\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        
\"name\": \"kubeconfig\",\n                        \"hostPath\": {\n                            \"path\": \"/var/lib/kube-proxy/kubeconfig\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"modules\",\n                        \"hostPath\": {\n                            \"path\": \"/lib/modules\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"ssl-certs-hosts\",\n                        \"hostPath\": {\n                            \"path\": \"/usr/share/ca-certificates\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"iptableslock\",\n                        \"hostPath\": {\n                            \"path\": \"/run/xtables.lock\",\n                            \"type\": \"FileOrCreate\"\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"kube-proxy\",\n                        \"image\": \"k8s.gcr.io/kube-proxy-amd64:v1.21.1\",\n                        \"command\": [\n                            \"/usr/local/bin/kube-proxy\"\n                        ],\n                        \"args\": [\n                            \"--cluster-cidr=100.96.0.0/11\",\n                            \"--conntrack-max-per-core=131072\",\n                            \"--hostname-override=ip-172-20-58-250.eu-west-1.compute.internal\",\n                            \"--kubeconfig=/var/lib/kube-proxy/kubeconfig\",\n                            \"--master=https://api.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io\",\n                            \"--oom-score-adj=-998\",\n                            \"--v=2\",\n                            
\"--logtostderr=false\",\n                            \"--alsologtostderr\",\n                            \"--log-file=/var/log/kube-proxy.log\"\n                        ],\n                        \"resources\": {\n                            \"requests\": {\n                                \"cpu\": \"100m\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"logfile\",\n                                \"mountPath\": \"/var/log/kube-proxy.log\"\n                            },\n                            {\n                                \"name\": \"kubeconfig\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/lib/kube-proxy/kubeconfig\"\n                            },\n                            {\n                                \"name\": \"modules\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/lib/modules\"\n                            },\n                            {\n                                \"name\": \"ssl-certs-hosts\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/etc/ssl/certs\"\n                            },\n                            {\n                                \"name\": \"iptableslock\",\n                                \"mountPath\": \"/run/xtables.lock\"\n                            }\n                        ],\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\",\n                        \"securityContext\": {\n                            \"privileged\": true\n                        }\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n  
              \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"ClusterFirst\",\n                \"nodeName\": \"ip-172-20-58-250.eu-west-1.compute.internal\",\n                \"hostNetwork\": true,\n                \"securityContext\": {},\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"key\": \"CriticalAddonsOnly\",\n                        \"operator\": \"Exists\"\n                    },\n                    {\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    }\n                ],\n                \"priorityClassName\": \"system-node-critical\",\n                \"priority\": 2000001000,\n                \"enableServiceLinks\": true,\n                \"preemptionPolicy\": \"PreemptLowerPriority\"\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-06-16T16:16:46Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-06-16T16:16:48Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-06-16T16:16:48Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": 
null,\n                        \"lastTransitionTime\": \"2021-06-16T16:16:46Z\"\n                    }\n                ],\n                \"hostIP\": \"172.20.58.250\",\n                \"podIP\": \"172.20.58.250\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"172.20.58.250\"\n                    }\n                ],\n                \"startTime\": \"2021-06-16T16:16:46Z\",\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"kube-proxy\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2021-06-16T16:16:47Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"k8s.gcr.io/kube-proxy-amd64:v1.21.1\",\n                        \"imageID\": \"docker://sha256:4359e752b5961a66ad2e9cdb4aaa57e2782d2dd2f05a0cf76946f5e5caa0fd88\",\n                        \"containerID\": \"docker://1bc3f98a6676324091c1bb044d6137f5480ba97f0eb7259a2cb7d7142d659262\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Burstable\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-ip-172-20-62-139.eu-west-1.compute.internal\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"a51f451f-0aa0-4c46-b181-07274a9b4772\",\n                \"resourceVersion\": \"937\",\n                \"creationTimestamp\": \"2021-06-16T16:18:12Z\",\n                \"labels\": {\n                    \"k8s-app\": \"kube-proxy\",\n                    \"tier\": \"node\"\n                },\n                \"annotations\": {\n                    \"kubernetes.io/config.hash\": \"1bc1fbac25e963cd5dd5f0b8d6f0167f\",\n         
           \"kubernetes.io/config.mirror\": \"1bc1fbac25e963cd5dd5f0b8d6f0167f\",\n                    \"kubernetes.io/config.seen\": \"2021-06-16T16:16:51.778807666Z\",\n                    \"kubernetes.io/config.source\": \"file\",\n                    \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"v1\",\n                        \"kind\": \"Node\",\n                        \"name\": \"ip-172-20-62-139.eu-west-1.compute.internal\",\n                        \"uid\": \"0c7ebb44-f33c-43ea-a856-11a5ed1b8f99\",\n                        \"controller\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"logfile\",\n                        \"hostPath\": {\n                            \"path\": \"/var/log/kube-proxy.log\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"kubeconfig\",\n                        \"hostPath\": {\n                            \"path\": \"/var/lib/kube-proxy/kubeconfig\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"modules\",\n                        \"hostPath\": {\n                            \"path\": \"/lib/modules\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"ssl-certs-hosts\",\n                        \"hostPath\": {\n                            \"path\": \"/usr/share/ca-certificates\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"iptableslock\",\n       
                 \"hostPath\": {\n                            \"path\": \"/run/xtables.lock\",\n                            \"type\": \"FileOrCreate\"\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"kube-proxy\",\n                        \"image\": \"k8s.gcr.io/kube-proxy-amd64:v1.21.1\",\n                        \"command\": [\n                            \"/usr/local/bin/kube-proxy\"\n                        ],\n                        \"args\": [\n                            \"--cluster-cidr=100.96.0.0/11\",\n                            \"--conntrack-max-per-core=131072\",\n                            \"--hostname-override=ip-172-20-62-139.eu-west-1.compute.internal\",\n                            \"--kubeconfig=/var/lib/kube-proxy/kubeconfig\",\n                            \"--master=https://api.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io\",\n                            \"--oom-score-adj=-998\",\n                            \"--v=2\",\n                            \"--logtostderr=false\",\n                            \"--alsologtostderr\",\n                            \"--log-file=/var/log/kube-proxy.log\"\n                        ],\n                        \"resources\": {\n                            \"requests\": {\n                                \"cpu\": \"100m\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"logfile\",\n                                \"mountPath\": \"/var/log/kube-proxy.log\"\n                            },\n                            {\n                                \"name\": \"kubeconfig\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/lib/kube-proxy/kubeconfig\"\n                            },\n                         
   {\n                                \"name\": \"modules\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/lib/modules\"\n                            },\n                            {\n                                \"name\": \"ssl-certs-hosts\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/etc/ssl/certs\"\n                            },\n                            {\n                                \"name\": \"iptableslock\",\n                                \"mountPath\": \"/run/xtables.lock\"\n                            }\n                        ],\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\",\n                        \"securityContext\": {\n                            \"privileged\": true\n                        }\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"ClusterFirst\",\n                \"nodeName\": \"ip-172-20-62-139.eu-west-1.compute.internal\",\n                \"hostNetwork\": true,\n                \"securityContext\": {},\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"key\": \"CriticalAddonsOnly\",\n                        \"operator\": \"Exists\"\n                    },\n                    {\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    }\n                ],\n                \"priorityClassName\": \"system-node-critical\",\n                \"priority\": 2000001000,\n                \"enableServiceLinks\": true,\n                \"preemptionPolicy\": 
\"PreemptLowerPriority\"\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-06-16T16:16:52Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-06-16T16:16:54Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-06-16T16:16:54Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-06-16T16:16:52Z\"\n                    }\n                ],\n                \"hostIP\": \"172.20.62.139\",\n                \"podIP\": \"172.20.62.139\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"172.20.62.139\"\n                    }\n                ],\n                \"startTime\": \"2021-06-16T16:16:52Z\",\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"kube-proxy\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2021-06-16T16:16:53Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        
\"restartCount\": 0,\n                        \"image\": \"k8s.gcr.io/kube-proxy-amd64:v1.21.1\",\n                        \"imageID\": \"docker://sha256:4359e752b5961a66ad2e9cdb4aaa57e2782d2dd2f05a0cf76946f5e5caa0fd88\",\n                        \"containerID\": \"docker://2e4e95d6929f96e065beea1fb39b8e351eda9e13b1edbeb9d5adad0f79063d03\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Burstable\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-scheduler-ip-172-20-37-218.eu-west-1.compute.internal\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"43ea9934-7cbb-46ca-acab-a4acbc41794c\",\n                \"resourceVersion\": \"637\",\n                \"creationTimestamp\": \"2021-06-16T16:17:09Z\",\n                \"labels\": {\n                    \"k8s-app\": \"kube-scheduler\"\n                },\n                \"annotations\": {\n                    \"kubernetes.io/config.hash\": \"c778eab5c11b7223d0171dbad8b6646f\",\n                    \"kubernetes.io/config.mirror\": \"c778eab5c11b7223d0171dbad8b6646f\",\n                    \"kubernetes.io/config.seen\": \"2021-06-16T16:14:33.947363491Z\",\n                    \"kubernetes.io/config.source\": \"file\",\n                    \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"v1\",\n                        \"kind\": \"Node\",\n                        \"name\": \"ip-172-20-37-218.eu-west-1.compute.internal\",\n                        \"uid\": \"0a98987f-4d9c-452e-a692-5a74200a8951\",\n                        \"controller\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"varlibkubescheduler\",\n        
                \"hostPath\": {\n                            \"path\": \"/var/lib/kube-scheduler\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"logfile\",\n                        \"hostPath\": {\n                            \"path\": \"/var/log/kube-scheduler.log\",\n                            \"type\": \"\"\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"kube-scheduler\",\n                        \"image\": \"k8s.gcr.io/kube-scheduler-amd64:v1.21.1\",\n                        \"command\": [\n                            \"/usr/local/bin/kube-scheduler\"\n                        ],\n                        \"args\": [\n                            \"--config=/var/lib/kube-scheduler/config.yaml\",\n                            \"--leader-elect=true\",\n                            \"--v=2\",\n                            \"--logtostderr=false\",\n                            \"--alsologtostderr\",\n                            \"--log-file=/var/log/kube-scheduler.log\"\n                        ],\n                        \"resources\": {\n                            \"requests\": {\n                                \"cpu\": \"100m\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"varlibkubescheduler\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/lib/kube-scheduler\"\n                            },\n                            {\n                                \"name\": \"logfile\",\n                                \"mountPath\": \"/var/log/kube-scheduler.log\"\n                            }\n                        ],\n                        
\"livenessProbe\": {\n                            \"httpGet\": {\n                                \"path\": \"/healthz\",\n                                \"port\": 10251,\n                                \"host\": \"127.0.0.1\",\n                                \"scheme\": \"HTTP\"\n                            },\n                            \"initialDelaySeconds\": 15,\n                            \"timeoutSeconds\": 15,\n                            \"periodSeconds\": 10,\n                            \"successThreshold\": 1,\n                            \"failureThreshold\": 3\n                        },\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\"\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"ClusterFirst\",\n                \"nodeName\": \"ip-172-20-37-218.eu-west-1.compute.internal\",\n                \"hostNetwork\": true,\n                \"securityContext\": {},\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"key\": \"CriticalAddonsOnly\",\n                        \"operator\": \"Exists\"\n                    },\n                    {\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    }\n                ],\n                \"priorityClassName\": \"system-cluster-critical\",\n                \"priority\": 2000000000,\n                \"enableServiceLinks\": true,\n                \"preemptionPolicy\": \"PreemptLowerPriority\"\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": 
\"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-06-16T16:14:35Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-06-16T16:14:53Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-06-16T16:14:53Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-06-16T16:14:35Z\"\n                    }\n                ],\n                \"hostIP\": \"172.20.37.218\",\n                \"podIP\": \"172.20.37.218\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"172.20.37.218\"\n                    }\n                ],\n                \"startTime\": \"2021-06-16T16:14:35Z\",\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"kube-scheduler\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2021-06-16T16:14:51Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"k8s.gcr.io/kube-scheduler-amd64:v1.21.1\",\n                        \"imageID\": 
\"docker://sha256:a4183b88f6e65972c4b176b43ca59de31868635a7e43805f4c6e26203de1742f\",\n                        \"containerID\": \"docker://9dc8e2c1cd4a8abf184d378671cc33c531a63ce37889ed0a0be4b16d0f3193fe\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Burstable\"\n            }\n        }\n    ]\n}\n==== START logs for container autoscaler of pod kube-system/coredns-autoscaler-6f594f4c58-cds4r ====\nI0616 16:17:29.349653       1 autoscaler.go:49] Scaling Namespace: kube-system, Target: deployment/coredns\nI0616 16:17:29.604180       1 autoscaler_server.go:157] ConfigMap not found: configmaps \"coredns-autoscaler\" not found, will create one with default params\nI0616 16:17:29.606133       1 k8sclient.go:147] Created ConfigMap coredns-autoscaler in namespace kube-system\nI0616 16:17:29.606151       1 plugin.go:50] Set control mode to linear\nI0616 16:17:29.606157       1 linear_controller.go:60] ConfigMap version change (old:  new: 753) - rebuilding params\nI0616 16:17:29.606162       1 linear_controller.go:61] Params from apiserver: \n{\"coresPerReplica\":256,\"nodesPerReplica\":16,\"preventSinglePointFailure\":true}\nI0616 16:17:29.606215       1 linear_controller.go:80] Defaulting min replicas count to 1 for linear controller\nI0616 16:17:29.608379       1 k8sclient.go:272] Cluster status: SchedulableNodes[5], SchedulableCores[10]\nI0616 16:17:29.608392       1 k8sclient.go:273] Replicas are not as expected : updating replicas from 1 to 2\n==== END logs for container autoscaler of pod kube-system/coredns-autoscaler-6f594f4c58-cds4r ====\n==== START logs for container coredns of pod kube-system/coredns-f45c4bf76-45v5f ====\nW0616 16:17:30.575351       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice\nW0616 16:17:30.576198       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated 
in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice\n.:53\n[INFO] plugin/reload: Running configuration MD5 = ce1e85197887ce49f3d78b19ce3dfa68\nCoreDNS-1.8.3\nlinux/amd64, go1.16, 4293992\n==== END logs for container coredns of pod kube-system/coredns-f45c4bf76-45v5f ====\n==== START logs for container coredns of pod kube-system/coredns-f45c4bf76-hf5wf ====\nW0616 16:17:33.060457       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice\nW0616 16:17:33.061543       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice\n.:53\n[INFO] plugin/reload: Running configuration MD5 = ce1e85197887ce49f3d78b19ce3dfa68\nCoreDNS-1.8.3\nlinux/amd64, go1.16, 4293992\n==== END logs for container coredns of pod kube-system/coredns-f45c4bf76-hf5wf ====\n==== START logs for container dns-controller of pod kube-system/dns-controller-76c6c8f758-gn5kz ====\ndns-controller version 0.1\nI0616 16:16:13.825788       1 main.go:199] initializing the watch controllers, namespace: \"\"\nI0616 16:16:13.825823       1 main.go:223] Ingress controller disabled\nI0616 16:16:13.825852       1 dnscontroller.go:108] starting DNS controller\nI0616 16:16:13.825870       1 node.go:60] starting node controller\nI0616 16:16:13.825877       1 dnscontroller.go:170] scope not yet ready: node\nI0616 16:16:13.825882       1 pod.go:60] starting pod controller\nI0616 16:16:13.826749       1 service.go:60] starting service controller\nI0616 16:16:13.860199       1 dnscontroller.go:625] Update desired state: node/ip-172-20-37-218.eu-west-1.compute.internal: [{A node/ip-172-20-37-218.eu-west-1.compute.internal/internal 172.20.37.218 true} {A node/ip-172-20-37-218.eu-west-1.compute.internal/external 34.243.197.33 true} {A node/role=master/internal 172.20.37.218 true} {A node/role=master/external 34.243.197.33 true} {A 
node/role=master/ ip-172-20-37-218.eu-west-1.compute.internal true} {A node/role=master/ ip-172-20-37-218.eu-west-1.compute.internal true} {A node/role=master/ ec2-34-243-197-33.eu-west-1.compute.amazonaws.com true}]\nI0616 16:16:18.826852       1 dnscache.go:74] querying all DNS zones (no cached results)\nI0616 16:16:26.950032       1 dnscontroller.go:625] Update desired state: pod/kube-system/kops-controller-qtzrq: [{A kops-controller.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io. 172.20.37.218 false}]\nI0616 16:16:29.266195       1 dnscontroller.go:274] Using default TTL of 1m0s\nI0616 16:16:29.266303       1 dnscontroller.go:482] Querying all dnsprovider records for zone \"test-cncf-aws.k8s.io.\"\nI0616 16:16:31.426985       1 dnscontroller.go:585] Adding DNS changes to batch {A kops-controller.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io.} [172.20.37.218]\nI0616 16:16:31.427021       1 dnscontroller.go:323] Applying DNS changeset for zone test-cncf-aws.k8s.io.::ZEMLNXIIWQ0RV\nI0616 16:16:34.437652       1 dnscontroller.go:625] Update desired state: pod/kube-system/kube-apiserver-ip-172-20-37-218.eu-west-1.compute.internal: [{_alias api.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io. node/ip-172-20-37-218.eu-west-1.compute.internal/external false}]\nI0616 16:16:36.648477       1 dnscontroller.go:274] Using default TTL of 1m0s\nI0616 16:16:36.648521       1 dnscontroller.go:482] Querying all dnsprovider records for zone \"test-cncf-aws.k8s.io.\"\nI0616 16:16:38.337678       1 dnscontroller.go:585] Adding DNS changes to batch {A api.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io.} [34.243.197.33]\nI0616 16:16:38.337715       1 dnscontroller.go:323] Applying DNS changeset for zone test-cncf-aws.k8s.io.::ZEMLNXIIWQ0RV\nI0616 16:16:44.440993       1 dnscontroller.go:625] Update desired state: pod/kube-system/kube-apiserver-ip-172-20-37-218.eu-west-1.compute.internal: [{_alias api.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io. 
node/ip-172-20-37-218.eu-west-1.compute.internal/external false} {A api.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io. 172.20.37.218 false}]\nI0616 16:16:48.512509       1 dnscontroller.go:274] Using default TTL of 1m0s\nI0616 16:16:48.512562       1 dnscontroller.go:482] Querying all dnsprovider records for zone \"test-cncf-aws.k8s.io.\"\nI0616 16:16:50.704319       1 dnscontroller.go:585] Adding DNS changes to batch {A api.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io.} [172.20.37.218]\nI0616 16:16:50.704367       1 dnscontroller.go:323] Applying DNS changeset for zone test-cncf-aws.k8s.io.::ZEMLNXIIWQ0RV\nI0616 16:17:16.106902       1 dnscontroller.go:625] Update desired state: node/ip-172-20-57-162.eu-west-1.compute.internal: [{A node/ip-172-20-57-162.eu-west-1.compute.internal/internal 172.20.57.162 true} {A node/ip-172-20-57-162.eu-west-1.compute.internal/external 3.249.232.131 true} {A node/role=node/internal 172.20.57.162 true} {A node/role=node/external 3.249.232.131 true} {A node/role=node/ ip-172-20-57-162.eu-west-1.compute.internal true} {A node/role=node/ ip-172-20-57-162.eu-west-1.compute.internal true} {A node/role=node/ ec2-3-249-232-131.eu-west-1.compute.amazonaws.com true}]\nI0616 16:17:16.499821       1 dnscontroller.go:625] Update desired state: node/ip-172-20-58-250.eu-west-1.compute.internal: [{A node/ip-172-20-58-250.eu-west-1.compute.internal/internal 172.20.58.250 true} {A node/ip-172-20-58-250.eu-west-1.compute.internal/external 3.249.4.52 true} {A node/role=node/internal 172.20.58.250 true} {A node/role=node/external 3.249.4.52 true} {A node/role=node/ ip-172-20-58-250.eu-west-1.compute.internal true} {A node/role=node/ ip-172-20-58-250.eu-west-1.compute.internal true} {A node/role=node/ ec2-3-249-4-52.eu-west-1.compute.amazonaws.com true}]\nI0616 16:17:19.730971       1 dnscontroller.go:625] Update desired state: node/ip-172-20-52-203.eu-west-1.compute.internal: [{A node/ip-172-20-52-203.eu-west-1.compute.internal/internal 
172.20.52.203 true} {A node/ip-172-20-52-203.eu-west-1.compute.internal/external 52.51.66.205 true} {A node/role=node/internal 172.20.52.203 true} {A node/role=node/external 52.51.66.205 true} {A node/role=node/ ip-172-20-52-203.eu-west-1.compute.internal true} {A node/role=node/ ip-172-20-52-203.eu-west-1.compute.internal true} {A node/role=node/ ec2-52-51-66-205.eu-west-1.compute.amazonaws.com true}]\nI0616 16:17:22.261386       1 dnscontroller.go:625] Update desired state: node/ip-172-20-62-139.eu-west-1.compute.internal: [{A node/ip-172-20-62-139.eu-west-1.compute.internal/internal 172.20.62.139 true} {A node/ip-172-20-62-139.eu-west-1.compute.internal/external 34.245.105.52 true} {A node/role=node/internal 172.20.62.139 true} {A node/role=node/external 34.245.105.52 true} {A node/role=node/ ip-172-20-62-139.eu-west-1.compute.internal true} {A node/role=node/ ip-172-20-62-139.eu-west-1.compute.internal true} {A node/role=node/ ec2-34-245-105-52.eu-west-1.compute.amazonaws.com true}]\n==== END logs for container dns-controller of pod kube-system/dns-controller-76c6c8f758-gn5kz ====\n==== START logs for container etcd-manager of pod kube-system/etcd-manager-events-ip-172-20-37-218.eu-west-1.compute.internal ====\netcd-manager\nI0616 16:15:00.932206    6002 volumes.go:86] AWS API Request: ec2metadata/GetToken\nI0616 16:15:00.934825    6002 volumes.go:86] AWS API Request: ec2metadata/GetDynamicData\nI0616 16:15:00.935558    6002 volumes.go:86] AWS API Request: ec2metadata/GetMetadata\nI0616 16:15:00.936216    6002 volumes.go:86] AWS API Request: ec2metadata/GetMetadata\nI0616 16:15:00.936829    6002 volumes.go:86] AWS API Request: ec2metadata/GetMetadata\nI0616 16:15:00.937465    6002 main.go:305] Mounting available etcd volumes matching tags [k8s.io/etcd/events k8s.io/role/master=1 kubernetes.io/cluster/e2e-9c20857a72-da63e.test-cncf-aws.k8s.io=owned]; nameTag=k8s.io/etcd/events\nI0616 16:15:00.939752    6002 volumes.go:86] AWS API Request: 
ec2/DescribeVolumes\nI0616 16:15:01.232409    6002 mounter.go:304] Trying to mount master volume: \"vol-06a8412b25833dd44\"\nI0616 16:15:01.232429    6002 volumes.go:331] Trying to attach volume \"vol-06a8412b25833dd44\" at \"/dev/xvdu\"\nI0616 16:15:01.232609    6002 volumes.go:86] AWS API Request: ec2/AttachVolume\nW0616 16:15:01.511963    6002 volumes.go:343] Invalid value '/dev/xvdu' for unixDevice. Attachment point /dev/xvdu is already in use\nI0616 16:15:01.511980    6002 volumes.go:331] Trying to attach volume \"vol-06a8412b25833dd44\" at \"/dev/xvdv\"\nI0616 16:15:01.512120    6002 volumes.go:86] AWS API Request: ec2/AttachVolume\nI0616 16:15:02.084974    6002 volumes.go:349] AttachVolume request returned {\n  AttachTime: 2021-06-16 16:15:01.972 +0000 UTC,\n  Device: \"/dev/xvdv\",\n  InstanceId: \"i-0a4ab1b5032c4d5e7\",\n  State: \"attaching\",\n  VolumeId: \"vol-06a8412b25833dd44\"\n}\nI0616 16:15:02.085197    6002 volumes.go:86] AWS API Request: ec2/DescribeVolumes\nI0616 16:15:02.315385    6002 mounter.go:318] Currently attached volumes: [0xc0000ac100]\nI0616 16:15:02.315406    6002 mounter.go:72] Master volume \"vol-06a8412b25833dd44\" is attached at \"/dev/xvdv\"\nI0616 16:15:02.315423    6002 mounter.go:86] Doing safe-format-and-mount of /dev/xvdv to /mnt/master-vol-06a8412b25833dd44\nI0616 16:15:02.315437    6002 volumes.go:234] volume vol-06a8412b25833dd44 not mounted at /rootfs/dev/xvdv\nI0616 16:15:02.315597    6002 volumes.go:263] nvme path not found \"/rootfs/dev/disk/by-id/nvme-Amazon_Elastic_Block_Store_vol06a8412b25833dd44\"\nI0616 16:15:02.315609    6002 volumes.go:251] volume vol-06a8412b25833dd44 not mounted at nvme-Amazon_Elastic_Block_Store_vol06a8412b25833dd44\nI0616 16:15:02.315615    6002 mounter.go:121] Waiting for volume \"vol-06a8412b25833dd44\" to be mounted\nI0616 16:15:03.315720    6002 volumes.go:234] volume vol-06a8412b25833dd44 not mounted at /rootfs/dev/xvdv\nI0616 16:15:03.315950    6002 volumes.go:248] found nvme volume 
\"nvme-Amazon_Elastic_Block_Store_vol06a8412b25833dd44\" at \"/dev/nvme2n1\"\nI0616 16:15:03.316044    6002 mounter.go:125] Found volume \"vol-06a8412b25833dd44\" mounted at device \"/dev/nvme2n1\"\nI0616 16:15:03.316967    6002 mounter.go:171] Creating mount directory \"/rootfs/mnt/master-vol-06a8412b25833dd44\"\nI0616 16:15:03.317203    6002 mounter.go:176] Mounting device \"/dev/nvme2n1\" on \"/mnt/master-vol-06a8412b25833dd44\"\nI0616 16:15:03.317298    6002 mount_linux.go:446] Attempting to determine if disk \"/dev/nvme2n1\" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/nvme2n1])\nI0616 16:15:03.317390    6002 nsenter.go:132] Running nsenter command: nsenter [--mount=/rootfs/proc/1/ns/mnt -- blkid -p -s TYPE -s PTTYPE -o export /dev/nvme2n1]\nI0616 16:15:03.338546    6002 mount_linux.go:449] Output: \"\"\nI0616 16:15:03.338572    6002 mount_linux.go:408] Disk \"/dev/nvme2n1\" appears to be unformatted, attempting to format as type: \"ext4\" with options: [-F -m0 /dev/nvme2n1]\nI0616 16:15:03.338591    6002 nsenter.go:132] Running nsenter command: nsenter [--mount=/rootfs/proc/1/ns/mnt -- mkfs.ext4 -F -m0 /dev/nvme2n1]\nI0616 16:15:03.570934    6002 mount_linux.go:418] Disk successfully formatted (mkfs): ext4 - /dev/nvme2n1 /mnt/master-vol-06a8412b25833dd44\nI0616 16:15:03.570953    6002 mount_linux.go:436] Attempting to mount disk /dev/nvme2n1 in ext4 format at /mnt/master-vol-06a8412b25833dd44\nI0616 16:15:03.570970    6002 nsenter.go:80] nsenter mount /dev/nvme2n1 /mnt/master-vol-06a8412b25833dd44 ext4 [defaults]\nI0616 16:15:03.570993    6002 nsenter.go:132] Running nsenter command: nsenter [--mount=/rootfs/proc/1/ns/mnt -- /bin/systemd-run --description=Kubernetes transient mount for /mnt/master-vol-06a8412b25833dd44 --scope -- /bin/mount -t ext4 -o defaults /dev/nvme2n1 /mnt/master-vol-06a8412b25833dd44]\nI0616 16:15:03.600878    6002 nsenter.go:84] Output of mounting /dev/nvme2n1 to /mnt/master-vol-06a8412b25833dd44: Running 
scope as unit: run-r3214611e11ce406c86478edd809c972e.scope\nI0616 16:15:03.600903    6002 mount_linux.go:446] Attempting to determine if disk \"/dev/nvme2n1\" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/nvme2n1])\nI0616 16:15:03.600925    6002 nsenter.go:132] Running nsenter command: nsenter [--mount=/rootfs/proc/1/ns/mnt -- blkid -p -s TYPE -s PTTYPE -o export /dev/nvme2n1]\nI0616 16:15:03.617082    6002 mount_linux.go:449] Output: \"DEVNAME=/dev/nvme2n1\\nTYPE=ext4\\n\"\nI0616 16:15:03.617104    6002 resizefs_linux.go:53] ResizeFS.Resize - Expanding mounted volume /dev/nvme2n1\nI0616 16:15:03.617117    6002 nsenter.go:132] Running nsenter command: nsenter [--mount=/rootfs/proc/1/ns/mnt -- resize2fs /dev/nvme2n1]\nI0616 16:15:03.644454    6002 resizefs_linux.go:68] Device /dev/nvme2n1 resized successfully\nI0616 16:15:03.657469    6002 mount_linux.go:206] Detected OS with systemd\nI0616 16:15:03.658368    6002 mounter.go:262] device \"/dev/nvme2n1\" did not evaluate as a symlink: lstat /dev/nvme2n1: no such file or directory\nI0616 16:15:03.658391    6002 mounter.go:262] device \"/dev/nvme2n1\" did not evaluate as a symlink: lstat /dev/nvme2n1: no such file or directory\nI0616 16:15:03.658396    6002 mounter.go:242] matched device \"/dev/nvme2n1\" and \"/dev/nvme2n1\" via '\\x00'\nI0616 16:15:03.658405    6002 mounter.go:94] mounted master volume \"vol-06a8412b25833dd44\" on /mnt/master-vol-06a8412b25833dd44\nI0616 16:15:03.658419    6002 main.go:320] discovered IP address: 172.20.37.218\nI0616 16:15:03.658424    6002 main.go:325] Setting data dir to /rootfs/mnt/master-vol-06a8412b25833dd44\nI0616 16:15:03.769329    6002 certs.go:183] generating certificate for \"etcd-manager-server-etcd-events-a\"\nI0616 16:15:03.955976    6002 certs.go:183] generating certificate for \"etcd-manager-client-etcd-events-a\"\nI0616 16:15:03.960890    6002 server.go:87] starting GRPC server using TLS, 
ServerName=\"etcd-manager-server-etcd-events-a\"\nI0616 16:15:03.961389    6002 main.go:474] peerClientIPs: [172.20.37.218]\nI0616 16:15:04.063822    6002 certs.go:183] generating certificate for \"etcd-manager-etcd-events-a\"\nI0616 16:15:04.065704    6002 server.go:105] GRPC server listening on \"172.20.37.218:3997\"\nI0616 16:15:04.066344    6002 volumes.go:86] AWS API Request: ec2/DescribeVolumes\nI0616 16:15:04.261886    6002 volumes.go:86] AWS API Request: ec2/DescribeInstances\nI0616 16:15:04.303568    6002 peers.go:115] found new candidate peer from discovery: etcd-events-a [{172.20.37.218 0} {172.20.37.218 0}]\nI0616 16:15:04.303614    6002 hosts.go:84] hosts update: primary=map[], fallbacks=map[etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:[172.20.37.218 172.20.37.218]], final=map[172.20.37.218:[etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]]\nI0616 16:15:04.303793    6002 peers.go:295] connecting to peer \"etcd-events-a\" with TLS policy, servername=\"etcd-manager-server-etcd-events-a\"\nI0616 16:15:06.066699    6002 controller.go:189] starting controller iteration\nI0616 16:15:06.067081    6002 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.37.218:3997\" > leadership_token:\"s4DpK5GzrquYzrevaoLxpA\" healthy:<id:\"etcd-events-a\" endpoints:\"172.20.37.218:3997\" > > \nI0616 16:15:06.067239    6002 commands.go:41] refreshing commands\nI0616 16:15:06.067339    6002 s3context.go:334] product_uuid is \"ec2fb2a9-268e-6659-a969-6455e6509bc7\", assuming running on EC2\nI0616 16:15:06.068667    6002 s3context.go:166] got region from metadata: \"eu-west-1\"\nI0616 16:15:06.091072    6002 s3context.go:213] found bucket in region \"us-west-1\"\nI0616 16:15:06.755146    6002 vfs.go:120] listed commands in s3://k8s-kops-prow/e2e-9c20857a72-da63e.test-cncf-aws.k8s.io/backups/etcd/events/control: 0 commands\nI0616 
16:15:06.755173    6002 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-9c20857a72-da63e.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-spec\"\nI0616 16:15:16.913394    6002 controller.go:189] starting controller iteration\nI0616 16:15:16.913482    6002 controller.go:266] Broadcasting leadership assertion with token \"s4DpK5GzrquYzrevaoLxpA\"\nI0616 16:15:16.913853    6002 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.37.218:3997\" > leadership_token:\"s4DpK5GzrquYzrevaoLxpA\" healthy:<id:\"etcd-events-a\" endpoints:\"172.20.37.218:3997\" > > \nI0616 16:15:16.914026    6002 controller.go:295] I am leader with token \"s4DpK5GzrquYzrevaoLxpA\"\nI0616 16:15:16.915320    6002 controller.go:302] etcd cluster state: etcdClusterState\n  members:\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"172.20.37.218:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:3995\" > }\nI0616 16:15:16.915399    6002 controller.go:303] etcd cluster members: map[]\nI0616 16:15:16.915412    6002 controller.go:641] sending member map to all peers: \nI0616 16:15:16.916256    6002 commands.go:38] not refreshing commands - TTL not hit\nI0616 16:15:16.916276    6002 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-9c20857a72-da63e.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created\"\nI0616 16:15:17.490171    6002 controller.go:359] detected that there is no existing cluster\nI0616 16:15:17.490188    6002 commands.go:41] refreshing commands\nI0616 16:15:17.709727    6002 vfs.go:120] listed commands in 
s3://k8s-kops-prow/e2e-9c20857a72-da63e.test-cncf-aws.k8s.io/backups/etcd/events/control: 0 commands
I0616 16:15:17.709749    6002 s3fs.go:290] Reading file "s3://k8s-kops-prow/e2e-9c20857a72-da63e.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-spec"
I0616 16:15:17.859006    6002 controller.go:641] sending member map to all peers: members:<name:"etcd-events-a" dns:"etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io" addresses:"172.20.37.218" > 
I0616 16:15:17.859268    6002 etcdserver.go:248] updating hosts: map[172.20.37.218:[etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]]
I0616 16:15:17.859282    6002 hosts.go:84] hosts update: primary=map[172.20.37.218:[etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:[172.20.37.218 172.20.37.218]], final=map[172.20.37.218:[etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]]
I0616 16:15:17.859339    6002 hosts.go:181] skipping update of unchanged /etc/hosts
I0616 16:15:17.859422    6002 newcluster.go:136] starting new etcd cluster with [etcdClusterPeerInfo{peer=peer{id:"etcd-events-a" endpoints:"172.20.37.218:3997" }, info=cluster_name:"etcd-events" node_configuration:<name:"etcd-events-a" peer_urls:"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2381" client_urls:"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002" quarantined_client_urls:"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:3995" > }]
I0616 16:15:17.859720    6002 newcluster.go:153] JoinClusterResponse: 
I0616 16:15:17.861090    6002 etcdserver.go:556] starting etcd with state new_cluster:true cluster:<cluster_token:"-UIPY-sC50daSlFKDie08w" nodes:<name:"etcd-events-a" peer_urls:"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2381" client_urls:"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002" quarantined_client_urls:"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:3995" tls_enabled:true > > etcd_version:"3.4.13" quarantined:true 
I0616 16:15:17.861182    6002 etcdserver.go:565] starting etcd with datadir /rootfs/mnt/master-vol-06a8412b25833dd44/data/-UIPY-sC50daSlFKDie08w
I0616 16:15:17.861641    6002 pki.go:59] adding peerClientIPs [172.20.37.218]
I0616 16:15:17.861676    6002 pki.go:67] generating peer keypair for etcd: {CommonName:etcd-events-a Organization:[] AltNames:{DNSNames:[etcd-events-a etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io] IPs:[172.20.37.218 127.0.0.1]} Usages:[2 1]}
I0616 16:15:18.055115    6002 certs.go:183] generating certificate for "etcd-events-a"
I0616 16:15:18.058215    6002 pki.go:110] building client-serving certificate: {CommonName:etcd-events-a Organization:[] AltNames:{DNSNames:[etcd-events-a etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io] IPs:[127.0.0.1]} Usages:[1 2]}
I0616 16:15:18.327517    6002 certs.go:183] generating certificate for "etcd-events-a"
I0616 16:15:18.416295    6002 certs.go:183] generating certificate for "etcd-events-a"
I0616 16:15:18.419338    6002 etcdprocess.go:203] executing command /opt/etcd-v3.4.13-linux-amd64/etcd [/opt/etcd-v3.4.13-linux-amd64/etcd]
I0616 16:15:18.425266    6002 newcluster.go:171] JoinClusterResponse: 
2021-06-16 16:15:18.426853 I | pkg/flags: recognized and used environment variable ETCD_ADVERTISE_CLIENT_URLS=https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:3995
2021-06-16 16:15:18.426958 I | pkg/flags: recognized and used environment variable ETCD_CERT_FILE=/rootfs/mnt/master-vol-06a8412b25833dd44/pki/-UIPY-sC50daSlFKDie08w/clients/server.crt
2021-06-16 16:15:18.427022 I | pkg/flags: recognized and used environment variable ETCD_CLIENT_CERT_AUTH=true
2021-06-16 16:15:18.427098 I | pkg/flags: recognized and used environment variable ETCD_DATA_DIR=/rootfs/mnt/master-vol-06a8412b25833dd44/data/-UIPY-sC50daSlFKDie08w
2021-06-16 16:15:18.427154 I | pkg/flags: recognized and used environment variable ETCD_ENABLE_V2=false
2021-06-16 16:15:18.427240 I | pkg/flags: recognized and used environment variable ETCD_INITIAL_ADVERTISE_PEER_URLS=https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2381
2021-06-16 16:15:18.427328 I | pkg/flags: recognized and used environment variable ETCD_INITIAL_CLUSTER=etcd-events-a=https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2381
2021-06-16 16:15:18.427424 I | pkg/flags: recognized and used environment variable ETCD_INITIAL_CLUSTER_STATE=new
2021-06-16 16:15:18.427512 I | pkg/flags: recognized and used environment variable ETCD_INITIAL_CLUSTER_TOKEN=-UIPY-sC50daSlFKDie08w
2021-06-16 16:15:18.427587 I | pkg/flags: recognized and used environment variable ETCD_KEY_FILE=/rootfs/mnt/master-vol-06a8412b25833dd44/pki/-UIPY-sC50daSlFKDie08w/clients/server.key
2021-06-16 16:15:18.427675 I | pkg/flags: recognized and used environment variable ETCD_LISTEN_CLIENT_URLS=https://0.0.0.0:3995
2021-06-16 16:15:18.427746 I | pkg/flags: recognized and used environment variable ETCD_LISTEN_PEER_URLS=https://0.0.0.0:2381
2021-06-16 16:15:18.427835 I | pkg/flags: recognized and used environment variable ETCD_LOG_OUTPUTS=stdout
2021-06-16 16:15:18.427909 I | pkg/flags: recognized and used environment variable ETCD_LOGGER=zap
2021-06-16 16:15:18.428009 I | pkg/flags: recognized and used environment variable ETCD_NAME=etcd-events-a
2021-06-16 16:15:18.428097 I | pkg/flags: recognized and used environment variable ETCD_PEER_CERT_FILE=/rootfs/mnt/master-vol-06a8412b25833dd44/pki/-UIPY-sC50daSlFKDie08w/peers/me.crt
2021-06-16 16:15:18.428180 I | pkg/flags: recognized and used environment variable ETCD_PEER_CLIENT_CERT_AUTH=true
2021-06-16 16:15:18.428250 I | pkg/flags: recognized and used environment variable ETCD_PEER_KEY_FILE=/rootfs/mnt/master-vol-06a8412b25833dd44/pki/-UIPY-sC50daSlFKDie08w/peers/me.key
2021-06-16 16:15:18.428325 I | pkg/flags: recognized and used environment variable ETCD_PEER_TRUSTED_CA_FILE=/rootfs/mnt/master-vol-06a8412b25833dd44/pki/-UIPY-sC50daSlFKDie08w/peers/ca.crt
2021-06-16 16:15:18.428442 I | pkg/flags: recognized and used environment variable ETCD_TRUSTED_CA_FILE=/rootfs/mnt/master-vol-06a8412b25833dd44/pki/-UIPY-sC50daSlFKDie08w/clients/ca.crt
2021-06-16 16:15:18.428532 W | pkg/flags: unrecognized environment variable ETCD_LISTEN_METRICS_URLS=
{"level":"info","ts":"2021-06-16T16:15:18.428Z","caller":"embed/etcd.go:117","msg":"configuring peer listeners","listen-peer-urls":["https://0.0.0.0:2381"]}
{"level":"info","ts":"2021-06-16T16:15:18.428Z","caller":"embed/etcd.go:468","msg":"starting with peer TLS","tls-info":"cert = /rootfs/mnt/master-vol-06a8412b25833dd44/pki/-UIPY-sC50daSlFKDie08w/peers/me.crt, key = /rootfs/mnt/master-vol-06a8412b25833dd44/pki/-UIPY-sC50daSlFKDie08w/peers/me.key, trusted-ca = /rootfs/mnt/master-vol-06a8412b25833dd44/pki/-UIPY-sC50daSlFKDie08w/peers/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"info","ts":"2021-06-16T16:15:18.429Z","caller":"embed/etcd.go:127","msg":"configuring client listeners","listen-client-urls":["https://0.0.0.0:3995"]}
{"level":"info","ts":"2021-06-16T16:15:18.430Z","caller":"embed/etcd.go:302","msg":"starting an etcd server","etcd-version":"3.4.13","git-sha":"ae9734ed2","go-version":"go1.12.17","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":false,"name":"etcd-events-a","data-dir":"/rootfs/mnt/master-vol-06a8412b25833dd44/data/-UIPY-sC50daSlFKDie08w","wal-dir":"","wal-dir-dedicated":"","member-dir":"/rootfs/mnt/master-vol-06a8412b25833dd44/data/-UIPY-sC50daSlFKDie08w/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":100000,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2381"],"listen-peer-urls":["https://0.0.0.0:2381"],"advertise-client-urls":["https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:3995"],"listen-client-urls":["https://0.0.0.0:3995"],"listen-metrics-urls":[],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"etcd-events-a=https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2381","initial-cluster-state":"new","initial-cluster-token":"-UIPY-sC50daSlFKDie08w","quota-size-bytes":2147483648,"pre-vote":false,"initial-corrupt-check":false,"corrupt-check-time-interval":"0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":""}
I0616 16:15:18.430358    6002 s3fs.go:199] Writing file "s3://k8s-kops-prow/e2e-9c20857a72-da63e.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-spec"
I0616 16:15:18.430399    6002 s3context.go:241] Checking default bucket encryption for "k8s-kops-prow"
{"level":"info","ts":"2021-06-16T16:15:18.436Z","caller":"etcdserver/backend.go:80","msg":"opened backend db","path":"/rootfs/mnt/master-vol-06a8412b25833dd44/data/-UIPY-sC50daSlFKDie08w/member/snap/db","took":"3.366929ms"}
{"level":"info","ts":"2021-06-16T16:15:18.437Z","caller":"netutil/netutil.go:112","msg":"resolved URL Host","url":"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2381","host":"etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2381","resolved-addr":"172.20.37.218:2381"}
{"level":"info","ts":"2021-06-16T16:15:18.437Z","caller":"netutil/netutil.go:112","msg":"resolved URL Host","url":"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2381","host":"etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2381","resolved-addr":"172.20.37.218:2381"}
{"level":"info","ts":"2021-06-16T16:15:18.443Z","caller":"etcdserver/raft.go:486","msg":"starting local member","local-member-id":"92c5ca7a6315aeb1","cluster-id":"94f1ed6479b0caf6"}
{"level":"info","ts":"2021-06-16T16:15:18.443Z","caller":"raft/raft.go:1530","msg":"92c5ca7a6315aeb1 switched to configuration voters=()"}
{"level":"info","ts":"2021-06-16T16:15:18.443Z","caller":"raft/raft.go:700","msg":"92c5ca7a6315aeb1 became follower at term 0"}
{"level":"info","ts":"2021-06-16T16:15:18.443Z","caller":"raft/raft.go:383","msg":"newRaft 92c5ca7a6315aeb1 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]"}
{"level":"info","ts":"2021-06-16T16:15:18.443Z","caller":"raft/raft.go:700","msg":"92c5ca7a6315aeb1 became follower at term 1"}
{"level":"info","ts":"2021-06-16T16:15:18.443Z","caller":"raft/raft.go:1530","msg":"92c5ca7a6315aeb1 switched to configuration voters=(10576081926946664113)"}
{"level":"warn","ts":"2021-06-16T16:15:18.445Z","caller":"auth/store.go:1366","msg":"simple token is not cryptographically signed"}
{"level":"info","ts":"2021-06-16T16:15:18.448Z","caller":"etcdserver/quota.go:98","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
{"level":"info","ts":"2021-06-16T16:15:18.451Z","caller":"etcdserver/server.go:803","msg":"starting etcd server","local-member-id":"92c5ca7a6315aeb1","local-server-version":"3.4.13","cluster-version":"to_be_decided"}
{"level":"info","ts":"2021-06-16T16:15:18.452Z","caller":"etcdserver/server.go:669","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"92c5ca7a6315aeb1","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
{"level":"info","ts":"2021-06-16T16:15:18.452Z","caller":"raft/raft.go:1530","msg":"92c5ca7a6315aeb1 switched to configuration voters=(10576081926946664113)"}
{"level":"info","ts":"2021-06-16T16:15:18.452Z","caller":"membership/cluster.go:392","msg":"added member","cluster-id":"94f1ed6479b0caf6","local-member-id":"92c5ca7a6315aeb1","added-peer-id":"92c5ca7a6315aeb1","added-peer-peer-urls":["https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2381"]}
{"level":"info","ts":"2021-06-16T16:15:18.456Z","caller":"embed/etcd.go:711","msg":"starting with client TLS","tls-info":"cert = /rootfs/mnt/master-vol-06a8412b25833dd44/pki/-UIPY-sC50daSlFKDie08w/clients/server.crt, key = /rootfs/mnt/master-vol-06a8412b25833dd44/pki/-UIPY-sC50daSlFKDie08w/clients/server.key, trusted-ca = /rootfs/mnt/master-vol-06a8412b25833dd44/pki/-UIPY-sC50daSlFKDie08w/clients/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"info","ts":"2021-06-16T16:15:18.456Z","caller":"embed/etcd.go:579","msg":"serving peer traffic","address":"[::]:2381"}
{"level":"info","ts":"2021-06-16T16:15:18.456Z","caller":"embed/etcd.go:244","msg":"now serving peer/client/metrics","local-member-id":"92c5ca7a6315aeb1","initial-advertise-peer-urls":["https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2381"],"listen-peer-urls":["https://0.0.0.0:2381"],"advertise-client-urls":["https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:3995"],"listen-client-urls":["https://0.0.0.0:3995"],"listen-metrics-urls":[]}
I0616 16:15:18.759782    6002 s3fs.go:199] Writing file "s3://k8s-kops-prow/e2e-9c20857a72-da63e.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created"
I0616 16:15:18.943892    6002 controller.go:189] starting controller iteration
I0616 16:15:18.943918    6002 controller.go:266] Broadcasting leadership assertion with token "s4DpK5GzrquYzrevaoLxpA"
I0616 16:15:18.944220    6002 leadership.go:37] Got LeaderNotification view:<leader:<id:"etcd-events-a" endpoints:"172.20.37.218:3997" > leadership_token:"s4DpK5GzrquYzrevaoLxpA" healthy:<id:"etcd-events-a" endpoints:"172.20.37.218:3997" > > 
I0616 16:15:18.944368    6002 controller.go:295] I am leader with token "s4DpK5GzrquYzrevaoLxpA"
I0616 16:15:18.945170    6002 controller.go:705] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:3995]
{"level":"info","ts":"2021-06-16T16:15:19.143Z","caller":"raft/raft.go:923","msg":"92c5ca7a6315aeb1 is starting a new election at term 1"}
{"level":"info","ts":"2021-06-16T16:15:19.143Z","caller":"raft/raft.go:713","msg":"92c5ca7a6315aeb1 became candidate at term 2"}
{"level":"info","ts":"2021-06-16T16:15:19.143Z","caller":"raft/raft.go:824","msg":"92c5ca7a6315aeb1 received MsgVoteResp from 92c5ca7a6315aeb1 at term 2"}
{"level":"info","ts":"2021-06-16T16:15:19.143Z","caller":"raft/raft.go:765","msg":"92c5ca7a6315aeb1 became leader at term 2"}
{"level":"info","ts":"2021-06-16T16:15:19.143Z","caller":"raft/node.go:325","msg":"raft.node: 92c5ca7a6315aeb1 elected leader 92c5ca7a6315aeb1 at term 2"}
{"level":"info","ts":"2021-06-16T16:15:19.144Z","caller":"etcdserver/server.go:2037","msg":"published local member to cluster through raft","local-member-id":"92c5ca7a6315aeb1","local-member-attributes":"{Name:etcd-events-a ClientURLs:[https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:3995]}","request-path":"/0/members/92c5ca7a6315aeb1/attributes","cluster-id":"94f1ed6479b0caf6","publish-timeout":"7s"}
{"level":"info","ts":"2021-06-16T16:15:19.144Z","caller":"etcdserver/server.go:2528","msg":"setting up initial cluster version","cluster-version":"3.4"}
{"level":"info","ts":"2021-06-16T16:15:19.144Z","caller":"membership/cluster.go:558","msg":"set initial cluster version","cluster-id":"94f1ed6479b0caf6","local-member-id":"92c5ca7a6315aeb1","cluster-version":"3.4"}
{"level":"info","ts":"2021-06-16T16:15:19.145Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.4"}
{"level":"info","ts":"2021-06-16T16:15:19.145Z","caller":"etcdserver/server.go:2560","msg":"cluster version is updated","cluster-version":"3.4"}
{"level":"info","ts":"2021-06-16T16:15:19.145Z","caller":"embed/serve.go:191","msg":"serving client traffic securely","address":"[::]:3995"}
I0616 16:15:19.161987    6002 controller.go:302] etcd cluster state: etcdClusterState
  members:
    {"name":"etcd-events-a","peerURLs":["https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2381"],"endpoints":["https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:3995"],"ID":"10576081926946664113"}
  peers:
    etcdClusterPeerInfo{peer=peer{id:"etcd-events-a" endpoints:"172.20.37.218:3997" }, info=cluster_name:"etcd-events" node_configuration:<name:"etcd-events-a" peer_urls:"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2381" client_urls:"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002" quarantined_client_urls:"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:3995" > etcd_state:<cluster:<cluster_token:"-UIPY-sC50daSlFKDie08w" nodes:<name:"etcd-events-a" peer_urls:"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2381" client_urls:"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002" quarantined_client_urls:"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:3995" tls_enabled:true > > etcd_version:"3.4.13" quarantined:true > }
I0616 16:15:19.162136    6002 controller.go:303] etcd cluster members: map[10576081926946664113:{"name":"etcd-events-a","peerURLs":["https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2381"],"endpoints":["https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:3995"],"ID":"10576081926946664113"}]
I0616 16:15:19.162467    6002 controller.go:641] sending member map to all peers: members:<name:"etcd-events-a" dns:"etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io" addresses:"172.20.37.218" > 
I0616 16:15:19.162789    6002 etcdserver.go:248] updating hosts: map[172.20.37.218:[etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]]
I0616 16:15:19.162810    6002 hosts.go:84] hosts update: primary=map[172.20.37.218:[etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:[172.20.37.218 172.20.37.218]], final=map[172.20.37.218:[etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]]
I0616 16:15:19.162944    6002 hosts.go:181] skipping update of unchanged /etc/hosts
I0616 16:15:19.163128    6002 commands.go:38] not refreshing commands - TTL not hit
I0616 16:15:19.163144    6002 s3fs.go:290] Reading file "s3://k8s-kops-prow/e2e-9c20857a72-da63e.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created"
I0616 16:15:19.313647    6002 controller.go:395] spec member_count:1 etcd_version:"3.4.13" 
I0616 16:15:19.314282    6002 backup.go:134] performing snapshot save to /tmp/544839903/snapshot.db.gz
{"level":"info","ts":"2021-06-16T16:15:19.320Z","caller":"clientv3/maintenance.go:200","msg":"opened snapshot stream; downloading"}
{"level":"info","ts":"2021-06-16T16:15:19.320Z","caller":"v3rpc/maintenance.go:139","msg":"sending database snapshot to client","total-bytes":20480,"size":"20 kB"}
{"level":"info","ts":"2021-06-16T16:15:19.321Z","caller":"v3rpc/maintenance.go:177","msg":"sending database sha256 checksum to client","total-bytes":20480,"checksum-size":32}
{"level":"info","ts":"2021-06-16T16:15:19.324Z","caller":"v3rpc/maintenance.go:191","msg":"successfully sent database snapshot to client","total-bytes":20480,"size":"20 kB","took":"now"}
{"level":"info","ts":"2021-06-16T16:15:19.325Z","caller":"clientv3/maintenance.go:208","msg":"completed snapshot read; closing"}
I0616 16:15:19.325506    6002 s3fs.go:199] Writing file "s3://k8s-kops-prow/e2e-9c20857a72-da63e.test-cncf-aws.k8s.io/backups/etcd/events/2021-06-16T16:15:19Z-000001/etcd.backup.gz"
I0616 16:15:19.501860    6002 s3fs.go:199] Writing file "s3://k8s-kops-prow/e2e-9c20857a72-da63e.test-cncf-aws.k8s.io/backups/etcd/events/2021-06-16T16:15:19Z-000001/_etcd_backup.meta"
I0616 16:15:19.664815    6002 backup.go:159] backup complete: name:"2021-06-16T16:15:19Z-000001" 
I0616 16:15:19.665406    6002 controller.go:937] backup response: name:"2021-06-16T16:15:19Z-000001" 
I0616 16:15:19.665423    6002 controller.go:576] took backup: name:"2021-06-16T16:15:19Z-000001" 
I0616 16:15:19.826874    6002 vfs.go:118] listed backups in s3://k8s-kops-prow/e2e-9c20857a72-da63e.test-cncf-aws.k8s.io/backups/etcd/events: [2021-06-16T16:15:19Z-000001]
I0616 16:15:19.826899    6002 cleanup.go:166] retaining backup "2021-06-16T16:15:19Z-000001"
I0616 16:15:19.826922    6002 restore.go:98] Setting quarantined state to false
I0616 16:15:19.827206    6002 etcdserver.go:393] Reconfigure request: header:<leadership_token:"s4DpK5GzrquYzrevaoLxpA" cluster_name:"etcd-events" > 
I0616 16:15:19.827243    6002 etcdserver.go:436] Stopping etcd for reconfigure request: header:<leadership_token:"s4DpK5GzrquYzrevaoLxpA" cluster_name:"etcd-events" > 
I0616 16:15:19.827252    6002 etcdserver.go:640] killing etcd with datadir /rootfs/mnt/master-vol-06a8412b25833dd44/data/-UIPY-sC50daSlFKDie08w
I0616 16:15:19.828417    6002 etcdprocess.go:131] Waiting for etcd to exit
I0616 16:15:19.928678    6002 etcdprocess.go:131] Waiting for etcd to exit
I0616 16:15:19.928698    6002 etcdprocess.go:136] Exited etcd: signal: killed
I0616 16:15:19.928785    6002 etcdserver.go:443] updated cluster state: cluster:<cluster_token:"-UIPY-sC50daSlFKDie08w" nodes:<name:"etcd-events-a" peer_urls:"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2381" client_urls:"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002" quarantined_client_urls:"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:3995" tls_enabled:true > > etcd_version:"3.4.13" 
I0616 16:15:19.928938    6002 etcdserver.go:448] Starting etcd version "3.4.13"
I0616 16:15:19.928962    6002 etcdserver.go:556] starting etcd with state cluster:<cluster_token:"-UIPY-sC50daSlFKDie08w" nodes:<name:"etcd-events-a" peer_urls:"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2381" client_urls:"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002" quarantined_client_urls:"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:3995" tls_enabled:true > > etcd_version:"3.4.13" 
I0616 16:15:19.928987    6002 etcdserver.go:565] starting etcd with datadir /rootfs/mnt/master-vol-06a8412b25833dd44/data/-UIPY-sC50daSlFKDie08w
I0616 16:15:19.929080    6002 pki.go:59] adding peerClientIPs [172.20.37.218]
I0616 16:15:19.929100    6002 pki.go:67] generating peer keypair for etcd: {CommonName:etcd-events-a Organization:[] AltNames:{DNSNames:[etcd-events-a etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io] IPs:[172.20.37.218 127.0.0.1]} Usages:[2 1]}
I0616 16:15:19.929447    6002 certs.go:122] existing certificate not valid after 2023-06-16T16:15:18Z; will regenerate
I0616 16:15:19.929455    6002 certs.go:183] generating certificate for "etcd-events-a"
I0616 16:15:19.932993    6002 pki.go:110] building client-serving certificate: {CommonName:etcd-events-a Organization:[] AltNames:{DNSNames:[etcd-events-a etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io] IPs:[127.0.0.1]} Usages:[1 2]}
I0616 16:15:19.933202    6002 certs.go:122] existing certificate not valid after 2023-06-16T16:15:18Z; will regenerate
I0616 16:15:19.933209    6002 certs.go:183] generating certificate for "etcd-events-a"
I0616 16:15:20.308788    6002 certs.go:183] generating certificate for "etcd-events-a"
I0616 16:15:20.310727    6002 etcdprocess.go:203] executing command /opt/etcd-v3.4.13-linux-amd64/etcd [/opt/etcd-v3.4.13-linux-amd64/etcd]
I0616 16:15:20.311191    6002 restore.go:116] ReconfigureResponse: 
I0616 16:15:20.312516    6002 controller.go:189] starting controller iteration
I0616 16:15:20.312591    6002 controller.go:266] Broadcasting leadership assertion with token "s4DpK5GzrquYzrevaoLxpA"
I0616 16:15:20.312861    6002 leadership.go:37] Got LeaderNotification view:<leader:<id:"etcd-events-a" endpoints:"172.20.37.218:3997" > leadership_token:"s4DpK5GzrquYzrevaoLxpA" healthy:<id:"etcd-events-a" endpoints:"172.20.37.218:3997" > > 
I0616 16:15:20.313024    6002 controller.go:295] I am leader with token "s4DpK5GzrquYzrevaoLxpA"
I0616 16:15:20.313773    6002 controller.go:705] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002]
2021-06-16 16:15:20.321288 I | pkg/flags: recognized and used environment variable ETCD_ADVERTISE_CLIENT_URLS=https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002
2021-06-16 16:15:20.321321 I | pkg/flags: recognized and used environment variable ETCD_CERT_FILE=/rootfs/mnt/master-vol-06a8412b25833dd44/pki/-UIPY-sC50daSlFKDie08w/clients/server.crt
2021-06-16 16:15:20.321328 I | pkg/flags: recognized and used environment variable ETCD_CLIENT_CERT_AUTH=true
2021-06-16 16:15:20.321353 I | pkg/flags: recognized and used environment variable ETCD_DATA_DIR=/rootfs/mnt/master-vol-06a8412b25833dd44/data/-UIPY-sC50daSlFKDie08w
2021-06-16 16:15:20.321374 I | pkg/flags: recognized and used environment variable ETCD_ENABLE_V2=false
2021-06-16 16:15:20.321405 I | pkg/flags: recognized and used environment variable ETCD_INITIAL_ADVERTISE_PEER_URLS=https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2381
2021-06-16 16:15:20.321437 I | pkg/flags: recognized and used environment variable ETCD_INITIAL_CLUSTER=etcd-events-a=https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2381
2021-06-16 16:15:20.321443 I | pkg/flags: recognized and used environment variable ETCD_INITIAL_CLUSTER_STATE=existing
2021-06-16 16:15:20.321450 I | pkg/flags: recognized and used environment variable ETCD_INITIAL_CLUSTER_TOKEN=-UIPY-sC50daSlFKDie08w
2021-06-16 16:15:20.321456 I | pkg/flags: recognized and used environment variable ETCD_KEY_FILE=/rootfs/mnt/master-vol-06a8412b25833dd44/pki/-UIPY-sC50daSlFKDie08w/clients/server.key
2021-06-16 16:15:20.321470 I | pkg/flags: recognized and used environment variable ETCD_LISTEN_CLIENT_URLS=https://0.0.0.0:4002
2021-06-16 16:15:20.321489 I | pkg/flags: recognized and used environment variable ETCD_LISTEN_PEER_URLS=https://0.0.0.0:2381
2021-06-16 16:15:20.321517 I | pkg/flags: recognized and used environment variable ETCD_LOG_OUTPUTS=stdout
2021-06-16 16:15:20.321527 I | pkg/flags: recognized and used environment variable ETCD_LOGGER=zap
2021-06-16 16:15:20.321543 I | pkg/flags: recognized and used environment variable ETCD_NAME=etcd-events-a
2021-06-16 16:15:20.321551 I | pkg/flags: recognized and used environment variable ETCD_PEER_CERT_FILE=/rootfs/mnt/master-vol-06a8412b25833dd44/pki/-UIPY-sC50daSlFKDie08w/peers/me.crt
2021-06-16 16:15:20.321591 I | pkg/flags: recognized and used environment variable ETCD_PEER_CLIENT_CERT_AUTH=true
2021-06-16 16:15:20.321599 I | pkg/flags: recognized and used environment variable ETCD_PEER_KEY_FILE=/rootfs/mnt/master-vol-06a8412b25833dd44/pki/-UIPY-sC50daSlFKDie08w/peers/me.key
2021-06-16 16:15:20.321604 I | pkg/flags: recognized and used environment variable ETCD_PEER_TRUSTED_CA_FILE=/rootfs/mnt/master-vol-06a8412b25833dd44/pki/-UIPY-sC50daSlFKDie08w/peers/ca.crt
2021-06-16 16:15:20.321622 I | pkg/flags: recognized and used environment variable ETCD_TRUSTED_CA_FILE=/rootfs/mnt/master-vol-06a8412b25833dd44/pki/-UIPY-sC50daSlFKDie08w/clients/ca.crt
2021-06-16 16:15:20.321643 W | pkg/flags: unrecognized environment variable ETCD_LISTEN_METRICS_URLS=
{"level":"info","ts":"2021-06-16T16:15:20.321Z","caller":"etcdmain/etcd.go:134","msg":"server has been already initialized","data-dir":"/rootfs/mnt/master-vol-06a8412b25833dd44/data/-UIPY-sC50daSlFKDie08w","dir-type":"member"}
{"level":"info","ts":"2021-06-16T16:15:20.321Z","caller":"embed/etcd.go:117","msg":"configuring peer listeners","listen-peer-urls":["https://0.0.0.0:2381"]}
{"level":"info","ts":"2021-06-16T16:15:20.321Z","caller":"embed/etcd.go:468","msg":"starting with peer TLS","tls-info":"cert = /rootfs/mnt/master-vol-06a8412b25833dd44/pki/-UIPY-sC50daSlFKDie08w/peers/me.crt, key = /rootfs/mnt/master-vol-06a8412b25833dd44/pki/-UIPY-sC50daSlFKDie08w/peers/me.key, trusted-ca = /rootfs/mnt/master-vol-06a8412b25833dd44/pki/-UIPY-sC50daSlFKDie08w/peers/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"info","ts":"2021-06-16T16:15:20.322Z","caller":"embed/etcd.go:127","msg":"configuring client listeners","listen-client-urls":["https://0.0.0.0:4002"]}
{"level":"info","ts":"2021-06-16T16:15:20.322Z","caller":"embed/etcd.go:302","msg":"starting an etcd server","etcd-version":"3.4.13","git-sha":"ae9734ed2","go-version":"go1.12.17","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"etcd-events-a","data-dir":"/rootfs/mnt/master-vol-06a8412b25833dd44/data/-UIPY-sC50daSlFKDie08w","wal-dir":"","wal-dir-dedicated":"","member-dir":"/rootfs/mnt/master-vol-06a8412b25833dd44/data/-UIPY-sC50daSlFKDie08w/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":100000,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2381"],"listen-peer-urls":["https://0.0.0.0:2381"],"advertise-client-urls":["https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002"],"listen-client-urls":["https://0.0.0.0:4002"],"listen-metrics-urls":[],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"existing","initial-cluster-token":"","quota-size-bytes":2147483648,"pre-vote":false,"initial-corrupt-check":false,"corrupt-check-time-interval":"0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":""}
{"level":"info","ts":"2021-06-16T16:15:20.322Z","caller":"etcdserver/backend.go:80","msg":"opened backend db","path":"/rootfs/mnt/master-vol-06a8412b25833dd44/data/-UIPY-sC50daSlFKDie08w/member/snap/db","took":"187.774µs"}
{"level":"info","ts":"2021-06-16T16:15:20.323Z","caller":"etcdserver/raft.go:536","msg":"restarting local member","cluster-id":"94f1ed6479b0caf6","local-member-id":"92c5ca7a6315aeb1","commit-index":4}
{"level":"info","ts":"2021-06-16T16:15:20.324Z","caller":"raft/raft.go:1530","msg":"92c5ca7a6315aeb1 switched to configuration voters=()"}
{"level":"info","ts":"2021-06-16T16:15:20.324Z","caller":"raft/raft.go:700","msg":"92c5ca7a6315aeb1 became follower at term 2"}
{"level":"info","ts":"2021-06-16T16:15:20.324Z","caller":"raft/raft.go:383","msg":"newRaft 92c5ca7a6315aeb1 [peers: [], term: 2, commit: 4, applied: 0, lastindex: 4, lastterm: 2]"}
{"level":"warn","ts":"2021-06-16T16:15:20.325Z","caller":"auth/store.go:1366","msg":"simple token is not cryptographically signed"}
{"level":"info","ts":"2021-06-16T16:15:20.327Z","caller":"etcdserver/quota.go:98","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
{"level":"info","ts":"2021-06-16T16:15:20.328Z","caller":"etcdserver/server.go:803","msg":"starting etcd server","local-member-id":"92c5ca7a6315aeb1","local-server-version":"3.4.13","cluster-version":"to_be_decided"}
{"level":"info","ts":"2021-06-16T16:15:20.328Z","caller":"etcdserver/server.go:691","msg":"starting initial election tick advance","election-ticks":10}
{"level":"info","ts":"2021-06-16T16:15:20.328Z","caller":"raft/raft.go:1530","msg":"92c5ca7a6315aeb1 switched to configuration voters=(10576081926946664113)"}
{"level":"info","ts":"2021-06-16T16:15:20.328Z","caller":"membership/cluster.go:392","msg":"added member","cluster-id":"94f1ed6479b0caf6","local-member-id":"92c5ca7a6315aeb1","added-peer-id":"92c5ca7a6315aeb1","added-peer-peer-urls":["https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2381"]}
{"level":"info","ts":"2021-06-16T16:15:20.329Z","caller":"membership/cluster.go:558","msg":"set initial cluster version","cluster-id":"94f1ed6479b0caf6","local-member-id":"92c5ca7a6315aeb1","cluster-version":"3.4"}
{"level":"info","ts":"2021-06-16T16:15:20.329Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.4"}
{"level":"info","ts":"2021-06-16T16:15:20.332Z","caller":"embed/etcd.go:711","msg":"starting with client TLS","tls-info":"cert = /rootfs/mnt/master-vol-06a8412b25833dd44/pki/-UIPY-sC50daSlFKDie08w/clients/server.crt, key = /rootfs/mnt/master-vol-06a8412b25833dd44/pki/-UIPY-sC50daSlFKDie08w/clients/server.key, trusted-ca = /rootfs/mnt/master-vol-06a8412b25833dd44/pki/-UIPY-sC50daSlFKDie08w/clients/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"info","ts":"2021-06-16T16:15:20.332Z","caller":"embed/etcd.go:244","msg":"now serving peer/client/metrics","local-member-id":"92c5ca7a6315aeb1","initial-advertise-peer-urls":["https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2381"],"listen-peer-urls":["https://0.0.0.0:2381"],"advertise-client-urls":["https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002"],"listen-client-urls":["https://0.0.0.0:4002"],"listen-metrics-urls":[]}
{"level":"info","ts":"2021-06-16T16:15:20.332Z","caller":"embed/etcd.go:579","msg":"serving peer traffic","address":"[::]:2381"}
{"level":"info","ts":"2021-06-16T16:15:21.724Z","caller":"raft/raft.go:923","msg":"92c5ca7a6315aeb1 is starting a new election at term 2"}
{"level":"info","ts":"2021-06-16T16:15:21.724Z","caller":"raft/raft.go:713","msg":"92c5ca7a6315aeb1 became candidate at term 3"}
{"level":"info","ts":"2021-06-16T16:15:21.724Z","caller":"raft/raft.go:824","msg":"92c5ca7a6315aeb1 received MsgVoteResp from 92c5ca7a6315aeb1 at term 3"}
{"level":"info","ts":"2021-06-16T16:15:21.724Z","caller":"raft/raft.go:765","msg":"92c5ca7a6315aeb1 became leader at term 3"}
{"level":"info","ts":"2021-06-16T16:15:21.724Z","caller":"raft/node.go:325","msg":"raft.node: 92c5ca7a6315aeb1 elected leader 92c5ca7a6315aeb1 at term 3"}
{"level":"info","ts":"2021-06-16T16:15:21.725Z","caller":"etcdserver/server.go:2037","msg":"published local member to cluster through raft","local-member-id":"92c5ca7a6315aeb1","local-member-attributes":"{Name:etcd-events-a ClientURLs:[https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002]}","request-path":"/0/members/92c5ca7a6315aeb1/attributes","cluster-id":"94f1ed6479b0caf6","publish-timeout":"7s"}
{"level":"info","ts":"2021-06-16T16:15:21.727Z","caller":"embed/serve.go:191","msg":"serving client traffic securely","address":"[::]:4002"}
I0616 16:15:21.743400    6002 controller.go:302] etcd cluster state: etcdClusterState
  members:
    {"name":"etcd-events-a","peerURLs":["https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2381"],"endpoints":["https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002"],"ID":"10576081926946664113"}
  peers:
    etcdClusterPeerInfo{peer=peer{id:"etcd-events-a" endpoints:"172.20.37.218:3997" }, info=cluster_name:"etcd-events" node_configuration:<name:"etcd-events-a" peer_urls:"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2381" client_urls:"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002" quarantined_client_urls:"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:3995" > etcd_state:<cluster:<cluster_token:"-UIPY-sC50daSlFKDie08w" nodes:<name:"etcd-events-a" peer_urls:"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2381" client_urls:"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002" quarantined_client_urls:"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:3995" tls_enabled:true > > etcd_version:"3.4.13" > }
I0616 16:15:21.743514    6002 controller.go:303] etcd cluster members: map[10576081926946664113:{"name":"etcd-events-a","peerURLs":["https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2381"],"endpoints":["https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002"],"ID":"10576081926946664113"}]
I0616 16:15:21.743534    6002 controller.go:641] sending member map to all peers: members:<name:"etcd-events-a" dns:"etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io" addresses:"172.20.37.218" > 
I0616 16:15:21.743733    6002 etcdserver.go:248] updating hosts: map[172.20.37.218:[etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]]
I0616 16:15:21.743745    6002 hosts.go:84] hosts update: primary=map[172.20.37.218:[etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:[172.20.37.218 172.20.37.218]], final=map[172.20.37.218:[etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]]
I0616 16:15:21.743796    6002 hosts.go:181] skipping update of unchanged /etc/hosts
I0616 16:15:21.743864    6002 commands.go:38] not refreshing commands - TTL not hit
I0616 16:15:21.743874    6002 s3fs.go:290] Reading file "s3://k8s-kops-prow/e2e-9c20857a72-da63e.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created"
controller.go:395] spec member_count:1 etcd_version:\"3.4.13\" \nI0616 16:15:21.892667    6002 controller.go:557] controller loop complete\nI0616 16:15:31.897433    6002 controller.go:189] starting controller iteration\nI0616 16:15:31.897474    6002 controller.go:266] Broadcasting leadership assertion with token \"s4DpK5GzrquYzrevaoLxpA\"\nI0616 16:15:31.897714    6002 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.37.218:3997\" > leadership_token:\"s4DpK5GzrquYzrevaoLxpA\" healthy:<id:\"etcd-events-a\" endpoints:\"172.20.37.218:3997\" > > \nI0616 16:15:31.897849    6002 controller.go:295] I am leader with token \"s4DpK5GzrquYzrevaoLxpA\"\nI0616 16:15:31.898532    6002 controller.go:705] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002]\nI0616 16:15:31.938510    6002 controller.go:302] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002\"],\"ID\":\"10576081926946664113\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"172.20.37.218:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:3995\" > etcd_state:<cluster:<cluster_token:\"-UIPY-sC50daSlFKDie08w\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002\" 
quarantined_client_urls:\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0616 16:15:31.938619    6002 controller.go:303] etcd cluster members: map[10576081926946664113:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002\"],\"ID\":\"10576081926946664113\"}]\nI0616 16:15:31.938637    6002 controller.go:641] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io\" addresses:\"172.20.37.218\" > \nI0616 16:15:31.938875    6002 etcdserver.go:248] updating hosts: map[172.20.37.218:[etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]]\nI0616 16:15:31.938890    6002 hosts.go:84] hosts update: primary=map[172.20.37.218:[etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:[172.20.37.218 172.20.37.218]], final=map[172.20.37.218:[etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]]\nI0616 16:15:31.938969    6002 hosts.go:181] skipping update of unchanged /etc/hosts\nI0616 16:15:31.939070    6002 commands.go:38] not refreshing commands - TTL not hit\nI0616 16:15:31.939089    6002 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-9c20857a72-da63e.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created\"\nI0616 16:15:32.520169    6002 controller.go:395] spec member_count:1 etcd_version:\"3.4.13\" \nI0616 16:15:32.520244    6002 controller.go:557] controller loop complete\nI0616 16:15:42.521499    6002 controller.go:189] starting controller iteration\nI0616 16:15:42.521524    6002 controller.go:266] Broadcasting leadership assertion with token \"s4DpK5GzrquYzrevaoLxpA\"\nI0616 16:15:42.521862    6002 leadership.go:37] 
Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.37.218:3997\" > leadership_token:\"s4DpK5GzrquYzrevaoLxpA\" healthy:<id:\"etcd-events-a\" endpoints:\"172.20.37.218:3997\" > > \nI0616 16:15:42.522056    6002 controller.go:295] I am leader with token \"s4DpK5GzrquYzrevaoLxpA\"\nI0616 16:15:42.523031    6002 controller.go:705] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002]\nI0616 16:15:42.536005    6002 controller.go:302] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002\"],\"ID\":\"10576081926946664113\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"172.20.37.218:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:3995\" > etcd_state:<cluster:<cluster_token:\"-UIPY-sC50daSlFKDie08w\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0616 16:15:42.536101    6002 controller.go:303] etcd cluster members: 
map[10576081926946664113:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002\"],\"ID\":\"10576081926946664113\"}]\nI0616 16:15:42.536120    6002 controller.go:641] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io\" addresses:\"172.20.37.218\" > \nI0616 16:15:42.536554    6002 etcdserver.go:248] updating hosts: map[172.20.37.218:[etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]]\nI0616 16:15:42.536573    6002 hosts.go:84] hosts update: primary=map[172.20.37.218:[etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:[172.20.37.218 172.20.37.218]], final=map[172.20.37.218:[etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]]\nI0616 16:15:42.536703    6002 hosts.go:181] skipping update of unchanged /etc/hosts\nI0616 16:15:42.536827    6002 commands.go:38] not refreshing commands - TTL not hit\nI0616 16:15:42.536878    6002 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-9c20857a72-da63e.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created\"\nI0616 16:15:43.109589    6002 controller.go:395] spec member_count:1 etcd_version:\"3.4.13\" \nI0616 16:15:43.109678    6002 controller.go:557] controller loop complete\nI0616 16:15:53.110903    6002 controller.go:189] starting controller iteration\nI0616 16:15:53.110927    6002 controller.go:266] Broadcasting leadership assertion with token \"s4DpK5GzrquYzrevaoLxpA\"\nI0616 16:15:53.111119    6002 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.37.218:3997\" > leadership_token:\"s4DpK5GzrquYzrevaoLxpA\" healthy:<id:\"etcd-events-a\" endpoints:\"172.20.37.218:3997\" > > \nI0616 16:15:53.111333  
  6002 controller.go:295] I am leader with token \"s4DpK5GzrquYzrevaoLxpA\"\nI0616 16:15:53.111925    6002 controller.go:705] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002]\nI0616 16:15:53.124190    6002 controller.go:302] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002\"],\"ID\":\"10576081926946664113\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"172.20.37.218:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:3995\" > etcd_state:<cluster:<cluster_token:\"-UIPY-sC50daSlFKDie08w\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0616 16:15:53.124370    6002 controller.go:303] etcd cluster members: map[10576081926946664113:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002\"],\"ID\":\"10576081926946664113\"}]\nI0616 16:15:53.124503    6002 controller.go:641] sending member map to all peers: members:<name:\"etcd-events-a\" 
dns:\"etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io\" addresses:\"172.20.37.218\" > \nI0616 16:15:53.124750    6002 etcdserver.go:248] updating hosts: map[172.20.37.218:[etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]]\nI0616 16:15:53.124766    6002 hosts.go:84] hosts update: primary=map[172.20.37.218:[etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:[172.20.37.218 172.20.37.218]], final=map[172.20.37.218:[etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]]\nI0616 16:15:53.124819    6002 hosts.go:181] skipping update of unchanged /etc/hosts\nI0616 16:15:53.125042    6002 commands.go:38] not refreshing commands - TTL not hit\nI0616 16:15:53.125057    6002 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-9c20857a72-da63e.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created\"\nI0616 16:15:53.694224    6002 controller.go:395] spec member_count:1 etcd_version:\"3.4.13\" \nI0616 16:15:53.694385    6002 controller.go:557] controller loop complete\nI0616 16:16:03.697857    6002 controller.go:189] starting controller iteration\nI0616 16:16:03.697884    6002 controller.go:266] Broadcasting leadership assertion with token \"s4DpK5GzrquYzrevaoLxpA\"\nI0616 16:16:03.698083    6002 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.37.218:3997\" > leadership_token:\"s4DpK5GzrquYzrevaoLxpA\" healthy:<id:\"etcd-events-a\" endpoints:\"172.20.37.218:3997\" > > \nI0616 16:16:03.698208    6002 controller.go:295] I am leader with token \"s4DpK5GzrquYzrevaoLxpA\"\nI0616 16:16:03.707629    6002 controller.go:705] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002]\nI0616 16:16:03.722220    6002 controller.go:302] etcd cluster state: etcdClusterState\n  members:\n    
{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002\"],\"ID\":\"10576081926946664113\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"172.20.37.218:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:3995\" > etcd_state:<cluster:<cluster_token:\"-UIPY-sC50daSlFKDie08w\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0616 16:16:03.722306    6002 controller.go:303] etcd cluster members: map[10576081926946664113:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002\"],\"ID\":\"10576081926946664113\"}]\nI0616 16:16:03.722324    6002 controller.go:641] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io\" addresses:\"172.20.37.218\" > \nI0616 16:16:03.722522    6002 etcdserver.go:248] updating hosts: map[172.20.37.218:[etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]]\nI0616 16:16:03.722536    6002 hosts.go:84] hosts update: 
primary=map[172.20.37.218:[etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:[172.20.37.218 172.20.37.218]], final=map[172.20.37.218:[etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]]\nI0616 16:16:03.722588    6002 hosts.go:181] skipping update of unchanged /etc/hosts\nI0616 16:16:03.722674    6002 commands.go:38] not refreshing commands - TTL not hit\nI0616 16:16:03.722685    6002 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-9c20857a72-da63e.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created\"\nI0616 16:16:04.299356    6002 controller.go:395] spec member_count:1 etcd_version:\"3.4.13\" \nI0616 16:16:04.299519    6002 controller.go:557] controller loop complete\nI0616 16:16:04.307572    6002 volumes.go:86] AWS API Request: ec2/DescribeVolumes\nI0616 16:16:04.432048    6002 volumes.go:86] AWS API Request: ec2/DescribeInstances\nI0616 16:16:04.477570    6002 hosts.go:84] hosts update: primary=map[172.20.37.218:[etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:[172.20.37.218 172.20.37.218]], final=map[172.20.37.218:[etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]]\nI0616 16:16:04.477652    6002 hosts.go:181] skipping update of unchanged /etc/hosts\nI0616 16:16:14.300675    6002 controller.go:189] starting controller iteration\nI0616 16:16:14.300702    6002 controller.go:266] Broadcasting leadership assertion with token \"s4DpK5GzrquYzrevaoLxpA\"\nI0616 16:16:14.300885    6002 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.37.218:3997\" > leadership_token:\"s4DpK5GzrquYzrevaoLxpA\" healthy:<id:\"etcd-events-a\" endpoints:\"172.20.37.218:3997\" > > \nI0616 16:16:14.300994    6002 controller.go:295] I am leader with token \"s4DpK5GzrquYzrevaoLxpA\"\nI0616 16:16:14.301373    
6002 controller.go:705] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002]\nI0616 16:16:14.320847    6002 controller.go:302] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002\"],\"ID\":\"10576081926946664113\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"172.20.37.218:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:3995\" > etcd_state:<cluster:<cluster_token:\"-UIPY-sC50daSlFKDie08w\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0616 16:16:14.321024    6002 controller.go:303] etcd cluster members: map[10576081926946664113:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002\"],\"ID\":\"10576081926946664113\"}]\nI0616 16:16:14.321042    6002 controller.go:641] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io\" addresses:\"172.20.37.218\" > \nI0616 
16:16:14.321302    6002 etcdserver.go:248] updating hosts: map[172.20.37.218:[etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]]\nI0616 16:16:14.321328    6002 hosts.go:84] hosts update: primary=map[172.20.37.218:[etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:[172.20.37.218 172.20.37.218]], final=map[172.20.37.218:[etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]]\nI0616 16:16:14.321364    6002 hosts.go:181] skipping update of unchanged /etc/hosts\nI0616 16:16:14.321430    6002 commands.go:38] not refreshing commands - TTL not hit\nI0616 16:16:14.321439    6002 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-9c20857a72-da63e.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created\"\nI0616 16:16:14.897618    6002 controller.go:395] spec member_count:1 etcd_version:\"3.4.13\" \nI0616 16:16:14.897691    6002 controller.go:557] controller loop complete\nI0616 16:16:24.899583    6002 controller.go:189] starting controller iteration\nI0616 16:16:24.899617    6002 controller.go:266] Broadcasting leadership assertion with token \"s4DpK5GzrquYzrevaoLxpA\"\nI0616 16:16:24.899995    6002 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.37.218:3997\" > leadership_token:\"s4DpK5GzrquYzrevaoLxpA\" healthy:<id:\"etcd-events-a\" endpoints:\"172.20.37.218:3997\" > > \nI0616 16:16:24.900189    6002 controller.go:295] I am leader with token \"s4DpK5GzrquYzrevaoLxpA\"\nI0616 16:16:24.900712    6002 controller.go:705] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002]\nI0616 16:16:24.928077    6002 controller.go:302] etcd cluster state: etcdClusterState\n  members:\n    
{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002\"],\"ID\":\"10576081926946664113\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"172.20.37.218:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:3995\" > etcd_state:<cluster:<cluster_token:\"-UIPY-sC50daSlFKDie08w\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0616 16:16:24.928582    6002 controller.go:303] etcd cluster members: map[10576081926946664113:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002\"],\"ID\":\"10576081926946664113\"}]\nI0616 16:16:24.928756    6002 controller.go:641] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io\" addresses:\"172.20.37.218\" > \nI0616 16:16:24.929103    6002 etcdserver.go:248] updating hosts: map[172.20.37.218:[etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]]\nI0616 16:16:24.930449    6002 hosts.go:84] hosts update: 
primary=map[172.20.37.218:[etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:[172.20.37.218 172.20.37.218]], final=map[172.20.37.218:[etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]]\nI0616 16:16:24.930711    6002 hosts.go:181] skipping update of unchanged /etc/hosts\nI0616 16:16:24.930970    6002 commands.go:38] not refreshing commands - TTL not hit\nI0616 16:16:24.931086    6002 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-9c20857a72-da63e.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created\"\nI0616 16:16:25.508591    6002 controller.go:395] spec member_count:1 etcd_version:\"3.4.13\" \nI0616 16:16:25.508827    6002 controller.go:557] controller loop complete\nI0616 16:16:35.510591    6002 controller.go:189] starting controller iteration\nI0616 16:16:35.510622    6002 controller.go:266] Broadcasting leadership assertion with token \"s4DpK5GzrquYzrevaoLxpA\"\nI0616 16:16:35.511037    6002 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.37.218:3997\" > leadership_token:\"s4DpK5GzrquYzrevaoLxpA\" healthy:<id:\"etcd-events-a\" endpoints:\"172.20.37.218:3997\" > > \nI0616 16:16:35.511171    6002 controller.go:295] I am leader with token \"s4DpK5GzrquYzrevaoLxpA\"\nI0616 16:16:35.511857    6002 controller.go:705] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002]\nI0616 16:16:35.529429    6002 controller.go:302] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002\"],\"ID\":\"10576081926946664113\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"172.20.37.218:3997\" }, 
info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:3995\" > etcd_state:<cluster:<cluster_token:\"-UIPY-sC50daSlFKDie08w\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0616 16:16:35.529530    6002 controller.go:303] etcd cluster members: map[10576081926946664113:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002\"],\"ID\":\"10576081926946664113\"}]\nI0616 16:16:35.529545    6002 controller.go:641] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io\" addresses:\"172.20.37.218\" > \nI0616 16:16:35.529740    6002 etcdserver.go:248] updating hosts: map[172.20.37.218:[etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]]\nI0616 16:16:35.529752    6002 hosts.go:84] hosts update: primary=map[172.20.37.218:[etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:[172.20.37.218 172.20.37.218]], final=map[172.20.37.218:[etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]]\nI0616 16:16:35.529801    6002 hosts.go:181] skipping update of unchanged /etc/hosts\nI0616 16:16:35.529871    
6002 commands.go:38] not refreshing commands - TTL not hit\nI0616 16:16:35.529882    6002 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-9c20857a72-da63e.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created\"\nI0616 16:16:36.098214    6002 controller.go:395] spec member_count:1 etcd_version:\"3.4.13\" \nI0616 16:16:36.098309    6002 controller.go:557] controller loop complete\nI0616 16:16:46.100233    6002 controller.go:189] starting controller iteration\nI0616 16:16:46.100463    6002 controller.go:266] Broadcasting leadership assertion with token \"s4DpK5GzrquYzrevaoLxpA\"\nI0616 16:16:46.100767    6002 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.37.218:3997\" > leadership_token:\"s4DpK5GzrquYzrevaoLxpA\" healthy:<id:\"etcd-events-a\" endpoints:\"172.20.37.218:3997\" > > \nI0616 16:16:46.100987    6002 controller.go:295] I am leader with token \"s4DpK5GzrquYzrevaoLxpA\"\nI0616 16:16:46.101502    6002 controller.go:705] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002]\nI0616 16:16:46.116606    6002 controller.go:302] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002\"],\"ID\":\"10576081926946664113\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"172.20.37.218:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:3995\" > 
etcd_state:<cluster:<cluster_token:"-UIPY-sC50daSlFKDie08w" nodes:<name:"etcd-events-a" peer_urls:"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2381" client_urls:"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002" quarantined_client_urls:"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:3995" tls_enabled:true > > etcd_version:"3.4.13" > }
I0616 16:16:46.116701    6002 controller.go:303] etcd cluster members: map[10576081926946664113:{"name":"etcd-events-a","peerURLs":["https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2381"],"endpoints":["https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002"],"ID":"10576081926946664113"}]
I0616 16:16:46.116719    6002 controller.go:641] sending member map to all peers: members:<name:"etcd-events-a" dns:"etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io" addresses:"172.20.37.218" >
I0616 16:16:46.117046    6002 etcdserver.go:248] updating hosts: map[172.20.37.218:[etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]]
I0616 16:16:46.117066    6002 hosts.go:84] hosts update: primary=map[172.20.37.218:[etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:[172.20.37.218 172.20.37.218]], final=map[172.20.37.218:[etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]]
I0616 16:16:46.117144    6002 hosts.go:181] skipping update of unchanged /etc/hosts
I0616 16:16:46.117302    6002 commands.go:38] not refreshing commands - TTL not hit
I0616 16:16:46.117319    6002 s3fs.go:290] Reading file "s3://k8s-kops-prow/e2e-9c20857a72-da63e.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created"
I0616 16:16:46.691641    6002 controller.go:395] spec member_count:1 etcd_version:"3.4.13"
I0616 16:16:46.691715    6002 controller.go:557] controller loop complete
I0616 16:16:56.693200    6002 controller.go:189] starting controller iteration
I0616 16:16:56.693229    6002 controller.go:266] Broadcasting leadership assertion with token "s4DpK5GzrquYzrevaoLxpA"
I0616 16:16:56.693643    6002 leadership.go:37] Got LeaderNotification view:<leader:<id:"etcd-events-a" endpoints:"172.20.37.218:3997" > leadership_token:"s4DpK5GzrquYzrevaoLxpA" healthy:<id:"etcd-events-a" endpoints:"172.20.37.218:3997" > >
I0616 16:16:56.693892    6002 controller.go:295] I am leader with token "s4DpK5GzrquYzrevaoLxpA"
I0616 16:16:56.694435    6002 controller.go:705] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002]
I0616 16:16:56.706343    6002 controller.go:302] etcd cluster state: etcdClusterState
  members:
    {"name":"etcd-events-a","peerURLs":["https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2381"],"endpoints":["https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002"],"ID":"10576081926946664113"}
  peers:
    etcdClusterPeerInfo{peer=peer{id:"etcd-events-a" endpoints:"172.20.37.218:3997" }, info=cluster_name:"etcd-events" node_configuration:<name:"etcd-events-a" peer_urls:"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2381" client_urls:"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002" quarantined_client_urls:"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:3995" > etcd_state:<cluster:<cluster_token:"-UIPY-sC50daSlFKDie08w" nodes:<name:"etcd-events-a" peer_urls:"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2381" client_urls:"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002" quarantined_client_urls:"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:3995" tls_enabled:true > > etcd_version:"3.4.13" > }
I0616 16:16:56.706446    6002 controller.go:303] etcd cluster members: map[10576081926946664113:{"name":"etcd-events-a","peerURLs":["https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2381"],"endpoints":["https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002"],"ID":"10576081926946664113"}]
I0616 16:16:56.706483    6002 controller.go:641] sending member map to all peers: members:<name:"etcd-events-a" dns:"etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io" addresses:"172.20.37.218" >
I0616 16:16:56.706787    6002 etcdserver.go:248] updating hosts: map[172.20.37.218:[etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]]
I0616 16:16:56.706808    6002 hosts.go:84] hosts update: primary=map[172.20.37.218:[etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:[172.20.37.218 172.20.37.218]], final=map[172.20.37.218:[etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]]
I0616 16:16:56.706885    6002 hosts.go:181] skipping update of unchanged /etc/hosts
I0616 16:16:56.707023    6002 commands.go:38] not refreshing commands - TTL not hit
I0616 16:16:56.707039    6002 s3fs.go:290] Reading file "s3://k8s-kops-prow/e2e-9c20857a72-da63e.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created"
I0616 16:16:57.285127    6002 controller.go:395] spec member_count:1 etcd_version:"3.4.13"
I0616 16:16:57.285203    6002 controller.go:557] controller loop complete
I0616 16:17:04.478863    6002 volumes.go:86] AWS API Request: ec2/DescribeVolumes
I0616 16:17:04.609175    6002 volumes.go:86] AWS API Request: ec2/DescribeInstances
I0616 16:17:04.656036    6002 hosts.go:84] hosts update: primary=map[172.20.37.218:[etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:[172.20.37.218 172.20.37.218]], final=map[172.20.37.218:[etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]]
I0616 16:17:04.656115    6002 hosts.go:181] skipping update of unchanged /etc/hosts
I0616 16:17:07.287074    6002 controller.go:189] starting controller iteration
I0616 16:17:07.287105    6002 controller.go:266] Broadcasting leadership assertion with token "s4DpK5GzrquYzrevaoLxpA"
I0616 16:17:07.287396    6002 leadership.go:37] Got LeaderNotification view:<leader:<id:"etcd-events-a" endpoints:"172.20.37.218:3997" > leadership_token:"s4DpK5GzrquYzrevaoLxpA" healthy:<id:"etcd-events-a" endpoints:"172.20.37.218:3997" > >
I0616 16:17:07.287542    6002 controller.go:295] I am leader with token "s4DpK5GzrquYzrevaoLxpA"
I0616 16:17:07.288566    6002 controller.go:705] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002]
I0616 16:17:07.304878    6002 controller.go:302] etcd cluster state: etcdClusterState
  members:
    {"name":"etcd-events-a","peerURLs":["https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2381"],"endpoints":["https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002"],"ID":"10576081926946664113"}
  peers:
    etcdClusterPeerInfo{peer=peer{id:"etcd-events-a" endpoints:"172.20.37.218:3997" }, info=cluster_name:"etcd-events" node_configuration:<name:"etcd-events-a" peer_urls:"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2381" client_urls:"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002" quarantined_client_urls:"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:3995" > etcd_state:<cluster:<cluster_token:"-UIPY-sC50daSlFKDie08w" nodes:<name:"etcd-events-a" peer_urls:"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2381" client_urls:"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002" quarantined_client_urls:"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:3995" tls_enabled:true > > etcd_version:"3.4.13" > }
I0616 16:17:07.304961    6002 controller.go:303] etcd cluster members: map[10576081926946664113:{"name":"etcd-events-a","peerURLs":["https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2381"],"endpoints":["https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002"],"ID":"10576081926946664113"}]
I0616 16:17:07.304981    6002 controller.go:641] sending member map to all peers: members:<name:"etcd-events-a" dns:"etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io" addresses:"172.20.37.218" >
I0616 16:17:07.305194    6002 etcdserver.go:248] updating hosts: map[172.20.37.218:[etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]]
I0616 16:17:07.305209    6002 hosts.go:84] hosts update: primary=map[172.20.37.218:[etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:[172.20.37.218 172.20.37.218]], final=map[172.20.37.218:[etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]]
I0616 16:17:07.305267    6002 hosts.go:181] skipping update of unchanged /etc/hosts
I0616 16:17:07.305358    6002 commands.go:38] not refreshing commands - TTL not hit
I0616 16:17:07.305372    6002 s3fs.go:290] Reading file "s3://k8s-kops-prow/e2e-9c20857a72-da63e.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created"
I0616 16:17:07.887540    6002 controller.go:395] spec member_count:1 etcd_version:"3.4.13"
I0616 16:17:07.887610    6002 controller.go:557] controller loop complete
I0616 16:17:17.889328    6002 controller.go:189] starting controller iteration
I0616 16:17:17.889361    6002 controller.go:266] Broadcasting leadership assertion with token "s4DpK5GzrquYzrevaoLxpA"
I0616 16:17:17.889692    6002 leadership.go:37] Got LeaderNotification view:<leader:<id:"etcd-events-a" endpoints:"172.20.37.218:3997" > leadership_token:"s4DpK5GzrquYzrevaoLxpA" healthy:<id:"etcd-events-a" endpoints:"172.20.37.218:3997" > >
I0616 16:17:17.889883    6002 controller.go:295] I am leader with token "s4DpK5GzrquYzrevaoLxpA"
I0616 16:17:17.890784    6002 controller.go:705] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002]
I0616 16:17:17.908554    6002 controller.go:302] etcd cluster state: etcdClusterState
  members:
    {"name":"etcd-events-a","peerURLs":["https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2381"],"endpoints":["https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002"],"ID":"10576081926946664113"}
  peers:
    etcdClusterPeerInfo{peer=peer{id:"etcd-events-a" endpoints:"172.20.37.218:3997" }, info=cluster_name:"etcd-events" node_configuration:<name:"etcd-events-a" peer_urls:"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2381" client_urls:"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002" quarantined_client_urls:"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:3995" > etcd_state:<cluster:<cluster_token:"-UIPY-sC50daSlFKDie08w" nodes:<name:"etcd-events-a" peer_urls:"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2381" client_urls:"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002" quarantined_client_urls:"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:3995" tls_enabled:true > > etcd_version:"3.4.13" > }
I0616 16:17:17.908653    6002 controller.go:303] etcd cluster members: map[10576081926946664113:{"name":"etcd-events-a","peerURLs":["https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2381"],"endpoints":["https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002"],"ID":"10576081926946664113"}]
I0616 16:17:17.908673    6002 controller.go:641] sending member map to all peers: members:<name:"etcd-events-a" dns:"etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io" addresses:"172.20.37.218" >
I0616 16:17:17.908966    6002 etcdserver.go:248] updating hosts: map[172.20.37.218:[etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]]
I0616 16:17:17.908988    6002 hosts.go:84] hosts update: primary=map[172.20.37.218:[etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:[172.20.37.218 172.20.37.218]], final=map[172.20.37.218:[etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]]
I0616 16:17:17.909132    6002 hosts.go:181] skipping update of unchanged /etc/hosts
I0616 16:17:17.909337    6002 commands.go:38] not refreshing commands - TTL not hit
I0616 16:17:17.909354    6002 s3fs.go:290] Reading file "s3://k8s-kops-prow/e2e-9c20857a72-da63e.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created"
I0616 16:17:18.489432    6002 controller.go:395] spec member_count:1 etcd_version:"3.4.13"
I0616 16:17:18.489505    6002 controller.go:557] controller loop complete
I0616 16:17:28.490701    6002 controller.go:189] starting controller iteration
I0616 16:17:28.490732    6002 controller.go:266] Broadcasting leadership assertion with token "s4DpK5GzrquYzrevaoLxpA"
I0616 16:17:28.491079    6002 leadership.go:37] Got LeaderNotification view:<leader:<id:"etcd-events-a" endpoints:"172.20.37.218:3997" > leadership_token:"s4DpK5GzrquYzrevaoLxpA" healthy:<id:"etcd-events-a" endpoints:"172.20.37.218:3997" > >
I0616 16:17:28.491215    6002 controller.go:295] I am leader with token "s4DpK5GzrquYzrevaoLxpA"
I0616 16:17:28.492279    6002 controller.go:705] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002]
I0616 16:17:28.507913    6002 controller.go:302] etcd cluster state: etcdClusterState
  members:
    {"name":"etcd-events-a","peerURLs":["https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2381"],"endpoints":["https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002"],"ID":"10576081926946664113"}
  peers:
    etcdClusterPeerInfo{peer=peer{id:"etcd-events-a" endpoints:"172.20.37.218:3997" }, info=cluster_name:"etcd-events" node_configuration:<name:"etcd-events-a" peer_urls:"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2381" client_urls:"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002" quarantined_client_urls:"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:3995" > etcd_state:<cluster:<cluster_token:"-UIPY-sC50daSlFKDie08w" nodes:<name:"etcd-events-a" peer_urls:"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2381" client_urls:"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002" quarantined_client_urls:"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:3995" tls_enabled:true > > etcd_version:"3.4.13" > }
I0616 16:17:28.508000    6002 controller.go:303] etcd cluster members: map[10576081926946664113:{"name":"etcd-events-a","peerURLs":["https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2381"],"endpoints":["https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002"],"ID":"10576081926946664113"}]
I0616 16:17:28.508019    6002 controller.go:641] sending member map to all peers: members:<name:"etcd-events-a" dns:"etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io" addresses:"172.20.37.218" >
I0616 16:17:28.508312    6002 etcdserver.go:248] updating hosts: map[172.20.37.218:[etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]]
I0616 16:17:28.508331    6002 hosts.go:84] hosts update: primary=map[172.20.37.218:[etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:[172.20.37.218 172.20.37.218]], final=map[172.20.37.218:[etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]]
I0616 16:17:28.508420    6002 hosts.go:181] skipping update of unchanged /etc/hosts
I0616 16:17:28.508533    6002 commands.go:38] not refreshing commands - TTL not hit
I0616 16:17:28.508551    6002 s3fs.go:290] Reading file "s3://k8s-kops-prow/e2e-9c20857a72-da63e.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created"
I0616 16:17:29.084114    6002 controller.go:395] spec member_count:1 etcd_version:"3.4.13"
I0616 16:17:29.084881    6002 controller.go:557] controller loop complete
I0616 16:17:39.086398    6002 controller.go:189] starting controller iteration
I0616 16:17:39.086433    6002 controller.go:266] Broadcasting leadership assertion with token "s4DpK5GzrquYzrevaoLxpA"
I0616 16:17:39.086659    6002 leadership.go:37] Got LeaderNotification view:<leader:<id:"etcd-events-a" endpoints:"172.20.37.218:3997" > leadership_token:"s4DpK5GzrquYzrevaoLxpA" healthy:<id:"etcd-events-a" endpoints:"172.20.37.218:3997" > >
I0616 16:17:39.086777    6002 controller.go:295] I am leader with token "s4DpK5GzrquYzrevaoLxpA"
I0616 16:17:39.087220    6002 controller.go:705] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002]
I0616 16:17:39.102487    6002 controller.go:302] etcd cluster state: etcdClusterState
  members:
    {"name":"etcd-events-a","peerURLs":["https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2381"],"endpoints":["https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002"],"ID":"10576081926946664113"}
  peers:
    etcdClusterPeerInfo{peer=peer{id:"etcd-events-a" endpoints:"172.20.37.218:3997" }, info=cluster_name:"etcd-events" node_configuration:<name:"etcd-events-a" peer_urls:"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2381" client_urls:"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002" quarantined_client_urls:"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:3995" > etcd_state:<cluster:<cluster_token:"-UIPY-sC50daSlFKDie08w" nodes:<name:"etcd-events-a" peer_urls:"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2381" client_urls:"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002" quarantined_client_urls:"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:3995" tls_enabled:true > > etcd_version:"3.4.13" > }
I0616 16:17:39.102629    6002 controller.go:303] etcd cluster members: map[10576081926946664113:{"name":"etcd-events-a","peerURLs":["https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2381"],"endpoints":["https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002"],"ID":"10576081926946664113"}]
I0616 16:17:39.102650    6002 controller.go:641] sending member map to all peers: members:<name:"etcd-events-a" dns:"etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io" addresses:"172.20.37.218" >
I0616 16:17:39.102902    6002 etcdserver.go:248] updating hosts: map[172.20.37.218:[etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]]
I0616 16:17:39.102918    6002 hosts.go:84] hosts update: primary=map[172.20.37.218:[etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:[172.20.37.218 172.20.37.218]], final=map[172.20.37.218:[etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]]
I0616 16:17:39.102974    6002 hosts.go:181] skipping update of unchanged /etc/hosts
I0616 16:17:39.103069    6002 commands.go:38] not refreshing commands - TTL not hit
I0616 16:17:39.103084    6002 s3fs.go:290] Reading file "s3://k8s-kops-prow/e2e-9c20857a72-da63e.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created"
I0616 16:17:39.673136    6002 controller.go:395] spec member_count:1 etcd_version:"3.4.13"
I0616 16:17:39.673208    6002 controller.go:557] controller loop complete
I0616 16:17:49.674571    6002 controller.go:189] starting controller iteration
I0616 16:17:49.674601    6002 controller.go:266] Broadcasting leadership assertion with token "s4DpK5GzrquYzrevaoLxpA"
I0616 16:17:49.675152    6002 leadership.go:37] Got LeaderNotification view:<leader:<id:"etcd-events-a" endpoints:"172.20.37.218:3997" > leadership_token:"s4DpK5GzrquYzrevaoLxpA" healthy:<id:"etcd-events-a" endpoints:"172.20.37.218:3997" > >
I0616 16:17:49.675280    6002 controller.go:295] I am leader with token "s4DpK5GzrquYzrevaoLxpA"
I0616 16:17:49.676502    6002 controller.go:705] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002]
I0616 16:17:49.692453    6002 controller.go:302] etcd cluster state: etcdClusterState
  members:
    {"name":"etcd-events-a","peerURLs":["https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2381"],"endpoints":["https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002"],"ID":"10576081926946664113"}
  peers:
    etcdClusterPeerInfo{peer=peer{id:"etcd-events-a" endpoints:"172.20.37.218:3997" }, info=cluster_name:"etcd-events" node_configuration:<name:"etcd-events-a" peer_urls:"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2381" client_urls:"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002" quarantined_client_urls:"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:3995" > etcd_state:<cluster:<cluster_token:"-UIPY-sC50daSlFKDie08w" nodes:<name:"etcd-events-a" peer_urls:"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2381" client_urls:"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002" quarantined_client_urls:"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:3995" tls_enabled:true > > etcd_version:"3.4.13" > }
I0616 16:17:49.692725    6002 controller.go:303] etcd cluster members: map[10576081926946664113:{"name":"etcd-events-a","peerURLs":["https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2381"],"endpoints":["https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002"],"ID":"10576081926946664113"}]
I0616 16:17:49.692755    6002 controller.go:641] sending member map to all peers: members:<name:"etcd-events-a" dns:"etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io" addresses:"172.20.37.218" >
I0616 16:17:49.693075    6002 etcdserver.go:248] updating hosts: map[172.20.37.218:[etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]]
I0616 16:17:49.693095    6002 hosts.go:84] hosts update: primary=map[172.20.37.218:[etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:[172.20.37.218 172.20.37.218]], final=map[172.20.37.218:[etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]]
I0616 16:17:49.693164    6002 hosts.go:181] skipping update of unchanged /etc/hosts
I0616 16:17:49.693281    6002 commands.go:38] not refreshing commands - TTL not hit
I0616 16:17:49.693297    6002 s3fs.go:290] Reading file "s3://k8s-kops-prow/e2e-9c20857a72-da63e.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created"
I0616 16:17:50.271598    6002 controller.go:395] spec member_count:1 etcd_version:"3.4.13"
I0616 16:17:50.271784    6002 controller.go:557] controller loop complete
I0616 16:18:00.273129    6002 controller.go:189] starting controller iteration
I0616 16:18:00.273163    6002 controller.go:266] Broadcasting leadership assertion with token "s4DpK5GzrquYzrevaoLxpA"
I0616 16:18:00.273600    6002 leadership.go:37] Got LeaderNotification view:<leader:<id:"etcd-events-a" endpoints:"172.20.37.218:3997" > leadership_token:"s4DpK5GzrquYzrevaoLxpA" healthy:<id:"etcd-events-a" endpoints:"172.20.37.218:3997" > >
I0616 16:18:00.273867    6002 controller.go:295] I am leader with token "s4DpK5GzrquYzrevaoLxpA"
I0616 16:18:00.274395    6002 controller.go:705] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002]
I0616 16:18:00.286464    6002 controller.go:302] etcd cluster state: etcdClusterState
  members:
    {"name":"etcd-events-a","peerURLs":["https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2381"],"endpoints":["https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002"],"ID":"10576081926946664113"}
  peers:
    etcdClusterPeerInfo{peer=peer{id:"etcd-events-a" endpoints:"172.20.37.218:3997" }, info=cluster_name:"etcd-events" node_configuration:<name:"etcd-events-a" peer_urls:"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2381" client_urls:"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002" quarantined_client_urls:"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:3995" > etcd_state:<cluster:<cluster_token:"-UIPY-sC50daSlFKDie08w" nodes:<name:"etcd-events-a" peer_urls:"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2381" client_urls:"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002" quarantined_client_urls:"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:3995" tls_enabled:true > > etcd_version:"3.4.13" > }
I0616 16:18:00.286671    6002 controller.go:303] etcd cluster members: map[10576081926946664113:{"name":"etcd-events-a","peerURLs":["https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2381"],"endpoints":["https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002"],"ID":"10576081926946664113"}]
I0616 16:18:00.286745    6002 controller.go:641] sending member map to all peers: members:<name:"etcd-events-a" dns:"etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io" addresses:"172.20.37.218" >
I0616 16:18:00.286975    6002 etcdserver.go:248] updating hosts: map[172.20.37.218:[etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]]
I0616 16:18:00.286995    6002 hosts.go:84] hosts update: primary=map[172.20.37.218:[etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:[172.20.37.218 172.20.37.218]], final=map[172.20.37.218:[etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]]
I0616 16:18:00.287166    6002 hosts.go:181] skipping update of unchanged /etc/hosts
I0616 16:18:00.287330    6002 commands.go:38] not refreshing commands - TTL not hit
I0616 16:18:00.287375    6002 s3fs.go:290] Reading file "s3://k8s-kops-prow/e2e-9c20857a72-da63e.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created"
I0616 16:18:00.852504    6002 controller.go:395] spec member_count:1 etcd_version:"3.4.13"
I0616 16:18:00.852716    6002 controller.go:557] controller loop complete
I0616 16:18:04.656815    6002 volumes.go:86] AWS API Request: ec2/DescribeVolumes
I0616 16:18:04.764696    6002 volumes.go:86] AWS API Request: ec2/DescribeInstances
I0616 16:18:04.818817    6002 hosts.go:84] hosts update: primary=map[172.20.37.218:[etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:[172.20.37.218 172.20.37.218]], final=map[172.20.37.218:[etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]]
I0616 16:18:04.818891    6002 hosts.go:181] skipping update of unchanged /etc/hosts
I0616 16:18:10.854356    6002 controller.go:189] starting controller iteration
I0616 16:18:10.854561    6002 controller.go:266] Broadcasting leadership assertion with token "s4DpK5GzrquYzrevaoLxpA"
I0616 16:18:10.854936    6002 leadership.go:37] Got LeaderNotification view:<leader:<id:"etcd-events-a" endpoints:"172.20.37.218:3997" > leadership_token:"s4DpK5GzrquYzrevaoLxpA" healthy:<id:"etcd-events-a" endpoints:"172.20.37.218:3997" > >
I0616 16:18:10.855261    6002 controller.go:295] I am leader with token "s4DpK5GzrquYzrevaoLxpA"
I0616 16:18:10.855780    6002 controller.go:705] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002]
I0616 16:18:10.868048    6002 controller.go:302] etcd cluster state: etcdClusterState
  members:
    {"name":"etcd-events-a","peerURLs":["https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2381"],"endpoints":["https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002"],"ID":"10576081926946664113"}
  peers:
    etcdClusterPeerInfo{peer=peer{id:"etcd-events-a" endpoints:"172.20.37.218:3997" }, info=cluster_name:"etcd-events" node_configuration:<name:"etcd-events-a" peer_urls:"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2381" client_urls:"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002" quarantined_client_urls:"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:3995" > etcd_state:<cluster:<cluster_token:"-UIPY-sC50daSlFKDie08w" nodes:<name:"etcd-events-a" peer_urls:"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2381" client_urls:"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002" quarantined_client_urls:"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:3995" tls_enabled:true > > etcd_version:"3.4.13" > }
I0616 16:18:10.868137    6002 controller.go:303] etcd cluster members: map[10576081926946664113:{"name":"etcd-events-a","peerURLs":["https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2381"],"endpoints":["https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002"],"ID":"10576081926946664113"}]
I0616 16:18:10.868156    6002 controller.go:641] sending member map to all peers: members:<name:"etcd-events-a" dns:"etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io" addresses:"172.20.37.218" >
I0616 16:18:10.868644    6002 etcdserver.go:248] updating hosts: map[172.20.37.218:[etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]]
I0616 16:18:10.868730    6002 hosts.go:84] hosts update: primary=map[172.20.37.218:[etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:[172.20.37.218 172.20.37.218]], final=map[172.20.37.218:[etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]]
I0616 16:18:10.868853    6002 hosts.go:181] skipping update of unchanged /etc/hosts
I0616 16:18:10.869012    6002 commands.go:38] not refreshing commands - TTL not hit
I0616 16:18:10.869057    6002 s3fs.go:290] Reading file "s3://k8s-kops-prow/e2e-9c20857a72-da63e.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created"
I0616 16:18:11.445922    6002 controller.go:395] spec member_count:1 etcd_version:"3.4.13"
I0616 16:18:11.446171    6002 controller.go:557] controller loop complete
I0616 16:18:21.448340    6002 controller.go:189] starting controller iteration
I0616 16:18:21.448399    6002 controller.go:266] Broadcasting leadership assertion with token "s4DpK5GzrquYzrevaoLxpA"
I0616 16:18:21.448835    6002 leadership.go:37] Got LeaderNotification view:<leader:<id:"etcd-events-a" endpoints:"172.20.37.218:3997" > leadership_token:"s4DpK5GzrquYzrevaoLxpA" healthy:<id:"etcd-events-a" endpoints:"172.20.37.218:3997" > >
I0616 16:18:21.449108    6002 controller.go:295] I am leader with token "s4DpK5GzrquYzrevaoLxpA"
I0616 16:18:21.449863    6002 controller.go:705] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002]
I0616 16:18:21.473640    6002 controller.go:302] etcd cluster state: etcdClusterState
  members:
    {"name":"etcd-events-a","peerURLs":["https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2381"],"endpoints":["https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002"],"ID":"10576081926946664113"}
  peers:
    etcdClusterPeerInfo{peer=peer{id:"etcd-events-a" endpoints:"172.20.37.218:3997" }, info=cluster_name:"etcd-events" node_configuration:<name:"etcd-events-a" peer_urls:"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2381" client_urls:"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002" quarantined_client_urls:"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:3995" > etcd_state:<cluster:<cluster_token:"-UIPY-sC50daSlFKDie08w" nodes:<name:"etcd-events-a" peer_urls:"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2381" client_urls:"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002" quarantined_client_urls:"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:3995" tls_enabled:true > > etcd_version:"3.4.13" > }
I0616 16:18:21.473739    6002 controller.go:303] etcd cluster members: map[10576081926946664113:{"name":"etcd-events-a","peerURLs":["https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2381"],"endpoints":["https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002"],"ID":"10576081926946664113"}]
I0616 16:18:21.473758    6002 controller.go:641] sending member map to all peers: members:<name:"etcd-events-a" dns:"etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io" addresses:"172.20.37.218" >
I0616 16:18:21.473967    6002 etcdserver.go:248] updating hosts: map[172.20.37.218:[etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]]
I0616 16:18:21.473981    6002 hosts.go:84] hosts update: primary=map[172.20.37.218:[etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:[172.20.37.218 172.20.37.218]], final=map[172.20.37.218:[etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]]
I0616 16:18:21.474032    6002 hosts.go:181] skipping update of unchanged /etc/hosts
I0616 16:18:21.474110    6002 commands.go:38] not refreshing commands - TTL not hit
I0616 16:18:21.474121    6002 s3fs.go:290] Reading file "s3://k8s-kops-prow/e2e-9c20857a72-da63e.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created"
I0616 16:18:22.049221    6002 controller.go:395] spec member_count:1 etcd_version:"3.4.13"
I0616 16:18:22.049436    6002 controller.go:557] controller loop complete
I0616 16:18:32.051051    6002 controller.go:189] starting controller iteration
I0616 16:18:32.051084    6002 controller.go:266] Broadcasting leadership assertion with token "s4DpK5GzrquYzrevaoLxpA"
I0616 16:18:32.051485    6002 leadership.go:37] Got LeaderNotification view:<leader:<id:"etcd-events-a" endpoints:"172.20.37.218:3997" > leadership_token:"s4DpK5GzrquYzrevaoLxpA" healthy:<id:"etcd-events-a" endpoints:"172.20.37.218:3997" > >
I0616 16:18:32.051671    6002 controller.go:295] I am leader with token "s4DpK5GzrquYzrevaoLxpA"
I0616 16:18:32.052060    6002 controller.go:705] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002]
I0616 16:18:32.066471    6002 controller.go:302] etcd cluster state: etcdClusterState
  members:
    {"name":"etcd-events-a","peerURLs":["https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2381"],"endpoints":["https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002"],"ID":"10576081926946664113"}
  peers:
    etcdClusterPeerInfo{peer=peer{id:"etcd-events-a" endpoints:"172.20.37.218:3997" }, info=cluster_name:"etcd-events" node_configuration:<name:"etcd-events-a" peer_urls:"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2381" client_urls:"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002" quarantined_client_urls:"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:3995" > etcd_state:<cluster:<cluster_token:"-UIPY-sC50daSlFKDie08w" nodes:<name:"etcd-events-a" peer_urls:"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2381" client_urls:"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002" quarantined_client_urls:"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:3995" tls_enabled:true > > etcd_version:"3.4.13" > }
I0616 16:18:32.066560    6002 controller.go:303] etcd cluster members: map[10576081926946664113:{"name":"etcd-events-a","peerURLs":["https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2381"],"endpoints":["https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002"],"ID":"10576081926946664113"}]
I0616 16:18:32.066725    6002 controller.go:641] sending member map to all peers: members:<name:"etcd-events-a" dns:"etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io" addresses:"172.20.37.218" >
I0616 16:18:32.067019    6002 etcdserver.go:248] updating hosts: map[172.20.37.218:[etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]]
I0616 16:18:32.067039    6002 hosts.go:84] hosts update: primary=map[172.20.37.218:[etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:[172.20.37.218 172.20.37.218]], final=map[172.20.37.218:[etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]]
I0616 16:18:32.067191    6002 hosts.go:181] skipping update of unchanged /etc/hosts
I0616 16:18:32.067333    6002 commands.go:38] not refreshing commands - TTL not hit
I0616 16:18:32.067418    6002 s3fs.go:290] Reading file "s3://k8s-kops-prow/e2e-9c20857a72-da63e.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created"
I0616 16:18:32.647376    6002 controller.go:395] spec member_count:1 etcd_version:"3.4.13"
I0616 16:18:32.647452    6002 controller.go:557] controller loop complete
I0616 16:18:42.648882    6002 controller.go:189] starting controller iteration
I0616 16:18:42.648911    6002 controller.go:266] Broadcasting leadership assertion with token "s4DpK5GzrquYzrevaoLxpA"
I0616 16:18:42.649359    6002 leadership.go:37] Got LeaderNotification view:<leader:<id:"etcd-events-a" endpoints:"172.20.37.218:3997" > leadership_token:"s4DpK5GzrquYzrevaoLxpA" healthy:<id:"etcd-events-a" endpoints:"172.20.37.218:3997" > >
I0616 16:18:42.649653    6002 controller.go:295] I am leader with token "s4DpK5GzrquYzrevaoLxpA"
I0616 16:18:42.650728    6002 controller.go:705] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002]
I0616 16:18:42.665054    6002 controller.go:302] etcd cluster state: etcdClusterState
  members:
    {"name":"etcd-events-a","peerURLs":["https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2381"],"endpoints":["https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002"],"ID":"10576081926946664113"}
  peers:
    etcdClusterPeerInfo{peer=peer{id:"etcd-events-a" endpoints:"172.20.37.218:3997" }, info=cluster_name:"etcd-events" node_configuration:<name:"etcd-events-a" peer_urls:"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2381" client_urls:"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002" quarantined_client_urls:"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:3995" > etcd_state:<cluster:<cluster_token:"-UIPY-sC50daSlFKDie08w" nodes:<name:"etcd-events-a" peer_urls:"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2381" client_urls:"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002" quarantined_client_urls:"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:3995" tls_enabled:true > > etcd_version:"3.4.13" > }
I0616 16:18:42.665141    6002 controller.go:303] etcd cluster members: map[10576081926946664113:{"name":"etcd-events-a","peerURLs":["https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2381"],"endpoints":["https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002"],"ID":"10576081926946664113"}]
I0616 16:18:42.665178    6002 controller.go:641] sending member map to all peers: members:<name:"etcd-events-a"
dns:\"etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io\" addresses:\"172.20.37.218\" > \nI0616 16:18:42.665472    6002 etcdserver.go:248] updating hosts: map[172.20.37.218:[etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]]\nI0616 16:18:42.665490    6002 hosts.go:84] hosts update: primary=map[172.20.37.218:[etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:[172.20.37.218 172.20.37.218]], final=map[172.20.37.218:[etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]]\nI0616 16:18:42.665567    6002 hosts.go:181] skipping update of unchanged /etc/hosts\nI0616 16:18:42.665708    6002 commands.go:38] not refreshing commands - TTL not hit\nI0616 16:18:42.665725    6002 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-9c20857a72-da63e.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created\"\nI0616 16:18:43.244021    6002 controller.go:395] spec member_count:1 etcd_version:\"3.4.13\" \nI0616 16:18:43.244194    6002 controller.go:557] controller loop complete\nI0616 16:18:53.245823    6002 controller.go:189] starting controller iteration\nI0616 16:18:53.245854    6002 controller.go:266] Broadcasting leadership assertion with token \"s4DpK5GzrquYzrevaoLxpA\"\nI0616 16:18:53.246302    6002 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.37.218:3997\" > leadership_token:\"s4DpK5GzrquYzrevaoLxpA\" healthy:<id:\"etcd-events-a\" endpoints:\"172.20.37.218:3997\" > > \nI0616 16:18:53.246554    6002 controller.go:295] I am leader with token \"s4DpK5GzrquYzrevaoLxpA\"\nI0616 16:18:53.247110    6002 controller.go:705] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002]\nI0616 16:18:53.259109    6002 controller.go:302] etcd cluster state: etcdClusterState\n  members:\n    
{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002\"],\"ID\":\"10576081926946664113\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"172.20.37.218:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:3995\" > etcd_state:<cluster:<cluster_token:\"-UIPY-sC50daSlFKDie08w\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0616 16:18:53.259253    6002 controller.go:303] etcd cluster members: map[10576081926946664113:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002\"],\"ID\":\"10576081926946664113\"}]\nI0616 16:18:53.259292    6002 controller.go:641] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io\" addresses:\"172.20.37.218\" > \nI0616 16:18:53.259481    6002 etcdserver.go:248] updating hosts: map[172.20.37.218:[etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]]\nI0616 16:18:53.259521    6002 hosts.go:84] hosts update: 
primary=map[172.20.37.218:[etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:[172.20.37.218 172.20.37.218]], final=map[172.20.37.218:[etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]]\nI0616 16:18:53.259620    6002 hosts.go:181] skipping update of unchanged /etc/hosts\nI0616 16:18:53.259784    6002 commands.go:38] not refreshing commands - TTL not hit\nI0616 16:18:53.259801    6002 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-9c20857a72-da63e.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created\"\nI0616 16:18:53.837450    6002 controller.go:395] spec member_count:1 etcd_version:\"3.4.13\" \nI0616 16:18:53.837528    6002 controller.go:557] controller loop complete\nI0616 16:19:03.839531    6002 controller.go:189] starting controller iteration\nI0616 16:19:03.839566    6002 controller.go:266] Broadcasting leadership assertion with token \"s4DpK5GzrquYzrevaoLxpA\"\nI0616 16:19:03.840089    6002 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.37.218:3997\" > leadership_token:\"s4DpK5GzrquYzrevaoLxpA\" healthy:<id:\"etcd-events-a\" endpoints:\"172.20.37.218:3997\" > > \nI0616 16:19:03.840321    6002 controller.go:295] I am leader with token \"s4DpK5GzrquYzrevaoLxpA\"\nI0616 16:19:03.841462    6002 controller.go:705] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002]\nI0616 16:19:03.857800    6002 controller.go:302] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002\"],\"ID\":\"10576081926946664113\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"172.20.37.218:3997\" }, 
info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:3995\" > etcd_state:<cluster:<cluster_token:\"-UIPY-sC50daSlFKDie08w\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0616 16:19:03.857890    6002 controller.go:303] etcd cluster members: map[10576081926946664113:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002\"],\"ID\":\"10576081926946664113\"}]\nI0616 16:19:03.857911    6002 controller.go:641] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io\" addresses:\"172.20.37.218\" > \nI0616 16:19:03.858296    6002 etcdserver.go:248] updating hosts: map[172.20.37.218:[etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]]\nI0616 16:19:03.858313    6002 hosts.go:84] hosts update: primary=map[172.20.37.218:[etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:[172.20.37.218 172.20.37.218]], final=map[172.20.37.218:[etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]]\nI0616 16:19:03.858368    6002 hosts.go:181] skipping update of unchanged /etc/hosts\nI0616 16:19:03.858451    
6002 commands.go:38] not refreshing commands - TTL not hit\nI0616 16:19:03.858463    6002 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-9c20857a72-da63e.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created\"\nI0616 16:19:04.428837    6002 controller.go:395] spec member_count:1 etcd_version:\"3.4.13\" \nI0616 16:19:04.428905    6002 controller.go:557] controller loop complete\nI0616 16:19:04.819328    6002 volumes.go:86] AWS API Request: ec2/DescribeVolumes\nI0616 16:19:04.928172    6002 volumes.go:86] AWS API Request: ec2/DescribeInstances\nI0616 16:19:05.000446    6002 hosts.go:84] hosts update: primary=map[172.20.37.218:[etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:[172.20.37.218 172.20.37.218]], final=map[172.20.37.218:[etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]]\nI0616 16:19:05.000627    6002 hosts.go:181] skipping update of unchanged /etc/hosts\nI0616 16:19:14.430293    6002 controller.go:189] starting controller iteration\nI0616 16:19:14.430322    6002 controller.go:266] Broadcasting leadership assertion with token \"s4DpK5GzrquYzrevaoLxpA\"\nI0616 16:19:14.430597    6002 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.37.218:3997\" > leadership_token:\"s4DpK5GzrquYzrevaoLxpA\" healthy:<id:\"etcd-events-a\" endpoints:\"172.20.37.218:3997\" > > \nI0616 16:19:14.430738    6002 controller.go:295] I am leader with token \"s4DpK5GzrquYzrevaoLxpA\"\nI0616 16:19:14.431099    6002 controller.go:705] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002]\nI0616 16:19:14.445413    6002 controller.go:302] etcd cluster state: etcdClusterState\n  members:\n    
{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002\"],\"ID\":\"10576081926946664113\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"172.20.37.218:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:3995\" > etcd_state:<cluster:<cluster_token:\"-UIPY-sC50daSlFKDie08w\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0616 16:19:14.445516    6002 controller.go:303] etcd cluster members: map[10576081926946664113:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002\"],\"ID\":\"10576081926946664113\"}]\nI0616 16:19:14.445532    6002 controller.go:641] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io\" addresses:\"172.20.37.218\" > \nI0616 16:19:14.445728    6002 etcdserver.go:248] updating hosts: map[172.20.37.218:[etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]]\nI0616 16:19:14.445742    6002 hosts.go:84] hosts update: 
primary=map[172.20.37.218:[etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:[172.20.37.218 172.20.37.218]], final=map[172.20.37.218:[etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]]\nI0616 16:19:14.445784    6002 hosts.go:181] skipping update of unchanged /etc/hosts\nI0616 16:19:14.445850    6002 commands.go:38] not refreshing commands - TTL not hit\nI0616 16:19:14.445859    6002 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-9c20857a72-da63e.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created\"\nI0616 16:19:15.030784    6002 controller.go:395] spec member_count:1 etcd_version:\"3.4.13\" \nI0616 16:19:15.030877    6002 controller.go:557] controller loop complete\nI0616 16:19:25.032715    6002 controller.go:189] starting controller iteration\nI0616 16:19:25.032921    6002 controller.go:266] Broadcasting leadership assertion with token \"s4DpK5GzrquYzrevaoLxpA\"\nI0616 16:19:25.033226    6002 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.37.218:3997\" > leadership_token:\"s4DpK5GzrquYzrevaoLxpA\" healthy:<id:\"etcd-events-a\" endpoints:\"172.20.37.218:3997\" > > \nI0616 16:19:25.033397    6002 controller.go:295] I am leader with token \"s4DpK5GzrquYzrevaoLxpA\"\nI0616 16:19:25.033775    6002 controller.go:705] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002]\nI0616 16:19:25.045975    6002 controller.go:302] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002\"],\"ID\":\"10576081926946664113\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"172.20.37.218:3997\" }, 
info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:3995\" > etcd_state:<cluster:<cluster_token:\"-UIPY-sC50daSlFKDie08w\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0616 16:19:25.046274    6002 controller.go:303] etcd cluster members: map[10576081926946664113:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002\"],\"ID\":\"10576081926946664113\"}]\nI0616 16:19:25.046371    6002 controller.go:641] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io\" addresses:\"172.20.37.218\" > \nI0616 16:19:25.046615    6002 etcdserver.go:248] updating hosts: map[172.20.37.218:[etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]]\nI0616 16:19:25.046635    6002 hosts.go:84] hosts update: primary=map[172.20.37.218:[etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:[172.20.37.218 172.20.37.218]], final=map[172.20.37.218:[etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]]\nI0616 16:19:25.046798    6002 hosts.go:181] skipping update of unchanged /etc/hosts\nI0616 16:19:25.047112    
6002 commands.go:38] not refreshing commands - TTL not hit\nI0616 16:19:25.047215    6002 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-9c20857a72-da63e.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created\"\nI0616 16:19:25.625495    6002 controller.go:395] spec member_count:1 etcd_version:\"3.4.13\" \nI0616 16:19:25.625683    6002 controller.go:557] controller loop complete\nI0616 16:19:35.626901    6002 controller.go:189] starting controller iteration\nI0616 16:19:35.626951    6002 controller.go:266] Broadcasting leadership assertion with token \"s4DpK5GzrquYzrevaoLxpA\"\nI0616 16:19:35.627404    6002 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.37.218:3997\" > leadership_token:\"s4DpK5GzrquYzrevaoLxpA\" healthy:<id:\"etcd-events-a\" endpoints:\"172.20.37.218:3997\" > > \nI0616 16:19:35.627556    6002 controller.go:295] I am leader with token \"s4DpK5GzrquYzrevaoLxpA\"\nI0616 16:19:35.628286    6002 controller.go:705] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002]\nI0616 16:19:35.644102    6002 controller.go:302] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002\"],\"ID\":\"10576081926946664113\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"172.20.37.218:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:3995\" > 
etcd_state:<cluster:<cluster_token:\"-UIPY-sC50daSlFKDie08w\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0616 16:19:35.644183    6002 controller.go:303] etcd cluster members: map[10576081926946664113:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002\"],\"ID\":\"10576081926946664113\"}]\nI0616 16:19:35.644203    6002 controller.go:641] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io\" addresses:\"172.20.37.218\" > \nI0616 16:19:35.644441    6002 etcdserver.go:248] updating hosts: map[172.20.37.218:[etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]]\nI0616 16:19:35.644456    6002 hosts.go:84] hosts update: primary=map[172.20.37.218:[etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:[172.20.37.218 172.20.37.218]], final=map[172.20.37.218:[etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]]\nI0616 16:19:35.644511    6002 hosts.go:181] skipping update of unchanged /etc/hosts\nI0616 16:19:35.644596    6002 commands.go:38] not refreshing commands - TTL not hit\nI0616 16:19:35.644608    6002 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-9c20857a72-da63e.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created\"\nI0616 16:19:36.221641    6002 controller.go:395] spec member_count:1 etcd_version:\"3.4.13\" \nI0616 16:19:36.221715    6002 controller.go:557] 
controller loop complete\nI0616 16:19:46.222859    6002 controller.go:189] starting controller iteration\nI0616 16:19:46.222891    6002 controller.go:266] Broadcasting leadership assertion with token \"s4DpK5GzrquYzrevaoLxpA\"\nI0616 16:19:46.223343    6002 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.37.218:3997\" > leadership_token:\"s4DpK5GzrquYzrevaoLxpA\" healthy:<id:\"etcd-events-a\" endpoints:\"172.20.37.218:3997\" > > \nI0616 16:19:46.223534    6002 controller.go:295] I am leader with token \"s4DpK5GzrquYzrevaoLxpA\"\nI0616 16:19:46.223888    6002 controller.go:705] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002]\nI0616 16:19:46.235609    6002 controller.go:302] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002\"],\"ID\":\"10576081926946664113\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"172.20.37.218:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:3995\" > etcd_state:<cluster:<cluster_token:\"-UIPY-sC50daSlFKDie08w\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > 
etcd_version:\"3.4.13\" > }\nI0616 16:19:46.235697    6002 controller.go:303] etcd cluster members: map[10576081926946664113:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002\"],\"ID\":\"10576081926946664113\"}]\nI0616 16:19:46.235716    6002 controller.go:641] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io\" addresses:\"172.20.37.218\" > \nI0616 16:19:46.236053    6002 etcdserver.go:248] updating hosts: map[172.20.37.218:[etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]]\nI0616 16:19:46.236070    6002 hosts.go:84] hosts update: primary=map[172.20.37.218:[etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:[172.20.37.218 172.20.37.218]], final=map[172.20.37.218:[etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]]\nI0616 16:19:46.236151    6002 hosts.go:181] skipping update of unchanged /etc/hosts\nI0616 16:19:46.236288    6002 commands.go:38] not refreshing commands - TTL not hit\nI0616 16:19:46.236304    6002 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-9c20857a72-da63e.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created\"\nI0616 16:19:46.808914    6002 controller.go:395] spec member_count:1 etcd_version:\"3.4.13\" \nI0616 16:19:46.809141    6002 controller.go:557] controller loop complete\nI0616 16:19:56.810861    6002 controller.go:189] starting controller iteration\nI0616 16:19:56.811051    6002 controller.go:266] Broadcasting leadership assertion with token \"s4DpK5GzrquYzrevaoLxpA\"\nI0616 16:19:56.811366    6002 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.37.218:3997\" > 
leadership_token:\"s4DpK5GzrquYzrevaoLxpA\" healthy:<id:\"etcd-events-a\" endpoints:\"172.20.37.218:3997\" > > \nI0616 16:19:56.811565    6002 controller.go:295] I am leader with token \"s4DpK5GzrquYzrevaoLxpA\"\nI0616 16:19:56.812056    6002 controller.go:705] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002]\nI0616 16:19:56.824178    6002 controller.go:302] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002\"],\"ID\":\"10576081926946664113\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"172.20.37.218:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:3995\" > etcd_state:<cluster:<cluster_token:\"-UIPY-sC50daSlFKDie08w\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0616 16:19:56.824269    6002 controller.go:303] etcd cluster members: 
map[10576081926946664113:{"name":"etcd-events-a","peerURLs":["https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2381"],"endpoints":["https://etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4002"],"ID":"10576081926946664113"}]
I0616 16:19:56.824289    6002 controller.go:641] sending member map to all peers: members:<name:"etcd-events-a" dns:"etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io" addresses:"172.20.37.218" > 
I0616 16:19:56.824527    6002 etcdserver.go:248] updating hosts: map[172.20.37.218:[etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]]
I0616 16:19:56.824542    6002 hosts.go:84] hosts update: primary=map[172.20.37.218:[etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:[172.20.37.218 172.20.37.218]], final=map[172.20.37.218:[etcd-events-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]]
I0616 16:19:56.824619    6002 hosts.go:181] skipping update of unchanged /etc/hosts
I0616 16:19:56.824726    6002 commands.go:38] not refreshing commands - TTL not hit
I0616 16:19:56.824741    6002 s3fs.go:290] Reading file "s3://k8s-kops-prow/e2e-9c20857a72-da63e.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created"
I0616 16:19:57.393821    6002 controller.go:395] spec member_count:1 etcd_version:"3.4.13" 
I0616 16:19:57.393897    6002 controller.go:557] controller loop complete
... skipping repeated controller loop iterations (I0616 16:20:05 through I0616 16:20:39) ...
I0616 16:20:39.781919    6002 controller.go:395] spec member_count:1 etcd_version:"3.4.13" 
I0616 16:20:39.782003    6002 controller.go:557] controller loop complete
==== END logs for container etcd-manager of pod kube-system/etcd-manager-events-ip-172-20-37-218.eu-west-1.compute.internal ====
==== START logs for container etcd-manager of pod kube-system/etcd-manager-main-ip-172-20-37-218.eu-west-1.compute.internal ====
etcd-manager
I0616 16:15:00.933949    5995 volumes.go:86] AWS API Request: ec2metadata/GetToken
I0616 16:15:00.935290    5995 volumes.go:86] AWS API Request: ec2metadata/GetDynamicData
I0616 16:15:00.936082    5995 volumes.go:86] AWS API Request: ec2metadata/GetMetadata
I0616 16:15:00.937160    5995 volumes.go:86] AWS API Request: ec2metadata/GetMetadata
I0616 16:15:00.937715    5995 volumes.go:86] AWS API Request: ec2metadata/GetMetadata
I0616 16:15:00.938503    5995 main.go:305] Mounting available etcd volumes matching tags [k8s.io/etcd/main k8s.io/role/master=1 kubernetes.io/cluster/e2e-9c20857a72-da63e.test-cncf-aws.k8s.io=owned]; nameTag=k8s.io/etcd/main
I0616 16:15:00.940631    5995 volumes.go:86] AWS API Request: ec2/DescribeVolumes
I0616 16:15:01.101690    5995 mounter.go:304] Trying to mount master volume: "vol-0b744b77092fa434b"
I0616 16:15:01.101709    5995 volumes.go:331] Trying to attach 
volume "vol-0b744b77092fa434b" at "/dev/xvdu"
I0616 16:15:01.101870    5995 volumes.go:86] AWS API Request: ec2/AttachVolume
I0616 16:15:01.605501    5995 volumes.go:349] AttachVolume request returned {
  AttachTime: 2021-06-16 16:15:01.593 +0000 UTC,
  Device: "/dev/xvdu",
  InstanceId: "i-0a4ab1b5032c4d5e7",
  State: "attaching",
  VolumeId: "vol-0b744b77092fa434b"
}
I0616 16:15:01.605667    5995 volumes.go:86] AWS API Request: ec2/DescribeVolumes
I0616 16:15:01.705158    5995 mounter.go:318] Currently attached volumes: [0xc000102000]
I0616 16:15:01.705178    5995 mounter.go:72] Master volume "vol-0b744b77092fa434b" is attached at "/dev/xvdu"
I0616 16:15:01.705690    5995 mounter.go:86] Doing safe-format-and-mount of /dev/xvdu to /mnt/master-vol-0b744b77092fa434b
I0616 16:15:01.705715    5995 volumes.go:234] volume vol-0b744b77092fa434b not mounted at /rootfs/dev/xvdu
I0616 16:15:01.705815    5995 volumes.go:263] nvme path not found "/rootfs/dev/disk/by-id/nvme-Amazon_Elastic_Block_Store_vol0b744b77092fa434b"
I0616 16:15:01.705828    5995 volumes.go:251] volume vol-0b744b77092fa434b not mounted at nvme-Amazon_Elastic_Block_Store_vol0b744b77092fa434b
I0616 16:15:01.705833    5995 mounter.go:121] Waiting for volume "vol-0b744b77092fa434b" to be mounted
I0616 16:15:02.705944    5995 volumes.go:234] volume vol-0b744b77092fa434b not mounted at /rootfs/dev/xvdu
I0616 16:15:02.706007    5995 volumes.go:248] found nvme volume "nvme-Amazon_Elastic_Block_Store_vol0b744b77092fa434b" at "/dev/nvme1n1"
I0616 16:15:02.706018    5995 mounter.go:125] Found volume "vol-0b744b77092fa434b" mounted at device "/dev/nvme1n1"
I0616 16:15:02.706717    5995 mounter.go:171] Creating mount directory "/rootfs/mnt/master-vol-0b744b77092fa434b"
I0616 16:15:02.706808    5995 mounter.go:176] Mounting device "/dev/nvme1n1" on "/mnt/master-vol-0b744b77092fa434b"
I0616 16:15:02.706820    5995 mount_linux.go:446] Attempting to determine if disk "/dev/nvme1n1" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/nvme1n1])
I0616 16:15:02.706845    5995 nsenter.go:132] Running nsenter command: nsenter [--mount=/rootfs/proc/1/ns/mnt -- blkid -p -s TYPE -s PTTYPE -o export /dev/nvme1n1]
I0616 16:15:02.731014    5995 mount_linux.go:449] Output: ""
I0616 16:15:02.731040    5995 mount_linux.go:408] Disk "/dev/nvme1n1" appears to be unformatted, attempting to format as type: "ext4" with options: [-F -m0 /dev/nvme1n1]
I0616 16:15:02.731063    5995 nsenter.go:132] Running nsenter command: nsenter [--mount=/rootfs/proc/1/ns/mnt -- mkfs.ext4 -F -m0 /dev/nvme1n1]
I0616 16:15:02.991296    5995 mount_linux.go:418] Disk successfully formatted (mkfs): ext4 - /dev/nvme1n1 /mnt/master-vol-0b744b77092fa434b
I0616 16:15:02.991315    5995 mount_linux.go:436] Attempting to mount disk /dev/nvme1n1 in ext4 format at /mnt/master-vol-0b744b77092fa434b
I0616 16:15:02.991331    5995 nsenter.go:80] nsenter mount /dev/nvme1n1 /mnt/master-vol-0b744b77092fa434b ext4 [defaults]
I0616 16:15:02.991361    5995 nsenter.go:132] Running nsenter command: nsenter [--mount=/rootfs/proc/1/ns/mnt -- /bin/systemd-run --description=Kubernetes transient mount for /mnt/master-vol-0b744b77092fa434b --scope -- /bin/mount -t ext4 -o defaults /dev/nvme1n1 /mnt/master-vol-0b744b77092fa434b]
I0616 16:15:03.011264    5995 nsenter.go:84] Output of mounting /dev/nvme1n1 to /mnt/master-vol-0b744b77092fa434b: Running scope as unit: run-rd96f565c70884fedbdb79b3b12e849e9.scope
I0616 16:15:03.011288    5995 mount_linux.go:446] Attempting to determine if disk "/dev/nvme1n1" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/nvme1n1])
I0616 16:15:03.011313    5995 nsenter.go:132] Running nsenter command: nsenter [--mount=/rootfs/proc/1/ns/mnt -- blkid -p -s TYPE -s PTTYPE -o export /dev/nvme1n1]
I0616 16:15:03.028337    5995 mount_linux.go:449] Output: "DEVNAME=/dev/nvme1n1\nTYPE=ext4\n"
I0616 16:15:03.028372    5995 resizefs_linux.go:53] ResizeFS.Resize - Expanding mounted volume /dev/nvme1n1
I0616 16:15:03.028409    5995 nsenter.go:132] Running nsenter command: nsenter [--mount=/rootfs/proc/1/ns/mnt -- resize2fs /dev/nvme1n1]
I0616 16:15:03.031209    5995 resizefs_linux.go:68] Device /dev/nvme1n1 resized successfully
I0616 16:15:03.042654    5995 mount_linux.go:206] Detected OS with systemd
I0616 16:15:03.043275    5995 mounter.go:262] device "/dev/nvme1n1" did not evaluate as a symlink: lstat /dev/nvme1n1: no such file or directory
I0616 16:15:03.043301    5995 mounter.go:262] device "/dev/nvme1n1" did not evaluate as a symlink: lstat /dev/nvme1n1: no such file or directory
I0616 16:15:03.043308    5995 mounter.go:242] matched device "/dev/nvme1n1" and "/dev/nvme1n1" via '\x00'
I0616 16:15:03.043319    5995 mounter.go:94] mounted master volume "vol-0b744b77092fa434b" on /mnt/master-vol-0b744b77092fa434b
I0616 16:15:03.043335    5995 main.go:320] discovered IP address: 172.20.37.218
I0616 16:15:03.043340    5995 main.go:325] Setting data dir to /rootfs/mnt/master-vol-0b744b77092fa434b
I0616 16:15:03.158627    5995 certs.go:183] generating certificate for "etcd-manager-server-etcd-a"
I0616 16:15:03.306346    5995 certs.go:183] generating certificate for "etcd-manager-client-etcd-a"
I0616 16:15:03.310623    5995 server.go:87] starting GRPC server using TLS, ServerName="etcd-manager-server-etcd-a"
I0616 16:15:03.310981    5995 main.go:474] peerClientIPs: [172.20.37.218]
I0616 16:15:03.729709    5995 certs.go:183] generating certificate for "etcd-manager-etcd-a"
I0616 16:15:03.740055    5995 volumes.go:86] AWS API Request: ec2/DescribeVolumes
I0616 16:15:03.739700    5995 server.go:105] GRPC server listening on "172.20.37.218:3996"
I0616 16:15:03.907913    5995 volumes.go:86] AWS API Request: ec2/DescribeInstances
I0616 16:15:03.961202    5995 peers.go:115] found new candidate peer from discovery: etcd-a [{172.20.37.218 0} {172.20.37.218 0}]
I0616 16:15:03.961349    5995 hosts.go:84] hosts update: primary=map[], fallbacks=map[etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:[172.20.37.218 172.20.37.218]], final=map[172.20.37.218:[etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]]
I0616 16:15:03.961619    5995 peers.go:295] connecting to peer "etcd-a" with TLS policy, servername="etcd-manager-server-etcd-a"
I0616 16:15:05.740449    5995 controller.go:189] starting controller iteration
I0616 16:15:05.740925    5995 leadership.go:37] Got LeaderNotification view:<leader:<id:"etcd-a" endpoints:"172.20.37.218:3996" > leadership_token:"Rz0jPWmmRsH5zlu55pJEHw" healthy:<id:"etcd-a" endpoints:"172.20.37.218:3996" > > 
I0616 16:15:05.741110    5995 commands.go:41] refreshing commands
I0616 16:15:05.741228    5995 s3context.go:334] product_uuid is "ec2fb2a9-268e-6659-a969-6455e6509bc7", assuming running on EC2
I0616 16:15:05.742504    5995 s3context.go:166] got region from metadata: "eu-west-1"
I0616 16:15:05.765112    5995 s3context.go:213] found bucket in region "us-west-1"
I0616 16:15:06.433149    5995 vfs.go:120] listed commands in s3://k8s-kops-prow/e2e-9c20857a72-da63e.test-cncf-aws.k8s.io/backups/etcd/main/control: 0 commands
I0616 16:15:06.433170    5995 s3fs.go:290] Reading file "s3://k8s-kops-prow/e2e-9c20857a72-da63e.test-cncf-aws.k8s.io/backups/etcd/main/control/etcd-cluster-spec"
I0616 16:15:16.589028    5995 controller.go:189] starting controller iteration
I0616 16:15:16.589059    5995 controller.go:266] Broadcasting leadership assertion with token "Rz0jPWmmRsH5zlu55pJEHw"
I0616 16:15:16.589409    5995 leadership.go:37] Got LeaderNotification view:<leader:<id:"etcd-a" endpoints:"172.20.37.218:3996" > leadership_token:"Rz0jPWmmRsH5zlu55pJEHw" healthy:<id:"etcd-a" endpoints:"172.20.37.218:3996" > > 
I0616 16:15:16.589583    5995 
controller.go:295] I am leader with token "Rz0jPWmmRsH5zlu55pJEHw"
I0616 16:15:16.590090    5995 controller.go:302] etcd cluster state: etcdClusterState
  members:
  peers:
    etcdClusterPeerInfo{peer=peer{id:"etcd-a" endpoints:"172.20.37.218:3996" }, info=cluster_name:"etcd" node_configuration:<name:"etcd-a" peer_urls:"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2380" client_urls:"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4001" quarantined_client_urls:"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:3994" > }
I0616 16:15:16.590168    5995 controller.go:303] etcd cluster members: map[]
I0616 16:15:16.590180    5995 controller.go:641] sending member map to all peers: 
I0616 16:15:16.590481    5995 commands.go:38] not refreshing commands - TTL not hit
I0616 16:15:16.590498    5995 s3fs.go:290] Reading file "s3://k8s-kops-prow/e2e-9c20857a72-da63e.test-cncf-aws.k8s.io/backups/etcd/main/control/etcd-cluster-created"
I0616 16:15:17.406953    5995 controller.go:359] detected that there is no existing cluster
I0616 16:15:17.406970    5995 commands.go:41] refreshing commands
I0616 16:15:17.628981    5995 vfs.go:120] listed commands in s3://k8s-kops-prow/e2e-9c20857a72-da63e.test-cncf-aws.k8s.io/backups/etcd/main/control: 0 commands
I0616 16:15:17.629004    5995 s3fs.go:290] Reading file "s3://k8s-kops-prow/e2e-9c20857a72-da63e.test-cncf-aws.k8s.io/backups/etcd/main/control/etcd-cluster-spec"
I0616 16:15:17.777786    5995 controller.go:641] sending member map to all peers: members:<name:"etcd-a" dns:"etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io" addresses:"172.20.37.218" > 
I0616 16:15:17.778101    5995 etcdserver.go:248] updating hosts: map[172.20.37.218:[etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]]
I0616 16:15:17.778158    5995 hosts.go:84] hosts update: primary=map[172.20.37.218:[etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]], fallbacks=map[etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:[172.20.37.218 172.20.37.218]], final=map[172.20.37.218:[etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]]
I0616 16:15:17.778265    5995 hosts.go:181] skipping update of unchanged /etc/hosts
I0616 16:15:17.778415    5995 newcluster.go:136] starting new etcd cluster with [etcdClusterPeerInfo{peer=peer{id:"etcd-a" endpoints:"172.20.37.218:3996" }, info=cluster_name:"etcd" node_configuration:<name:"etcd-a" peer_urls:"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2380" client_urls:"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4001" quarantined_client_urls:"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:3994" > }]
I0616 16:15:17.778817    5995 newcluster.go:153] JoinClusterResponse: 
I0616 16:15:17.779724    5995 etcdserver.go:556] starting etcd with state new_cluster:true cluster:<cluster_token:"TslPsX61VjKTlDqpujM1aw" nodes:<name:"etcd-a" peer_urls:"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2380" client_urls:"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4001" quarantined_client_urls:"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:3994" tls_enabled:true > > etcd_version:"3.4.13" quarantined:true 
I0616 16:15:17.779776    5995 etcdserver.go:565] starting etcd with datadir /rootfs/mnt/master-vol-0b744b77092fa434b/data/TslPsX61VjKTlDqpujM1aw
I0616 16:15:17.780150    5995 pki.go:59] adding peerClientIPs [172.20.37.218]
I0616 16:15:17.780175    5995 pki.go:67] generating peer keypair for etcd: {CommonName:etcd-a Organization:[] AltNames:{DNSNames:[etcd-a etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io] IPs:[172.20.37.218 127.0.0.1]} Usages:[2 1]}
I0616 16:15:18.077345    5995 certs.go:183] generating certificate for "etcd-a"
I0616 16:15:18.080357    5995 pki.go:110] building client-serving certificate: {CommonName:etcd-a Organization:[] AltNames:{DNSNames:[etcd-a etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io] IPs:[127.0.0.1]} Usages:[1 2]}
I0616 16:15:18.345868    5995 certs.go:183] generating certificate for "etcd-a"
I0616 16:15:18.466696    5995 certs.go:183] generating certificate for "etcd-a"
I0616 16:15:18.468838    5995 etcdprocess.go:203] executing command /opt/etcd-v3.4.13-linux-amd64/etcd [/opt/etcd-v3.4.13-linux-amd64/etcd]
I0616 16:15:18.469543    5995 newcluster.go:171] JoinClusterResponse: 
I0616 16:15:18.469609    5995 s3fs.go:199] Writing file "s3://k8s-kops-prow/e2e-9c20857a72-da63e.test-cncf-aws.k8s.io/backups/etcd/main/control/etcd-cluster-spec"
I0616 16:15:18.469646    5995 s3context.go:241] Checking default bucket encryption for "k8s-kops-prow"
2021-06-16 16:15:18.476067 I | pkg/flags: recognized and used environment variable ETCD_ADVERTISE_CLIENT_URLS=https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:3994
2021-06-16 16:15:18.476096 I | pkg/flags: recognized and used environment variable ETCD_CERT_FILE=/rootfs/mnt/master-vol-0b744b77092fa434b/pki/TslPsX61VjKTlDqpujM1aw/clients/server.crt
2021-06-16 16:15:18.476160 I | pkg/flags: recognized and used environment variable ETCD_CLIENT_CERT_AUTH=true
2021-06-16 16:15:18.476174 I | pkg/flags: recognized and used environment variable ETCD_DATA_DIR=/rootfs/mnt/master-vol-0b744b77092fa434b/data/TslPsX61VjKTlDqpujM1aw
2021-06-16 16:15:18.476277 I | pkg/flags: recognized and used environment variable ETCD_ENABLE_V2=false
2021-06-16 16:15:18.476331 I | pkg/flags: recognized and used environment variable ETCD_INITIAL_ADVERTISE_PEER_URLS=https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2380
2021-06-16 16:15:18.476338 I | pkg/flags: recognized and used environment variable ETCD_INITIAL_CLUSTER=etcd-a=https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2380
2021-06-16 16:15:18.476343 I | pkg/flags: recognized and used environment variable ETCD_INITIAL_CLUSTER_STATE=new
2021-06-16 16:15:18.476361 I | pkg/flags: recognized and used environment variable ETCD_INITIAL_CLUSTER_TOKEN=TslPsX61VjKTlDqpujM1aw
2021-06-16 16:15:18.476451 I | pkg/flags: recognized and used environment variable ETCD_KEY_FILE=/rootfs/mnt/master-vol-0b744b77092fa434b/pki/TslPsX61VjKTlDqpujM1aw/clients/server.key
2021-06-16 16:15:18.476494 I | pkg/flags: recognized and used environment variable ETCD_LISTEN_CLIENT_URLS=https://0.0.0.0:3994
2021-06-16 16:15:18.476506 I | pkg/flags: recognized and used environment variable ETCD_LISTEN_PEER_URLS=https://0.0.0.0:2380
2021-06-16 16:15:18.476512 I | pkg/flags: recognized and used environment variable ETCD_LOG_OUTPUTS=stdout
2021-06-16 16:15:18.476535 I | pkg/flags: recognized and used environment variable ETCD_LOGGER=zap
2021-06-16 16:15:18.476589 I | pkg/flags: recognized and used environment variable ETCD_NAME=etcd-a
2021-06-16 16:15:18.476602 I | pkg/flags: recognized and used environment variable ETCD_PEER_CERT_FILE=/rootfs/mnt/master-vol-0b744b77092fa434b/pki/TslPsX61VjKTlDqpujM1aw/peers/me.crt
2021-06-16 16:15:18.476641 I | pkg/flags: recognized and used environment variable ETCD_PEER_CLIENT_CERT_AUTH=true
2021-06-16 16:15:18.476649 I | pkg/flags: recognized and used environment variable ETCD_PEER_KEY_FILE=/rootfs/mnt/master-vol-0b744b77092fa434b/pki/TslPsX61VjKTlDqpujM1aw/peers/me.key
2021-06-16 16:15:18.476658 I | pkg/flags: recognized and used environment variable ETCD_PEER_TRUSTED_CA_FILE=/rootfs/mnt/master-vol-0b744b77092fa434b/pki/TslPsX61VjKTlDqpujM1aw/peers/ca.crt
2021-06-16 16:15:18.476737 I | pkg/flags: recognized and used environment variable ETCD_TRUSTED_CA_FILE=/rootfs/mnt/master-vol-0b744b77092fa434b/pki/TslPsX61VjKTlDqpujM1aw/clients/ca.crt
2021-06-16 16:15:18.476776 W | pkg/flags: unrecognized environment variable 
ETCD_LISTEN_METRICS_URLS=
{"level":"info","ts":"2021-06-16T16:15:18.476Z","caller":"embed/etcd.go:117","msg":"configuring peer listeners","listen-peer-urls":["https://0.0.0.0:2380"]}
{"level":"info","ts":"2021-06-16T16:15:18.477Z","caller":"embed/etcd.go:468","msg":"starting with peer TLS","tls-info":"cert = /rootfs/mnt/master-vol-0b744b77092fa434b/pki/TslPsX61VjKTlDqpujM1aw/peers/me.crt, key = /rootfs/mnt/master-vol-0b744b77092fa434b/pki/TslPsX61VjKTlDqpujM1aw/peers/me.key, trusted-ca = /rootfs/mnt/master-vol-0b744b77092fa434b/pki/TslPsX61VjKTlDqpujM1aw/peers/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"info","ts":"2021-06-16T16:15:18.477Z","caller":"embed/etcd.go:127","msg":"configuring client listeners","listen-client-urls":["https://0.0.0.0:3994"]}
{"level":"info","ts":"2021-06-16T16:15:18.477Z","caller":"embed/etcd.go:302","msg":"starting an etcd server","etcd-version":"3.4.13","git-sha":"ae9734ed2","go-version":"go1.12.17","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":false,"name":"etcd-a","data-dir":"/rootfs/mnt/master-vol-0b744b77092fa434b/data/TslPsX61VjKTlDqpujM1aw","wal-dir":"","wal-dir-dedicated":"","member-dir":"/rootfs/mnt/master-vol-0b744b77092fa434b/data/TslPsX61VjKTlDqpujM1aw/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":100000,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2380"],"listen-peer-urls":["https://0.0.0.0:2380"],"advertise-client-urls":["https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:3994"],"listen-client-urls":["https://0.0.0.0:3994"],"listen-metrics-urls":[],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"etcd-a=https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2380","initial-cluster-state":"new","initial-cluster-token":"TslPsX61VjKTlDqpujM1aw","quota-size-bytes":2147483648,"pre-vote":false,"initial-corrupt-check":false,"corrupt-check-time-interval":"0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":""}
{"level":"info","ts":"2021-06-16T16:15:18.481Z","caller":"etcdserver/backend.go:80","msg":"opened backend db","path":"/rootfs/mnt/master-vol-0b744b77092fa434b/data/TslPsX61VjKTlDqpujM1aw/member/snap/db","took":"3.53874ms"}
{"level":"info","ts":"2021-06-16T16:15:18.482Z","caller":"netutil/netutil.go:112","msg":"resolved URL Host","url":"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2380","host":"etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2380","resolved-addr":"172.20.37.218:2380"}
{"level":"info","ts":"2021-06-16T16:15:18.482Z","caller":"netutil/netutil.go:112","msg":"resolved URL Host","url":"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2380","host":"etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2380","resolved-addr":"172.20.37.218:2380"}
{"level":"info","ts":"2021-06-16T16:15:18.496Z","caller":"etcdserver/raft.go:486","msg":"starting local member","local-member-id":"3c919e269ebfcd16","cluster-id":"8602d2862dd3bdfb"}
{"level":"info","ts":"2021-06-16T16:15:18.496Z","caller":"raft/raft.go:1530","msg":"3c919e269ebfcd16 switched to configuration voters=()"}
{"level":"info","ts":"2021-06-16T16:15:18.496Z","caller":"raft/raft.go:700","msg":"3c919e269ebfcd16 became follower at term 0"}
{"level":"info","ts":"2021-06-16T16:15:18.496Z","caller":"raft/raft.go:383","msg":"newRaft 3c919e269ebfcd16 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]"}
{"level":"info","ts":"2021-06-16T16:15:18.496Z","caller":"raft/raft.go:700","msg":"3c919e269ebfcd16 became follower at term 1"}
{"level":"info","ts":"2021-06-16T16:15:18.496Z","caller":"raft/raft.go:1530","msg":"3c919e269ebfcd16 switched to configuration voters=(4364443402608037142)"}
{"level":"warn","ts":"2021-06-16T16:15:18.499Z","caller":"auth/store.go:1366","msg":"simple token is not cryptographically signed"}
{"level":"info","ts":"2021-06-16T16:15:18.503Z","caller":"etcdserver/quota.go:98","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
{"level":"info","ts":"2021-06-16T16:15:18.505Z","caller":"etcdserver/server.go:803","msg":"starting etcd server","local-member-id":"3c919e269ebfcd16","local-server-version":"3.4.13","cluster-version":"to_be_decided"}
{"level":"info","ts":"2021-06-16T16:15:18.507Z","caller":"etcdserver/server.go:669","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"3c919e269ebfcd16","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
{"level":"info","ts":"2021-06-16T16:15:18.507Z","caller":"raft/raft.go:1530","msg":"3c919e269ebfcd16 switched to configuration voters=(4364443402608037142)"}
{"level":"info","ts":"2021-06-16T16:15:18.507Z","caller":"membership/cluster.go:392","msg":"added member","cluster-id":"8602d2862dd3bdfb","local-member-id":"3c919e269ebfcd16","added-peer-id":"3c919e269ebfcd16","added-peer-peer-urls":["https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2380"]}
{"level":"info","ts":"2021-06-16T16:15:18.508Z","caller":"embed/etcd.go:711","msg":"starting with client TLS","tls-info":"cert = 
/rootfs/mnt/master-vol-0b744b77092fa434b/pki/TslPsX61VjKTlDqpujM1aw/clients/server.crt, key = /rootfs/mnt/master-vol-0b744b77092fa434b/pki/TslPsX61VjKTlDqpujM1aw/clients/server.key, trusted-ca = /rootfs/mnt/master-vol-0b744b77092fa434b/pki/TslPsX61VjKTlDqpujM1aw/clients/ca.crt, client-cert-auth = true, crl-file = \",\"cipher-suites\":[]}\n{\"level\":\"info\",\"ts\":\"2021-06-16T16:15:18.508Z\",\"caller\":\"embed/etcd.go:244\",\"msg\":\"now serving peer/client/metrics\",\"local-member-id\":\"3c919e269ebfcd16\",\"initial-advertise-peer-urls\":[\"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2380\"],\"listen-peer-urls\":[\"https://0.0.0.0:2380\"],\"advertise-client-urls\":[\"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:3994\"],\"listen-client-urls\":[\"https://0.0.0.0:3994\"],\"listen-metrics-urls\":[]}\n{\"level\":\"info\",\"ts\":\"2021-06-16T16:15:18.508Z\",\"caller\":\"embed/etcd.go:579\",\"msg\":\"serving peer traffic\",\"address\":\"[::]:2380\"}\n{\"level\":\"info\",\"ts\":\"2021-06-16T16:15:18.597Z\",\"caller\":\"raft/raft.go:923\",\"msg\":\"3c919e269ebfcd16 is starting a new election at term 1\"}\n{\"level\":\"info\",\"ts\":\"2021-06-16T16:15:18.597Z\",\"caller\":\"raft/raft.go:713\",\"msg\":\"3c919e269ebfcd16 became candidate at term 2\"}\n{\"level\":\"info\",\"ts\":\"2021-06-16T16:15:18.597Z\",\"caller\":\"raft/raft.go:824\",\"msg\":\"3c919e269ebfcd16 received MsgVoteResp from 3c919e269ebfcd16 at term 2\"}\n{\"level\":\"info\",\"ts\":\"2021-06-16T16:15:18.597Z\",\"caller\":\"raft/raft.go:765\",\"msg\":\"3c919e269ebfcd16 became leader at term 2\"}\n{\"level\":\"info\",\"ts\":\"2021-06-16T16:15:18.597Z\",\"caller\":\"raft/node.go:325\",\"msg\":\"raft.node: 3c919e269ebfcd16 elected leader 3c919e269ebfcd16 at term 2\"}\n{\"level\":\"info\",\"ts\":\"2021-06-16T16:15:18.597Z\",\"caller\":\"etcdserver/server.go:2037\",\"msg\":\"published local member to cluster through 
raft\",\"local-member-id\":\"3c919e269ebfcd16\",\"local-member-attributes\":\"{Name:etcd-a ClientURLs:[https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:3994]}\",\"request-path\":\"/0/members/3c919e269ebfcd16/attributes\",\"cluster-id\":\"8602d2862dd3bdfb\",\"publish-timeout\":\"7s\"}\n{\"level\":\"info\",\"ts\":\"2021-06-16T16:15:18.597Z\",\"caller\":\"etcdserver/server.go:2528\",\"msg\":\"setting up initial cluster version\",\"cluster-version\":\"3.4\"}\n{\"level\":\"info\",\"ts\":\"2021-06-16T16:15:18.598Z\",\"caller\":\"embed/serve.go:191\",\"msg\":\"serving client traffic securely\",\"address\":\"[::]:3994\"}\n{\"level\":\"info\",\"ts\":\"2021-06-16T16:15:18.599Z\",\"caller\":\"membership/cluster.go:558\",\"msg\":\"set initial cluster version\",\"cluster-id\":\"8602d2862dd3bdfb\",\"local-member-id\":\"3c919e269ebfcd16\",\"cluster-version\":\"3.4\"}\n{\"level\":\"info\",\"ts\":\"2021-06-16T16:15:18.599Z\",\"caller\":\"api/capability.go:76\",\"msg\":\"enabled capabilities for version\",\"cluster-version\":\"3.4\"}\n{\"level\":\"info\",\"ts\":\"2021-06-16T16:15:18.599Z\",\"caller\":\"etcdserver/server.go:2560\",\"msg\":\"cluster version is updated\",\"cluster-version\":\"3.4\"}\nI0616 16:15:18.792949    5995 s3fs.go:199] Writing file \"s3://k8s-kops-prow/e2e-9c20857a72-da63e.test-cncf-aws.k8s.io/backups/etcd/main/control/etcd-cluster-created\"\nI0616 16:15:18.958594    5995 controller.go:189] starting controller iteration\nI0616 16:15:18.958618    5995 controller.go:266] Broadcasting leadership assertion with token \"Rz0jPWmmRsH5zlu55pJEHw\"\nI0616 16:15:18.958917    5995 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-a\" endpoints:\"172.20.37.218:3996\" > leadership_token:\"Rz0jPWmmRsH5zlu55pJEHw\" healthy:<id:\"etcd-a\" endpoints:\"172.20.37.218:3996\" > > \nI0616 16:15:18.959155    5995 controller.go:295] I am leader with token \"Rz0jPWmmRsH5zlu55pJEHw\"\nI0616 16:15:18.960210    5995 controller.go:705] base client OK for etcd 
for client urls [https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:3994]\nI0616 16:15:18.974805    5995 controller.go:302] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-a\",\"peerURLs\":[\"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2380\"],\"endpoints\":[\"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:3994\"],\"ID\":\"4364443402608037142\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-a\" endpoints:\"172.20.37.218:3996\" }, info=cluster_name:\"etcd\" node_configuration:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2380\" client_urls:\"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:3994\" > etcd_state:<cluster:<cluster_token:\"TslPsX61VjKTlDqpujM1aw\" nodes:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2380\" client_urls:\"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:3994\" tls_enabled:true > > etcd_version:\"3.4.13\" quarantined:true > }\nI0616 16:15:18.975069    5995 controller.go:303] etcd cluster members: map[4364443402608037142:{\"name\":\"etcd-a\",\"peerURLs\":[\"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2380\"],\"endpoints\":[\"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:3994\"],\"ID\":\"4364443402608037142\"}]\nI0616 16:15:18.975089    5995 controller.go:641] sending member map to all peers: members:<name:\"etcd-a\" dns:\"etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io\" addresses:\"172.20.37.218\" > \nI0616 16:15:18.975407    5995 etcdserver.go:248] updating hosts: map[172.20.37.218:[etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]]\nI0616 16:15:18.975426    5995 
hosts.go:84] hosts update: primary=map[172.20.37.218:[etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]], fallbacks=map[etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:[172.20.37.218 172.20.37.218]], final=map[172.20.37.218:[etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]]\nI0616 16:15:18.975550    5995 hosts.go:181] skipping update of unchanged /etc/hosts\nI0616 16:15:18.975675    5995 commands.go:38] not refreshing commands - TTL not hit\nI0616 16:15:18.975692    5995 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-9c20857a72-da63e.test-cncf-aws.k8s.io/backups/etcd/main/control/etcd-cluster-created\"\nI0616 16:15:19.127655    5995 controller.go:395] spec member_count:1 etcd_version:\"3.4.13\" \nI0616 16:15:19.128677    5995 backup.go:134] performing snapshot save to /tmp/917494350/snapshot.db.gz\n{\"level\":\"info\",\"ts\":\"2021-06-16T16:15:19.134Z\",\"caller\":\"clientv3/maintenance.go:200\",\"msg\":\"opened snapshot stream; downloading\"}\n{\"level\":\"info\",\"ts\":\"2021-06-16T16:15:19.134Z\",\"caller\":\"v3rpc/maintenance.go:139\",\"msg\":\"sending database snapshot to client\",\"total-bytes\":20480,\"size\":\"20 kB\"}\n{\"level\":\"info\",\"ts\":\"2021-06-16T16:15:19.135Z\",\"caller\":\"v3rpc/maintenance.go:177\",\"msg\":\"sending database sha256 checksum to client\",\"total-bytes\":20480,\"checksum-size\":32}\n{\"level\":\"info\",\"ts\":\"2021-06-16T16:15:19.135Z\",\"caller\":\"v3rpc/maintenance.go:191\",\"msg\":\"successfully sent database snapshot to client\",\"total-bytes\":20480,\"size\":\"20 kB\",\"took\":\"now\"}\n{\"level\":\"info\",\"ts\":\"2021-06-16T16:15:19.135Z\",\"caller\":\"clientv3/maintenance.go:208\",\"msg\":\"completed snapshot read; closing\"}\nI0616 16:15:19.136700    5995 s3fs.go:199] Writing file \"s3://k8s-kops-prow/e2e-9c20857a72-da63e.test-cncf-aws.k8s.io/backups/etcd/main/2021-06-16T16:15:19Z-000001/etcd.backup.gz\"\nI0616 16:15:19.311213    5995 s3fs.go:199] Writing file 
\"s3://k8s-kops-prow/e2e-9c20857a72-da63e.test-cncf-aws.k8s.io/backups/etcd/main/2021-06-16T16:15:19Z-000001/_etcd_backup.meta\"\nI0616 16:15:19.486313    5995 backup.go:159] backup complete: name:\"2021-06-16T16:15:19Z-000001\" \nI0616 16:15:19.486969    5995 controller.go:937] backup response: name:\"2021-06-16T16:15:19Z-000001\" \nI0616 16:15:19.487017    5995 controller.go:576] took backup: name:\"2021-06-16T16:15:19Z-000001\" \nI0616 16:15:19.647164    5995 vfs.go:118] listed backups in s3://k8s-kops-prow/e2e-9c20857a72-da63e.test-cncf-aws.k8s.io/backups/etcd/main: [2021-06-16T16:15:19Z-000001]\nI0616 16:15:19.647188    5995 cleanup.go:166] retaining backup \"2021-06-16T16:15:19Z-000001\"\nI0616 16:15:19.647213    5995 restore.go:98] Setting quarantined state to false\nI0616 16:15:19.647538    5995 etcdserver.go:393] Reconfigure request: header:<leadership_token:\"Rz0jPWmmRsH5zlu55pJEHw\" cluster_name:\"etcd\" > \nI0616 16:15:19.647594    5995 etcdserver.go:436] Stopping etcd for reconfigure request: header:<leadership_token:\"Rz0jPWmmRsH5zlu55pJEHw\" cluster_name:\"etcd\" > \nI0616 16:15:19.647609    5995 etcdserver.go:640] killing etcd with datadir /rootfs/mnt/master-vol-0b744b77092fa434b/data/TslPsX61VjKTlDqpujM1aw\nI0616 16:15:19.648572    5995 etcdprocess.go:131] Waiting for etcd to exit\nI0616 16:15:19.748807    5995 etcdprocess.go:131] Waiting for etcd to exit\nI0616 16:15:19.748825    5995 etcdprocess.go:136] Exited etcd: signal: killed\nI0616 16:15:19.748895    5995 etcdserver.go:443] updated cluster state: cluster:<cluster_token:\"TslPsX61VjKTlDqpujM1aw\" nodes:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2380\" client_urls:\"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:3994\" tls_enabled:true > > etcd_version:\"3.4.13\" \nI0616 16:15:19.749050    5995 etcdserver.go:448] Starting 
etcd version \"3.4.13\"\nI0616 16:15:19.749066    5995 etcdserver.go:556] starting etcd with state cluster:<cluster_token:\"TslPsX61VjKTlDqpujM1aw\" nodes:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2380\" client_urls:\"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:3994\" tls_enabled:true > > etcd_version:\"3.4.13\" \nI0616 16:15:19.749100    5995 etcdserver.go:565] starting etcd with datadir /rootfs/mnt/master-vol-0b744b77092fa434b/data/TslPsX61VjKTlDqpujM1aw\nI0616 16:15:19.749200    5995 pki.go:59] adding peerClientIPs [172.20.37.218]\nI0616 16:15:19.749221    5995 pki.go:67] generating peer keypair for etcd: {CommonName:etcd-a Organization:[] AltNames:{DNSNames:[etcd-a etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io] IPs:[172.20.37.218 127.0.0.1]} Usages:[2 1]}\nI0616 16:15:19.749481    5995 certs.go:122] existing certificate not valid after 2023-06-16T16:15:18Z; will regenerate\nI0616 16:15:19.749495    5995 certs.go:183] generating certificate for \"etcd-a\"\nI0616 16:15:19.751594    5995 pki.go:110] building client-serving certificate: {CommonName:etcd-a Organization:[] AltNames:{DNSNames:[etcd-a etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io] IPs:[127.0.0.1]} Usages:[1 2]}\nI0616 16:15:19.751779    5995 certs.go:122] existing certificate not valid after 2023-06-16T16:15:18Z; will regenerate\nI0616 16:15:19.751792    5995 certs.go:183] generating certificate for \"etcd-a\"\nI0616 16:15:19.977865    5995 certs.go:183] generating certificate for \"etcd-a\"\nI0616 16:15:19.980920    5995 etcdprocess.go:203] executing command /opt/etcd-v3.4.13-linux-amd64/etcd [/opt/etcd-v3.4.13-linux-amd64/etcd]\nI0616 16:15:19.987001    5995 restore.go:116] ReconfigureResponse: \nI0616 16:15:19.988622    5995 controller.go:189] 
starting controller iteration\nI0616 16:15:19.988645    5995 controller.go:266] Broadcasting leadership assertion with token \"Rz0jPWmmRsH5zlu55pJEHw\"\nI0616 16:15:19.988853    5995 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-a\" endpoints:\"172.20.37.218:3996\" > leadership_token:\"Rz0jPWmmRsH5zlu55pJEHw\" healthy:<id:\"etcd-a\" endpoints:\"172.20.37.218:3996\" > > \nI0616 16:15:19.988955    5995 controller.go:295] I am leader with token \"Rz0jPWmmRsH5zlu55pJEHw\"\nI0616 16:15:19.989305    5995 controller.go:705] base client OK for etcd for client urls [https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4001]\n2021-06-16 16:15:19.997967 I | pkg/flags: recognized and used environment variable ETCD_ADVERTISE_CLIENT_URLS=https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4001\n2021-06-16 16:15:19.997998 I | pkg/flags: recognized and used environment variable ETCD_CERT_FILE=/rootfs/mnt/master-vol-0b744b77092fa434b/pki/TslPsX61VjKTlDqpujM1aw/clients/server.crt\n2021-06-16 16:15:19.998005 I | pkg/flags: recognized and used environment variable ETCD_CLIENT_CERT_AUTH=true\n2021-06-16 16:15:19.998014 I | pkg/flags: recognized and used environment variable ETCD_DATA_DIR=/rootfs/mnt/master-vol-0b744b77092fa434b/data/TslPsX61VjKTlDqpujM1aw\n2021-06-16 16:15:19.998036 I | pkg/flags: recognized and used environment variable ETCD_ENABLE_V2=false\n2021-06-16 16:15:19.998063 I | pkg/flags: recognized and used environment variable ETCD_INITIAL_ADVERTISE_PEER_URLS=https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2380\n2021-06-16 16:15:19.998068 I | pkg/flags: recognized and used environment variable ETCD_INITIAL_CLUSTER=etcd-a=https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2380\n2021-06-16 16:15:19.998072 I | pkg/flags: recognized and used environment variable ETCD_INITIAL_CLUSTER_STATE=existing\n2021-06-16 16:15:19.998078 I | pkg/flags: recognized and used environment variable 
ETCD_INITIAL_CLUSTER_TOKEN=TslPsX61VjKTlDqpujM1aw\n2021-06-16 16:15:19.998083 I | pkg/flags: recognized and used environment variable ETCD_KEY_FILE=/rootfs/mnt/master-vol-0b744b77092fa434b/pki/TslPsX61VjKTlDqpujM1aw/clients/server.key\n2021-06-16 16:15:19.998093 I | pkg/flags: recognized and used environment variable ETCD_LISTEN_CLIENT_URLS=https://0.0.0.0:4001\n2021-06-16 16:15:19.998100 I | pkg/flags: recognized and used environment variable ETCD_LISTEN_PEER_URLS=https://0.0.0.0:2380\n2021-06-16 16:15:19.998106 I | pkg/flags: recognized and used environment variable ETCD_LOG_OUTPUTS=stdout\n2021-06-16 16:15:19.998119 I | pkg/flags: recognized and used environment variable ETCD_LOGGER=zap\n2021-06-16 16:15:19.998128 I | pkg/flags: recognized and used environment variable ETCD_NAME=etcd-a\n2021-06-16 16:15:19.998134 I | pkg/flags: recognized and used environment variable ETCD_PEER_CERT_FILE=/rootfs/mnt/master-vol-0b744b77092fa434b/pki/TslPsX61VjKTlDqpujM1aw/peers/me.crt\n2021-06-16 16:15:19.998139 I | pkg/flags: recognized and used environment variable ETCD_PEER_CLIENT_CERT_AUTH=true\n2021-06-16 16:15:19.998145 I | pkg/flags: recognized and used environment variable ETCD_PEER_KEY_FILE=/rootfs/mnt/master-vol-0b744b77092fa434b/pki/TslPsX61VjKTlDqpujM1aw/peers/me.key\n2021-06-16 16:15:19.998152 I | pkg/flags: recognized and used environment variable ETCD_PEER_TRUSTED_CA_FILE=/rootfs/mnt/master-vol-0b744b77092fa434b/pki/TslPsX61VjKTlDqpujM1aw/peers/ca.crt\n2021-06-16 16:15:19.998165 I | pkg/flags: recognized and used environment variable ETCD_TRUSTED_CA_FILE=/rootfs/mnt/master-vol-0b744b77092fa434b/pki/TslPsX61VjKTlDqpujM1aw/clients/ca.crt\n2021-06-16 16:15:19.998181 W | pkg/flags: unrecognized environment variable ETCD_LISTEN_METRICS_URLS=\n{\"level\":\"info\",\"ts\":\"2021-06-16T16:15:19.998Z\",\"caller\":\"etcdmain/etcd.go:134\",\"msg\":\"server has been already 
initialized\",\"data-dir\":\"/rootfs/mnt/master-vol-0b744b77092fa434b/data/TslPsX61VjKTlDqpujM1aw\",\"dir-type\":\"member\"}\n{\"level\":\"info\",\"ts\":\"2021-06-16T16:15:19.998Z\",\"caller\":\"embed/etcd.go:117\",\"msg\":\"configuring peer listeners\",\"listen-peer-urls\":[\"https://0.0.0.0:2380\"]}\n{\"level\":\"info\",\"ts\":\"2021-06-16T16:15:19.998Z\",\"caller\":\"embed/etcd.go:468\",\"msg\":\"starting with peer TLS\",\"tls-info\":\"cert = /rootfs/mnt/master-vol-0b744b77092fa434b/pki/TslPsX61VjKTlDqpujM1aw/peers/me.crt, key = /rootfs/mnt/master-vol-0b744b77092fa434b/pki/TslPsX61VjKTlDqpujM1aw/peers/me.key, trusted-ca = /rootfs/mnt/master-vol-0b744b77092fa434b/pki/TslPsX61VjKTlDqpujM1aw/peers/ca.crt, client-cert-auth = true, crl-file = \",\"cipher-suites\":[]}\n{\"level\":\"info\",\"ts\":\"2021-06-16T16:15:19.998Z\",\"caller\":\"embed/etcd.go:127\",\"msg\":\"configuring client listeners\",\"listen-client-urls\":[\"https://0.0.0.0:4001\"]}\n{\"level\":\"info\",\"ts\":\"2021-06-16T16:15:19.999Z\",\"caller\":\"embed/etcd.go:302\",\"msg\":\"starting an etcd 
server\",\"etcd-version\":\"3.4.13\",\"git-sha\":\"ae9734ed2\",\"go-version\":\"go1.12.17\",\"go-os\":\"linux\",\"go-arch\":\"amd64\",\"max-cpu-set\":2,\"max-cpu-available\":2,\"member-initialized\":true,\"name\":\"etcd-a\",\"data-dir\":\"/rootfs/mnt/master-vol-0b744b77092fa434b/data/TslPsX61VjKTlDqpujM1aw\",\"wal-dir\":\"\",\"wal-dir-dedicated\":\"\",\"member-dir\":\"/rootfs/mnt/master-vol-0b744b77092fa434b/data/TslPsX61VjKTlDqpujM1aw/member\",\"force-new-cluster\":false,\"heartbeat-interval\":\"100ms\",\"election-timeout\":\"1s\",\"initial-election-tick-advance\":true,\"snapshot-count\":100000,\"snapshot-catchup-entries\":5000,\"initial-advertise-peer-urls\":[\"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2380\"],\"listen-peer-urls\":[\"https://0.0.0.0:2380\"],\"advertise-client-urls\":[\"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4001\"],\"listen-client-urls\":[\"https://0.0.0.0:4001\"],\"listen-metrics-urls\":[],\"cors\":[\"*\"],\"host-whitelist\":[\"*\"],\"initial-cluster\":\"\",\"initial-cluster-state\":\"existing\",\"initial-cluster-token\":\"\",\"quota-size-bytes\":2147483648,\"pre-vote\":false,\"initial-corrupt-check\":false,\"corrupt-check-time-interval\":\"0s\",\"auto-compaction-mode\":\"periodic\",\"auto-compaction-retention\":\"0s\",\"auto-compaction-interval\":\"0s\",\"discovery-url\":\"\",\"discovery-proxy\":\"\"}\n{\"level\":\"info\",\"ts\":\"2021-06-16T16:15:19.999Z\",\"caller\":\"etcdserver/backend.go:80\",\"msg\":\"opened backend db\",\"path\":\"/rootfs/mnt/master-vol-0b744b77092fa434b/data/TslPsX61VjKTlDqpujM1aw/member/snap/db\",\"took\":\"146.467µs\"}\n{\"level\":\"info\",\"ts\":\"2021-06-16T16:15:20.004Z\",\"caller\":\"etcdserver/raft.go:536\",\"msg\":\"restarting local member\",\"cluster-id\":\"8602d2862dd3bdfb\",\"local-member-id\":\"3c919e269ebfcd16\",\"commit-index\":4}\n{\"level\":\"info\",\"ts\":\"2021-06-16T16:15:20.005Z\",\"caller\":\"raft/raft.go:1530\",\"msg\":\"3c919e269ebfcd16 switched to 
configuration voters=()\"}\n{\"level\":\"info\",\"ts\":\"2021-06-16T16:15:20.005Z\",\"caller\":\"raft/raft.go:700\",\"msg\":\"3c919e269ebfcd16 became follower at term 2\"}\n{\"level\":\"info\",\"ts\":\"2021-06-16T16:15:20.005Z\",\"caller\":\"raft/raft.go:383\",\"msg\":\"newRaft 3c919e269ebfcd16 [peers: [], term: 2, commit: 4, applied: 0, lastindex: 4, lastterm: 2]\"}\n{\"level\":\"warn\",\"ts\":\"2021-06-16T16:15:20.006Z\",\"caller\":\"auth/store.go:1366\",\"msg\":\"simple token is not cryptographically signed\"}\n{\"level\":\"info\",\"ts\":\"2021-06-16T16:15:20.008Z\",\"caller\":\"etcdserver/quota.go:98\",\"msg\":\"enabled backend quota with default value\",\"quota-name\":\"v3-applier\",\"quota-size-bytes\":2147483648,\"quota-size\":\"2.1 GB\"}\n{\"level\":\"info\",\"ts\":\"2021-06-16T16:15:20.009Z\",\"caller\":\"etcdserver/server.go:803\",\"msg\":\"starting etcd server\",\"local-member-id\":\"3c919e269ebfcd16\",\"local-server-version\":\"3.4.13\",\"cluster-version\":\"to_be_decided\"}\n{\"level\":\"info\",\"ts\":\"2021-06-16T16:15:20.010Z\",\"caller\":\"etcdserver/server.go:691\",\"msg\":\"starting initial election tick advance\",\"election-ticks\":10}\n{\"level\":\"info\",\"ts\":\"2021-06-16T16:15:20.010Z\",\"caller\":\"raft/raft.go:1530\",\"msg\":\"3c919e269ebfcd16 switched to configuration voters=(4364443402608037142)\"}\n{\"level\":\"info\",\"ts\":\"2021-06-16T16:15:20.010Z\",\"caller\":\"membership/cluster.go:392\",\"msg\":\"added member\",\"cluster-id\":\"8602d2862dd3bdfb\",\"local-member-id\":\"3c919e269ebfcd16\",\"added-peer-id\":\"3c919e269ebfcd16\",\"added-peer-peer-urls\":[\"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2380\"]}\n{\"level\":\"info\",\"ts\":\"2021-06-16T16:15:20.015Z\",\"caller\":\"embed/etcd.go:711\",\"msg\":\"starting with client TLS\",\"tls-info\":\"cert = /rootfs/mnt/master-vol-0b744b77092fa434b/pki/TslPsX61VjKTlDqpujM1aw/clients/server.crt, key = 
/rootfs/mnt/master-vol-0b744b77092fa434b/pki/TslPsX61VjKTlDqpujM1aw/clients/server.key, trusted-ca = /rootfs/mnt/master-vol-0b744b77092fa434b/pki/TslPsX61VjKTlDqpujM1aw/clients/ca.crt, client-cert-auth = true, crl-file = \",\"cipher-suites\":[]}\n{\"level\":\"info\",\"ts\":\"2021-06-16T16:15:20.015Z\",\"caller\":\"embed/etcd.go:244\",\"msg\":\"now serving peer/client/metrics\",\"local-member-id\":\"3c919e269ebfcd16\",\"initial-advertise-peer-urls\":[\"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2380\"],\"listen-peer-urls\":[\"https://0.0.0.0:2380\"],\"advertise-client-urls\":[\"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4001\"],\"listen-client-urls\":[\"https://0.0.0.0:4001\"],\"listen-metrics-urls\":[]}\n{\"level\":\"info\",\"ts\":\"2021-06-16T16:15:20.015Z\",\"caller\":\"embed/etcd.go:579\",\"msg\":\"serving peer traffic\",\"address\":\"[::]:2380\"}\n{\"level\":\"info\",\"ts\":\"2021-06-16T16:15:20.016Z\",\"caller\":\"membership/cluster.go:558\",\"msg\":\"set initial cluster version\",\"cluster-id\":\"8602d2862dd3bdfb\",\"local-member-id\":\"3c919e269ebfcd16\",\"cluster-version\":\"3.4\"}\n{\"level\":\"info\",\"ts\":\"2021-06-16T16:15:20.016Z\",\"caller\":\"api/capability.go:76\",\"msg\":\"enabled capabilities for version\",\"cluster-version\":\"3.4\"}\n{\"level\":\"info\",\"ts\":\"2021-06-16T16:15:21.705Z\",\"caller\":\"raft/raft.go:923\",\"msg\":\"3c919e269ebfcd16 is starting a new election at term 2\"}\n{\"level\":\"info\",\"ts\":\"2021-06-16T16:15:21.705Z\",\"caller\":\"raft/raft.go:713\",\"msg\":\"3c919e269ebfcd16 became candidate at term 3\"}\n{\"level\":\"info\",\"ts\":\"2021-06-16T16:15:21.705Z\",\"caller\":\"raft/raft.go:824\",\"msg\":\"3c919e269ebfcd16 received MsgVoteResp from 3c919e269ebfcd16 at term 3\"}\n{\"level\":\"info\",\"ts\":\"2021-06-16T16:15:21.705Z\",\"caller\":\"raft/raft.go:765\",\"msg\":\"3c919e269ebfcd16 became leader at term 
3\"}\n{\"level\":\"info\",\"ts\":\"2021-06-16T16:15:21.705Z\",\"caller\":\"raft/node.go:325\",\"msg\":\"raft.node: 3c919e269ebfcd16 elected leader 3c919e269ebfcd16 at term 3\"}\n{\"level\":\"info\",\"ts\":\"2021-06-16T16:15:21.708Z\",\"caller\":\"etcdserver/server.go:2037\",\"msg\":\"published local member to cluster through raft\",\"local-member-id\":\"3c919e269ebfcd16\",\"local-member-attributes\":\"{Name:etcd-a ClientURLs:[https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4001]}\",\"request-path\":\"/0/members/3c919e269ebfcd16/attributes\",\"cluster-id\":\"8602d2862dd3bdfb\",\"publish-timeout\":\"7s\"}\n{\"level\":\"info\",\"ts\":\"2021-06-16T16:15:21.709Z\",\"caller\":\"embed/serve.go:191\",\"msg\":\"serving client traffic securely\",\"address\":\"[::]:4001\"}\nI0616 16:15:21.725184    5995 controller.go:302] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-a\",\"peerURLs\":[\"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2380\"],\"endpoints\":[\"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4001\"],\"ID\":\"4364443402608037142\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-a\" endpoints:\"172.20.37.218:3996\" }, info=cluster_name:\"etcd\" node_configuration:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2380\" client_urls:\"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:3994\" > etcd_state:<cluster:<cluster_token:\"TslPsX61VjKTlDqpujM1aw\" nodes:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2380\" client_urls:\"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:3994\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0616 16:15:21.725922    5995 
controller.go:303] etcd cluster members: map[4364443402608037142:{\"name\":\"etcd-a\",\"peerURLs\":[\"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2380\"],\"endpoints\":[\"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4001\"],\"ID\":\"4364443402608037142\"}]\nI0616 16:15:21.725952    5995 controller.go:641] sending member map to all peers: members:<name:\"etcd-a\" dns:\"etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io\" addresses:\"172.20.37.218\" > \nI0616 16:15:21.726325    5995 etcdserver.go:248] updating hosts: map[172.20.37.218:[etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]]\nI0616 16:15:21.726344    5995 hosts.go:84] hosts update: primary=map[172.20.37.218:[etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]], fallbacks=map[etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:[172.20.37.218 172.20.37.218]], final=map[172.20.37.218:[etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]]\nI0616 16:15:21.726387    5995 hosts.go:181] skipping update of unchanged /etc/hosts\nI0616 16:15:21.726468    5995 commands.go:38] not refreshing commands - TTL not hit\nI0616 16:15:21.726478    5995 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-9c20857a72-da63e.test-cncf-aws.k8s.io/backups/etcd/main/control/etcd-cluster-created\"\nI0616 16:15:21.875337    5995 controller.go:395] spec member_count:1 etcd_version:\"3.4.13\" \nI0616 16:15:21.875411    5995 controller.go:557] controller loop complete\nI0616 16:15:31.877092    5995 controller.go:189] starting controller iteration\nI0616 16:15:31.877128    5995 controller.go:266] Broadcasting leadership assertion with token \"Rz0jPWmmRsH5zlu55pJEHw\"\nI0616 16:15:31.877405    5995 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-a\" endpoints:\"172.20.37.218:3996\" > leadership_token:\"Rz0jPWmmRsH5zlu55pJEHw\" healthy:<id:\"etcd-a\" endpoints:\"172.20.37.218:3996\" > > \nI0616 16:15:31.877538    5995 controller.go:295] I am leader 
with token "Rz0jPWmmRsH5zlu55pJEHw"
I0616 16:15:31.878630    5995 controller.go:705] base client OK for etcd for client urls [https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4001]
I0616 16:15:31.912696    5995 controller.go:302] etcd cluster state: etcdClusterState
  members:
    {"name":"etcd-a","peerURLs":["https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2380"],"endpoints":["https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4001"],"ID":"4364443402608037142"}
  peers:
    etcdClusterPeerInfo{peer=peer{id:"etcd-a" endpoints:"172.20.37.218:3996" }, info=cluster_name:"etcd" node_configuration:<name:"etcd-a" peer_urls:"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2380" client_urls:"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4001" quarantined_client_urls:"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:3994" > etcd_state:<cluster:<cluster_token:"TslPsX61VjKTlDqpujM1aw" nodes:<name:"etcd-a" peer_urls:"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2380" client_urls:"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4001" quarantined_client_urls:"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:3994" tls_enabled:true > > etcd_version:"3.4.13" > }
I0616 16:15:31.913557    5995 controller.go:303] etcd cluster members: map[4364443402608037142:{"name":"etcd-a","peerURLs":["https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2380"],"endpoints":["https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4001"],"ID":"4364443402608037142"}]
I0616 16:15:31.913627    5995 controller.go:641] sending member map to all peers: members:<name:"etcd-a" dns:"etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io" addresses:"172.20.37.218" >
I0616 16:15:31.913852    5995 etcdserver.go:248] updating hosts: map[172.20.37.218:[etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]]
I0616 16:15:31.913882    5995 hosts.go:84] hosts update: primary=map[172.20.37.218:[etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]], fallbacks=map[etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:[172.20.37.218 172.20.37.218]], final=map[172.20.37.218:[etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]]
I0616 16:15:31.913957    5995 hosts.go:181] skipping update of unchanged /etc/hosts
I0616 16:15:31.914042    5995 commands.go:38] not refreshing commands - TTL not hit
I0616 16:15:31.914065    5995 s3fs.go:290] Reading file "s3://k8s-kops-prow/e2e-9c20857a72-da63e.test-cncf-aws.k8s.io/backups/etcd/main/control/etcd-cluster-created"
I0616 16:15:32.491927    5995 controller.go:395] spec member_count:1 etcd_version:"3.4.13"
I0616 16:15:32.492006    5995 controller.go:557] controller loop complete
I0616 16:15:42.493513    5995 controller.go:189] starting controller iteration
... skipping repeated controller iterations ...
I0616 16:16:03.966003    5995 volumes.go:86] AWS API Request: ec2/DescribeVolumes
I0616 16:16:04.105742    5995 volumes.go:86] AWS API Request: ec2/DescribeInstances
I0616 16:16:04.150973    5995 hosts.go:84] hosts update: primary=map[172.20.37.218:[etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]], fallbacks=map[etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:[172.20.37.218 172.20.37.218]], final=map[172.20.37.218:[etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]]
I0616 16:16:04.151188    5995 hosts.go:181] skipping update of unchanged /etc/hosts
... skipping repeated controller iterations ...
I0616 16:17:04.152123    5995 volumes.go:86] AWS API Request: ec2/DescribeVolumes
I0616 16:17:04.262525    5995 volumes.go:86] AWS API Request: ec2/DescribeInstances
I0616 16:17:04.310262    5995 hosts.go:84] hosts update: primary=map[172.20.37.218:[etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]], fallbacks=map[etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:[172.20.37.218 172.20.37.218]], final=map[172.20.37.218:[etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]]
I0616 16:17:04.310624    5995 hosts.go:181] skipping update of unchanged /etc/hosts
... skipping repeated controller iterations ...
I0616 16:17:39.195768    5995 controller.go:189] starting controller iteration
I0616 16:17:39.195801    5995 controller.go:266] Broadcasting leadership assertion with token "Rz0jPWmmRsH5zlu55pJEHw"
I0616 16:17:39.196236    5995 
leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-a\" endpoints:\"172.20.37.218:3996\" > leadership_token:\"Rz0jPWmmRsH5zlu55pJEHw\" healthy:<id:\"etcd-a\" endpoints:\"172.20.37.218:3996\" > > \nI0616 16:17:39.196493    5995 controller.go:295] I am leader with token \"Rz0jPWmmRsH5zlu55pJEHw\"\nI0616 16:17:39.197200    5995 controller.go:705] base client OK for etcd for client urls [https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4001]\nI0616 16:17:39.210940    5995 controller.go:302] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-a\",\"peerURLs\":[\"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2380\"],\"endpoints\":[\"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4001\"],\"ID\":\"4364443402608037142\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-a\" endpoints:\"172.20.37.218:3996\" }, info=cluster_name:\"etcd\" node_configuration:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2380\" client_urls:\"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:3994\" > etcd_state:<cluster:<cluster_token:\"TslPsX61VjKTlDqpujM1aw\" nodes:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2380\" client_urls:\"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:3994\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0616 16:17:39.211039    5995 controller.go:303] etcd cluster members: map[4364443402608037142:{\"name\":\"etcd-a\",\"peerURLs\":[\"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2380\"],\"endpoints\":[\"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4001\"],\"ID\":\"4364443402608037142\"}]\nI0616 16:17:39.211058 
   5995 controller.go:641] sending member map to all peers: members:<name:\"etcd-a\" dns:\"etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io\" addresses:\"172.20.37.218\" > \nI0616 16:17:39.212726    5995 etcdserver.go:248] updating hosts: map[172.20.37.218:[etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]]\nI0616 16:17:39.212747    5995 hosts.go:84] hosts update: primary=map[172.20.37.218:[etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]], fallbacks=map[etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:[172.20.37.218 172.20.37.218]], final=map[172.20.37.218:[etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]]\nI0616 16:17:39.212802    5995 hosts.go:181] skipping update of unchanged /etc/hosts\nI0616 16:17:39.212879    5995 commands.go:38] not refreshing commands - TTL not hit\nI0616 16:17:39.212891    5995 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-9c20857a72-da63e.test-cncf-aws.k8s.io/backups/etcd/main/control/etcd-cluster-created\"\nI0616 16:17:39.786001    5995 controller.go:395] spec member_count:1 etcd_version:\"3.4.13\" \nI0616 16:17:39.786073    5995 controller.go:557] controller loop complete\nI0616 16:17:49.787611    5995 controller.go:189] starting controller iteration\nI0616 16:17:49.787639    5995 controller.go:266] Broadcasting leadership assertion with token \"Rz0jPWmmRsH5zlu55pJEHw\"\nI0616 16:17:49.787863    5995 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-a\" endpoints:\"172.20.37.218:3996\" > leadership_token:\"Rz0jPWmmRsH5zlu55pJEHw\" healthy:<id:\"etcd-a\" endpoints:\"172.20.37.218:3996\" > > \nI0616 16:17:49.787971    5995 controller.go:295] I am leader with token \"Rz0jPWmmRsH5zlu55pJEHw\"\nI0616 16:17:49.788297    5995 controller.go:705] base client OK for etcd for client urls [https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4001]\nI0616 16:17:49.800333    5995 controller.go:302] etcd cluster state: etcdClusterState\n  members:\n    
{\"name\":\"etcd-a\",\"peerURLs\":[\"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2380\"],\"endpoints\":[\"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4001\"],\"ID\":\"4364443402608037142\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-a\" endpoints:\"172.20.37.218:3996\" }, info=cluster_name:\"etcd\" node_configuration:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2380\" client_urls:\"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:3994\" > etcd_state:<cluster:<cluster_token:\"TslPsX61VjKTlDqpujM1aw\" nodes:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2380\" client_urls:\"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:3994\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0616 16:17:49.800599    5995 controller.go:303] etcd cluster members: map[4364443402608037142:{\"name\":\"etcd-a\",\"peerURLs\":[\"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2380\"],\"endpoints\":[\"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4001\"],\"ID\":\"4364443402608037142\"}]\nI0616 16:17:49.800625    5995 controller.go:641] sending member map to all peers: members:<name:\"etcd-a\" dns:\"etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io\" addresses:\"172.20.37.218\" > \nI0616 16:17:49.800857    5995 etcdserver.go:248] updating hosts: map[172.20.37.218:[etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]]\nI0616 16:17:49.800903    5995 hosts.go:84] hosts update: primary=map[172.20.37.218:[etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]], fallbacks=map[etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:[172.20.37.218 
172.20.37.218]], final=map[172.20.37.218:[etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]]\nI0616 16:17:49.801007    5995 hosts.go:181] skipping update of unchanged /etc/hosts\nI0616 16:17:49.801135    5995 commands.go:38] not refreshing commands - TTL not hit\nI0616 16:17:49.801186    5995 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-9c20857a72-da63e.test-cncf-aws.k8s.io/backups/etcd/main/control/etcd-cluster-created\"\nI0616 16:17:50.376618    5995 controller.go:395] spec member_count:1 etcd_version:\"3.4.13\" \nI0616 16:17:50.376830    5995 controller.go:557] controller loop complete\nI0616 16:18:00.378016    5995 controller.go:189] starting controller iteration\nI0616 16:18:00.378046    5995 controller.go:266] Broadcasting leadership assertion with token \"Rz0jPWmmRsH5zlu55pJEHw\"\nI0616 16:18:00.378471    5995 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-a\" endpoints:\"172.20.37.218:3996\" > leadership_token:\"Rz0jPWmmRsH5zlu55pJEHw\" healthy:<id:\"etcd-a\" endpoints:\"172.20.37.218:3996\" > > \nI0616 16:18:00.378725    5995 controller.go:295] I am leader with token \"Rz0jPWmmRsH5zlu55pJEHw\"\nI0616 16:18:00.379340    5995 controller.go:705] base client OK for etcd for client urls [https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4001]\nI0616 16:18:00.400015    5995 controller.go:302] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-a\",\"peerURLs\":[\"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2380\"],\"endpoints\":[\"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4001\"],\"ID\":\"4364443402608037142\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-a\" endpoints:\"172.20.37.218:3996\" }, info=cluster_name:\"etcd\" node_configuration:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2380\" client_urls:\"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4001\" 
quarantined_client_urls:\"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:3994\" > etcd_state:<cluster:<cluster_token:\"TslPsX61VjKTlDqpujM1aw\" nodes:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2380\" client_urls:\"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:3994\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0616 16:18:00.400363    5995 controller.go:303] etcd cluster members: map[4364443402608037142:{\"name\":\"etcd-a\",\"peerURLs\":[\"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2380\"],\"endpoints\":[\"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4001\"],\"ID\":\"4364443402608037142\"}]\nI0616 16:18:00.400904    5995 controller.go:641] sending member map to all peers: members:<name:\"etcd-a\" dns:\"etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io\" addresses:\"172.20.37.218\" > \nI0616 16:18:00.401571    5995 etcdserver.go:248] updating hosts: map[172.20.37.218:[etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]]\nI0616 16:18:00.401589    5995 hosts.go:84] hosts update: primary=map[172.20.37.218:[etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]], fallbacks=map[etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:[172.20.37.218 172.20.37.218]], final=map[172.20.37.218:[etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]]\nI0616 16:18:00.401653    5995 hosts.go:181] skipping update of unchanged /etc/hosts\nI0616 16:18:00.401757    5995 commands.go:38] not refreshing commands - TTL not hit\nI0616 16:18:00.401772    5995 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-9c20857a72-da63e.test-cncf-aws.k8s.io/backups/etcd/main/control/etcd-cluster-created\"\nI0616 16:18:00.973176    5995 controller.go:395] spec member_count:1 etcd_version:\"3.4.13\" \nI0616 16:18:00.973248    5995 
controller.go:557] controller loop complete\nI0616 16:18:04.311013    5995 volumes.go:86] AWS API Request: ec2/DescribeVolumes\nI0616 16:18:04.425386    5995 volumes.go:86] AWS API Request: ec2/DescribeInstances\nI0616 16:18:04.471257    5995 hosts.go:84] hosts update: primary=map[172.20.37.218:[etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]], fallbacks=map[etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:[172.20.37.218 172.20.37.218]], final=map[172.20.37.218:[etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]]\nI0616 16:18:04.471348    5995 hosts.go:181] skipping update of unchanged /etc/hosts\nI0616 16:18:10.974887    5995 controller.go:189] starting controller iteration\nI0616 16:18:10.975087    5995 controller.go:266] Broadcasting leadership assertion with token \"Rz0jPWmmRsH5zlu55pJEHw\"\nI0616 16:18:10.975424    5995 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-a\" endpoints:\"172.20.37.218:3996\" > leadership_token:\"Rz0jPWmmRsH5zlu55pJEHw\" healthy:<id:\"etcd-a\" endpoints:\"172.20.37.218:3996\" > > \nI0616 16:18:10.975670    5995 controller.go:295] I am leader with token \"Rz0jPWmmRsH5zlu55pJEHw\"\nI0616 16:18:10.976147    5995 controller.go:705] base client OK for etcd for client urls [https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4001]\nI0616 16:18:10.987832    5995 controller.go:302] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-a\",\"peerURLs\":[\"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2380\"],\"endpoints\":[\"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4001\"],\"ID\":\"4364443402608037142\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-a\" endpoints:\"172.20.37.218:3996\" }, info=cluster_name:\"etcd\" node_configuration:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2380\" 
client_urls:\"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:3994\" > etcd_state:<cluster:<cluster_token:\"TslPsX61VjKTlDqpujM1aw\" nodes:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2380\" client_urls:\"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:3994\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0616 16:18:10.987920    5995 controller.go:303] etcd cluster members: map[4364443402608037142:{\"name\":\"etcd-a\",\"peerURLs\":[\"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2380\"],\"endpoints\":[\"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4001\"],\"ID\":\"4364443402608037142\"}]\nI0616 16:18:10.987940    5995 controller.go:641] sending member map to all peers: members:<name:\"etcd-a\" dns:\"etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io\" addresses:\"172.20.37.218\" > \nI0616 16:18:10.988306    5995 etcdserver.go:248] updating hosts: map[172.20.37.218:[etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]]\nI0616 16:18:10.988325    5995 hosts.go:84] hosts update: primary=map[172.20.37.218:[etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]], fallbacks=map[etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:[172.20.37.218 172.20.37.218]], final=map[172.20.37.218:[etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]]\nI0616 16:18:10.988459    5995 hosts.go:181] skipping update of unchanged /etc/hosts\nI0616 16:18:10.988572    5995 commands.go:38] not refreshing commands - TTL not hit\nI0616 16:18:10.988589    5995 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-9c20857a72-da63e.test-cncf-aws.k8s.io/backups/etcd/main/control/etcd-cluster-created\"\nI0616 16:18:11.570459    5995 controller.go:395] 
spec member_count:1 etcd_version:\"3.4.13\" \nI0616 16:18:11.570657    5995 controller.go:557] controller loop complete\nI0616 16:18:21.571972    5995 controller.go:189] starting controller iteration\nI0616 16:18:21.572002    5995 controller.go:266] Broadcasting leadership assertion with token \"Rz0jPWmmRsH5zlu55pJEHw\"\nI0616 16:18:21.572292    5995 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-a\" endpoints:\"172.20.37.218:3996\" > leadership_token:\"Rz0jPWmmRsH5zlu55pJEHw\" healthy:<id:\"etcd-a\" endpoints:\"172.20.37.218:3996\" > > \nI0616 16:18:21.572479    5995 controller.go:295] I am leader with token \"Rz0jPWmmRsH5zlu55pJEHw\"\nI0616 16:18:21.573223    5995 controller.go:705] base client OK for etcd for client urls [https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4001]\nI0616 16:18:21.587556    5995 controller.go:302] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-a\",\"peerURLs\":[\"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2380\"],\"endpoints\":[\"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4001\"],\"ID\":\"4364443402608037142\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-a\" endpoints:\"172.20.37.218:3996\" }, info=cluster_name:\"etcd\" node_configuration:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2380\" client_urls:\"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:3994\" > etcd_state:<cluster:<cluster_token:\"TslPsX61VjKTlDqpujM1aw\" nodes:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2380\" client_urls:\"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:3994\" tls_enabled:true > > etcd_version:\"3.4.13\" > 
}\nI0616 16:18:21.587641    5995 controller.go:303] etcd cluster members: map[4364443402608037142:{\"name\":\"etcd-a\",\"peerURLs\":[\"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2380\"],\"endpoints\":[\"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4001\"],\"ID\":\"4364443402608037142\"}]\nI0616 16:18:21.587831    5995 controller.go:641] sending member map to all peers: members:<name:\"etcd-a\" dns:\"etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io\" addresses:\"172.20.37.218\" > \nI0616 16:18:21.588067    5995 etcdserver.go:248] updating hosts: map[172.20.37.218:[etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]]\nI0616 16:18:21.588084    5995 hosts.go:84] hosts update: primary=map[172.20.37.218:[etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]], fallbacks=map[etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:[172.20.37.218 172.20.37.218]], final=map[172.20.37.218:[etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]]\nI0616 16:18:21.588178    5995 hosts.go:181] skipping update of unchanged /etc/hosts\nI0616 16:18:21.588294    5995 commands.go:38] not refreshing commands - TTL not hit\nI0616 16:18:21.588355    5995 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-9c20857a72-da63e.test-cncf-aws.k8s.io/backups/etcd/main/control/etcd-cluster-created\"\nI0616 16:18:22.163126    5995 controller.go:395] spec member_count:1 etcd_version:\"3.4.13\" \nI0616 16:18:22.163198    5995 controller.go:557] controller loop complete\nI0616 16:18:32.164961    5995 controller.go:189] starting controller iteration\nI0616 16:18:32.165112    5995 controller.go:266] Broadcasting leadership assertion with token \"Rz0jPWmmRsH5zlu55pJEHw\"\nI0616 16:18:32.165388    5995 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-a\" endpoints:\"172.20.37.218:3996\" > leadership_token:\"Rz0jPWmmRsH5zlu55pJEHw\" healthy:<id:\"etcd-a\" endpoints:\"172.20.37.218:3996\" > > \nI0616 16:18:32.165605    5995 
controller.go:295] I am leader with token \"Rz0jPWmmRsH5zlu55pJEHw\"\nI0616 16:18:32.166099    5995 controller.go:705] base client OK for etcd for client urls [https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4001]\nI0616 16:18:32.177789    5995 controller.go:302] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-a\",\"peerURLs\":[\"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2380\"],\"endpoints\":[\"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4001\"],\"ID\":\"4364443402608037142\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-a\" endpoints:\"172.20.37.218:3996\" }, info=cluster_name:\"etcd\" node_configuration:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2380\" client_urls:\"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:3994\" > etcd_state:<cluster:<cluster_token:\"TslPsX61VjKTlDqpujM1aw\" nodes:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2380\" client_urls:\"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:3994\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0616 16:18:32.177905    5995 controller.go:303] etcd cluster members: map[4364443402608037142:{\"name\":\"etcd-a\",\"peerURLs\":[\"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2380\"],\"endpoints\":[\"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4001\"],\"ID\":\"4364443402608037142\"}]\nI0616 16:18:32.177978    5995 controller.go:641] sending member map to all peers: members:<name:\"etcd-a\" dns:\"etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io\" addresses:\"172.20.37.218\" > \nI0616 16:18:32.178266    5995 etcdserver.go:248] 
updating hosts: map[172.20.37.218:[etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]]\nI0616 16:18:32.178287    5995 hosts.go:84] hosts update: primary=map[172.20.37.218:[etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]], fallbacks=map[etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:[172.20.37.218 172.20.37.218]], final=map[172.20.37.218:[etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]]\nI0616 16:18:32.178380    5995 hosts.go:181] skipping update of unchanged /etc/hosts\nI0616 16:18:32.178536    5995 commands.go:38] not refreshing commands - TTL not hit\nI0616 16:18:32.178551    5995 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-9c20857a72-da63e.test-cncf-aws.k8s.io/backups/etcd/main/control/etcd-cluster-created\"\nI0616 16:18:32.752099    5995 controller.go:395] spec member_count:1 etcd_version:\"3.4.13\" \nI0616 16:18:32.752168    5995 controller.go:557] controller loop complete\nI0616 16:18:42.753498    5995 controller.go:189] starting controller iteration\nI0616 16:18:42.753528    5995 controller.go:266] Broadcasting leadership assertion with token \"Rz0jPWmmRsH5zlu55pJEHw\"\nI0616 16:18:42.753956    5995 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-a\" endpoints:\"172.20.37.218:3996\" > leadership_token:\"Rz0jPWmmRsH5zlu55pJEHw\" healthy:<id:\"etcd-a\" endpoints:\"172.20.37.218:3996\" > > \nI0616 16:18:42.754087    5995 controller.go:295] I am leader with token \"Rz0jPWmmRsH5zlu55pJEHw\"\nI0616 16:18:42.754794    5995 controller.go:705] base client OK for etcd for client urls [https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4001]\nI0616 16:18:42.769639    5995 controller.go:302] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-a\",\"peerURLs\":[\"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2380\"],\"endpoints\":[\"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4001\"],\"ID\":\"4364443402608037142\"}\n  peers:\n    
etcdClusterPeerInfo{peer=peer{id:\"etcd-a\" endpoints:\"172.20.37.218:3996\" }, info=cluster_name:\"etcd\" node_configuration:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2380\" client_urls:\"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:3994\" > etcd_state:<cluster:<cluster_token:\"TslPsX61VjKTlDqpujM1aw\" nodes:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2380\" client_urls:\"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:3994\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0616 16:18:42.770022    5995 controller.go:303] etcd cluster members: map[4364443402608037142:{\"name\":\"etcd-a\",\"peerURLs\":[\"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2380\"],\"endpoints\":[\"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4001\"],\"ID\":\"4364443402608037142\"}]\nI0616 16:18:42.770055    5995 controller.go:641] sending member map to all peers: members:<name:\"etcd-a\" dns:\"etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io\" addresses:\"172.20.37.218\" > \nI0616 16:18:42.770366    5995 etcdserver.go:248] updating hosts: map[172.20.37.218:[etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]]\nI0616 16:18:42.770416    5995 hosts.go:84] hosts update: primary=map[172.20.37.218:[etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]], fallbacks=map[etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:[172.20.37.218 172.20.37.218]], final=map[172.20.37.218:[etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]]\nI0616 16:18:42.770535    5995 hosts.go:181] skipping update of unchanged /etc/hosts\nI0616 16:18:42.770673    5995 commands.go:38] not refreshing commands - TTL 
not hit\nI0616 16:18:42.770715    5995 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-9c20857a72-da63e.test-cncf-aws.k8s.io/backups/etcd/main/control/etcd-cluster-created\"\nI0616 16:18:43.352700    5995 controller.go:395] spec member_count:1 etcd_version:\"3.4.13\" \nI0616 16:18:43.352771    5995 controller.go:557] controller loop complete\nI0616 16:18:53.354614    5995 controller.go:189] starting controller iteration\nI0616 16:18:53.354644    5995 controller.go:266] Broadcasting leadership assertion with token \"Rz0jPWmmRsH5zlu55pJEHw\"\nI0616 16:18:53.355029    5995 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-a\" endpoints:\"172.20.37.218:3996\" > leadership_token:\"Rz0jPWmmRsH5zlu55pJEHw\" healthy:<id:\"etcd-a\" endpoints:\"172.20.37.218:3996\" > > \nI0616 16:18:53.355254    5995 controller.go:295] I am leader with token \"Rz0jPWmmRsH5zlu55pJEHw\"\nI0616 16:18:53.355724    5995 controller.go:705] base client OK for etcd for client urls [https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4001]\nI0616 16:18:53.368152    5995 controller.go:302] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-a\",\"peerURLs\":[\"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2380\"],\"endpoints\":[\"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4001\"],\"ID\":\"4364443402608037142\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-a\" endpoints:\"172.20.37.218:3996\" }, info=cluster_name:\"etcd\" node_configuration:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2380\" client_urls:\"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:3994\" > etcd_state:<cluster:<cluster_token:\"TslPsX61VjKTlDqpujM1aw\" nodes:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2380\" 
client_urls:\"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:3994\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0616 16:18:53.368279    5995 controller.go:303] etcd cluster members: map[4364443402608037142:{\"name\":\"etcd-a\",\"peerURLs\":[\"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2380\"],\"endpoints\":[\"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4001\"],\"ID\":\"4364443402608037142\"}]\nI0616 16:18:53.368359    5995 controller.go:641] sending member map to all peers: members:<name:\"etcd-a\" dns:\"etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io\" addresses:\"172.20.37.218\" > \nI0616 16:18:53.368607    5995 etcdserver.go:248] updating hosts: map[172.20.37.218:[etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]]\nI0616 16:18:53.368628    5995 hosts.go:84] hosts update: primary=map[172.20.37.218:[etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]], fallbacks=map[etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:[172.20.37.218 172.20.37.218]], final=map[172.20.37.218:[etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]]\nI0616 16:18:53.368708    5995 hosts.go:181] skipping update of unchanged /etc/hosts\nI0616 16:18:53.368849    5995 commands.go:38] not refreshing commands - TTL not hit\nI0616 16:18:53.368866    5995 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-9c20857a72-da63e.test-cncf-aws.k8s.io/backups/etcd/main/control/etcd-cluster-created\"\nI0616 16:18:53.953180    5995 controller.go:395] spec member_count:1 etcd_version:\"3.4.13\" \nI0616 16:18:53.953329    5995 controller.go:557] controller loop complete\nI0616 16:19:03.954509    5995 controller.go:189] starting controller iteration\nI0616 16:19:03.954693    5995 controller.go:266] Broadcasting leadership assertion with token \"Rz0jPWmmRsH5zlu55pJEHw\"\nI0616 16:19:03.955024    5995 
leadership.go:37] Got LeaderNotification view:<leader:<id:"etcd-a" endpoints:"172.20.37.218:3996" > leadership_token:"Rz0jPWmmRsH5zlu55pJEHw" healthy:<id:"etcd-a" endpoints:"172.20.37.218:3996" > >
I0616 16:19:03.955150    5995 controller.go:295] I am leader with token "Rz0jPWmmRsH5zlu55pJEHw"
I0616 16:19:03.955870    5995 controller.go:705] base client OK for etcd for client urls [https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4001]
I0616 16:19:03.970270    5995 controller.go:302] etcd cluster state: etcdClusterState
  members:
    {"name":"etcd-a","peerURLs":["https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2380"],"endpoints":["https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4001"],"ID":"4364443402608037142"}
  peers:
    etcdClusterPeerInfo{peer=peer{id:"etcd-a" endpoints:"172.20.37.218:3996" }, info=cluster_name:"etcd" node_configuration:<name:"etcd-a" peer_urls:"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2380" client_urls:"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4001" quarantined_client_urls:"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:3994" > etcd_state:<cluster:<cluster_token:"TslPsX61VjKTlDqpujM1aw" nodes:<name:"etcd-a" peer_urls:"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2380" client_urls:"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4001" quarantined_client_urls:"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:3994" tls_enabled:true > > etcd_version:"3.4.13" > }
I0616 16:19:03.970361    5995 controller.go:303] etcd cluster members: map[4364443402608037142:{"name":"etcd-a","peerURLs":["https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2380"],"endpoints":["https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4001"],"ID":"4364443402608037142"}]
I0616 16:19:03.970406    5995 controller.go:641] sending member map to all peers: members:<name:"etcd-a" dns:"etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io" addresses:"172.20.37.218" >
I0616 16:19:03.971419    5995 etcdserver.go:248] updating hosts: map[172.20.37.218:[etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]]
I0616 16:19:03.971583    5995 hosts.go:84] hosts update: primary=map[172.20.37.218:[etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]], fallbacks=map[etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:[172.20.37.218 172.20.37.218]], final=map[172.20.37.218:[etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]]
I0616 16:19:03.971769    5995 hosts.go:181] skipping update of unchanged /etc/hosts
I0616 16:19:03.972342    5995 commands.go:38] not refreshing commands - TTL not hit
I0616 16:19:03.972484    5995 s3fs.go:290] Reading file "s3://k8s-kops-prow/e2e-9c20857a72-da63e.test-cncf-aws.k8s.io/backups/etcd/main/control/etcd-cluster-created"
I0616 16:19:04.472495    5995 volumes.go:86] AWS API Request: ec2/DescribeVolumes
I0616 16:19:04.591147    5995 controller.go:395] spec member_count:1 etcd_version:"3.4.13"
I0616 16:19:04.591224    5995 controller.go:557] controller loop complete
I0616 16:19:04.734429    5995 volumes.go:86] AWS API Request: ec2/DescribeInstances
I0616 16:19:04.801196    5995 hosts.go:84] hosts update: primary=map[172.20.37.218:[etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]], fallbacks=map[etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:[172.20.37.218 172.20.37.218]], final=map[172.20.37.218:[etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]]
I0616 16:19:04.801422    5995 hosts.go:181] skipping update of unchanged /etc/hosts
I0616 16:19:14.592525    5995 controller.go:189] starting controller iteration
I0616 16:19:14.592555    5995 controller.go:266] Broadcasting leadership assertion with token "Rz0jPWmmRsH5zlu55pJEHw"
I0616 16:19:14.592967    5995 leadership.go:37] Got LeaderNotification view:<leader:<id:"etcd-a" endpoints:"172.20.37.218:3996" > leadership_token:"Rz0jPWmmRsH5zlu55pJEHw" healthy:<id:"etcd-a" endpoints:"172.20.37.218:3996" > >
I0616 16:19:14.593133    5995 controller.go:295] I am leader with token "Rz0jPWmmRsH5zlu55pJEHw"
I0616 16:19:14.593604    5995 controller.go:705] base client OK for etcd for client urls [https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4001]
I0616 16:19:14.609311    5995 controller.go:302] etcd cluster state: etcdClusterState
  members:
    {"name":"etcd-a","peerURLs":["https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2380"],"endpoints":["https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4001"],"ID":"4364443402608037142"}
  peers:
    etcdClusterPeerInfo{peer=peer{id:"etcd-a" endpoints:"172.20.37.218:3996" }, info=cluster_name:"etcd" node_configuration:<name:"etcd-a" peer_urls:"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2380" client_urls:"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4001" quarantined_client_urls:"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:3994" > etcd_state:<cluster:<cluster_token:"TslPsX61VjKTlDqpujM1aw" nodes:<name:"etcd-a" peer_urls:"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2380" client_urls:"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4001" quarantined_client_urls:"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:3994" tls_enabled:true > > etcd_version:"3.4.13" > }
I0616 16:19:14.609452    5995 controller.go:303] etcd cluster members: map[4364443402608037142:{"name":"etcd-a","peerURLs":["https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2380"],"endpoints":["https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4001"],"ID":"4364443402608037142"}]
I0616 16:19:14.609680    5995 controller.go:641] sending member map to all peers: members:<name:"etcd-a" dns:"etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io" addresses:"172.20.37.218" >
I0616 16:19:14.609989    5995 etcdserver.go:248] updating hosts: map[172.20.37.218:[etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]]
I0616 16:19:14.610006    5995 hosts.go:84] hosts update: primary=map[172.20.37.218:[etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]], fallbacks=map[etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:[172.20.37.218 172.20.37.218]], final=map[172.20.37.218:[etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]]
I0616 16:19:14.610173    5995 hosts.go:181] skipping update of unchanged /etc/hosts
I0616 16:19:14.610340    5995 commands.go:38] not refreshing commands - TTL not hit
I0616 16:19:14.610415    5995 s3fs.go:290] Reading file "s3://k8s-kops-prow/e2e-9c20857a72-da63e.test-cncf-aws.k8s.io/backups/etcd/main/control/etcd-cluster-created"
I0616 16:19:15.185805    5995 controller.go:395] spec member_count:1 etcd_version:"3.4.13"
I0616 16:19:15.185882    5995 controller.go:557] controller loop complete
I0616 16:19:25.187978    5995 controller.go:189] starting controller iteration
I0616 16:19:25.188007    5995 controller.go:266] Broadcasting leadership assertion with token "Rz0jPWmmRsH5zlu55pJEHw"
I0616 16:19:25.188270    5995 leadership.go:37] Got LeaderNotification view:<leader:<id:"etcd-a" endpoints:"172.20.37.218:3996" > leadership_token:"Rz0jPWmmRsH5zlu55pJEHw" healthy:<id:"etcd-a" endpoints:"172.20.37.218:3996" > >
I0616 16:19:25.188427    5995 controller.go:295] I am leader with token "Rz0jPWmmRsH5zlu55pJEHw"
I0616 16:19:25.189012    5995 controller.go:705] base client OK for etcd for client urls [https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4001]
I0616 16:19:25.205093    5995 controller.go:302] etcd cluster state: etcdClusterState
  members:
    {"name":"etcd-a","peerURLs":["https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2380"],"endpoints":["https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4001"],"ID":"4364443402608037142"}
  peers:
    etcdClusterPeerInfo{peer=peer{id:"etcd-a" endpoints:"172.20.37.218:3996" }, info=cluster_name:"etcd" node_configuration:<name:"etcd-a" peer_urls:"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2380" client_urls:"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4001" quarantined_client_urls:"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:3994" > etcd_state:<cluster:<cluster_token:"TslPsX61VjKTlDqpujM1aw" nodes:<name:"etcd-a" peer_urls:"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2380" client_urls:"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4001" quarantined_client_urls:"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:3994" tls_enabled:true > > etcd_version:"3.4.13" > }
I0616 16:19:25.205180    5995 controller.go:303] etcd cluster members: map[4364443402608037142:{"name":"etcd-a","peerURLs":["https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2380"],"endpoints":["https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4001"],"ID":"4364443402608037142"}]
I0616 16:19:25.205197    5995 controller.go:641] sending member map to all peers: members:<name:"etcd-a" dns:"etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io" addresses:"172.20.37.218" >
I0616 16:19:25.205395    5995 etcdserver.go:248] updating hosts: map[172.20.37.218:[etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]]
I0616 16:19:25.205504    5995 hosts.go:84] hosts update: primary=map[172.20.37.218:[etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]], fallbacks=map[etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:[172.20.37.218 172.20.37.218]], final=map[172.20.37.218:[etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]]
I0616 16:19:25.205567    5995 hosts.go:181] skipping update of unchanged /etc/hosts
I0616 16:19:25.205693    5995 commands.go:38] not refreshing commands - TTL not hit
I0616 16:19:25.205803    5995 s3fs.go:290] Reading file "s3://k8s-kops-prow/e2e-9c20857a72-da63e.test-cncf-aws.k8s.io/backups/etcd/main/control/etcd-cluster-created"
I0616 16:19:25.799865    5995 controller.go:395] spec member_count:1 etcd_version:"3.4.13"
I0616 16:19:25.800022    5995 controller.go:557] controller loop complete
I0616 16:19:35.801483    5995 controller.go:189] starting controller iteration
I0616 16:19:35.801514    5995 controller.go:266] Broadcasting leadership assertion with token "Rz0jPWmmRsH5zlu55pJEHw"
I0616 16:19:35.801775    5995 leadership.go:37] Got LeaderNotification view:<leader:<id:"etcd-a" endpoints:"172.20.37.218:3996" > leadership_token:"Rz0jPWmmRsH5zlu55pJEHw" healthy:<id:"etcd-a" endpoints:"172.20.37.218:3996" > >
I0616 16:19:35.801925    5995 controller.go:295] I am leader with token "Rz0jPWmmRsH5zlu55pJEHw"
I0616 16:19:35.802404    5995 controller.go:705] base client OK for etcd for client urls [https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4001]
I0616 16:19:35.814351    5995 controller.go:302] etcd cluster state: etcdClusterState
  members:
    {"name":"etcd-a","peerURLs":["https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2380"],"endpoints":["https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4001"],"ID":"4364443402608037142"}
  peers:
    etcdClusterPeerInfo{peer=peer{id:"etcd-a" endpoints:"172.20.37.218:3996" }, info=cluster_name:"etcd" node_configuration:<name:"etcd-a" peer_urls:"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2380" client_urls:"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4001" quarantined_client_urls:"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:3994" > etcd_state:<cluster:<cluster_token:"TslPsX61VjKTlDqpujM1aw" nodes:<name:"etcd-a" peer_urls:"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2380" client_urls:"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4001" quarantined_client_urls:"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:3994" tls_enabled:true > > etcd_version:"3.4.13" > }
I0616 16:19:35.814444    5995 controller.go:303] etcd cluster members: map[4364443402608037142:{"name":"etcd-a","peerURLs":["https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2380"],"endpoints":["https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4001"],"ID":"4364443402608037142"}]
I0616 16:19:35.814670    5995 controller.go:641] sending member map to all peers: members:<name:"etcd-a" dns:"etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io" addresses:"172.20.37.218" >
I0616 16:19:35.814949    5995 etcdserver.go:248] updating hosts: map[172.20.37.218:[etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]]
I0616 16:19:35.814966    5995 hosts.go:84] hosts update: primary=map[172.20.37.218:[etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]], fallbacks=map[etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:[172.20.37.218 172.20.37.218]], final=map[172.20.37.218:[etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]]
I0616 16:19:35.815085    5995 hosts.go:181] skipping update of unchanged /etc/hosts
I0616 16:19:35.815201    5995 commands.go:38] not refreshing commands - TTL not hit
I0616 16:19:35.815285    5995 s3fs.go:290] Reading file "s3://k8s-kops-prow/e2e-9c20857a72-da63e.test-cncf-aws.k8s.io/backups/etcd/main/control/etcd-cluster-created"
I0616 16:19:36.384025    5995 controller.go:395] spec member_count:1 etcd_version:"3.4.13"
I0616 16:19:36.384096    5995 controller.go:557] controller loop complete
I0616 16:19:46.385722    5995 controller.go:189] starting controller iteration
I0616 16:19:46.385884    5995 controller.go:266] Broadcasting leadership assertion with token "Rz0jPWmmRsH5zlu55pJEHw"
I0616 16:19:46.386167    5995 leadership.go:37] Got LeaderNotification view:<leader:<id:"etcd-a" endpoints:"172.20.37.218:3996" > leadership_token:"Rz0jPWmmRsH5zlu55pJEHw" healthy:<id:"etcd-a" endpoints:"172.20.37.218:3996" > >
I0616 16:19:46.386311    5995 controller.go:295] I am leader with token "Rz0jPWmmRsH5zlu55pJEHw"
I0616 16:19:46.387442    5995 controller.go:705] base client OK for etcd for client urls [https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4001]
I0616 16:19:46.403468    5995 controller.go:302] etcd cluster state: etcdClusterState
  members:
    {"name":"etcd-a","peerURLs":["https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2380"],"endpoints":["https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4001"],"ID":"4364443402608037142"}
  peers:
    etcdClusterPeerInfo{peer=peer{id:"etcd-a" endpoints:"172.20.37.218:3996" }, info=cluster_name:"etcd" node_configuration:<name:"etcd-a" peer_urls:"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2380" client_urls:"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4001" quarantined_client_urls:"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:3994" > etcd_state:<cluster:<cluster_token:"TslPsX61VjKTlDqpujM1aw" nodes:<name:"etcd-a" peer_urls:"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2380" client_urls:"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4001" quarantined_client_urls:"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:3994" tls_enabled:true > > etcd_version:"3.4.13" > }
I0616 16:19:46.403571    5995 controller.go:303] etcd cluster members: map[4364443402608037142:{"name":"etcd-a","peerURLs":["https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2380"],"endpoints":["https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4001"],"ID":"4364443402608037142"}]
I0616 16:19:46.403588    5995 controller.go:641] sending member map to all peers: members:<name:"etcd-a" dns:"etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io" addresses:"172.20.37.218" >
I0616 16:19:46.403892    5995 etcdserver.go:248] updating hosts: map[172.20.37.218:[etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]]
I0616 16:19:46.403943    5995 hosts.go:84] hosts update: primary=map[172.20.37.218:[etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]], fallbacks=map[etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:[172.20.37.218 172.20.37.218]], final=map[172.20.37.218:[etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]]
I0616 16:19:46.404033    5995 hosts.go:181] skipping update of unchanged /etc/hosts
I0616 16:19:46.404162    5995 commands.go:38] not refreshing commands - TTL not hit
I0616 16:19:46.404178    5995 s3fs.go:290] Reading file "s3://k8s-kops-prow/e2e-9c20857a72-da63e.test-cncf-aws.k8s.io/backups/etcd/main/control/etcd-cluster-created"
I0616 16:19:46.979884    5995 controller.go:395] spec member_count:1 etcd_version:"3.4.13"
I0616 16:19:46.979958    5995 controller.go:557] controller loop complete
I0616 16:19:56.981504    5995 controller.go:189] starting controller iteration
I0616 16:19:56.981534    5995 controller.go:266] Broadcasting leadership assertion with token "Rz0jPWmmRsH5zlu55pJEHw"
I0616 16:19:56.981910    5995 leadership.go:37] Got LeaderNotification view:<leader:<id:"etcd-a" endpoints:"172.20.37.218:3996" > leadership_token:"Rz0jPWmmRsH5zlu55pJEHw" healthy:<id:"etcd-a" endpoints:"172.20.37.218:3996" > >
I0616 16:19:56.982068    5995 controller.go:295] I am leader with token "Rz0jPWmmRsH5zlu55pJEHw"
I0616 16:19:56.982645    5995 controller.go:705] base client OK for etcd for client urls [https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4001]
I0616 16:19:56.996080    5995 controller.go:302] etcd cluster state: etcdClusterState
  members:
    {"name":"etcd-a","peerURLs":["https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2380"],"endpoints":["https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4001"],"ID":"4364443402608037142"}
  peers:
    etcdClusterPeerInfo{peer=peer{id:"etcd-a" endpoints:"172.20.37.218:3996" }, info=cluster_name:"etcd" node_configuration:<name:"etcd-a" peer_urls:"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2380" client_urls:"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4001" quarantined_client_urls:"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:3994" > etcd_state:<cluster:<cluster_token:"TslPsX61VjKTlDqpujM1aw" nodes:<name:"etcd-a" peer_urls:"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2380" client_urls:"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4001" quarantined_client_urls:"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:3994" tls_enabled:true > > etcd_version:"3.4.13" > }
I0616 16:19:56.996235    5995 controller.go:303] etcd cluster members: map[4364443402608037142:{"name":"etcd-a","peerURLs":["https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2380"],"endpoints":["https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4001"],"ID":"4364443402608037142"}]
I0616 16:19:56.996346    5995 controller.go:641] sending member map to all peers: members:<name:"etcd-a" dns:"etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io" addresses:"172.20.37.218" >
I0616 16:19:56.996621    5995 etcdserver.go:248] updating hosts: map[172.20.37.218:[etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]]
I0616 16:19:56.996643    5995 hosts.go:84] hosts update: primary=map[172.20.37.218:[etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]], fallbacks=map[etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:[172.20.37.218 172.20.37.218]], final=map[172.20.37.218:[etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]]
I0616 16:19:56.996798    5995 hosts.go:181] skipping update of unchanged /etc/hosts
I0616 16:19:56.996957    5995 commands.go:38] not refreshing commands - TTL not hit
I0616 16:19:56.996976    5995 s3fs.go:290] Reading file "s3://k8s-kops-prow/e2e-9c20857a72-da63e.test-cncf-aws.k8s.io/backups/etcd/main/control/etcd-cluster-created"
I0616 16:19:57.574814    5995 controller.go:395] spec member_count:1 etcd_version:"3.4.13"
I0616 16:19:57.574885    5995 controller.go:557] controller loop complete
I0616 16:20:04.802192    5995 volumes.go:86] AWS API Request: ec2/DescribeVolumes
I0616 16:20:04.931412    5995 volumes.go:86] AWS API Request: ec2/DescribeInstances
I0616 16:20:04.977631    5995 hosts.go:84] hosts update: primary=map[172.20.37.218:[etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]], fallbacks=map[etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:[172.20.37.218 172.20.37.218]], final=map[172.20.37.218:[etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]]
I0616 16:20:04.977916    5995 hosts.go:181] skipping update of unchanged /etc/hosts
I0616 16:20:07.576697    5995 controller.go:189] starting controller iteration
I0616 16:20:07.576777    5995 controller.go:266] Broadcasting leadership assertion with token "Rz0jPWmmRsH5zlu55pJEHw"
I0616 16:20:07.577078    5995 leadership.go:37] Got LeaderNotification view:<leader:<id:"etcd-a" endpoints:"172.20.37.218:3996" > leadership_token:"Rz0jPWmmRsH5zlu55pJEHw" healthy:<id:"etcd-a" endpoints:"172.20.37.218:3996" > >
I0616 16:20:07.577346    5995 controller.go:295] I am leader with token "Rz0jPWmmRsH5zlu55pJEHw"
I0616 16:20:07.577825    5995 controller.go:705] base client OK for etcd for client urls [https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4001]
I0616 16:20:07.590033    5995 controller.go:302] etcd cluster state: etcdClusterState
  members:
    {"name":"etcd-a","peerURLs":["https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2380"],"endpoints":["https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4001"],"ID":"4364443402608037142"}
  peers:
    etcdClusterPeerInfo{peer=peer{id:"etcd-a" endpoints:"172.20.37.218:3996" }, info=cluster_name:"etcd" node_configuration:<name:"etcd-a" peer_urls:"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2380" client_urls:"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4001" quarantined_client_urls:"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:3994" > etcd_state:<cluster:<cluster_token:"TslPsX61VjKTlDqpujM1aw" nodes:<name:"etcd-a" peer_urls:"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2380" client_urls:"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4001" quarantined_client_urls:"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:3994" tls_enabled:true > > etcd_version:"3.4.13" > }
I0616 16:20:07.590132    5995 controller.go:303] etcd cluster members: map[4364443402608037142:{"name":"etcd-a","peerURLs":["https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2380"],"endpoints":["https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4001"],"ID":"4364443402608037142"}]
I0616 16:20:07.590197    5995 controller.go:641] sending member map to all peers: members:<name:"etcd-a" dns:"etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io" addresses:"172.20.37.218" >
I0616 16:20:07.590508    5995 etcdserver.go:248] updating hosts: map[172.20.37.218:[etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]]
I0616 16:20:07.590526    5995 hosts.go:84] hosts update: primary=map[172.20.37.218:[etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]], fallbacks=map[etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:[172.20.37.218 172.20.37.218]], final=map[172.20.37.218:[etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]]
I0616 16:20:07.590607    5995 hosts.go:181] skipping update of unchanged /etc/hosts
I0616 16:20:07.590751    5995 commands.go:38] not refreshing commands - TTL not hit
I0616 16:20:07.590768    5995 s3fs.go:290] Reading file "s3://k8s-kops-prow/e2e-9c20857a72-da63e.test-cncf-aws.k8s.io/backups/etcd/main/control/etcd-cluster-created"
I0616 16:20:08.154935    5995 controller.go:395] spec member_count:1 etcd_version:"3.4.13"
I0616 16:20:08.155152    5995 controller.go:557] controller loop complete
I0616 16:20:18.156368    5995 controller.go:189] starting controller iteration
I0616 16:20:18.156418    5995 controller.go:266] Broadcasting leadership assertion with token "Rz0jPWmmRsH5zlu55pJEHw"
I0616 16:20:18.156886    5995 leadership.go:37] Got LeaderNotification view:<leader:<id:"etcd-a" endpoints:"172.20.37.218:3996" > leadership_token:"Rz0jPWmmRsH5zlu55pJEHw" healthy:<id:"etcd-a" endpoints:"172.20.37.218:3996" > >
I0616 16:20:18.157064    5995 controller.go:295] I am leader with token "Rz0jPWmmRsH5zlu55pJEHw"
I0616 16:20:18.157664    5995 controller.go:705] base client OK for etcd for client urls [https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4001]
I0616 16:20:18.174889    5995 controller.go:302] etcd cluster state: etcdClusterState
  members:
    {"name":"etcd-a","peerURLs":["https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2380"],"endpoints":["https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4001"],"ID":"4364443402608037142"}
  peers:
    etcdClusterPeerInfo{peer=peer{id:"etcd-a" endpoints:"172.20.37.218:3996" }, info=cluster_name:"etcd" node_configuration:<name:"etcd-a" peer_urls:"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2380" client_urls:"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4001" quarantined_client_urls:"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:3994" > etcd_state:<cluster:<cluster_token:"TslPsX61VjKTlDqpujM1aw" nodes:<name:"etcd-a" peer_urls:"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2380" client_urls:"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4001" quarantined_client_urls:"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:3994" tls_enabled:true > > etcd_version:"3.4.13" > }
I0616 16:20:18.175219    5995 controller.go:303] etcd cluster members: map[4364443402608037142:{"name":"etcd-a","peerURLs":["https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2380"],"endpoints":["https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4001"],"ID":"4364443402608037142"}]
I0616 16:20:18.175400    5995 controller.go:641] sending member map to all peers: members:<name:"etcd-a" dns:"etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io" addresses:"172.20.37.218" >
I0616 16:20:18.175837    5995 etcdserver.go:248] updating hosts: map[172.20.37.218:[etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]]
I0616 16:20:18.175956    5995 hosts.go:84] hosts update: primary=map[172.20.37.218:[etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]], fallbacks=map[etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:[172.20.37.218 172.20.37.218]], final=map[172.20.37.218:[etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]]
I0616 16:20:18.176125    5995 hosts.go:181] skipping update of unchanged /etc/hosts
I0616 16:20:18.176324    5995 commands.go:38] not refreshing commands - TTL not hit
I0616 16:20:18.176451    5995 s3fs.go:290] Reading file "s3://k8s-kops-prow/e2e-9c20857a72-da63e.test-cncf-aws.k8s.io/backups/etcd/main/control/etcd-cluster-created"
I0616 16:20:18.747884    5995 controller.go:395] spec member_count:1 etcd_version:"3.4.13"
I0616 16:20:18.748090    5995 controller.go:557] controller loop complete
I0616 16:20:28.749518    5995 controller.go:189] starting controller iteration
I0616 16:20:28.749680    5995 controller.go:266] Broadcasting leadership assertion with token "Rz0jPWmmRsH5zlu55pJEHw"
I0616 16:20:28.749990    5995 leadership.go:37] Got LeaderNotification view:<leader:<id:"etcd-a" endpoints:"172.20.37.218:3996" > leadership_token:"Rz0jPWmmRsH5zlu55pJEHw" healthy:<id:"etcd-a" endpoints:"172.20.37.218:3996" > >
I0616 16:20:28.750302    5995 controller.go:295] I am leader with token "Rz0jPWmmRsH5zlu55pJEHw"
I0616 16:20:28.750769    5995 controller.go:705] base client OK for etcd for client urls [https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4001]
I0616 16:20:28.762748    5995 controller.go:302] etcd cluster state: etcdClusterState
  members:
    {"name":"etcd-a","peerURLs":["https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2380"],"endpoints":["https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4001"],"ID":"4364443402608037142"}
  peers:
    etcdClusterPeerInfo{peer=peer{id:"etcd-a" endpoints:"172.20.37.218:3996" }, info=cluster_name:"etcd" node_configuration:<name:"etcd-a" peer_urls:"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2380" client_urls:"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4001" quarantined_client_urls:"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:3994" > etcd_state:<cluster:<cluster_token:"TslPsX61VjKTlDqpujM1aw" nodes:<name:"etcd-a" peer_urls:"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2380" client_urls:"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4001" quarantined_client_urls:"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:3994" tls_enabled:true > > etcd_version:"3.4.13" > }
I0616 16:20:28.762861    5995 controller.go:303] etcd cluster members: map[4364443402608037142:{"name":"etcd-a","peerURLs":["https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2380"],"endpoints":["https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4001"],"ID":"4364443402608037142"}]
I0616 16:20:28.762880    5995 controller.go:641] sending member map to all peers: members:<name:"etcd-a" dns:"etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io" addresses:"172.20.37.218" >
I0616 16:20:28.763233    5995 etcdserver.go:248] updating hosts: map[172.20.37.218:[etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]]
I0616 16:20:28.763273    5995 hosts.go:84] hosts update: primary=map[172.20.37.218:[etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]], fallbacks=map[etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:[172.20.37.218 172.20.37.218]], final=map[172.20.37.218:[etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]]
I0616 16:20:28.763347    5995 hosts.go:181] skipping update of unchanged /etc/hosts
I0616 16:20:28.763465    5995 commands.go:38] not refreshing commands - TTL not hit
I0616 16:20:28.763503    5995 s3fs.go:290] Reading file "s3://k8s-kops-prow/e2e-9c20857a72-da63e.test-cncf-aws.k8s.io/backups/etcd/main/control/etcd-cluster-created"
I0616 16:20:29.340280    5995 controller.go:395] spec member_count:1 etcd_version:"3.4.13"
I0616 16:20:29.340352    5995 controller.go:557] controller loop complete
I0616 16:20:39.341670    5995 controller.go:189] starting controller iteration
I0616 16:20:39.341701    5995 controller.go:266] Broadcasting leadership assertion with token "Rz0jPWmmRsH5zlu55pJEHw"
I0616 16:20:39.342030    5995 leadership.go:37] Got LeaderNotification view:<leader:<id:"etcd-a" endpoints:"172.20.37.218:3996" > leadership_token:"Rz0jPWmmRsH5zlu55pJEHw" healthy:<id:"etcd-a" endpoints:"172.20.37.218:3996" > >
I0616 16:20:39.342178    5995 controller.go:295] I am leader with token "Rz0jPWmmRsH5zlu55pJEHw"
I0616 16:20:39.342817    5995 controller.go:705] base client OK for etcd for client urls [https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4001]
I0616 16:20:39.354886    5995 controller.go:302] etcd cluster state: etcdClusterState
  members:
    {"name":"etcd-a","peerURLs":["https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2380"],"endpoints":["https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4001"],"ID":"4364443402608037142"}
  peers:
    etcdClusterPeerInfo{peer=peer{id:"etcd-a" endpoints:"172.20.37.218:3996" }, info=cluster_name:"etcd" node_configuration:<name:"etcd-a" peer_urls:"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2380" client_urls:"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4001" quarantined_client_urls:"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:3994" > etcd_state:<cluster:<cluster_token:"TslPsX61VjKTlDqpujM1aw" nodes:<name:"etcd-a" peer_urls:"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2380" client_urls:"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4001" quarantined_client_urls:"https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:3994" tls_enabled:true > > etcd_version:"3.4.13" > }
I0616 16:20:39.355319    5995 controller.go:303] etcd cluster members: map[4364443402608037142:{"name":"etcd-a","peerURLs":["https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:2380"],"endpoints":["https://etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:4001"],"ID":"4364443402608037142"}]
I0616 16:20:39.357828    5995 controller.go:641] sending member map to all peers: members:<name:"etcd-a" dns:"etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io" addresses:"172.20.37.218" >
I0616 16:20:39.358278    5995 etcdserver.go:248] updating hosts: map[172.20.37.218:[etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]]
I0616 16:20:39.358295    5995 hosts.go:84] hosts update: primary=map[172.20.37.218:[etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]], fallbacks=map[etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io:[172.20.37.218 172.20.37.218]], final=map[172.20.37.218:[etcd-a.internal.e2e-9c20857a72-da63e.test-cncf-aws.k8s.io]]
I0616 16:20:39.358347    5995 hosts.go:181] skipping update of unchanged /etc/hosts
I0616 16:20:39.358507    5995 commands.go:38] not refreshing commands - TTL not hit
I0616 16:20:39.358646    5995 s3fs.go:290] Reading file "s3://k8s-kops-prow/e2e-9c20857a72-da63e.test-cncf-aws.k8s.io/backups/etcd/main/control/etcd-cluster-created"
I0616 16:20:39.933703    5995 controller.go:395] spec member_count:1 etcd_version:"3.4.13"
I0616 16:20:39.933897    5995 controller.go:557] controller loop complete
==== END logs for container etcd-manager of pod kube-system/etcd-manager-main-ip-172-20-37-218.eu-west-1.compute.internal ====
==== START logs for container kops-controller of pod kube-system/kops-controller-qtzrq ====
I0616 16:16:28.608814       1 deleg.go:130] controller-runtime/metrics "msg"="metrics server is starting to listen"  "addr"=":0"
I0616 16:16:28.616525       1 deleg.go:130] setup "msg"="starting manager"
I0616 16:16:28.616749       1 leaderelection.go:243] attempting to acquire leader lease kube-system/kops-controller-leader...
I0616 16:16:28.617066       1 internal.go:393] controller-runtime/manager "msg"="starting metrics server"  "path"="/metrics"
E0616 16:16:28.629100       1 event.go:329] Could not construct reference to: '&v1.Lease{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kops-controller-leader", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"a1730c47-ea16-4e08-9e92-9723c9ceb1c4", ResourceVersion:"532", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63759456988, loc:(*time.Location)(0x45e9880)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kops-controller", Operation:"Update", APIVersion:"coordination.k8s.io/v1", Time:(*v1.Time)(0xc000399098), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0003990b0)}}}, Spec:v1.LeaseSpec{HolderIdentity:(*string)(nil), LeaseDurationSeconds:(*int32)(nil), AcquireTime:(*v1.MicroTime)(nil), RenewTime:(*v1.MicroTime)(nil), LeaseTransitions:(*int32)(nil)}}' due to: 'no kind is registered for the type v1.Lease in scheme "cmd/kops-controller/main.go:48"'. Will not report event: 'Normal' 'LeaderElection' 'ip-172-20-37-218_4f13ad1f-f04a-4879-ab86-e9e99514eaf2 became leader'
I0616 16:16:28.629178       1 leaderelection.go:253] successfully acquired lease kube-system/kops-controller-leader
I0616 16:16:28.629718       1 controller.go:165] controller-runtime/manager/controller/node "msg"="Starting EventSource" "reconciler group"="" "reconciler kind"="Node" "source"={"Type":{"metadata":{"creationTimestamp":null},"spec":{},"status":{"daemonEndpoints":{"kubeletEndpoint":{"Port":0}},"nodeInfo":{"machineID":"","systemUUID":"","bootID":"","kernelVersion":"","osImage":"","containerRuntimeVersion":"","kubeletVersion":"","kubeProxyVersion":"","operatingSystem":"","architecture":""}}}}
I0616 16:16:28.629747       1 controller.go:173] controller-runtime/manager/controller/node "msg"="Starting Controller" "reconciler group"="" "reconciler kind"="Node"
I0616 16:16:28.629885       1 recorder.go:104] controller-runtime/manager/events "msg"="Normal"  "message"="ip-172-20-37-218_4f13ad1f-f04a-4879-ab86-e9e99514eaf2 became leader" "object"={"kind":"ConfigMap","namespace":"kube-system","name":"kops-controller-leader","uid":"71db8333-1d74-4f7d-9a24-66e1c331cd3d","apiVersion":"v1","resourceVersion":"531"} "reason"="LeaderElection"
I0616 16:16:28.630036       1 reflector.go:219] Starting reflector *v1.Node (10h52m51.665433924s) from pkg/cache/internal/informers_map.go:241
I0616 16:16:28.730315       1 controller.go:207] controller-runtime/manager/controller/node "msg"="Starting workers" "reconciler group"="" "reconciler kind"="Node" "worker count"=1
I0616 16:16:28.836012       1 node_controller.go:142] sending patch for node "ip-172-20-37-218.eu-west-1.compute.internal": "{\"metadata\":{\"labels\":{\"kops.k8s.io/instancegroup\":\"master-eu-west-1a\"}}}"
I0616 16:16:44.595424
1 server.go:167] bootstrap 172.20.57.162:46662 ip-172-20-57-162.eu-west-1.compute.internal success\nI0616 16:16:44.847763       1 server.go:167] bootstrap 172.20.58.250:59924 ip-172-20-58-250.eu-west-1.compute.internal success\nI0616 16:16:48.079457       1 server.go:167] bootstrap 172.20.52.203:51780 ip-172-20-52-203.eu-west-1.compute.internal success\nI0616 16:16:50.211512       1 server.go:167] bootstrap 172.20.62.139:47062 ip-172-20-62-139.eu-west-1.compute.internal success\nI0616 16:17:16.152227       1 node_controller.go:142] sending patch for node \"ip-172-20-57-162.eu-west-1.compute.internal\": \"{\\\"metadata\\\":{\\\"labels\\\":{\\\"kops.k8s.io/instancegroup\\\":\\\"nodes-eu-west-1a\\\",\\\"kubernetes.io/role\\\":\\\"node\\\",\\\"node-role.kubernetes.io/node\\\":\\\"\\\"}}}\"\nI0616 16:17:16.554677       1 node_controller.go:142] sending patch for node \"ip-172-20-58-250.eu-west-1.compute.internal\": \"{\\\"metadata\\\":{\\\"labels\\\":{\\\"kops.k8s.io/instancegroup\\\":\\\"nodes-eu-west-1a\\\",\\\"kubernetes.io/role\\\":\\\"node\\\",\\\"node-role.kubernetes.io/node\\\":\\\"\\\"}}}\"\nI0616 16:17:19.785152       1 node_controller.go:142] sending patch for node \"ip-172-20-52-203.eu-west-1.compute.internal\": \"{\\\"metadata\\\":{\\\"labels\\\":{\\\"kops.k8s.io/instancegroup\\\":\\\"nodes-eu-west-1a\\\",\\\"kubernetes.io/role\\\":\\\"node\\\",\\\"node-role.kubernetes.io/node\\\":\\\"\\\"}}}\"\nI0616 16:17:22.307731       1 node_controller.go:142] sending patch for node \"ip-172-20-62-139.eu-west-1.compute.internal\": \"{\\\"metadata\\\":{\\\"labels\\\":{\\\"kops.k8s.io/instancegroup\\\":\\\"nodes-eu-west-1a\\\",\\\"kubernetes.io/role\\\":\\\"node\\\",\\\"node-role.kubernetes.io/node\\\":\\\"\\\"}}}\"\n==== END logs for container kops-controller of pod kube-system/kops-controller-qtzrq ====\n==== START logs for container kube-apiserver of pod kube-system/kube-apiserver-ip-172-20-37-218.eu-west-1.compute.internal ====\nFlag --insecure-port has been 
deprecated, This flag has no effect now and will be removed in v1.24.\nI0616 16:15:12.440193       1 flags.go:59] FLAG: --add-dir-header=\"false\"\nI0616 16:15:12.440320       1 flags.go:59] FLAG: --address=\"127.0.0.1\"\nI0616 16:15:12.440328       1 flags.go:59] FLAG: --admission-control=\"[]\"\nI0616 16:15:12.440338       1 flags.go:59] FLAG: --admission-control-config-file=\"\"\nI0616 16:15:12.440343       1 flags.go:59] FLAG: --advertise-address=\"<nil>\"\nI0616 16:15:12.440347       1 flags.go:59] FLAG: --allow-metric-labels=\"[]\"\nI0616 16:15:12.440361       1 flags.go:59] FLAG: --allow-privileged=\"true\"\nI0616 16:15:12.440367       1 flags.go:59] FLAG: --alsologtostderr=\"true\"\nI0616 16:15:12.440371       1 flags.go:59] FLAG: --anonymous-auth=\"false\"\nI0616 16:15:12.440397       1 flags.go:59] FLAG: --api-audiences=\"[kubernetes.svc.default]\"\nI0616 16:15:12.440403       1 flags.go:59] FLAG: --apiserver-count=\"1\"\nI0616 16:15:12.440409       1 flags.go:59] FLAG: --audit-log-batch-buffer-size=\"10000\"\nI0616 16:15:12.440413       1 flags.go:59] FLAG: --audit-log-batch-max-size=\"1\"\nI0616 16:15:12.440420       1 flags.go:59] FLAG: --audit-log-batch-max-wait=\"0s\"\nI0616 16:15:12.440426       1 flags.go:59] FLAG: --audit-log-batch-throttle-burst=\"0\"\nI0616 16:15:12.440430       1 flags.go:59] FLAG: --audit-log-batch-throttle-enable=\"false\"\nI0616 16:15:12.440433       1 flags.go:59] FLAG: --audit-log-batch-throttle-qps=\"0\"\nI0616 16:15:12.440439       1 flags.go:59] FLAG: --audit-log-compress=\"false\"\nI0616 16:15:12.440442       1 flags.go:59] FLAG: --audit-log-format=\"json\"\nI0616 16:15:12.440447       1 flags.go:59] FLAG: --audit-log-maxage=\"0\"\nI0616 16:15:12.440450       1 flags.go:59] FLAG: --audit-log-maxbackup=\"0\"\nI0616 16:15:12.440454       1 flags.go:59] FLAG: --audit-log-maxsize=\"0\"\nI0616 16:15:12.440457       1 flags.go:59] FLAG: --audit-log-mode=\"blocking\"\nI0616 16:15:12.440461       1 flags.go:59] FLAG: 
--audit-log-path=\"\"\nI0616 16:15:12.440465       1 flags.go:59] FLAG: --audit-log-truncate-enabled=\"false\"\nI0616 16:15:12.440469       1 flags.go:59] FLAG: --audit-log-truncate-max-batch-size=\"10485760\"\nI0616 16:15:12.440478       1 flags.go:59] FLAG: --audit-log-truncate-max-event-size=\"102400\"\nI0616 16:15:12.440483       1 flags.go:59] FLAG: --audit-log-version=\"audit.k8s.io/v1\"\nI0616 16:15:12.440487       1 flags.go:59] FLAG: --audit-policy-file=\"\"\nI0616 16:15:12.440491       1 flags.go:59] FLAG: --audit-webhook-batch-buffer-size=\"10000\"\nI0616 16:15:12.440495       1 flags.go:59] FLAG: --audit-webhook-batch-initial-backoff=\"10s\"\nI0616 16:15:12.440499       1 flags.go:59] FLAG: --audit-webhook-batch-max-size=\"400\"\nI0616 16:15:12.440512       1 flags.go:59] FLAG: --audit-webhook-batch-max-wait=\"30s\"\nI0616 16:15:12.440516       1 flags.go:59] FLAG: --audit-webhook-batch-throttle-burst=\"15\"\nI0616 16:15:12.440520       1 flags.go:59] FLAG: --audit-webhook-batch-throttle-enable=\"true\"\nI0616 16:15:12.440524       1 flags.go:59] FLAG: --audit-webhook-batch-throttle-qps=\"10\"\nI0616 16:15:12.440528       1 flags.go:59] FLAG: --audit-webhook-config-file=\"\"\nI0616 16:15:12.440531       1 flags.go:59] FLAG: --audit-webhook-initial-backoff=\"10s\"\nI0616 16:15:12.440536       1 flags.go:59] FLAG: --audit-webhook-mode=\"batch\"\nI0616 16:15:12.440540       1 flags.go:59] FLAG: --audit-webhook-truncate-enabled=\"false\"\nI0616 16:15:12.440543       1 flags.go:59] FLAG: --audit-webhook-truncate-max-batch-size=\"10485760\"\nI0616 16:15:12.440547       1 flags.go:59] FLAG: --audit-webhook-truncate-max-event-size=\"102400\"\nI0616 16:15:12.440551       1 flags.go:59] FLAG: --audit-webhook-version=\"audit.k8s.io/v1\"\nI0616 16:15:12.440556       1 flags.go:59] FLAG: --authentication-token-webhook-cache-ttl=\"2m0s\"\nI0616 16:15:12.440560       1 flags.go:59] FLAG: --authentication-token-webhook-config-file=\"\"\nI0616 16:15:12.440564       1 
flags.go:59] FLAG: --authentication-token-webhook-version=\"v1beta1\"\nI0616 16:15:12.440568       1 flags.go:59] FLAG: --authorization-mode=\"[Node,RBAC]\"\nI0616 16:15:12.440582       1 flags.go:59] FLAG: --authorization-policy-file=\"\"\nI0616 16:15:12.440586       1 flags.go:59] FLAG: --authorization-webhook-cache-authorized-ttl=\"5m0s\"\nI0616 16:15:12.440593       1 flags.go:59] FLAG: --authorization-webhook-cache-unauthorized-ttl=\"30s\"\nI0616 16:15:12.440597       1 flags.go:59] FLAG: --authorization-webhook-config-file=\"\"\nI0616 16:15:12.440600       1 flags.go:59] FLAG: --authorization-webhook-version=\"v1beta1\"\nI0616 16:15:12.440604       1 flags.go:59] FLAG: --bind-address=\"0.0.0.0\"\nI0616 16:15:12.440608       1 flags.go:59] FLAG: --cert-dir=\"/var/run/kubernetes\"\nI0616 16:15:12.440613       1 flags.go:59] FLAG: --client-ca-file=\"/srv/kubernetes/ca.crt\"\nI0616 16:15:12.440617       1 flags.go:59] FLAG: --cloud-config=\"/etc/kubernetes/cloud.config\"\nI0616 16:15:12.440622       1 flags.go:59] FLAG: --cloud-provider=\"aws\"\nI0616 16:15:12.440626       1 flags.go:59] FLAG: --cloud-provider-gce-l7lb-src-cidrs=\"130.211.0.0/22,35.191.0.0/16\"\nI0616 16:15:12.440633       1 flags.go:59] FLAG: --cloud-provider-gce-lb-src-cidrs=\"130.211.0.0/22,209.85.152.0/22,209.85.204.0/22,35.191.0.0/16\"\nI0616 16:15:12.440640       1 flags.go:59] FLAG: --contention-profiling=\"false\"\nI0616 16:15:12.440644       1 flags.go:59] FLAG: --cors-allowed-origins=\"[]\"\nI0616 16:15:12.440649       1 flags.go:59] FLAG: --default-not-ready-toleration-seconds=\"300\"\nI0616 16:15:12.440653       1 flags.go:59] FLAG: --default-unreachable-toleration-seconds=\"300\"\nI0616 16:15:12.440657       1 flags.go:59] FLAG: --default-watch-cache-size=\"100\"\nI0616 16:15:12.440661       1 flags.go:59] FLAG: --delete-collection-workers=\"1\"\nI0616 16:15:12.440665       1 flags.go:59] FLAG: --deserialization-cache-size=\"0\"\nI0616 16:15:12.440669       1 flags.go:59] FLAG: 
--disable-admission-plugins=\"[]\"\nI0616 16:15:12.440680       1 flags.go:59] FLAG: --disabled-metrics=\"[]\"\nI0616 16:15:12.440685       1 flags.go:59] FLAG: --egress-selector-config-file=\"\"\nI0616 16:15:12.440689       1 flags.go:59] FLAG: --enable-admission-plugins=\"[NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,NodeRestriction,ResourceQuota]\"\nI0616 16:15:12.440706       1 flags.go:59] FLAG: --enable-aggregator-routing=\"false\"\nI0616 16:15:12.440717       1 flags.go:59] FLAG: --enable-bootstrap-token-auth=\"false\"\nI0616 16:15:12.440721       1 flags.go:59] FLAG: --enable-garbage-collector=\"true\"\nI0616 16:15:12.440725       1 flags.go:59] FLAG: --enable-logs-handler=\"true\"\nI0616 16:15:12.440728       1 flags.go:59] FLAG: --enable-priority-and-fairness=\"true\"\nI0616 16:15:12.440732       1 flags.go:59] FLAG: --enable-swagger-ui=\"false\"\nI0616 16:15:12.440736       1 flags.go:59] FLAG: --encryption-provider-config=\"\"\nI0616 16:15:12.440740       1 flags.go:59] FLAG: --endpoint-reconciler-type=\"lease\"\nI0616 16:15:12.440744       1 flags.go:59] FLAG: --etcd-cafile=\"/etc/kubernetes/pki/kube-apiserver/etcd-ca.crt\"\nI0616 16:15:12.440749       1 flags.go:59] FLAG: --etcd-certfile=\"/etc/kubernetes/pki/kube-apiserver/etcd-client.crt\"\nI0616 16:15:12.440754       1 flags.go:59] FLAG: --etcd-compaction-interval=\"5m0s\"\nI0616 16:15:12.440758       1 flags.go:59] FLAG: --etcd-count-metric-poll-period=\"1m0s\"\nI0616 16:15:12.440763       1 flags.go:59] FLAG: --etcd-db-metric-poll-interval=\"30s\"\nI0616 16:15:12.440766       1 flags.go:59] FLAG: --etcd-healthcheck-timeout=\"2s\"\nI0616 16:15:12.440770       1 flags.go:59] FLAG: --etcd-keyfile=\"/etc/kubernetes/pki/kube-apiserver/etcd-client.key\"\nI0616 16:15:12.440776       1 flags.go:59] FLAG: --etcd-prefix=\"/registry\"\nI0616 16:15:12.440780       1 flags.go:59] FLAG: 
--etcd-servers=\"[https://127.0.0.1:4001]\"\nI0616 16:15:12.440785       1 flags.go:59] FLAG: --etcd-servers-overrides=\"[/events#https://127.0.0.1:4002]\"\nI0616 16:15:12.440797       1 flags.go:59] FLAG: --event-ttl=\"1h0m0s\"\nI0616 16:15:12.440801       1 flags.go:59] FLAG: --experimental-encryption-provider-config=\"\"\nI0616 16:15:12.440805       1 flags.go:59] FLAG: --experimental-logging-sanitization=\"false\"\nI0616 16:15:12.440809       1 flags.go:59] FLAG: --external-hostname=\"\"\nI0616 16:15:12.440813       1 flags.go:59] FLAG: --feature-gates=\"\"\nI0616 16:15:12.440822       1 flags.go:59] FLAG: --goaway-chance=\"0\"\nI0616 16:15:12.440828       1 flags.go:59] FLAG: --help=\"false\"\nI0616 16:15:12.440832       1 flags.go:59] FLAG: --http2-max-streams-per-connection=\"0\"\nI0616 16:15:12.440836       1 flags.go:59] FLAG: --identity-lease-dur