Result: FAILURE
Tests: 0 failed / 0 succeeded
Started: 2021-05-25 16:11
Elapsed: 32m32s
Revision: master

No Test Failures!


Error lines from build-log.txt

... skipping 124 lines ...
I0525 16:12:40.151852    4014 up.go:43] Cleaning up any leaked resources from previous cluster
I0525 16:12:40.151893    4014 dumplogs.go:38] /logs/artifacts/dcd55355-bd73-11eb-9751-96b6e925aefe/kops toolbox dump --name e2e-62197fe92d-da63e.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /etc/aws-ssh/aws-ssh-private --ssh-user ubuntu
I0525 16:12:40.173177    4035 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
I0525 16:12:40.174084    4035 featureflag.go:165] FeatureFlag "AlphaAllowGCE"=true

Cluster.kops.k8s.io "e2e-62197fe92d-da63e.test-cncf-aws.k8s.io" not found
W0525 16:12:40.686942    4014 down.go:34] Dumping cluster logs at the start of Down() failed: exit status 1
I0525 16:12:40.686999    4014 down.go:48] /logs/artifacts/dcd55355-bd73-11eb-9751-96b6e925aefe/kops delete cluster --name e2e-62197fe92d-da63e.test-cncf-aws.k8s.io --yes
I0525 16:12:40.704885    4045 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
I0525 16:12:40.705059    4045 featureflag.go:165] FeatureFlag "AlphaAllowGCE"=true

error reading cluster configuration: Cluster.kops.k8s.io "e2e-62197fe92d-da63e.test-cncf-aws.k8s.io" not found
I0525 16:12:41.185891    4014 http.go:37] curl http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip
2021/05/25 16:12:41 failed to get external ip from metadata service: http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip returned 404
I0525 16:12:41.194230    4014 http.go:37] curl https://ip.jsb.workers.dev
I0525 16:12:41.295261    4014 up.go:144] /logs/artifacts/dcd55355-bd73-11eb-9751-96b6e925aefe/kops create cluster --name e2e-62197fe92d-da63e.test-cncf-aws.k8s.io --cloud aws --kubernetes-version https://storage.googleapis.com/kubernetes-release/release/v1.21.1 --ssh-public-key /etc/aws-ssh/aws-ssh-public --override cluster.spec.nodePortAccess=0.0.0.0/0 --yes --image=099720109477/ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-20210518 --channel=alpha --networking=flannel --container-runtime=docker --admin-access 34.122.65.185/32 --master-count 1 --master-volume-size 48 --node-count 4 --node-volume-size 48 --zones eu-west-3a --master-size c5.large
I0525 16:12:41.310870    4055 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
I0525 16:12:41.311084    4055 featureflag.go:165] FeatureFlag "AlphaAllowGCE"=true
I0525 16:12:41.359511    4055 create_cluster.go:728] Using SSH public key: /etc/aws-ssh/aws-ssh-public
I0525 16:12:41.865400    4055 new_cluster.go:1011]  Cloud Provider ID = aws
... skipping 42 lines ...

I0525 16:13:07.423070    4014 up.go:181] /logs/artifacts/dcd55355-bd73-11eb-9751-96b6e925aefe/kops validate cluster --name e2e-62197fe92d-da63e.test-cncf-aws.k8s.io --count 10 --wait 15m0s
I0525 16:13:07.441887    4075 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
I0525 16:13:07.442208    4075 featureflag.go:165] FeatureFlag "AlphaAllowGCE"=true
Validating cluster e2e-62197fe92d-da63e.test-cncf-aws.k8s.io

W0525 16:13:08.766146    4075 validate_cluster.go:173] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-3a	Master	c5.large	1	1	eu-west-3a
nodes-eu-west-3a	Node	t3.medium	4	4	eu-west-3a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0525 16:13:18.803455    4075 validate_cluster.go:221] (will retry): cluster not yet healthy
... skipping 335 lines (the same INSTANCE GROUPS / dns apiserver "Validation Failed" block repeated on each ~10s retry through 16:16:39) ...
W0525 16:16:49.850596    4075 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-3a	Master	c5.large	1	1	eu-west-3a
nodes-eu-west-3a	Node	t3.medium	4	4	eu-west-3a

... skipping 7 lines ...
Machine	i-06a2799853e82ae0c				machine "i-06a2799853e82ae0c" has not yet joined cluster
Machine	i-0a7c3c791c0fdca32				machine "i-0a7c3c791c0fdca32" has not yet joined cluster
Machine	i-0efa7307227f7c770				machine "i-0efa7307227f7c770" has not yet joined cluster
Pod	kube-system/coredns-autoscaler-6f594f4c58-llsrv	system-cluster-critical pod "coredns-autoscaler-6f594f4c58-llsrv" is pending
Pod	kube-system/coredns-f45c4bf76-xv9v7		system-cluster-critical pod "coredns-f45c4bf76-xv9v7" is pending

Validation Failed
W0525 16:17:02.689366    4075 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-3a	Master	c5.large	1	1	eu-west-3a
nodes-eu-west-3a	Node	t3.medium	4	4	eu-west-3a

... skipping 7 lines ...
Machine	i-06a2799853e82ae0c				machine "i-06a2799853e82ae0c" has not yet joined cluster
Machine	i-0a7c3c791c0fdca32				machine "i-0a7c3c791c0fdca32" has not yet joined cluster
Machine	i-0efa7307227f7c770				machine "i-0efa7307227f7c770" has not yet joined cluster
Pod	kube-system/coredns-autoscaler-6f594f4c58-llsrv	system-cluster-critical pod "coredns-autoscaler-6f594f4c58-llsrv" is pending
Pod	kube-system/coredns-f45c4bf76-xv9v7		system-cluster-critical pod "coredns-f45c4bf76-xv9v7" is pending

Validation Failed
W0525 16:17:14.601412    4075 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-3a	Master	c5.large	1	1	eu-west-3a
nodes-eu-west-3a	Node	t3.medium	4	4	eu-west-3a

... skipping 9 lines ...
Machine	i-0efa7307227f7c770				machine "i-0efa7307227f7c770" has not yet joined cluster
Node	ip-172-20-40-186.eu-west-3.compute.internal	node "ip-172-20-40-186.eu-west-3.compute.internal" of role "node" is not ready
Pod	kube-system/coredns-autoscaler-6f594f4c58-llsrv	system-cluster-critical pod "coredns-autoscaler-6f594f4c58-llsrv" is pending
Pod	kube-system/coredns-f45c4bf76-xv9v7		system-cluster-critical pod "coredns-f45c4bf76-xv9v7" is pending
Pod	kube-system/kube-flannel-ds-dfzmv		system-node-critical pod "kube-flannel-ds-dfzmv" is pending

Validation Failed
W0525 16:17:26.473839    4075 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-3a	Master	c5.large	1	1	eu-west-3a
nodes-eu-west-3a	Node	t3.medium	4	4	eu-west-3a

... skipping 14 lines ...
Pod	kube-system/coredns-autoscaler-6f594f4c58-llsrv	system-cluster-critical pod "coredns-autoscaler-6f594f4c58-llsrv" is pending
Pod	kube-system/coredns-f45c4bf76-xv9v7		system-cluster-critical pod "coredns-f45c4bf76-xv9v7" is pending
Pod	kube-system/kube-flannel-ds-78qfq		system-node-critical pod "kube-flannel-ds-78qfq" is pending
Pod	kube-system/kube-flannel-ds-r8x62		system-node-critical pod "kube-flannel-ds-r8x62" is pending
Pod	kube-system/kube-flannel-ds-vk4m7		system-node-critical pod "kube-flannel-ds-vk4m7" is pending

Validation Failed
W0525 16:17:38.418753    4075 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-3a	Master	c5.large	1	1	eu-west-3a
nodes-eu-west-3a	Node	t3.medium	4	4	eu-west-3a

... skipping 8 lines ...
VALIDATION ERRORS
KIND	NAME									MESSAGE
Pod	kube-system/coredns-f45c4bf76-9lfbv					system-cluster-critical pod "coredns-f45c4bf76-9lfbv" is pending
Pod	kube-system/kube-proxy-ip-172-20-40-186.eu-west-3.compute.internal	system-node-critical pod "kube-proxy-ip-172-20-40-186.eu-west-3.compute.internal" is pending
Pod	kube-system/kube-proxy-ip-172-20-60-66.eu-west-3.compute.internal	system-node-critical pod "kube-proxy-ip-172-20-60-66.eu-west-3.compute.internal" is pending

Validation Failed
W0525 16:17:50.354745    4075 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-3a	Master	c5.large	1	1	eu-west-3a
nodes-eu-west-3a	Node	t3.medium	4	4	eu-west-3a

... skipping 6 lines ...
ip-172-20-60-66.eu-west-3.compute.internal	node	True

VALIDATION ERRORS
KIND	NAME					MESSAGE
Pod	kube-system/coredns-f45c4bf76-9lfbv	system-cluster-critical pod "coredns-f45c4bf76-9lfbv" is pending

Validation Failed
W0525 16:18:02.336703    4075 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-3a	Master	c5.large	1	1	eu-west-3a
nodes-eu-west-3a	Node	t3.medium	4	4	eu-west-3a

... skipping 197 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: hostPathSymlink]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver hostPathSymlink doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 974 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 16:20:33.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-2352" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":-1,"completed":1,"skipped":2,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 25 16:20:33.817: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 86 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 16:20:34.634: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8482" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Pods should delete a collection of pods [Conformance]","total":-1,"completed":1,"skipped":6,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 25 16:20:34.978: INFO: Driver local doesn't support ext4 -- skipping
... skipping 70 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: gcepd]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for providers [gce gke] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1301
------------------------------
... skipping 29 lines ...
May 25 16:20:34.344: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
May 25 16:20:34.448: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser
May 25 16:20:34.761: INFO: Waiting up to 5m0s for pod "security-context-cb779034-3f20-4955-ad41-4ea207f3c494" in namespace "security-context-1033" to be "Succeeded or Failed"
May 25 16:20:34.864: INFO: Pod "security-context-cb779034-3f20-4955-ad41-4ea207f3c494": Phase="Pending", Reason="", readiness=false. Elapsed: 103.571586ms
May 25 16:20:36.969: INFO: Pod "security-context-cb779034-3f20-4955-ad41-4ea207f3c494": Phase="Pending", Reason="", readiness=false. Elapsed: 2.207919801s
May 25 16:20:39.072: INFO: Pod "security-context-cb779034-3f20-4955-ad41-4ea207f3c494": Phase="Pending", Reason="", readiness=false. Elapsed: 4.311704414s
May 25 16:20:41.177: INFO: Pod "security-context-cb779034-3f20-4955-ad41-4ea207f3c494": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.416164172s
STEP: Saw pod success
May 25 16:20:41.177: INFO: Pod "security-context-cb779034-3f20-4955-ad41-4ea207f3c494" satisfied condition "Succeeded or Failed"
May 25 16:20:41.280: INFO: Trying to get logs from node ip-172-20-40-186.eu-west-3.compute.internal pod security-context-cb779034-3f20-4955-ad41-4ea207f3c494 container test-container: <nil>
STEP: delete the pod
May 25 16:20:41.497: INFO: Waiting for pod security-context-cb779034-3f20-4955-ad41-4ea207f3c494 to disappear
May 25 16:20:41.601: INFO: Pod security-context-cb779034-3f20-4955-ad41-4ea207f3c494 no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:9.332 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Security Context should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","total":-1,"completed":1,"skipped":5,"failed":0}

SS
------------------------------
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 17 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 16:20:42.684: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-9502" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return generic metadata details across all namespaces for nodes","total":-1,"completed":2,"skipped":7,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 25 16:20:42.911: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 26 lines ...
[It] should support readOnly directory specified in the volumeMount
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:364
May 25 16:20:34.636: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics
May 25 16:20:34.636: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-x6r6
STEP: Creating a pod to test subpath
May 25 16:20:34.742: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-x6r6" in namespace "provisioning-5451" to be "Succeeded or Failed"
May 25 16:20:34.844: INFO: Pod "pod-subpath-test-inlinevolume-x6r6": Phase="Pending", Reason="", readiness=false. Elapsed: 101.972826ms
May 25 16:20:36.947: INFO: Pod "pod-subpath-test-inlinevolume-x6r6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.204474486s
May 25 16:20:39.052: INFO: Pod "pod-subpath-test-inlinevolume-x6r6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.309584932s
May 25 16:20:41.154: INFO: Pod "pod-subpath-test-inlinevolume-x6r6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.412132484s
May 25 16:20:43.264: INFO: Pod "pod-subpath-test-inlinevolume-x6r6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.522163213s
STEP: Saw pod success
May 25 16:20:43.264: INFO: Pod "pod-subpath-test-inlinevolume-x6r6" satisfied condition "Succeeded or Failed"
May 25 16:20:43.367: INFO: Trying to get logs from node ip-172-20-40-186.eu-west-3.compute.internal pod pod-subpath-test-inlinevolume-x6r6 container test-container-subpath-inlinevolume-x6r6: <nil>
STEP: delete the pod
May 25 16:20:43.601: INFO: Waiting for pod pod-subpath-test-inlinevolume-x6r6 to disappear
May 25 16:20:43.703: INFO: Pod pod-subpath-test-inlinevolume-x6r6 no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-x6r6
May 25 16:20:43.703: INFO: Deleting pod "pod-subpath-test-inlinevolume-x6r6" in namespace "provisioning-5451"
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly directory specified in the volumeMount
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:364
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":1,"skipped":12,"failed":0}

S
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 4 lines ...
May 25 16:20:33.636: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
May 25 16:20:33.947: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-2bcb6ace-84c7-49bd-9b21-d524e2634007" in namespace "security-context-test-1168" to be "Succeeded or Failed"
May 25 16:20:34.051: INFO: Pod "busybox-privileged-false-2bcb6ace-84c7-49bd-9b21-d524e2634007": Phase="Pending", Reason="", readiness=false. Elapsed: 103.383849ms
May 25 16:20:36.155: INFO: Pod "busybox-privileged-false-2bcb6ace-84c7-49bd-9b21-d524e2634007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.207210429s
May 25 16:20:38.258: INFO: Pod "busybox-privileged-false-2bcb6ace-84c7-49bd-9b21-d524e2634007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.310917215s
May 25 16:20:40.363: INFO: Pod "busybox-privileged-false-2bcb6ace-84c7-49bd-9b21-d524e2634007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.415649164s
May 25 16:20:42.468: INFO: Pod "busybox-privileged-false-2bcb6ace-84c7-49bd-9b21-d524e2634007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.52056833s
May 25 16:20:44.572: INFO: Pod "busybox-privileged-false-2bcb6ace-84c7-49bd-9b21-d524e2634007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.624569416s
May 25 16:20:44.572: INFO: Pod "busybox-privileged-false-2bcb6ace-84c7-49bd-9b21-d524e2634007" satisfied condition "Succeeded or Failed"
May 25 16:20:44.685: INFO: Got logs for pod "busybox-privileged-false-2bcb6ace-84c7-49bd-9b21-d524e2634007": "ip: RTNETLINK answers: Operation not permitted\n"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 16:20:44.685: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-1168" for this suite.

... skipping 3 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  When creating a pod with privileged
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:232
    should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":5,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 25 16:20:45.009: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 31 lines ...
STEP: watching for Pod to be ready
May 25 16:20:33.563: INFO: observed Pod pod-test in namespace pods-6035 in phase Pending with labels: map[test-pod-static:true] & conditions []
May 25 16:20:33.563: INFO: observed Pod pod-test in namespace pods-6035 in phase Pending with labels: map[test-pod-static:true] & conditions [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-25 16:20:33 +0000 UTC  }]
May 25 16:20:33.563: INFO: observed Pod pod-test in namespace pods-6035 in phase Pending with labels: map[test-pod-static:true] & conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-25 16:20:33 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-25 16:20:33 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-25 16:20:33 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-25 16:20:33 +0000 UTC  }]
May 25 16:20:39.024: INFO: Found Pod pod-test in namespace pods-6035 in phase Running with labels: map[test-pod-static:true] & conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-25 16:20:33 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-05-25 16:20:38 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-05-25 16:20:38 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-25 16:20:33 +0000 UTC  }]
STEP: patching the Pod with a new Label and updated data
May 25 16:20:39.234: INFO: observed event type ERROR
May 25 16:20:39.234: FAIL: failed to see MODIFIED event
Unexpected error:
    <*errors.errorString | 0xc0006d87d0>: {
        s: "watch closed before UntilWithoutRetry timeout",
    }
    watch closed before UntilWithoutRetry timeout
occurred

... skipping 213 lines ...
• Failure [13.051 seconds]
[sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should run through the lifecycle of Pods and PodStatus [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  May 25 16:20:39.234: failed to see MODIFIED event
  Unexpected error:
      <*errors.errorString | 0xc0006d87d0>: {
          s: "watch closed before UntilWithoutRetry timeout",
      }
      watch closed before UntilWithoutRetry timeout
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:984
------------------------------
{"msg":"FAILED [sig-node] Pods should run through the lifecycle of Pods and PodStatus [Conformance]","total":-1,"completed":0,"skipped":0,"failed":1,"failures":["[sig-node] Pods should run through the lifecycle of Pods and PodStatus [Conformance]"]}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 25 16:20:45.582: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 44 lines ...
• [SLOW TEST:14.446 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a persistent volume claim
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/resource_quota.go:481
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim","total":-1,"completed":1,"skipped":8,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 25 16:20:47.070: INFO: Driver "nfs" does not support FsGroup - skipping
... skipping 31 lines ...
[It] should support readOnly directory specified in the volumeMount
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:364
May 25 16:20:33.246: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
May 25 16:20:33.457: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-rlgj
STEP: Creating a pod to test subpath
May 25 16:20:33.565: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-rlgj" in namespace "provisioning-3707" to be "Succeeded or Failed"
May 25 16:20:33.668: INFO: Pod "pod-subpath-test-inlinevolume-rlgj": Phase="Pending", Reason="", readiness=false. Elapsed: 103.239465ms
May 25 16:20:35.773: INFO: Pod "pod-subpath-test-inlinevolume-rlgj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.207463144s
May 25 16:20:37.877: INFO: Pod "pod-subpath-test-inlinevolume-rlgj": Phase="Pending", Reason="", readiness=false. Elapsed: 4.312195597s
May 25 16:20:39.990: INFO: Pod "pod-subpath-test-inlinevolume-rlgj": Phase="Pending", Reason="", readiness=false. Elapsed: 6.42513515s
May 25 16:20:42.135: INFO: Pod "pod-subpath-test-inlinevolume-rlgj": Phase="Pending", Reason="", readiness=false. Elapsed: 8.569390095s
May 25 16:20:44.244: INFO: Pod "pod-subpath-test-inlinevolume-rlgj": Phase="Pending", Reason="", readiness=false. Elapsed: 10.679141045s
May 25 16:20:46.349: INFO: Pod "pod-subpath-test-inlinevolume-rlgj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.783994039s
STEP: Saw pod success
May 25 16:20:46.349: INFO: Pod "pod-subpath-test-inlinevolume-rlgj" satisfied condition "Succeeded or Failed"
May 25 16:20:46.456: INFO: Trying to get logs from node ip-172-20-48-192.eu-west-3.compute.internal pod pod-subpath-test-inlinevolume-rlgj container test-container-subpath-inlinevolume-rlgj: <nil>
STEP: delete the pod
May 25 16:20:46.672: INFO: Waiting for pod pod-subpath-test-inlinevolume-rlgj to disappear
May 25 16:20:46.775: INFO: Pod pod-subpath-test-inlinevolume-rlgj no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-rlgj
May 25 16:20:46.775: INFO: Deleting pod "pod-subpath-test-inlinevolume-rlgj" in namespace "provisioning-3707"
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly directory specified in the volumeMount
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:364
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":1,"skipped":5,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 25 16:20:47.200: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 88 lines ...
May 25 16:20:33.995: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating projection with secret that has name projected-secret-test-map-2546c98b-1353-4df7-8fa4-28d5b2894bdb
STEP: Creating a pod to test consume secrets
May 25 16:20:34.436: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-e9b6255d-731e-4097-9294-f7490766dc3a" in namespace "projected-2817" to be "Succeeded or Failed"
May 25 16:20:34.542: INFO: Pod "pod-projected-secrets-e9b6255d-731e-4097-9294-f7490766dc3a": Phase="Pending", Reason="", readiness=false. Elapsed: 106.026929ms
May 25 16:20:36.647: INFO: Pod "pod-projected-secrets-e9b6255d-731e-4097-9294-f7490766dc3a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.210596911s
May 25 16:20:38.751: INFO: Pod "pod-projected-secrets-e9b6255d-731e-4097-9294-f7490766dc3a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.314825578s
May 25 16:20:40.855: INFO: Pod "pod-projected-secrets-e9b6255d-731e-4097-9294-f7490766dc3a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.418736847s
May 25 16:20:42.960: INFO: Pod "pod-projected-secrets-e9b6255d-731e-4097-9294-f7490766dc3a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.523174994s
May 25 16:20:45.065: INFO: Pod "pod-projected-secrets-e9b6255d-731e-4097-9294-f7490766dc3a": Phase="Pending", Reason="", readiness=false. Elapsed: 10.628660314s
May 25 16:20:47.170: INFO: Pod "pod-projected-secrets-e9b6255d-731e-4097-9294-f7490766dc3a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.733385446s
STEP: Saw pod success
May 25 16:20:47.170: INFO: Pod "pod-projected-secrets-e9b6255d-731e-4097-9294-f7490766dc3a" satisfied condition "Succeeded or Failed"
May 25 16:20:47.274: INFO: Trying to get logs from node ip-172-20-60-66.eu-west-3.compute.internal pod pod-projected-secrets-e9b6255d-731e-4097-9294-f7490766dc3a container projected-secret-volume-test: <nil>
STEP: delete the pod
May 25 16:20:47.507: INFO: Waiting for pod pod-projected-secrets-e9b6255d-731e-4097-9294-f7490766dc3a to disappear
May 25 16:20:47.612: INFO: Pod pod-projected-secrets-e9b6255d-731e-4097-9294-f7490766dc3a no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:15.351 seconds]
[sig-storage] Projected secret
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":7,"failed":0}

S
------------------------------
{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":-1,"completed":2,"skipped":11,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 25 16:20:47.948: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 49 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-link-bindmounted]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 50 lines ...
May 25 16:20:33.482: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
May 25 16:20:33.584: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support existing directory
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
May 25 16:20:33.788: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
May 25 16:20:34.100: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-3576" in namespace "provisioning-3576" to be "Succeeded or Failed"
May 25 16:20:34.212: INFO: Pod "hostpath-symlink-prep-provisioning-3576": Phase="Pending", Reason="", readiness=false. Elapsed: 111.53114ms
May 25 16:20:36.314: INFO: Pod "hostpath-symlink-prep-provisioning-3576": Phase="Pending", Reason="", readiness=false. Elapsed: 2.213769712s
May 25 16:20:38.418: INFO: Pod "hostpath-symlink-prep-provisioning-3576": Phase="Pending", Reason="", readiness=false. Elapsed: 4.316976595s
May 25 16:20:40.520: INFO: Pod "hostpath-symlink-prep-provisioning-3576": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.419492291s
STEP: Saw pod success
May 25 16:20:40.520: INFO: Pod "hostpath-symlink-prep-provisioning-3576" satisfied condition "Succeeded or Failed"
May 25 16:20:40.520: INFO: Deleting pod "hostpath-symlink-prep-provisioning-3576" in namespace "provisioning-3576"
May 25 16:20:40.626: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-3576" to be fully deleted
May 25 16:20:40.728: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-jvzb
STEP: Creating a pod to test subpath
May 25 16:20:40.832: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-jvzb" in namespace "provisioning-3576" to be "Succeeded or Failed"
May 25 16:20:40.934: INFO: Pod "pod-subpath-test-inlinevolume-jvzb": Phase="Pending", Reason="", readiness=false. Elapsed: 102.43695ms
May 25 16:20:43.041: INFO: Pod "pod-subpath-test-inlinevolume-jvzb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.209149129s
May 25 16:20:45.146: INFO: Pod "pod-subpath-test-inlinevolume-jvzb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.314027115s
May 25 16:20:47.248: INFO: Pod "pod-subpath-test-inlinevolume-jvzb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.416415263s
STEP: Saw pod success
May 25 16:20:47.248: INFO: Pod "pod-subpath-test-inlinevolume-jvzb" satisfied condition "Succeeded or Failed"
May 25 16:20:47.350: INFO: Trying to get logs from node ip-172-20-54-92.eu-west-3.compute.internal pod pod-subpath-test-inlinevolume-jvzb container test-container-volume-inlinevolume-jvzb: <nil>
STEP: delete the pod
May 25 16:20:47.565: INFO: Waiting for pod pod-subpath-test-inlinevolume-jvzb to disappear
May 25 16:20:47.667: INFO: Pod pod-subpath-test-inlinevolume-jvzb no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-jvzb
May 25 16:20:47.667: INFO: Deleting pod "pod-subpath-test-inlinevolume-jvzb" in namespace "provisioning-3576"
STEP: Deleting pod
May 25 16:20:47.769: INFO: Deleting pod "pod-subpath-test-inlinevolume-jvzb" in namespace "provisioning-3576"
May 25 16:20:47.978: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-3576" in namespace "provisioning-3576" to be "Succeeded or Failed"
May 25 16:20:48.080: INFO: Pod "hostpath-symlink-prep-provisioning-3576": Phase="Pending", Reason="", readiness=false. Elapsed: 102.665159ms
May 25 16:20:50.184: INFO: Pod "hostpath-symlink-prep-provisioning-3576": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.205918298s
STEP: Saw pod success
May 25 16:20:50.184: INFO: Pod "hostpath-symlink-prep-provisioning-3576" satisfied condition "Succeeded or Failed"
May 25 16:20:50.184: INFO: Deleting pod "hostpath-symlink-prep-provisioning-3576" in namespace "provisioning-3576"
May 25 16:20:50.290: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-3576" to be fully deleted
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 16:20:50.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-3576" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directory
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support existing directory","total":-1,"completed":1,"skipped":7,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 25 16:20:50.615: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 11 lines ...
      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":-1,"completed":1,"skipped":6,"failed":0}
[BeforeEach] [sig-node] Lease
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 16:20:48.955: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename lease-test
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 3 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 16:20:50.919: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "lease-test-1262" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Lease lease API should be available [Conformance]","total":-1,"completed":2,"skipped":6,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 25 16:20:51.218: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 44 lines ...
May 25 16:20:41.800: INFO: PersistentVolumeClaim pvc-v4tsk found but phase is Pending instead of Bound.
May 25 16:20:43.905: INFO: PersistentVolumeClaim pvc-v4tsk found and phase=Bound (2.209050824s)
May 25 16:20:43.905: INFO: Waiting up to 3m0s for PersistentVolume local-psfms to have phase Bound
May 25 16:20:44.012: INFO: PersistentVolume local-psfms found and phase=Bound (107.344123ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-87x2
STEP: Creating a pod to test subpath
May 25 16:20:44.331: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-87x2" in namespace "provisioning-3625" to be "Succeeded or Failed"
May 25 16:20:44.438: INFO: Pod "pod-subpath-test-preprovisionedpv-87x2": Phase="Pending", Reason="", readiness=false. Elapsed: 106.4155ms
May 25 16:20:46.544: INFO: Pod "pod-subpath-test-preprovisionedpv-87x2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.212065426s
May 25 16:20:48.655: INFO: Pod "pod-subpath-test-preprovisionedpv-87x2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.323919171s
May 25 16:20:50.763: INFO: Pod "pod-subpath-test-preprovisionedpv-87x2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.431153314s
STEP: Saw pod success
May 25 16:20:50.763: INFO: Pod "pod-subpath-test-preprovisionedpv-87x2" satisfied condition "Succeeded or Failed"
May 25 16:20:50.868: INFO: Trying to get logs from node ip-172-20-54-92.eu-west-3.compute.internal pod pod-subpath-test-preprovisionedpv-87x2 container test-container-volume-preprovisionedpv-87x2: <nil>
STEP: delete the pod
May 25 16:20:51.156: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-87x2 to disappear
May 25 16:20:51.301: INFO: Pod pod-subpath-test-preprovisionedpv-87x2 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-87x2
May 25 16:20:51.301: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-87x2" in namespace "provisioning-3625"
... skipping 24 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directory
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory","total":-1,"completed":1,"skipped":2,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 25 16:20:54.403: INFO: Only supported for providers [azure] (not aws)
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 44 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating projection with secret that has name projected-secret-test-85b617a1-551d-4f4b-b340-be665401f98c
STEP: Creating a pod to test consume secrets
May 25 16:20:35.804: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-01a15756-afcc-4788-8e4c-1f260ebcaa66" in namespace "projected-4153" to be "Succeeded or Failed"
May 25 16:20:35.908: INFO: Pod "pod-projected-secrets-01a15756-afcc-4788-8e4c-1f260ebcaa66": Phase="Pending", Reason="", readiness=false. Elapsed: 104.173992ms
May 25 16:20:38.014: INFO: Pod "pod-projected-secrets-01a15756-afcc-4788-8e4c-1f260ebcaa66": Phase="Pending", Reason="", readiness=false. Elapsed: 2.209362809s
May 25 16:20:40.118: INFO: Pod "pod-projected-secrets-01a15756-afcc-4788-8e4c-1f260ebcaa66": Phase="Pending", Reason="", readiness=false. Elapsed: 4.314248178s
May 25 16:20:42.222: INFO: Pod "pod-projected-secrets-01a15756-afcc-4788-8e4c-1f260ebcaa66": Phase="Pending", Reason="", readiness=false. Elapsed: 6.417970838s
May 25 16:20:44.338: INFO: Pod "pod-projected-secrets-01a15756-afcc-4788-8e4c-1f260ebcaa66": Phase="Pending", Reason="", readiness=false. Elapsed: 8.534129007s
May 25 16:20:46.444: INFO: Pod "pod-projected-secrets-01a15756-afcc-4788-8e4c-1f260ebcaa66": Phase="Pending", Reason="", readiness=false. Elapsed: 10.639664863s
May 25 16:20:48.548: INFO: Pod "pod-projected-secrets-01a15756-afcc-4788-8e4c-1f260ebcaa66": Phase="Pending", Reason="", readiness=false. Elapsed: 12.744100654s
May 25 16:20:50.655: INFO: Pod "pod-projected-secrets-01a15756-afcc-4788-8e4c-1f260ebcaa66": Phase="Pending", Reason="", readiness=false. Elapsed: 14.851115256s
May 25 16:20:52.759: INFO: Pod "pod-projected-secrets-01a15756-afcc-4788-8e4c-1f260ebcaa66": Phase="Pending", Reason="", readiness=false. Elapsed: 16.954934423s
May 25 16:20:54.863: INFO: Pod "pod-projected-secrets-01a15756-afcc-4788-8e4c-1f260ebcaa66": Phase="Succeeded", Reason="", readiness=false. Elapsed: 19.058828742s
STEP: Saw pod success
May 25 16:20:54.863: INFO: Pod "pod-projected-secrets-01a15756-afcc-4788-8e4c-1f260ebcaa66" satisfied condition "Succeeded or Failed"
May 25 16:20:54.966: INFO: Trying to get logs from node ip-172-20-54-92.eu-west-3.compute.internal pod pod-projected-secrets-01a15756-afcc-4788-8e4c-1f260ebcaa66 container projected-secret-volume-test: <nil>
STEP: delete the pod
May 25 16:20:55.185: INFO: Waiting for pod pod-projected-secrets-01a15756-afcc-4788-8e4c-1f260ebcaa66 to disappear
May 25 16:20:55.288: INFO: Pod pod-projected-secrets-01a15756-afcc-4788-8e4c-1f260ebcaa66 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:20.429 seconds]
[sig-storage] Projected secret
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":24,"failed":0}
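The "Waiting up to 5m0s for pod ... to be \"Succeeded or Failed\"" lines above show the framework polling the pod's phase roughly every two seconds until a terminal state or timeout. A hedged, framework-free sketch of that poll-until-condition loop (names and injectable clock/sleep parameters are mine, added for testability):

```python
import time

def wait_for_condition(check, timeout=300.0, interval=2.0,
                       clock=time.monotonic, sleep=time.sleep):
    """Poll check() until it returns True or timeout seconds elapse.

    Mirrors the shape of the e2e framework's wait loop: fixed-interval
    polling with an overall deadline. Returns True on success, False on
    timeout. clock/sleep are injectable so the loop can be tested without
    real delays.
    """
    deadline = clock() + timeout
    while clock() < deadline:
        if check():
            return True
        sleep(interval)
    return False
```

In the log above, `check` would correspond to "pod phase is Succeeded or Failed", with timeout=300 (5m0s) and interval≈2s.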
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 25 16:20:55.514: INFO: Only supported for providers [openstack] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 21 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should allow privilege escalation when true [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:367
May 25 16:20:44.854: INFO: Waiting up to 5m0s for pod "alpine-nnp-true-b1f2882c-c7b5-434b-9cb3-f0f17c7a763f" in namespace "security-context-test-6065" to be "Succeeded or Failed"
May 25 16:20:44.956: INFO: Pod "alpine-nnp-true-b1f2882c-c7b5-434b-9cb3-f0f17c7a763f": Phase="Pending", Reason="", readiness=false. Elapsed: 102.070707ms
May 25 16:20:47.059: INFO: Pod "alpine-nnp-true-b1f2882c-c7b5-434b-9cb3-f0f17c7a763f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.204886596s
May 25 16:20:49.162: INFO: Pod "alpine-nnp-true-b1f2882c-c7b5-434b-9cb3-f0f17c7a763f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.308411434s
May 25 16:20:51.298: INFO: Pod "alpine-nnp-true-b1f2882c-c7b5-434b-9cb3-f0f17c7a763f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.444714037s
May 25 16:20:53.405: INFO: Pod "alpine-nnp-true-b1f2882c-c7b5-434b-9cb3-f0f17c7a763f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.551377681s
May 25 16:20:55.508: INFO: Pod "alpine-nnp-true-b1f2882c-c7b5-434b-9cb3-f0f17c7a763f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.654037841s
May 25 16:20:55.508: INFO: Pod "alpine-nnp-true-b1f2882c-c7b5-434b-9cb3-f0f17c7a763f" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 16:20:55.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-6065" for this suite.


... skipping 2 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when creating containers with AllowPrivilegeEscalation
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:296
    should allow privilege escalation when true [LinuxOnly] [NodeConformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:367
------------------------------
{"msg":"PASSED [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when true [LinuxOnly] [NodeConformance]","total":-1,"completed":2,"skipped":13,"failed":0}
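The test above creates a pod ("alpine-nnp-true-...") whose container explicitly sets allowPrivilegeEscalation to true. As an illustrative sketch only (the helper name, image tag, and field layout are assumptions, not the framework's exact pod), an equivalent manifest can be built like this:

```python
def nnp_pod_manifest(name, namespace):
    """Build a minimal Pod manifest with allowPrivilegeEscalation: true.

    Illustrative only: image tag and structure are assumptions modeled on
    the "should allow privilege escalation when true" e2e test, which runs
    an alpine-based container with this securityContext setting.
    """
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name, "namespace": namespace},
        "spec": {
            "restartPolicy": "Never",
            "containers": [{
                "name": name,
                "image": "alpine:3.13",  # assumed image; the real test pins its own
                "securityContext": {"allowPrivilegeEscalation": True},
            }],
        },
    }
```

The key field is `spec.containers[].securityContext.allowPrivilegeEscalation`; the test asserts that with it set to true, the container can gain privileges via no-new-privs-sensitive binaries.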

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 25 16:20:55.849: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 80 lines ...
STEP: Destroying namespace "apply-9877" for this suite.
[AfterEach] [sig-api-machinery] ServerSideApply
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/apply.go:56

•
------------------------------
{"msg":"PASSED [sig-api-machinery] ServerSideApply should give up ownership of a field if forced applied by a controller","total":-1,"completed":3,"skipped":29,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 25 16:20:57.499: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 81 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume one after the other
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":1,"skipped":1,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 69 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume one after the other
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithoutformat] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":1,"skipped":2,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 7 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 16:20:59.295: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-8532" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works  [Conformance]","total":-1,"completed":2,"skipped":3,"failed":0}

S
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 97 lines ...
• [SLOW TEST:12.466 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":-1,"completed":3,"skipped":21,"failed":0}

S
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 4 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241
[It] should check if cluster-info dump succeeds
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1078
STEP: running cluster-info dump
May 25 16:20:54.965: INFO: Running '/tmp/kubectl966515883/kubectl --server=https://api.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-4328 cluster-info dump'
May 25 16:21:00.322: INFO: stderr: ""
May 25 16:21:00.324: INFO: stdout: "{\n    \"kind\": \"NodeList\",\n    \"apiVersion\": \"v1\",\n    \"metadata\": {\n        \"resourceVersion\": \"2284\"\n    },\n    \"items\": [\n        {\n            \"metadata\": {\n                \"name\": \"ip-172-20-40-186.eu-west-3.compute.internal\",\n                \"uid\": \"563f57ed-84c9-484c-9884-8d94874389be\",\n                \"resourceVersion\": \"954\",\n                \"creationTimestamp\": \"2021-05-25T16:17:21Z\",\n                \"labels\": {\n                    \"beta.kubernetes.io/arch\": \"amd64\",\n                    \"beta.kubernetes.io/instance-type\": \"t3.medium\",\n                    \"beta.kubernetes.io/os\": \"linux\",\n                    \"failure-domain.beta.kubernetes.io/region\": \"eu-west-3\",\n                    \"failure-domain.beta.kubernetes.io/zone\": \"eu-west-3a\",\n                    \"kops.k8s.io/instancegroup\": \"nodes-eu-west-3a\",\n                    \"kubernetes.io/arch\": \"amd64\",\n                    \"kubernetes.io/hostname\": \"ip-172-20-40-186.eu-west-3.compute.internal\",\n                    \"kubernetes.io/os\": \"linux\",\n                    \"kubernetes.io/role\": \"node\",\n                    \"node-role.kubernetes.io/node\": \"\",\n                    \"node.kubernetes.io/instance-type\": \"t3.medium\",\n                    \"topology.kubernetes.io/region\": \"eu-west-3\",\n                    \"topology.kubernetes.io/zone\": \"eu-west-3a\"\n                },\n                \"annotations\": {\n                    \"flannel.alpha.coreos.com/backend-data\": \"{\\\"VtepMAC\\\":\\\"9e:47:c7:97:6a:27\\\"}\",\n                    \"flannel.alpha.coreos.com/backend-type\": \"vxlan\",\n                    \"flannel.alpha.coreos.com/kube-subnet-manager\": \"true\",\n                    \"flannel.alpha.coreos.com/public-ip\": \"172.20.40.186\",\n                    \"node.alpha.kubernetes.io/ttl\": \"0\",\n                    
\"volumes.kubernetes.io/controller-managed-attach-detach\": \"true\"\n                }\n            },\n            \"spec\": {\n                \"podCIDR\": \"100.96.1.0/24\",\n                \"podCIDRs\": [\n                    \"100.96.1.0/24\"\n                ],\n                \"providerID\": \"aws:///eu-west-3a/i-05878d71007c3279f\"\n            },\n            \"status\": {\n                \"capacity\": {\n                    \"attachable-volumes-aws-ebs\": \"25\",\n                    \"cpu\": \"2\",\n                    \"ephemeral-storage\": \"48725632Ki\",\n                    \"hugepages-1Gi\": \"0\",\n                    \"hugepages-2Mi\": \"0\",\n                    \"memory\": \"3969496Ki\",\n                    \"pods\": \"110\"\n                },\n                \"allocatable\": {\n                    \"attachable-volumes-aws-ebs\": \"25\",\n                    \"cpu\": \"2\",\n                    \"ephemeral-storage\": \"44905542377\",\n                    \"hugepages-1Gi\": \"0\",\n                    \"hugepages-2Mi\": \"0\",\n                    \"memory\": \"3867096Ki\",\n                    \"pods\": \"110\"\n                },\n                \"conditions\": [\n                    {\n                        \"type\": \"NetworkUnavailable\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-05-25T16:17:59Z\",\n                        \"lastTransitionTime\": \"2021-05-25T16:17:59Z\",\n                        \"reason\": \"FlannelIsUp\",\n                        \"message\": \"Flannel is running on this node\"\n                    },\n                    {\n                        \"type\": \"MemoryPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-05-25T16:18:31Z\",\n                        \"lastTransitionTime\": \"2021-05-25T16:16:57Z\",\n                        \"reason\": \"KubeletHasSufficientMemory\",\n             
           \"message\": \"kubelet has sufficient memory available\"\n                    },\n                    {\n                        \"type\": \"DiskPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-05-25T16:18:31Z\",\n                        \"lastTransitionTime\": \"2021-05-25T16:16:57Z\",\n                        \"reason\": \"KubeletHasNoDiskPressure\",\n                        \"message\": \"kubelet has no disk pressure\"\n                    },\n                    {\n                        \"type\": \"PIDPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-05-25T16:18:31Z\",\n                        \"lastTransitionTime\": \"2021-05-25T16:16:57Z\",\n                        \"reason\": \"KubeletHasSufficientPID\",\n                        \"message\": \"kubelet has sufficient PID available\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastHeartbeatTime\": \"2021-05-25T16:18:31Z\",\n                        \"lastTransitionTime\": \"2021-05-25T16:17:41Z\",\n                        \"reason\": \"KubeletReady\",\n                        \"message\": \"kubelet is posting ready status. 
AppArmor enabled\"\n                    }\n                ],\n                \"addresses\": [\n                    {\n                        \"type\": \"InternalIP\",\n                        \"address\": \"172.20.40.186\"\n                    },\n                    {\n                        \"type\": \"ExternalIP\",\n                        \"address\": \"35.180.122.224\"\n                    },\n                    {\n                        \"type\": \"Hostname\",\n                        \"address\": \"ip-172-20-40-186.eu-west-3.compute.internal\"\n                    },\n                    {\n                        \"type\": \"InternalDNS\",\n                        \"address\": \"ip-172-20-40-186.eu-west-3.compute.internal\"\n                    },\n                    {\n                        \"type\": \"ExternalDNS\",\n                        \"address\": \"ec2-35-180-122-224.eu-west-3.compute.amazonaws.com\"\n                    }\n                ],\n                \"daemonEndpoints\": {\n                    \"kubeletEndpoint\": {\n                        \"Port\": 10250\n                    }\n                },\n                \"nodeInfo\": {\n                    \"machineID\": \"ec20f522f10c4b5e4af8d6fea7d7143b\",\n                    \"systemUUID\": \"ec20f522-f10c-4b5e-4af8-d6fea7d7143b\",\n                    \"bootID\": \"002d8b5f-cd09-41ae-baf7-80460a73c5d4\",\n                    \"kernelVersion\": \"5.4.0-1048-aws\",\n                    \"osImage\": \"Ubuntu 20.04.2 LTS\",\n                    \"containerRuntimeVersion\": \"docker://20.10.5\",\n                    \"kubeletVersion\": \"v1.21.1\",\n                    \"kubeProxyVersion\": \"v1.21.1\",\n                    \"operatingSystem\": \"linux\",\n                    \"architecture\": \"amd64\"\n                },\n                \"images\": [\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kube-proxy-amd64:v1.21.1\"\n    
                    ],\n                        \"sizeBytes\": 130788187\n                    },\n                    {\n                        \"names\": [\n                            \"quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8\",\n                            \"quay.io/coreos/flannel:v0.13.0\"\n                        ],\n                        \"sizeBytes\": 57156911\n                    },\n                    {\n                        \"names\": [\n                            \"coredns/coredns@sha256:642ff9910da6ea9a8624b0234eef52af9ca75ecbec474c5507cb096bdfbae4e5\",\n                            \"coredns/coredns:1.8.3\"\n                        ],\n                        \"sizeBytes\": 43499235\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f\",\n                            \"k8s.gcr.io/pause:3.2\"\n                        ],\n                        \"sizeBytes\": 682696\n                    }\n                ]\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"ip-172-20-44-17.eu-west-3.compute.internal\",\n                \"uid\": \"69c81210-6a3d-4b0b-801a-61a9e470b0c8\",\n                \"resourceVersion\": \"510\",\n                \"creationTimestamp\": \"2021-05-25T16:15:40Z\",\n                \"labels\": {\n                    \"beta.kubernetes.io/arch\": \"amd64\",\n                    \"beta.kubernetes.io/instance-type\": \"c5.large\",\n                    \"beta.kubernetes.io/os\": \"linux\",\n                    \"failure-domain.beta.kubernetes.io/region\": \"eu-west-3\",\n                    \"failure-domain.beta.kubernetes.io/zone\": \"eu-west-3a\",\n                    \"kops.k8s.io/instancegroup\": \"master-eu-west-3a\",\n                    \"kops.k8s.io/kops-controller-pki\": \"\",\n    
                \"kubernetes.io/arch\": \"amd64\",\n                    \"kubernetes.io/hostname\": \"ip-172-20-44-17.eu-west-3.compute.internal\",\n                    \"kubernetes.io/os\": \"linux\",\n                    \"kubernetes.io/role\": \"master\",\n                    \"node-role.kubernetes.io/control-plane\": \"\",\n                    \"node-role.kubernetes.io/master\": \"\",\n                    \"node.kubernetes.io/exclude-from-external-load-balancers\": \"\",\n                    \"node.kubernetes.io/instance-type\": \"c5.large\",\n                    \"topology.kubernetes.io/region\": \"eu-west-3\",\n                    \"topology.kubernetes.io/zone\": \"eu-west-3a\"\n                },\n                \"annotations\": {\n                    \"flannel.alpha.coreos.com/backend-data\": \"{\\\"VtepMAC\\\":\\\"3a:1d:a1:a5:01:e1\\\"}\",\n                    \"flannel.alpha.coreos.com/backend-type\": \"vxlan\",\n                    \"flannel.alpha.coreos.com/kube-subnet-manager\": \"true\",\n                    \"flannel.alpha.coreos.com/public-ip\": \"172.20.44.17\",\n                    \"node.alpha.kubernetes.io/ttl\": \"0\",\n                    \"volumes.kubernetes.io/controller-managed-attach-detach\": \"true\"\n                }\n            },\n            \"spec\": {\n                \"podCIDR\": \"100.96.0.0/24\",\n                \"podCIDRs\": [\n                    \"100.96.0.0/24\"\n                ],\n                \"providerID\": \"aws:///eu-west-3a/i-056f852792822a839\",\n                \"taints\": [\n                    {\n                        \"key\": \"node-role.kubernetes.io/master\",\n                        \"effect\": \"NoSchedule\"\n                    }\n                ]\n            },\n            \"status\": {\n                \"capacity\": {\n                    \"attachable-volumes-aws-ebs\": \"25\",\n                    \"cpu\": \"2\",\n                    \"ephemeral-storage\": \"48725632Ki\",\n                    
\"hugepages-1Gi\": \"0\",\n                    \"hugepages-2Mi\": \"0\",\n                    \"memory\": \"3785172Ki\",\n                    \"pods\": \"110\"\n                },\n                \"allocatable\": {\n                    \"attachable-volumes-aws-ebs\": \"25\",\n                    \"cpu\": \"2\",\n                    \"ephemeral-storage\": \"44905542377\",\n                    \"hugepages-1Gi\": \"0\",\n                    \"hugepages-2Mi\": \"0\",\n                    \"memory\": \"3682772Ki\",\n                    \"pods\": \"110\"\n                },\n                \"conditions\": [\n                    {\n                        \"type\": \"NetworkUnavailable\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-05-25T16:16:06Z\",\n                        \"lastTransitionTime\": \"2021-05-25T16:16:06Z\",\n                        \"reason\": \"FlannelIsUp\",\n                        \"message\": \"Flannel is running on this node\"\n                    },\n                    {\n                        \"type\": \"MemoryPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-05-25T16:16:21Z\",\n                        \"lastTransitionTime\": \"2021-05-25T16:15:34Z\",\n                        \"reason\": \"KubeletHasSufficientMemory\",\n                        \"message\": \"kubelet has sufficient memory available\"\n                    },\n                    {\n                        \"type\": \"DiskPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-05-25T16:16:21Z\",\n                        \"lastTransitionTime\": \"2021-05-25T16:15:34Z\",\n                        \"reason\": \"KubeletHasNoDiskPressure\",\n                        \"message\": \"kubelet has no disk pressure\"\n                    },\n                    {\n                        \"type\": 
\"PIDPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-05-25T16:16:21Z\",\n                        \"lastTransitionTime\": \"2021-05-25T16:15:34Z\",\n                        \"reason\": \"KubeletHasSufficientPID\",\n                        \"message\": \"kubelet has sufficient PID available\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastHeartbeatTime\": \"2021-05-25T16:16:21Z\",\n                        \"lastTransitionTime\": \"2021-05-25T16:16:11Z\",\n                        \"reason\": \"KubeletReady\",\n                        \"message\": \"kubelet is posting ready status. AppArmor enabled\"\n                    }\n                ],\n                \"addresses\": [\n                    {\n                        \"type\": \"InternalIP\",\n                        \"address\": \"172.20.44.17\"\n                    },\n                    {\n                        \"type\": \"ExternalIP\",\n                        \"address\": \"15.236.146.108\"\n                    },\n                    {\n                        \"type\": \"Hostname\",\n                        \"address\": \"ip-172-20-44-17.eu-west-3.compute.internal\"\n                    },\n                    {\n                        \"type\": \"InternalDNS\",\n                        \"address\": \"ip-172-20-44-17.eu-west-3.compute.internal\"\n                    },\n                    {\n                        \"type\": \"ExternalDNS\",\n                        \"address\": \"ec2-15-236-146-108.eu-west-3.compute.amazonaws.com\"\n                    }\n                ],\n                \"daemonEndpoints\": {\n                    \"kubeletEndpoint\": {\n                        \"Port\": 10250\n                    }\n                },\n                \"nodeInfo\": {\n                    \"machineID\": 
\"ec2fff1a06e0467ebca71d39ac3352bc\",\n                    \"systemUUID\": \"ec2fff1a-06e0-467e-bca7-1d39ac3352bc\",\n                    \"bootID\": \"d7dea412-de9b-4a12-9959-f3b574910ae1\",\n                    \"kernelVersion\": \"5.4.0-1048-aws\",\n                    \"osImage\": \"Ubuntu 20.04.2 LTS\",\n                    \"containerRuntimeVersion\": \"docker://20.10.5\",\n                    \"kubeletVersion\": \"v1.21.1\",\n                    \"kubeProxyVersion\": \"v1.21.1\",\n                    \"operatingSystem\": \"linux\",\n                    \"architecture\": \"amd64\"\n                },\n                \"images\": [\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/etcdadm/etcd-manager@sha256:ebb73d3d4a99da609f9e01c556cd9f9aa7a0aecba8f5bc5588d7c45eb38e3a7e\",\n                            \"k8s.gcr.io/etcdadm/etcd-manager:3.0.20210430\"\n                        ],\n                        \"sizeBytes\": 492748624\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kube-proxy-amd64:v1.21.1\"\n                        ],\n                        \"sizeBytes\": 130788187\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kube-apiserver-amd64:v1.21.1\"\n                        ],\n                        \"sizeBytes\": 125612423\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kube-controller-manager-amd64:v1.21.1\"\n                        ],\n                        \"sizeBytes\": 119825302\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kops/dns-controller:1.21.0-beta.2\"\n                        ],\n                        \"sizeBytes\": 112232812\n                    },\n                    
{\n                        \"names\": [\n                            \"k8s.gcr.io/kops/kops-controller:1.21.0-beta.2\"\n                        ],\n                        \"sizeBytes\": 110432040\n                    },\n                    {\n                        \"names\": [\n                            \"quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8\",\n                            \"quay.io/coreos/flannel:v0.13.0\"\n                        ],\n                        \"sizeBytes\": 57156911\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kube-scheduler-amd64:v1.21.1\"\n                        ],\n                        \"sizeBytes\": 50635642\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kops/kube-apiserver-healthcheck:1.21.0-beta.2\"\n                        ],\n                        \"sizeBytes\": 24013245\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f\",\n                            \"k8s.gcr.io/pause:3.2\"\n                        ],\n                        \"sizeBytes\": 682696\n                    }\n                ]\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"ip-172-20-48-192.eu-west-3.compute.internal\",\n                \"uid\": \"1bb49170-eda7-4a2e-bd7a-45d44df4d31a\",\n                \"resourceVersion\": \"2144\",\n                \"creationTimestamp\": \"2021-05-25T16:17:32Z\",\n                \"labels\": {\n                    \"beta.kubernetes.io/arch\": \"amd64\",\n                    \"beta.kubernetes.io/instance-type\": \"t3.medium\",\n                    \"beta.kubernetes.io/os\": \"linux\",\n                    
                    "failure-domain.beta.kubernetes.io/region": "eu-west-3",
                    "failure-domain.beta.kubernetes.io/zone": "eu-west-3a",
                    "kops.k8s.io/instancegroup": "nodes-eu-west-3a",
                    "kubernetes.io/arch": "amd64",
                    "kubernetes.io/hostname": "ip-172-20-48-192.eu-west-3.compute.internal",
                    "kubernetes.io/os": "linux",
                    "kubernetes.io/role": "node",
                    "node-role.kubernetes.io/node": "",
                    "node.kubernetes.io/instance-type": "t3.medium",
                    "topology.hostpath.csi/node": "ip-172-20-48-192.eu-west-3.compute.internal",
                    "topology.kubernetes.io/region": "eu-west-3",
                    "topology.kubernetes.io/zone": "eu-west-3a"
                },
                "annotations": {
                    "csi.volume.kubernetes.io/nodeid": "{\"csi-hostpath-ephemeral-4029\":\"ip-172-20-48-192.eu-west-3.compute.internal\"}",
                    "flannel.alpha.coreos.com/backend-data": "{\"VtepMAC\":\"ea:0c:d4:82:39:2e\"}",
                    "flannel.alpha.coreos.com/backend-type": "vxlan",
                    "flannel.alpha.coreos.com/kube-subnet-manager": "true",
                    "flannel.alpha.coreos.com/public-ip": "172.20.48.192",
                    "node.alpha.kubernetes.io/ttl": "0",
                    "volumes.kubernetes.io/controller-managed-attach-detach": "true"
                }
            },
            "spec": {
                "podCIDR": "100.96.3.0/24",
                "podCIDRs": [
                    "100.96.3.0/24"
                ],
                "providerID": "aws:///eu-west-3a/i-0a7c3c791c0fdca32"
            },
            "status": {
                "capacity": {
                    "attachable-volumes-aws-ebs": "25",
                    "cpu": "2",
                    "ephemeral-storage": "48725632Ki",
                    "hugepages-1Gi": "0",
                    "hugepages-2Mi": "0",
                    "memory": "3969496Ki",
                    "pods": "110"
                },
                "allocatable": {
                    "attachable-volumes-aws-ebs": "25",
                    "cpu": "2",
                    "ephemeral-storage": "44905542377",
                    "hugepages-1Gi": "0",
                    "hugepages-2Mi": "0",
                    "memory": "3867096Ki",
                    "pods": "110"
                },
                "conditions": [
                    {
                        "type": "NetworkUnavailable",
                        "status": "False",
                        "lastHeartbeatTime": "2021-05-25T16:17:39Z",
                        "lastTransitionTime": "2021-05-25T16:17:39Z",
                        "reason": "FlannelIsUp",
                        "message": "Flannel is running on this node"
                    },
                    {
                        "type": "MemoryPressure",
                        "status": "False",
                        "lastHeartbeatTime": "2021-05-25T16:18:02Z",
                        "lastTransitionTime": "2021-05-25T16:17:31Z",
                        "reason": "KubeletHasSufficientMemory",
                        "message": "kubelet has sufficient memory available"
                    },
                    {
                        "type": "DiskPressure",
                        "status": "False",
                        "lastHeartbeatTime": "2021-05-25T16:18:02Z",
                        "lastTransitionTime": "2021-05-25T16:17:31Z",
                        "reason": "KubeletHasNoDiskPressure",
                        "message": "kubelet has no disk pressure"
                    },
                    {
                        "type": "PIDPressure",
                        "status": "False",
                        "lastHeartbeatTime": "2021-05-25T16:18:02Z",
                        "lastTransitionTime": "2021-05-25T16:17:31Z",
                        "reason": "KubeletHasSufficientPID",
                        "message": "kubelet has sufficient PID available"
                    },
                    {
                        "type": "Ready",
                        "status": "True",
                        "lastHeartbeatTime": "2021-05-25T16:18:02Z",
                        "lastTransitionTime": "2021-05-25T16:17:42Z",
                        "reason": "KubeletReady",
                        "message": "kubelet is posting ready status. AppArmor enabled"
                    }
                ],
                "addresses": [
                    {
                        "type": "InternalIP",
                        "address": "172.20.48.192"
                    },
                    {
                        "type": "ExternalIP",
                        "address": "15.237.112.171"
                    },
                    {
                        "type": "Hostname",
                        "address": "ip-172-20-48-192.eu-west-3.compute.internal"
                    },
                    {
                        "type": "InternalDNS",
                        "address": "ip-172-20-48-192.eu-west-3.compute.internal"
                    },
                    {
                        "type": "ExternalDNS",
                        "address": "ec2-15-237-112-171.eu-west-3.compute.amazonaws.com"
                    }
                ],
                "daemonEndpoints": {
                    "kubeletEndpoint": {
                        "Port": 10250
                    }
                },
                "nodeInfo": {
                    "machineID": "ec28cafaf381ba434b181af52435b4ce",
                    "systemUUID": "ec28cafa-f381-ba43-4b18-1af52435b4ce",
                    "bootID": "c6756f85-6cca-4e61-bf87-b6314751ea1e",
                    "kernelVersion": "5.4.0-1048-aws",
                    "osImage": "Ubuntu 20.04.2 LTS",
                    "containerRuntimeVersion": "docker://20.10.5",
                    "kubeletVersion": "v1.21.1",
                    "kubeProxyVersion": "v1.21.1",
                    "operatingSystem": "linux",
                    "architecture": "amd64"
                },
                "images": [
                    {
                        "names": [
                            "k8s.gcr.io/kube-proxy-amd64:v1.21.1"
                        ],
                        "sizeBytes": 130788187
                    },
                    {
                        "names": [
                            "quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8",
                            "quay.io/coreos/flannel:v0.13.0"
                        ],
                        "sizeBytes": 57156911
                    },
                    {
                        "names": [
                            "coredns/coredns@sha256:642ff9910da6ea9a8624b0234eef52af9ca75ecbec474c5507cb096bdfbae4e5",
                            "coredns/coredns:1.8.3"
                        ],
                        "sizeBytes": 43499235
                    },
                    {
                        "names": [
                            "k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f",
                            "k8s.gcr.io/pause:3.2"
                        ],
                        "sizeBytes": 682696
                    }
                ]
            }
        },
        {
            "metadata": {
                "name": "ip-172-20-54-92.eu-west-3.compute.internal",
                "uid": "50e2e33a-1f01-486d-ac46-8b6be79e3a7e",
                "resourceVersion": "872",
                "creationTimestamp": "2021-05-25T16:17:32Z",
                "labels": {
                    "beta.kubernetes.io/arch": "amd64",
                    "beta.kubernetes.io/instance-type": "t3.medium",
                    "beta.kubernetes.io/os": "linux",
                    "failure-domain.beta.kubernetes.io/region": "eu-west-3",
                    "failure-domain.beta.kubernetes.io/zone": "eu-west-3a",
                    "kops.k8s.io/instancegroup": "nodes-eu-west-3a",
                    "kubernetes.io/arch": "amd64",
                    "kubernetes.io/hostname": "ip-172-20-54-92.eu-west-3.compute.internal",
                    "kubernetes.io/os": "linux",
                    "kubernetes.io/role": "node",
                    "node-role.kubernetes.io/node": "",
                    "node.kubernetes.io/instance-type": "t3.medium",
                    "topology.kubernetes.io/region": "eu-west-3",
                    "topology.kubernetes.io/zone": "eu-west-3a"
                },
                "annotations": {
                    "flannel.alpha.coreos.com/backend-data": "{\"VtepMAC\":\"8a:3f:28:0d:d7:b7\"}",
                    "flannel.alpha.coreos.com/backend-type": "vxlan",
                    "flannel.alpha.coreos.com/kube-subnet-manager": "true",
                    "flannel.alpha.coreos.com/public-ip": "172.20.54.92",
                    "node.alpha.kubernetes.io/ttl": "0",
                    "volumes.kubernetes.io/controller-managed-attach-detach": "true"
                }
            },
            "spec": {
                "podCIDR": "100.96.4.0/24",
                "podCIDRs": [
                    "100.96.4.0/24"
                ],
                "providerID": "aws:///eu-west-3a/i-06a2799853e82ae0c"
            },
            "status": {
                "capacity": {
                    "attachable-volumes-aws-ebs": "25",
                    "cpu": "2",
                    "ephemeral-storage": "48725632Ki",
                    "hugepages-1Gi": "0",
                    "hugepages-2Mi": "0",
                    "memory": "3969488Ki",
                    "pods": "110"
                },
                "allocatable": {
                    "attachable-volumes-aws-ebs": "25",
                    "cpu": "2",
                    "ephemeral-storage": "44905542377",
                    "hugepages-1Gi": "0",
                    "hugepages-2Mi": "0",
                    "memory": "3867088Ki",
                    "pods": "110"
                },
                "conditions": [
                    {
                        "type": "NetworkUnavailable",
                        "status": "False",
                        "lastHeartbeatTime": "2021-05-25T16:17:39Z",
                        "lastTransitionTime": "2021-05-25T16:17:39Z",
                        "reason": "FlannelIsUp",
                        "message": "Flannel is running on this node"
                    },
                    {
                        "type": "MemoryPressure",
                        "status": "False",
                        "lastHeartbeatTime": "2021-05-25T16:18:02Z",
                        "lastTransitionTime": "2021-05-25T16:17:32Z",
                        "reason": "KubeletHasSufficientMemory",
                        "message": "kubelet has sufficient memory available"
                    },
                    {
                        "type": "DiskPressure",
                        "status": "False",
                        "lastHeartbeatTime": "2021-05-25T16:18:02Z",
                        "lastTransitionTime": "2021-05-25T16:17:32Z",
                        "reason": "KubeletHasNoDiskPressure",
                        "message": "kubelet has no disk pressure"
                    },
                    {
                        "type": "PIDPressure",
                        "status": "False",
                        "lastHeartbeatTime": "2021-05-25T16:18:02Z",
                        "lastTransitionTime": "2021-05-25T16:17:32Z",
                        "reason": "KubeletHasSufficientPID",
                        "message": "kubelet has sufficient PID available"
                    },
                    {
                        "type": "Ready",
                        "status": "True",
                        "lastHeartbeatTime": "2021-05-25T16:18:02Z",
                        "lastTransitionTime": "2021-05-25T16:17:42Z",
                        "reason": "KubeletReady",
                        "message": "kubelet is posting ready status. AppArmor enabled"
                    }
                ],
                "addresses": [
                    {
                        "type": "InternalIP",
                        "address": "172.20.54.92"
                    },
                    {
                        "type": "ExternalIP",
                        "address": "15.188.147.0"
                    },
                    {
                        "type": "Hostname",
                        "address": "ip-172-20-54-92.eu-west-3.compute.internal"
                    },
                    {
                        "type": "InternalDNS",
                        "address": "ip-172-20-54-92.eu-west-3.compute.internal"
                    },
                    {
                        "type": "ExternalDNS",
                        "address": "ec2-15-188-147-0.eu-west-3.compute.amazonaws.com"
                    }
                ],
                "daemonEndpoints": {
                    "kubeletEndpoint": {
                        "Port": 10250
                    }
                },
                "nodeInfo": {
                    "machineID": "ec243ed750629ea80a0f0db0a93ae573",
                    "systemUUID": "ec243ed7-5062-9ea8-0a0f-0db0a93ae573",
                    "bootID": "49bec36a-df99-41c5-b1e0-5bac5f2389cf",
                    "kernelVersion": "5.4.0-1048-aws",
                    "osImage": "Ubuntu 20.04.2 LTS",
                    "containerRuntimeVersion": "docker://20.10.5",
                    "kubeletVersion": "v1.21.1",
                    "kubeProxyVersion": "v1.21.1",
                    "operatingSystem": "linux",
                    "architecture": "amd64"
                },
                "images": [
                    {
                        "names": [
                            "k8s.gcr.io/kube-proxy-amd64:v1.21.1"
                        ],
                        "sizeBytes": 130788187
                    },
                    {
                        "names": [
                            "quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8",
                            "quay.io/coreos/flannel:v0.13.0"
                        ],
                        "sizeBytes": 57156911
                    },
                    {
                        "names": [
                            "k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f",
                            "k8s.gcr.io/pause:3.2"
                        ],
                        "sizeBytes": 682696
                    }
                ]
            }
        },
        {
            "metadata": {
                "name": "ip-172-20-60-66.eu-west-3.compute.internal",
                "uid": "bba4ded3-c7fc-4487-ba5b-323df6f0595b",
                "resourceVersion": "2205",
                "creationTimestamp": "2021-05-25T16:17:30Z",
                "labels": {
                    "beta.kubernetes.io/arch": "amd64",
                    "beta.kubernetes.io/instance-type": "t3.medium",
                    "beta.kubernetes.io/os": "linux",
                    "failure-domain.beta.kubernetes.io/region": "eu-west-3",
                    "failure-domain.beta.kubernetes.io/zone": "eu-west-3a",
                    "kops.k8s.io/instancegroup": "nodes-eu-west-3a",
                    "kubernetes.io/arch": "amd64",
                    "kubernetes.io/hostname": "ip-172-20-60-66.eu-west-3.compute.internal",
                    "kubernetes.io/os": "linux",
                    "kubernetes.io/role": "node",
                    "node-role.kubernetes.io/node": "",
                    "node.kubernetes.io/instance-type": "t3.medium",
                    "topology.hostpath.csi/node": "ip-172-20-60-66.eu-west-3.compute.internal",
                    "topology.kubernetes.io/region": "eu-west-3",
                    "topology.kubernetes.io/zone": "eu-west-3a"
                },
                "annotations": {
                    "csi.volume.kubernetes.io/nodeid": "{\"csi-hostpath-provisioning-2356\":\"ip-172-20-60-66.eu-west-3.compute.internal\"}",
                    "flannel.alpha.coreos.com/backend-data": "{\"VtepMAC\":\"4e:0d:db:37:44:30\"}",
                    "flannel.alpha.coreos.com/backend-type": "vxlan",
                    "flannel.alpha.coreos.com/kube-subnet-manager": "true",
                    "flannel.alpha.coreos.com/public-ip": "172.20.60.66",
                    "node.alpha.kubernetes.io/ttl": "0",
                    "volumes.kubernetes.io/controller-managed-attach-detach": "true"
                }
            },
            "spec": {
                "podCIDR": "100.96.2.0/24",
                "podCIDRs": [
                    "100.96.2.0/24"
                ],
                "providerID": "aws:///eu-west-3a/i-0efa7307227f7c770"
            },
            "status": {
                "capacity": {
                    "attachable-volumes-aws-ebs": "25",
                    "cpu": "2",
                    "ephemeral-storage": "48725632Ki",
                    "hugepages-1Gi": "0",
                    "hugepages-2Mi": "0",
                    "memory": "3969488Ki",
                    "pods": "110"
                },
                "allocatable": {
                    "attachable-volumes-aws-ebs": "25",
                    "cpu": "2",
                    "ephemeral-storage": "44905542377",
                    "hugepages-1Gi": "0",
                    "hugepages-2Mi": "0",
                    "memory": "3867088Ki",
                    "pods": "110"
                },
                "conditions": [
                    {
                        "type": "NetworkUnavailable",
                        "status": "False",
                        "lastHeartbeatTime": "2021-05-25T16:17:38Z",
                        "lastTransitionTime": "2021-05-25T16:17:38Z",
                        "reason": "FlannelIsUp",
                        "message": "Flannel is running on this node"
                    },
                    {
                        "type": "MemoryPressure",
                        "status": "False",
                        "lastHeartbeatTime": "2021-05-25T16:18:00Z",
                        "lastTransitionTime": "2021-05-25T16:17:30Z",
                        "reason": "KubeletHasSufficientMemory",
                        "message": "kubelet has sufficient memory available"
                    },
                    {
                        "type": "DiskPressure",
                        "status": "False",
                        "lastHeartbeatTime": "2021-05-25T16:18:00Z",
                        "lastTransitionTime": "2021-05-25T16:17:30Z",
                        "reason": "KubeletHasNoDiskPressure",
                        "message": "kubelet has no disk pressure"
                    },
                    {
                        "type": "PIDPressure",
                        "status": "False",
                        "lastHeartbeatTime": "2021-05-25T16:18:00Z",
                        "lastTransitionTime": "2021-05-25T16:17:30Z",
                        "reason": "KubeletHasSufficientPID",
                        "message": "kubelet has sufficient PID available"
                    },
                    {
                        "type": "Ready",
                        "status": "True",
                        "lastHeartbeatTime": "2021-05-25T16:18:00Z",
                        "lastTransitionTime": "2021-05-25T16:17:40Z",
                        "reason": "KubeletReady",
                        "message": "kubelet is posting ready status. AppArmor enabled"
                    }
                ],
                "addresses": [
                    {
                        "type": "InternalIP",
                        "address": "172.20.60.66"
                    },
                    {
                        "type": "ExternalIP",
                        "address": "35.180.193.220"
                    },
                    {
                        "type": "Hostname",
                        "address": "ip-172-20-60-66.eu-west-3.compute.internal"
                    },
                    {
                        "type": "InternalDNS",
                        "address": "ip-172-20-60-66.eu-west-3.compute.internal"
                    },
                    {
                        "type": "ExternalDNS",
                        "address": "ec2-35-180-193-220.eu-west-3.compute.amazonaws.com"
                    }
                ],
                "daemonEndpoints": {
                    "kubeletEndpoint": {
                        "Port": 10250
                    }
                },
                "nodeInfo": {
                    "machineID": "ec2197f60aecdf4f98a359b34146c48c",
                    "systemUUID": "ec2197f6-0aec-df4f-98a3-59b34146c48c",
                    "bootID": "7e78b16e-2508-4c90-af97-24ea53a1706e",
                    "kernelVersion": "5.4.0-1048-aws",
                    "osImage": "Ubuntu 20.04.2 LTS",
                    "containerRuntimeVersion": "docker://20.10.5",
                    "kubeletVersion": "v1.21.1",
                    "kubeProxyVersion": "v1.21.1",
                    "operatingSystem": "linux",
                    "architecture": "amd64"
                },
                "images": [
                    {
                        "names": [
                            "k8s.gcr.io/kube-proxy-amd64:v1.21.1"
                        ],
                        "sizeBytes": 130788187
                    },
                    {
                        "names": [
                            "quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8",
                            "quay.io/coreos/flannel:v0.13.0"
                        ],
                        "sizeBytes": 57156911
                    },
                    {
                        "names": [
                            "k8s.gcr.io/cpa/cluster-proportional-autoscaler@sha256:67640771ad9fc56f109d5b01e020f0c858e7c890bb0eb15ba0ebd325df3285e7",
                            "k8s.gcr.io/cpa/cluster-proportional-autoscaler:1.8.3"
                        ],
                        "sizeBytes": 40647382
                    },
                    {
                        "names": [
                            "k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f",
                            "k8s.gcr.io/pause:3.2"
                        ],
                        "sizeBytes": 682696
                    }
                ],
                "volumesAttached": [
                    {
                        "name": "kubernetes.io/csi/csi-hostpath-provisioning-2356^2b8996e9-bd75-11eb-b4d9-9e5fb9661a22",
                        "devicePath": ""
                    }
                ]
            }
        }
    ]
}
{
    "kind": "EventList",
    "apiVersion": "v1",
    "metadata": {
        "resourceVersion": "727"
    },
    "items": [
        {
            "metadata": {
                "name": "coredns-autoscaler-6f594f4c58-llsrv.16825b718433a12a",
                "namespace": "kube-system",
                "uid": "94d7b545-8c87-4c22-b527-a59ce6d3187a",
                "resourceVersion": "92",
                "creationTimestamp": "2021-05-25T16:15:58Z"
            },
            "involvedObject": {
                "kind": "Pod",
                "namespace": "kube-system",
                "name": "coredns-autoscaler-6f594f4c58-llsrv",
                "uid": "241c15dc-5c2a-4291-8758-65f0fa836688",
                "apiVersion": "v1",
                "resourceVersion": "423"
            },
            "reason": "FailedScheduling",
            "message": "0/1 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.",
            "source": {
                "component": "default-scheduler"
            },
            "firstTimestamp": "2021-05-25T16:15:58Z",
            "lastTimestamp": "2021-05-25T16:16:19Z",
            "count": 5,
            "type": "Warning",
            "eventTime": null,
            "reportingComponent": "",
            "reportingInstance": ""
        },
        {
            "metadata": {
                "name": "coredns-autoscaler-6f594f4c58-llsrv.16825b84a63a9c3b",
                "namespace": "kube-system",
                "uid": "9dc4ff25-a53e-494b-babc-ba07b7e32f5c",
                "resourceVersion": "94",
                "creationTimestamp": "2021-05-25T16:17:21Z"
            },
            "involvedObject": {
                "kind": "Pod",
                "namespace": "kube-system",
                "name": "coredns-autoscaler-6f594f4c58-llsrv",
                "uid": "241c15dc-5c2a-4291-8758-65f0fa836688",
                "apiVersion": "v1",
                "resourceVersion": "436"
            },
            "reason": "FailedScheduling",
            "message": "0/2 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.",
            "source": {
                "component": "default-scheduler"
            },
            "firstTimestamp": "2021-05-25T16:17:21Z",
            "lastTimestamp": "2021-05-25T16:17:21Z",
            "count": 1,
            "type": "Warning",
            "eventTime": null,
            "reportingComponent": "",
            "reportingInstance": ""
        },
        {
            "metadata": {
                "name": "coredns-autoscaler-6f594f4c58-llsrv.16825b87135bb2c7",
                "namespace": "kube-system",
                "uid": "6b63c88e-a89b-4b30-ba7d-2f2d680c490c",
                "resourceVersion": "101",
                "creationTimestamp": "2021-05-25T16:17:31Z"
            },
            "involvedObject": {
                "kind": "Pod",
                "namespace": "kube-system",
                "name": "coredns-autoscaler-6f594f4c58-llsrv",
                "uid": "241c15dc-5c2a-4291-8758-65f0fa836688",
                "apiVersion": "v1",
                "resourceVersion": "647"
            },
            "reason": "FailedScheduling",
            "message": "0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.",
            "source": {
                "component": "default-scheduler"
            },
            "firstTimestamp": "2021-05-25T16:17:31Z",
            "lastTimestamp": "2021-05-25T16:17:31Z",
            "count": 1,
            "type": "Warning",
            "eventTime": null,
            "reportingComponent": "",
            "reportingInstance": ""
        },
        {
            "metadata": {
                "name": "coredns-autoscaler-6f594f4c58-llsrv.16825b8968221360",
                "namespace": "kube-system",
                "uid": "931866ce-72c7-4beb-aeea-0c5c0f6757fd",
                "resourceVersion": "114",
                "creationTimestamp": "2021-05-25T16:17:41Z"
            },
            "involvedObject": {
                "kind": "Pod",
                "namespace": "kube-system",
                "name": "coredns-autoscaler-6f594f4c58-llsrv",
                "uid": "241c15dc-5c2a-4291-8758-65f0fa836688",
                "apiVersion": "v1",
                "resourceVersion": "698"
            },
            "reason": "Scheduled",
            "message": "Successfully assigned kube-system/coredns-autoscaler-6f594f4c58-llsrv to ip-172-20-60-66.eu-west-3.compute.internal",
            "source": {
                "component": "default-scheduler"
            },
            "firstTimestamp": "2021-05-25T16:17:41Z",
            "lastTimestamp": "2021-05-25T16:17:41Z",
            "count": 1,
            "type": "Normal",
            "eventTime": null,
            "reportingComponent": "",
            "reportingInstance": ""
        },
        {
            "metadata": {
                "name": "coredns-autoscaler-6f594f4c58-llsrv.16825b8990e9cce0",
                "namespace": "kube-system",
                "uid": "7c2c24db-98bf-48e2-aaf4-589b5cd0efd6",
                "resourceVersion": "205",
                "creationTimestamp": "2021-05-25T16:17:47Z"
            },
            "involvedObject": {
                "kind": "Pod",
                "namespace": "kube-system",
                "name": "coredns-autoscaler-6f594f4c58-llsrv",
                "uid": "241c15dc-5c2a-4291-8758-65f0fa836688",
                "apiVersion": "v1",
                "resourceVersion": "767",
                "fieldPath": "spec.containers{autoscaler}"
            },
            "reason": "Pulling",
            "message": "Pulling image \"k8s.gcr.io/cpa/cluster-proportional-autoscaler:1.8.3\"",
            "source": {
                "component": "kubelet",
                "host": "ip-172-20-60-66.eu-west-3.compute.internal"
            },
            "firstTimestamp": "2021-05-25T16:17:42Z",
            "lastTimestamp": "2021-05-25T16:17:42Z",
            "count": 1,
            "type": "Normal",
            "eventTime": null,
            "reportingComponent": "",
            "reportingInstance": ""
        },
        {
            "metadata": {
                "name": "coredns-autoscaler-6f594f4c58-llsrv.16825b89ed59a940",
                "namespace": "kube-system",
                "uid": "821941d7-b86b-4e2a-b05a-b6ccc5b271e0",
                "resourceVersion": "208",
                "creationTimestamp": "2021-05-25T16:17:47Z"
            },
            "involvedObject": {
                "kind": "Pod",
                "namespace": "kube-system",
                "name": "coredns-autoscaler-6f594f4c58-llsrv",
                "uid": "241c15dc-5c2a-4291-8758-65f0fa836688",
                "apiVersion": "v1",
                "resourceVersion": "767",
                "fieldPath": "spec.containers{autoscaler}"
            },
            "reason": "Pulled",
            "message": "Successfully pulled image \"k8s.gcr.io/cpa/cluster-proportional-autoscaler:1.8.3\" in 1.550813599s",
            "source": {
                "component": "kubelet",
                "host": "ip-172-20-60-66.eu-west-3.compute.internal"
            },
            "firstTimestamp": "2021-05-25T16:17:43Z",
            "lastTimestamp": "2021-05-25T16:17:43Z",
            "count": 1,
            "type": "Normal",
            "eventTime": null,
            "reportingComponent": "",
            "reportingInstance": ""
        },
        {
            "metadata": {
                "name": "coredns-autoscaler-6f594f4c58-llsrv.16825b89f41bec23",
                "namespace": "kube-system",
                "uid": "faa3a9df-9d1c-4644-914d-afda88b9d6ca",
                "resourceVersion": "211",
                "creationTimestamp": "2021-05-25T16:17:47Z"
            },
            "involvedObject": {
                "kind": "Pod",
                "namespace": "kube-system",
                "name": "coredns-autoscaler-6f594f4c58-llsrv",
                "uid": "241c15dc-5c2a-4291-8758-65f0fa836688",
                "apiVersion": "v1",
                "resourceVersion": "767",
                "fieldPath": "spec.containers{autoscaler}"
            },
            "reason": "Created",
            "message": "Created container autoscaler",
            "source": {
                "component": "kubelet",
                "host": "ip-172-20-60-66.eu-west-3.compute.internal"
            },
            "firstTimestamp": "2021-05-25T16:17:43Z",
            "lastTimestamp": "2021-05-25T16:17:43Z",
            "count": 1,
            "type": "Normal",
            "eventTime": null,
            "reportingComponent": "",
            "reportingInstance": ""
        },
        {
            "metadata": {
                "name": "coredns-autoscaler-6f594f4c58-llsrv.16825b89fa5cb355",
                "namespace": "kube-system",
                "uid": "d79a032d-2610-4f4c-a9cb-369b98f86acb",
                "resourceVersion": "224",
                "creationTimestamp": "2021-05-25T16:17:47Z"
            },
            "involvedObject": {
                "kind": "Pod",
                "namespace": "kube-system",
                "name": "coredns-autoscaler-6f594f4c58-llsrv",
                "uid": "241c15dc-5c2a-4291-8758-65f0fa836688",
                "apiVersion": "v1",
                "resourceVersion": "767",
                "fieldPath": "spec.containers{autoscaler}"
            },
            "reason": "Started",
            "message": "Started container autoscaler",
            "source": {
                "component": "kubelet",
                "host": "ip-172-20-60-66.eu-west-3.compute.internal"
            },
            "firstTimestamp": "2021-05-25T16:17:43Z",
            "lastTimestamp": "2021-05-25T16:17:43Z",
            "count": 1,
            "type": "Normal",
            "eventTime": null,
            "reportingComponent": "",
            "reportingInstance": ""
        },
        {
            "metadata": {
                "name": "coredns-autoscaler-6f594f4c58.16825b7184be7b0d",
                "namespace": "kube-system",
                "uid": "39fec642-ceb3-49c4-bdca-f28b08320dc0",
                "resourceVersion": "64",
                "creationTimestamp": "2021-05-25T16:15:58Z"
            },
            "involvedObject": {
                "kind": "ReplicaSet",
                "namespace": "kube-system",
                "name": "coredns-autoscaler-6f594f4c58",
                "uid": "a3792740-f566-46df-a970-cf5e57d9bd6b",
                "apiVersion": "apps/v1",
                "resourceVersion": "416"
            },
            "reason": "SuccessfulCreate",
            "message": "Created pod: coredns-autoscaler-6f594f4c58-llsrv",
            "source": {
                "component": "replicaset-controller"
            },
            "firstTimestamp": "2021-05-25T16:15:58Z",
            "lastTimestamp": "2021-05-25T16:15:58Z",
            "count": 1,
            "type": "Normal",
            "eventTime": null,
            "reportingComponent": "",
            "reportingInstance": ""
        },
        {
            "metadata": {
                "name": "coredns-autoscaler.16825b718267f31b",
                "namespace": "kube-system",
                "uid": "0a0afea6-2097-4b8d-a457-61112f01b121",
                "resourceVersion": "62",
                "creationTimestamp": "2021-05-25T16:15:58Z"
            },
            "involvedObject": {
                "kind": "Deployment",
                "namespace": "kube-system",
                "name": "coredns-autoscaler",
                "uid": "8f8b575d-545d-4a32-92e5-f64c1d481b43",
                "apiVersion": "apps/v1",
                "resourceVersion": "361"
            },
            "reason": "ScalingReplicaSet",
            "message": "Scaled up replica set coredns-autoscaler-6f594f4c58 to 1",
            "source": {
                "component": "deployment-controller"
            },
            "firstTimestamp": "2021-05-25T16:15:58Z",
            "lastTimestamp": "2021-05-25T16:15:58Z",
            "count": 1,
            "type": "Normal",
            "eventTime": null,
            "reportingComponent": "",
            "reportingInstance": ""
        },
        {
            "metadata": {
                "name": "coredns-f45c4bf76-9lfbv.16825b8a0b84792d",
                "namespace": "kube-system",
                "uid": "c78c22e2-ab59-4729-b900-32fd6946ae53",
                "resourceVersion": "159",
                "creationTimestamp": "2021-05-25T16:17:44Z"
            },
            "involvedObject": {
                "kind": "Pod",
                "namespace": 
\"kube-system\",\n                \"name\": \"coredns-f45c4bf76-9lfbv\",\n                \"uid\": \"a5ec6187-dded-403b-a091-473803043a11\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"785\"\n            },\n            \"reason\": \"Scheduled\",\n            \"message\": \"Successfully assigned kube-system/coredns-f45c4bf76-9lfbv to ip-172-20-40-186.eu-west-3.compute.internal\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2021-05-25T16:17:44Z\",\n            \"lastTimestamp\": \"2021-05-25T16:17:44Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-f45c4bf76-9lfbv.16825b8a33b33997\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"4973adb2-93ac-4201-8f83-83e64efea363\",\n                \"resourceVersion\": \"206\",\n                \"creationTimestamp\": \"2021-05-25T16:17:47Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-f45c4bf76-9lfbv\",\n                \"uid\": \"a5ec6187-dded-403b-a091-473803043a11\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"788\"\n            },\n            \"reason\": \"FailedCreatePodSandBox\",\n            \"message\": \"Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container \\\"d2d9017947ffc37c01e4aa9a7c5d347ef1787e322438e97aa5093aedd1caf535\\\" network for pod \\\"coredns-f45c4bf76-9lfbv\\\": networkPlugin cni failed to set up pod \\\"coredns-f45c4bf76-9lfbv_kube-system\\\" network: open /run/flannel/subnet.env: no such file or directory\",\n            
\"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-40-186.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-25T16:17:44Z\",\n            \"lastTimestamp\": \"2021-05-25T16:17:44Z\",\n            \"count\": 1,\n            \"type\": \"Warning\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-f45c4bf76-9lfbv.16825b8a492254f8\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"40686020-5849-482d-9f0b-31f46169e3c6\",\n                \"resourceVersion\": \"289\",\n                \"creationTimestamp\": \"2021-05-25T16:17:47Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-f45c4bf76-9lfbv\",\n                \"uid\": \"a5ec6187-dded-403b-a091-473803043a11\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"788\"\n            },\n            \"reason\": \"SandboxChanged\",\n            \"message\": \"Pod sandbox changed, it will be killed and re-created.\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-40-186.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-25T16:17:45Z\",\n            \"lastTimestamp\": \"2021-05-25T16:17:56Z\",\n            \"count\": 12,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-f45c4bf76-9lfbv.16825b8a5cd063d2\",\n                \"namespace\": \"kube-system\",\n                \"uid\": 
\"520972d1-867b-4973-825e-e91629a62c7e\",\n                \"resourceVersion\": \"222\",\n                \"creationTimestamp\": \"2021-05-25T16:17:47Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-f45c4bf76-9lfbv\",\n                \"uid\": \"a5ec6187-dded-403b-a091-473803043a11\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"788\"\n            },\n            \"reason\": \"FailedCreatePodSandBox\",\n            \"message\": \"Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container \\\"f0975b72c3efd5787c939c28da4b6b084f20d4038e109f442dfa799798152540\\\" network for pod \\\"coredns-f45c4bf76-9lfbv\\\": networkPlugin cni failed to set up pod \\\"coredns-f45c4bf76-9lfbv_kube-system\\\" network: open /run/flannel/subnet.env: no such file or directory\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-40-186.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-25T16:17:45Z\",\n            \"lastTimestamp\": \"2021-05-25T16:17:45Z\",\n            \"count\": 1,\n            \"type\": \"Warning\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-f45c4bf76-9lfbv.16825b8a98b9e901\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"162a94df-b13c-41ba-aecd-a08a2b00ffcc\",\n                \"resourceVersion\": \"229\",\n                \"creationTimestamp\": \"2021-05-25T16:17:47Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-f45c4bf76-9lfbv\",\n                
\"uid\": \"a5ec6187-dded-403b-a091-473803043a11\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"788\"\n            },\n            \"reason\": \"FailedCreatePodSandBox\",\n            \"message\": \"Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container \\\"8fb717fc1fce819c66def4306a8f3b1d3d4a32b39d9db16b0b1e16752ce6fec2\\\" network for pod \\\"coredns-f45c4bf76-9lfbv\\\": networkPlugin cni failed to set up pod \\\"coredns-f45c4bf76-9lfbv_kube-system\\\" network: open /run/flannel/subnet.env: no such file or directory\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-40-186.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-25T16:17:46Z\",\n            \"lastTimestamp\": \"2021-05-25T16:17:46Z\",\n            \"count\": 1,\n            \"type\": \"Warning\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-f45c4bf76-9lfbv.16825b8ad59fde95\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"b9cb244a-0146-44b5-87a7-83c9debab6be\",\n                \"resourceVersion\": \"235\",\n                \"creationTimestamp\": \"2021-05-25T16:17:48Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-f45c4bf76-9lfbv\",\n                \"uid\": \"a5ec6187-dded-403b-a091-473803043a11\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"788\"\n            },\n            \"reason\": \"FailedCreatePodSandBox\",\n            \"message\": \"Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container 
\\\"d1e656115ee277790ead937b50b77b2f1cf4a8b345b50797bfabd32cbb5cccfd\\\" network for pod \\\"coredns-f45c4bf76-9lfbv\\\": networkPlugin cni failed to set up pod \\\"coredns-f45c4bf76-9lfbv_kube-system\\\" network: open /run/flannel/subnet.env: no such file or directory\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-40-186.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-25T16:17:47Z\",\n            \"lastTimestamp\": \"2021-05-25T16:17:47Z\",\n            \"count\": 1,\n            \"type\": \"Warning\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-f45c4bf76-9lfbv.16825b8b1134e9df\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"b6b3efe7-142c-4d57-bfd3-b9ab7fb90189\",\n                \"resourceVersion\": \"241\",\n                \"creationTimestamp\": \"2021-05-25T16:17:48Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-f45c4bf76-9lfbv\",\n                \"uid\": \"a5ec6187-dded-403b-a091-473803043a11\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"788\"\n            },\n            \"reason\": \"FailedCreatePodSandBox\",\n            \"message\": \"Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container \\\"997b16e085cb6a4c2af6a6c66a12ccc0d3a76a65cc05daa4175510b5b3a603d0\\\" network for pod \\\"coredns-f45c4bf76-9lfbv\\\": networkPlugin cni failed to set up pod \\\"coredns-f45c4bf76-9lfbv_kube-system\\\" network: open /run/flannel/subnet.env: no such file or directory\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                
\"host\": \"ip-172-20-40-186.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-25T16:17:48Z\",\n            \"lastTimestamp\": \"2021-05-25T16:17:48Z\",\n            \"count\": 1,\n            \"type\": \"Warning\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-f45c4bf76-9lfbv.16825b8b4edda121\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"9d5994d7-b1b4-4fa4-895d-20cdc6c7861b\",\n                \"resourceVersion\": \"253\",\n                \"creationTimestamp\": \"2021-05-25T16:17:49Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-f45c4bf76-9lfbv\",\n                \"uid\": \"a5ec6187-dded-403b-a091-473803043a11\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"788\"\n            },\n            \"reason\": \"FailedCreatePodSandBox\",\n            \"message\": \"Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container \\\"3774979cc874f940e934f79b0b5d71cb7d7040da0f7d827fe0e63d876188e3ff\\\" network for pod \\\"coredns-f45c4bf76-9lfbv\\\": networkPlugin cni failed to set up pod \\\"coredns-f45c4bf76-9lfbv_kube-system\\\" network: open /run/flannel/subnet.env: no such file or directory\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-40-186.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-25T16:17:49Z\",\n            \"lastTimestamp\": \"2021-05-25T16:17:49Z\",\n            \"count\": 1,\n            \"type\": \"Warning\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": 
\"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-f45c4bf76-9lfbv.16825b8b8a091c0d\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"5e97615c-f016-4bc8-88cc-966a5eb10b69\",\n                \"resourceVersion\": \"265\",\n                \"creationTimestamp\": \"2021-05-25T16:17:50Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-f45c4bf76-9lfbv\",\n                \"uid\": \"a5ec6187-dded-403b-a091-473803043a11\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"788\"\n            },\n            \"reason\": \"FailedCreatePodSandBox\",\n            \"message\": \"Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container \\\"ff88cee8d202ac5ea7613f2094cdc496de515b4640957a25ce296dc4440f723f\\\" network for pod \\\"coredns-f45c4bf76-9lfbv\\\": networkPlugin cni failed to set up pod \\\"coredns-f45c4bf76-9lfbv_kube-system\\\" network: open /run/flannel/subnet.env: no such file or directory\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-40-186.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-25T16:17:50Z\",\n            \"lastTimestamp\": \"2021-05-25T16:17:50Z\",\n            \"count\": 1,\n            \"type\": \"Warning\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-f45c4bf76-9lfbv.16825b8bc714d2f0\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"7451be93-384b-49a0-bf27-879e9e71be3c\",\n                \"resourceVersion\": \"276\",\n                \"creationTimestamp\": \"2021-05-25T16:17:51Z\"\n          
  },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-f45c4bf76-9lfbv\",\n                \"uid\": \"a5ec6187-dded-403b-a091-473803043a11\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"788\"\n            },\n            \"reason\": \"FailedCreatePodSandBox\",\n            \"message\": \"Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container \\\"7cde5d04ed67106bbbf2ce84dad597970568b456b95189c9cfa6b7aa045e40f3\\\" network for pod \\\"coredns-f45c4bf76-9lfbv\\\": networkPlugin cni failed to set up pod \\\"coredns-f45c4bf76-9lfbv_kube-system\\\" network: open /run/flannel/subnet.env: no such file or directory\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-40-186.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-25T16:17:51Z\",\n            \"lastTimestamp\": \"2021-05-25T16:17:51Z\",\n            \"count\": 1,\n            \"type\": \"Warning\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-f45c4bf76-9lfbv.16825b8c03e25eb0\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"96eb2373-d13b-4eea-936b-d13af2a546da\",\n                \"resourceVersion\": \"282\",\n                \"creationTimestamp\": \"2021-05-25T16:17:52Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-f45c4bf76-9lfbv\",\n                \"uid\": \"a5ec6187-dded-403b-a091-473803043a11\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"788\"\n            },\n            
\"reason\": \"FailedCreatePodSandBox\",\n            \"message\": \"Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container \\\"58305916b3d06ce09fd69789b97fe2022293fcb30ed3e85195ab16f0d3c987ec\\\" network for pod \\\"coredns-f45c4bf76-9lfbv\\\": networkPlugin cni failed to set up pod \\\"coredns-f45c4bf76-9lfbv_kube-system\\\" network: open /run/flannel/subnet.env: no such file or directory\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-40-186.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-25T16:17:52Z\",\n            \"lastTimestamp\": \"2021-05-25T16:17:52Z\",\n            \"count\": 1,\n            \"type\": \"Warning\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-f45c4bf76-9lfbv.16825b8c400ce0e3\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"c34ce116-b1c7-488e-8e2e-6da608a6966f\",\n                \"resourceVersion\": \"290\",\n                \"creationTimestamp\": \"2021-05-25T16:17:53Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-f45c4bf76-9lfbv\",\n                \"uid\": \"a5ec6187-dded-403b-a091-473803043a11\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"788\"\n            },\n            \"reason\": \"FailedCreatePodSandBox\",\n            \"message\": \"(combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container \\\"3d18a168fe7f0877e277f87fd0fe69497dcf30383be141cee4fde492954b2e2d\\\" network for pod \\\"coredns-f45c4bf76-9lfbv\\\": networkPlugin cni failed to set up pod 
\\\"coredns-f45c4bf76-9lfbv_kube-system\\\" network: open /run/flannel/subnet.env: no such file or directory\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-40-186.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-25T16:17:53Z\",\n            \"lastTimestamp\": \"2021-05-25T16:17:56Z\",\n            \"count\": 4,\n            \"type\": \"Warning\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-f45c4bf76-xv9v7.16825b71883c8161\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"3c6fed40-a1f9-4c49-bfd1-28ff04fffbe6\",\n                \"resourceVersion\": \"93\",\n                \"creationTimestamp\": \"2021-05-25T16:15:58Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-f45c4bf76-xv9v7\",\n                \"uid\": \"ad454ba2-7604-48f3-a92c-46c6e6468fba\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"424\"\n            },\n            \"reason\": \"FailedScheduling\",\n            \"message\": \"0/1 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2021-05-25T16:15:58Z\",\n            \"lastTimestamp\": \"2021-05-25T16:16:19Z\",\n            \"count\": 5,\n            \"type\": \"Warning\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-f45c4bf76-xv9v7.16825b84a79cb560\",\n 
               \"namespace\": \"kube-system\",\n                \"uid\": \"5da7bdf2-0a31-4317-9b49-2e69f52208cc\",\n                \"resourceVersion\": \"95\",\n                \"creationTimestamp\": \"2021-05-25T16:17:21Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-f45c4bf76-xv9v7\",\n                \"uid\": \"ad454ba2-7604-48f3-a92c-46c6e6468fba\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"439\"\n            },\n            \"reason\": \"FailedScheduling\",\n            \"message\": \"0/2 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2021-05-25T16:17:21Z\",\n            \"lastTimestamp\": \"2021-05-25T16:17:21Z\",\n            \"count\": 1,\n            \"type\": \"Warning\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-f45c4bf76-xv9v7.16825b8713e3036b\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"3f61458c-2841-46d0-9f47-fbf7e9f58ce0\",\n                \"resourceVersion\": \"102\",\n                \"creationTimestamp\": \"2021-05-25T16:17:31Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-f45c4bf76-xv9v7\",\n                \"uid\": \"ad454ba2-7604-48f3-a92c-46c6e6468fba\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"650\"\n            },\n            \"reason\": 
\"FailedScheduling\",\n            \"message\": \"0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2021-05-25T16:17:31Z\",\n            \"lastTimestamp\": \"2021-05-25T16:17:31Z\",\n            \"count\": 1,\n            \"type\": \"Warning\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-f45c4bf76-xv9v7.16825b89a3c664b2\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"2a0860b1-2b52-4a35-97ba-00df27bab657\",\n                \"resourceVersion\": \"139\",\n                \"creationTimestamp\": \"2021-05-25T16:17:42Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-f45c4bf76-xv9v7\",\n                \"uid\": \"ad454ba2-7604-48f3-a92c-46c6e6468fba\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"699\"\n            },\n            \"reason\": \"Scheduled\",\n            \"message\": \"Successfully assigned kube-system/coredns-f45c4bf76-xv9v7 to ip-172-20-48-192.eu-west-3.compute.internal\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2021-05-25T16:17:42Z\",\n            \"lastTimestamp\": \"2021-05-25T16:17:42Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                
\"name\": \"coredns-f45c4bf76-xv9v7.16825b89ccd2121e\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"d4ecf292-fe8a-47ac-9b7b-f076f48bb9fb\",\n                \"resourceVersion\": \"266\",\n                \"creationTimestamp\": \"2021-05-25T16:17:50Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-f45c4bf76-xv9v7\",\n                \"uid\": \"ad454ba2-7604-48f3-a92c-46c6e6468fba\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"775\",\n                \"fieldPath\": \"spec.containers{coredns}\"\n            },\n            \"reason\": \"Pulling\",\n            \"message\": \"Pulling image \\\"coredns/coredns:1.8.3\\\"\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-48-192.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-25T16:17:43Z\",\n            \"lastTimestamp\": \"2021-05-25T16:17:43Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-f45c4bf76-xv9v7.16825b8a8dcf050a\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"87ab904b-cf77-4246-ab33-8ee344ea59eb\",\n                \"resourceVersion\": \"268\",\n                \"creationTimestamp\": \"2021-05-25T16:17:51Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-f45c4bf76-xv9v7\",\n                \"uid\": \"ad454ba2-7604-48f3-a92c-46c6e6468fba\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"775\",\n              
  \"fieldPath\": \"spec.containers{coredns}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Successfully pulled image \\\"coredns/coredns:1.8.3\\\" in 3.237783311s\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-48-192.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-25T16:17:46Z\",\n            \"lastTimestamp\": \"2021-05-25T16:17:46Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-f45c4bf76-xv9v7.16825b8a94f8b551\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"1ec79fbf-202b-4e33-8c22-8cb5af82670d\",\n                \"resourceVersion\": \"270\",\n                \"creationTimestamp\": \"2021-05-25T16:17:51Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-f45c4bf76-xv9v7\",\n                \"uid\": \"ad454ba2-7604-48f3-a92c-46c6e6468fba\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"775\",\n                \"fieldPath\": \"spec.containers{coredns}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container coredns\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-48-192.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-25T16:17:46Z\",\n            \"lastTimestamp\": \"2021-05-25T16:17:46Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n   
     },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-f45c4bf76-xv9v7.16825b8a9d21febb\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"3bad22b7-4bc4-4728-b908-d67834905809\",\n                \"resourceVersion\": \"273\",\n                \"creationTimestamp\": \"2021-05-25T16:17:51Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-f45c4bf76-xv9v7\",\n                \"uid\": \"ad454ba2-7604-48f3-a92c-46c6e6468fba\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"775\",\n                \"fieldPath\": \"spec.containers{coredns}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container coredns\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-48-192.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-25T16:17:46Z\",\n            \"lastTimestamp\": \"2021-05-25T16:17:46Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-f45c4bf76.16825b718551f024\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"a4e43186-eca6-4609-9154-6282f62cb9b3\",\n                \"resourceVersion\": \"67\",\n                \"creationTimestamp\": \"2021-05-25T16:15:58Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"ReplicaSet\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-f45c4bf76\",\n                \"uid\": \"138b85f1-3021-4e0d-8a7d-d442e841cb7d\",\n                \"apiVersion\": \"apps/v1\",\n               
 \"resourceVersion\": \"417\"\n            },\n            \"reason\": \"SuccessfulCreate\",\n            \"message\": \"Created pod: coredns-f45c4bf76-xv9v7\",\n            \"source\": {\n                \"component\": \"replicaset-controller\"\n            },\n            \"firstTimestamp\": \"2021-05-25T16:15:58Z\",\n            \"lastTimestamp\": \"2021-05-25T16:15:58Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-f45c4bf76.16825b8a0ad197b2\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"eeda986c-cdb4-46a4-b085-3f3745e6b85d\",\n                \"resourceVersion\": \"158\",\n                \"creationTimestamp\": \"2021-05-25T16:17:44Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"ReplicaSet\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-f45c4bf76\",\n                \"uid\": \"138b85f1-3021-4e0d-8a7d-d442e841cb7d\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"784\"\n            },\n            \"reason\": \"SuccessfulCreate\",\n            \"message\": \"Created pod: coredns-f45c4bf76-9lfbv\",\n            \"source\": {\n                \"component\": \"replicaset-controller\"\n            },\n            \"firstTimestamp\": \"2021-05-25T16:17:44Z\",\n            \"lastTimestamp\": \"2021-05-25T16:17:44Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns.16825b71826bc24e\",\n                \"namespace\": \"kube-system\",\n                \"uid\": 
\"824fc903-727f-4d97-be90-20f747d4bc00\",\n                \"resourceVersion\": \"63\",\n                \"creationTimestamp\": \"2021-05-25T16:15:58Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Deployment\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns\",\n                \"uid\": \"9c1dabbb-17a0-40c8-a70e-68d10a11200b\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"352\"\n            },\n            \"reason\": \"ScalingReplicaSet\",\n            \"message\": \"Scaled up replica set coredns-f45c4bf76 to 1\",\n            \"source\": {\n                \"component\": \"deployment-controller\"\n            },\n            \"firstTimestamp\": \"2021-05-25T16:15:58Z\",\n            \"lastTimestamp\": \"2021-05-25T16:15:58Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns.16825b8a0a6e2685\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"86cff0ca-141c-4577-896b-6756eb5928ca\",\n                \"resourceVersion\": \"157\",\n                \"creationTimestamp\": \"2021-05-25T16:17:44Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Deployment\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns\",\n                \"uid\": \"9c1dabbb-17a0-40c8-a70e-68d10a11200b\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"783\"\n            },\n            \"reason\": \"ScalingReplicaSet\",\n            \"message\": \"Scaled up replica set coredns-f45c4bf76 to 2\",\n            \"source\": {\n                \"component\": \"deployment-controller\"\n            },\n            \"firstTimestamp\": 
\"2021-05-25T16:17:44Z\",\n            \"lastTimestamp\": \"2021-05-25T16:17:44Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"dns-controller-54665f7b78-4c8mj.16825b7184b6a26f\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"ac620cd3-36b9-439e-9a16-2932d7ceb472\",\n                \"resourceVersion\": \"68\",\n                \"creationTimestamp\": \"2021-05-25T16:15:58Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"dns-controller-54665f7b78-4c8mj\",\n                \"uid\": \"c0ea348c-a62f-4909-ba33-8d48e4fe51ef\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"422\"\n            },\n            \"reason\": \"Scheduled\",\n            \"message\": \"Successfully assigned kube-system/dns-controller-54665f7b78-4c8mj to ip-172-20-44-17.eu-west-3.compute.internal\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2021-05-25T16:15:58Z\",\n            \"lastTimestamp\": \"2021-05-25T16:15:58Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"dns-controller-54665f7b78-4c8mj.16825b71a42c64b9\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"d5ad204b-992e-4dc5-8ef2-83e756e3969b\",\n                \"resourceVersion\": \"71\",\n                \"creationTimestamp\": \"2021-05-25T16:15:59Z\"\n            },\n            \"involvedObject\": {\n              
  \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"dns-controller-54665f7b78-4c8mj\",\n                \"uid\": \"c0ea348c-a62f-4909-ba33-8d48e4fe51ef\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"425\",\n                \"fieldPath\": \"spec.containers{dns-controller}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"k8s.gcr.io/kops/dns-controller:1.21.0-beta.2\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-44-17.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-25T16:15:59Z\",\n            \"lastTimestamp\": \"2021-05-25T16:15:59Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"dns-controller-54665f7b78-4c8mj.16825b71a61224e5\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"c5d6ed52-bf12-40d2-ace7-9094d0edc001\",\n                \"resourceVersion\": \"72\",\n                \"creationTimestamp\": \"2021-05-25T16:15:59Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"dns-controller-54665f7b78-4c8mj\",\n                \"uid\": \"c0ea348c-a62f-4909-ba33-8d48e4fe51ef\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"425\",\n                \"fieldPath\": \"spec.containers{dns-controller}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container dns-controller\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": 
\"ip-172-20-44-17.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-25T16:15:59Z\",\n            \"lastTimestamp\": \"2021-05-25T16:15:59Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"dns-controller-54665f7b78-4c8mj.16825b71ac2a3890\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"c2b0c5a4-8c51-4275-8ef3-f2b13e3a899a\",\n                \"resourceVersion\": \"73\",\n                \"creationTimestamp\": \"2021-05-25T16:15:59Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"dns-controller-54665f7b78-4c8mj\",\n                \"uid\": \"c0ea348c-a62f-4909-ba33-8d48e4fe51ef\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"425\",\n                \"fieldPath\": \"spec.containers{dns-controller}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container dns-controller\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-44-17.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-25T16:15:59Z\",\n            \"lastTimestamp\": \"2021-05-25T16:15:59Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"dns-controller-54665f7b78.16825b7184ed9b11\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"0f430a51-0f86-4a1e-b6bd-7b9d0aba5a6f\",\n                
\"resourceVersion\": \"66\",\n                \"creationTimestamp\": \"2021-05-25T16:15:58Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"ReplicaSet\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"dns-controller-54665f7b78\",\n                \"uid\": \"67104e94-d746-41de-aafc-7cd19d6ae218\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"414\"\n            },\n            \"reason\": \"SuccessfulCreate\",\n            \"message\": \"Created pod: dns-controller-54665f7b78-4c8mj\",\n            \"source\": {\n                \"component\": \"replicaset-controller\"\n            },\n            \"firstTimestamp\": \"2021-05-25T16:15:58Z\",\n            \"lastTimestamp\": \"2021-05-25T16:15:58Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"dns-controller.16825b7181c8db73\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"5b2eaf2b-583a-4387-a837-27059704167c\",\n                \"resourceVersion\": \"61\",\n                \"creationTimestamp\": \"2021-05-25T16:15:58Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Deployment\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"dns-controller\",\n                \"uid\": \"fca60403-da67-4f35-afe2-16639d60f7c0\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"307\"\n            },\n            \"reason\": \"ScalingReplicaSet\",\n            \"message\": \"Scaled up replica set dns-controller-54665f7b78 to 1\",\n            \"source\": {\n                \"component\": \"deployment-controller\"\n            },\n            \"firstTimestamp\": \"2021-05-25T16:15:58Z\",\n     
       \"lastTimestamp\": \"2021-05-25T16:15:58Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"etcd-manager-events-ip-172-20-44-17.eu-west-3.compute.internal.16825b6503939c9f\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"8db60483-e591-433b-bde8-8de27c5bf143\",\n                \"resourceVersion\": \"32\",\n                \"creationTimestamp\": \"2021-05-25T16:15:54Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"etcd-manager-events-ip-172-20-44-17.eu-west-3.compute.internal\",\n                \"uid\": \"fa432a74851351548fcf6f961a301653\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{etcd-manager}\"\n            },\n            \"reason\": \"Pulling\",\n            \"message\": \"Pulling image \\\"k8s.gcr.io/etcdadm/etcd-manager:3.0.20210430\\\"\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-44-17.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-25T16:15:05Z\",\n            \"lastTimestamp\": \"2021-05-25T16:15:05Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"etcd-manager-events-ip-172-20-44-17.eu-west-3.compute.internal.16825b6701d8a4de\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"e98e45e7-dcc1-46d2-87ad-62f138ba155c\",\n                \"resourceVersion\": \"47\",\n                
\"creationTimestamp\": \"2021-05-25T16:15:57Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"etcd-manager-events-ip-172-20-44-17.eu-west-3.compute.internal\",\n                \"uid\": \"fa432a74851351548fcf6f961a301653\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{etcd-manager}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Successfully pulled image \\\"k8s.gcr.io/etcdadm/etcd-manager:3.0.20210430\\\" in 8.560893085s\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-44-17.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-25T16:15:13Z\",\n            \"lastTimestamp\": \"2021-05-25T16:15:13Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"etcd-manager-events-ip-172-20-44-17.eu-west-3.compute.internal.16825b67589c8fac\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"f63e94f4-6d48-4e3c-a726-2597a3d573ed\",\n                \"resourceVersion\": \"50\",\n                \"creationTimestamp\": \"2021-05-25T16:15:57Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"etcd-manager-events-ip-172-20-44-17.eu-west-3.compute.internal\",\n                \"uid\": \"fa432a74851351548fcf6f961a301653\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{etcd-manager}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container etcd-manager\",\n 
           \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-44-17.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-25T16:15:15Z\",\n            \"lastTimestamp\": \"2021-05-25T16:15:15Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"etcd-manager-events-ip-172-20-44-17.eu-west-3.compute.internal.16825b67619d0ea1\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"4f892f63-3bd7-47df-b59a-6a120d6f069e\",\n                \"resourceVersion\": \"51\",\n                \"creationTimestamp\": \"2021-05-25T16:15:57Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"etcd-manager-events-ip-172-20-44-17.eu-west-3.compute.internal\",\n                \"uid\": \"fa432a74851351548fcf6f961a301653\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{etcd-manager}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container etcd-manager\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-44-17.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-25T16:15:15Z\",\n            \"lastTimestamp\": \"2021-05-25T16:15:15Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"etcd-manager-main-ip-172-20-44-17.eu-west-3.compute.internal.16825b650e2e197f\",\n           
     \"namespace\": \"kube-system\",\n                \"uid\": \"6490c585-a079-4cc5-b82a-604b4718089c\",\n                \"resourceVersion\": \"37\",\n                \"creationTimestamp\": \"2021-05-25T16:15:55Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"etcd-manager-main-ip-172-20-44-17.eu-west-3.compute.internal\",\n                \"uid\": \"7065c93ba4ef5cca56900f1e582e5a4a\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{etcd-manager}\"\n            },\n            \"reason\": \"Pulling\",\n            \"message\": \"Pulling image \\\"k8s.gcr.io/etcdadm/etcd-manager:3.0.20210430\\\"\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-44-17.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-25T16:15:05Z\",\n            \"lastTimestamp\": \"2021-05-25T16:15:05Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"etcd-manager-main-ip-172-20-44-17.eu-west-3.compute.internal.16825b671c74eef2\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"ddf736c8-83df-4e68-ab07-2ea0d9388f75\",\n                \"resourceVersion\": \"48\",\n                \"creationTimestamp\": \"2021-05-25T16:15:57Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"etcd-manager-main-ip-172-20-44-17.eu-west-3.compute.internal\",\n                \"uid\": \"7065c93ba4ef5cca56900f1e582e5a4a\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": 
\"spec.containers{etcd-manager}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Successfully pulled image \\\"k8s.gcr.io/etcdadm/etcd-manager:3.0.20210430\\\" in 8.829446853s\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-44-17.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-25T16:15:14Z\",\n            \"lastTimestamp\": \"2021-05-25T16:15:14Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"etcd-manager-main-ip-172-20-44-17.eu-west-3.compute.internal.16825b6758976914\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"ad0301a0-0d2a-4c05-89b2-5a9d1e0d889a\",\n                \"resourceVersion\": \"49\",\n                \"creationTimestamp\": \"2021-05-25T16:15:57Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"etcd-manager-main-ip-172-20-44-17.eu-west-3.compute.internal\",\n                \"uid\": \"7065c93ba4ef5cca56900f1e582e5a4a\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{etcd-manager}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container etcd-manager\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-44-17.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-25T16:15:15Z\",\n            \"lastTimestamp\": \"2021-05-25T16:15:15Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n   
         \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"etcd-manager-main-ip-172-20-44-17.eu-west-3.compute.internal.16825b67651b71f3\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"9cd4e927-cdb6-4b34-a18e-35e1610cbd33\",\n                \"resourceVersion\": \"52\",\n                \"creationTimestamp\": \"2021-05-25T16:15:58Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"etcd-manager-main-ip-172-20-44-17.eu-west-3.compute.internal\",\n                \"uid\": \"7065c93ba4ef5cca56900f1e582e5a4a\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{etcd-manager}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container etcd-manager\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-44-17.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-25T16:15:15Z\",\n            \"lastTimestamp\": \"2021-05-25T16:15:15Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kops-controller-l89sm.16825b745ce6f11a\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"837a0891-2933-4d57-bf8e-49d7439db065\",\n                \"resourceVersion\": \"87\",\n                \"creationTimestamp\": \"2021-05-25T16:16:11Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kops-controller-l89sm\",\n                \"uid\": 
\"43eb1337-e146-4448-a96b-d7c09830ab69\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"476\"\n            },\n            \"reason\": \"Scheduled\",\n            \"message\": \"Successfully assigned kube-system/kops-controller-l89sm to ip-172-20-44-17.eu-west-3.compute.internal\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2021-05-25T16:16:11Z\",\n            \"lastTimestamp\": \"2021-05-25T16:16:11Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kops-controller-l89sm.16825b747ad254bc\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"9677b5e0-66b2-4daf-8946-333c185e461e\",\n                \"resourceVersion\": \"88\",\n                \"creationTimestamp\": \"2021-05-25T16:16:11Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kops-controller-l89sm\",\n                \"uid\": \"43eb1337-e146-4448-a96b-d7c09830ab69\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"478\",\n                \"fieldPath\": \"spec.containers{kops-controller}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"k8s.gcr.io/kops/kops-controller:1.21.0-beta.2\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-44-17.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-25T16:16:11Z\",\n            \"lastTimestamp\": \"2021-05-25T16:16:11Z\",\n            \"count\": 1,\n            \"type\": 
\"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kops-controller-l89sm.16825b747d1b1d40\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"d39b6961-3a83-404b-99b0-6dd0e9c31544\",\n                \"resourceVersion\": \"89\",\n                \"creationTimestamp\": \"2021-05-25T16:16:11Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kops-controller-l89sm\",\n                \"uid\": \"43eb1337-e146-4448-a96b-d7c09830ab69\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"478\",\n                \"fieldPath\": \"spec.containers{kops-controller}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container kops-controller\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-44-17.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-25T16:16:11Z\",\n            \"lastTimestamp\": \"2021-05-25T16:16:11Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kops-controller-l89sm.16825b7482ce4e61\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"60773b75-277c-4ae7-9f39-344618fa8e0c\",\n                \"resourceVersion\": \"90\",\n                \"creationTimestamp\": \"2021-05-25T16:16:11Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": 
\"kops-controller-l89sm\",\n                \"uid\": \"43eb1337-e146-4448-a96b-d7c09830ab69\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"478\",\n                \"fieldPath\": \"spec.containers{kops-controller}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container kops-controller\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-44-17.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-25T16:16:11Z\",\n            \"lastTimestamp\": \"2021-05-25T16:16:11Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kops-controller-leader.16825b74c543204d\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"2ea9c361-4fe0-48d7-9e74-610c005ee961\",\n                \"resourceVersion\": \"91\",\n                \"creationTimestamp\": \"2021-05-25T16:16:12Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"ConfigMap\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kops-controller-leader\",\n                \"uid\": \"0ef8530e-07f2-4a3d-a490-38c560dc8f2c\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"485\"\n            },\n            \"reason\": \"LeaderElection\",\n            \"message\": \"ip-172-20-44-17_a96c5c49-c3b7-49fc-a530-756d11a25437 became leader\",\n            \"source\": {\n                \"component\": \"ip-172-20-44-17_a96c5c49-c3b7-49fc-a530-756d11a25437\"\n            },\n            \"firstTimestamp\": \"2021-05-25T16:16:12Z\",\n            \"lastTimestamp\": \"2021-05-25T16:16:12Z\",\n            \"count\": 1,\n            \"type\": 
\"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kops-controller.16825b745bdca7b5\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"b4a04f6b-e897-45b8-903e-ee932f765a84\",\n                \"resourceVersion\": \"86\",\n                \"creationTimestamp\": \"2021-05-25T16:16:11Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"DaemonSet\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kops-controller\",\n                \"uid\": \"728c8cb5-28a5-4b43-85df-450f211f716c\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"411\"\n            },\n            \"reason\": \"SuccessfulCreate\",\n            \"message\": \"Created pod: kops-controller-l89sm\",\n            \"source\": {\n                \"component\": \"daemonset-controller\"\n            },\n            \"firstTimestamp\": \"2021-05-25T16:16:11Z\",\n            \"lastTimestamp\": \"2021-05-25T16:16:11Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-apiserver-ip-172-20-44-17.eu-west-3.compute.internal.16825b650d03bafd\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"0de9bd45-7b7c-4a16-af4e-bd802497726e\",\n                \"resourceVersion\": \"53\",\n                \"creationTimestamp\": \"2021-05-25T16:15:54Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-apiserver-ip-172-20-44-17.eu-west-3.compute.internal\",\n                \"uid\": 
\"5a0beb9695c07cad160508d741ccebc2\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-apiserver}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"k8s.gcr.io/kube-apiserver-amd64:v1.21.1\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-44-17.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-25T16:15:05Z\",\n            \"lastTimestamp\": \"2021-05-25T16:15:28Z\",\n            \"count\": 2,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-apiserver-ip-172-20-44-17.eu-west-3.compute.internal.16825b6513573d9a\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"81448d78-0867-4781-ac45-544fe73816df\",\n                \"resourceVersion\": \"54\",\n                \"creationTimestamp\": \"2021-05-25T16:15:55Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-apiserver-ip-172-20-44-17.eu-west-3.compute.internal\",\n                \"uid\": \"5a0beb9695c07cad160508d741ccebc2\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-apiserver}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container kube-apiserver\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-44-17.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-25T16:15:05Z\",\n            \"lastTimestamp\": \"2021-05-25T16:15:28Z\",\n            \"count\": 2,\n        
    \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-apiserver-ip-172-20-44-17.eu-west-3.compute.internal.16825b65308da110\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"d739ac44-f499-4663-a03d-3e599366018b\",\n                \"resourceVersion\": \"55\",\n                \"creationTimestamp\": \"2021-05-25T16:15:56Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-apiserver-ip-172-20-44-17.eu-west-3.compute.internal\",\n                \"uid\": \"5a0beb9695c07cad160508d741ccebc2\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-apiserver}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container kube-apiserver\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-44-17.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-25T16:15:05Z\",\n            \"lastTimestamp\": \"2021-05-25T16:15:28Z\",\n            \"count\": 2,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-apiserver-ip-172-20-44-17.eu-west-3.compute.internal.16825b653106dc9f\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"e0f1d4f2-cbff-460e-8661-4e6448f0e888\",\n                \"resourceVersion\": \"44\",\n                \"creationTimestamp\": \"2021-05-25T16:15:56Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                
\"namespace\": \"kube-system\",\n                \"name\": \"kube-apiserver-ip-172-20-44-17.eu-west-3.compute.internal\",\n                \"uid\": \"5a0beb9695c07cad160508d741ccebc2\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{healthcheck}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"k8s.gcr.io/kops/kube-apiserver-healthcheck:1.21.0-beta.2\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-44-17.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-25T16:15:05Z\",\n            \"lastTimestamp\": \"2021-05-25T16:15:05Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-apiserver-ip-172-20-44-17.eu-west-3.compute.internal.16825b653fc68074\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"1c05186d-393b-41f2-b4eb-360a3c7e35ff\",\n                \"resourceVersion\": \"45\",\n                \"creationTimestamp\": \"2021-05-25T16:15:56Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-apiserver-ip-172-20-44-17.eu-west-3.compute.internal\",\n                \"uid\": \"5a0beb9695c07cad160508d741ccebc2\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{healthcheck}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container healthcheck\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-44-17.eu-west-3.compute.internal\"\n            
},\n            \"firstTimestamp\": \"2021-05-25T16:15:06Z\",\n            \"lastTimestamp\": \"2021-05-25T16:15:06Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-apiserver-ip-172-20-44-17.eu-west-3.compute.internal.16825b65794b5f6a\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"148c5b48-31e7-4616-bb46-186b05533d4b\",\n                \"resourceVersion\": \"46\",\n                \"creationTimestamp\": \"2021-05-25T16:15:56Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-apiserver-ip-172-20-44-17.eu-west-3.compute.internal\",\n                \"uid\": \"5a0beb9695c07cad160508d741ccebc2\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{healthcheck}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container healthcheck\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-44-17.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-25T16:15:07Z\",\n            \"lastTimestamp\": \"2021-05-25T16:15:07Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-controller-manager-ip-172-20-44-17.eu-west-3.compute.internal.16825b64fd4cc42a\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"73f97c33-6ba0-4d52-9242-dc3e7f45fc5d\",\n                \"resourceVersion\": \"30\",\n           
     \"creationTimestamp\": \"2021-05-25T16:15:53Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-controller-manager-ip-172-20-44-17.eu-west-3.compute.internal\",\n                \"uid\": \"0ba0d57c7f44bb779ad73d1fee6aa348\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-controller-manager}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"k8s.gcr.io/kube-controller-manager-amd64:v1.21.1\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-44-17.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-25T16:15:05Z\",\n            \"lastTimestamp\": \"2021-05-25T16:15:05Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-controller-manager-ip-172-20-44-17.eu-west-3.compute.internal.16825b65014ba017\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"f4ad193d-27e0-4fdd-bd2a-fb3bdac213ed\",\n                \"resourceVersion\": \"31\",\n                \"creationTimestamp\": \"2021-05-25T16:15:53Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-controller-manager-ip-172-20-44-17.eu-west-3.compute.internal\",\n                \"uid\": \"0ba0d57c7f44bb779ad73d1fee6aa348\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-controller-manager}\"\n            },\n            \"reason\": \"Created\",\n            
\"message\": \"Created container kube-controller-manager\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-44-17.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-25T16:15:05Z\",\n            \"lastTimestamp\": \"2021-05-25T16:15:05Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-controller-manager-ip-172-20-44-17.eu-west-3.compute.internal.16825b650fdac72d\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"2eae9b7a-c35f-4ebe-b341-cc3c5b207e85\",\n                \"resourceVersion\": \"39\",\n                \"creationTimestamp\": \"2021-05-25T16:15:55Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-controller-manager-ip-172-20-44-17.eu-west-3.compute.internal\",\n                \"uid\": \"0ba0d57c7f44bb779ad73d1fee6aa348\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-controller-manager}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container kube-controller-manager\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-44-17.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-25T16:15:05Z\",\n            \"lastTimestamp\": \"2021-05-25T16:15:05Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": 
\"kube-controller-manager.16825b6e34c77ca6\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"f6ec93a5-e93d-491d-a4a0-70d4eccb349c\",\n                \"resourceVersion\": \"3\",\n                \"creationTimestamp\": \"2021-05-25T16:15:44Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Lease\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-controller-manager\",\n                \"uid\": \"55d72e35-6566-4d38-843a-c7635c1d9ae1\",\n                \"apiVersion\": \"coordination.k8s.io/v1\",\n                \"resourceVersion\": \"214\"\n            },\n            \"reason\": \"LeaderElection\",\n            \"message\": \"ip-172-20-44-17_144c2210-0653-4d89-a4d8-68fe313596f2 became leader\",\n            \"source\": {\n                \"component\": \"kube-controller-manager\"\n            },\n            \"firstTimestamp\": \"2021-05-25T16:15:44Z\",\n            \"lastTimestamp\": \"2021-05-25T16:15:44Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-dns.16825b717aa67801\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"bdd81fa7-5695-41ef-a9c9-616e21cbf66f\",\n                \"resourceVersion\": \"58\",\n                \"creationTimestamp\": \"2021-05-25T16:15:58Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"PodDisruptionBudget\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-dns\",\n                \"uid\": \"642cefac-94c6-4bcf-9d0b-82e4f3b2cde0\",\n                \"apiVersion\": \"policy/v1\",\n                \"resourceVersion\": \"355\"\n            },\n            \"reason\": \"NoPods\",\n            \"message\": \"No matching pods 
found\",\n            \"source\": {\n                \"component\": \"controllermanager\"\n            },\n            \"firstTimestamp\": \"2021-05-25T16:15:58Z\",\n            \"lastTimestamp\": \"2021-05-25T16:15:58Z\",\n            \"count\": 2,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds-6z7wj.16825b717f768abb\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"46592dcc-51eb-4fbf-9f44-fdf7843956b2\",\n                \"resourceVersion\": \"59\",\n                \"creationTimestamp\": \"2021-05-25T16:15:58Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-flannel-ds-6z7wj\",\n                \"uid\": \"287d5b82-81d1-433f-8a88-a0d61c004e64\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"412\"\n            },\n            \"reason\": \"Scheduled\",\n            \"message\": \"Successfully assigned kube-system/kube-flannel-ds-6z7wj to ip-172-20-44-17.eu-west-3.compute.internal\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2021-05-25T16:15:58Z\",\n            \"lastTimestamp\": \"2021-05-25T16:15:58Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds-6z7wj.16825b71a077f7cd\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"e481cab4-ea05-4eed-b79b-b4aa2f7e55f4\",\n                \"resourceVersion\": \"70\",\n                
\"creationTimestamp\": \"2021-05-25T16:15:59Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-flannel-ds-6z7wj\",\n                \"uid\": \"287d5b82-81d1-433f-8a88-a0d61c004e64\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"413\",\n                \"fieldPath\": \"spec.initContainers{install-cni}\"\n            },\n            \"reason\": \"Pulling\",\n            \"message\": \"Pulling image \\\"quay.io/coreos/flannel:v0.13.0\\\"\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-44-17.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-25T16:15:59Z\",\n            \"lastTimestamp\": \"2021-05-25T16:15:59Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds-6z7wj.16825b72ce6090cc\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"7da1b8e5-c9a3-4d49-8a0b-60bbebdc9421\",\n                \"resourceVersion\": \"76\",\n                \"creationTimestamp\": \"2021-05-25T16:16:04Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-flannel-ds-6z7wj\",\n                \"uid\": \"287d5b82-81d1-433f-8a88-a0d61c004e64\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"413\",\n                \"fieldPath\": \"spec.initContainers{install-cni}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Successfully pulled image \\\"quay.io/coreos/flannel:v0.13.0\\\" in 5.065173863s\",\n        
    \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-44-17.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-25T16:16:04Z\",\n            \"lastTimestamp\": \"2021-05-25T16:16:04Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds-6z7wj.16825b72d64b43c1\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"e0fab171-d58e-4770-86b4-088499fd8bc9\",\n                \"resourceVersion\": \"77\",\n                \"creationTimestamp\": \"2021-05-25T16:16:04Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-flannel-ds-6z7wj\",\n                \"uid\": \"287d5b82-81d1-433f-8a88-a0d61c004e64\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"413\",\n                \"fieldPath\": \"spec.initContainers{install-cni}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container install-cni\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-44-17.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-25T16:16:04Z\",\n            \"lastTimestamp\": \"2021-05-25T16:16:04Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds-6z7wj.16825b72de0aee45\",\n                \"namespace\": \"kube-system\",\n                \"uid\": 
\"54bab7f2-d7c1-432d-b3aa-c996d586e91c\",\n                \"resourceVersion\": \"78\",\n                \"creationTimestamp\": \"2021-05-25T16:16:04Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-flannel-ds-6z7wj\",\n                \"uid\": \"287d5b82-81d1-433f-8a88-a0d61c004e64\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"413\",\n                \"fieldPath\": \"spec.initContainers{install-cni}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container install-cni\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-44-17.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-25T16:16:04Z\",\n            \"lastTimestamp\": \"2021-05-25T16:16:04Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds-6z7wj.16825b731a7b51a1\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"353136d8-5a40-4d95-96d9-85f88ccc9fbe\",\n                \"resourceVersion\": \"79\",\n                \"creationTimestamp\": \"2021-05-25T16:16:05Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-flannel-ds-6z7wj\",\n                \"uid\": \"287d5b82-81d1-433f-8a88-a0d61c004e64\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"413\",\n                \"fieldPath\": \"spec.containers{kube-flannel}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container 
image \\\"quay.io/coreos/flannel:v0.13.0\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-44-17.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-25T16:16:05Z\",\n            \"lastTimestamp\": \"2021-05-25T16:16:05Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds-6z7wj.16825b731db86d3b\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"4c518fde-47c0-43b4-b115-195c067fb504\",\n                \"resourceVersion\": \"80\",\n                \"creationTimestamp\": \"2021-05-25T16:16:05Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-flannel-ds-6z7wj\",\n                \"uid\": \"287d5b82-81d1-433f-8a88-a0d61c004e64\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"413\",\n                \"fieldPath\": \"spec.containers{kube-flannel}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container kube-flannel\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-44-17.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-25T16:16:05Z\",\n            \"lastTimestamp\": \"2021-05-25T16:16:05Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds-6z7wj.16825b732537714f\",\n    
            \"namespace\": \"kube-system\",\n                \"uid\": \"6852f998-d6c7-4de9-8710-47fdf648d46c\",\n                \"resourceVersion\": \"81\",\n                \"creationTimestamp\": \"2021-05-25T16:16:05Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-flannel-ds-6z7wj\",\n                \"uid\": \"287d5b82-81d1-433f-8a88-a0d61c004e64\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"413\",\n                \"fieldPath\": \"spec.containers{kube-flannel}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container kube-flannel\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-44-17.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-25T16:16:05Z\",\n            \"lastTimestamp\": \"2021-05-25T16:16:05Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds-78qfq.16825b8732807fa7\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"9cfd5f61-1a7f-4bae-9a3a-894f9451c752\",\n                \"resourceVersion\": \"104\",\n                \"creationTimestamp\": \"2021-05-25T16:17:32Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-flannel-ds-78qfq\",\n                \"uid\": \"fbab5e5e-8494-472f-b51b-dfc82ed73e01\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"703\"\n            },\n            \"reason\": \"Scheduled\",\n            \"message\": 
\"Successfully assigned kube-system/kube-flannel-ds-78qfq to ip-172-20-48-192.eu-west-3.compute.internal\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2021-05-25T16:17:32Z\",\n            \"lastTimestamp\": \"2021-05-25T16:17:32Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds-78qfq.16825b8763354761\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"70261eba-c7b8-441e-a035-0cd967cce4b9\",\n                \"resourceVersion\": \"249\",\n                \"creationTimestamp\": \"2021-05-25T16:17:49Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-flannel-ds-78qfq\",\n                \"uid\": \"fbab5e5e-8494-472f-b51b-dfc82ed73e01\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"706\",\n                \"fieldPath\": \"spec.initContainers{install-cni}\"\n            },\n            \"reason\": \"Pulling\",\n            \"message\": \"Pulling image \\\"quay.io/coreos/flannel:v0.13.0\\\"\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-48-192.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-25T16:17:32Z\",\n            \"lastTimestamp\": \"2021-05-25T16:17:32Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds-78qfq.16825b889b123811\",\n         
       \"namespace\": \"kube-system\",\n                \"uid\": \"fc12b9b1-651a-4104-bdb9-3a7f1305119a\",\n                \"resourceVersion\": \"251\",\n                \"creationTimestamp\": \"2021-05-25T16:17:49Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-flannel-ds-78qfq\",\n                \"uid\": \"fbab5e5e-8494-472f-b51b-dfc82ed73e01\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"706\",\n                \"fieldPath\": \"spec.initContainers{install-cni}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Successfully pulled image \\\"quay.io/coreos/flannel:v0.13.0\\\" in 5.232174514s\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-48-192.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-25T16:17:38Z\",\n            \"lastTimestamp\": \"2021-05-25T16:17:38Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds-78qfq.16825b88a3d0a2f5\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"519f4ce5-d407-4b29-8705-a86023bc2bb5\",\n                \"resourceVersion\": \"254\",\n                \"creationTimestamp\": \"2021-05-25T16:17:49Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-flannel-ds-78qfq\",\n                \"uid\": \"fbab5e5e-8494-472f-b51b-dfc82ed73e01\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"706\",\n                \"fieldPath\": 
\"spec.initContainers{install-cni}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container install-cni\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-48-192.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-25T16:17:38Z\",\n            \"lastTimestamp\": \"2021-05-25T16:17:38Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds-78qfq.16825b88ac40c230\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"a819978e-af69-48fe-9477-b2ddb66ef833\",\n                \"resourceVersion\": \"256\",\n                \"creationTimestamp\": \"2021-05-25T16:17:50Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-flannel-ds-78qfq\",\n                \"uid\": \"fbab5e5e-8494-472f-b51b-dfc82ed73e01\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"706\",\n                \"fieldPath\": \"spec.initContainers{install-cni}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container install-cni\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-48-192.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-25T16:17:38Z\",\n            \"lastTimestamp\": \"2021-05-25T16:17:38Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            
\"metadata\": {\n                \"name\": \"kube-flannel-ds-78qfq.16825b88bacc08a4\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"17831315-eaa9-4046-8f10-ef23d470abf0\",\n                \"resourceVersion\": \"258\",\n                \"creationTimestamp\": \"2021-05-25T16:17:50Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-flannel-ds-78qfq\",\n                \"uid\": \"fbab5e5e-8494-472f-b51b-dfc82ed73e01\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"706\",\n                \"fieldPath\": \"spec.containers{kube-flannel}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"quay.io/coreos/flannel:v0.13.0\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-48-192.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-25T16:17:38Z\",\n            \"lastTimestamp\": \"2021-05-25T16:17:38Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds-78qfq.16825b88bcf0af63\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"00828f26-be1c-402f-9c82-1053e23fe6c0\",\n                \"resourceVersion\": \"261\",\n                \"creationTimestamp\": \"2021-05-25T16:17:50Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-flannel-ds-78qfq\",\n                \"uid\": \"fbab5e5e-8494-472f-b51b-dfc82ed73e01\",\n                \"apiVersion\": 
\"v1\",\n                \"resourceVersion\": \"706\",\n                \"fieldPath\": \"spec.containers{kube-flannel}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container kube-flannel\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-48-192.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-25T16:17:38Z\",\n            \"lastTimestamp\": \"2021-05-25T16:17:38Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds-78qfq.16825b88c2a0566e\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"b4d53ffa-ad91-4958-8b24-29da2237f7c1\",\n                \"resourceVersion\": \"263\",\n                \"creationTimestamp\": \"2021-05-25T16:17:50Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-flannel-ds-78qfq\",\n                \"uid\": \"fbab5e5e-8494-472f-b51b-dfc82ed73e01\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"706\",\n                \"fieldPath\": \"spec.containers{kube-flannel}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container kube-flannel\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-48-192.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-25T16:17:38Z\",\n            \"lastTimestamp\": \"2021-05-25T16:17:38Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n   
         \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds-dfzmv.16825b84a8df778d\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"0cc49629-4401-4467-a241-a9d3bfac7353\",\n                \"resourceVersion\": \"97\",\n                \"creationTimestamp\": \"2021-05-25T16:17:21Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-flannel-ds-dfzmv\",\n                \"uid\": \"7d82a0c3-754d-433f-a424-52dd13510a09\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"651\"\n            },\n            \"reason\": \"Scheduled\",\n            \"message\": \"Successfully assigned kube-system/kube-flannel-ds-dfzmv to ip-172-20-40-186.eu-west-3.compute.internal\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2021-05-25T16:17:21Z\",\n            \"lastTimestamp\": \"2021-05-25T16:17:21Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds-dfzmv.16825b84f380653b\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"c8b5ab74-9bac-4b5b-81a2-3bb43b2015dd\",\n                \"resourceVersion\": \"173\",\n                \"creationTimestamp\": \"2021-05-25T16:17:45Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-flannel-ds-dfzmv\",\n                \"uid\": \"7d82a0c3-754d-433f-a424-52dd13510a09\",\n                \"apiVersion\": \"v1\",\n                
\"resourceVersion\": \"652\",\n                \"fieldPath\": \"spec.initContainers{install-cni}\"\n            },\n            \"reason\": \"Pulling\",\n            \"message\": \"Pulling image \\\"quay.io/coreos/flannel:v0.13.0\\\"\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-40-186.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-25T16:17:22Z\",\n            \"lastTimestamp\": \"2021-05-25T16:17:22Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds-dfzmv.16825b862671d656\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"d1ccd701-35e1-4443-be79-589089a9a614\",\n                \"resourceVersion\": \"185\",\n                \"creationTimestamp\": \"2021-05-25T16:17:45Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-flannel-ds-dfzmv\",\n                \"uid\": \"7d82a0c3-754d-433f-a424-52dd13510a09\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"652\",\n                \"fieldPath\": \"spec.initContainers{install-cni}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Successfully pulled image \\\"quay.io/coreos/flannel:v0.13.0\\\" in 5.149634144s\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-40-186.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-25T16:17:27Z\",\n            \"lastTimestamp\": \"2021-05-25T16:17:27Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": 
null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds-dfzmv.16825b862d96b87c\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"7277bcd0-c7dd-4063-8874-722d6764eef4\",\n                \"resourceVersion\": \"188\",\n                \"creationTimestamp\": \"2021-05-25T16:17:45Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-flannel-ds-dfzmv\",\n                \"uid\": \"7d82a0c3-754d-433f-a424-52dd13510a09\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"652\",\n                \"fieldPath\": \"spec.initContainers{install-cni}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container install-cni\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-40-186.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-25T16:17:27Z\",\n            \"lastTimestamp\": \"2021-05-25T16:17:27Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds-dfzmv.16825b8636002b85\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"39e832ab-4c73-498b-9853-d1bcbeb52ded\",\n                \"resourceVersion\": \"191\",\n                \"creationTimestamp\": \"2021-05-25T16:17:46Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-flannel-ds-dfzmv\",\n                \"uid\": 
\"7d82a0c3-754d-433f-a424-52dd13510a09\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"652\",\n                \"fieldPath\": \"spec.initContainers{install-cni}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container install-cni\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-40-186.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-25T16:17:27Z\",\n            \"lastTimestamp\": \"2021-05-25T16:17:27Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds-dfzmv.16825b864fa2bd87\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"31ed1dd4-9b35-4ffe-b513-a36d7ab1d41b\",\n                \"resourceVersion\": \"291\",\n                \"creationTimestamp\": \"2021-05-25T16:17:46Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-flannel-ds-dfzmv\",\n                \"uid\": \"7d82a0c3-754d-433f-a424-52dd13510a09\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"652\",\n                \"fieldPath\": \"spec.containers{kube-flannel}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"quay.io/coreos/flannel:v0.13.0\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-40-186.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-25T16:17:28Z\",\n            \"lastTimestamp\": \"2021-05-25T16:17:58Z\",\n            
\"count\": 2,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds-dfzmv.16825b86519644be\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"d2666cb7-c7a0-4605-9c55-181b4fc8d8ba\",\n                \"resourceVersion\": \"292\",\n                \"creationTimestamp\": \"2021-05-25T16:17:46Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-flannel-ds-dfzmv\",\n                \"uid\": \"7d82a0c3-754d-433f-a424-52dd13510a09\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"652\",\n                \"fieldPath\": \"spec.containers{kube-flannel}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container kube-flannel\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-40-186.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-25T16:17:28Z\",\n            \"lastTimestamp\": \"2021-05-25T16:17:58Z\",\n            \"count\": 2,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds-dfzmv.16825b865737dcf2\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"7297b9cc-f973-4ffd-b384-1e3c89153905\",\n                \"resourceVersion\": \"293\",\n                \"creationTimestamp\": \"2021-05-25T16:17:46Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n    
            \"name\": \"kube-flannel-ds-dfzmv\",\n                \"uid\": \"7d82a0c3-754d-433f-a424-52dd13510a09\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"652\",\n                \"fieldPath\": \"spec.containers{kube-flannel}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container kube-flannel\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-40-186.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-25T16:17:28Z\",\n            \"lastTimestamp\": \"2021-05-25T16:17:58Z\",\n            \"count\": 2,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds-r8x62.16825b86eda33cb4\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"e547fabb-d9b1-4ed6-b7e5-4bbd94aff132\",\n                \"resourceVersion\": \"100\",\n                \"creationTimestamp\": \"2021-05-25T16:17:30Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-flannel-ds-r8x62\",\n                \"uid\": \"90ba7980-9e0a-4c8e-9054-82da3c90963f\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"687\"\n            },\n            \"reason\": \"Scheduled\",\n            \"message\": \"Successfully assigned kube-system/kube-flannel-ds-r8x62 to ip-172-20-60-66.eu-west-3.compute.internal\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2021-05-25T16:17:30Z\",\n            \"lastTimestamp\": \"2021-05-25T16:17:30Z\",\n            \"count\": 1,\n            \"type\": 
\"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds-r8x62.16825b87180ed851\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"251ccdbb-eb3c-4dd8-8677-9b0e1d7032fc\",\n                \"resourceVersion\": \"180\",\n                \"creationTimestamp\": \"2021-05-25T16:17:45Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-flannel-ds-r8x62\",\n                \"uid\": \"90ba7980-9e0a-4c8e-9054-82da3c90963f\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"689\",\n                \"fieldPath\": \"spec.initContainers{install-cni}\"\n            },\n            \"reason\": \"Pulling\",\n            \"message\": \"Pulling image \\\"quay.io/coreos/flannel:v0.13.0\\\"\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-60-66.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-25T16:17:31Z\",\n            \"lastTimestamp\": \"2021-05-25T16:17:31Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds-r8x62.16825b8858a9c1e1\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"be860a1a-070d-4ef5-82f7-c381060642d4\",\n                \"resourceVersion\": \"187\",\n                \"creationTimestamp\": \"2021-05-25T16:17:45Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                
\"name\": \"kube-flannel-ds-r8x62\",\n                \"uid\": \"90ba7980-9e0a-4c8e-9054-82da3c90963f\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"689\",\n                \"fieldPath\": \"spec.initContainers{install-cni}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Successfully pulled image \\\"quay.io/coreos/flannel:v0.13.0\\\" in 5.378846448s\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-60-66.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-25T16:17:36Z\",\n            \"lastTimestamp\": \"2021-05-25T16:17:36Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds-r8x62.16825b8861981c97\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"95ededc3-6a2e-4140-b9f2-405060dbc03e\",\n                \"resourceVersion\": \"190\",\n                \"creationTimestamp\": \"2021-05-25T16:17:46Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-flannel-ds-r8x62\",\n                \"uid\": \"90ba7980-9e0a-4c8e-9054-82da3c90963f\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"689\",\n                \"fieldPath\": \"spec.initContainers{install-cni}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container install-cni\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-60-66.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-25T16:17:37Z\",\n          
  \"lastTimestamp\": \"2021-05-25T16:17:37Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds-r8x62.16825b886a63e738\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"eacb9fb0-ac44-46ef-b4b2-80bd91d30164\",\n                \"resourceVersion\": \"193\",\n                \"creationTimestamp\": \"2021-05-25T16:17:46Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-flannel-ds-r8x62\",\n                \"uid\": \"90ba7980-9e0a-4c8e-9054-82da3c90963f\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"689\",\n                \"fieldPath\": \"spec.initContainers{install-cni}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container install-cni\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-60-66.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-25T16:17:37Z\",\n            \"lastTimestamp\": \"2021-05-25T16:17:37Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds-r8x62.16825b8873de4a85\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"6d35569c-824e-4dde-a840-57c0c8ca705f\",\n                \"resourceVersion\": \"196\",\n                \"creationTimestamp\": \"2021-05-25T16:17:46Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": 
\"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-flannel-ds-r8x62\",\n                \"uid\": \"90ba7980-9e0a-4c8e-9054-82da3c90963f\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"689\",\n                \"fieldPath\": \"spec.containers{kube-flannel}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"quay.io/coreos/flannel:v0.13.0\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-60-66.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-25T16:17:37Z\",\n            \"lastTimestamp\": \"2021-05-25T16:17:37Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds-r8x62.16825b887701b828\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"83dc694a-48c0-4f06-9bad-6f3a462df7a3\",\n                \"resourceVersion\": \"199\",\n                \"creationTimestamp\": \"2021-05-25T16:17:46Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-flannel-ds-r8x62\",\n                \"uid\": \"90ba7980-9e0a-4c8e-9054-82da3c90963f\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"689\",\n                \"fieldPath\": \"spec.containers{kube-flannel}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container kube-flannel\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-60-66.eu-west-3.compute.internal\"\n            
},\n            \"firstTimestamp\": \"2021-05-25T16:17:37Z\",\n            \"lastTimestamp\": \"2021-05-25T16:17:37Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds-r8x62.16825b887cbd2fd5\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"9301184d-9d2f-4256-8fe5-e06ce38fdebf\",\n                \"resourceVersion\": \"202\",\n                \"creationTimestamp\": \"2021-05-25T16:17:46Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-flannel-ds-r8x62\",\n                \"uid\": \"90ba7980-9e0a-4c8e-9054-82da3c90963f\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"689\",\n                \"fieldPath\": \"spec.containers{kube-flannel}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container kube-flannel\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-60-66.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-25T16:17:37Z\",\n            \"lastTimestamp\": \"2021-05-25T16:17:37Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds-vk4m7.16825b873e6df2cc\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"136256b2-73b7-40d7-b715-e2e315d75a59\",\n                \"resourceVersion\": \"106\",\n                \"creationTimestamp\": \"2021-05-25T16:17:32Z\"\n        
    },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-flannel-ds-vk4m7\",\n                \"uid\": \"608db391-49a9-44a4-8cbe-ca6aca5ccbe4\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"714\"\n            },\n            \"reason\": \"Scheduled\",\n            \"message\": \"Successfully assigned kube-system/kube-flannel-ds-vk4m7 to ip-172-20-54-92.eu-west-3.compute.internal\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2021-05-25T16:17:32Z\",\n            \"lastTimestamp\": \"2021-05-25T16:17:32Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds-vk4m7.16825b87676ec864\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"6b89fbe4-a4b4-483c-8f4b-de77dba99601\",\n                \"resourceVersion\": \"271\",\n                \"creationTimestamp\": \"2021-05-25T16:17:51Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-flannel-ds-vk4m7\",\n                \"uid\": \"608db391-49a9-44a4-8cbe-ca6aca5ccbe4\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"715\",\n                \"fieldPath\": \"spec.initContainers{install-cni}\"\n            },\n            \"reason\": \"Pulling\",\n            \"message\": \"Pulling image \\\"quay.io/coreos/flannel:v0.13.0\\\"\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-54-92.eu-west-3.compute.internal\"\n            },\n            
\"firstTimestamp\": \"2021-05-25T16:17:32Z\",\n            \"lastTimestamp\": \"2021-05-25T16:17:32Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds-vk4m7.16825b889a41db21\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"24a5b53a-9b5b-4fca-8c2e-a13de2e24fe2\",\n                \"resourceVersion\": \"274\",\n                \"creationTimestamp\": \"2021-05-25T16:17:51Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-flannel-ds-vk4m7\",\n                \"uid\": \"608db391-49a9-44a4-8cbe-ca6aca5ccbe4\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"715\",\n                \"fieldPath\": \"spec.initContainers{install-cni}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Successfully pulled image \\\"quay.io/coreos/flannel:v0.13.0\\\" in 5.14765009s\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-54-92.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-25T16:17:38Z\",\n            \"lastTimestamp\": \"2021-05-25T16:17:38Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds-vk4m7.16825b88a260f4f2\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"1cfe056a-0d8f-4af3-8f49-b7438220adb7\",\n                \"resourceVersion\": \"275\",\n                
\"creationTimestamp\": \"2021-05-25T16:17:51Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-flannel-ds-vk4m7\",\n                \"uid\": \"608db391-49a9-44a4-8cbe-ca6aca5ccbe4\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"715\",\n                \"fieldPath\": \"spec.initContainers{install-cni}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container install-cni\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-54-92.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-25T16:17:38Z\",\n            \"lastTimestamp\": \"2021-05-25T16:17:38Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds-vk4m7.16825b88ac586387\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"21fd9d84-7a90-4181-bbe2-243a9eb93b5a\",\n                \"resourceVersion\": \"277\",\n                \"creationTimestamp\": \"2021-05-25T16:17:51Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-flannel-ds-vk4m7\",\n                \"uid\": \"608db391-49a9-44a4-8cbe-ca6aca5ccbe4\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"715\",\n                \"fieldPath\": \"spec.initContainers{install-cni}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container install-cni\",\n            \"source\": {\n                \"component\": \"kubelet\",\n        
        \"host\": \"ip-172-20-54-92.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-25T16:17:38Z\",\n            \"lastTimestamp\": \"2021-05-25T16:17:38Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds-vk4m7.16825b88c2b06cc2\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"b29be691-83dd-4a5c-91a8-83b5b2ba4d67\",\n                \"resourceVersion\": \"278\",\n                \"creationTimestamp\": \"2021-05-25T16:17:52Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-flannel-ds-vk4m7\",\n                \"uid\": \"608db391-49a9-44a4-8cbe-ca6aca5ccbe4\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"715\",\n                \"fieldPath\": \"spec.containers{kube-flannel}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"quay.io/coreos/flannel:v0.13.0\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-54-92.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-25T16:17:38Z\",\n            \"lastTimestamp\": \"2021-05-25T16:17:38Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds-vk4m7.16825b88c5100fcb\",\n                \"namespace\": \"kube-system\",\n                \"uid\": 
\"502490d8-df3c-4ac3-a052-e5ac6cfb8d58\",\n                \"resourceVersion\": \"279\",\n                \"creationTimestamp\": \"2021-05-25T16:17:52Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-flannel-ds-vk4m7\",\n                \"uid\": \"608db391-49a9-44a4-8cbe-ca6aca5ccbe4\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"715\",\n                \"fieldPath\": \"spec.containers{kube-flannel}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container kube-flannel\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-54-92.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-25T16:17:38Z\",\n            \"lastTimestamp\": \"2021-05-25T16:17:38Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds-vk4m7.16825b88caee5f31\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"0c0570d7-b7ba-417e-8126-a81d5606bb69\",\n                \"resourceVersion\": \"281\",\n                \"creationTimestamp\": \"2021-05-25T16:17:52Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-flannel-ds-vk4m7\",\n                \"uid\": \"608db391-49a9-44a4-8cbe-ca6aca5ccbe4\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"715\",\n                \"fieldPath\": \"spec.containers{kube-flannel}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started 
container kube-flannel\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-54-92.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-25T16:17:38Z\",\n            \"lastTimestamp\": \"2021-05-25T16:17:38Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds.16825b717f641423\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"c1a87c7a-3a9a-47e2-a84f-bb98ac0bfe84\",\n                \"resourceVersion\": \"60\",\n                \"creationTimestamp\": \"2021-05-25T16:15:58Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"DaemonSet\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-flannel-ds\",\n                \"uid\": \"edbe1f66-ffd4-4ea1-ab84-e523f094f198\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"328\"\n            },\n            \"reason\": \"SuccessfulCreate\",\n            \"message\": \"Created pod: kube-flannel-ds-6z7wj\",\n            \"source\": {\n                \"component\": \"daemonset-controller\"\n            },\n            \"firstTimestamp\": \"2021-05-25T16:15:58Z\",\n            \"lastTimestamp\": \"2021-05-25T16:15:58Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds.16825b84a87b84b1\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"17c26419-7643-4667-8c23-ac4656661faf\",\n                \"resourceVersion\": \"96\",\n             
   \"creationTimestamp\": \"2021-05-25T16:17:21Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"DaemonSet\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-flannel-ds\",\n                \"uid\": \"edbe1f66-ffd4-4ea1-ab84-e523f094f198\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"464\"\n            },\n            \"reason\": \"SuccessfulCreate\",\n            \"message\": \"Created pod: kube-flannel-ds-dfzmv\",\n            \"source\": {\n                \"component\": \"daemonset-controller\"\n            },\n            \"firstTimestamp\": \"2021-05-25T16:17:21Z\",\n            \"lastTimestamp\": \"2021-05-25T16:17:21Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds.16825b86ecfaa9f7\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"c102edfd-3a03-4043-a16a-0913ae062f80\",\n                \"resourceVersion\": \"99\",\n                \"creationTimestamp\": \"2021-05-25T16:17:30Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"DaemonSet\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-flannel-ds\",\n                \"uid\": \"edbe1f66-ffd4-4ea1-ab84-e523f094f198\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"680\"\n            },\n            \"reason\": \"SuccessfulCreate\",\n            \"message\": \"Created pod: kube-flannel-ds-r8x62\",\n            \"source\": {\n                \"component\": \"daemonset-controller\"\n            },\n            \"firstTimestamp\": \"2021-05-25T16:17:30Z\",\n            \"lastTimestamp\": \"2021-05-25T16:17:30Z\",\n            \"count\": 1,\n    
        \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds.16825b8731bdca16\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"6fb61189-3d87-49dd-be89-0d9620e3aab3\",\n                \"resourceVersion\": \"103\",\n                \"creationTimestamp\": \"2021-05-25T16:17:32Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"DaemonSet\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-flannel-ds\",\n                \"uid\": \"edbe1f66-ffd4-4ea1-ab84-e523f094f198\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"690\"\n            },\n            \"reason\": \"SuccessfulCreate\",\n            \"message\": \"Created pod: kube-flannel-ds-78qfq\",\n            \"source\": {\n                \"component\": \"daemonset-controller\"\n            },\n            \"firstTimestamp\": \"2021-05-25T16:17:32Z\",\n            \"lastTimestamp\": \"2021-05-25T16:17:32Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds.16825b873d4588d3\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"1e88b86f-2827-46c5-b7d4-07d61c041cf7\",\n                \"resourceVersion\": \"105\",\n                \"creationTimestamp\": \"2021-05-25T16:17:32Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"DaemonSet\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-flannel-ds\",\n                \"uid\": \"edbe1f66-ffd4-4ea1-ab84-e523f094f198\",\n                
\"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"707\"\n            },\n            \"reason\": \"SuccessfulCreate\",\n            \"message\": \"Created pod: kube-flannel-ds-vk4m7\",\n            \"source\": {\n                \"component\": \"daemonset-controller\"\n            },\n            \"firstTimestamp\": \"2021-05-25T16:17:32Z\",\n            \"lastTimestamp\": \"2021-05-25T16:17:32Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-ip-172-20-40-186.eu-west-3.compute.internal.16825b7868aa8812\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"f23a760b-903f-4b2f-9a60-077e4957d42e\",\n                \"resourceVersion\": \"146\",\n                \"creationTimestamp\": \"2021-05-25T16:17:43Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-ip-172-20-40-186.eu-west-3.compute.internal\",\n                \"uid\": \"dd2bb0eda31c35b976b408cc90aff612\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"k8s.gcr.io/kube-proxy-amd64:v1.21.1\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-40-186.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-25T16:16:28Z\",\n            \"lastTimestamp\": \"2021-05-25T16:16:28Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            
\"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-ip-172-20-40-186.eu-west-3.compute.internal.16825b786af23565\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"72258c8b-ed75-43b0-b6a8-455030a26069\",\n                \"resourceVersion\": \"148\",\n                \"creationTimestamp\": \"2021-05-25T16:17:43Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-ip-172-20-40-186.eu-west-3.compute.internal\",\n                \"uid\": \"dd2bb0eda31c35b976b408cc90aff612\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container kube-proxy\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-40-186.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-25T16:16:28Z\",\n            \"lastTimestamp\": \"2021-05-25T16:16:28Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-ip-172-20-40-186.eu-west-3.compute.internal.16825b7872a060b5\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"bc47b11c-42cb-46c5-a86a-2affd4c3dd88\",\n                \"resourceVersion\": \"150\",\n                \"creationTimestamp\": \"2021-05-25T16:17:43Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-ip-172-20-40-186.eu-west-3.compute.internal\",\n            
    \"uid\": \"dd2bb0eda31c35b976b408cc90aff612\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container kube-proxy\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-40-186.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-25T16:16:28Z\",\n            \"lastTimestamp\": \"2021-05-25T16:16:28Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-ip-172-20-44-17.eu-west-3.compute.internal.16825b6509a33229\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"f2f4c404-16b8-49b7-b11d-c46a151d826b\",\n                \"resourceVersion\": \"34\",\n                \"creationTimestamp\": \"2021-05-25T16:15:54Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-ip-172-20-44-17.eu-west-3.compute.internal\",\n                \"uid\": \"466af20b2a4057e021405a51dab7d35b\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"k8s.gcr.io/kube-proxy-amd64:v1.21.1\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-44-17.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-25T16:15:05Z\",\n            \"lastTimestamp\": \"2021-05-25T16:15:05Z\",\n            \"count\": 1,\n            
\"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-ip-172-20-44-17.eu-west-3.compute.internal.16825b650f7240c2\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"dc63149b-f3d8-4c3b-9349-1bf25ba3d086\",\n                \"resourceVersion\": \"38\",\n                \"creationTimestamp\": \"2021-05-25T16:15:55Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-ip-172-20-44-17.eu-west-3.compute.internal\",\n                \"uid\": \"466af20b2a4057e021405a51dab7d35b\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container kube-proxy\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-44-17.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-25T16:15:05Z\",\n            \"lastTimestamp\": \"2021-05-25T16:15:05Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-ip-172-20-44-17.eu-west-3.compute.internal.16825b65279a8e7c\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"072e3492-9902-4f0e-bc18-a37a1e0d121f\",\n                \"resourceVersion\": \"42\",\n                \"creationTimestamp\": \"2021-05-25T16:15:56Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": 
\"kube-system\",\n                \"name\": \"kube-proxy-ip-172-20-44-17.eu-west-3.compute.internal\",\n                \"uid\": \"466af20b2a4057e021405a51dab7d35b\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container kube-proxy\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-44-17.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-25T16:15:05Z\",\n            \"lastTimestamp\": \"2021-05-25T16:15:05Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-ip-172-20-48-192.eu-west-3.compute.internal.16825b796f17c03e\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"f61759a3-9852-4636-ba50-25f3c1dca5b7\",\n                \"resourceVersion\": \"198\",\n                \"creationTimestamp\": \"2021-05-25T16:17:46Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-ip-172-20-48-192.eu-west-3.compute.internal\",\n                \"uid\": \"19370eb69d07a880de62846bdce6e20a\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"k8s.gcr.io/kube-proxy-amd64:v1.21.1\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-48-192.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": 
\"2021-05-25T16:16:32Z\",\n            \"lastTimestamp\": \"2021-05-25T16:16:32Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-ip-172-20-48-192.eu-west-3.compute.internal.16825b7971d049b1\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"f4404086-f96b-40e2-a305-a5b240c89172\",\n                \"resourceVersion\": \"201\",\n                \"creationTimestamp\": \"2021-05-25T16:17:46Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-ip-172-20-48-192.eu-west-3.compute.internal\",\n                \"uid\": \"19370eb69d07a880de62846bdce6e20a\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container kube-proxy\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-48-192.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-25T16:16:32Z\",\n            \"lastTimestamp\": \"2021-05-25T16:16:32Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-ip-172-20-48-192.eu-west-3.compute.internal.16825b797916c948\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"f117c963-ea87-418e-a667-fdecc47d97b8\",\n                \"resourceVersion\": \"204\",\n                \"creationTimestamp\": 
\"2021-05-25T16:17:47Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-ip-172-20-48-192.eu-west-3.compute.internal\",\n                \"uid\": \"19370eb69d07a880de62846bdce6e20a\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container kube-proxy\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-48-192.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-25T16:16:33Z\",\n            \"lastTimestamp\": \"2021-05-25T16:16:33Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-ip-172-20-54-92.eu-west-3.compute.internal.16825b79783c9c17\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"35324138-dfe3-44b6-a49f-4489d2d0f622\",\n                \"resourceVersion\": \"237\",\n                \"creationTimestamp\": \"2021-05-25T16:17:48Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-ip-172-20-54-92.eu-west-3.compute.internal\",\n                \"uid\": \"65bce40e337fc2b2bd50709aab921732\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"k8s.gcr.io/kube-proxy-amd64:v1.21.1\\\" already present on machine\",\n            \"source\": {\n                \"component\": 
\"kubelet\",\n                \"host\": \"ip-172-20-54-92.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-25T16:16:33Z\",\n            \"lastTimestamp\": \"2021-05-25T16:16:33Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-ip-172-20-54-92.eu-west-3.compute.internal.16825b797b0908d6\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"501cae10-e01b-4abc-8c81-908c1ee133a0\",\n                \"resourceVersion\": \"240\",\n                \"creationTimestamp\": \"2021-05-25T16:17:48Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-ip-172-20-54-92.eu-west-3.compute.internal\",\n                \"uid\": \"65bce40e337fc2b2bd50709aab921732\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container kube-proxy\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-54-92.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-25T16:16:33Z\",\n            \"lastTimestamp\": \"2021-05-25T16:16:33Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-ip-172-20-54-92.eu-west-3.compute.internal.16825b7981af2615\",\n                \"namespace\": \"kube-system\",\n                \"uid\": 
\"f07bfd10-3a9e-465c-b6ef-ed63523a20be\",\n                \"resourceVersion\": \"243\",\n                \"creationTimestamp\": \"2021-05-25T16:17:48Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-ip-172-20-54-92.eu-west-3.compute.internal\",\n                \"uid\": \"65bce40e337fc2b2bd50709aab921732\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container kube-proxy\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-54-92.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-25T16:16:33Z\",\n            \"lastTimestamp\": \"2021-05-25T16:16:33Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-ip-172-20-60-66.eu-west-3.compute.internal.16825b792ab64915\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"c84de01f-b180-4fdf-83fa-9dd11179c8a1\",\n                \"resourceVersion\": \"143\",\n                \"creationTimestamp\": \"2021-05-25T16:17:42Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-ip-172-20-60-66.eu-west-3.compute.internal\",\n                \"uid\": \"b32ae3006edac41f054970ca45c6514e\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image 
\\\"k8s.gcr.io/kube-proxy-amd64:v1.21.1\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-60-66.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-25T16:16:31Z\",\n            \"lastTimestamp\": \"2021-05-25T16:16:31Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-ip-172-20-60-66.eu-west-3.compute.internal.16825b792d940045\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"6b2dd402-3d13-491c-99ec-78371ac58bec\",\n                \"resourceVersion\": \"145\",\n                \"creationTimestamp\": \"2021-05-25T16:17:43Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-ip-172-20-60-66.eu-west-3.compute.internal\",\n                \"uid\": \"b32ae3006edac41f054970ca45c6514e\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container kube-proxy\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-60-66.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-25T16:16:31Z\",\n            \"lastTimestamp\": \"2021-05-25T16:16:31Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": 
\"kube-proxy-ip-172-20-60-66.eu-west-3.compute.internal.16825b7934225bbf\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"b25b3c0d-926e-4319-bf2a-0cdcca9e35d8\",\n                \"resourceVersion\": \"147\",\n                \"creationTimestamp\": \"2021-05-25T16:17:43Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-ip-172-20-60-66.eu-west-3.compute.internal\",\n                \"uid\": \"b32ae3006edac41f054970ca45c6514e\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container kube-proxy\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-60-66.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-25T16:16:31Z\",\n            \"lastTimestamp\": \"2021-05-25T16:16:31Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-scheduler-ip-172-20-44-17.eu-west-3.compute.internal.16825b6507cf1862\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"41e88fe3-a437-4f2b-a545-6930f1b436ea\",\n                \"resourceVersion\": \"33\",\n                \"creationTimestamp\": \"2021-05-25T16:15:54Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-scheduler-ip-172-20-44-17.eu-west-3.compute.internal\",\n                \"uid\": \"e41a67974f6151102605d2ffb0e98fe0\",\n                \"apiVersion\": \"v1\",\n            
    \"fieldPath\": \"spec.containers{kube-scheduler}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"k8s.gcr.io/kube-scheduler-amd64:v1.21.1\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-44-17.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-25T16:15:05Z\",\n            \"lastTimestamp\": \"2021-05-25T16:15:05Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-scheduler-ip-172-20-44-17.eu-west-3.compute.internal.16825b650c4bcd90\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"a7bd299a-e77a-47dd-b341-be008701c146\",\n                \"resourceVersion\": \"35\",\n                \"creationTimestamp\": \"2021-05-25T16:15:54Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-scheduler-ip-172-20-44-17.eu-west-3.compute.internal\",\n                \"uid\": \"e41a67974f6151102605d2ffb0e98fe0\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-scheduler}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container kube-scheduler\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-44-17.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-25T16:15:05Z\",\n            \"lastTimestamp\": \"2021-05-25T16:15:05Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            
\"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-scheduler-ip-172-20-44-17.eu-west-3.compute.internal.16825b6520c993c2\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"ea29b15f-6dee-43fe-a740-a6c02e1baf00\",\n                \"resourceVersion\": \"41\",\n                \"creationTimestamp\": \"2021-05-25T16:15:55Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-scheduler-ip-172-20-44-17.eu-west-3.compute.internal\",\n                \"uid\": \"e41a67974f6151102605d2ffb0e98fe0\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-scheduler}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container kube-scheduler\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-44-17.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-25T16:15:05Z\",\n            \"lastTimestamp\": \"2021-05-25T16:15:05Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-scheduler.16825b6ed8512487\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"83824d85-50d4-40e3-b557-11a847afcb67\",\n                \"resourceVersion\": \"4\",\n                \"creationTimestamp\": \"2021-05-25T16:15:47Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Lease\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-scheduler\",\n                \"uid\": 
\"40020720-4b9e-4075-b1cd-3ded4b7d8940\",\n                \"apiVersion\": \"coordination.k8s.io/v1\",\n                \"resourceVersion\": \"283\"\n            },\n            \"reason\": \"LeaderElection\",\n            \"message\": \"ip-172-20-44-17_ba210a5e-6fc2-4b48-a8d8-5b7eca2ff2ff became leader\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2021-05-25T16:15:47Z\",\n            \"lastTimestamp\": \"2021-05-25T16:15:47Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        }\n    ]\n}\n{\n    \"kind\": \"ReplicationControllerList\",\n    \"apiVersion\": \"v1\",\n    \"metadata\": {\n        \"resourceVersion\": \"2296\"\n    },\n    \"items\": []\n}\n{\n    \"kind\": \"ServiceList\",\n    \"apiVersion\": \"v1\",\n    \"metadata\": {\n        \"resourceVersion\": \"2302\"\n    },\n    \"items\": [\n        {\n            \"metadata\": {\n                \"name\": \"kube-dns\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"f41aa12b-b0ce-4c96-b13b-fb2b42f013ca\",\n                \"resourceVersion\": \"354\",\n                \"creationTimestamp\": \"2021-05-25T16:15:51Z\",\n                \"labels\": {\n                    \"addon.kops.k8s.io/name\": \"coredns.addons.k8s.io\",\n                    \"addon.kops.k8s.io/version\": \"1.8.3-kops.3\",\n                    \"app.kubernetes.io/managed-by\": \"kops\",\n                    \"k8s-addon\": \"coredns.addons.k8s.io\",\n                    \"k8s-app\": \"kube-dns\",\n                    \"kubernetes.io/cluster-service\": \"true\",\n                    \"kubernetes.io/name\": \"CoreDNS\"\n                },\n                \"annotations\": {\n                    \"kubectl.kubernetes.io/last-applied-configuration\": 
\"{\\\"apiVersion\\\":\\\"v1\\\",\\\"kind\\\":\\\"Service\\\",\\\"metadata\\\":{\\\"annotations\\\":{\\\"prometheus.io/port\\\":\\\"9153\\\",\\\"prometheus.io/scrape\\\":\\\"true\\\"},\\\"creationTimestamp\\\":null,\\\"labels\\\":{\\\"addon.kops.k8s.io/name\\\":\\\"coredns.addons.k8s.io\\\",\\\"addon.kops.k8s.io/version\\\":\\\"1.8.3-kops.3\\\",\\\"app.kubernetes.io/managed-by\\\":\\\"kops\\\",\\\"k8s-addon\\\":\\\"coredns.addons.k8s.io\\\",\\\"k8s-app\\\":\\\"kube-dns\\\",\\\"kubernetes.io/cluster-service\\\":\\\"true\\\",\\\"kubernetes.io/name\\\":\\\"CoreDNS\\\"},\\\"name\\\":\\\"kube-dns\\\",\\\"namespace\\\":\\\"kube-system\\\",\\\"resourceVersion\\\":\\\"0\\\"},\\\"spec\\\":{\\\"clusterIP\\\":\\\"100.64.0.10\\\",\\\"ports\\\":[{\\\"name\\\":\\\"dns\\\",\\\"port\\\":53,\\\"protocol\\\":\\\"UDP\\\"},{\\\"name\\\":\\\"dns-tcp\\\",\\\"port\\\":53,\\\"protocol\\\":\\\"TCP\\\"},{\\\"name\\\":\\\"metrics\\\",\\\"port\\\":9153,\\\"protocol\\\":\\\"TCP\\\"}],\\\"selector\\\":{\\\"k8s-app\\\":\\\"kube-dns\\\"}}}\\n\",\n                    \"prometheus.io/port\": \"9153\",\n                    \"prometheus.io/scrape\": \"true\"\n                }\n            },\n            \"spec\": {\n                \"ports\": [\n                    {\n                        \"name\": \"dns\",\n                        \"protocol\": \"UDP\",\n                        \"port\": 53,\n                        \"targetPort\": 53\n                    },\n                    {\n                        \"name\": \"dns-tcp\",\n                        \"protocol\": \"TCP\",\n                        \"port\": 53,\n                        \"targetPort\": 53\n                    },\n                    {\n                        \"name\": \"metrics\",\n                        \"protocol\": \"TCP\",\n                        \"port\": 9153,\n                        \"targetPort\": 9153\n                    }\n                ],\n                \"selector\": {\n                    \"k8s-app\": 
\"kube-dns\"\n                },\n                \"clusterIP\": \"100.64.0.10\",\n                \"clusterIPs\": [\n                    \"100.64.0.10\"\n                ],\n                \"type\": \"ClusterIP\",\n                \"sessionAffinity\": \"None\",\n                \"ipFamilies\": [\n                    \"IPv4\"\n                ],\n                \"ipFamilyPolicy\": \"SingleStack\"\n            },\n            \"status\": {\n                \"loadBalancer\": {}\n            }\n        }\n    ]\n}\n{\n    \"kind\": \"DaemonSetList\",\n    \"apiVersion\": \"apps/v1\",\n    \"metadata\": {\n        \"resourceVersion\": \"2303\"\n    },\n    \"items\": [\n        {\n            \"metadata\": {\n                \"name\": \"kops-controller\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"728c8cb5-28a5-4b43-85df-450f211f716c\",\n                \"resourceVersion\": \"483\",\n                \"generation\": 1,\n                \"creationTimestamp\": \"2021-05-25T16:15:49Z\",\n                \"labels\": {\n                    \"addon.kops.k8s.io/name\": \"kops-controller.addons.k8s.io\",\n                    \"addon.kops.k8s.io/version\": \"1.21.0-beta.2\",\n                    \"app.kubernetes.io/managed-by\": \"kops\",\n                    \"k8s-addon\": \"kops-controller.addons.k8s.io\",\n                    \"k8s-app\": \"kops-controller\",\n                    \"version\": \"v1.21.0-beta.2\"\n                },\n                \"annotations\": {\n                    \"deprecated.daemonset.template.generation\": \"1\",\n                    \"kubectl.kubernetes.io/last-applied-configuration\": 
\"{\\\"apiVersion\\\":\\\"apps/v1\\\",\\\"kind\\\":\\\"DaemonSet\\\",\\\"metadata\\\":{\\\"annotations\\\":{},\\\"creationTimestamp\\\":null,\\\"labels\\\":{\\\"addon.kops.k8s.io/name\\\":\\\"kops-controller.addons.k8s.io\\\",\\\"addon.kops.k8s.io/version\\\":\\\"1.21.0-beta.2\\\",\\\"app.kubernetes.io/managed-by\\\":\\\"kops\\\",\\\"k8s-addon\\\":\\\"kops-controller.addons.k8s.io\\\",\\\"k8s-app\\\":\\\"kops-controller\\\",\\\"version\\\":\\\"v1.21.0-beta.2\\\"},\\\"name\\\":\\\"kops-controller\\\",\\\"namespace\\\":\\\"kube-system\\\"},\\\"spec\\\":{\\\"selector\\\":{\\\"matchLabels\\\":{\\\"k8s-app\\\":\\\"kops-controller\\\"}},\\\"template\\\":{\\\"metadata\\\":{\\\"annotations\\\":{\\\"dns.alpha.kubernetes.io/internal\\\":\\\"kops-controller.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io\\\"},\\\"labels\\\":{\\\"k8s-addon\\\":\\\"kops-controller.addons.k8s.io\\\",\\\"k8s-app\\\":\\\"kops-controller\\\",\\\"version\\\":\\\"v1.21.0-beta.2\\\"}},\\\"spec\\\":{\\\"containers\\\":[{\\\"command\\\":[\\\"/kops-controller\\\",\\\"--v=2\\\",\\\"--conf=/etc/kubernetes/kops-controller/config/config.yaml\\\"],\\\"env\\\":[{\\\"name\\\":\\\"KUBERNETES_SERVICE_HOST\\\",\\\"value\\\":\\\"127.0.0.1\\\"}],\\\"image\\\":\\\"k8s.gcr.io/kops/kops-controller:1.21.0-beta.2\\\",\\\"name\\\":\\\"kops-controller\\\",\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"50m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"securityContext\\\":{\\\"runAsNonRoot\\\":true},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/kops-controller/config/\\\",\\\"name\\\":\\\"kops-controller-config\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/kops-controller/pki/\\\",\\\"name\\\":\\\"kops-controller-pki\\\"}]}],\\\"dnsPolicy\\\":\\\"Default\\\",\\\"hostNetwork\\\":true,\\\"nodeSelector\\\":{\\\"kops.k8s.io/kops-controller-pki\\\":\\\"\\\",\\\"node-role.kubernetes.io/master\\\":\\\"\\\"},\\\"priorityClassName\\\":\\\"system-node-critical\\\",\\\"serviceAccount\\\":\\\"kops-controller\\\",\\\"tolerat
ions\\\":[{\\\"key\\\":\\\"node-role.kubernetes.io/master\\\",\\\"operator\\\":\\\"Exists\\\"}],\\\"volumes\\\":[{\\\"configMap\\\":{\\\"name\\\":\\\"kops-controller\\\"},\\\"name\\\":\\\"kops-controller-config\\\"},{\\\"hostPath\\\":{\\\"path\\\":\\\"/etc/kubernetes/kops-controller/\\\",\\\"type\\\":\\\"Directory\\\"},\\\"name\\\":\\\"kops-controller-pki\\\"}]}},\\\"updateStrategy\\\":{\\\"type\\\":\\\"OnDelete\\\"}}}\\n\"\n                }\n            },\n            \"spec\": {\n                \"selector\": {\n                    \"matchLabels\": {\n                        \"k8s-app\": \"kops-controller\"\n                    }\n                },\n                \"template\": {\n                    \"metadata\": {\n                        \"creationTimestamp\": null,\n                        \"labels\": {\n                            \"k8s-addon\": \"kops-controller.addons.k8s.io\",\n                            \"k8s-app\": \"kops-controller\",\n                            \"version\": \"v1.21.0-beta.2\"\n                        },\n                        \"annotations\": {\n                            \"dns.alpha.kubernetes.io/internal\": \"kops-controller.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io\"\n                        }\n                    },\n                    \"spec\": {\n                        \"volumes\": [\n                            {\n                                \"name\": \"kops-controller-config\",\n                                \"configMap\": {\n                                    \"name\": \"kops-controller\",\n                                    \"defaultMode\": 420\n                                }\n                            },\n                            {\n                                \"name\": \"kops-controller-pki\",\n                                \"hostPath\": {\n                                    \"path\": \"/etc/kubernetes/kops-controller/\",\n                                    \"type\": 
\"Directory\"\n                                }\n                            }\n                        ],\n                        \"containers\": [\n                            {\n                                \"name\": \"kops-controller\",\n                                \"image\": \"k8s.gcr.io/kops/kops-controller:1.21.0-beta.2\",\n                                \"command\": [\n                                    \"/kops-controller\",\n                                    \"--v=2\",\n                                    \"--conf=/etc/kubernetes/kops-controller/config/config.yaml\"\n                                ],\n                                \"env\": [\n                                    {\n                                        \"name\": \"KUBERNETES_SERVICE_HOST\",\n                                        \"value\": \"127.0.0.1\"\n                                    }\n                                ],\n                                \"resources\": {\n                                    \"requests\": {\n                                        \"cpu\": \"50m\",\n                                        \"memory\": \"50Mi\"\n                                    }\n                                },\n                                \"volumeMounts\": [\n                                    {\n                                        \"name\": \"kops-controller-config\",\n                                        \"mountPath\": \"/etc/kubernetes/kops-controller/config/\"\n                                    },\n                                    {\n                                        \"name\": \"kops-controller-pki\",\n                                        \"mountPath\": \"/etc/kubernetes/kops-controller/pki/\"\n                                    }\n                                ],\n                                \"terminationMessagePath\": \"/dev/termination-log\",\n                                \"terminationMessagePolicy\": \"File\",\n       
                         \"imagePullPolicy\": \"IfNotPresent\",\n                                \"securityContext\": {\n                                    \"runAsNonRoot\": true\n                                }\n                            }\n                        ],\n                        \"restartPolicy\": \"Always\",\n                        \"terminationGracePeriodSeconds\": 30,\n                        \"dnsPolicy\": \"Default\",\n                        \"nodeSelector\": {\n                            \"kops.k8s.io/kops-controller-pki\": \"\",\n                            \"node-role.kubernetes.io/master\": \"\"\n                        },\n                        \"serviceAccountName\": \"kops-controller\",\n                        \"serviceAccount\": \"kops-controller\",\n                        \"hostNetwork\": true,\n                        \"securityContext\": {},\n                        \"schedulerName\": \"default-scheduler\",\n                        \"tolerations\": [\n                            {\n                                \"key\": \"node-role.kubernetes.io/master\",\n                                \"operator\": \"Exists\"\n                            }\n                        ],\n                        \"priorityClassName\": \"system-node-critical\"\n                    }\n                },\n                \"updateStrategy\": {\n                    \"type\": \"OnDelete\"\n                },\n                \"revisionHistoryLimit\": 10\n            },\n            \"status\": {\n                \"currentNumberScheduled\": 1,\n                \"numberMisscheduled\": 0,\n                \"desiredNumberScheduled\": 1,\n                \"numberReady\": 1,\n                \"observedGeneration\": 1,\n                \"updatedNumberScheduled\": 1,\n                \"numberAvailable\": 1\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds\",\n                \"namespace\": 
\"kube-system\",\n                \"uid\": \"edbe1f66-ffd4-4ea1-ab84-e523f094f198\",\n                \"resourceVersion\": \"855\",\n                \"generation\": 1,\n                \"creationTimestamp\": \"2021-05-25T16:15:49Z\",\n                \"labels\": {\n                    \"addon.kops.k8s.io/name\": \"networking.flannel\",\n                    \"addon.kops.k8s.io/version\": \"0.13.0-kops.1\",\n                    \"app\": \"flannel\",\n                    \"app.kubernetes.io/managed-by\": \"kops\",\n                    \"k8s-app\": \"flannel\",\n                    \"role.kubernetes.io/networking\": \"1\",\n                    \"tier\": \"node\"\n                },\n                \"annotations\": {\n                    \"deprecated.daemonset.template.generation\": \"1\",\n                    \"kubectl.kubernetes.io/last-applied-configuration\": \"{\\\"apiVersion\\\":\\\"apps/v1\\\",\\\"kind\\\":\\\"DaemonSet\\\",\\\"metadata\\\":{\\\"annotations\\\":{},\\\"creationTimestamp\\\":null,\\\"labels\\\":{\\\"addon.kops.k8s.io/name\\\":\\\"networking.flannel\\\",\\\"addon.kops.k8s.io/version\\\":\\\"0.13.0-kops.1\\\",\\\"app\\\":\\\"flannel\\\",\\\"app.kubernetes.io/managed-by\\\":\\\"kops\\\",\\\"k8s-app\\\":\\\"flannel\\\",\\\"role.kubernetes.io/networking\\\":\\\"1\\\",\\\"tier\\\":\\\"node\\\"},\\\"name\\\":\\\"kube-flannel-ds\\\",\\\"namespace\\\":\\\"kube-system\\\"},\\\"spec\\\":{\\\"selector\\\":{\\\"matchLabels\\\":{\\\"app\\\":\\\"flannel\\\",\\\"tier\\\":\\\"node\\\"}},\\\"template\\\":{\\\"metadata\\\":{\\\"labels\\\":{\\\"app\\\":\\\"flannel\\\",\\\"tier\\\":\\\"node\\\"}},\\\"spec\\\":{\\\"affinity\\\":{\\\"nodeAffinity\\\":{\\\"requiredDuringSchedulingIgnoredDuringExecution\\\":{\\\"nodeSelectorTerms\\\":[{\\\"matchExpressions\\\":[{\\\"key\\\":\\\"kubernetes.io/os\\\",\\\"operator\\\":\\\"In\\\",\\\"values\\\":[\\\"linux\\\"]}]}]}}},\\\"containers\\\":[{\\\"args\\\":[\\\"--ip-masq\\\",\\\"--kube-subnet-mgr\\\",\\\"--iptables-resync=5\\\"],\\\
"command\\\":[\\\"/opt/bin/flanneld\\\"],\\\"env\\\":[{\\\"name\\\":\\\"POD_NAME\\\",\\\"valueFrom\\\":{\\\"fieldRef\\\":{\\\"fieldPath\\\":\\\"metadata.name\\\"}}},{\\\"name\\\":\\\"POD_NAMESPACE\\\",\\\"valueFrom\\\":{\\\"fieldRef\\\":{\\\"fieldPath\\\":\\\"metadata.namespace\\\"}}}],\\\"image\\\":\\\"quay.io/coreos/flannel:v0.13.0\\\",\\\"name\\\":\\\"kube-flannel\\\",\\\"resources\\\":{\\\"limits\\\":{\\\"memory\\\":\\\"100Mi\\\"},\\\"requests\\\":{\\\"cpu\\\":\\\"100m\\\",\\\"memory\\\":\\\"100Mi\\\"}},\\\"securityContext\\\":{\\\"capabilities\\\":{\\\"add\\\":[\\\"NET_ADMIN\\\",\\\"NET_RAW\\\"]},\\\"privileged\\\":false},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/flannel\\\",\\\"name\\\":\\\"run\\\"},{\\\"mountPath\\\":\\\"/dev/net\\\",\\\"name\\\":\\\"dev-net\\\"},{\\\"mountPath\\\":\\\"/etc/kube-flannel/\\\",\\\"name\\\":\\\"flannel-cfg\\\"}]}],\\\"hostNetwork\\\":true,\\\"initContainers\\\":[{\\\"args\\\":[\\\"-f\\\",\\\"/etc/kube-flannel/cni-conf.json\\\",\\\"/etc/cni/net.d/10-flannel.conflist\\\"],\\\"command\\\":[\\\"cp\\\"],\\\"image\\\":\\\"quay.io/coreos/flannel:v0.13.0\\\",\\\"name\\\":\\\"install-cni\\\",\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"cni\\\"},{\\\"mountPath\\\":\\\"/etc/kube-flannel/\\\",\\\"name\\\":\\\"flannel-cfg\\\"}]}],\\\"priorityClassName\\\":\\\"system-node-critical\\\",\\\"serviceAccountName\\\":\\\"flannel\\\",\\\"tolerations\\\":[{\\\"operator\\\":\\\"Exists\\\"}],\\\"volumes\\\":[{\\\"hostPath\\\":{\\\"path\\\":\\\"/run/flannel\\\"},\\\"name\\\":\\\"run\\\"},{\\\"hostPath\\\":{\\\"path\\\":\\\"/dev/net\\\"},\\\"name\\\":\\\"dev-net\\\"},{\\\"hostPath\\\":{\\\"path\\\":\\\"/etc/cni/net.d\\\"},\\\"name\\\":\\\"cni\\\"},{\\\"configMap\\\":{\\\"name\\\":\\\"kube-flannel-cfg\\\"},\\\"name\\\":\\\"flannel-cfg\\\"}]}}}}\\n\"\n                }\n            },\n            \"spec\": {\n                \"selector\": {\n                    \"matchLabels\": {\n                        
\"app\": \"flannel\",\n                        \"tier\": \"node\"\n                    }\n                },\n                \"template\": {\n                    \"metadata\": {\n                        \"creationTimestamp\": null,\n                        \"labels\": {\n                            \"app\": \"flannel\",\n                            \"tier\": \"node\"\n                        }\n                    },\n                    \"spec\": {\n                        \"volumes\": [\n                            {\n                                \"name\": \"run\",\n                                \"hostPath\": {\n                                    \"path\": \"/run/flannel\",\n                                    \"type\": \"\"\n                                }\n                            },\n                            {\n                                \"name\": \"dev-net\",\n                                \"hostPath\": {\n                                    \"path\": \"/dev/net\",\n                                    \"type\": \"\"\n                                }\n                            },\n                            {\n                                \"name\": \"cni\",\n                                \"hostPath\": {\n                                    \"path\": \"/etc/cni/net.d\",\n                                    \"type\": \"\"\n                                }\n                            },\n                            {\n                                \"name\": \"flannel-cfg\",\n                                \"configMap\": {\n                                    \"name\": \"kube-flannel-cfg\",\n                                    \"defaultMode\": 420\n                                }\n                            }\n                        ],\n                        \"initContainers\": [\n                            {\n                                \"name\": \"install-cni\",\n                                \"image\": 
\"quay.io/coreos/flannel:v0.13.0\",\n                                \"command\": [\n                                    \"cp\"\n                                ],\n                                \"args\": [\n                                    \"-f\",\n                                    \"/etc/kube-flannel/cni-conf.json\",\n                                    \"/etc/cni/net.d/10-flannel.conflist\"\n                                ],\n                                \"resources\": {},\n                                \"volumeMounts\": [\n                                    {\n                                        \"name\": \"cni\",\n                                        \"mountPath\": \"/etc/cni/net.d\"\n                                    },\n                                    {\n                                        \"name\": \"flannel-cfg\",\n                                        \"mountPath\": \"/etc/kube-flannel/\"\n                                    }\n                                ],\n                                \"terminationMessagePath\": \"/dev/termination-log\",\n                                \"terminationMessagePolicy\": \"File\",\n                                \"imagePullPolicy\": \"IfNotPresent\"\n                            }\n                        ],\n                        \"containers\": [\n                            {\n                                \"name\": \"kube-flannel\",\n                                \"image\": \"quay.io/coreos/flannel:v0.13.0\",\n                                \"command\": [\n                                    \"/opt/bin/flanneld\"\n                                ],\n                                \"args\": [\n                                    \"--ip-masq\",\n                                    \"--kube-subnet-mgr\",\n                                    \"--iptables-resync=5\"\n                                ],\n                                \"env\": [\n                                 
   {\n                                        \"name\": \"POD_NAME\",\n                                        \"valueFrom\": {\n                                            \"fieldRef\": {\n                                                \"apiVersion\": \"v1\",\n                                                \"fieldPath\": \"metadata.name\"\n                                            }\n                                        }\n                                    },\n                                    {\n                                        \"name\": \"POD_NAMESPACE\",\n                                        \"valueFrom\": {\n                                            \"fieldRef\": {\n                                                \"apiVersion\": \"v1\",\n                                                \"fieldPath\": \"metadata.namespace\"\n                                            }\n                                        }\n                                    }\n                                ],\n                                \"resources\": {\n                                    \"limits\": {\n                                        \"memory\": \"100Mi\"\n                                    },\n                                    \"requests\": {\n                                        \"cpu\": \"100m\",\n                                        \"memory\": \"100Mi\"\n                                    }\n                                },\n                                \"volumeMounts\": [\n                                    {\n                                        \"name\": \"run\",\n                                        \"mountPath\": \"/run/flannel\"\n                                    },\n                                    {\n                                        \"name\": \"dev-net\",\n                                        \"mountPath\": \"/dev/net\"\n                                    },\n                                    {\n 
                                       \"name\": \"flannel-cfg\",\n                                        \"mountPath\": \"/etc/kube-flannel/\"\n                                    }\n                                ],\n                                \"terminationMessagePath\": \"/dev/termination-log\",\n                                \"terminationMessagePolicy\": \"File\",\n                                \"imagePullPolicy\": \"IfNotPresent\",\n                                \"securityContext\": {\n                                    \"capabilities\": {\n                                        \"add\": [\n                                            \"NET_ADMIN\",\n                                            \"NET_RAW\"\n                                        ]\n                                    },\n                                    \"privileged\": false\n                                }\n                            }\n                        ],\n                        \"restartPolicy\": \"Always\",\n                        \"terminationGracePeriodSeconds\": 30,\n                        \"dnsPolicy\": \"ClusterFirst\",\n                        \"serviceAccountName\": \"flannel\",\n                        \"serviceAccount\": \"flannel\",\n                        \"hostNetwork\": true,\n                        \"securityContext\": {},\n                        \"affinity\": {\n                            \"nodeAffinity\": {\n                                \"requiredDuringSchedulingIgnoredDuringExecution\": {\n                                    \"nodeSelectorTerms\": [\n                                        {\n                                            \"matchExpressions\": [\n                                                {\n                                                    \"key\": \"kubernetes.io/os\",\n                                                    \"operator\": \"In\",\n                                                    \"values\": [\n     
                                                   \"linux\"\n                                                    ]\n                                                }\n                                            ]\n                                        }\n                                    ]\n                                }\n                            }\n                        },\n                        \"schedulerName\": \"default-scheduler\",\n                        \"tolerations\": [\n                            {\n                                \"operator\": \"Exists\"\n                            }\n                        ],\n                        \"priorityClassName\": \"system-node-critical\"\n                    }\n                },\n                \"updateStrategy\": {\n                    \"type\": \"RollingUpdate\",\n                    \"rollingUpdate\": {\n                        \"maxUnavailable\": 1,\n                        \"maxSurge\": 0\n                    }\n                },\n                \"revisionHistoryLimit\": 10\n            },\n            \"status\": {\n                \"currentNumberScheduled\": 5,\n                \"numberMisscheduled\": 0,\n                \"desiredNumberScheduled\": 5,\n                \"numberReady\": 5,\n                \"observedGeneration\": 1,\n                \"updatedNumberScheduled\": 5,\n                \"numberAvailable\": 5\n            }\n        }\n    ]\n}\n{\n    \"kind\": \"DeploymentList\",\n    \"apiVersion\": \"apps/v1\",\n    \"metadata\": {\n        \"resourceVersion\": \"2306\"\n    },\n    \"items\": [\n        {\n            \"metadata\": {\n                \"name\": \"coredns\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"9c1dabbb-17a0-40c8-a70e-68d10a11200b\",\n                \"resourceVersion\": \"886\",\n                \"generation\": 2,\n                \"creationTimestamp\": \"2021-05-25T16:15:51Z\",\n                \"labels\": {\n  
                  \"addon.kops.k8s.io/name\": \"coredns.addons.k8s.io\",\n                    \"addon.kops.k8s.io/version\": \"1.8.3-kops.3\",\n                    \"app.kubernetes.io/managed-by\": \"kops\",\n                    \"k8s-addon\": \"coredns.addons.k8s.io\",\n                    \"k8s-app\": \"kube-dns\",\n                    \"kubernetes.io/cluster-service\": \"true\",\n                    \"kubernetes.io/name\": \"CoreDNS\"\n                },\n                \"annotations\": {\n                    \"deployment.kubernetes.io/revision\": \"1\",\n                    \"kubectl.kubernetes.io/last-applied-configuration\": \"{\\\"apiVersion\\\":\\\"apps/v1\\\",\\\"kind\\\":\\\"Deployment\\\",\\\"metadata\\\":{\\\"annotations\\\":{},\\\"creationTimestamp\\\":null,\\\"labels\\\":{\\\"addon.kops.k8s.io/name\\\":\\\"coredns.addons.k8s.io\\\",\\\"addon.kops.k8s.io/version\\\":\\\"1.8.3-kops.3\\\",\\\"app.kubernetes.io/managed-by\\\":\\\"kops\\\",\\\"k8s-addon\\\":\\\"coredns.addons.k8s.io\\\",\\\"k8s-app\\\":\\\"kube-dns\\\",\\\"kubernetes.io/cluster-service\\\":\\\"true\\\",\\\"kubernetes.io/name\\\":\\\"CoreDNS\\\"},\\\"name\\\":\\\"coredns\\\",\\\"namespace\\\":\\\"kube-system\\\"},\\\"spec\\\":{\\\"selector\\\":{\\\"matchLabels\\\":{\\\"k8s-app\\\":\\\"kube-dns\\\"}},\\\"strategy\\\":{\\\"rollingUpdate\\\":{\\\"maxSurge\\\":\\\"10%\\\",\\\"maxUnavailable\\\":1},\\\"type\\\":\\\"RollingUpdate\\\"},\\\"template\\\":{\\\"metadata\\\":{\\\"labels\\\":{\\\"k8s-app\\\":\\\"kube-dns\\\"}},\\\"spec\\\":{\\\"affinity\\\":{\\\"podAntiAffinity\\\":{\\\"preferredDuringSchedulingIgnoredDuringExecution\\\":[{\\\"podAffinityTerm\\\":{\\\"labelSelector\\\":{\\\"matchExpressions\\\":[{\\\"key\\\":\\\"k8s-app\\\",\\\"operator\\\":\\\"In\\\",\\\"values\\\":[\\\"kube-dns\\\"]}]},\\\"topologyKey\\\":\\\"kubernetes.io/hostname\\\"},\\\"weight\\\":100}]}},\\\"containers\\\":[{\\\"args\\\":[\\\"-conf\\\",\\\"/etc/coredns/Corefile\\\"],\\\"image\\\":\\\"coredns/coredns:1.8.3\\\",\\\
"imagePullPolicy\\\":\\\"IfNotPresent\\\",\\\"livenessProbe\\\":{\\\"failureThreshold\\\":5,\\\"httpGet\\\":{\\\"path\\\":\\\"/health\\\",\\\"port\\\":8080,\\\"scheme\\\":\\\"HTTP\\\"},\\\"initialDelaySeconds\\\":60,\\\"successThreshold\\\":1,\\\"timeoutSeconds\\\":5},\\\"name\\\":\\\"coredns\\\",\\\"ports\\\":[{\\\"containerPort\\\":53,\\\"name\\\":\\\"dns\\\",\\\"protocol\\\":\\\"UDP\\\"},{\\\"containerPort\\\":53,\\\"name\\\":\\\"dns-tcp\\\",\\\"protocol\\\":\\\"TCP\\\"},{\\\"containerPort\\\":9153,\\\"name\\\":\\\"metrics\\\",\\\"protocol\\\":\\\"TCP\\\"}],\\\"readinessProbe\\\":{\\\"httpGet\\\":{\\\"path\\\":\\\"/ready\\\",\\\"port\\\":8181,\\\"scheme\\\":\\\"HTTP\\\"}},\\\"resources\\\":{\\\"limits\\\":{\\\"memory\\\":\\\"170Mi\\\"},\\\"requests\\\":{\\\"cpu\\\":\\\"100m\\\",\\\"memory\\\":\\\"70Mi\\\"}},\\\"securityContext\\\":{\\\"allowPrivilegeEscalation\\\":false,\\\"capabilities\\\":{\\\"add\\\":[\\\"NET_BIND_SERVICE\\\"],\\\"drop\\\":[\\\"all\\\"]},\\\"readOnlyRootFilesystem\\\":true},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/coredns\\\",\\\"name\\\":\\\"config-volume\\\",\\\"readOnly\\\":true}]}],\\\"dnsPolicy\\\":\\\"Default\\\",\\\"nodeSelector\\\":{\\\"beta.kubernetes.io/os\\\":\\\"linux\\\"},\\\"priorityClassName\\\":\\\"system-cluster-critical\\\",\\\"serviceAccountName\\\":\\\"coredns\\\",\\\"tolerations\\\":[{\\\"key\\\":\\\"CriticalAddonsOnly\\\",\\\"operator\\\":\\\"Exists\\\"}],\\\"volumes\\\":[{\\\"configMap\\\":{\\\"items\\\":[{\\\"key\\\":\\\"Corefile\\\",\\\"path\\\":\\\"Corefile\\\"}],\\\"name\\\":\\\"coredns\\\"},\\\"name\\\":\\\"config-volume\\\"}]}}}}\\n\"\n                }\n            },\n            \"spec\": {\n                \"replicas\": 2,\n                \"selector\": {\n                    \"matchLabels\": {\n                        \"k8s-app\": \"kube-dns\"\n                    }\n                },\n                \"template\": {\n                    \"metadata\": {\n                        
\"creationTimestamp\": null,\n                        \"labels\": {\n                            \"k8s-app\": \"kube-dns\"\n                        }\n                    },\n                    \"spec\": {\n                        \"volumes\": [\n                            {\n                                \"name\": \"config-volume\",\n                                \"configMap\": {\n                                    \"name\": \"coredns\",\n                                    \"items\": [\n                                        {\n                                            \"key\": \"Corefile\",\n                                            \"path\": \"Corefile\"\n                                        }\n                                    ],\n                                    \"defaultMode\": 420\n                                }\n                            }\n                        ],\n                        \"containers\": [\n                            {\n                                \"name\": \"coredns\",\n                                \"image\": \"coredns/coredns:1.8.3\",\n                                \"args\": [\n                                    \"-conf\",\n                                    \"/etc/coredns/Corefile\"\n                                ],\n                                \"ports\": [\n                                    {\n                                        \"name\": \"dns\",\n                                        \"containerPort\": 53,\n                                        \"protocol\": \"UDP\"\n                                    },\n                                    {\n                                        \"name\": \"dns-tcp\",\n                                        \"containerPort\": 53,\n                                        \"protocol\": \"TCP\"\n                                    },\n                                    {\n                                        \"name\": \"metrics\",\n       
                                 \"containerPort\": 9153,\n                                        \"protocol\": \"TCP\"\n                                    }\n                                ],\n                                \"resources\": {\n                                    \"limits\": {\n                                        \"memory\": \"170Mi\"\n                                    },\n                                    \"requests\": {\n                                        \"cpu\": \"100m\",\n                                        \"memory\": \"70Mi\"\n                                    }\n                                },\n                                \"volumeMounts\": [\n                                    {\n                                        \"name\": \"config-volume\",\n                                        \"readOnly\": true,\n                                        \"mountPath\": \"/etc/coredns\"\n                                    }\n                                ],\n                                \"livenessProbe\": {\n                                    \"httpGet\": {\n                                        \"path\": \"/health\",\n                                        \"port\": 8080,\n                                        \"scheme\": \"HTTP\"\n                                    },\n                                    \"initialDelaySeconds\": 60,\n                                    \"timeoutSeconds\": 5,\n                                    \"periodSeconds\": 10,\n                                    \"successThreshold\": 1,\n                                    \"failureThreshold\": 5\n                                },\n                                \"readinessProbe\": {\n                                    \"httpGet\": {\n                                        \"path\": \"/ready\",\n                                        \"port\": 8181,\n                                        \"scheme\": \"HTTP\"\n               
                     },\n                                    \"timeoutSeconds\": 1,\n                                    \"periodSeconds\": 10,\n                                    \"successThreshold\": 1,\n                                    \"failureThreshold\": 3\n                                },\n                                \"terminationMessagePath\": \"/dev/termination-log\",\n                                \"terminationMessagePolicy\": \"File\",\n                                \"imagePullPolicy\": \"IfNotPresent\",\n                                \"securityContext\": {\n                                    \"capabilities\": {\n                                        \"add\": [\n                                            \"NET_BIND_SERVICE\"\n                                        ],\n                                        \"drop\": [\n                                            \"all\"\n                                        ]\n                                    },\n                                    \"readOnlyRootFilesystem\": true,\n                                    \"allowPrivilegeEscalation\": false\n                                }\n                            }\n                        ],\n                        \"restartPolicy\": \"Always\",\n                        \"terminationGracePeriodSeconds\": 30,\n                        \"dnsPolicy\": \"Default\",\n                        \"nodeSelector\": {\n                            \"beta.kubernetes.io/os\": \"linux\"\n                        },\n                        \"serviceAccountName\": \"coredns\",\n                        \"serviceAccount\": \"coredns\",\n                        \"securityContext\": {},\n                        \"affinity\": {\n                            \"podAntiAffinity\": {\n                                \"preferredDuringSchedulingIgnoredDuringExecution\": [\n                                    {\n                                        \"weight\": 100,\n  
                                      \"podAffinityTerm\": {\n                                            \"labelSelector\": {\n                                                \"matchExpressions\": [\n                                                    {\n                                                        \"key\": \"k8s-app\",\n                                                        \"operator\": \"In\",\n                                                        \"values\": [\n                                                            \"kube-dns\"\n                                                        ]\n                                                    }\n                                                ]\n                                            },\n                                            \"topologyKey\": \"kubernetes.io/hostname\"\n                                        }\n                                    }\n                                ]\n                            }\n                        },\n                        \"schedulerName\": \"default-scheduler\",\n                        \"tolerations\": [\n                            {\n                                \"key\": \"CriticalAddonsOnly\",\n                                \"operator\": \"Exists\"\n                            }\n                        ],\n                        \"priorityClassName\": \"system-cluster-critical\"\n                    }\n                },\n                \"strategy\": {\n                    \"type\": \"RollingUpdate\",\n                    \"rollingUpdate\": {\n                        \"maxUnavailable\": 1,\n                        \"maxSurge\": \"10%\"\n                    }\n                },\n                \"revisionHistoryLimit\": 10,\n                \"progressDeadlineSeconds\": 600\n            },\n            \"status\": {\n                \"observedGeneration\": 2,\n                \"replicas\": 2,\n                \"updatedReplicas\": 
2,\n                \"readyReplicas\": 2,\n                \"availableReplicas\": 2,\n                \"conditions\": [\n                    {\n                        \"type\": \"Available\",\n                        \"status\": \"True\",\n                        \"lastUpdateTime\": \"2021-05-25T16:17:48Z\",\n                        \"lastTransitionTime\": \"2021-05-25T16:17:48Z\",\n                        \"reason\": \"MinimumReplicasAvailable\",\n                        \"message\": \"Deployment has minimum availability.\"\n                    },\n                    {\n                        \"type\": \"Progressing\",\n                        \"status\": \"True\",\n                        \"lastUpdateTime\": \"2021-05-25T16:18:04Z\",\n                        \"lastTransitionTime\": \"2021-05-25T16:15:58Z\",\n                        \"reason\": \"NewReplicaSetAvailable\",\n                        \"message\": \"ReplicaSet \\\"coredns-f45c4bf76\\\" has successfully progressed.\"\n                    }\n                ]\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-autoscaler\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"8f8b575d-545d-4a32-92e5-f64c1d481b43\",\n                \"resourceVersion\": \"799\",\n                \"generation\": 1,\n                \"creationTimestamp\": \"2021-05-25T16:15:51Z\",\n                \"labels\": {\n                    \"addon.kops.k8s.io/name\": \"coredns.addons.k8s.io\",\n                    \"addon.kops.k8s.io/version\": \"1.8.3-kops.3\",\n                    \"app.kubernetes.io/managed-by\": \"kops\",\n                    \"k8s-addon\": \"coredns.addons.k8s.io\",\n                    \"k8s-app\": \"coredns-autoscaler\",\n                    \"kubernetes.io/cluster-service\": \"true\"\n                },\n                \"annotations\": {\n                    \"deployment.kubernetes.io/revision\": \"1\",\n                    
\"kubectl.kubernetes.io/last-applied-configuration\": \"{\\\"apiVersion\\\":\\\"apps/v1\\\",\\\"kind\\\":\\\"Deployment\\\",\\\"metadata\\\":{\\\"annotations\\\":{},\\\"creationTimestamp\\\":null,\\\"labels\\\":{\\\"addon.kops.k8s.io/name\\\":\\\"coredns.addons.k8s.io\\\",\\\"addon.kops.k8s.io/version\\\":\\\"1.8.3-kops.3\\\",\\\"app.kubernetes.io/managed-by\\\":\\\"kops\\\",\\\"k8s-addon\\\":\\\"coredns.addons.k8s.io\\\",\\\"k8s-app\\\":\\\"coredns-autoscaler\\\",\\\"kubernetes.io/cluster-service\\\":\\\"true\\\"},\\\"name\\\":\\\"coredns-autoscaler\\\",\\\"namespace\\\":\\\"kube-system\\\"},\\\"spec\\\":{\\\"selector\\\":{\\\"matchLabels\\\":{\\\"k8s-app\\\":\\\"coredns-autoscaler\\\"}},\\\"template\\\":{\\\"metadata\\\":{\\\"annotations\\\":{\\\"scheduler.alpha.kubernetes.io/critical-pod\\\":\\\"\\\"},\\\"labels\\\":{\\\"k8s-app\\\":\\\"coredns-autoscaler\\\"}},\\\"spec\\\":{\\\"containers\\\":[{\\\"command\\\":[\\\"/cluster-proportional-autoscaler\\\",\\\"--namespace=kube-system\\\",\\\"--configmap=coredns-autoscaler\\\",\\\"--target=Deployment/coredns\\\",\\\"--default-params={\\\\\\\"linear\\\\\\\":{\\\\\\\"coresPerReplica\\\\\\\":256,\\\\\\\"nodesPerReplica\\\\\\\":16,\\\\\\\"preventSinglePointFailure\\\\\\\":true}}\\\",\\\"--logtostderr=true\\\",\\\"--v=2\\\"],\\\"image\\\":\\\"k8s.gcr.io/cpa/cluster-proportional-autoscaler:1.8.3\\\",\\\"name\\\":\\\"autoscaler\\\",\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"10Mi\\\"}}}],\\\"priorityClassName\\\":\\\"system-cluster-critical\\\",\\\"serviceAccountName\\\":\\\"coredns-autoscaler\\\",\\\"tolerations\\\":[{\\\"key\\\":\\\"CriticalAddonsOnly\\\",\\\"operator\\\":\\\"Exists\\\"}]}}}}\\n\"\n                }\n            },\n            \"spec\": {\n                \"replicas\": 1,\n                \"selector\": {\n                    \"matchLabels\": {\n                        \"k8s-app\": \"coredns-autoscaler\"\n                    }\n                },\n                
\"template\": {\n                    \"metadata\": {\n                        \"creationTimestamp\": null,\n                        \"labels\": {\n                            \"k8s-app\": \"coredns-autoscaler\"\n                        },\n                        \"annotations\": {\n                            \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n                        }\n                    },\n                    \"spec\": {\n                        \"containers\": [\n                            {\n                                \"name\": \"autoscaler\",\n                                \"image\": \"k8s.gcr.io/cpa/cluster-proportional-autoscaler:1.8.3\",\n                                \"command\": [\n                                    \"/cluster-proportional-autoscaler\",\n                                    \"--namespace=kube-system\",\n                                    \"--configmap=coredns-autoscaler\",\n                                    \"--target=Deployment/coredns\",\n                                    \"--default-params={\\\"linear\\\":{\\\"coresPerReplica\\\":256,\\\"nodesPerReplica\\\":16,\\\"preventSinglePointFailure\\\":true}}\",\n                                    \"--logtostderr=true\",\n                                    \"--v=2\"\n                                ],\n                                \"resources\": {\n                                    \"requests\": {\n                                        \"cpu\": \"20m\",\n                                        \"memory\": \"10Mi\"\n                                    }\n                                },\n                                \"terminationMessagePath\": \"/dev/termination-log\",\n                                \"terminationMessagePolicy\": \"File\",\n                                \"imagePullPolicy\": \"IfNotPresent\"\n                            }\n                        ],\n                        \"restartPolicy\": \"Always\",\n                      
  \"terminationGracePeriodSeconds\": 30,\n                        \"dnsPolicy\": \"ClusterFirst\",\n                        \"serviceAccountName\": \"coredns-autoscaler\",\n                        \"serviceAccount\": \"coredns-autoscaler\",\n                        \"securityContext\": {},\n                        \"schedulerName\": \"default-scheduler\",\n                        \"tolerations\": [\n                            {\n                                \"key\": \"CriticalAddonsOnly\",\n                                \"operator\": \"Exists\"\n                            }\n                        ],\n                        \"priorityClassName\": \"system-cluster-critical\"\n                    }\n                },\n                \"strategy\": {\n                    \"type\": \"RollingUpdate\",\n                    \"rollingUpdate\": {\n                        \"maxUnavailable\": \"25%\",\n                        \"maxSurge\": \"25%\"\n                    }\n                },\n                \"revisionHistoryLimit\": 10,\n                \"progressDeadlineSeconds\": 600\n            },\n            \"status\": {\n                \"observedGeneration\": 1,\n                \"replicas\": 1,\n                \"updatedReplicas\": 1,\n                \"readyReplicas\": 1,\n                \"availableReplicas\": 1,\n                \"conditions\": [\n                    {\n                        \"type\": \"Available\",\n                        \"status\": \"True\",\n                        \"lastUpdateTime\": \"2021-05-25T16:17:44Z\",\n                        \"lastTransitionTime\": \"2021-05-25T16:17:44Z\",\n                        \"reason\": \"MinimumReplicasAvailable\",\n                        \"message\": \"Deployment has minimum availability.\"\n                    },\n                    {\n                        \"type\": \"Progressing\",\n                        \"status\": \"True\",\n                        \"lastUpdateTime\": 
\"2021-05-25T16:17:44Z\",\n                        \"lastTransitionTime\": \"2021-05-25T16:15:58Z\",\n                        \"reason\": \"NewReplicaSetAvailable\",\n                        \"message\": \"ReplicaSet \\\"coredns-autoscaler-6f594f4c58\\\" has successfully progressed.\"\n                    }\n                ]\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"dns-controller\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"fca60403-da67-4f35-afe2-16639d60f7c0\",\n                \"resourceVersion\": \"453\",\n                \"generation\": 1,\n                \"creationTimestamp\": \"2021-05-25T16:15:48Z\",\n                \"labels\": {\n                    \"addon.kops.k8s.io/name\": \"dns-controller.addons.k8s.io\",\n                    \"addon.kops.k8s.io/version\": \"1.21.0-beta.2\",\n                    \"app.kubernetes.io/managed-by\": \"kops\",\n                    \"k8s-addon\": \"dns-controller.addons.k8s.io\",\n                    \"k8s-app\": \"dns-controller\",\n                    \"version\": \"v1.21.0-beta.2\"\n                },\n                \"annotations\": {\n                    \"deployment.kubernetes.io/revision\": \"1\",\n                    \"kubectl.kubernetes.io/last-applied-configuration\": 
\"{\\\"apiVersion\\\":\\\"apps/v1\\\",\\\"kind\\\":\\\"Deployment\\\",\\\"metadata\\\":{\\\"annotations\\\":{},\\\"creationTimestamp\\\":null,\\\"labels\\\":{\\\"addon.kops.k8s.io/name\\\":\\\"dns-controller.addons.k8s.io\\\",\\\"addon.kops.k8s.io/version\\\":\\\"1.21.0-beta.2\\\",\\\"app.kubernetes.io/managed-by\\\":\\\"kops\\\",\\\"k8s-addon\\\":\\\"dns-controller.addons.k8s.io\\\",\\\"k8s-app\\\":\\\"dns-controller\\\",\\\"version\\\":\\\"v1.21.0-beta.2\\\"},\\\"name\\\":\\\"dns-controller\\\",\\\"namespace\\\":\\\"kube-system\\\"},\\\"spec\\\":{\\\"replicas\\\":1,\\\"selector\\\":{\\\"matchLabels\\\":{\\\"k8s-app\\\":\\\"dns-controller\\\"}},\\\"strategy\\\":{\\\"type\\\":\\\"Recreate\\\"},\\\"template\\\":{\\\"metadata\\\":{\\\"annotations\\\":{\\\"scheduler.alpha.kubernetes.io/critical-pod\\\":\\\"\\\"},\\\"labels\\\":{\\\"k8s-addon\\\":\\\"dns-controller.addons.k8s.io\\\",\\\"k8s-app\\\":\\\"dns-controller\\\",\\\"version\\\":\\\"v1.21.0-beta.2\\\"}},\\\"spec\\\":{\\\"containers\\\":[{\\\"command\\\":[\\\"/dns-controller\\\",\\\"--watch-ingress=false\\\",\\\"--dns=aws-route53\\\",\\\"--zone=*/ZEMLNXIIWQ0RV\\\",\\\"--zone=*/*\\\",\\\"-v=2\\\"],\\\"env\\\":[{\\\"name\\\":\\\"KUBERNETES_SERVICE_HOST\\\",\\\"value\\\":\\\"127.0.0.1\\\"}],\\\"image\\\":\\\"k8s.gcr.io/kops/dns-controller:1.21.0-beta.2\\\",\\\"name\\\":\\\"dns-controller\\\",\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"50m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"securityContext\\\":{\\\"runAsNonRoot\\\":true}}],\\\"dnsPolicy\\\":\\\"Default\\\",\\\"hostNetwork\\\":true,\\\"nodeSelector\\\":{\\\"node-role.kubernetes.io/master\\\":\\\"\\\"},\\\"priorityClassName\\\":\\\"system-cluster-critical\\\",\\\"serviceAccount\\\":\\\"dns-controller\\\",\\\"tolerations\\\":[{\\\"operator\\\":\\\"Exists\\\"}]}}}}\\n\"\n                }\n            },\n            \"spec\": {\n                \"replicas\": 1,\n                \"selector\": {\n                    \"matchLabels\": {\n                        
\"k8s-app\": \"dns-controller\"\n                    }\n                },\n                \"template\": {\n                    \"metadata\": {\n                        \"creationTimestamp\": null,\n                        \"labels\": {\n                            \"k8s-addon\": \"dns-controller.addons.k8s.io\",\n                            \"k8s-app\": \"dns-controller\",\n                            \"version\": \"v1.21.0-beta.2\"\n                        },\n                        \"annotations\": {\n                            \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n                        }\n                    },\n                    \"spec\": {\n                        \"containers\": [\n                            {\n                                \"name\": \"dns-controller\",\n                                \"image\": \"k8s.gcr.io/kops/dns-controller:1.21.0-beta.2\",\n                                \"command\": [\n                                    \"/dns-controller\",\n                                    \"--watch-ingress=false\",\n                                    \"--dns=aws-route53\",\n                                    \"--zone=*/ZEMLNXIIWQ0RV\",\n                                    \"--zone=*/*\",\n                                    \"-v=2\"\n                                ],\n                                \"env\": [\n                                    {\n                                        \"name\": \"KUBERNETES_SERVICE_HOST\",\n                                        \"value\": \"127.0.0.1\"\n                                    }\n                                ],\n                                \"resources\": {\n                                    \"requests\": {\n                                        \"cpu\": \"50m\",\n                                        \"memory\": \"50Mi\"\n                                    }\n                                },\n                                
\"terminationMessagePath\": \"/dev/termination-log\",\n                                \"terminationMessagePolicy\": \"File\",\n                                \"imagePullPolicy\": \"IfNotPresent\",\n                                \"securityContext\": {\n                                    \"runAsNonRoot\": true\n                                }\n                            }\n                        ],\n                        \"restartPolicy\": \"Always\",\n                        \"terminationGracePeriodSeconds\": 30,\n                        \"dnsPolicy\": \"Default\",\n                        \"nodeSelector\": {\n                            \"node-role.kubernetes.io/master\": \"\"\n                        },\n                        \"serviceAccountName\": \"dns-controller\",\n                        \"serviceAccount\": \"dns-controller\",\n                        \"hostNetwork\": true,\n                        \"securityContext\": {},\n                        \"schedulerName\": \"default-scheduler\",\n                        \"tolerations\": [\n                            {\n                                \"operator\": \"Exists\"\n                            }\n                        ],\n                        \"priorityClassName\": \"system-cluster-critical\"\n                    }\n                },\n                \"strategy\": {\n                    \"type\": \"Recreate\"\n                },\n                \"revisionHistoryLimit\": 10,\n                \"progressDeadlineSeconds\": 600\n            },\n            \"status\": {\n                \"observedGeneration\": 1,\n                \"replicas\": 1,\n                \"updatedReplicas\": 1,\n                \"readyReplicas\": 1,\n                \"availableReplicas\": 1,\n                \"conditions\": [\n                    {\n                        \"type\": \"Available\",\n                        \"status\": \"True\",\n                        \"lastUpdateTime\": 
\"2021-05-25T16:15:59Z\",\n                        \"lastTransitionTime\": \"2021-05-25T16:15:59Z\",\n                        \"reason\": \"MinimumReplicasAvailable\",\n                        \"message\": \"Deployment has minimum availability.\"\n                    },\n                    {\n                        \"type\": \"Progressing\",\n                        \"status\": \"True\",\n                        \"lastUpdateTime\": \"2021-05-25T16:15:59Z\",\n                        \"lastTransitionTime\": \"2021-05-25T16:15:58Z\",\n                        \"reason\": \"NewReplicaSetAvailable\",\n                        \"message\": \"ReplicaSet \\\"dns-controller-54665f7b78\\\" has successfully progressed.\"\n                    }\n                ]\n            }\n        }\n    ]\n}\n{\n    \"kind\": \"ReplicaSetList\",\n    \"apiVersion\": \"apps/v1\",\n    \"metadata\": {\n        \"resourceVersion\": \"2312\"\n    },\n    \"items\": [\n        {\n            \"metadata\": {\n                \"name\": \"coredns-autoscaler-6f594f4c58\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"a3792740-f566-46df-a970-cf5e57d9bd6b\",\n                \"resourceVersion\": \"798\",\n                \"generation\": 1,\n                \"creationTimestamp\": \"2021-05-25T16:15:58Z\",\n                \"labels\": {\n                    \"k8s-app\": \"coredns-autoscaler\",\n                    \"pod-template-hash\": \"6f594f4c58\"\n                },\n                \"annotations\": {\n                    \"deployment.kubernetes.io/desired-replicas\": \"1\",\n                    \"deployment.kubernetes.io/max-replicas\": \"2\",\n                    \"deployment.kubernetes.io/revision\": \"1\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"apps/v1\",\n                        \"kind\": \"Deployment\",\n                        \"name\": \"coredns-autoscaler\",\n            
            \"uid\": \"8f8b575d-545d-4a32-92e5-f64c1d481b43\",\n                        \"controller\": true,\n                        \"blockOwnerDeletion\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"replicas\": 1,\n                \"selector\": {\n                    \"matchLabels\": {\n                        \"k8s-app\": \"coredns-autoscaler\",\n                        \"pod-template-hash\": \"6f594f4c58\"\n                    }\n                },\n                \"template\": {\n                    \"metadata\": {\n                        \"creationTimestamp\": null,\n                        \"labels\": {\n                            \"k8s-app\": \"coredns-autoscaler\",\n                            \"pod-template-hash\": \"6f594f4c58\"\n                        },\n                        \"annotations\": {\n                            \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n                        }\n                    },\n                    \"spec\": {\n                        \"containers\": [\n                            {\n                                \"name\": \"autoscaler\",\n                                \"image\": \"k8s.gcr.io/cpa/cluster-proportional-autoscaler:1.8.3\",\n                                \"command\": [\n                                    \"/cluster-proportional-autoscaler\",\n                                    \"--namespace=kube-system\",\n                                    \"--configmap=coredns-autoscaler\",\n                                    \"--target=Deployment/coredns\",\n                                    \"--default-params={\\\"linear\\\":{\\\"coresPerReplica\\\":256,\\\"nodesPerReplica\\\":16,\\\"preventSinglePointFailure\\\":true}}\",\n                                    \"--logtostderr=true\",\n                                    \"--v=2\"\n                                ],\n                                \"resources\": {\n          
                          \"requests\": {\n                                        \"cpu\": \"20m\",\n                                        \"memory\": \"10Mi\"\n                                    }\n                                },\n                                \"terminationMessagePath\": \"/dev/termination-log\",\n                                \"terminationMessagePolicy\": \"File\",\n                                \"imagePullPolicy\": \"IfNotPresent\"\n                            }\n                        ],\n                        \"restartPolicy\": \"Always\",\n                        \"terminationGracePeriodSeconds\": 30,\n                        \"dnsPolicy\": \"ClusterFirst\",\n                        \"serviceAccountName\": \"coredns-autoscaler\",\n                        \"serviceAccount\": \"coredns-autoscaler\",\n                        \"securityContext\": {},\n                        \"schedulerName\": \"default-scheduler\",\n                        \"tolerations\": [\n                            {\n                                \"key\": \"CriticalAddonsOnly\",\n                                \"operator\": \"Exists\"\n                            }\n                        ],\n                        \"priorityClassName\": \"system-cluster-critical\"\n                    }\n                }\n            },\n            \"status\": {\n                \"replicas\": 1,\n                \"fullyLabeledReplicas\": 1,\n                \"readyReplicas\": 1,\n                \"availableReplicas\": 1,\n                \"observedGeneration\": 1\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-f45c4bf76\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"138b85f1-3021-4e0d-8a7d-d442e841cb7d\",\n                \"resourceVersion\": \"883\",\n                \"generation\": 2,\n                \"creationTimestamp\": \"2021-05-25T16:15:58Z\",\n                
\"labels\": {\n                    \"k8s-app\": \"kube-dns\",\n                    \"pod-template-hash\": \"f45c4bf76\"\n                },\n                \"annotations\": {\n                    \"deployment.kubernetes.io/desired-replicas\": \"2\",\n                    \"deployment.kubernetes.io/max-replicas\": \"3\",\n                    \"deployment.kubernetes.io/revision\": \"1\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"apps/v1\",\n                        \"kind\": \"Deployment\",\n                        \"name\": \"coredns\",\n                        \"uid\": \"9c1dabbb-17a0-40c8-a70e-68d10a11200b\",\n                        \"controller\": true,\n                        \"blockOwnerDeletion\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"replicas\": 2,\n                \"selector\": {\n                    \"matchLabels\": {\n                        \"k8s-app\": \"kube-dns\",\n                        \"pod-template-hash\": \"f45c4bf76\"\n                    }\n                },\n                \"template\": {\n                    \"metadata\": {\n                        \"creationTimestamp\": null,\n                        \"labels\": {\n                            \"k8s-app\": \"kube-dns\",\n                            \"pod-template-hash\": \"f45c4bf76\"\n                        }\n                    },\n                    \"spec\": {\n                        \"volumes\": [\n                            {\n                                \"name\": \"config-volume\",\n                                \"configMap\": {\n                                    \"name\": \"coredns\",\n                                    \"items\": [\n                                        {\n                                            \"key\": \"Corefile\",\n                                            \"path\": \"Corefile\"\n          
                              }\n                                    ],\n                                    \"defaultMode\": 420\n                                }\n                            }\n                        ],\n                        \"containers\": [\n                            {\n                                \"name\": \"coredns\",\n                                \"image\": \"coredns/coredns:1.8.3\",\n                                \"args\": [\n                                    \"-conf\",\n                                    \"/etc/coredns/Corefile\"\n                                ],\n                                \"ports\": [\n                                    {\n                                        \"name\": \"dns\",\n                                        \"containerPort\": 53,\n                                        \"protocol\": \"UDP\"\n                                    },\n                                    {\n                                        \"name\": \"dns-tcp\",\n                                        \"containerPort\": 53,\n                                        \"protocol\": \"TCP\"\n                                    },\n                                    {\n                                        \"name\": \"metrics\",\n                                        \"containerPort\": 9153,\n                                        \"protocol\": \"TCP\"\n                                    }\n                                ],\n                                \"resources\": {\n                                    \"limits\": {\n                                        \"memory\": \"170Mi\"\n                                    },\n                                    \"requests\": {\n                                        \"cpu\": \"100m\",\n                                        \"memory\": \"70Mi\"\n                                    }\n                                },\n                                
\"volumeMounts\": [\n                                    {\n                                        \"name\": \"config-volume\",\n                                        \"readOnly\": true,\n                                        \"mountPath\": \"/etc/coredns\"\n                                    }\n                                ],\n                                \"livenessProbe\": {\n                                    \"httpGet\": {\n                                        \"path\": \"/health\",\n                                        \"port\": 8080,\n                                        \"scheme\": \"HTTP\"\n                                    },\n                                    \"initialDelaySeconds\": 60,\n                                    \"timeoutSeconds\": 5,\n                                    \"periodSeconds\": 10,\n                                    \"successThreshold\": 1,\n                                    \"failureThreshold\": 5\n                                },\n                                \"readinessProbe\": {\n                                    \"httpGet\": {\n                                        \"path\": \"/ready\",\n                                        \"port\": 8181,\n                                        \"scheme\": \"HTTP\"\n                                    },\n                                    \"timeoutSeconds\": 1,\n                                    \"periodSeconds\": 10,\n                                    \"successThreshold\": 1,\n                                    \"failureThreshold\": 3\n                                },\n                                \"terminationMessagePath\": \"/dev/termination-log\",\n                                \"terminationMessagePolicy\": \"File\",\n                                \"imagePullPolicy\": \"IfNotPresent\",\n                                \"securityContext\": {\n                                    \"capabilities\": {\n                                  
      \"add\": [\n                                            \"NET_BIND_SERVICE\"\n                                        ],\n                                        \"drop\": [\n                                            \"all\"\n                                        ]\n                                    },\n                                    \"readOnlyRootFilesystem\": true,\n                                    \"allowPrivilegeEscalation\": false\n                                }\n                            }\n                        ],\n                        \"restartPolicy\": \"Always\",\n                        \"terminationGracePeriodSeconds\": 30,\n                        \"dnsPolicy\": \"Default\",\n                        \"nodeSelector\": {\n                            \"beta.kubernetes.io/os\": \"linux\"\n                        },\n                        \"serviceAccountName\": \"coredns\",\n                        \"serviceAccount\": \"coredns\",\n                        \"securityContext\": {},\n                        \"affinity\": {\n                            \"podAntiAffinity\": {\n                                \"preferredDuringSchedulingIgnoredDuringExecution\": [\n                                    {\n                                        \"weight\": 100,\n                                        \"podAffinityTerm\": {\n                                            \"labelSelector\": {\n                                                \"matchExpressions\": [\n                                                    {\n                                                        \"key\": \"k8s-app\",\n                                                        \"operator\": \"In\",\n                                                        \"values\": [\n                                                            \"kube-dns\"\n                                                        ]\n                                                    }\n          
                                      ]\n                                            },\n                                            \"topologyKey\": \"kubernetes.io/hostname\"\n                                        }\n                                    }\n                                ]\n                            }\n                        },\n                        \"schedulerName\": \"default-scheduler\",\n                        \"tolerations\": [\n                            {\n                                \"key\": \"CriticalAddonsOnly\",\n                                \"operator\": \"Exists\"\n                            }\n                        ],\n                        \"priorityClassName\": \"system-cluster-critical\"\n                    }\n                }\n            },\n            \"status\": {\n                \"replicas\": 2,\n                \"fullyLabeledReplicas\": 2,\n                \"readyReplicas\": 2,\n                \"availableReplicas\": 2,\n                \"observedGeneration\": 2\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"dns-controller-54665f7b78\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"67104e94-d746-41de-aafc-7cd19d6ae218\",\n                \"resourceVersion\": \"452\",\n                \"generation\": 1,\n                \"creationTimestamp\": \"2021-05-25T16:15:58Z\",\n                \"labels\": {\n                    \"k8s-addon\": \"dns-controller.addons.k8s.io\",\n                    \"k8s-app\": \"dns-controller\",\n                    \"pod-template-hash\": \"54665f7b78\",\n                    \"version\": \"v1.21.0-beta.2\"\n                },\n                \"annotations\": {\n                    \"deployment.kubernetes.io/desired-replicas\": \"1\",\n                    \"deployment.kubernetes.io/max-replicas\": \"1\",\n                    \"deployment.kubernetes.io/revision\": \"1\"\n                },\n        
        \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"apps/v1\",\n                        \"kind\": \"Deployment\",\n                        \"name\": \"dns-controller\",\n                        \"uid\": \"fca60403-da67-4f35-afe2-16639d60f7c0\",\n                        \"controller\": true,\n                        \"blockOwnerDeletion\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"replicas\": 1,\n                \"selector\": {\n                    \"matchLabels\": {\n                        \"k8s-app\": \"dns-controller\",\n                        \"pod-template-hash\": \"54665f7b78\"\n                    }\n                },\n                \"template\": {\n                    \"metadata\": {\n                        \"creationTimestamp\": null,\n                        \"labels\": {\n                            \"k8s-addon\": \"dns-controller.addons.k8s.io\",\n                            \"k8s-app\": \"dns-controller\",\n                            \"pod-template-hash\": \"54665f7b78\",\n                            \"version\": \"v1.21.0-beta.2\"\n                        },\n                        \"annotations\": {\n                            \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n                        }\n                    },\n                    \"spec\": {\n                        \"containers\": [\n                            {\n                                \"name\": \"dns-controller\",\n                                \"image\": \"k8s.gcr.io/kops/dns-controller:1.21.0-beta.2\",\n                                \"command\": [\n                                    \"/dns-controller\",\n                                    \"--watch-ingress=false\",\n                                    \"--dns=aws-route53\",\n                                    \"--zone=*/ZEMLNXIIWQ0RV\",\n                                    \"--zone=*/*\",\n     
                               \"-v=2\"\n                                ],\n                                \"env\": [\n                                    {\n                                        \"name\": \"KUBERNETES_SERVICE_HOST\",\n                                        \"value\": \"127.0.0.1\"\n                                    }\n                                ],\n                                \"resources\": {\n                                    \"requests\": {\n                                        \"cpu\": \"50m\",\n                                        \"memory\": \"50Mi\"\n                                    }\n                                },\n                                \"terminationMessagePath\": \"/dev/termination-log\",\n                                \"terminationMessagePolicy\": \"File\",\n                                \"imagePullPolicy\": \"IfNotPresent\",\n                                \"securityContext\": {\n                                    \"runAsNonRoot\": true\n                                }\n                            }\n                        ],\n                        \"restartPolicy\": \"Always\",\n                        \"terminationGracePeriodSeconds\": 30,\n                        \"dnsPolicy\": \"Default\",\n                        \"nodeSelector\": {\n                            \"node-role.kubernetes.io/master\": \"\"\n                        },\n                        \"serviceAccountName\": \"dns-controller\",\n                        \"serviceAccount\": \"dns-controller\",\n                        \"hostNetwork\": true,\n                        \"securityContext\": {},\n                        \"schedulerName\": \"default-scheduler\",\n                        \"tolerations\": [\n                            {\n                                \"operator\": \"Exists\"\n                            }\n                        ],\n                        \"priorityClassName\": 
\"system-cluster-critical\"\n                    }\n                }\n            },\n            \"status\": {\n                \"replicas\": 1,\n                \"fullyLabeledReplicas\": 1,\n                \"readyReplicas\": 1,\n                \"availableReplicas\": 1,\n                \"observedGeneration\": 1\n            }\n        }\n    ]\n}\n{\n    \"kind\": \"PodList\",\n    \"apiVersion\": \"v1\",\n    \"metadata\": {\n        \"resourceVersion\": \"2317\"\n    },\n    \"items\": [\n        {\n            \"metadata\": {\n                \"name\": \"coredns-autoscaler-6f594f4c58-llsrv\",\n                \"generateName\": \"coredns-autoscaler-6f594f4c58-\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"241c15dc-5c2a-4291-8758-65f0fa836688\",\n                \"resourceVersion\": \"797\",\n                \"creationTimestamp\": \"2021-05-25T16:15:58Z\",\n                \"labels\": {\n                    \"k8s-app\": \"coredns-autoscaler\",\n                    \"pod-template-hash\": \"6f594f4c58\"\n                },\n                \"annotations\": {\n                    \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"apps/v1\",\n                        \"kind\": \"ReplicaSet\",\n                        \"name\": \"coredns-autoscaler-6f594f4c58\",\n                        \"uid\": \"a3792740-f566-46df-a970-cf5e57d9bd6b\",\n                        \"controller\": true,\n                        \"blockOwnerDeletion\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"kube-api-access-85q8h\",\n                        \"projected\": {\n                            \"sources\": [\n                                {\n                                    
\"serviceAccountToken\": {\n                                        \"expirationSeconds\": 3607,\n                                        \"path\": \"token\"\n                                    }\n                                },\n                                {\n                                    \"configMap\": {\n                                        \"name\": \"kube-root-ca.crt\",\n                                        \"items\": [\n                                            {\n                                                \"key\": \"ca.crt\",\n                                                \"path\": \"ca.crt\"\n                                            }\n                                        ]\n                                    }\n                                },\n                                {\n                                    \"downwardAPI\": {\n                                        \"items\": [\n                                            {\n                                                \"path\": \"namespace\",\n                                                \"fieldRef\": {\n                                                    \"apiVersion\": \"v1\",\n                                                    \"fieldPath\": \"metadata.namespace\"\n                                                }\n                                            }\n                                        ]\n                                    }\n                                }\n                            ],\n                            \"defaultMode\": 420\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"autoscaler\",\n                        \"image\": \"k8s.gcr.io/cpa/cluster-proportional-autoscaler:1.8.3\",\n                        \"command\": [\n                            \"/cluster-proportional-autoscaler\",\n                      
      \"--namespace=kube-system\",\n                            \"--configmap=coredns-autoscaler\",\n                            \"--target=Deployment/coredns\",\n                            \"--default-params={\\\"linear\\\":{\\\"coresPerReplica\\\":256,\\\"nodesPerReplica\\\":16,\\\"preventSinglePointFailure\\\":true}}\",\n                            \"--logtostderr=true\",\n                            \"--v=2\"\n                        ],\n                        \"resources\": {\n                            \"requests\": {\n                                \"cpu\": \"20m\",\n                                \"memory\": \"10Mi\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"kube-api-access-85q8h\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n                            }\n                        ],\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\"\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"ClusterFirst\",\n                \"serviceAccountName\": \"coredns-autoscaler\",\n                \"serviceAccount\": \"coredns-autoscaler\",\n                \"nodeName\": \"ip-172-20-60-66.eu-west-3.compute.internal\",\n                \"securityContext\": {},\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"key\": \"CriticalAddonsOnly\",\n                        \"operator\": \"Exists\"\n                    },\n                    {\n                      
  \"key\": \"node.kubernetes.io/not-ready\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\",\n                        \"tolerationSeconds\": 300\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/unreachable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\",\n                        \"tolerationSeconds\": 300\n                    }\n                ],\n                \"priorityClassName\": \"system-cluster-critical\",\n                \"priority\": 2000000000,\n                \"enableServiceLinks\": true,\n                \"preemptionPolicy\": \"PreemptLowerPriority\"\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-05-25T16:17:41Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-05-25T16:17:44Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-05-25T16:17:44Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-05-25T16:17:41Z\"\n                    }\n                ],\n                \"hostIP\": 
\"172.20.60.66\",\n                \"podIP\": \"100.96.2.2\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"100.96.2.2\"\n                    }\n                ],\n                \"startTime\": \"2021-05-25T16:17:41Z\",\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"autoscaler\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2021-05-25T16:17:43Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"k8s.gcr.io/cpa/cluster-proportional-autoscaler:1.8.3\",\n                        \"imageID\": \"docker-pullable://k8s.gcr.io/cpa/cluster-proportional-autoscaler@sha256:67640771ad9fc56f109d5b01e020f0c858e7c890bb0eb15ba0ebd325df3285e7\",\n                        \"containerID\": \"docker://72fa7fdcc1e55cb6ba53066ee6f9d33a79397de7fe49138e2c4c5db80c693a35\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Burstable\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-f45c4bf76-9lfbv\",\n                \"generateName\": \"coredns-f45c4bf76-\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"a5ec6187-dded-403b-a091-473803043a11\",\n                \"resourceVersion\": \"880\",\n                \"creationTimestamp\": \"2021-05-25T16:17:44Z\",\n                \"labels\": {\n                    \"k8s-app\": \"kube-dns\",\n                    \"pod-template-hash\": \"f45c4bf76\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"apps/v1\",\n                        \"kind\": \"ReplicaSet\",\n         
               \"name\": \"coredns-f45c4bf76\",\n                        \"uid\": \"138b85f1-3021-4e0d-8a7d-d442e841cb7d\",\n                        \"controller\": true,\n                        \"blockOwnerDeletion\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"config-volume\",\n                        \"configMap\": {\n                            \"name\": \"coredns\",\n                            \"items\": [\n                                {\n                                    \"key\": \"Corefile\",\n                                    \"path\": \"Corefile\"\n                                }\n                            ],\n                            \"defaultMode\": 420\n                        }\n                    },\n                    {\n                        \"name\": \"kube-api-access-85799\",\n                        \"projected\": {\n                            \"sources\": [\n                                {\n                                    \"serviceAccountToken\": {\n                                        \"expirationSeconds\": 3607,\n                                        \"path\": \"token\"\n                                    }\n                                },\n                                {\n                                    \"configMap\": {\n                                        \"name\": \"kube-root-ca.crt\",\n                                        \"items\": [\n                                            {\n                                                \"key\": \"ca.crt\",\n                                                \"path\": \"ca.crt\"\n                                            }\n                                        ]\n                                    }\n                                },\n                                {\n                                    
\"downwardAPI\": {\n                                        \"items\": [\n                                            {\n                                                \"path\": \"namespace\",\n                                                \"fieldRef\": {\n                                                    \"apiVersion\": \"v1\",\n                                                    \"fieldPath\": \"metadata.namespace\"\n                                                }\n                                            }\n                                        ]\n                                    }\n                                }\n                            ],\n                            \"defaultMode\": 420\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"coredns\",\n                        \"image\": \"coredns/coredns:1.8.3\",\n                        \"args\": [\n                            \"-conf\",\n                            \"/etc/coredns/Corefile\"\n                        ],\n                        \"ports\": [\n                            {\n                                \"name\": \"dns\",\n                                \"containerPort\": 53,\n                                \"protocol\": \"UDP\"\n                            },\n                            {\n                                \"name\": \"dns-tcp\",\n                                \"containerPort\": 53,\n                                \"protocol\": \"TCP\"\n                            },\n                            {\n                                \"name\": \"metrics\",\n                                \"containerPort\": 9153,\n                                \"protocol\": \"TCP\"\n                            }\n                        ],\n                        \"resources\": {\n                            \"limits\": {\n                                
\"memory\": \"170Mi\"\n                            },\n                            \"requests\": {\n                                \"cpu\": \"100m\",\n                                \"memory\": \"70Mi\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"config-volume\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/etc/coredns\"\n                            },\n                            {\n                                \"name\": \"kube-api-access-85799\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n                            }\n                        ],\n                        \"livenessProbe\": {\n                            \"httpGet\": {\n                                \"path\": \"/health\",\n                                \"port\": 8080,\n                                \"scheme\": \"HTTP\"\n                            },\n                            \"initialDelaySeconds\": 60,\n                            \"timeoutSeconds\": 5,\n                            \"periodSeconds\": 10,\n                            \"successThreshold\": 1,\n                            \"failureThreshold\": 5\n                        },\n                        \"readinessProbe\": {\n                            \"httpGet\": {\n                                \"path\": \"/ready\",\n                                \"port\": 8181,\n                                \"scheme\": \"HTTP\"\n                            },\n                            \"timeoutSeconds\": 1,\n                            \"periodSeconds\": 10,\n                            \"successThreshold\": 1,\n                            \"failureThreshold\": 3\n                        },\n                        
\"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\",\n                        \"securityContext\": {\n                            \"capabilities\": {\n                                \"add\": [\n                                    \"NET_BIND_SERVICE\"\n                                ],\n                                \"drop\": [\n                                    \"all\"\n                                ]\n                            },\n                            \"readOnlyRootFilesystem\": true,\n                            \"allowPrivilegeEscalation\": false\n                        }\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"Default\",\n                \"nodeSelector\": {\n                    \"beta.kubernetes.io/os\": \"linux\"\n                },\n                \"serviceAccountName\": \"coredns\",\n                \"serviceAccount\": \"coredns\",\n                \"nodeName\": \"ip-172-20-40-186.eu-west-3.compute.internal\",\n                \"securityContext\": {},\n                \"affinity\": {\n                    \"podAntiAffinity\": {\n                        \"preferredDuringSchedulingIgnoredDuringExecution\": [\n                            {\n                                \"weight\": 100,\n                                \"podAffinityTerm\": {\n                                    \"labelSelector\": {\n                                        \"matchExpressions\": [\n                                            {\n                                                \"key\": \"k8s-app\",\n                                                \"operator\": \"In\",\n                                                \"values\": [\n                                                    \"kube-dns\"\n 
                                               ]\n                                            }\n                                        ]\n                                    },\n                                    \"topologyKey\": \"kubernetes.io/hostname\"\n                                }\n                            }\n                        ]\n                    }\n                },\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"key\": \"CriticalAddonsOnly\",\n                        \"operator\": \"Exists\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/not-ready\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\",\n                        \"tolerationSeconds\": 300\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/unreachable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\",\n                        \"tolerationSeconds\": 300\n                    }\n                ],\n                \"priorityClassName\": \"system-cluster-critical\",\n                \"priority\": 2000000000,\n                \"enableServiceLinks\": true,\n                \"preemptionPolicy\": \"PreemptLowerPriority\"\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-05-25T16:17:44Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n              
          \"lastTransitionTime\": \"2021-05-25T16:18:04Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-05-25T16:18:04Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-05-25T16:17:44Z\"\n                    }\n                ],\n                \"hostIP\": \"172.20.40.186\",\n                \"podIP\": \"100.96.1.2\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"100.96.1.2\"\n                    }\n                ],\n                \"startTime\": \"2021-05-25T16:17:44Z\",\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"coredns\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2021-05-25T16:18:03Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"coredns/coredns:1.8.3\",\n                        \"imageID\": \"docker-pullable://coredns/coredns@sha256:642ff9910da6ea9a8624b0234eef52af9ca75ecbec474c5507cb096bdfbae4e5\",\n                        \"containerID\": \"docker://d8c7ef18b6916d8a2ae7aad76faa6811175b8c945072aba7169f51b0f34f47a1\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Burstable\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-f45c4bf76-xv9v7\",\n           
     \"generateName\": \"coredns-f45c4bf76-\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"ad454ba2-7604-48f3-a92c-46c6e6468fba\",\n                \"resourceVersion\": \"813\",\n                \"creationTimestamp\": \"2021-05-25T16:15:58Z\",\n                \"labels\": {\n                    \"k8s-app\": \"kube-dns\",\n                    \"pod-template-hash\": \"f45c4bf76\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"apps/v1\",\n                        \"kind\": \"ReplicaSet\",\n                        \"name\": \"coredns-f45c4bf76\",\n                        \"uid\": \"138b85f1-3021-4e0d-8a7d-d442e841cb7d\",\n                        \"controller\": true,\n                        \"blockOwnerDeletion\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"config-volume\",\n                        \"configMap\": {\n                            \"name\": \"coredns\",\n                            \"items\": [\n                                {\n                                    \"key\": \"Corefile\",\n                                    \"path\": \"Corefile\"\n                                }\n                            ],\n                            \"defaultMode\": 420\n                        }\n                    },\n                    {\n                        \"name\": \"kube-api-access-vm6hh\",\n                        \"projected\": {\n                            \"sources\": [\n                                {\n                                    \"serviceAccountToken\": {\n                                        \"expirationSeconds\": 3607,\n                                        \"path\": \"token\"\n                                    }\n                                },\n                                
{\n                                    \"configMap\": {\n                                        \"name\": \"kube-root-ca.crt\",\n                                        \"items\": [\n                                            {\n                                                \"key\": \"ca.crt\",\n                                                \"path\": \"ca.crt\"\n                                            }\n                                        ]\n                                    }\n                                },\n                                {\n                                    \"downwardAPI\": {\n                                        \"items\": [\n                                            {\n                                                \"path\": \"namespace\",\n                                                \"fieldRef\": {\n                                                    \"apiVersion\": \"v1\",\n                                                    \"fieldPath\": \"metadata.namespace\"\n                                                }\n                                            }\n                                        ]\n                                    }\n                                }\n                            ],\n                            \"defaultMode\": 420\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"coredns\",\n                        \"image\": \"coredns/coredns:1.8.3\",\n                        \"args\": [\n                            \"-conf\",\n                            \"/etc/coredns/Corefile\"\n                        ],\n                        \"ports\": [\n                            {\n                                \"name\": \"dns\",\n                                \"containerPort\": 53,\n                                \"protocol\": \"UDP\"\n                            },\n     
                       {\n                                \"name\": \"dns-tcp\",\n                                \"containerPort\": 53,\n                                \"protocol\": \"TCP\"\n                            },\n                            {\n                                \"name\": \"metrics\",\n                                \"containerPort\": 9153,\n                                \"protocol\": \"TCP\"\n                            }\n                        ],\n                        \"resources\": {\n                            \"limits\": {\n                                \"memory\": \"170Mi\"\n                            },\n                            \"requests\": {\n                                \"cpu\": \"100m\",\n                                \"memory\": \"70Mi\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"config-volume\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/etc/coredns\"\n                            },\n                            {\n                                \"name\": \"kube-api-access-vm6hh\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n                            }\n                        ],\n                        \"livenessProbe\": {\n                            \"httpGet\": {\n                                \"path\": \"/health\",\n                                \"port\": 8080,\n                                \"scheme\": \"HTTP\"\n                            },\n                            \"initialDelaySeconds\": 60,\n                            \"timeoutSeconds\": 5,\n                            \"periodSeconds\": 10,\n                            \"successThreshold\": 1,\n                            
\"failureThreshold\": 5\n                        },\n                        \"readinessProbe\": {\n                            \"httpGet\": {\n                                \"path\": \"/ready\",\n                                \"port\": 8181,\n                                \"scheme\": \"HTTP\"\n                            },\n                            \"timeoutSeconds\": 1,\n                            \"periodSeconds\": 10,\n                            \"successThreshold\": 1,\n                            \"failureThreshold\": 3\n                        },\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\",\n                        \"securityContext\": {\n                            \"capabilities\": {\n                                \"add\": [\n                                    \"NET_BIND_SERVICE\"\n                                ],\n                                \"drop\": [\n                                    \"all\"\n                                ]\n                            },\n                            \"readOnlyRootFilesystem\": true,\n                            \"allowPrivilegeEscalation\": false\n                        }\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"Default\",\n                \"nodeSelector\": {\n                    \"beta.kubernetes.io/os\": \"linux\"\n                },\n                \"serviceAccountName\": \"coredns\",\n                \"serviceAccount\": \"coredns\",\n                \"nodeName\": \"ip-172-20-48-192.eu-west-3.compute.internal\",\n                \"securityContext\": {},\n                \"affinity\": {\n                    \"podAntiAffinity\": {\n                        
\"preferredDuringSchedulingIgnoredDuringExecution\": [\n                            {\n                                \"weight\": 100,\n                                \"podAffinityTerm\": {\n                                    \"labelSelector\": {\n                                        \"matchExpressions\": [\n                                            {\n                                                \"key\": \"k8s-app\",\n                                                \"operator\": \"In\",\n                                                \"values\": [\n                                                    \"kube-dns\"\n                                                ]\n                                            }\n                                        ]\n                                    },\n                                    \"topologyKey\": \"kubernetes.io/hostname\"\n                                }\n                            }\n                        ]\n                    }\n                },\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"key\": \"CriticalAddonsOnly\",\n                        \"operator\": \"Exists\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/not-ready\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\",\n                        \"tolerationSeconds\": 300\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/unreachable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\",\n                        \"tolerationSeconds\": 300\n                    }\n                ],\n                \"priorityClassName\": \"system-cluster-critical\",\n                \"priority\": 2000000000,\n                \"enableServiceLinks\": 
true,\n                \"preemptionPolicy\": \"PreemptLowerPriority\"\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-05-25T16:17:42Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-05-25T16:17:48Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-05-25T16:17:48Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-05-25T16:17:42Z\"\n                    }\n                ],\n                \"hostIP\": \"172.20.48.192\",\n                \"podIP\": \"100.96.3.2\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"100.96.3.2\"\n                    }\n                ],\n                \"startTime\": \"2021-05-25T16:17:42Z\",\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"coredns\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2021-05-25T16:17:46Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n    
                    \"restartCount\": 0,\n                        \"image\": \"coredns/coredns:1.8.3\",\n                        \"imageID\": \"docker-pullable://coredns/coredns@sha256:642ff9910da6ea9a8624b0234eef52af9ca75ecbec474c5507cb096bdfbae4e5\",\n                        \"containerID\": \"docker://1dd8904e27a33b56fdb8b49f31868e710fa887fbbc2bdbb9722987d50811084d\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Burstable\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"dns-controller-54665f7b78-4c8mj\",\n                \"generateName\": \"dns-controller-54665f7b78-\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"c0ea348c-a62f-4909-ba33-8d48e4fe51ef\",\n                \"resourceVersion\": \"451\",\n                \"creationTimestamp\": \"2021-05-25T16:15:58Z\",\n                \"labels\": {\n                    \"k8s-addon\": \"dns-controller.addons.k8s.io\",\n                    \"k8s-app\": \"dns-controller\",\n                    \"pod-template-hash\": \"54665f7b78\",\n                    \"version\": \"v1.21.0-beta.2\"\n                },\n                \"annotations\": {\n                    \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"apps/v1\",\n                        \"kind\": \"ReplicaSet\",\n                        \"name\": \"dns-controller-54665f7b78\",\n                        \"uid\": \"67104e94-d746-41de-aafc-7cd19d6ae218\",\n                        \"controller\": true,\n                        \"blockOwnerDeletion\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"kube-api-access-m2ctt\",\n                        
\"projected\": {\n                            \"sources\": [\n                                {\n                                    \"serviceAccountToken\": {\n                                        \"expirationSeconds\": 3607,\n                                        \"path\": \"token\"\n                                    }\n                                },\n                                {\n                                    \"configMap\": {\n                                        \"name\": \"kube-root-ca.crt\",\n                                        \"items\": [\n                                            {\n                                                \"key\": \"ca.crt\",\n                                                \"path\": \"ca.crt\"\n                                            }\n                                        ]\n                                    }\n                                },\n                                {\n                                    \"downwardAPI\": {\n                                        \"items\": [\n                                            {\n                                                \"path\": \"namespace\",\n                                                \"fieldRef\": {\n                                                    \"apiVersion\": \"v1\",\n                                                    \"fieldPath\": \"metadata.namespace\"\n                                                }\n                                            }\n                                        ]\n                                    }\n                                }\n                            ],\n                            \"defaultMode\": 420\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"dns-controller\",\n                        \"image\": 
\"k8s.gcr.io/kops/dns-controller:1.21.0-beta.2\",\n                        \"command\": [\n                            \"/dns-controller\",\n                            \"--watch-ingress=false\",\n                            \"--dns=aws-route53\",\n                            \"--zone=*/ZEMLNXIIWQ0RV\",\n                            \"--zone=*/*\",\n                            \"-v=2\"\n                        ],\n                        \"env\": [\n                            {\n                                \"name\": \"KUBERNETES_SERVICE_HOST\",\n                                \"value\": \"127.0.0.1\"\n                            }\n                        ],\n                        \"resources\": {\n                            \"requests\": {\n                                \"cpu\": \"50m\",\n                                \"memory\": \"50Mi\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"kube-api-access-m2ctt\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n                            }\n                        ],\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\",\n                        \"securityContext\": {\n                            \"runAsNonRoot\": true\n                        }\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"Default\",\n                \"nodeSelector\": {\n                    \"node-role.kubernetes.io/master\": \"\"\n                },\n                \"serviceAccountName\": \"dns-controller\",\n             
   \"serviceAccount\": \"dns-controller\",\n                \"nodeName\": \"ip-172-20-44-17.eu-west-3.compute.internal\",\n                \"hostNetwork\": true,\n                \"securityContext\": {},\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"operator\": \"Exists\"\n                    }\n                ],\n                \"priorityClassName\": \"system-cluster-critical\",\n                \"priority\": 2000000000,\n                \"enableServiceLinks\": true,\n                \"preemptionPolicy\": \"PreemptLowerPriority\"\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-05-25T16:15:58Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-05-25T16:15:59Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-05-25T16:15:59Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-05-25T16:15:58Z\"\n                    }\n                ],\n                \"hostIP\": \"172.20.44.17\",\n                \"podIP\": \"172.20.44.17\",\n                \"podIPs\": [\n                    
{\n                        \"ip\": \"172.20.44.17\"\n                    }\n                ],\n                \"startTime\": \"2021-05-25T16:15:58Z\",\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"dns-controller\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2021-05-25T16:15:59Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"k8s.gcr.io/kops/dns-controller:1.21.0-beta.2\",\n                        \"imageID\": \"docker://sha256:c2e3a9c443416db693aefd33e5c8bbc5533ffa0ca3b19ebb93b93edc81787c77\",\n                        \"containerID\": \"docker://1372bf80bc9606e999b2c274de9e6dff8899f8176e3031cb06e0cec635533088\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Burstable\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"etcd-manager-events-ip-172-20-44-17.eu-west-3.compute.internal\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"cc1f4b3d-1ba8-406f-8157-1f9ca6b772f4\",\n                \"resourceVersion\": \"554\",\n                \"creationTimestamp\": \"2021-05-25T16:16:35Z\",\n                \"labels\": {\n                    \"k8s-app\": \"etcd-manager-events\"\n                },\n                \"annotations\": {\n                    \"kubernetes.io/config.hash\": \"fa432a74851351548fcf6f961a301653\",\n                    \"kubernetes.io/config.mirror\": \"fa432a74851351548fcf6f961a301653\",\n                    \"kubernetes.io/config.seen\": \"2021-05-25T16:14:49.696728768Z\",\n                    \"kubernetes.io/config.source\": \"file\",\n                    
\"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"v1\",\n                        \"kind\": \"Node\",\n                        \"name\": \"ip-172-20-44-17.eu-west-3.compute.internal\",\n                        \"uid\": \"69c81210-6a3d-4b0b-801a-61a9e470b0c8\",\n                        \"controller\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"rootfs\",\n                        \"hostPath\": {\n                            \"path\": \"/\",\n                            \"type\": \"Directory\"\n                        }\n                    },\n                    {\n                        \"name\": \"run\",\n                        \"hostPath\": {\n                            \"path\": \"/run\",\n                            \"type\": \"DirectoryOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": \"pki\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/kubernetes/pki/etcd-manager-events\",\n                            \"type\": \"DirectoryOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": \"varlogetcd\",\n                        \"hostPath\": {\n                            \"path\": \"/var/log/etcd-events.log\",\n                            \"type\": \"FileOrCreate\"\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"etcd-manager\",\n                        \"image\": \"k8s.gcr.io/etcdadm/etcd-manager:3.0.20210430\",\n                        \"command\": [\n                            \"/bin/sh\",\n                          
  \"-c\",\n                            \"mkfifo /tmp/pipe; (tee -a /var/log/etcd.log \\u003c /tmp/pipe \\u0026 ) ; exec /etcd-manager --backup-store=s3://k8s-kops-prow/e2e-62197fe92d-da63e.test-cncf-aws.k8s.io/backups/etcd/events --client-urls=https://__name__:4002 --cluster-name=etcd-events --containerized=true --dns-suffix=.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io --etcd-insecure=false --grpc-port=3997 --insecure=false --peer-urls=https://__name__:2381 --quarantine-client-urls=https://__name__:3995 --v=6 --volume-name-tag=k8s.io/etcd/events --volume-provider=aws --volume-tag=k8s.io/etcd/events --volume-tag=k8s.io/role/master=1 --volume-tag=kubernetes.io/cluster/e2e-62197fe92d-da63e.test-cncf-aws.k8s.io=owned \\u003e /tmp/pipe 2\\u003e\\u00261\"\n                        ],\n                        \"resources\": {\n                            \"requests\": {\n                                \"cpu\": \"100m\",\n                                \"memory\": \"100Mi\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"rootfs\",\n                                \"mountPath\": \"/rootfs\"\n                            },\n                            {\n                                \"name\": \"run\",\n                                \"mountPath\": \"/run\"\n                            },\n                            {\n                                \"name\": \"pki\",\n                                \"mountPath\": \"/etc/kubernetes/pki/etcd-manager\"\n                            },\n                            {\n                                \"name\": \"varlogetcd\",\n                                \"mountPath\": \"/var/log/etcd.log\"\n                            }\n                        ],\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": 
\"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\",\n                        \"securityContext\": {\n                            \"privileged\": true\n                        }\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"ClusterFirst\",\n                \"nodeName\": \"ip-172-20-44-17.eu-west-3.compute.internal\",\n                \"hostNetwork\": true,\n                \"hostPID\": true,\n                \"securityContext\": {},\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"key\": \"CriticalAddonsOnly\",\n                        \"operator\": \"Exists\"\n                    },\n                    {\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    }\n                ],\n                \"priorityClassName\": \"system-cluster-critical\",\n                \"priority\": 2000000000,\n                \"enableServiceLinks\": true,\n                \"preemptionPolicy\": \"PreemptLowerPriority\"\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-05-25T16:14:50Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-05-25T16:15:15Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        
\"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-05-25T16:15:15Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-05-25T16:14:50Z\"\n                    }\n                ],\n                \"hostIP\": \"172.20.44.17\",\n                \"podIP\": \"172.20.44.17\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"172.20.44.17\"\n                    }\n                ],\n                \"startTime\": \"2021-05-25T16:14:50Z\",\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"etcd-manager\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2021-05-25T16:15:15Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"k8s.gcr.io/etcdadm/etcd-manager:3.0.20210430\",\n                        \"imageID\": \"docker-pullable://k8s.gcr.io/etcdadm/etcd-manager@sha256:ebb73d3d4a99da609f9e01c556cd9f9aa7a0aecba8f5bc5588d7c45eb38e3a7e\",\n                        \"containerID\": \"docker://f30facfdf3d890c89f22b0835fa9fa960f2a58ae2994a32de84360ddd28a86bc\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Burstable\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"etcd-manager-main-ip-172-20-44-17.eu-west-3.compute.internal\",\n                \"namespace\": \"kube-system\",\n                \"uid\": 
\"0ffaed2e-1e8b-4115-b8a3-224005dce277\",\n                \"resourceVersion\": \"578\",\n                \"creationTimestamp\": \"2021-05-25T16:16:39Z\",\n                \"labels\": {\n                    \"k8s-app\": \"etcd-manager-main\"\n                },\n                \"annotations\": {\n                    \"kubernetes.io/config.hash\": \"7065c93ba4ef5cca56900f1e582e5a4a\",\n                    \"kubernetes.io/config.mirror\": \"7065c93ba4ef5cca56900f1e582e5a4a\",\n                    \"kubernetes.io/config.seen\": \"2021-05-25T16:14:49.696730196Z\",\n                    \"kubernetes.io/config.source\": \"file\",\n                    \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"v1\",\n                        \"kind\": \"Node\",\n                        \"name\": \"ip-172-20-44-17.eu-west-3.compute.internal\",\n                        \"uid\": \"69c81210-6a3d-4b0b-801a-61a9e470b0c8\",\n                        \"controller\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"rootfs\",\n                        \"hostPath\": {\n                            \"path\": \"/\",\n                            \"type\": \"Directory\"\n                        }\n                    },\n                    {\n                        \"name\": \"run\",\n                        \"hostPath\": {\n                            \"path\": \"/run\",\n                            \"type\": \"DirectoryOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": \"pki\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/kubernetes/pki/etcd-manager-main\",\n                            \"type\": \"DirectoryOrCreate\"\n               
         }\n                    },\n                    {\n                        \"name\": \"varlogetcd\",\n                        \"hostPath\": {\n                            \"path\": \"/var/log/etcd.log\",\n                            \"type\": \"FileOrCreate\"\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"etcd-manager\",\n                        \"image\": \"k8s.gcr.io/etcdadm/etcd-manager:3.0.20210430\",\n                        \"command\": [\n                            \"/bin/sh\",\n                            \"-c\",\n                            \"mkfifo /tmp/pipe; (tee -a /var/log/etcd.log \\u003c /tmp/pipe \\u0026 ) ; exec /etcd-manager --backup-store=s3://k8s-kops-prow/e2e-62197fe92d-da63e.test-cncf-aws.k8s.io/backups/etcd/main --client-urls=https://__name__:4001 --cluster-name=etcd --containerized=true --dns-suffix=.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io --etcd-insecure=false --grpc-port=3996 --insecure=false --peer-urls=https://__name__:2380 --quarantine-client-urls=https://__name__:3994 --v=6 --volume-name-tag=k8s.io/etcd/main --volume-provider=aws --volume-tag=k8s.io/etcd/main --volume-tag=k8s.io/role/master=1 --volume-tag=kubernetes.io/cluster/e2e-62197fe92d-da63e.test-cncf-aws.k8s.io=owned \\u003e /tmp/pipe 2\\u003e\\u00261\"\n                        ],\n                        \"resources\": {\n                            \"requests\": {\n                                \"cpu\": \"200m\",\n                                \"memory\": \"100Mi\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"rootfs\",\n                                \"mountPath\": \"/rootfs\"\n                            },\n                            {\n                                \"name\": \"run\",\n              
                  \"mountPath\": \"/run\"\n                            },\n                            {\n                                \"name\": \"pki\",\n                                \"mountPath\": \"/etc/kubernetes/pki/etcd-manager\"\n                            },\n                            {\n                                \"name\": \"varlogetcd\",\n                                \"mountPath\": \"/var/log/etcd.log\"\n                            }\n                        ],\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\",\n                        \"securityContext\": {\n                            \"privileged\": true\n                        }\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"ClusterFirst\",\n                \"nodeName\": \"ip-172-20-44-17.eu-west-3.compute.internal\",\n                \"hostNetwork\": true,\n                \"hostPID\": true,\n                \"securityContext\": {},\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"key\": \"CriticalAddonsOnly\",\n                        \"operator\": \"Exists\"\n                    },\n                    {\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    }\n                ],\n                \"priorityClassName\": \"system-cluster-critical\",\n                \"priority\": 2000000000,\n                \"enableServiceLinks\": true,\n                \"preemptionPolicy\": \"PreemptLowerPriority\"\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                   
 {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-05-25T16:14:50Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-05-25T16:15:16Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-05-25T16:15:16Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-05-25T16:14:50Z\"\n                    }\n                ],\n                \"hostIP\": \"172.20.44.17\",\n                \"podIP\": \"172.20.44.17\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"172.20.44.17\"\n                    }\n                ],\n                \"startTime\": \"2021-05-25T16:14:50Z\",\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"etcd-manager\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2021-05-25T16:15:15Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"k8s.gcr.io/etcdadm/etcd-manager:3.0.20210430\",\n                        \"imageID\": 
\"docker-pullable://k8s.gcr.io/etcdadm/etcd-manager@sha256:ebb73d3d4a99da609f9e01c556cd9f9aa7a0aecba8f5bc5588d7c45eb38e3a7e\",\n                        \"containerID\": \"docker://64496c835c523fc00c5f9a914af2db43776e51eabd5fd25fc55a5f3065bccc1b\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Burstable\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kops-controller-l89sm\",\n                \"generateName\": \"kops-controller-\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"43eb1337-e146-4448-a96b-d7c09830ab69\",\n                \"resourceVersion\": \"482\",\n                \"creationTimestamp\": \"2021-05-25T16:16:11Z\",\n                \"labels\": {\n                    \"controller-revision-hash\": \"55455f8788\",\n                    \"k8s-addon\": \"kops-controller.addons.k8s.io\",\n                    \"k8s-app\": \"kops-controller\",\n                    \"pod-template-generation\": \"1\",\n                    \"version\": \"v1.21.0-beta.2\"\n                },\n                \"annotations\": {\n                    \"dns.alpha.kubernetes.io/internal\": \"kops-controller.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"apps/v1\",\n                        \"kind\": \"DaemonSet\",\n                        \"name\": \"kops-controller\",\n                        \"uid\": \"728c8cb5-28a5-4b43-85df-450f211f716c\",\n                        \"controller\": true,\n                        \"blockOwnerDeletion\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"kops-controller-config\",\n                        \"configMap\": {\n                            
\"name\": \"kops-controller\",\n                            \"defaultMode\": 420\n                        }\n                    },\n                    {\n                        \"name\": \"kops-controller-pki\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/kubernetes/kops-controller/\",\n                            \"type\": \"Directory\"\n                        }\n                    },\n                    {\n                        \"name\": \"kube-api-access-9zlf2\",\n                        \"projected\": {\n                            \"sources\": [\n                                {\n                                    \"serviceAccountToken\": {\n                                        \"expirationSeconds\": 3607,\n                                        \"path\": \"token\"\n                                    }\n                                },\n                                {\n                                    \"configMap\": {\n                                        \"name\": \"kube-root-ca.crt\",\n                                        \"items\": [\n                                            {\n                                                \"key\": \"ca.crt\",\n                                                \"path\": \"ca.crt\"\n                                            }\n                                        ]\n                                    }\n                                },\n                                {\n                                    \"downwardAPI\": {\n                                        \"items\": [\n                                            {\n                                                \"path\": \"namespace\",\n                                                \"fieldRef\": {\n                                                    \"apiVersion\": \"v1\",\n                                                    \"fieldPath\": \"metadata.namespace\"\n                   
                             }\n                                            }\n                                        ]\n                                    }\n                                }\n                            ],\n                            \"defaultMode\": 420\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"kops-controller\",\n                        \"image\": \"k8s.gcr.io/kops/kops-controller:1.21.0-beta.2\",\n                        \"command\": [\n                            \"/kops-controller\",\n                            \"--v=2\",\n                            \"--conf=/etc/kubernetes/kops-controller/config/config.yaml\"\n                        ],\n                        \"env\": [\n                            {\n                                \"name\": \"KUBERNETES_SERVICE_HOST\",\n                                \"value\": \"127.0.0.1\"\n                            }\n                        ],\n                        \"resources\": {\n                            \"requests\": {\n                                \"cpu\": \"50m\",\n                                \"memory\": \"50Mi\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"kops-controller-config\",\n                                \"mountPath\": \"/etc/kubernetes/kops-controller/config/\"\n                            },\n                            {\n                                \"name\": \"kops-controller-pki\",\n                                \"mountPath\": \"/etc/kubernetes/kops-controller/pki/\"\n                            },\n                            {\n                                \"name\": \"kube-api-access-9zlf2\",\n                                \"readOnly\": true,\n                                \"mountPath\": 
\"/var/run/secrets/kubernetes.io/serviceaccount\"\n                            }\n                        ],\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\",\n                        \"securityContext\": {\n                            \"runAsNonRoot\": true\n                        }\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"Default\",\n                \"nodeSelector\": {\n                    \"kops.k8s.io/kops-controller-pki\": \"\",\n                    \"node-role.kubernetes.io/master\": \"\"\n                },\n                \"serviceAccountName\": \"kops-controller\",\n                \"serviceAccount\": \"kops-controller\",\n                \"nodeName\": \"ip-172-20-44-17.eu-west-3.compute.internal\",\n                \"hostNetwork\": true,\n                \"securityContext\": {},\n                \"affinity\": {\n                    \"nodeAffinity\": {\n                        \"requiredDuringSchedulingIgnoredDuringExecution\": {\n                            \"nodeSelectorTerms\": [\n                                {\n                                    \"matchFields\": [\n                                        {\n                                            \"key\": \"metadata.name\",\n                                            \"operator\": \"In\",\n                                            \"values\": [\n                                                \"ip-172-20-44-17.eu-west-3.compute.internal\"\n                                            ]\n                                        }\n                                    ]\n                                }\n                            ]\n                        }\n                    }\n                
},\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"key\": \"node-role.kubernetes.io/master\",\n                        \"operator\": \"Exists\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/not-ready\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/unreachable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/disk-pressure\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/memory-pressure\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/pid-pressure\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/unschedulable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/network-unavailable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    }\n                ],\n                \"priorityClassName\": \"system-node-critical\",\n                \"priority\": 2000001000,\n                \"enableServiceLinks\": true,\n      
          \"preemptionPolicy\": \"PreemptLowerPriority\"\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-05-25T16:16:11Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-05-25T16:16:12Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-05-25T16:16:12Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-05-25T16:16:11Z\"\n                    }\n                ],\n                \"hostIP\": \"172.20.44.17\",\n                \"podIP\": \"172.20.44.17\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"172.20.44.17\"\n                    }\n                ],\n                \"startTime\": \"2021-05-25T16:16:11Z\",\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"kops-controller\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2021-05-25T16:16:11Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n      
                  \"restartCount\": 0,\n                        \"image\": \"k8s.gcr.io/kops/kops-controller:1.21.0-beta.2\",\n                        \"imageID\": \"docker://sha256:2c318213757ae0f89657b98f963d66bb4e4192ebf7977fd63e3a3dfc2d2db9b3\",\n                        \"containerID\": \"docker://2289755480ab07b8206de5b8d10314b1c5da701c60163154afe368c46663fab2\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Burstable\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-apiserver-ip-172-20-44-17.eu-west-3.compute.internal\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"03feca8b-e6a3-4102-b296-01344f6d596d\",\n                \"resourceVersion\": \"579\",\n                \"creationTimestamp\": \"2021-05-25T16:16:46Z\",\n                \"labels\": {\n                    \"k8s-app\": \"kube-apiserver\"\n                },\n                \"annotations\": {\n                    \"dns.alpha.kubernetes.io/external\": \"api.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io\",\n                    \"dns.alpha.kubernetes.io/internal\": \"api.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io\",\n                    \"kubernetes.io/config.hash\": \"5a0beb9695c07cad160508d741ccebc2\",\n                    \"kubernetes.io/config.mirror\": \"5a0beb9695c07cad160508d741ccebc2\",\n                    \"kubernetes.io/config.seen\": \"2021-05-25T16:14:49.696731167Z\",\n                    \"kubernetes.io/config.source\": \"file\",\n                    \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"v1\",\n                        \"kind\": \"Node\",\n                        \"name\": \"ip-172-20-44-17.eu-west-3.compute.internal\",\n                        \"uid\": 
\"69c81210-6a3d-4b0b-801a-61a9e470b0c8\",\n                        \"controller\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"logfile\",\n                        \"hostPath\": {\n                            \"path\": \"/var/log/kube-apiserver.log\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"etcssl\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/ssl\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"etcpkitls\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/pki/tls\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"etcpkica-trust\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/pki/ca-trust\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"usrsharessl\",\n                        \"hostPath\": {\n                            \"path\": \"/usr/share/ssl\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"usrssl\",\n                        \"hostPath\": {\n                            \"path\": \"/usr/ssl\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"usrlibssl\",\n                        \"hostPath\": {\n                            \"path\": \"/usr/lib/ssl\",\n                
            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"usrlocalopenssl\",\n                        \"hostPath\": {\n                            \"path\": \"/usr/local/openssl\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"varssl\",\n                        \"hostPath\": {\n                            \"path\": \"/var/ssl\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"etcopenssl\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/openssl\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"pki\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/kubernetes/pki/kube-apiserver\",\n                            \"type\": \"DirectoryOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": \"cloudconfig\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/kubernetes/cloud.config\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"srvkube\",\n                        \"hostPath\": {\n                            \"path\": \"/srv/kubernetes\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"srvsshproxy\",\n                        \"hostPath\": {\n                            \"path\": \"/srv/sshproxy\",\n                            \"type\": \"\"\n                        }\n                    
},\n                    {\n                        \"name\": \"healthcheck-secrets\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/kubernetes/kube-apiserver-healthcheck/secrets\",\n                            \"type\": \"Directory\"\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"kube-apiserver\",\n                        \"image\": \"k8s.gcr.io/kube-apiserver-amd64:v1.21.1\",\n                        \"command\": [\n                            \"/usr/local/bin/kube-apiserver\"\n                        ],\n                        \"args\": [\n                            \"--allow-privileged=true\",\n                            \"--anonymous-auth=false\",\n                            \"--api-audiences=kubernetes.svc.default\",\n                            \"--apiserver-count=1\",\n                            \"--authorization-mode=Node,RBAC\",\n                            \"--bind-address=0.0.0.0\",\n                            \"--client-ca-file=/srv/kubernetes/ca.crt\",\n                            \"--cloud-config=/etc/kubernetes/cloud.config\",\n                            \"--cloud-provider=aws\",\n                            \"--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,NodeRestriction,ResourceQuota\",\n                            \"--etcd-cafile=/etc/kubernetes/pki/kube-apiserver/etcd-ca.crt\",\n                            \"--etcd-certfile=/etc/kubernetes/pki/kube-apiserver/etcd-client.crt\",\n                            \"--etcd-keyfile=/etc/kubernetes/pki/kube-apiserver/etcd-client.key\",\n                            \"--etcd-servers-overrides=/events#https://127.0.0.1:4002\",\n                            \"--etcd-servers=https://127.0.0.1:4001\",\n       
                     \"--insecure-port=0\",\n                            \"--kubelet-client-certificate=/srv/kubernetes/kubelet-api.crt\",\n                            \"--kubelet-client-key=/srv/kubernetes/kubelet-api.key\",\n                            \"--kubelet-preferred-address-types=InternalIP,Hostname,ExternalIP\",\n                            \"--proxy-client-cert-file=/srv/kubernetes/apiserver-aggregator.crt\",\n                            \"--proxy-client-key-file=/srv/kubernetes/apiserver-aggregator.key\",\n                            \"--requestheader-allowed-names=aggregator\",\n                            \"--requestheader-client-ca-file=/srv/kubernetes/apiserver-aggregator-ca.crt\",\n                            \"--requestheader-extra-headers-prefix=X-Remote-Extra-\",\n                            \"--requestheader-group-headers=X-Remote-Group\",\n                            \"--requestheader-username-headers=X-Remote-User\",\n                            \"--secure-port=443\",\n                            \"--service-account-issuer=https://api.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io\",\n                            \"--service-account-jwks-uri=https://api.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io/openid/v1/jwks\",\n                            \"--service-account-key-file=/srv/kubernetes/service-account.key\",\n                            \"--service-account-signing-key-file=/srv/kubernetes/service-account.key\",\n                            \"--service-cluster-ip-range=100.64.0.0/13\",\n                            \"--storage-backend=etcd3\",\n                            \"--tls-cert-file=/srv/kubernetes/server.crt\",\n                            \"--tls-private-key-file=/srv/kubernetes/server.key\",\n                            \"--v=2\",\n                            \"--logtostderr=false\",\n                            \"--alsologtostderr\",\n                            \"--log-file=/var/log/kube-apiserver.log\"\n                 
       ],\n                        \"ports\": [\n                            {\n                                \"name\": \"https\",\n                                \"hostPort\": 443,\n                                \"containerPort\": 443,\n                                \"protocol\": \"TCP\"\n                            }\n                        ],\n                        \"resources\": {\n                            \"requests\": {\n                                \"cpu\": \"150m\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"logfile\",\n                                \"mountPath\": \"/var/log/kube-apiserver.log\"\n                            },\n                            {\n                                \"name\": \"etcssl\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/etc/ssl\"\n                            },\n                            {\n                                \"name\": \"etcpkitls\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/etc/pki/tls\"\n                            },\n                            {\n                                \"name\": \"etcpkica-trust\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/etc/pki/ca-trust\"\n                            },\n                            {\n                                \"name\": \"usrsharessl\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/usr/share/ssl\"\n                            },\n                            {\n                                \"name\": \"usrssl\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/usr/ssl\"\n                            },\n       
                     {\n                                \"name\": \"usrlibssl\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/usr/lib/ssl\"\n                            },\n                            {\n                                \"name\": \"usrlocalopenssl\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/usr/local/openssl\"\n                            },\n                            {\n                                \"name\": \"varssl\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/ssl\"\n                            },\n                            {\n                                \"name\": \"etcopenssl\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/etc/openssl\"\n                            },\n                            {\n                                \"name\": \"pki\",\n                                \"mountPath\": \"/etc/kubernetes/pki/kube-apiserver\"\n                            },\n                            {\n                                \"name\": \"cloudconfig\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/etc/kubernetes/cloud.config\"\n                            },\n                            {\n                                \"name\": \"srvkube\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/srv/kubernetes\"\n                            },\n                            {\n                                \"name\": \"srvsshproxy\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/srv/sshproxy\"\n                            }\n                        ],\n                        \"livenessProbe\": {\n                            
\"httpGet\": {\n                                \"path\": \"/healthz\",\n                                \"port\": 3990,\n                                \"host\": \"127.0.0.1\",\n                                \"scheme\": \"HTTP\"\n                            },\n                            \"initialDelaySeconds\": 45,\n                            \"timeoutSeconds\": 15,\n                            \"periodSeconds\": 10,\n                            \"successThreshold\": 1,\n                            \"failureThreshold\": 3\n                        },\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\"\n                    },\n                    {\n                        \"name\": \"healthcheck\",\n                        \"image\": \"k8s.gcr.io/kops/kube-apiserver-healthcheck:1.21.0-beta.2\",\n                        \"command\": [\n                            \"/kube-apiserver-healthcheck\"\n                        ],\n                        \"args\": [\n                            \"--ca-cert=/secrets/ca.crt\",\n                            \"--client-cert=/secrets/client.crt\",\n                            \"--client-key=/secrets/client.key\"\n                        ],\n                        \"resources\": {},\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"healthcheck-secrets\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/secrets\"\n                            }\n                        ],\n                        \"livenessProbe\": {\n                            \"httpGet\": {\n                                \"path\": \"/.kube-apiserver-healthcheck/healthz\",\n                                \"port\": 3990,\n                                \"host\": \"127.0.0.1\",\n 
                               \"scheme\": \"HTTP\"\n                            },\n                            \"initialDelaySeconds\": 5,\n                            \"timeoutSeconds\": 5,\n                            \"periodSeconds\": 10,\n                            \"successThreshold\": 1,\n                            \"failureThreshold\": 3\n                        },\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\"\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"ClusterFirst\",\n                \"nodeName\": \"ip-172-20-44-17.eu-west-3.compute.internal\",\n                \"hostNetwork\": true,\n                \"securityContext\": {},\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"key\": \"CriticalAddonsOnly\",\n                        \"operator\": \"Exists\"\n                    },\n                    {\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    }\n                ],\n                \"priorityClassName\": \"system-cluster-critical\",\n                \"priority\": 2000000000,\n                \"enableServiceLinks\": true,\n                \"preemptionPolicy\": \"PreemptLowerPriority\"\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-05-25T16:14:50Z\"\n                    },\n                    {\n 
                       \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-05-25T16:15:29Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-05-25T16:15:29Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-05-25T16:14:50Z\"\n                    }\n                ],\n                \"hostIP\": \"172.20.44.17\",\n                \"podIP\": \"172.20.44.17\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"172.20.44.17\"\n                    }\n                ],\n                \"startTime\": \"2021-05-25T16:14:50Z\",\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"healthcheck\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2021-05-25T16:15:07Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"k8s.gcr.io/kops/kube-apiserver-healthcheck:1.21.0-beta.2\",\n                        \"imageID\": \"docker://sha256:4c840a264e422ab3e08f6bfc5a50430cc7861159b9f8af3beda9b002604c71e3\",\n                        \"containerID\": \"docker://2204b848b2c016cf2386b2ebaa30e20dbfceed10381503e67544c8f7d96b8db6\",\n                        \"started\": true\n                    },\n                    {\n  
                      \"name\": \"kube-apiserver\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2021-05-25T16:15:28Z\"\n                            }\n                        },\n                        \"lastState\": {\n                            \"terminated\": {\n                                \"exitCode\": 1,\n                                \"reason\": \"Error\",\n                                \"startedAt\": \"2021-05-25T16:15:05Z\",\n                                \"finishedAt\": \"2021-05-25T16:15:27Z\",\n                                \"containerID\": \"docker://ede47a632e99222415e5617b4e37a2c1fa672cc85e23f025aa4c106226f2ee46\"\n                            }\n                        },\n                        \"ready\": true,\n                        \"restartCount\": 1,\n                        \"image\": \"k8s.gcr.io/kube-apiserver-amd64:v1.21.1\",\n                        \"imageID\": \"docker://sha256:771ffcf9ca634e37cbd3202fd86bd7e2df48ecba4067d1992541bfa00e88a9bb\",\n                        \"containerID\": \"docker://6145e396f6f966f608665f96ebcc523ac1078bbd8926660db757567a94e557f1\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Burstable\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-controller-manager-ip-172-20-44-17.eu-west-3.compute.internal\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"51e0e8c2-1db8-4c27-8c57-679b1fff16f2\",\n                \"resourceVersion\": \"505\",\n                \"creationTimestamp\": \"2021-05-25T16:16:09Z\",\n                \"labels\": {\n                    \"k8s-app\": \"kube-controller-manager\"\n                },\n                \"annotations\": {\n                    \"kubernetes.io/config.hash\": \"0ba0d57c7f44bb779ad73d1fee6aa348\",\n                    
\"kubernetes.io/config.mirror\": \"0ba0d57c7f44bb779ad73d1fee6aa348\",\n                    \"kubernetes.io/config.seen\": \"2021-05-25T16:14:49.696732198Z\",\n                    \"kubernetes.io/config.source\": \"file\",\n                    \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"v1\",\n                        \"kind\": \"Node\",\n                        \"name\": \"ip-172-20-44-17.eu-west-3.compute.internal\",\n                        \"uid\": \"69c81210-6a3d-4b0b-801a-61a9e470b0c8\",\n                        \"controller\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"logfile\",\n                        \"hostPath\": {\n                            \"path\": \"/var/log/kube-controller-manager.log\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"etcssl\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/ssl\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"etcpkitls\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/pki/tls\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"etcpkica-trust\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/pki/ca-trust\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"usrsharessl\",\n                        \"hostPath\": 
{\n                            \"path\": \"/usr/share/ssl\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"usrssl\",\n                        \"hostPath\": {\n                            \"path\": \"/usr/ssl\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"usrlibssl\",\n                        \"hostPath\": {\n                            \"path\": \"/usr/lib/ssl\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"usrlocalopenssl\",\n                        \"hostPath\": {\n                            \"path\": \"/usr/local/openssl\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"varssl\",\n                        \"hostPath\": {\n                            \"path\": \"/var/ssl\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"etcopenssl\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/openssl\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"cloudconfig\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/kubernetes/cloud.config\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"srvkube\",\n                        \"hostPath\": {\n                            \"path\": \"/srv/kubernetes\",\n                            \"type\": \"\"\n              
          }\n                    },\n                    {\n                        \"name\": \"varlibkcm\",\n                        \"hostPath\": {\n                            \"path\": \"/var/lib/kube-controller-manager\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"volplugins\",\n                        \"hostPath\": {\n                            \"path\": \"/usr/libexec/kubernetes/kubelet-plugins/volume/exec/\",\n                            \"type\": \"\"\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"kube-controller-manager\",\n                        \"image\": \"k8s.gcr.io/kube-controller-manager-amd64:v1.21.1\",\n                        \"command\": [\n                            \"/usr/local/bin/kube-controller-manager\"\n                        ],\n                        \"args\": [\n                            \"--allocate-node-cidrs=true\",\n                            \"--attach-detach-reconcile-sync-period=1m0s\",\n                            \"--cloud-config=/etc/kubernetes/cloud.config\",\n                            \"--cloud-provider=aws\",\n                            \"--cluster-cidr=100.96.0.0/11\",\n                            \"--cluster-name=e2e-62197fe92d-da63e.test-cncf-aws.k8s.io\",\n                            \"--cluster-signing-cert-file=/srv/kubernetes/ca.crt\",\n                            \"--cluster-signing-key-file=/srv/kubernetes/ca.key\",\n                            \"--configure-cloud-routes=false\",\n                            \"--flex-volume-plugin-dir=/usr/libexec/kubernetes/kubelet-plugins/volume/exec/\",\n                            \"--kubeconfig=/var/lib/kube-controller-manager/kubeconfig\",\n                            \"--leader-elect=true\",\n                            
\"--root-ca-file=/srv/kubernetes/ca.crt\",\n                            \"--service-account-private-key-file=/srv/kubernetes/service-account.key\",\n                            \"--use-service-account-credentials=true\",\n                            \"--v=2\",\n                            \"--logtostderr=false\",\n                            \"--alsologtostderr\",\n                            \"--log-file=/var/log/kube-controller-manager.log\"\n                        ],\n                        \"resources\": {\n                            \"requests\": {\n                                \"cpu\": \"100m\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"logfile\",\n                                \"mountPath\": \"/var/log/kube-controller-manager.log\"\n                            },\n                            {\n                                \"name\": \"etcssl\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/etc/ssl\"\n                            },\n                            {\n                                \"name\": \"etcpkitls\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/etc/pki/tls\"\n                            },\n                            {\n                                \"name\": \"etcpkica-trust\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/etc/pki/ca-trust\"\n                            },\n                            {\n                                \"name\": \"usrsharessl\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/usr/share/ssl\"\n                            },\n                            {\n                                \"name\": \"usrssl\",\n                        
        \"readOnly\": true,\n                                \"mountPath\": \"/usr/ssl\"\n                            },\n                            {\n                                \"name\": \"usrlibssl\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/usr/lib/ssl\"\n                            },\n                            {\n                                \"name\": \"usrlocalopenssl\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/usr/local/openssl\"\n                            },\n                            {\n                                \"name\": \"varssl\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/ssl\"\n                            },\n                            {\n                                \"name\": \"etcopenssl\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/etc/openssl\"\n                            },\n                            {\n                                \"name\": \"cloudconfig\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/etc/kubernetes/cloud.config\"\n                            },\n                            {\n                                \"name\": \"srvkube\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/srv/kubernetes\"\n                            },\n                            {\n                                \"name\": \"varlibkcm\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/lib/kube-controller-manager\"\n                            },\n                            {\n                                \"name\": \"volplugins\",\n                                \"mountPath\": 
\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec/\"\n                            }\n                        ],\n                        \"livenessProbe\": {\n                            \"httpGet\": {\n                                \"path\": \"/healthz\",\n                                \"port\": 10252,\n                                \"host\": \"127.0.0.1\",\n                                \"scheme\": \"HTTP\"\n                            },\n                            \"initialDelaySeconds\": 15,\n                            \"timeoutSeconds\": 15,\n                            \"periodSeconds\": 10,\n                            \"successThreshold\": 1,\n                            \"failureThreshold\": 3\n                        },\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\"\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"ClusterFirst\",\n                \"nodeName\": \"ip-172-20-44-17.eu-west-3.compute.internal\",\n                \"hostNetwork\": true,\n                \"securityContext\": {},\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"key\": \"CriticalAddonsOnly\",\n                        \"operator\": \"Exists\"\n                    },\n                    {\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    }\n                ],\n                \"priorityClassName\": \"system-cluster-critical\",\n                \"priority\": 2000000000,\n                \"enableServiceLinks\": true,\n                \"preemptionPolicy\": \"PreemptLowerPriority\"\n            },\n            
\"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-05-25T16:14:50Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-05-25T16:15:07Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-05-25T16:15:07Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-05-25T16:14:50Z\"\n                    }\n                ],\n                \"hostIP\": \"172.20.44.17\",\n                \"podIP\": \"172.20.44.17\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"172.20.44.17\"\n                    }\n                ],\n                \"startTime\": \"2021-05-25T16:14:50Z\",\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"kube-controller-manager\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2021-05-25T16:15:05Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": 
\"k8s.gcr.io/kube-controller-manager-amd64:v1.21.1\",\n                        \"imageID\": \"docker://sha256:e16544fd47b02fea6201a1c39f0ffae170968b6dd48ac2643c4db3cab0011ed4\",\n                        \"containerID\": \"docker://4130ec4cd66736d8ea2d60cc617448e023e1476706ebcf72d51457da705dfe0e\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Burstable\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds-6z7wj\",\n                \"generateName\": \"kube-flannel-ds-\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"287d5b82-81d1-433f-8a88-a0d61c004e64\",\n                \"resourceVersion\": \"463\",\n                \"creationTimestamp\": \"2021-05-25T16:15:58Z\",\n                \"labels\": {\n                    \"app\": \"flannel\",\n                    \"controller-revision-hash\": \"7f578449d6\",\n                    \"pod-template-generation\": \"1\",\n                    \"tier\": \"node\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"apps/v1\",\n                        \"kind\": \"DaemonSet\",\n                        \"name\": \"kube-flannel-ds\",\n                        \"uid\": \"edbe1f66-ffd4-4ea1-ab84-e523f094f198\",\n                        \"controller\": true,\n                        \"blockOwnerDeletion\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"run\",\n                        \"hostPath\": {\n                            \"path\": \"/run/flannel\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"dev-net\",\n                        \"hostPath\": {\n                
            \"path\": \"/dev/net\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"cni\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/cni/net.d\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"flannel-cfg\",\n                        \"configMap\": {\n                            \"name\": \"kube-flannel-cfg\",\n                            \"defaultMode\": 420\n                        }\n                    },\n                    {\n                        \"name\": \"kube-api-access-tp4dh\",\n                        \"projected\": {\n                            \"sources\": [\n                                {\n                                    \"serviceAccountToken\": {\n                                        \"expirationSeconds\": 3607,\n                                        \"path\": \"token\"\n                                    }\n                                },\n                                {\n                                    \"configMap\": {\n                                        \"name\": \"kube-root-ca.crt\",\n                                        \"items\": [\n                                            {\n                                                \"key\": \"ca.crt\",\n                                                \"path\": \"ca.crt\"\n                                            }\n                                        ]\n                                    }\n                                },\n                                {\n                                    \"downwardAPI\": {\n                                        \"items\": [\n                                            {\n                                                \"path\": \"namespace\",\n                 
                               \"fieldRef\": {\n                                                    \"apiVersion\": \"v1\",\n                                                    \"fieldPath\": \"metadata.namespace\"\n                                                }\n                                            }\n                                        ]\n                                    }\n                                }\n                            ],\n                            \"defaultMode\": 420\n                        }\n                    }\n                ],\n                \"initContainers\": [\n                    {\n                        \"name\": \"install-cni\",\n                        \"image\": \"quay.io/coreos/flannel:v0.13.0\",\n                        \"command\": [\n                            \"cp\"\n                        ],\n                        \"args\": [\n                            \"-f\",\n                            \"/etc/kube-flannel/cni-conf.json\",\n                            \"/etc/cni/net.d/10-flannel.conflist\"\n                        ],\n                        \"resources\": {},\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"cni\",\n                                \"mountPath\": \"/etc/cni/net.d\"\n                            },\n                            {\n                                \"name\": \"flannel-cfg\",\n                                \"mountPath\": \"/etc/kube-flannel/\"\n                            },\n                            {\n                                \"name\": \"kube-api-access-tp4dh\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n                            }\n                        ],\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        
\"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\"\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"kube-flannel\",\n                        \"image\": \"quay.io/coreos/flannel:v0.13.0\",\n                        \"command\": [\n                            \"/opt/bin/flanneld\"\n                        ],\n                        \"args\": [\n                            \"--ip-masq\",\n                            \"--kube-subnet-mgr\",\n                            \"--iptables-resync=5\"\n                        ],\n                        \"env\": [\n                            {\n                                \"name\": \"POD_NAME\",\n                                \"valueFrom\": {\n                                    \"fieldRef\": {\n                                        \"apiVersion\": \"v1\",\n                                        \"fieldPath\": \"metadata.name\"\n                                    }\n                                }\n                            },\n                            {\n                                \"name\": \"POD_NAMESPACE\",\n                                \"valueFrom\": {\n                                    \"fieldRef\": {\n                                        \"apiVersion\": \"v1\",\n                                        \"fieldPath\": \"metadata.namespace\"\n                                    }\n                                }\n                            }\n                        ],\n                        \"resources\": {\n                            \"limits\": {\n                                \"memory\": \"100Mi\"\n                            },\n                            \"requests\": {\n                                \"cpu\": \"100m\",\n                                \"memory\": \"100Mi\"\n                            }\n                        },\n         
               \"volumeMounts\": [\n                            {\n                                \"name\": \"run\",\n                                \"mountPath\": \"/run/flannel\"\n                            },\n                            {\n                                \"name\": \"dev-net\",\n                                \"mountPath\": \"/dev/net\"\n                            },\n                            {\n                                \"name\": \"flannel-cfg\",\n                                \"mountPath\": \"/etc/kube-flannel/\"\n                            },\n                            {\n                                \"name\": \"kube-api-access-tp4dh\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n                            }\n                        ],\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\",\n                        \"securityContext\": {\n                            \"capabilities\": {\n                                \"add\": [\n                                    \"NET_ADMIN\",\n                                    \"NET_RAW\"\n                                ]\n                            },\n                            \"privileged\": false\n                        }\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"ClusterFirst\",\n                \"serviceAccountName\": \"flannel\",\n                \"serviceAccount\": \"flannel\",\n                \"nodeName\": \"ip-172-20-44-17.eu-west-3.compute.internal\",\n                \"hostNetwork\": true,\n                \"securityContext\": {},\n                \"affinity\": {\n         
           \"nodeAffinity\": {\n                        \"requiredDuringSchedulingIgnoredDuringExecution\": {\n                            \"nodeSelectorTerms\": [\n                                {\n                                    \"matchFields\": [\n                                        {\n                                            \"key\": \"metadata.name\",\n                                            \"operator\": \"In\",\n                                            \"values\": [\n                                                \"ip-172-20-44-17.eu-west-3.compute.internal\"\n                                            ]\n                                        }\n                                    ]\n                                }\n                            ]\n                        }\n                    }\n                },\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"operator\": \"Exists\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/not-ready\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/unreachable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/disk-pressure\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/memory-pressure\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": 
\"node.kubernetes.io/pid-pressure\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/unschedulable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/network-unavailable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    }\n                ],\n                \"priorityClassName\": \"system-node-critical\",\n                \"priority\": 2000001000,\n                \"enableServiceLinks\": true,\n                \"preemptionPolicy\": \"PreemptLowerPriority\"\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-05-25T16:16:05Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-05-25T16:16:06Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-05-25T16:16:06Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        
\"lastTransitionTime\": \"2021-05-25T16:15:58Z\"\n                    }\n                ],\n                \"hostIP\": \"172.20.44.17\",\n                \"podIP\": \"172.20.44.17\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"172.20.44.17\"\n                    }\n                ],\n                \"startTime\": \"2021-05-25T16:15:58Z\",\n                \"initContainerStatuses\": [\n                    {\n                        \"name\": \"install-cni\",\n                        \"state\": {\n                            \"terminated\": {\n                                \"exitCode\": 0,\n                                \"reason\": \"Completed\",\n                                \"startedAt\": \"2021-05-25T16:16:04Z\",\n                                \"finishedAt\": \"2021-05-25T16:16:04Z\",\n                                \"containerID\": \"docker://63b0b4b1c6a04eaf5fcf7fa42acdbe7c0f66440307bb0c2a7942c211a0abc2af\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"quay.io/coreos/flannel:v0.13.0\",\n                        \"imageID\": \"docker-pullable://quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8\",\n                        \"containerID\": \"docker://63b0b4b1c6a04eaf5fcf7fa42acdbe7c0f66440307bb0c2a7942c211a0abc2af\"\n                    }\n                ],\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"kube-flannel\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2021-05-25T16:16:05Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                
        \"restartCount\": 0,\n                        \"image\": \"quay.io/coreos/flannel:v0.13.0\",\n                        \"imageID\": \"docker-pullable://quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8\",\n                        \"containerID\": \"docker://4dcd9dd774884284998acf5f7cb709431adfef855d2ba0e07cea9ca69137fa76\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Burstable\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds-78qfq\",\n                \"generateName\": \"kube-flannel-ds-\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"fbab5e5e-8494-472f-b51b-dfc82ed73e01\",\n                \"resourceVersion\": \"749\",\n                \"creationTimestamp\": \"2021-05-25T16:17:32Z\",\n                \"labels\": {\n                    \"app\": \"flannel\",\n                    \"controller-revision-hash\": \"7f578449d6\",\n                    \"pod-template-generation\": \"1\",\n                    \"tier\": \"node\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"apps/v1\",\n                        \"kind\": \"DaemonSet\",\n                        \"name\": \"kube-flannel-ds\",\n                        \"uid\": \"edbe1f66-ffd4-4ea1-ab84-e523f094f198\",\n                        \"controller\": true,\n                        \"blockOwnerDeletion\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"run\",\n                        \"hostPath\": {\n                            \"path\": \"/run/flannel\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        
\"name\": \"dev-net\",\n                        \"hostPath\": {\n                            \"path\": \"/dev/net\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"cni\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/cni/net.d\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"flannel-cfg\",\n                        \"configMap\": {\n                            \"name\": \"kube-flannel-cfg\",\n                            \"defaultMode\": 420\n                        }\n                    },\n                    {\n                        \"name\": \"kube-api-access-bmsgh\",\n                        \"projected\": {\n                            \"sources\": [\n                                {\n                                    \"serviceAccountToken\": {\n                                        \"expirationSeconds\": 3607,\n                                        \"path\": \"token\"\n                                    }\n                                },\n                                {\n                                    \"configMap\": {\n                                        \"name\": \"kube-root-ca.crt\",\n                                        \"items\": [\n                                            {\n                                                \"key\": \"ca.crt\",\n                                                \"path\": \"ca.crt\"\n                                            }\n                                        ]\n                                    }\n                                },\n                                {\n                                    \"downwardAPI\": {\n                                        \"items\": [\n                                            {\n          
                                      \"path\": \"namespace\",\n                                                \"fieldRef\": {\n                                                    \"apiVersion\": \"v1\",\n                                                    \"fieldPath\": \"metadata.namespace\"\n                                                }\n                                            }\n                                        ]\n                                    }\n                                }\n                            ],\n                            \"defaultMode\": 420\n                        }\n                    }\n                ],\n                \"initContainers\": [\n                    {\n                        \"name\": \"install-cni\",\n                        \"image\": \"quay.io/coreos/flannel:v0.13.0\",\n                        \"command\": [\n                            \"cp\"\n                        ],\n                        \"args\": [\n                            \"-f\",\n                            \"/etc/kube-flannel/cni-conf.json\",\n                            \"/etc/cni/net.d/10-flannel.conflist\"\n                        ],\n                        \"resources\": {},\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"cni\",\n                                \"mountPath\": \"/etc/cni/net.d\"\n                            },\n                            {\n                                \"name\": \"flannel-cfg\",\n                                \"mountPath\": \"/etc/kube-flannel/\"\n                            },\n                            {\n                                \"name\": \"kube-api-access-bmsgh\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n                            }\n                        ],\n                        
\"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\"\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"kube-flannel\",\n                        \"image\": \"quay.io/coreos/flannel:v0.13.0\",\n                        \"command\": [\n                            \"/opt/bin/flanneld\"\n                        ],\n                        \"args\": [\n                            \"--ip-masq\",\n                            \"--kube-subnet-mgr\",\n                            \"--iptables-resync=5\"\n                        ],\n                        \"env\": [\n                            {\n                                \"name\": \"POD_NAME\",\n                                \"valueFrom\": {\n                                    \"fieldRef\": {\n                                        \"apiVersion\": \"v1\",\n                                        \"fieldPath\": \"metadata.name\"\n                                    }\n                                }\n                            },\n                            {\n                                \"name\": \"POD_NAMESPACE\",\n                                \"valueFrom\": {\n                                    \"fieldRef\": {\n                                        \"apiVersion\": \"v1\",\n                                        \"fieldPath\": \"metadata.namespace\"\n                                    }\n                                }\n                            }\n                        ],\n                        \"resources\": {\n                            \"limits\": {\n                                \"memory\": \"100Mi\"\n                            },\n                            \"requests\": {\n                                \"cpu\": \"100m\",\n                                \"memory\": 
\"100Mi\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"run\",\n                                \"mountPath\": \"/run/flannel\"\n                            },\n                            {\n                                \"name\": \"dev-net\",\n                                \"mountPath\": \"/dev/net\"\n                            },\n                            {\n                                \"name\": \"flannel-cfg\",\n                                \"mountPath\": \"/etc/kube-flannel/\"\n                            },\n                            {\n                                \"name\": \"kube-api-access-bmsgh\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n                            }\n                        ],\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\",\n                        \"securityContext\": {\n                            \"capabilities\": {\n                                \"add\": [\n                                    \"NET_ADMIN\",\n                                    \"NET_RAW\"\n                                ]\n                            },\n                            \"privileged\": false\n                        }\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"ClusterFirst\",\n                \"serviceAccountName\": \"flannel\",\n                \"serviceAccount\": \"flannel\",\n                \"nodeName\": \"ip-172-20-48-192.eu-west-3.compute.internal\",\n                \"hostNetwork\": true,\n    
            \"securityContext\": {},\n                \"affinity\": {\n                    \"nodeAffinity\": {\n                        \"requiredDuringSchedulingIgnoredDuringExecution\": {\n                            \"nodeSelectorTerms\": [\n                                {\n                                    \"matchFields\": [\n                                        {\n                                            \"key\": \"metadata.name\",\n                                            \"operator\": \"In\",\n                                            \"values\": [\n                                                \"ip-172-20-48-192.eu-west-3.compute.internal\"\n                                            ]\n                                        }\n                                    ]\n                                }\n                            ]\n                        }\n                    }\n                },\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"operator\": \"Exists\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/not-ready\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/unreachable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/disk-pressure\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/memory-pressure\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n          
          },\n                    {\n                        \"key\": \"node.kubernetes.io/pid-pressure\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/unschedulable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/network-unavailable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    }\n                ],\n                \"priorityClassName\": \"system-node-critical\",\n                \"priority\": 2000001000,\n                \"enableServiceLinks\": true,\n                \"preemptionPolicy\": \"PreemptLowerPriority\"\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-05-25T16:17:38Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-05-25T16:17:39Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-05-25T16:17:39Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        
\"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-05-25T16:17:32Z\"\n                    }\n                ],\n                \"hostIP\": \"172.20.48.192\",\n                \"podIP\": \"172.20.48.192\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"172.20.48.192\"\n                    }\n                ],\n                \"startTime\": \"2021-05-25T16:17:32Z\",\n                \"initContainerStatuses\": [\n                    {\n                        \"name\": \"install-cni\",\n                        \"state\": {\n                            \"terminated\": {\n                                \"exitCode\": 0,\n                                \"reason\": \"Completed\",\n                                \"startedAt\": \"2021-05-25T16:17:38Z\",\n                                \"finishedAt\": \"2021-05-25T16:17:38Z\",\n                                \"containerID\": \"docker://bccff497aa3f225c1dfbe7777a64097b4b134be76d22f0e17a300777840ed540\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"quay.io/coreos/flannel:v0.13.0\",\n                        \"imageID\": \"docker-pullable://quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8\",\n                        \"containerID\": \"docker://bccff497aa3f225c1dfbe7777a64097b4b134be76d22f0e17a300777840ed540\"\n                    }\n                ],\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"kube-flannel\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2021-05-25T16:17:38Z\"\n                            }\n                        },\n                        \"lastState\": {},\n     
                   \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"quay.io/coreos/flannel:v0.13.0\",\n                        \"imageID\": \"docker-pullable://quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8\",\n                        \"containerID\": \"docker://eaccda13552f2d9767e690539c22c10d5f65737a859cf10877b5e5c2a2e5f347\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Burstable\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds-dfzmv\",\n                \"generateName\": \"kube-flannel-ds-\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"7d82a0c3-754d-433f-a424-52dd13510a09\",\n                \"resourceVersion\": \"854\",\n                \"creationTimestamp\": \"2021-05-25T16:17:21Z\",\n                \"labels\": {\n                    \"app\": \"flannel\",\n                    \"controller-revision-hash\": \"7f578449d6\",\n                    \"pod-template-generation\": \"1\",\n                    \"tier\": \"node\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"apps/v1\",\n                        \"kind\": \"DaemonSet\",\n                        \"name\": \"kube-flannel-ds\",\n                        \"uid\": \"edbe1f66-ffd4-4ea1-ab84-e523f094f198\",\n                        \"controller\": true,\n                        \"blockOwnerDeletion\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"run\",\n                        \"hostPath\": {\n                            \"path\": \"/run/flannel\",\n                            \"type\": \"\"\n                        }\n                    
},\n                    {\n                        \"name\": \"dev-net\",\n                        \"hostPath\": {\n                            \"path\": \"/dev/net\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"cni\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/cni/net.d\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"flannel-cfg\",\n                        \"configMap\": {\n                            \"name\": \"kube-flannel-cfg\",\n                            \"defaultMode\": 420\n                        }\n                    },\n                    {\n                        \"name\": \"kube-api-access-2hnlf\",\n                        \"projected\": {\n                            \"sources\": [\n                                {\n                                    \"serviceAccountToken\": {\n                                        \"expirationSeconds\": 3607,\n                                        \"path\": \"token\"\n                                    }\n                                },\n                                {\n                                    \"configMap\": {\n                                        \"name\": \"kube-root-ca.crt\",\n                                        \"items\": [\n                                            {\n                                                \"key\": \"ca.crt\",\n                                                \"path\": \"ca.crt\"\n                                            }\n                                        ]\n                                    }\n                                },\n                                {\n                                    \"downwardAPI\": {\n                                        \"items\": [\n      
                                      {\n                                                \"path\": \"namespace\",\n                                                \"fieldRef\": {\n                                                    \"apiVersion\": \"v1\",\n                                                    \"fieldPath\": \"metadata.namespace\"\n                                                }\n                                            }\n                                        ]\n                                    }\n                                }\n                            ],\n                            \"defaultMode\": 420\n                        }\n                    }\n                ],\n                \"initContainers\": [\n                    {\n                        \"name\": \"install-cni\",\n                        \"image\": \"quay.io/coreos/flannel:v0.13.0\",\n                        \"command\": [\n                            \"cp\"\n                        ],\n                        \"args\": [\n                            \"-f\",\n                            \"/etc/kube-flannel/cni-conf.json\",\n                            \"/etc/cni/net.d/10-flannel.conflist\"\n                        ],\n                        \"resources\": {},\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"cni\",\n                                \"mountPath\": \"/etc/cni/net.d\"\n                            },\n                            {\n                                \"name\": \"flannel-cfg\",\n                                \"mountPath\": \"/etc/kube-flannel/\"\n                            },\n                            {\n                                \"name\": \"kube-api-access-2hnlf\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n                            }\n           
             ],\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\"\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"kube-flannel\",\n                        \"image\": \"quay.io/coreos/flannel:v0.13.0\",\n                        \"command\": [\n                            \"/opt/bin/flanneld\"\n                        ],\n                        \"args\": [\n                            \"--ip-masq\",\n                            \"--kube-subnet-mgr\",\n                            \"--iptables-resync=5\"\n                        ],\n                        \"env\": [\n                            {\n                                \"name\": \"POD_NAME\",\n                                \"valueFrom\": {\n                                    \"fieldRef\": {\n                                        \"apiVersion\": \"v1\",\n                                        \"fieldPath\": \"metadata.name\"\n                                    }\n                                }\n                            },\n                            {\n                                \"name\": \"POD_NAMESPACE\",\n                                \"valueFrom\": {\n                                    \"fieldRef\": {\n                                        \"apiVersion\": \"v1\",\n                                        \"fieldPath\": \"metadata.namespace\"\n                                    }\n                                }\n                            }\n                        ],\n                        \"resources\": {\n                            \"limits\": {\n                                \"memory\": \"100Mi\"\n                            },\n                            \"requests\": {\n                                \"cpu\": \"100m\",\n   
                             \"memory\": \"100Mi\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"run\",\n                                \"mountPath\": \"/run/flannel\"\n                            },\n                            {\n                                \"name\": \"dev-net\",\n                                \"mountPath\": \"/dev/net\"\n                            },\n                            {\n                                \"name\": \"flannel-cfg\",\n                                \"mountPath\": \"/etc/kube-flannel/\"\n                            },\n                            {\n                                \"name\": \"kube-api-access-2hnlf\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n                            }\n                        ],\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\",\n                        \"securityContext\": {\n                            \"capabilities\": {\n                                \"add\": [\n                                    \"NET_ADMIN\",\n                                    \"NET_RAW\"\n                                ]\n                            },\n                            \"privileged\": false\n                        }\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"ClusterFirst\",\n                \"serviceAccountName\": \"flannel\",\n                \"serviceAccount\": \"flannel\",\n                \"nodeName\": \"ip-172-20-40-186.eu-west-3.compute.internal\",\n   
             \"hostNetwork\": true,\n                \"securityContext\": {},\n                \"affinity\": {\n                    \"nodeAffinity\": {\n                        \"requiredDuringSchedulingIgnoredDuringExecution\": {\n                            \"nodeSelectorTerms\": [\n                                {\n                                    \"matchFields\": [\n                                        {\n                                            \"key\": \"metadata.name\",\n                                            \"operator\": \"In\",\n                                            \"values\": [\n                                                \"ip-172-20-40-186.eu-west-3.compute.internal\"\n                                            ]\n                                        }\n                                    ]\n                                }\n                            ]\n                        }\n                    }\n                },\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"operator\": \"Exists\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/not-ready\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/unreachable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/disk-pressure\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/memory-pressure\",\n                        \"operator\": \"Exists\",\n                     
   \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/pid-pressure\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/unschedulable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/network-unavailable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    }\n                ],\n                \"priorityClassName\": \"system-node-critical\",\n                \"priority\": 2000001000,\n                \"enableServiceLinks\": true,\n                \"preemptionPolicy\": \"PreemptLowerPriority\"\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-05-25T16:17:28Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-05-25T16:17:59Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-05-25T16:17:59Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        
\"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-05-25T16:17:21Z\"\n                    }\n                ],\n                \"hostIP\": \"172.20.40.186\",\n                \"podIP\": \"172.20.40.186\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"172.20.40.186\"\n                    }\n                ],\n                \"startTime\": \"2021-05-25T16:17:21Z\",\n                \"initContainerStatuses\": [\n                    {\n                        \"name\": \"install-cni\",\n                        \"state\": {\n                            \"terminated\": {\n                                \"exitCode\": 0,\n                                \"reason\": \"Completed\",\n                                \"startedAt\": \"2021-05-25T16:17:27Z\",\n                                \"finishedAt\": \"2021-05-25T16:17:27Z\",\n                                \"containerID\": \"docker://09aee9f44a4971d90a085eeca559ec41faa149d09cd4fbe3d19c695eb9b6d6b8\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"quay.io/coreos/flannel:v0.13.0\",\n                        \"imageID\": \"docker-pullable://quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8\",\n                        \"containerID\": \"docker://09aee9f44a4971d90a085eeca559ec41faa149d09cd4fbe3d19c695eb9b6d6b8\"\n                    }\n                ],\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"kube-flannel\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2021-05-25T16:17:58Z\"\n                            }\n                        },\n  
                      \"lastState\": {\n                            \"terminated\": {\n                                \"exitCode\": 1,\n                                \"reason\": \"Error\",\n                                \"startedAt\": \"2021-05-25T16:17:28Z\",\n                                \"finishedAt\": \"2021-05-25T16:17:58Z\",\n                                \"containerID\": \"docker://64d142dbd0f719d8aee9485dd512959eb8ad5d6083abd64e222d9736078381a8\"\n                            }\n                        },\n                        \"ready\": true,\n                        \"restartCount\": 1,\n                        \"image\": \"quay.io/coreos/flannel:v0.13.0\",\n                        \"imageID\": \"docker-pullable://quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8\",\n                        \"containerID\": \"docker://0d921f5c3379b5576e1c247148fee0681e91da334c1c49c704edc3f986b1f0f0\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Burstable\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds-r8x62\",\n                \"generateName\": \"kube-flannel-ds-\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"90ba7980-9e0a-4c8e-9054-82da3c90963f\",\n                \"resourceVersion\": \"740\",\n                \"creationTimestamp\": \"2021-05-25T16:17:30Z\",\n                \"labels\": {\n                    \"app\": \"flannel\",\n                    \"controller-revision-hash\": \"7f578449d6\",\n                    \"pod-template-generation\": \"1\",\n                    \"tier\": \"node\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"apps/v1\",\n                        \"kind\": \"DaemonSet\",\n                        \"name\": \"kube-flannel-ds\",\n              
          \"uid\": \"edbe1f66-ffd4-4ea1-ab84-e523f094f198\",\n                        \"controller\": true,\n                        \"blockOwnerDeletion\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"run\",\n                        \"hostPath\": {\n                            \"path\": \"/run/flannel\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"dev-net\",\n                        \"hostPath\": {\n                            \"path\": \"/dev/net\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"cni\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/cni/net.d\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"flannel-cfg\",\n                        \"configMap\": {\n                            \"name\": \"kube-flannel-cfg\",\n                            \"defaultMode\": 420\n                        }\n                    },\n                    {\n                        \"name\": \"kube-api-access-rf5gl\",\n                        \"projected\": {\n                            \"sources\": [\n                                {\n                                    \"serviceAccountToken\": {\n                                        \"expirationSeconds\": 3607,\n                                        \"path\": \"token\"\n                                    }\n                                },\n                                {\n                                    \"configMap\": {\n                                        \"name\": \"kube-root-ca.crt\",\n                       
                 \"items\": [\n                                            {\n                                                \"key\": \"ca.crt\",\n                                                \"path\": \"ca.crt\"\n                                            }\n                                        ]\n                                    }\n                                },\n                                {\n                                    \"downwardAPI\": {\n                                        \"items\": [\n                                            {\n                                                \"path\": \"namespace\",\n                                                \"fieldRef\": {\n                                                    \"apiVersion\": \"v1\",\n                                                    \"fieldPath\": \"metadata.namespace\"\n                                                }\n                                            }\n                                        ]\n                                    }\n                                }\n                            ],\n                            \"defaultMode\": 420\n                        }\n                    }\n                ],\n                \"initContainers\": [\n                    {\n                        \"name\": \"install-cni\",\n                        \"image\": \"quay.io/coreos/flannel:v0.13.0\",\n                        \"command\": [\n                            \"cp\"\n                        ],\n                        \"args\": [\n                            \"-f\",\n                            \"/etc/kube-flannel/cni-conf.json\",\n                            \"/etc/cni/net.d/10-flannel.conflist\"\n                        ],\n                        \"resources\": {},\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"cni\",\n                                \"mountPath\": 
\"/etc/cni/net.d\"\n                            },\n                            {\n                                \"name\": \"flannel-cfg\",\n                                \"mountPath\": \"/etc/kube-flannel/\"\n                            },\n                            {\n                                \"name\": \"kube-api-access-rf5gl\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n                            }\n                        ],\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\"\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"kube-flannel\",\n                        \"image\": \"quay.io/coreos/flannel:v0.13.0\",\n                        \"command\": [\n                            \"/opt/bin/flanneld\"\n                        ],\n                        \"args\": [\n                            \"--ip-masq\",\n                            \"--kube-subnet-mgr\",\n                            \"--iptables-resync=5\"\n                        ],\n                        \"env\": [\n                            {\n                                \"name\": \"POD_NAME\",\n                                \"valueFrom\": {\n                                    \"fieldRef\": {\n                                        \"apiVersion\": \"v1\",\n                                        \"fieldPath\": \"metadata.name\"\n                                    }\n                                }\n                            },\n                            {\n                                \"name\": \"POD_NAMESPACE\",\n                                \"valueFrom\": {\n                                    \"fieldRef\": {\n         
                               \"apiVersion\": \"v1\",\n                                        \"fieldPath\": \"metadata.namespace\"\n                                    }\n                                }\n                            }\n                        ],\n                        \"resources\": {\n                            \"limits\": {\n                                \"memory\": \"100Mi\"\n                            },\n                            \"requests\": {\n                                \"cpu\": \"100m\",\n                                \"memory\": \"100Mi\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"run\",\n                                \"mountPath\": \"/run/flannel\"\n                            },\n                            {\n                                \"name\": \"dev-net\",\n                                \"mountPath\": \"/dev/net\"\n                            },\n                            {\n                                \"name\": \"flannel-cfg\",\n                                \"mountPath\": \"/etc/kube-flannel/\"\n                            },\n                            {\n                                \"name\": \"kube-api-access-rf5gl\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n                            }\n                        ],\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\",\n                        \"securityContext\": {\n                            \"capabilities\": {\n                                \"add\": [\n                                    \"NET_ADMIN\",\n                                    
\"NET_RAW\"\n                                ]\n                            },\n                            \"privileged\": false\n                        }\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"ClusterFirst\",\n                \"serviceAccountName\": \"flannel\",\n                \"serviceAccount\": \"flannel\",\n                \"nodeName\": \"ip-172-20-60-66.eu-west-3.compute.internal\",\n                \"hostNetwork\": true,\n                \"securityContext\": {},\n                \"affinity\": {\n                    \"nodeAffinity\": {\n                        \"requiredDuringSchedulingIgnoredDuringExecution\": {\n                            \"nodeSelectorTerms\": [\n                                {\n                                    \"matchFields\": [\n                                        {\n                                            \"key\": \"metadata.name\",\n                                            \"operator\": \"In\",\n                                            \"values\": [\n                                                \"ip-172-20-60-66.eu-west-3.compute.internal\"\n                                            ]\n                                        }\n                                    ]\n                                }\n                            ]\n                        }\n                    }\n                },\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"operator\": \"Exists\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/not-ready\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    },\n                    {\n                        \"key\": 
\"node.kubernetes.io/unreachable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/disk-pressure\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/memory-pressure\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/pid-pressure\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/unschedulable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/network-unavailable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    }\n                ],\n                \"priorityClassName\": \"system-node-critical\",\n                \"priority\": 2000001000,\n                \"enableServiceLinks\": true,\n                \"preemptionPolicy\": \"PreemptLowerPriority\"\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-05-25T16:17:37Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                       
 \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-05-25T16:17:38Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-05-25T16:17:38Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-05-25T16:17:30Z\"\n                    }\n                ],\n                \"hostIP\": \"172.20.60.66\",\n                \"podIP\": \"172.20.60.66\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"172.20.60.66\"\n                    }\n                ],\n                \"startTime\": \"2021-05-25T16:17:31Z\",\n                \"initContainerStatuses\": [\n                    {\n                        \"name\": \"install-cni\",\n                        \"state\": {\n                            \"terminated\": {\n                                \"exitCode\": 0,\n                                \"reason\": \"Completed\",\n                                \"startedAt\": \"2021-05-25T16:17:37Z\",\n                                \"finishedAt\": \"2021-05-25T16:17:37Z\",\n                                \"containerID\": \"docker://ae729935af297c223df308921ccc48c337270bfd56f049e8335933b9b93d8629\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"quay.io/coreos/flannel:v0.13.0\",\n                        \"imageID\": 
\"docker-pullable://quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8\",\n                        \"containerID\": \"docker://ae729935af297c223df308921ccc48c337270bfd56f049e8335933b9b93d8629\"\n                    }\n                ],\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"kube-flannel\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2021-05-25T16:17:37Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"quay.io/coreos/flannel:v0.13.0\",\n                        \"imageID\": \"docker-pullable://quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8\",\n                        \"containerID\": \"docker://f3de48729c6b42b41a1b0b3cdc83d86023cbd30589d507af5af83c4a773d1b9a\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Burstable\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-flannel-ds-vk4m7\",\n                \"generateName\": \"kube-flannel-ds-\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"608db391-49a9-44a4-8cbe-ca6aca5ccbe4\",\n                \"resourceVersion\": \"751\",\n                \"creationTimestamp\": \"2021-05-25T16:17:32Z\",\n                \"labels\": {\n                    \"app\": \"flannel\",\n                    \"controller-revision-hash\": \"7f578449d6\",\n                    \"pod-template-generation\": \"1\",\n                    \"tier\": \"node\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": 
\"apps/v1\",\n                        \"kind\": \"DaemonSet\",\n                        \"name\": \"kube-flannel-ds\",\n                        \"uid\": \"edbe1f66-ffd4-4ea1-ab84-e523f094f198\",\n                        \"controller\": true,\n                        \"blockOwnerDeletion\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"run\",\n                        \"hostPath\": {\n                            \"path\": \"/run/flannel\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"dev-net\",\n                        \"hostPath\": {\n                            \"path\": \"/dev/net\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"cni\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/cni/net.d\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"flannel-cfg\",\n                        \"configMap\": {\n                            \"name\": \"kube-flannel-cfg\",\n                            \"defaultMode\": 420\n                        }\n                    },\n                    {\n                        \"name\": \"kube-api-access-jc4c5\",\n                        \"projected\": {\n                            \"sources\": [\n                                {\n                                    \"serviceAccountToken\": {\n                                        \"expirationSeconds\": 3607,\n                                        \"path\": \"token\"\n                                    }\n                                },\n                                {\n                
                    \"configMap\": {\n                                        \"name\": \"kube-root-ca.crt\",\n                                        \"items\": [\n                                            {\n                                                \"key\": \"ca.crt\",\n                                                \"path\": \"ca.crt\"\n                                            }\n                                        ]\n                                    }\n                                },\n                                {\n                                    \"downwardAPI\": {\n                                        \"items\": [\n                                            {\n                                                \"path\": \"namespace\",\n                                                \"fieldRef\": {\n                                                    \"apiVersion\": \"v1\",\n                                                    \"fieldPath\": \"metadata.namespace\"\n                                                }\n                                            }\n                                        ]\n                                    }\n                                }\n                            ],\n                            \"defaultMode\": 420\n                        }\n                    }\n                ],\n                \"initContainers\": [\n                    {\n                        \"name\": \"install-cni\",\n                        \"image\": \"quay.io/coreos/flannel:v0.13.0\",\n                        \"command\": [\n                            \"cp\"\n                        ],\n                        \"args\": [\n                            \"-f\",\n                            \"/etc/kube-flannel/cni-conf.json\",\n                            \"/etc/cni/net.d/10-flannel.conflist\"\n                        ],\n                        \"resources\": {},\n                        \"volumeMounts\": [\n  
                          {\n                                \"name\": \"cni\",\n                                \"mountPath\": \"/etc/cni/net.d\"\n                            },\n                            {\n                                \"name\": \"flannel-cfg\",\n                                \"mountPath\": \"/etc/kube-flannel/\"\n                            },\n                            {\n                                \"name\": \"kube-api-access-jc4c5\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n                            }\n                        ],\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\"\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"kube-flannel\",\n                        \"image\": \"quay.io/coreos/flannel:v0.13.0\",\n                        \"command\": [\n                            \"/opt/bin/flanneld\"\n                        ],\n                        \"args\": [\n                            \"--ip-masq\",\n                            \"--kube-subnet-mgr\",\n                            \"--iptables-resync=5\"\n                        ],\n                        \"env\": [\n                            {\n                                \"name\": \"POD_NAME\",\n                                \"valueFrom\": {\n                                    \"fieldRef\": {\n                                        \"apiVersion\": \"v1\",\n                                        \"fieldPath\": \"metadata.name\"\n                                    }\n                                }\n                            },\n                            {\n                                \"name\": 
\"POD_NAMESPACE\",\n                                \"valueFrom\": {\n                                    \"fieldRef\": {\n                                        \"apiVersion\": \"v1\",\n                                        \"fieldPath\": \"metadata.namespace\"\n                                    }\n                                }\n                            }\n                        ],\n                        \"resources\": {\n                            \"limits\": {\n                                \"memory\": \"100Mi\"\n                            },\n                            \"requests\": {\n                                \"cpu\": \"100m\",\n                                \"memory\": \"100Mi\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"run\",\n                                \"mountPath\": \"/run/flannel\"\n                            },\n                            {\n                                \"name\": \"dev-net\",\n                                \"mountPath\": \"/dev/net\"\n                            },\n                            {\n                                \"name\": \"flannel-cfg\",\n                                \"mountPath\": \"/etc/kube-flannel/\"\n                            },\n                            {\n                                \"name\": \"kube-api-access-jc4c5\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n                            }\n                        ],\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\",\n                        \"securityContext\": {\n                            \"capabilities\": {\n  
                              \"add\": [\n                                    \"NET_ADMIN\",\n                                    \"NET_RAW\"\n                                ]\n                            },\n                            \"privileged\": false\n                        }\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"ClusterFirst\",\n                \"serviceAccountName\": \"flannel\",\n                \"serviceAccount\": \"flannel\",\n                \"nodeName\": \"ip-172-20-54-92.eu-west-3.compute.internal\",\n                \"hostNetwork\": true,\n                \"securityContext\": {},\n                \"affinity\": {\n                    \"nodeAffinity\": {\n                        \"requiredDuringSchedulingIgnoredDuringExecution\": {\n                            \"nodeSelectorTerms\": [\n                                {\n                                    \"matchFields\": [\n                                        {\n                                            \"key\": \"metadata.name\",\n                                            \"operator\": \"In\",\n                                            \"values\": [\n                                                \"ip-172-20-54-92.eu-west-3.compute.internal\"\n                                            ]\n                                        }\n                                    ]\n                                }\n                            ]\n                        }\n                    }\n                },\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"operator\": \"Exists\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/not-ready\",\n                        \"operator\": \"Exists\",\n               
         \"effect\": \"NoExecute\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/unreachable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/disk-pressure\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/memory-pressure\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/pid-pressure\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/unschedulable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/network-unavailable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    }\n                ],\n                \"priorityClassName\": \"system-node-critical\",\n                \"priority\": 2000001000,\n                \"enableServiceLinks\": true,\n                \"preemptionPolicy\": \"PreemptLowerPriority\"\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": 
\"2021-05-25T16:17:38Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-05-25T16:17:39Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-05-25T16:17:39Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-05-25T16:17:32Z\"\n                    }\n                ],\n                \"hostIP\": \"172.20.54.92\",\n                \"podIP\": \"172.20.54.92\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"172.20.54.92\"\n                    }\n                ],\n                \"startTime\": \"2021-05-25T16:17:32Z\",\n                \"initContainerStatuses\": [\n                    {\n                        \"name\": \"install-cni\",\n                        \"state\": {\n                            \"terminated\": {\n                                \"exitCode\": 0,\n                                \"reason\": \"Completed\",\n                                \"startedAt\": \"2021-05-25T16:17:38Z\",\n                                \"finishedAt\": \"2021-05-25T16:17:38Z\",\n                                \"containerID\": \"docker://07ccc7e54aa0c7db167bbb9ebfd5d975b8c21960a64812c694c022f3a60c441f\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": 
\"quay.io/coreos/flannel:v0.13.0\",\n                        \"imageID\": \"docker-pullable://quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8\",\n                        \"containerID\": \"docker://07ccc7e54aa0c7db167bbb9ebfd5d975b8c21960a64812c694c022f3a60c441f\"\n                    }\n                ],\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"kube-flannel\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2021-05-25T16:17:38Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"quay.io/coreos/flannel:v0.13.0\",\n                        \"imageID\": \"docker-pullable://quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8\",\n                        \"containerID\": \"docker://101bacca97cb0d5f8b68b3018f4aad4f71a02b6ee6c8d5c3c24dea2620b82c37\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Burstable\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-ip-172-20-40-186.eu-west-3.compute.internal\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"1405fef7-481f-462a-9935-e85a9924dfe8\",\n                \"resourceVersion\": \"844\",\n                \"creationTimestamp\": \"2021-05-25T16:17:47Z\",\n                \"labels\": {\n                    \"k8s-app\": \"kube-proxy\",\n                    \"tier\": \"node\"\n                },\n                \"annotations\": {\n                    \"kubernetes.io/config.hash\": \"dd2bb0eda31c35b976b408cc90aff612\",\n                    
\"kubernetes.io/config.mirror\": \"dd2bb0eda31c35b976b408cc90aff612\",\n                    \"kubernetes.io/config.seen\": \"2021-05-25T16:16:26.827704108Z\",\n                    \"kubernetes.io/config.source\": \"file\",\n                    \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"v1\",\n                        \"kind\": \"Node\",\n                        \"name\": \"ip-172-20-40-186.eu-west-3.compute.internal\",\n                        \"uid\": \"563f57ed-84c9-484c-9884-8d94874389be\",\n                        \"controller\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"logfile\",\n                        \"hostPath\": {\n                            \"path\": \"/var/log/kube-proxy.log\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"kubeconfig\",\n                        \"hostPath\": {\n                            \"path\": \"/var/lib/kube-proxy/kubeconfig\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"modules\",\n                        \"hostPath\": {\n                            \"path\": \"/lib/modules\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"ssl-certs-hosts\",\n                        \"hostPath\": {\n                            \"path\": \"/usr/share/ca-certificates\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"iptableslock\",\n                  
      \"hostPath\": {\n                            \"path\": \"/run/xtables.lock\",\n                            \"type\": \"FileOrCreate\"\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"kube-proxy\",\n                        \"image\": \"k8s.gcr.io/kube-proxy-amd64:v1.21.1\",\n                        \"command\": [\n                            \"/usr/local/bin/kube-proxy\"\n                        ],\n                        \"args\": [\n                            \"--cluster-cidr=100.96.0.0/11\",\n                            \"--conntrack-max-per-core=131072\",\n                            \"--hostname-override=ip-172-20-40-186.eu-west-3.compute.internal\",\n                            \"--kubeconfig=/var/lib/kube-proxy/kubeconfig\",\n                            \"--master=https://api.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io\",\n                            \"--oom-score-adj=-998\",\n                            \"--v=2\",\n                            \"--logtostderr=false\",\n                            \"--alsologtostderr\",\n                            \"--log-file=/var/log/kube-proxy.log\"\n                        ],\n                        \"resources\": {\n                            \"requests\": {\n                                \"cpu\": \"100m\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"logfile\",\n                                \"mountPath\": \"/var/log/kube-proxy.log\"\n                            },\n                            {\n                                \"name\": \"kubeconfig\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/lib/kube-proxy/kubeconfig\"\n                            },\n                            {\n     
                           \"name\": \"modules\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/lib/modules\"\n                            },\n                            {\n                                \"name\": \"ssl-certs-hosts\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/etc/ssl/certs\"\n                            },\n                            {\n                                \"name\": \"iptableslock\",\n                                \"mountPath\": \"/run/xtables.lock\"\n                            }\n                        ],\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\",\n                        \"securityContext\": {\n                            \"privileged\": true\n                        }\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"ClusterFirst\",\n                \"nodeName\": \"ip-172-20-40-186.eu-west-3.compute.internal\",\n                \"hostNetwork\": true,\n                \"securityContext\": {},\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"key\": \"CriticalAddonsOnly\",\n                        \"operator\": \"Exists\"\n                    },\n                    {\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    }\n                ],\n                \"priorityClassName\": \"system-node-critical\",\n                \"priority\": 2000001000,\n                \"enableServiceLinks\": true,\n                \"preemptionPolicy\": \"PreemptLowerPriority\"\n 
           },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-05-25T16:16:27Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-05-25T16:16:29Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-05-25T16:16:29Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-05-25T16:16:27Z\"\n                    }\n                ],\n                \"hostIP\": \"172.20.40.186\",\n                \"podIP\": \"172.20.40.186\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"172.20.40.186\"\n                    }\n                ],\n                \"startTime\": \"2021-05-25T16:16:27Z\",\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"kube-proxy\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2021-05-25T16:16:28Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                     
   \"image\": \"k8s.gcr.io/kube-proxy-amd64:v1.21.1\",\n                        \"imageID\": \"docker://sha256:4359e752b5961a66ad2e9cdb4aaa57e2782d2dd2f05a0cf76946f5e5caa0fd88\",\n                        \"containerID\": \"docker://ca0e5ab8aa8cd42c9baddd9217d03cd5db8439ac59b3ce133c0be5de39455819\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Burstable\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-ip-172-20-44-17.eu-west-3.compute.internal\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"cad6001e-3a32-4043-8cf4-05135eead7a1\",\n                \"resourceVersion\": \"504\",\n                \"creationTimestamp\": \"2021-05-25T16:16:15Z\",\n                \"labels\": {\n                    \"k8s-app\": \"kube-proxy\",\n                    \"tier\": \"node\"\n                },\n                \"annotations\": {\n                    \"kubernetes.io/config.hash\": \"466af20b2a4057e021405a51dab7d35b\",\n                    \"kubernetes.io/config.mirror\": \"466af20b2a4057e021405a51dab7d35b\",\n                    \"kubernetes.io/config.seen\": \"2021-05-25T16:14:49.696714547Z\",\n                    \"kubernetes.io/config.source\": \"file\",\n                    \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"v1\",\n                        \"kind\": \"Node\",\n                        \"name\": \"ip-172-20-44-17.eu-west-3.compute.internal\",\n                        \"uid\": \"69c81210-6a3d-4b0b-801a-61a9e470b0c8\",\n                        \"controller\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"logfile\",\n                        
\"hostPath\": {\n                            \"path\": \"/var/log/kube-proxy.log\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"kubeconfig\",\n                        \"hostPath\": {\n                            \"path\": \"/var/lib/kube-proxy/kubeconfig\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"modules\",\n                        \"hostPath\": {\n                            \"path\": \"/lib/modules\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"ssl-certs-hosts\",\n                        \"hostPath\": {\n                            \"path\": \"/usr/share/ca-certificates\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"iptableslock\",\n                        \"hostPath\": {\n                            \"path\": \"/run/xtables.lock\",\n                            \"type\": \"FileOrCreate\"\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"kube-proxy\",\n                        \"image\": \"k8s.gcr.io/kube-proxy-amd64:v1.21.1\",\n                        \"command\": [\n                            \"/usr/local/bin/kube-proxy\"\n                        ],\n                        \"args\": [\n                            \"--cluster-cidr=100.96.0.0/11\",\n                            \"--conntrack-max-per-core=131072\",\n                            \"--hostname-override=ip-172-20-44-17.eu-west-3.compute.internal\",\n                            \"--kubeconfig=/var/lib/kube-proxy/kubeconfig\",\n                          
  \"--master=https://127.0.0.1\",\n                            \"--oom-score-adj=-998\",\n                            \"--v=2\",\n                            \"--logtostderr=false\",\n                            \"--alsologtostderr\",\n                            \"--log-file=/var/log/kube-proxy.log\"\n                        ],\n                        \"resources\": {\n                            \"requests\": {\n                                \"cpu\": \"100m\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"logfile\",\n                                \"mountPath\": \"/var/log/kube-proxy.log\"\n                            },\n                            {\n                                \"name\": \"kubeconfig\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/lib/kube-proxy/kubeconfig\"\n                            },\n                            {\n                                \"name\": \"modules\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/lib/modules\"\n                            },\n                            {\n                                \"name\": \"ssl-certs-hosts\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/etc/ssl/certs\"\n                            },\n                            {\n                                \"name\": \"iptableslock\",\n                                \"mountPath\": \"/run/xtables.lock\"\n                            }\n                        ],\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\",\n                        \"securityContext\": {\n            
                \"privileged\": true\n                        }\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"ClusterFirst\",\n                \"nodeName\": \"ip-172-20-44-17.eu-west-3.compute.internal\",\n                \"hostNetwork\": true,\n                \"securityContext\": {},\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"key\": \"CriticalAddonsOnly\",\n                        \"operator\": \"Exists\"\n                    },\n                    {\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    }\n                ],\n                \"priorityClassName\": \"system-node-critical\",\n                \"priority\": 2000001000,\n                \"enableServiceLinks\": true,\n                \"preemptionPolicy\": \"PreemptLowerPriority\"\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-05-25T16:14:50Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-05-25T16:15:07Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-05-25T16:15:07Z\"\n                    },\n            
        {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-05-25T16:14:50Z\"\n                    }\n                ],\n                \"hostIP\": \"172.20.44.17\",\n                \"podIP\": \"172.20.44.17\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"172.20.44.17\"\n                    }\n                ],\n                \"startTime\": \"2021-05-25T16:14:50Z\",\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"kube-proxy\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2021-05-25T16:15:05Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"k8s.gcr.io/kube-proxy-amd64:v1.21.1\",\n                        \"imageID\": \"docker://sha256:4359e752b5961a66ad2e9cdb4aaa57e2782d2dd2f05a0cf76946f5e5caa0fd88\",\n                        \"containerID\": \"docker://fce928b6ec76deae5a0b636825595c3133cf7b76166928ab3602e8fb773b93e3\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Burstable\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-ip-172-20-48-192.eu-west-3.compute.internal\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"b5ee6935-b27f-4ac3-bd19-9f6cb2280506\",\n                \"resourceVersion\": \"865\",\n                \"creationTimestamp\": \"2021-05-25T16:17:56Z\",\n                \"labels\": {\n                    \"k8s-app\": \"kube-proxy\",\n                    \"tier\": \"node\"\n    
            },\n                \"annotations\": {\n                    \"kubernetes.io/config.hash\": \"19370eb69d07a880de62846bdce6e20a\",\n                    \"kubernetes.io/config.mirror\": \"19370eb69d07a880de62846bdce6e20a\",\n                    \"kubernetes.io/config.seen\": \"2021-05-25T16:16:31.185561228Z\",\n                    \"kubernetes.io/config.source\": \"file\",\n                    \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"v1\",\n                        \"kind\": \"Node\",\n                        \"name\": \"ip-172-20-48-192.eu-west-3.compute.internal\",\n                        \"uid\": \"1bb49170-eda7-4a2e-bd7a-45d44df4d31a\",\n                        \"controller\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"logfile\",\n                        \"hostPath\": {\n                            \"path\": \"/var/log/kube-proxy.log\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"kubeconfig\",\n                        \"hostPath\": {\n                            \"path\": \"/var/lib/kube-proxy/kubeconfig\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"modules\",\n                        \"hostPath\": {\n                            \"path\": \"/lib/modules\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"ssl-certs-hosts\",\n                        \"hostPath\": {\n                            \"path\": \"/usr/share/ca-certificates\",\n                           
 \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"iptableslock\",\n                        \"hostPath\": {\n                            \"path\": \"/run/xtables.lock\",\n                            \"type\": \"FileOrCreate\"\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"kube-proxy\",\n                        \"image\": \"k8s.gcr.io/kube-proxy-amd64:v1.21.1\",\n                        \"command\": [\n                            \"/usr/local/bin/kube-proxy\"\n                        ],\n                        \"args\": [\n                            \"--cluster-cidr=100.96.0.0/11\",\n                            \"--conntrack-max-per-core=131072\",\n                            \"--hostname-override=ip-172-20-48-192.eu-west-3.compute.internal\",\n                            \"--kubeconfig=/var/lib/kube-proxy/kubeconfig\",\n                            \"--master=https://api.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io\",\n                            \"--oom-score-adj=-998\",\n                            \"--v=2\",\n                            \"--logtostderr=false\",\n                            \"--alsologtostderr\",\n                            \"--log-file=/var/log/kube-proxy.log\"\n                        ],\n                        \"resources\": {\n                            \"requests\": {\n                                \"cpu\": \"100m\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"logfile\",\n                                \"mountPath\": \"/var/log/kube-proxy.log\"\n                            },\n                            {\n                                \"name\": \"kubeconfig\",\n                                
\"readOnly\": true,\n                                \"mountPath\": \"/var/lib/kube-proxy/kubeconfig\"\n                            },\n                            {\n                                \"name\": \"modules\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/lib/modules\"\n                            },\n                            {\n                                \"name\": \"ssl-certs-hosts\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/etc/ssl/certs\"\n                            },\n                            {\n                                \"name\": \"iptableslock\",\n                                \"mountPath\": \"/run/xtables.lock\"\n                            }\n                        ],\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\",\n                        \"securityContext\": {\n                            \"privileged\": true\n                        }\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"ClusterFirst\",\n                \"nodeName\": \"ip-172-20-48-192.eu-west-3.compute.internal\",\n                \"hostNetwork\": true,\n                \"securityContext\": {},\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"key\": \"CriticalAddonsOnly\",\n                        \"operator\": \"Exists\"\n                    },\n                    {\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    }\n                ],\n                \"priorityClassName\": 
\"system-node-critical\",\n                \"priority\": 2000001000,\n                \"enableServiceLinks\": true,\n                \"preemptionPolicy\": \"PreemptLowerPriority\"\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-05-25T16:16:31Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-05-25T16:16:33Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-05-25T16:16:33Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-05-25T16:16:31Z\"\n                    }\n                ],\n                \"hostIP\": \"172.20.48.192\",\n                \"podIP\": \"172.20.48.192\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"172.20.48.192\"\n                    }\n                ],\n                \"startTime\": \"2021-05-25T16:16:31Z\",\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"kube-proxy\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2021-05-25T16:16:33Z\"\n                            
}\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"k8s.gcr.io/kube-proxy-amd64:v1.21.1\",\n                        \"imageID\": \"docker://sha256:4359e752b5961a66ad2e9cdb4aaa57e2782d2dd2f05a0cf76946f5e5caa0fd88\",\n                        \"containerID\": \"docker://e6cb0e4d3a555247e7b92fa70f3ab91f187c8b4b0304add91e8338fd19e1a08e\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Burstable\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-ip-172-20-54-92.eu-west-3.compute.internal\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"92402818-312b-44c3-82dd-b502d7ec7ca5\",\n                \"resourceVersion\": \"868\",\n                \"creationTimestamp\": \"2021-05-25T16:17:54Z\",\n                \"labels\": {\n                    \"k8s-app\": \"kube-proxy\",\n                    \"tier\": \"node\"\n                },\n                \"annotations\": {\n                    \"kubernetes.io/config.hash\": \"65bce40e337fc2b2bd50709aab921732\",\n                    \"kubernetes.io/config.mirror\": \"65bce40e337fc2b2bd50709aab921732\",\n                    \"kubernetes.io/config.seen\": \"2021-05-25T16:16:31.371706645Z\",\n                    \"kubernetes.io/config.source\": \"file\",\n                    \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"v1\",\n                        \"kind\": \"Node\",\n                        \"name\": \"ip-172-20-54-92.eu-west-3.compute.internal\",\n                        \"uid\": \"50e2e33a-1f01-486d-ac46-8b6be79e3a7e\",\n                        \"controller\": true\n                    }\n           
     ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"logfile\",\n                        \"hostPath\": {\n                            \"path\": \"/var/log/kube-proxy.log\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"kubeconfig\",\n                        \"hostPath\": {\n                            \"path\": \"/var/lib/kube-proxy/kubeconfig\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"modules\",\n                        \"hostPath\": {\n                            \"path\": \"/lib/modules\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"ssl-certs-hosts\",\n                        \"hostPath\": {\n                            \"path\": \"/usr/share/ca-certificates\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"iptableslock\",\n                        \"hostPath\": {\n                            \"path\": \"/run/xtables.lock\",\n                            \"type\": \"FileOrCreate\"\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"kube-proxy\",\n                        \"image\": \"k8s.gcr.io/kube-proxy-amd64:v1.21.1\",\n                        \"command\": [\n                            \"/usr/local/bin/kube-proxy\"\n                        ],\n                        \"args\": [\n                            \"--cluster-cidr=100.96.0.0/11\",\n                            \"--conntrack-max-per-core=131072\",\n                         
   \"--hostname-override=ip-172-20-54-92.eu-west-3.compute.internal\",\n                            \"--kubeconfig=/var/lib/kube-proxy/kubeconfig\",\n                            \"--master=https://api.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io\",\n                            \"--oom-score-adj=-998\",\n                            \"--v=2\",\n                            \"--logtostderr=false\",\n                            \"--alsologtostderr\",\n                            \"--log-file=/var/log/kube-proxy.log\"\n                        ],\n                        \"resources\": {\n                            \"requests\": {\n                                \"cpu\": \"100m\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"logfile\",\n                                \"mountPath\": \"/var/log/kube-proxy.log\"\n                            },\n                            {\n                                \"name\": \"kubeconfig\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/lib/kube-proxy/kubeconfig\"\n                            },\n                            {\n                                \"name\": \"modules\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/lib/modules\"\n                            },\n                            {\n                                \"name\": \"ssl-certs-hosts\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/etc/ssl/certs\"\n                            },\n                            {\n                                \"name\": \"iptableslock\",\n                                \"mountPath\": \"/run/xtables.lock\"\n                            }\n                        ],\n                        
\"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\",\n                        \"securityContext\": {\n                            \"privileged\": true\n                        }\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"ClusterFirst\",\n                \"nodeName\": \"ip-172-20-54-92.eu-west-3.compute.internal\",\n                \"hostNetwork\": true,\n                \"securityContext\": {},\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"key\": \"CriticalAddonsOnly\",\n                        \"operator\": \"Exists\"\n                    },\n                    {\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    }\n                ],\n                \"priorityClassName\": \"system-node-critical\",\n                \"priority\": 2000001000,\n                \"enableServiceLinks\": true,\n                \"preemptionPolicy\": \"PreemptLowerPriority\"\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-05-25T16:16:31Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-05-25T16:16:33Z\"\n                    },\n                    {\n                   
     \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-05-25T16:16:33Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-05-25T16:16:31Z\"\n                    }\n                ],\n                \"hostIP\": \"172.20.54.92\",\n                \"podIP\": \"172.20.54.92\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"172.20.54.92\"\n                    }\n                ],\n                \"startTime\": \"2021-05-25T16:16:31Z\",\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"kube-proxy\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2021-05-25T16:16:33Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"k8s.gcr.io/kube-proxy-amd64:v1.21.1\",\n                        \"imageID\": \"docker://sha256:4359e752b5961a66ad2e9cdb4aaa57e2782d2dd2f05a0cf76946f5e5caa0fd88\",\n                        \"containerID\": \"docker://8d9e4791bde051abfe221dcdb590eca7b7f0efeccf28d8447bb540b54bb97975\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Burstable\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-ip-172-20-60-66.eu-west-3.compute.internal\",\n                \"namespace\": \"kube-system\",\n                \"uid\": 
\"c0bd2b3d-4b1f-41b2-9cbf-267c63873fad\",\n                \"resourceVersion\": \"860\",\n                \"creationTimestamp\": \"2021-05-25T16:17:50Z\",\n                \"labels\": {\n                    \"k8s-app\": \"kube-proxy\",\n                    \"tier\": \"node\"\n                },\n                \"annotations\": {\n                    \"kubernetes.io/config.hash\": \"b32ae3006edac41f054970ca45c6514e\",\n                    \"kubernetes.io/config.mirror\": \"b32ae3006edac41f054970ca45c6514e\",\n                    \"kubernetes.io/config.seen\": \"2021-05-25T16:16:30.014750794Z\",\n                    \"kubernetes.io/config.source\": \"file\",\n                    \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"v1\",\n                        \"kind\": \"Node\",\n                        \"name\": \"ip-172-20-60-66.eu-west-3.compute.internal\",\n                        \"uid\": \"bba4ded3-c7fc-4487-ba5b-323df6f0595b\",\n                        \"controller\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"logfile\",\n                        \"hostPath\": {\n                            \"path\": \"/var/log/kube-proxy.log\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"kubeconfig\",\n                        \"hostPath\": {\n                            \"path\": \"/var/lib/kube-proxy/kubeconfig\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"modules\",\n                        \"hostPath\": {\n                            \"path\": \"/lib/modules\",\n                            
\"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"ssl-certs-hosts\",\n                        \"hostPath\": {\n                            \"path\": \"/usr/share/ca-certificates\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"iptableslock\",\n                        \"hostPath\": {\n                            \"path\": \"/run/xtables.lock\",\n                            \"type\": \"FileOrCreate\"\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"kube-proxy\",\n                        \"image\": \"k8s.gcr.io/kube-proxy-amd64:v1.21.1\",\n                        \"command\": [\n                            \"/usr/local/bin/kube-proxy\"\n                        ],\n                        \"args\": [\n                            \"--cluster-cidr=100.96.0.0/11\",\n                            \"--conntrack-max-per-core=131072\",\n                            \"--hostname-override=ip-172-20-60-66.eu-west-3.compute.internal\",\n                            \"--kubeconfig=/var/lib/kube-proxy/kubeconfig\",\n                            \"--master=https://api.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io\",\n                            \"--oom-score-adj=-998\",\n                            \"--v=2\",\n                            \"--logtostderr=false\",\n                            \"--alsologtostderr\",\n                            \"--log-file=/var/log/kube-proxy.log\"\n                        ],\n                        \"resources\": {\n                            \"requests\": {\n                                \"cpu\": \"100m\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n            
                    \"name\": \"logfile\",\n                                \"mountPath\": \"/var/log/kube-proxy.log\"\n                            },\n                            {\n                                \"name\": \"kubeconfig\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/lib/kube-proxy/kubeconfig\"\n                            },\n                            {\n                                \"name\": \"modules\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/lib/modules\"\n                            },\n                            {\n                                \"name\": \"ssl-certs-hosts\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/etc/ssl/certs\"\n                            },\n                            {\n                                \"name\": \"iptableslock\",\n                                \"mountPath\": \"/run/xtables.lock\"\n                            }\n                        ],\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\",\n                        \"securityContext\": {\n                            \"privileged\": true\n                        }\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"ClusterFirst\",\n                \"nodeName\": \"ip-172-20-60-66.eu-west-3.compute.internal\",\n                \"hostNetwork\": true,\n                \"securityContext\": {},\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"key\": \"CriticalAddonsOnly\",\n                  
      \"operator\": \"Exists\"\n                    },\n                    {\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    }\n                ],\n                \"priorityClassName\": \"system-node-critical\",\n                \"priority\": 2000001000,\n                \"enableServiceLinks\": true,\n                \"preemptionPolicy\": \"PreemptLowerPriority\"\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-05-25T16:16:30Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-05-25T16:16:32Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-05-25T16:16:32Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-05-25T16:16:30Z\"\n                    }\n                ],\n                \"hostIP\": \"172.20.60.66\",\n                \"podIP\": \"172.20.60.66\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"172.20.60.66\"\n                    }\n                ],\n                \"startTime\": \"2021-05-25T16:16:30Z\",\n                \"containerStatuses\": 
[\n                    {\n                        \"name\": \"kube-proxy\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2021-05-25T16:16:31Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"k8s.gcr.io/kube-proxy-amd64:v1.21.1\",\n                        \"imageID\": \"docker://sha256:4359e752b5961a66ad2e9cdb4aaa57e2782d2dd2f05a0cf76946f5e5caa0fd88\",\n                        \"containerID\": \"docker://b7dfcc532322862f41d6ee13e279af54ff1a8544ece97bebdc42093708106f81\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Burstable\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-scheduler-ip-172-20-44-17.eu-west-3.compute.internal\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"230625cd-9fdb-4b83-b94f-b110efac5833\",\n                \"resourceVersion\": \"529\",\n                \"creationTimestamp\": \"2021-05-25T16:16:19Z\",\n                \"labels\": {\n                    \"k8s-app\": \"kube-scheduler\"\n                },\n                \"annotations\": {\n                    \"kubernetes.io/config.hash\": \"e41a67974f6151102605d2ffb0e98fe0\",\n                    \"kubernetes.io/config.mirror\": \"e41a67974f6151102605d2ffb0e98fe0\",\n                    \"kubernetes.io/config.seen\": \"2021-05-25T16:14:49.696727605Z\",\n                    \"kubernetes.io/config.source\": \"file\",\n                    \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"v1\",\n                        \"kind\": \"Node\",\n            
            \"name\": \"ip-172-20-44-17.eu-west-3.compute.internal\",\n                        \"uid\": \"69c81210-6a3d-4b0b-801a-61a9e470b0c8\",\n                        \"controller\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"varlibkubescheduler\",\n                        \"hostPath\": {\n                            \"path\": \"/var/lib/kube-scheduler\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"logfile\",\n                        \"hostPath\": {\n                            \"path\": \"/var/log/kube-scheduler.log\",\n                            \"type\": \"\"\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"kube-scheduler\",\n                        \"image\": \"k8s.gcr.io/kube-scheduler-amd64:v1.21.1\",\n                        \"command\": [\n                            \"/usr/local/bin/kube-scheduler\"\n                        ],\n                        \"args\": [\n                            \"--config=/var/lib/kube-scheduler/config.yaml\",\n                            \"--leader-elect=true\",\n                            \"--v=2\",\n                            \"--logtostderr=false\",\n                            \"--alsologtostderr\",\n                            \"--log-file=/var/log/kube-scheduler.log\"\n                        ],\n                        \"resources\": {\n                            \"requests\": {\n                                \"cpu\": \"100m\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"varlibkubescheduler\",\n                         
       \"readOnly\": true,\n                                \"mountPath\": \"/var/lib/kube-scheduler\"\n                            },\n                            {\n                                \"name\": \"logfile\",\n                                \"mountPath\": \"/var/log/kube-scheduler.log\"\n                            }\n                        ],\n                        \"livenessProbe\": {\n                            \"httpGet\": {\n                                \"path\": \"/healthz\",\n                                \"port\": 10251,\n                                \"host\": \"127.0.0.1\",\n                                \"scheme\": \"HTTP\"\n                            },\n                            \"initialDelaySeconds\": 15,\n                            \"timeoutSeconds\": 15,\n                            \"periodSeconds\": 10,\n                            \"successThreshold\": 1,\n                            \"failureThreshold\": 3\n                        },\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\"\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"ClusterFirst\",\n                \"nodeName\": \"ip-172-20-44-17.eu-west-3.compute.internal\",\n                \"hostNetwork\": true,\n                \"securityContext\": {},\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"key\": \"CriticalAddonsOnly\",\n                        \"operator\": \"Exists\"\n                    },\n                    {\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    }\n                ],\n                
\"priorityClassName\": \"system-cluster-critical\",\n                \"priority\": 2000000000,\n                \"enableServiceLinks\": true,\n                \"preemptionPolicy\": \"PreemptLowerPriority\"\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-05-25T16:14:50Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-05-25T16:15:06Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-05-25T16:15:06Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-05-25T16:14:50Z\"\n                    }\n                ],\n                \"hostIP\": \"172.20.44.17\",\n                \"podIP\": \"172.20.44.17\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"172.20.44.17\"\n                    }\n                ],\n                \"startTime\": \"2021-05-25T16:14:50Z\",\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"kube-scheduler\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2021-05-25T16:15:05Z\"\n   
                         }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"k8s.gcr.io/kube-scheduler-amd64:v1.21.1\",\n                        \"imageID\": \"docker://sha256:a4183b88f6e65972c4b176b43ca59de31868635a7e43805f4c6e26203de1742f\",\n                        \"containerID\": \"docker://0f65171d12ad29493fbfb592f8aa7a1cdc958e7f5e5108b339b7f95bf4ce16cf\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Burstable\"\n            }\n        }\n    ]\n}\n==== START logs for container autoscaler of pod kube-system/coredns-autoscaler-6f594f4c58-llsrv ====\nI0525 16:17:43.969997       1 autoscaler.go:49] Scaling Namespace: kube-system, Target: deployment/coredns\nI0525 16:17:44.223706       1 autoscaler_server.go:157] ConfigMap not found: configmaps \"coredns-autoscaler\" not found, will create one with default params\nI0525 16:17:44.225754       1 k8sclient.go:147] Created ConfigMap coredns-autoscaler in namespace kube-system\nI0525 16:17:44.225773       1 plugin.go:50] Set control mode to linear\nI0525 16:17:44.225779       1 linear_controller.go:60] ConfigMap version change (old:  new: 782) - rebuilding params\nI0525 16:17:44.225785       1 linear_controller.go:61] Params from apiserver: \n{\"coresPerReplica\":256,\"nodesPerReplica\":16,\"preventSinglePointFailure\":true}\nI0525 16:17:44.225839       1 linear_controller.go:80] Defaulting min replicas count to 1 for linear controller\nI0525 16:17:44.227659       1 k8sclient.go:272] Cluster status: SchedulableNodes[5], SchedulableCores[10]\nI0525 16:17:44.227674       1 k8sclient.go:273] Replicas are not as expected : updating replicas from 1 to 2\n==== END logs for container autoscaler of pod kube-system/coredns-autoscaler-6f594f4c58-llsrv ====\n==== START logs for container coredns of pod 
kube-system/coredns-f45c4bf76-9lfbv ====\nW0525 16:18:03.133253       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice\nW0525 16:18:03.133888       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice\n.:53\n[INFO] plugin/reload: Running configuration MD5 = ce1e85197887ce49f3d78b19ce3dfa68\nCoreDNS-1.8.3\nlinux/amd64, go1.16, 4293992\n==== END logs for container coredns of pod kube-system/coredns-f45c4bf76-9lfbv ====\n==== START logs for container coredns of pod kube-system/coredns-f45c4bf76-xv9v7 ====\nW0525 16:17:46.729383       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice\nW0525 16:17:46.730220       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice\n.:53\n[INFO] plugin/reload: Running configuration MD5 = ce1e85197887ce49f3d78b19ce3dfa68\nCoreDNS-1.8.3\nlinux/amd64, go1.16, 4293992\n==== END logs for container coredns of pod kube-system/coredns-f45c4bf76-xv9v7 ====\n==== START logs for container dns-controller of pod kube-system/dns-controller-54665f7b78-4c8mj ====\ndns-controller version 0.1\nI0525 16:15:59.858844       1 main.go:199] initializing the watch controllers, namespace: \"\"\nI0525 16:15:59.858888       1 main.go:223] Ingress controller disabled\nI0525 16:15:59.860343       1 dnscontroller.go:108] starting DNS controller\nI0525 16:15:59.860344       1 node.go:60] starting node controller\nI0525 16:15:59.861276       1 dnscontroller.go:170] scope not yet ready: pod\nI0525 16:15:59.861289       1 pod.go:60] starting pod controller\nI0525 16:15:59.862049       1 service.go:60] starting service controller\nI0525 16:15:59.882329       1 dnscontroller.go:625] Update desired state: 
node/ip-172-20-44-17.eu-west-3.compute.internal: [{A node/ip-172-20-44-17.eu-west-3.compute.internal/internal 172.20.44.17 true} {A node/ip-172-20-44-17.eu-west-3.compute.internal/external 15.236.146.108 true} {A node/role=master/internal 172.20.44.17 true} {A node/role=master/external 15.236.146.108 true} {A node/role=master/ ip-172-20-44-17.eu-west-3.compute.internal true} {A node/role=master/ ip-172-20-44-17.eu-west-3.compute.internal true} {A node/role=master/ ec2-15-236-146-108.eu-west-3.compute.amazonaws.com true}]\nI0525 16:16:04.862317       1 dnscache.go:74] querying all DNS zones (no cached results)\nI0525 16:16:11.154527       1 dnscontroller.go:625] Update desired state: pod/kube-system/kops-controller-l89sm: [{A kops-controller.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io. 172.20.44.17 false}]\nI0525 16:16:15.384878       1 dnscontroller.go:274] Using default TTL of 1m0s\nI0525 16:16:15.384914       1 dnscontroller.go:482] Querying all dnsprovider records for zone \"test-cncf-aws.k8s.io.\"\nI0525 16:16:17.723054       1 dnscontroller.go:585] Adding DNS changes to batch {A kops-controller.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io.} [172.20.44.17]\nI0525 16:16:17.723090       1 dnscontroller.go:323] Applying DNS changeset for zone test-cncf-aws.k8s.io.::ZEMLNXIIWQ0RV\nI0525 16:16:46.981430       1 dnscontroller.go:625] Update desired state: pod/kube-system/kube-apiserver-ip-172-20-44-17.eu-west-3.compute.internal: [{_alias api.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io. node/ip-172-20-44-17.eu-west-3.compute.internal/external false}]\nI0525 16:16:48.004138       1 dnscontroller.go:274] Using default TTL of 1m0s\nI0525 16:16:48.004167       1 dnscontroller.go:482] Querying all dnsprovider records for zone \"test-cncf-aws.k8s.io.\"\nI0525 16:16:50.008567       1 dnscontroller.go:625] Update desired state: pod/kube-system/kube-apiserver-ip-172-20-44-17.eu-west-3.compute.internal: [{_alias api.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io. 
node/ip-172-20-44-17.eu-west-3.compute.internal/external false} {A api.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io. 172.20.44.17 false}]\nI0525 16:16:50.176994       1 dnscontroller.go:585] Adding DNS changes to batch {A api.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io.} [15.236.146.108]\nI0525 16:16:50.177031       1 dnscontroller.go:323] Applying DNS changeset for zone test-cncf-aws.k8s.io.::ZEMLNXIIWQ0RV\nI0525 16:16:55.392248       1 dnscontroller.go:274] Using default TTL of 1m0s\nI0525 16:16:55.392282       1 dnscontroller.go:482] Querying all dnsprovider records for zone \"test-cncf-aws.k8s.io.\"\nI0525 16:16:57.102277       1 dnscontroller.go:585] Adding DNS changes to batch {A api.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io.} [172.20.44.17]\nI0525 16:16:57.102312       1 dnscontroller.go:323] Applying DNS changeset for zone test-cncf-aws.k8s.io.::ZEMLNXIIWQ0RV\nI0525 16:17:21.095810       1 dnscontroller.go:625] Update desired state: node/ip-172-20-40-186.eu-west-3.compute.internal: [{A node/ip-172-20-40-186.eu-west-3.compute.internal/internal 172.20.40.186 true} {A node/ip-172-20-40-186.eu-west-3.compute.internal/external 35.180.122.224 true} {A node/role=node/internal 172.20.40.186 true} {A node/role=node/external 35.180.122.224 true} {A node/role=node/ ip-172-20-40-186.eu-west-3.compute.internal true} {A node/role=node/ ip-172-20-40-186.eu-west-3.compute.internal true} {A node/role=node/ ec2-35-180-122-224.eu-west-3.compute.amazonaws.com true}]\nI0525 16:17:30.850627       1 dnscontroller.go:625] Update desired state: node/ip-172-20-60-66.eu-west-3.compute.internal: [{A node/ip-172-20-60-66.eu-west-3.compute.internal/internal 172.20.60.66 true} {A node/ip-172-20-60-66.eu-west-3.compute.internal/external 35.180.193.220 true} {A node/role=node/internal 172.20.60.66 true} {A node/role=node/external 35.180.193.220 true} {A node/role=node/ ip-172-20-60-66.eu-west-3.compute.internal true} {A node/role=node/ 
ip-172-20-60-66.eu-west-3.compute.internal true} {A node/role=node/ ec2-35-180-193-220.eu-west-3.compute.amazonaws.com true}]\nI0525 16:17:32.010526       1 dnscontroller.go:625] Update desired state: node/ip-172-20-48-192.eu-west-3.compute.internal: [{A node/ip-172-20-48-192.eu-west-3.compute.internal/internal 172.20.48.192 true} {A node/ip-172-20-48-192.eu-west-3.compute.internal/external 15.237.112.171 true} {A node/role=node/internal 172.20.48.192 true} {A node/role=node/external 15.237.112.171 true} {A node/role=node/ ip-172-20-48-192.eu-west-3.compute.internal true} {A node/role=node/ ip-172-20-48-192.eu-west-3.compute.internal true} {A node/role=node/ ec2-15-237-112-171.eu-west-3.compute.amazonaws.com true}]\nI0525 16:17:32.193132       1 dnscontroller.go:625] Update desired state: node/ip-172-20-54-92.eu-west-3.compute.internal: [{A node/ip-172-20-54-92.eu-west-3.compute.internal/internal 172.20.54.92 true} {A node/ip-172-20-54-92.eu-west-3.compute.internal/external 15.188.147.0 true} {A node/role=node/internal 172.20.54.92 true} {A node/role=node/external 15.188.147.0 true} {A node/role=node/ ip-172-20-54-92.eu-west-3.compute.internal true} {A node/role=node/ ip-172-20-54-92.eu-west-3.compute.internal true} {A node/role=node/ ec2-15-188-147-0.eu-west-3.compute.amazonaws.com true}]\n==== END logs for container dns-controller of pod kube-system/dns-controller-54665f7b78-4c8mj ====\n==== START logs for container etcd-manager of pod kube-system/etcd-manager-events-ip-172-20-44-17.eu-west-3.compute.internal ====\netcd-manager\nI0525 16:15:15.378211    6417 volumes.go:86] AWS API Request: ec2metadata/GetToken\nI0525 16:15:15.380970    6417 volumes.go:86] AWS API Request: ec2metadata/GetDynamicData\nI0525 16:15:15.381966    6417 volumes.go:86] AWS API Request: ec2metadata/GetMetadata\nI0525 16:15:15.382906    6417 volumes.go:86] AWS API Request: ec2metadata/GetMetadata\nI0525 16:15:15.384470    6417 volumes.go:86] AWS API Request: ec2metadata/GetMetadata\nI0525 
16:15:15.385451    6417 main.go:305] Mounting available etcd volumes matching tags [k8s.io/etcd/events k8s.io/role/master=1 kubernetes.io/cluster/e2e-62197fe92d-da63e.test-cncf-aws.k8s.io=owned]; nameTag=k8s.io/etcd/events\nI0525 16:15:15.388200    6417 volumes.go:86] AWS API Request: ec2/DescribeVolumes\nI0525 16:15:15.524759    6417 mounter.go:304] Trying to mount master volume: \"vol-02d12313553cda503\"\nI0525 16:15:15.524778    6417 volumes.go:331] Trying to attach volume \"vol-02d12313553cda503\" at \"/dev/xvdu\"\nI0525 16:15:15.524932    6417 volumes.go:86] AWS API Request: ec2/AttachVolume\nI0525 16:15:15.849010    6417 volumes.go:349] AttachVolume request returned {\n  AttachTime: 2021-05-25 16:15:15.834 +0000 UTC,\n  Device: \"/dev/xvdu\",\n  InstanceId: \"i-056f852792822a839\",\n  State: \"attaching\",\n  VolumeId: \"vol-02d12313553cda503\"\n}\nI0525 16:15:15.849197    6417 volumes.go:86] AWS API Request: ec2/DescribeVolumes\nI0525 16:15:15.919690    6417 mounter.go:318] Currently attached volumes: [0xc00049b180]\nI0525 16:15:15.919710    6417 mounter.go:72] Master volume \"vol-02d12313553cda503\" is attached at \"/dev/xvdu\"\nI0525 16:15:15.920182    6417 mounter.go:86] Doing safe-format-and-mount of /dev/xvdu to /mnt/master-vol-02d12313553cda503\nI0525 16:15:15.920213    6417 volumes.go:234] volume vol-02d12313553cda503 not mounted at /rootfs/dev/xvdu\nI0525 16:15:15.920300    6417 volumes.go:263] nvme path not found \"/rootfs/dev/disk/by-id/nvme-Amazon_Elastic_Block_Store_vol02d12313553cda503\"\nI0525 16:15:15.920312    6417 volumes.go:251] volume vol-02d12313553cda503 not mounted at nvme-Amazon_Elastic_Block_Store_vol02d12313553cda503\nI0525 16:15:15.920316    6417 mounter.go:121] Waiting for volume \"vol-02d12313553cda503\" to be mounted\nI0525 16:15:16.920663    6417 volumes.go:234] volume vol-02d12313553cda503 not mounted at /rootfs/dev/xvdu\nI0525 16:15:16.920712    6417 volumes.go:248] found nvme volume 
\"nvme-Amazon_Elastic_Block_Store_vol02d12313553cda503\" at \"/dev/nvme1n1\"\nI0525 16:15:16.920722    6417 mounter.go:125] Found volume \"vol-02d12313553cda503\" mounted at device \"/dev/nvme1n1\"\nI0525 16:15:16.921356    6417 mounter.go:171] Creating mount directory \"/rootfs/mnt/master-vol-02d12313553cda503\"\nI0525 16:15:16.921421    6417 mounter.go:176] Mounting device \"/dev/nvme1n1\" on \"/mnt/master-vol-02d12313553cda503\"\nI0525 16:15:16.921429    6417 mount_linux.go:446] Attempting to determine if disk \"/dev/nvme1n1\" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/nvme1n1])\nI0525 16:15:16.921447    6417 nsenter.go:132] Running nsenter command: nsenter [--mount=/rootfs/proc/1/ns/mnt -- blkid -p -s TYPE -s PTTYPE -o export /dev/nvme1n1]\nI0525 16:15:16.940641    6417 mount_linux.go:449] Output: \"\"\nI0525 16:15:16.940668    6417 mount_linux.go:408] Disk \"/dev/nvme1n1\" appears to be unformatted, attempting to format as type: \"ext4\" with options: [-F -m0 /dev/nvme1n1]\nI0525 16:15:16.940686    6417 nsenter.go:132] Running nsenter command: nsenter [--mount=/rootfs/proc/1/ns/mnt -- mkfs.ext4 -F -m0 /dev/nvme1n1]\nI0525 16:15:17.232367    6417 mount_linux.go:418] Disk successfully formatted (mkfs): ext4 - /dev/nvme1n1 /mnt/master-vol-02d12313553cda503\nI0525 16:15:17.232385    6417 mount_linux.go:436] Attempting to mount disk /dev/nvme1n1 in ext4 format at /mnt/master-vol-02d12313553cda503\nI0525 16:15:17.232399    6417 nsenter.go:80] nsenter mount /dev/nvme1n1 /mnt/master-vol-02d12313553cda503 ext4 [defaults]\nI0525 16:15:17.232421    6417 nsenter.go:132] Running nsenter command: nsenter [--mount=/rootfs/proc/1/ns/mnt -- /bin/systemd-run --description=Kubernetes transient mount for /mnt/master-vol-02d12313553cda503 --scope -- /bin/mount -t ext4 -o defaults /dev/nvme1n1 /mnt/master-vol-02d12313553cda503]\nI0525 16:15:17.262371    6417 nsenter.go:84] Output of mounting /dev/nvme1n1 to /mnt/master-vol-02d12313553cda503: Running 
scope as unit: run-rbaa0271df4c3417591493803f01e2235.scope\nI0525 16:15:17.262396    6417 mount_linux.go:446] Attempting to determine if disk \"/dev/nvme1n1\" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/nvme1n1])\nI0525 16:15:17.262418    6417 nsenter.go:132] Running nsenter command: nsenter [--mount=/rootfs/proc/1/ns/mnt -- blkid -p -s TYPE -s PTTYPE -o export /dev/nvme1n1]\nI0525 16:15:17.281228    6417 mount_linux.go:449] Output: \"DEVNAME=/dev/nvme1n1\\nTYPE=ext4\\n\"\nI0525 16:15:17.281246    6417 resizefs_linux.go:53] ResizeFS.Resize - Expanding mounted volume /dev/nvme1n1\nI0525 16:15:17.281257    6417 nsenter.go:132] Running nsenter command: nsenter [--mount=/rootfs/proc/1/ns/mnt -- resize2fs /dev/nvme1n1]\nI0525 16:15:17.283600    6417 resizefs_linux.go:68] Device /dev/nvme1n1 resized successfully\nI0525 16:15:17.300018    6417 mount_linux.go:206] Detected OS with systemd\nI0525 16:15:17.300875    6417 mounter.go:262] device \"/dev/nvme1n1\" did not evaluate as a symlink: lstat /dev/nvme1n1: no such file or directory\nI0525 16:15:17.300901    6417 mounter.go:262] device \"/dev/nvme1n1\" did not evaluate as a symlink: lstat /dev/nvme1n1: no such file or directory\nI0525 16:15:17.300907    6417 mounter.go:242] matched device \"/dev/nvme1n1\" and \"/dev/nvme1n1\" via '\\x00'\nI0525 16:15:17.300917    6417 mounter.go:94] mounted master volume \"vol-02d12313553cda503\" on /mnt/master-vol-02d12313553cda503\nI0525 16:15:17.300928    6417 main.go:320] discovered IP address: 172.20.44.17\nI0525 16:15:17.300933    6417 main.go:325] Setting data dir to /rootfs/mnt/master-vol-02d12313553cda503\nI0525 16:15:17.431141    6417 certs.go:183] generating certificate for \"etcd-manager-server-etcd-events-a\"\nI0525 16:15:17.700005    6417 certs.go:183] generating certificate for \"etcd-manager-client-etcd-events-a\"\nI0525 16:15:17.705420    6417 server.go:87] starting GRPC server using TLS, 
ServerName=\"etcd-manager-server-etcd-events-a\"\nI0525 16:15:17.705860    6417 main.go:474] peerClientIPs: [172.20.44.17]\nI0525 16:15:17.945143    6417 certs.go:183] generating certificate for \"etcd-manager-etcd-events-a\"\nI0525 16:15:17.953894    6417 server.go:105] GRPC server listening on \"172.20.44.17:3997\"\nI0525 16:15:17.954416    6417 volumes.go:86] AWS API Request: ec2/DescribeVolumes\nI0525 16:15:18.058848    6417 volumes.go:86] AWS API Request: ec2/DescribeInstances\nI0525 16:15:18.110803    6417 peers.go:115] found new candidate peer from discovery: etcd-events-a [{172.20.44.17 0} {172.20.44.17 0}]\nI0525 16:15:18.111012    6417 peers.go:295] connecting to peer \"etcd-events-a\" with TLS policy, servername=\"etcd-manager-server-etcd-events-a\"\nI0525 16:15:18.113484    6417 hosts.go:84] hosts update: primary=map[], fallbacks=map[etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:[172.20.44.17 172.20.44.17]], final=map[172.20.44.17:[etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io]]\nI0525 16:15:19.954267    6417 controller.go:189] starting controller iteration\nI0525 16:15:19.954681    6417 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.44.17:3997\" > leadership_token:\"LUZ_MEyKc2YFDoaKt2IVVQ\" healthy:<id:\"etcd-events-a\" endpoints:\"172.20.44.17:3997\" > > \nI0525 16:15:19.954852    6417 commands.go:41] refreshing commands\nI0525 16:15:19.954939    6417 s3context.go:334] product_uuid is \"ec2fff1a-06e0-467e-bca7-1d39ac3352bc\", assuming running on EC2\nI0525 16:15:19.956597    6417 s3context.go:166] got region from metadata: \"eu-west-3\"\nI0525 16:15:19.982826    6417 s3context.go:213] found bucket in region \"us-west-1\"\nI0525 16:15:20.657382    6417 vfs.go:120] listed commands in s3://k8s-kops-prow/e2e-62197fe92d-da63e.test-cncf-aws.k8s.io/backups/etcd/events/control: 0 commands\nI0525 16:15:20.657405 
   6417 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-62197fe92d-da63e.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-spec\"\nI0525 16:15:30.816782    6417 controller.go:189] starting controller iteration\nI0525 16:15:30.816811    6417 controller.go:266] Broadcasting leadership assertion with token \"LUZ_MEyKc2YFDoaKt2IVVQ\"\nI0525 16:15:30.817097    6417 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.44.17:3997\" > leadership_token:\"LUZ_MEyKc2YFDoaKt2IVVQ\" healthy:<id:\"etcd-events-a\" endpoints:\"172.20.44.17:3997\" > > \nI0525 16:15:30.817284    6417 controller.go:295] I am leader with token \"LUZ_MEyKc2YFDoaKt2IVVQ\"\nI0525 16:15:30.817648    6417 controller.go:302] etcd cluster state: etcdClusterState\n  members:\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"172.20.44.17:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:3995\" > }\nI0525 16:15:30.817707    6417 controller.go:303] etcd cluster members: map[]\nI0525 16:15:30.817717    6417 controller.go:641] sending member map to all peers: \nI0525 16:15:30.817987    6417 commands.go:38] not refreshing commands - TTL not hit\nI0525 16:15:30.818018    6417 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-62197fe92d-da63e.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created\"\nI0525 16:15:31.424684    6417 controller.go:359] detected that there is no existing cluster\nI0525 16:15:31.424699    6417 commands.go:41] refreshing commands\nI0525 16:15:31.744192    6417 vfs.go:120] listed commands in 
s3://k8s-kops-prow/e2e-62197fe92d-da63e.test-cncf-aws.k8s.io/backups/etcd/events/control: 0 commands\nI0525 16:15:31.744214    6417 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-62197fe92d-da63e.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-spec\"\nI0525 16:15:31.898280    6417 controller.go:641] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io\" addresses:\"172.20.44.17\" > \nI0525 16:15:31.898593    6417 etcdserver.go:248] updating hosts: map[172.20.44.17:[etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io]]\nI0525 16:15:31.898631    6417 hosts.go:84] hosts update: primary=map[172.20.44.17:[etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:[172.20.44.17 172.20.44.17]], final=map[172.20.44.17:[etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io]]\nI0525 16:15:31.898713    6417 hosts.go:181] skipping update of unchanged /etc/hosts\nI0525 16:15:31.898851    6417 newcluster.go:136] starting new etcd cluster with [etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"172.20.44.17:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:3995\" > }]\nI0525 16:15:31.899296    6417 newcluster.go:153] JoinClusterResponse: \nI0525 16:15:31.900154    6417 etcdserver.go:556] starting etcd with state new_cluster:true cluster:<cluster_token:\"cRBz9Vos1TemEFFcXLXVJA\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2381\" 
client_urls:\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > etcd_version:\"3.4.13\" quarantined:true \nI0525 16:15:31.900186    6417 etcdserver.go:565] starting etcd with datadir /rootfs/mnt/master-vol-02d12313553cda503/data/cRBz9Vos1TemEFFcXLXVJA\nI0525 16:15:31.901069    6417 pki.go:59] adding peerClientIPs [172.20.44.17]\nI0525 16:15:31.901094    6417 pki.go:67] generating peer keypair for etcd: {CommonName:etcd-events-a Organization:[] AltNames:{DNSNames:[etcd-events-a etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io] IPs:[172.20.44.17 127.0.0.1]} Usages:[2 1]}\nI0525 16:15:32.140265    6417 certs.go:183] generating certificate for \"etcd-events-a\"\nI0525 16:15:32.142303    6417 pki.go:110] building client-serving certificate: {CommonName:etcd-events-a Organization:[] AltNames:{DNSNames:[etcd-events-a etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io] IPs:[127.0.0.1]} Usages:[1 2]}\nI0525 16:15:32.483193    6417 certs.go:183] generating certificate for \"etcd-events-a\"\nI0525 16:15:32.650459    6417 certs.go:183] generating certificate for \"etcd-events-a\"\nI0525 16:15:32.653271    6417 etcdprocess.go:203] executing command /opt/etcd-v3.4.13-linux-amd64/etcd [/opt/etcd-v3.4.13-linux-amd64/etcd]\nI0525 16:15:32.654059    6417 newcluster.go:171] JoinClusterResponse: \nI0525 16:15:32.654110    6417 s3fs.go:199] Writing file \"s3://k8s-kops-prow/e2e-62197fe92d-da63e.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-spec\"\nI0525 16:15:32.654125    6417 s3context.go:241] Checking default bucket encryption for \"k8s-kops-prow\"\n2021-05-25 16:15:32.661059 I | pkg/flags: recognized and used environment variable 
ETCD_ADVERTISE_CLIENT_URLS=https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:3995\n2021-05-25 16:15:32.661227 I | pkg/flags: recognized and used environment variable ETCD_CERT_FILE=/rootfs/mnt/master-vol-02d12313553cda503/pki/cRBz9Vos1TemEFFcXLXVJA/clients/server.crt\n2021-05-25 16:15:32.661279 I | pkg/flags: recognized and used environment variable ETCD_CLIENT_CERT_AUTH=true\n2021-05-25 16:15:32.661368 I | pkg/flags: recognized and used environment variable ETCD_DATA_DIR=/rootfs/mnt/master-vol-02d12313553cda503/data/cRBz9Vos1TemEFFcXLXVJA\n2021-05-25 16:15:32.661430 I | pkg/flags: recognized and used environment variable ETCD_ENABLE_V2=false\n2021-05-25 16:15:32.661520 I | pkg/flags: recognized and used environment variable ETCD_INITIAL_ADVERTISE_PEER_URLS=https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2381\n2021-05-25 16:15:32.661589 I | pkg/flags: recognized and used environment variable ETCD_INITIAL_CLUSTER=etcd-events-a=https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2381\n2021-05-25 16:15:32.661657 I | pkg/flags: recognized and used environment variable ETCD_INITIAL_CLUSTER_STATE=new\n2021-05-25 16:15:32.661701 I | pkg/flags: recognized and used environment variable ETCD_INITIAL_CLUSTER_TOKEN=cRBz9Vos1TemEFFcXLXVJA\n2021-05-25 16:15:32.661788 I | pkg/flags: recognized and used environment variable ETCD_KEY_FILE=/rootfs/mnt/master-vol-02d12313553cda503/pki/cRBz9Vos1TemEFFcXLXVJA/clients/server.key\n2021-05-25 16:15:32.661833 I | pkg/flags: recognized and used environment variable ETCD_LISTEN_CLIENT_URLS=https://0.0.0.0:3995\n2021-05-25 16:15:32.661917 I | pkg/flags: recognized and used environment variable ETCD_LISTEN_PEER_URLS=https://0.0.0.0:2381\n2021-05-25 16:15:32.661961 I | pkg/flags: recognized and used environment variable ETCD_LOG_OUTPUTS=stdout\n2021-05-25 16:15:32.662051 I | pkg/flags: recognized and used environment variable ETCD_LOGGER=zap\n2021-05-25 16:15:32.662107 I | 
pkg/flags: recognized and used environment variable ETCD_NAME=etcd-events-a\n2021-05-25 16:15:32.662200 I | pkg/flags: recognized and used environment variable ETCD_PEER_CERT_FILE=/rootfs/mnt/master-vol-02d12313553cda503/pki/cRBz9Vos1TemEFFcXLXVJA/peers/me.crt\n2021-05-25 16:15:32.662259 I | pkg/flags: recognized and used environment variable ETCD_PEER_CLIENT_CERT_AUTH=true\n2021-05-25 16:15:32.662332 I | pkg/flags: recognized and used environment variable ETCD_PEER_KEY_FILE=/rootfs/mnt/master-vol-02d12313553cda503/pki/cRBz9Vos1TemEFFcXLXVJA/peers/me.key\n2021-05-25 16:15:32.662393 I | pkg/flags: recognized and used environment variable ETCD_PEER_TRUSTED_CA_FILE=/rootfs/mnt/master-vol-02d12313553cda503/pki/cRBz9Vos1TemEFFcXLXVJA/peers/ca.crt\n2021-05-25 16:15:32.662472 I | pkg/flags: recognized and used environment variable ETCD_TRUSTED_CA_FILE=/rootfs/mnt/master-vol-02d12313553cda503/pki/cRBz9Vos1TemEFFcXLXVJA/clients/ca.crt\n2021-05-25 16:15:32.662550 W | pkg/flags: unrecognized environment variable ETCD_LISTEN_METRICS_URLS=\n{\"level\":\"info\",\"ts\":\"2021-05-25T16:15:32.662Z\",\"caller\":\"embed/etcd.go:117\",\"msg\":\"configuring peer listeners\",\"listen-peer-urls\":[\"https://0.0.0.0:2381\"]}\n{\"level\":\"info\",\"ts\":\"2021-05-25T16:15:32.662Z\",\"caller\":\"embed/etcd.go:468\",\"msg\":\"starting with peer TLS\",\"tls-info\":\"cert = /rootfs/mnt/master-vol-02d12313553cda503/pki/cRBz9Vos1TemEFFcXLXVJA/peers/me.crt, key = /rootfs/mnt/master-vol-02d12313553cda503/pki/cRBz9Vos1TemEFFcXLXVJA/peers/me.key, trusted-ca = /rootfs/mnt/master-vol-02d12313553cda503/pki/cRBz9Vos1TemEFFcXLXVJA/peers/ca.crt, client-cert-auth = true, crl-file = \",\"cipher-suites\":[]}\n{\"level\":\"info\",\"ts\":\"2021-05-25T16:15:32.664Z\",\"caller\":\"embed/etcd.go:127\",\"msg\":\"configuring client listeners\",\"listen-client-urls\":[\"https://0.0.0.0:3995\"]}\n{\"level\":\"info\",\"ts\":\"2021-05-25T16:15:32.665Z\",\"caller\":\"embed/etcd.go:302\",\"msg\":\"starting an etcd 
server\",\"etcd-version\":\"3.4.13\",\"git-sha\":\"ae9734ed2\",\"go-version\":\"go1.12.17\",\"go-os\":\"linux\",\"go-arch\":\"amd64\",\"max-cpu-set\":2,\"max-cpu-available\":2,\"member-initialized\":false,\"name\":\"etcd-events-a\",\"data-dir\":\"/rootfs/mnt/master-vol-02d12313553cda503/data/cRBz9Vos1TemEFFcXLXVJA\",\"wal-dir\":\"\",\"wal-dir-dedicated\":\"\",\"member-dir\":\"/rootfs/mnt/master-vol-02d12313553cda503/data/cRBz9Vos1TemEFFcXLXVJA/member\",\"force-new-cluster\":false,\"heartbeat-interval\":\"100ms\",\"election-timeout\":\"1s\",\"initial-election-tick-advance\":true,\"snapshot-count\":100000,\"snapshot-catchup-entries\":5000,\"initial-advertise-peer-urls\":[\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2381\"],\"listen-peer-urls\":[\"https://0.0.0.0:2381\"],\"advertise-client-urls\":[\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:3995\"],\"listen-client-urls\":[\"https://0.0.0.0:3995\"],\"listen-metrics-urls\":[],\"cors\":[\"*\"],\"host-whitelist\":[\"*\"],\"initial-cluster\":\"etcd-events-a=https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2381\",\"initial-cluster-state\":\"new\",\"initial-cluster-token\":\"cRBz9Vos1TemEFFcXLXVJA\",\"quota-size-bytes\":2147483648,\"pre-vote\":false,\"initial-corrupt-check\":false,\"corrupt-check-time-interval\":\"0s\",\"auto-compaction-mode\":\"periodic\",\"auto-compaction-retention\":\"0s\",\"auto-compaction-interval\":\"0s\",\"discovery-url\":\"\",\"discovery-proxy\":\"\"}\n{\"level\":\"info\",\"ts\":\"2021-05-25T16:15:32.673Z\",\"caller\":\"etcdserver/backend.go:80\",\"msg\":\"opened backend db\",\"path\":\"/rootfs/mnt/master-vol-02d12313553cda503/data/cRBz9Vos1TemEFFcXLXVJA/member/snap/db\",\"took\":\"7.470622ms\"}\n{\"level\":\"info\",\"ts\":\"2021-05-25T16:15:32.673Z\",\"caller\":\"netutil/netutil.go:112\",\"msg\":\"resolved URL 
Host\",\"url\":\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2381\",\"host\":\"etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2381\",\"resolved-addr\":\"172.20.44.17:2381\"}\n{\"level\":\"info\",\"ts\":\"2021-05-25T16:15:32.673Z\",\"caller\":\"netutil/netutil.go:112\",\"msg\":\"resolved URL Host\",\"url\":\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2381\",\"host\":\"etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2381\",\"resolved-addr\":\"172.20.44.17:2381\"}\n{\"level\":\"info\",\"ts\":\"2021-05-25T16:15:32.678Z\",\"caller\":\"etcdserver/raft.go:486\",\"msg\":\"starting local member\",\"local-member-id\":\"4d947a80aae4e559\",\"cluster-id\":\"2d9cffed9c1df564\"}\n{\"level\":\"info\",\"ts\":\"2021-05-25T16:15:32.678Z\",\"caller\":\"raft/raft.go:1530\",\"msg\":\"4d947a80aae4e559 switched to configuration voters=()\"}\n{\"level\":\"info\",\"ts\":\"2021-05-25T16:15:32.678Z\",\"caller\":\"raft/raft.go:700\",\"msg\":\"4d947a80aae4e559 became follower at term 0\"}\n{\"level\":\"info\",\"ts\":\"2021-05-25T16:15:32.678Z\",\"caller\":\"raft/raft.go:383\",\"msg\":\"newRaft 4d947a80aae4e559 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]\"}\n{\"level\":\"info\",\"ts\":\"2021-05-25T16:15:32.678Z\",\"caller\":\"raft/raft.go:700\",\"msg\":\"4d947a80aae4e559 became follower at term 1\"}\n{\"level\":\"info\",\"ts\":\"2021-05-25T16:15:32.678Z\",\"caller\":\"raft/raft.go:1530\",\"msg\":\"4d947a80aae4e559 switched to configuration voters=(5590227730515158361)\"}\n{\"level\":\"warn\",\"ts\":\"2021-05-25T16:15:32.681Z\",\"caller\":\"auth/store.go:1366\",\"msg\":\"simple token is not cryptographically signed\"}\n{\"level\":\"info\",\"ts\":\"2021-05-25T16:15:32.684Z\",\"caller\":\"etcdserver/quota.go:98\",\"msg\":\"enabled backend quota with default value\",\"quota-name\":\"v3-applier\",\"quota-size-bytes\":2147483648,\"quota-size\":\"2.1 
GB\"}\n{\"level\":\"info\",\"ts\":\"2021-05-25T16:15:32.687Z\",\"caller\":\"etcdserver/server.go:803\",\"msg\":\"starting etcd server\",\"local-member-id\":\"4d947a80aae4e559\",\"local-server-version\":\"3.4.13\",\"cluster-version\":\"to_be_decided\"}\n{\"level\":\"info\",\"ts\":\"2021-05-25T16:15:32.688Z\",\"caller\":\"etcdserver/server.go:669\",\"msg\":\"started as single-node; fast-forwarding election ticks\",\"local-member-id\":\"4d947a80aae4e559\",\"forward-ticks\":9,\"forward-duration\":\"900ms\",\"election-ticks\":10,\"election-timeout\":\"1s\"}\n{\"level\":\"info\",\"ts\":\"2021-05-25T16:15:32.688Z\",\"caller\":\"raft/raft.go:1530\",\"msg\":\"4d947a80aae4e559 switched to configuration voters=(5590227730515158361)\"}\n{\"level\":\"info\",\"ts\":\"2021-05-25T16:15:32.688Z\",\"caller\":\"membership/cluster.go:392\",\"msg\":\"added member\",\"cluster-id\":\"2d9cffed9c1df564\",\"local-member-id\":\"4d947a80aae4e559\",\"added-peer-id\":\"4d947a80aae4e559\",\"added-peer-peer-urls\":[\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2381\"]}\n{\"level\":\"info\",\"ts\":\"2021-05-25T16:15:32.690Z\",\"caller\":\"embed/etcd.go:711\",\"msg\":\"starting with client TLS\",\"tls-info\":\"cert = /rootfs/mnt/master-vol-02d12313553cda503/pki/cRBz9Vos1TemEFFcXLXVJA/clients/server.crt, key = /rootfs/mnt/master-vol-02d12313553cda503/pki/cRBz9Vos1TemEFFcXLXVJA/clients/server.key, trusted-ca = /rootfs/mnt/master-vol-02d12313553cda503/pki/cRBz9Vos1TemEFFcXLXVJA/clients/ca.crt, client-cert-auth = true, crl-file = \",\"cipher-suites\":[]}\n{\"level\":\"info\",\"ts\":\"2021-05-25T16:15:32.690Z\",\"caller\":\"embed/etcd.go:579\",\"msg\":\"serving peer traffic\",\"address\":\"[::]:2381\"}\n{\"level\":\"info\",\"ts\":\"2021-05-25T16:15:32.691Z\",\"caller\":\"embed/etcd.go:244\",\"msg\":\"now serving 
peer/client/metrics\",\"local-member-id\":\"4d947a80aae4e559\",\"initial-advertise-peer-urls\":[\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2381\"],\"listen-peer-urls\":[\"https://0.0.0.0:2381\"],\"advertise-client-urls\":[\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:3995\"],\"listen-client-urls\":[\"https://0.0.0.0:3995\"],\"listen-metrics-urls\":[]}\nI0525 16:15:32.989724    6417 s3fs.go:199] Writing file \"s3://k8s-kops-prow/e2e-62197fe92d-da63e.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created\"\n{\"level\":\"info\",\"ts\":\"2021-05-25T16:15:33.078Z\",\"caller\":\"raft/raft.go:923\",\"msg\":\"4d947a80aae4e559 is starting a new election at term 1\"}\n{\"level\":\"info\",\"ts\":\"2021-05-25T16:15:33.078Z\",\"caller\":\"raft/raft.go:713\",\"msg\":\"4d947a80aae4e559 became candidate at term 2\"}\n{\"level\":\"info\",\"ts\":\"2021-05-25T16:15:33.078Z\",\"caller\":\"raft/raft.go:824\",\"msg\":\"4d947a80aae4e559 received MsgVoteResp from 4d947a80aae4e559 at term 2\"}\n{\"level\":\"info\",\"ts\":\"2021-05-25T16:15:33.078Z\",\"caller\":\"raft/raft.go:765\",\"msg\":\"4d947a80aae4e559 became leader at term 2\"}\n{\"level\":\"info\",\"ts\":\"2021-05-25T16:15:33.078Z\",\"caller\":\"raft/node.go:325\",\"msg\":\"raft.node: 4d947a80aae4e559 elected leader 4d947a80aae4e559 at term 2\"}\n{\"level\":\"info\",\"ts\":\"2021-05-25T16:15:33.079Z\",\"caller\":\"etcdserver/server.go:2037\",\"msg\":\"published local member to cluster through raft\",\"local-member-id\":\"4d947a80aae4e559\",\"local-member-attributes\":\"{Name:etcd-events-a ClientURLs:[https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:3995]}\",\"request-path\":\"/0/members/4d947a80aae4e559/attributes\",\"cluster-id\":\"2d9cffed9c1df564\",\"publish-timeout\":\"7s\"}\n{\"level\":\"info\",\"ts\":\"2021-05-25T16:15:33.079Z\",\"caller\":\"etcdserver/server.go:2528\",\"msg\":\"setting up initial cluster 
version\",\"cluster-version\":\"3.4\"}\n{\"level\":\"info\",\"ts\":\"2021-05-25T16:15:33.079Z\",\"caller\":\"membership/cluster.go:558\",\"msg\":\"set initial cluster version\",\"cluster-id\":\"2d9cffed9c1df564\",\"local-member-id\":\"4d947a80aae4e559\",\"cluster-version\":\"3.4\"}\n{\"level\":\"info\",\"ts\":\"2021-05-25T16:15:33.080Z\",\"caller\":\"api/capability.go:76\",\"msg\":\"enabled capabilities for version\",\"cluster-version\":\"3.4\"}\n{\"level\":\"info\",\"ts\":\"2021-05-25T16:15:33.080Z\",\"caller\":\"etcdserver/server.go:2560\",\"msg\":\"cluster version is updated\",\"cluster-version\":\"3.4\"}\n{\"level\":\"info\",\"ts\":\"2021-05-25T16:15:33.080Z\",\"caller\":\"embed/serve.go:191\",\"msg\":\"serving client traffic securely\",\"address\":\"[::]:3995\"}\nI0525 16:15:33.156077    6417 controller.go:189] starting controller iteration\nI0525 16:15:33.156100    6417 controller.go:266] Broadcasting leadership assertion with token \"LUZ_MEyKc2YFDoaKt2IVVQ\"\nI0525 16:15:33.156424    6417 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.44.17:3997\" > leadership_token:\"LUZ_MEyKc2YFDoaKt2IVVQ\" healthy:<id:\"etcd-events-a\" endpoints:\"172.20.44.17:3997\" > > \nI0525 16:15:33.156555    6417 controller.go:295] I am leader with token \"LUZ_MEyKc2YFDoaKt2IVVQ\"\nI0525 16:15:33.157503    6417 controller.go:705] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:3995]\nI0525 16:15:33.171057    6417 controller.go:302] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:3995\"],\"ID\":\"5590227730515158361\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"172.20.44.17:3997\" }, info=cluster_name:\"etcd-events\" 
node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:3995\" > etcd_state:<cluster:<cluster_token:\"cRBz9Vos1TemEFFcXLXVJA\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > etcd_version:\"3.4.13\" quarantined:true > }\nI0525 16:15:33.171164    6417 controller.go:303] etcd cluster members: map[5590227730515158361:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:3995\"],\"ID\":\"5590227730515158361\"}]\nI0525 16:15:33.171184    6417 controller.go:641] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io\" addresses:\"172.20.44.17\" > \nI0525 16:15:33.171378    6417 etcdserver.go:248] updating hosts: map[172.20.44.17:[etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io]]\nI0525 16:15:33.171391    6417 hosts.go:84] hosts update: primary=map[172.20.44.17:[etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:[172.20.44.17 172.20.44.17]], final=map[172.20.44.17:[etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io]]\nI0525 16:15:33.171446    6417 hosts.go:181] skipping update of unchanged /etc/hosts\nI0525 16:15:33.171530    6417 commands.go:38] not 
refreshing commands - TTL not hit\nI0525 16:15:33.171541    6417 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-62197fe92d-da63e.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created\"\nI0525 16:15:33.330400    6417 controller.go:395] spec member_count:1 etcd_version:\"3.4.13\" \nI0525 16:15:33.331215    6417 backup.go:134] performing snapshot save to /tmp/470182197/snapshot.db.gz\n{\"level\":\"info\",\"ts\":\"2021-05-25T16:15:33.336Z\",\"caller\":\"clientv3/maintenance.go:200\",\"msg\":\"opened snapshot stream; downloading\"}\n{\"level\":\"info\",\"ts\":\"2021-05-25T16:15:33.337Z\",\"caller\":\"v3rpc/maintenance.go:139\",\"msg\":\"sending database snapshot to client\",\"total-bytes\":20480,\"size\":\"20 kB\"}\n{\"level\":\"info\",\"ts\":\"2021-05-25T16:15:33.337Z\",\"caller\":\"v3rpc/maintenance.go:177\",\"msg\":\"sending database sha256 checksum to client\",\"total-bytes\":20480,\"checksum-size\":32}\n{\"level\":\"info\",\"ts\":\"2021-05-25T16:15:33.337Z\",\"caller\":\"v3rpc/maintenance.go:191\",\"msg\":\"successfully sent database snapshot to client\",\"total-bytes\":20480,\"size\":\"20 kB\",\"took\":\"now\"}\n{\"level\":\"info\",\"ts\":\"2021-05-25T16:15:33.338Z\",\"caller\":\"clientv3/maintenance.go:208\",\"msg\":\"completed snapshot read; closing\"}\nI0525 16:15:33.339030    6417 s3fs.go:199] Writing file \"s3://k8s-kops-prow/e2e-62197fe92d-da63e.test-cncf-aws.k8s.io/backups/etcd/events/2021-05-25T16:15:33Z-000001/etcd.backup.gz\"\nI0525 16:15:33.507181    6417 s3fs.go:199] Writing file \"s3://k8s-kops-prow/e2e-62197fe92d-da63e.test-cncf-aws.k8s.io/backups/etcd/events/2021-05-25T16:15:33Z-000001/_etcd_backup.meta\"\nI0525 16:15:33.683808    6417 backup.go:159] backup complete: name:\"2021-05-25T16:15:33Z-000001\" \nI0525 16:15:33.684222    6417 controller.go:937] backup response: name:\"2021-05-25T16:15:33Z-000001\" \nI0525 16:15:33.684243    6417 controller.go:576] took backup: name:\"2021-05-25T16:15:33Z-000001\" \nI0525 16:15:33.848362  
  6417 vfs.go:118] listed backups in s3://k8s-kops-prow/e2e-62197fe92d-da63e.test-cncf-aws.k8s.io/backups/etcd/events: [2021-05-25T16:15:33Z-000001]\nI0525 16:15:33.848514    6417 cleanup.go:166] retaining backup \"2021-05-25T16:15:33Z-000001\"\nI0525 16:15:33.848569    6417 restore.go:98] Setting quarantined state to false\nI0525 16:15:33.848902    6417 etcdserver.go:393] Reconfigure request: header:<leadership_token:\"LUZ_MEyKc2YFDoaKt2IVVQ\" cluster_name:\"etcd-events\" > \nI0525 16:15:33.848939    6417 etcdserver.go:436] Stopping etcd for reconfigure request: header:<leadership_token:\"LUZ_MEyKc2YFDoaKt2IVVQ\" cluster_name:\"etcd-events\" > \nI0525 16:15:33.848948    6417 etcdserver.go:640] killing etcd with datadir /rootfs/mnt/master-vol-02d12313553cda503/data/cRBz9Vos1TemEFFcXLXVJA\nI0525 16:15:33.849793    6417 etcdprocess.go:131] Waiting for etcd to exit\nI0525 16:15:33.950280    6417 etcdprocess.go:131] Waiting for etcd to exit\nI0525 16:15:33.950295    6417 etcdprocess.go:136] Exited etcd: signal: killed\nI0525 16:15:33.950355    6417 etcdserver.go:443] updated cluster state: cluster:<cluster_token:\"cRBz9Vos1TemEFFcXLXVJA\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > etcd_version:\"3.4.13\" \nI0525 16:15:33.950479    6417 etcdserver.go:448] Starting etcd version \"3.4.13\"\nI0525 16:15:33.950488    6417 etcdserver.go:556] starting etcd with state cluster:<cluster_token:\"cRBz9Vos1TemEFFcXLXVJA\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4002\" 
quarantined_client_urls:\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > etcd_version:\"3.4.13\" \nI0525 16:15:33.950516    6417 etcdserver.go:565] starting etcd with datadir /rootfs/mnt/master-vol-02d12313553cda503/data/cRBz9Vos1TemEFFcXLXVJA\nI0525 16:15:33.950595    6417 pki.go:59] adding peerClientIPs [172.20.44.17]\nI0525 16:15:33.950612    6417 pki.go:67] generating peer keypair for etcd: {CommonName:etcd-events-a Organization:[] AltNames:{DNSNames:[etcd-events-a etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io] IPs:[172.20.44.17 127.0.0.1]} Usages:[2 1]}\nI0525 16:15:33.950842    6417 certs.go:122] existing certificate not valid after 2023-05-25T16:15:32Z; will regenerate\nI0525 16:15:33.950855    6417 certs.go:183] generating certificate for \"etcd-events-a\"\nI0525 16:15:33.952815    6417 pki.go:110] building client-serving certificate: {CommonName:etcd-events-a Organization:[] AltNames:{DNSNames:[etcd-events-a etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io] IPs:[127.0.0.1]} Usages:[1 2]}\nI0525 16:15:33.952993    6417 certs.go:122] existing certificate not valid after 2023-05-25T16:15:32Z; will regenerate\nI0525 16:15:33.953005    6417 certs.go:183] generating certificate for \"etcd-events-a\"\nI0525 16:15:34.070809    6417 certs.go:183] generating certificate for \"etcd-events-a\"\nI0525 16:15:34.072673    6417 etcdprocess.go:203] executing command /opt/etcd-v3.4.13-linux-amd64/etcd [/opt/etcd-v3.4.13-linux-amd64/etcd]\nI0525 16:15:34.073120    6417 restore.go:116] ReconfigureResponse: \nI0525 16:15:34.074277    6417 controller.go:189] starting controller iteration\nI0525 16:15:34.074299    6417 controller.go:266] Broadcasting leadership assertion with token \"LUZ_MEyKc2YFDoaKt2IVVQ\"\nI0525 16:15:34.074520    6417 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" 
endpoints:\"172.20.44.17:3997\" > leadership_token:\"LUZ_MEyKc2YFDoaKt2IVVQ\" healthy:<id:\"etcd-events-a\" endpoints:\"172.20.44.17:3997\" > > \nI0525 16:15:34.074646    6417 controller.go:295] I am leader with token \"LUZ_MEyKc2YFDoaKt2IVVQ\"\nI0525 16:15:34.075367    6417 controller.go:705] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4002]\n2021-05-25 16:15:34.079671 I | pkg/flags: recognized and used environment variable ETCD_ADVERTISE_CLIENT_URLS=https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4002\n2021-05-25 16:15:34.079705 I | pkg/flags: recognized and used environment variable ETCD_CERT_FILE=/rootfs/mnt/master-vol-02d12313553cda503/pki/cRBz9Vos1TemEFFcXLXVJA/clients/server.crt\n2021-05-25 16:15:34.079792 I | pkg/flags: recognized and used environment variable ETCD_CLIENT_CERT_AUTH=true\n2021-05-25 16:15:34.079808 I | pkg/flags: recognized and used environment variable ETCD_DATA_DIR=/rootfs/mnt/master-vol-02d12313553cda503/data/cRBz9Vos1TemEFFcXLXVJA\n2021-05-25 16:15:34.079880 I | pkg/flags: recognized and used environment variable ETCD_ENABLE_V2=false\n2021-05-25 16:15:34.079956 I | pkg/flags: recognized and used environment variable ETCD_INITIAL_ADVERTISE_PEER_URLS=https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2381\n2021-05-25 16:15:34.079966 I | pkg/flags: recognized and used environment variable ETCD_INITIAL_CLUSTER=etcd-events-a=https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2381\n2021-05-25 16:15:34.079971 I | pkg/flags: recognized and used environment variable ETCD_INITIAL_CLUSTER_STATE=existing\n2021-05-25 16:15:34.079978 I | pkg/flags: recognized and used environment variable ETCD_INITIAL_CLUSTER_TOKEN=cRBz9Vos1TemEFFcXLXVJA\n2021-05-25 16:15:34.079984 I | pkg/flags: recognized and used environment variable 
ETCD_KEY_FILE=/rootfs/mnt/master-vol-02d12313553cda503/pki/cRBz9Vos1TemEFFcXLXVJA/clients/server.key\n2021-05-25 16:15:34.079991 I | pkg/flags: recognized and used environment variable ETCD_LISTEN_CLIENT_URLS=https://0.0.0.0:4002\n2021-05-25 16:15:34.080064 I | pkg/flags: recognized and used environment variable ETCD_LISTEN_PEER_URLS=https://0.0.0.0:2381\n2021-05-25 16:15:34.080071 I | pkg/flags: recognized and used environment variable ETCD_LOG_OUTPUTS=stdout\n2021-05-25 16:15:34.080080 I | pkg/flags: recognized and used environment variable ETCD_LOGGER=zap\n2021-05-25 16:15:34.080088 I | pkg/flags: recognized and used environment variable ETCD_NAME=etcd-events-a\n2021-05-25 16:15:34.080101 I | pkg/flags: recognized and used environment variable ETCD_PEER_CERT_FILE=/rootfs/mnt/master-vol-02d12313553cda503/pki/cRBz9Vos1TemEFFcXLXVJA/peers/me.crt\n2021-05-25 16:15:34.080106 I | pkg/flags: recognized and used environment variable ETCD_PEER_CLIENT_CERT_AUTH=true\n2021-05-25 16:15:34.080206 I | pkg/flags: recognized and used environment variable ETCD_PEER_KEY_FILE=/rootfs/mnt/master-vol-02d12313553cda503/pki/cRBz9Vos1TemEFFcXLXVJA/peers/me.key\n2021-05-25 16:15:34.080218 I | pkg/flags: recognized and used environment variable ETCD_PEER_TRUSTED_CA_FILE=/rootfs/mnt/master-vol-02d12313553cda503/pki/cRBz9Vos1TemEFFcXLXVJA/peers/ca.crt\n2021-05-25 16:15:34.080277 I | pkg/flags: recognized and used environment variable ETCD_TRUSTED_CA_FILE=/rootfs/mnt/master-vol-02d12313553cda503/pki/cRBz9Vos1TemEFFcXLXVJA/clients/ca.crt\n2021-05-25 16:15:34.080338 W | pkg/flags: unrecognized environment variable ETCD_LISTEN_METRICS_URLS=\n{\"level\":\"info\",\"ts\":\"2021-05-25T16:15:34.080Z\",\"caller\":\"etcdmain/etcd.go:134\",\"msg\":\"server has been already 
initialized\",\"data-dir\":\"/rootfs/mnt/master-vol-02d12313553cda503/data/cRBz9Vos1TemEFFcXLXVJA\",\"dir-type\":\"member\"}\n{\"level\":\"info\",\"ts\":\"2021-05-25T16:15:34.080Z\",\"caller\":\"embed/etcd.go:117\",\"msg\":\"configuring peer listeners\",\"listen-peer-urls\":[\"https://0.0.0.0:2381\"]}\n{\"level\":\"info\",\"ts\":\"2021-05-25T16:15:34.080Z\",\"caller\":\"embed/etcd.go:468\",\"msg\":\"starting with peer TLS\",\"tls-info\":\"cert = /rootfs/mnt/master-vol-02d12313553cda503/pki/cRBz9Vos1TemEFFcXLXVJA/peers/me.crt, key = /rootfs/mnt/master-vol-02d12313553cda503/pki/cRBz9Vos1TemEFFcXLXVJA/peers/me.key, trusted-ca = /rootfs/mnt/master-vol-02d12313553cda503/pki/cRBz9Vos1TemEFFcXLXVJA/peers/ca.crt, client-cert-auth = true, crl-file = \",\"cipher-suites\":[]}\n{\"level\":\"info\",\"ts\":\"2021-05-25T16:15:34.081Z\",\"caller\":\"embed/etcd.go:127\",\"msg\":\"configuring client listeners\",\"listen-client-urls\":[\"https://0.0.0.0:4002\"]}\n{\"level\":\"info\",\"ts\":\"2021-05-25T16:15:34.081Z\",\"caller\":\"embed/etcd.go:302\",\"msg\":\"starting an etcd 
server\",\"etcd-version\":\"3.4.13\",\"git-sha\":\"ae9734ed2\",\"go-version\":\"go1.12.17\",\"go-os\":\"linux\",\"go-arch\":\"amd64\",\"max-cpu-set\":2,\"max-cpu-available\":2,\"member-initialized\":true,\"name\":\"etcd-events-a\",\"data-dir\":\"/rootfs/mnt/master-vol-02d12313553cda503/data/cRBz9Vos1TemEFFcXLXVJA\",\"wal-dir\":\"\",\"wal-dir-dedicated\":\"\",\"member-dir\":\"/rootfs/mnt/master-vol-02d12313553cda503/data/cRBz9Vos1TemEFFcXLXVJA/member\",\"force-new-cluster\":false,\"heartbeat-interval\":\"100ms\",\"election-timeout\":\"1s\",\"initial-election-tick-advance\":true,\"snapshot-count\":100000,\"snapshot-catchup-entries\":5000,\"initial-advertise-peer-urls\":[\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2381\"],\"listen-peer-urls\":[\"https://0.0.0.0:2381\"],\"advertise-client-urls\":[\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4002\"],\"listen-client-urls\":[\"https://0.0.0.0:4002\"],\"listen-metrics-urls\":[],\"cors\":[\"*\"],\"host-whitelist\":[\"*\"],\"initial-cluster\":\"\",\"initial-cluster-state\":\"existing\",\"initial-cluster-token\":\"\",\"quota-size-bytes\":2147483648,\"pre-vote\":false,\"initial-corrupt-check\":false,\"corrupt-check-time-interval\":\"0s\",\"auto-compaction-mode\":\"periodic\",\"auto-compaction-retention\":\"0s\",\"auto-compaction-interval\":\"0s\",\"discovery-url\":\"\",\"discovery-proxy\":\"\"}\n{\"level\":\"info\",\"ts\":\"2021-05-25T16:15:34.081Z\",\"caller\":\"etcdserver/backend.go:80\",\"msg\":\"opened backend db\",\"path\":\"/rootfs/mnt/master-vol-02d12313553cda503/data/cRBz9Vos1TemEFFcXLXVJA/member/snap/db\",\"took\":\"107.832µs\"}\n{\"level\":\"info\",\"ts\":\"2021-05-25T16:15:34.082Z\",\"caller\":\"etcdserver/raft.go:536\",\"msg\":\"restarting local 
member\",\"cluster-id\":\"2d9cffed9c1df564\",\"local-member-id\":\"4d947a80aae4e559\",\"commit-index\":4}\n{\"level\":\"info\",\"ts\":\"2021-05-25T16:15:34.082Z\",\"caller\":\"raft/raft.go:1530\",\"msg\":\"4d947a80aae4e559 switched to configuration voters=()\"}\n{\"level\":\"info\",\"ts\":\"2021-05-25T16:15:34.083Z\",\"caller\":\"raft/raft.go:700\",\"msg\":\"4d947a80aae4e559 became follower at term 2\"}\n{\"level\":\"info\",\"ts\":\"2021-05-25T16:15:34.083Z\",\"caller\":\"raft/raft.go:383\",\"msg\":\"newRaft 4d947a80aae4e559 [peers: [], term: 2, commit: 4, applied: 0, lastindex: 4, lastterm: 2]\"}\n{\"level\":\"warn\",\"ts\":\"2021-05-25T16:15:34.084Z\",\"caller\":\"auth/store.go:1366\",\"msg\":\"simple token is not cryptographically signed\"}\n{\"level\":\"info\",\"ts\":\"2021-05-25T16:15:34.085Z\",\"caller\":\"etcdserver/quota.go:98\",\"msg\":\"enabled backend quota with default value\",\"quota-name\":\"v3-applier\",\"quota-size-bytes\":2147483648,\"quota-size\":\"2.1 GB\"}\n{\"level\":\"info\",\"ts\":\"2021-05-25T16:15:34.087Z\",\"caller\":\"etcdserver/server.go:803\",\"msg\":\"starting etcd server\",\"local-member-id\":\"4d947a80aae4e559\",\"local-server-version\":\"3.4.13\",\"cluster-version\":\"to_be_decided\"}\n{\"level\":\"info\",\"ts\":\"2021-05-25T16:15:34.087Z\",\"caller\":\"raft/raft.go:1530\",\"msg\":\"4d947a80aae4e559 switched to configuration voters=(5590227730515158361)\"}\n{\"level\":\"info\",\"ts\":\"2021-05-25T16:15:34.087Z\",\"caller\":\"membership/cluster.go:392\",\"msg\":\"added member\",\"cluster-id\":\"2d9cffed9c1df564\",\"local-member-id\":\"4d947a80aae4e559\",\"added-peer-id\":\"4d947a80aae4e559\",\"added-peer-peer-urls\":[\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2381\"]}\n{\"level\":\"info\",\"ts\":\"2021-05-25T16:15:34.087Z\",\"caller\":\"membership/cluster.go:558\",\"msg\":\"set initial cluster 
version\",\"cluster-id\":\"2d9cffed9c1df564\",\"local-member-id\":\"4d947a80aae4e559\",\"cluster-version\":\"3.4\"}\n{\"level\":\"info\",\"ts\":\"2021-05-25T16:15:34.087Z\",\"caller\":\"api/capability.go:76\",\"msg\":\"enabled capabilities for version\",\"cluster-version\":\"3.4\"}\n{\"level\":\"info\",\"ts\":\"2021-05-25T16:15:34.087Z\",\"caller\":\"etcdserver/server.go:669\",\"msg\":\"started as single-node; fast-forwarding election ticks\",\"local-member-id\":\"4d947a80aae4e559\",\"forward-ticks\":9,\"forward-duration\":\"900ms\",\"election-ticks\":10,\"election-timeout\":\"1s\"}\n{\"level\":\"info\",\"ts\":\"2021-05-25T16:15:34.095Z\",\"caller\":\"embed/etcd.go:711\",\"msg\":\"starting with client TLS\",\"tls-info\":\"cert = /rootfs/mnt/master-vol-02d12313553cda503/pki/cRBz9Vos1TemEFFcXLXVJA/clients/server.crt, key = /rootfs/mnt/master-vol-02d12313553cda503/pki/cRBz9Vos1TemEFFcXLXVJA/clients/server.key, trusted-ca = /rootfs/mnt/master-vol-02d12313553cda503/pki/cRBz9Vos1TemEFFcXLXVJA/clients/ca.crt, client-cert-auth = true, crl-file = \",\"cipher-suites\":[]}\n{\"level\":\"info\",\"ts\":\"2021-05-25T16:15:34.095Z\",\"caller\":\"embed/etcd.go:579\",\"msg\":\"serving peer traffic\",\"address\":\"[::]:2381\"}\n{\"level\":\"info\",\"ts\":\"2021-05-25T16:15:34.095Z\",\"caller\":\"embed/etcd.go:244\",\"msg\":\"now serving peer/client/metrics\",\"local-member-id\":\"4d947a80aae4e559\",\"initial-advertise-peer-urls\":[\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2381\"],\"listen-peer-urls\":[\"https://0.0.0.0:2381\"],\"advertise-client-urls\":[\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4002\"],\"listen-client-urls\":[\"https://0.0.0.0:4002\"],\"listen-metrics-urls\":[]}\n{\"level\":\"info\",\"ts\":\"2021-05-25T16:15:34.283Z\",\"caller\":\"raft/raft.go:923\",\"msg\":\"4d947a80aae4e559 is starting a new election at term 
2\"}\n{\"level\":\"info\",\"ts\":\"2021-05-25T16:15:34.283Z\",\"caller\":\"raft/raft.go:713\",\"msg\":\"4d947a80aae4e559 became candidate at term 3\"}\n{\"level\":\"info\",\"ts\":\"2021-05-25T16:15:34.283Z\",\"caller\":\"raft/raft.go:824\",\"msg\":\"4d947a80aae4e559 received MsgVoteResp from 4d947a80aae4e559 at term 3\"}\n{\"level\":\"info\",\"ts\":\"2021-05-25T16:15:34.283Z\",\"caller\":\"raft/raft.go:765\",\"msg\":\"4d947a80aae4e559 became leader at term 3\"}\n{\"level\":\"info\",\"ts\":\"2021-05-25T16:15:34.283Z\",\"caller\":\"raft/node.go:325\",\"msg\":\"raft.node: 4d947a80aae4e559 elected leader 4d947a80aae4e559 at term 3\"}\n{\"level\":\"info\",\"ts\":\"2021-05-25T16:15:34.286Z\",\"caller\":\"etcdserver/server.go:2037\",\"msg\":\"published local member to cluster through raft\",\"local-member-id\":\"4d947a80aae4e559\",\"local-member-attributes\":\"{Name:etcd-events-a ClientURLs:[https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4002]}\",\"request-path\":\"/0/members/4d947a80aae4e559/attributes\",\"cluster-id\":\"2d9cffed9c1df564\",\"publish-timeout\":\"7s\"}\n{\"level\":\"info\",\"ts\":\"2021-05-25T16:15:34.287Z\",\"caller\":\"embed/serve.go:191\",\"msg\":\"serving client traffic securely\",\"address\":\"[::]:4002\"}\nI0525 16:15:35.089327    6417 controller.go:302] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4002\"],\"ID\":\"5590227730515158361\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"172.20.44.17:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4002\" 
quarantined_client_urls:\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:3995\" > etcd_state:<cluster:<cluster_token:\"cRBz9Vos1TemEFFcXLXVJA\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0525 16:15:35.089429    6417 controller.go:303] etcd cluster members: map[5590227730515158361:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4002\"],\"ID\":\"5590227730515158361\"}]\nI0525 16:15:35.089654    6417 controller.go:641] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io\" addresses:\"172.20.44.17\" > \nI0525 16:15:35.089881    6417 etcdserver.go:248] updating hosts: map[172.20.44.17:[etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io]]\nI0525 16:15:35.089899    6417 hosts.go:84] hosts update: primary=map[172.20.44.17:[etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:[172.20.44.17 172.20.44.17]], final=map[172.20.44.17:[etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io]]\nI0525 16:15:35.089953    6417 hosts.go:181] skipping update of unchanged /etc/hosts\nI0525 16:15:35.090037    6417 commands.go:38] not refreshing commands - TTL not hit\nI0525 16:15:35.090049    6417 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-62197fe92d-da63e.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created\"\nI0525 16:15:35.244260    6417 
controller.go:395] spec member_count:1 etcd_version:\"3.4.13\" \nI0525 16:15:35.244335    6417 controller.go:557] controller loop complete\nI0525 16:16:18.119329    6417 volumes.go:86] AWS API Request: ec2/DescribeVolumes\nI0525 16:16:18.216295    6417 volumes.go:86] AWS API Request: ec2/DescribeInstances\nI0525 16:16:18.265091    6417 hosts.go:84] hosts update: primary=map[172.20.44.17:[etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:[172.20.44.17 172.20.44.17]], final=map[172.20.44.17:[etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io]]\nI0525 16:16:18.265303    6417 hosts.go:181] skipping update of unchanged /etc/hosts\nI0525 16:17:20.878269    6417 controller.go:189] starting controller iteration\nI0525 16:17:20.878483    6417 controller.go:266] Broadcasting leadership assertion with token \"LUZ_MEyKc2YFDoaKt2IVVQ\"\nI0525 16:17:20.878787    6417 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.44.17:3997\" > leadership_token:\"LUZ_MEyKc2YFDoaKt2IVVQ\" healthy:<id:\"etcd-events-a\" endpoints:\"172.20.44.17:3997\" > > \nI0525 16:17:20.879014    6417 controller.go:295] I am leader with token \"LUZ_MEyKc2YFDoaKt2IVVQ\"\nI0525 16:17:20.879685    6417 controller.go:705] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4002]\nI0525 16:17:20.891159    6417 controller.go:302] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4002\"],\"ID\":\"5590227730515158361\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"172.20.44.17:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:3995\" > etcd_state:<cluster:<cluster_token:\"cRBz9Vos1TemEFFcXLXVJA\" nodes:<name:\"etcd-events-a\" 
peer_urls:\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0525 16:17:20.891315    6417 controller.go:303] etcd cluster members: map[5590227730515158361:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4002\"],\"ID\":\"5590227730515158361\"}]\nI0525 16:17:20.891387    6417 controller.go:641] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io\" addresses:\"172.20.44.17\" > \nI0525 16:17:20.891588    6417 etcdserver.go:248] updating hosts: map[172.20.44.17:[etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io]]\nI0525 16:17:20.891605    6417 hosts.go:84] hosts update: primary=map[172.20.44.17:[etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:[172.20.44.17 172.20.44.17]], final=map[172.20.44.17:[etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io]]\nI0525 16:17:20.891676    6417 hosts.go:181] skipping update of unchanged /etc/hosts\nI0525 16:17:20.891801    6417 commands.go:38] not refreshing commands - TTL not hit\nI0525 16:17:20.891897    6417 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-62197fe92d-da63e.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created\"\nI0525 16:17:21.491005    6417 controller.go:395] spec member_count:1 etcd_version:\"3.4.13\" \nI0525 16:17:21.491073    6417 controller.go:557] controller loop complete\nI0525 16:17:31.493251    6417 controller.go:189] starting controller 
iteration\nI0525 16:17:31.493279    6417 controller.go:266] Broadcasting leadership assertion with token \"LUZ_MEyKc2YFDoaKt2IVVQ\"\nI0525 16:17:31.493650    6417 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.44.17:3997\" > leadership_token:\"LUZ_MEyKc2YFDoaKt2IVVQ\" healthy:<id:\"etcd-events-a\" endpoints:\"172.20.44.17:3997\" > > \nI0525 16:17:31.493777    6417 controller.go:295] I am leader with token \"LUZ_MEyKc2YFDoaKt2IVVQ\"\nI0525 16:17:31.494658    6417 controller.go:705] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4002]\nI0525 16:17:31.509884    6417 controller.go:302] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4002\"],\"ID\":\"5590227730515158361\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"172.20.44.17:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:3995\" > etcd_state:<cluster:<cluster_token:\"cRBz9Vos1TemEFFcXLXVJA\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0525 16:17:31.510376    6417 controller.go:303] etcd cluster members: 
map[5590227730515158361:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4002\"],\"ID\":\"5590227730515158361\"}]\nI0525 16:17:31.510556    6417 controller.go:641] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io\" addresses:\"172.20.44.17\" > \nI0525 16:17:31.510897    6417 etcdserver.go:248] updating hosts: map[172.20.44.17:[etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io]]\nI0525 16:17:31.511067    6417 hosts.go:84] hosts update: primary=map[172.20.44.17:[etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:[172.20.44.17 172.20.44.17]], final=map[172.20.44.17:[etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io]]\nI0525 16:17:31.511277    6417 hosts.go:181] skipping update of unchanged /etc/hosts\nI0525 16:17:31.511530    6417 commands.go:38] not refreshing commands - TTL not hit\nI0525 16:17:31.511633    6417 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-62197fe92d-da63e.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created\"\nI0525 16:17:32.104357    6417 controller.go:395] spec member_count:1 etcd_version:\"3.4.13\" \nI0525 16:17:32.104441    6417 controller.go:557] controller loop complete\nI0525 16:17:42.106493    6417 controller.go:189] starting controller iteration\nI0525 16:17:42.106533    6417 controller.go:266] Broadcasting leadership assertion with token \"LUZ_MEyKc2YFDoaKt2IVVQ\"\nI0525 16:17:42.107589    6417 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.44.17:3997\" > leadership_token:\"LUZ_MEyKc2YFDoaKt2IVVQ\" healthy:<id:\"etcd-events-a\" endpoints:\"172.20.44.17:3997\" > > \nI0525 16:17:42.107721    6417 
controller.go:295] I am leader with token \"LUZ_MEyKc2YFDoaKt2IVVQ\"\nI0525 16:17:42.109359    6417 controller.go:705] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4002]\nI0525 16:17:42.122218    6417 controller.go:302] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4002\"],\"ID\":\"5590227730515158361\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"172.20.44.17:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:3995\" > etcd_state:<cluster:<cluster_token:\"cRBz9Vos1TemEFFcXLXVJA\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0525 16:17:42.122345    6417 controller.go:303] etcd cluster members: map[5590227730515158361:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4002\"],\"ID\":\"5590227730515158361\"}]\nI0525 16:17:42.122476    6417 controller.go:641] sending member map to all peers: members:<name:\"etcd-events-a\" 
dns:\"etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io\" addresses:\"172.20.44.17\" > \nI0525 16:17:42.122637    6417 etcdserver.go:248] updating hosts: map[172.20.44.17:[etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io]]\nI0525 16:17:42.122648    6417 hosts.go:84] hosts update: primary=map[172.20.44.17:[etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:[172.20.44.17 172.20.44.17]], final=map[172.20.44.17:[etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io]]\nI0525 16:17:42.122687    6417 hosts.go:181] skipping update of unchanged /etc/hosts\nI0525 16:17:42.122749    6417 commands.go:38] not refreshing commands - TTL not hit\nI0525 16:17:42.122759    6417 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-62197fe92d-da63e.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created\"\nI0525 16:17:42.726668    6417 controller.go:395] spec member_count:1 etcd_version:\"3.4.13\" \nI0525 16:17:42.726751    6417 controller.go:557] controller loop complete\nI0525 16:17:52.728753    6417 controller.go:189] starting controller iteration\nI0525 16:17:52.728784    6417 controller.go:266] Broadcasting leadership assertion with token \"LUZ_MEyKc2YFDoaKt2IVVQ\"\nI0525 16:17:52.729285    6417 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.44.17:3997\" > leadership_token:\"LUZ_MEyKc2YFDoaKt2IVVQ\" healthy:<id:\"etcd-events-a\" endpoints:\"172.20.44.17:3997\" > > \nI0525 16:17:52.729419    6417 controller.go:295] I am leader with token \"LUZ_MEyKc2YFDoaKt2IVVQ\"\nI0525 16:17:52.730425    6417 controller.go:705] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4002]\nI0525 16:17:52.747940    6417 controller.go:302] etcd cluster state: etcdClusterState\n  members:\n    
{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4002\"],\"ID\":\"5590227730515158361\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"172.20.44.17:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:3995\" > etcd_state:<cluster:<cluster_token:\"cRBz9Vos1TemEFFcXLXVJA\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0525 16:17:52.748024    6417 controller.go:303] etcd cluster members: map[5590227730515158361:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4002\"],\"ID\":\"5590227730515158361\"}]\nI0525 16:17:52.748061    6417 controller.go:641] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io\" addresses:\"172.20.44.17\" > \nI0525 16:17:52.748310    6417 etcdserver.go:248] updating hosts: map[172.20.44.17:[etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io]]\nI0525 16:17:52.748331    6417 hosts.go:84] hosts update: 
primary=map[172.20.44.17:[etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:[172.20.44.17 172.20.44.17]], final=map[172.20.44.17:[etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io]]\nI0525 16:17:52.748397    6417 hosts.go:181] skipping update of unchanged /etc/hosts\nI0525 16:17:52.748517    6417 commands.go:38] not refreshing commands - TTL not hit\nI0525 16:17:52.748549    6417 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-62197fe92d-da63e.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created\"\nI0525 16:17:53.340024    6417 controller.go:395] spec member_count:1 etcd_version:\"3.4.13\" \nI0525 16:17:53.340230    6417 controller.go:557] controller loop complete\nI0525 16:18:03.342104    6417 controller.go:189] starting controller iteration\nI0525 16:18:03.342296    6417 controller.go:266] Broadcasting leadership assertion with token \"LUZ_MEyKc2YFDoaKt2IVVQ\"\nI0525 16:18:03.342631    6417 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.44.17:3997\" > leadership_token:\"LUZ_MEyKc2YFDoaKt2IVVQ\" healthy:<id:\"etcd-events-a\" endpoints:\"172.20.44.17:3997\" > > \nI0525 16:18:03.342831    6417 controller.go:295] I am leader with token \"LUZ_MEyKc2YFDoaKt2IVVQ\"\nI0525 16:18:03.343397    6417 controller.go:705] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4002]\nI0525 16:18:03.354982    6417 controller.go:302] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4002\"],\"ID\":\"5590227730515158361\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"172.20.44.17:3997\" }, 
info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:3995\" > etcd_state:<cluster:<cluster_token:\"cRBz9Vos1TemEFFcXLXVJA\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0525 16:18:03.355064    6417 controller.go:303] etcd cluster members: map[5590227730515158361:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4002\"],\"ID\":\"5590227730515158361\"}]\nI0525 16:18:03.355084    6417 controller.go:641] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io\" addresses:\"172.20.44.17\" > \nI0525 16:18:03.355377    6417 etcdserver.go:248] updating hosts: map[172.20.44.17:[etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io]]\nI0525 16:18:03.355395    6417 hosts.go:84] hosts update: primary=map[172.20.44.17:[etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:[172.20.44.17 172.20.44.17]], final=map[172.20.44.17:[etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io]]\nI0525 16:18:03.355474    6417 hosts.go:181] skipping update of unchanged /etc/hosts\nI0525 16:18:03.355610    6417 
commands.go:38] not refreshing commands - TTL not hit\nI0525 16:18:03.355630    6417 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-62197fe92d-da63e.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created\"\nI0525 16:18:03.956009    6417 controller.go:395] spec member_count:1 etcd_version:\"3.4.13\" \nI0525 16:18:03.956081    6417 controller.go:557] controller loop complete\nI0525 16:18:13.958230    6417 controller.go:189] starting controller iteration\nI0525 16:18:13.958344    6417 controller.go:266] Broadcasting leadership assertion with token \"LUZ_MEyKc2YFDoaKt2IVVQ\"\nI0525 16:18:13.958644    6417 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.44.17:3997\" > leadership_token:\"LUZ_MEyKc2YFDoaKt2IVVQ\" healthy:<id:\"etcd-events-a\" endpoints:\"172.20.44.17:3997\" > > \nI0525 16:18:13.958866    6417 controller.go:295] I am leader with token \"LUZ_MEyKc2YFDoaKt2IVVQ\"\nI0525 16:18:13.959850    6417 controller.go:705] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4002]\nI0525 16:18:13.980031    6417 controller.go:302] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4002\"],\"ID\":\"5590227730515158361\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"172.20.44.17:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:3995\" > 
etcd_state:<cluster:<cluster_token:\"cRBz9Vos1TemEFFcXLXVJA\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0525 16:18:13.980231    6417 controller.go:303] etcd cluster members: map[5590227730515158361:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4002\"],\"ID\":\"5590227730515158361\"}]\nI0525 16:18:13.980263    6417 controller.go:641] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io\" addresses:\"172.20.44.17\" > \nI0525 16:18:13.980429    6417 etcdserver.go:248] updating hosts: map[172.20.44.17:[etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io]]\nI0525 16:18:13.980448    6417 hosts.go:84] hosts update: primary=map[172.20.44.17:[etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:[172.20.44.17 172.20.44.17]], final=map[172.20.44.17:[etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io]]\nI0525 16:18:13.980511    6417 hosts.go:181] skipping update of unchanged /etc/hosts\nI0525 16:18:13.980590    6417 commands.go:38] not refreshing commands - TTL not hit\nI0525 16:18:13.980603    6417 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-62197fe92d-da63e.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created\"\nI0525 16:18:14.579072    6417 controller.go:395] spec member_count:1 etcd_version:\"3.4.13\" \nI0525 16:18:14.579164    6417 controller.go:557] 
controller loop complete\nI0525 16:18:18.371598    6417 volumes.go:86] AWS API Request: ec2/DescribeVolumes\nI0525 16:18:18.470132    6417 volumes.go:86] AWS API Request: ec2/DescribeInstances\nI0525 16:18:18.517886    6417 hosts.go:84] hosts update: primary=map[172.20.44.17:[etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:[172.20.44.17 172.20.44.17]], final=map[172.20.44.17:[etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io]]\nI0525 16:18:18.517960    6417 hosts.go:181] skipping update of unchanged /etc/hosts\nI0525 16:18:24.581143    6417 controller.go:189] starting controller iteration\nI0525 16:18:24.581171    6417 controller.go:266] Broadcasting leadership assertion with token \"LUZ_MEyKc2YFDoaKt2IVVQ\"\nI0525 16:18:24.581481    6417 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.44.17:3997\" > leadership_token:\"LUZ_MEyKc2YFDoaKt2IVVQ\" healthy:<id:\"etcd-events-a\" endpoints:\"172.20.44.17:3997\" > > \nI0525 16:18:24.581629    6417 controller.go:295] I am leader with token \"LUZ_MEyKc2YFDoaKt2IVVQ\"\nI0525 16:18:24.582251    6417 controller.go:705] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4002]\nI0525 16:18:24.596184    6417 controller.go:302] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4002\"],\"ID\":\"5590227730515158361\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"172.20.44.17:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2381\" 
client_urls:\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:3995\" > etcd_state:<cluster:<cluster_token:\"cRBz9Vos1TemEFFcXLXVJA\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0525 16:18:24.596261    6417 controller.go:303] etcd cluster members: map[5590227730515158361:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4002\"],\"ID\":\"5590227730515158361\"}]\nI0525 16:18:24.596295    6417 controller.go:641] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io\" addresses:\"172.20.44.17\" > \nI0525 16:18:24.596536    6417 etcdserver.go:248] updating hosts: map[172.20.44.17:[etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io]]\nI0525 16:18:24.596557    6417 hosts.go:84] hosts update: primary=map[172.20.44.17:[etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:[172.20.44.17 172.20.44.17]], final=map[172.20.44.17:[etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io]]\nI0525 16:18:24.596623    6417 hosts.go:181] skipping update of unchanged /etc/hosts\nI0525 16:18:24.596744    6417 commands.go:38] not refreshing commands - TTL not hit\nI0525 16:18:24.596759    6417 s3fs.go:290] Reading file 
\"s3://k8s-kops-prow/e2e-62197fe92d-da63e.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created\"\nI0525 16:18:25.191099    6417 controller.go:395] spec member_count:1 etcd_version:\"3.4.13\" \nI0525 16:18:25.191342    6417 controller.go:557] controller loop complete\nI0525 16:18:35.192520    6417 controller.go:189] starting controller iteration\nI0525 16:18:35.192551    6417 controller.go:266] Broadcasting leadership assertion with token \"LUZ_MEyKc2YFDoaKt2IVVQ\"\nI0525 16:18:35.193064    6417 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.44.17:3997\" > leadership_token:\"LUZ_MEyKc2YFDoaKt2IVVQ\" healthy:<id:\"etcd-events-a\" endpoints:\"172.20.44.17:3997\" > > \nI0525 16:18:35.193308    6417 controller.go:295] I am leader with token \"LUZ_MEyKc2YFDoaKt2IVVQ\"\nI0525 16:18:35.193786    6417 controller.go:705] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4002]\nI0525 16:18:35.205527    6417 controller.go:302] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4002\"],\"ID\":\"5590227730515158361\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"172.20.44.17:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:3995\" > etcd_state:<cluster:<cluster_token:\"cRBz9Vos1TemEFFcXLXVJA\" nodes:<name:\"etcd-events-a\" 
peer_urls:\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0525 16:18:35.205695    6417 controller.go:303] etcd cluster members: map[5590227730515158361:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4002\"],\"ID\":\"5590227730515158361\"}]\nI0525 16:18:35.205728    6417 controller.go:641] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io\" addresses:\"172.20.44.17\" > \nI0525 16:18:35.205925    6417 etcdserver.go:248] updating hosts: map[172.20.44.17:[etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io]]\nI0525 16:18:35.205939    6417 hosts.go:84] hosts update: primary=map[172.20.44.17:[etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:[172.20.44.17 172.20.44.17]], final=map[172.20.44.17:[etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io]]\nI0525 16:18:35.206009    6417 hosts.go:181] skipping update of unchanged /etc/hosts\nI0525 16:18:35.206127    6417 commands.go:38] not refreshing commands - TTL not hit\nI0525 16:18:35.206141    6417 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-62197fe92d-da63e.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created\"\nI0525 16:18:35.798231    6417 controller.go:395] spec member_count:1 etcd_version:\"3.4.13\" \nI0525 16:18:35.798334    6417 controller.go:557] controller loop complete\nI0525 16:18:45.799962    6417 controller.go:189] starting controller 
iteration\nI0525 16:18:45.799990    6417 controller.go:266] Broadcasting leadership assertion with token \"LUZ_MEyKc2YFDoaKt2IVVQ\"\nI0525 16:18:45.800300    6417 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.44.17:3997\" > leadership_token:\"LUZ_MEyKc2YFDoaKt2IVVQ\" healthy:<id:\"etcd-events-a\" endpoints:\"172.20.44.17:3997\" > > \nI0525 16:18:45.800523    6417 controller.go:295] I am leader with token \"LUZ_MEyKc2YFDoaKt2IVVQ\"\nI0525 16:18:45.801074    6417 controller.go:705] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4002]\nI0525 16:18:45.818064    6417 controller.go:302] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4002\"],\"ID\":\"5590227730515158361\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"172.20.44.17:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:3995\" > etcd_state:<cluster:<cluster_token:\"cRBz9Vos1TemEFFcXLXVJA\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0525 16:18:45.818137    6417 controller.go:303] etcd cluster members: 
map[5590227730515158361:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4002\"],\"ID\":\"5590227730515158361\"}]\nI0525 16:18:45.818155    6417 controller.go:641] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io\" addresses:\"172.20.44.17\" > \nI0525 16:18:45.818368    6417 etcdserver.go:248] updating hosts: map[172.20.44.17:[etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io]]\nI0525 16:18:45.818381    6417 hosts.go:84] hosts update: primary=map[172.20.44.17:[etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:[172.20.44.17 172.20.44.17]], final=map[172.20.44.17:[etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io]]\nI0525 16:18:45.818433    6417 hosts.go:181] skipping update of unchanged /etc/hosts\nI0525 16:18:45.818525    6417 commands.go:38] not refreshing commands - TTL not hit\nI0525 16:18:45.818544    6417 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-62197fe92d-da63e.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created\"\nI0525 16:18:46.417915    6417 controller.go:395] spec member_count:1 etcd_version:\"3.4.13\" \nI0525 16:18:46.417989    6417 controller.go:557] controller loop complete\nI0525 16:18:56.419178    6417 controller.go:189] starting controller iteration\nI0525 16:18:56.419208    6417 controller.go:266] Broadcasting leadership assertion with token \"LUZ_MEyKc2YFDoaKt2IVVQ\"\nI0525 16:18:56.419685    6417 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.44.17:3997\" > leadership_token:\"LUZ_MEyKc2YFDoaKt2IVVQ\" healthy:<id:\"etcd-events-a\" endpoints:\"172.20.44.17:3997\" > > \nI0525 16:18:56.419998    6417 
controller.go:295] I am leader with token \"LUZ_MEyKc2YFDoaKt2IVVQ\"\nI0525 16:18:56.420539    6417 controller.go:705] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4002]\nI0525 16:18:56.432819    6417 controller.go:302] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4002\"],\"ID\":\"5590227730515158361\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"172.20.44.17:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:3995\" > etcd_state:<cluster:<cluster_token:\"cRBz9Vos1TemEFFcXLXVJA\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0525 16:18:56.432899    6417 controller.go:303] etcd cluster members: map[5590227730515158361:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4002\"],\"ID\":\"5590227730515158361\"}]\nI0525 16:18:56.433052    6417 controller.go:641] sending member map to all peers: members:<name:\"etcd-events-a\" 
dns:\"etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io\" addresses:\"172.20.44.17\" > \nI0525 16:18:56.433260    6417 etcdserver.go:248] updating hosts: map[172.20.44.17:[etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io]]\nI0525 16:18:56.433281    6417 hosts.go:84] hosts update: primary=map[172.20.44.17:[etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:[172.20.44.17 172.20.44.17]], final=map[172.20.44.17:[etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io]]\nI0525 16:18:56.433332    6417 hosts.go:181] skipping update of unchanged /etc/hosts\nI0525 16:18:56.433416    6417 commands.go:38] not refreshing commands - TTL not hit\nI0525 16:18:56.433429    6417 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-62197fe92d-da63e.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created\"\nI0525 16:18:57.037281    6417 controller.go:395] spec member_count:1 etcd_version:\"3.4.13\" \nI0525 16:18:57.037519    6417 controller.go:557] controller loop complete\nI0525 16:19:07.039196    6417 controller.go:189] starting controller iteration\nI0525 16:19:07.039372    6417 controller.go:266] Broadcasting leadership assertion with token \"LUZ_MEyKc2YFDoaKt2IVVQ\"\nI0525 16:19:07.039701    6417 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.44.17:3997\" > leadership_token:\"LUZ_MEyKc2YFDoaKt2IVVQ\" healthy:<id:\"etcd-events-a\" endpoints:\"172.20.44.17:3997\" > > \nI0525 16:19:07.039877    6417 controller.go:295] I am leader with token \"LUZ_MEyKc2YFDoaKt2IVVQ\"\nI0525 16:19:07.040556    6417 controller.go:705] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4002]\nI0525 16:19:07.054639    6417 controller.go:302] etcd cluster state: etcdClusterState\n  members:\n    
{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4002\"],\"ID\":\"5590227730515158361\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"172.20.44.17:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:3995\" > etcd_state:<cluster:<cluster_token:\"cRBz9Vos1TemEFFcXLXVJA\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0525 16:19:07.054711    6417 controller.go:303] etcd cluster members: map[5590227730515158361:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4002\"],\"ID\":\"5590227730515158361\"}]\nI0525 16:19:07.054759    6417 controller.go:641] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io\" addresses:\"172.20.44.17\" > \nI0525 16:19:07.055016    6417 etcdserver.go:248] updating hosts: map[172.20.44.17:[etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io]]\nI0525 16:19:07.055032    6417 hosts.go:84] hosts update: 
primary=map[172.20.44.17:[etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:[172.20.44.17 172.20.44.17]], final=map[172.20.44.17:[etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io]]\nI0525 16:19:07.055107    6417 hosts.go:181] skipping update of unchanged /etc/hosts\nI0525 16:19:07.055229    6417 commands.go:38] not refreshing commands - TTL not hit\nI0525 16:19:07.055244    6417 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-62197fe92d-da63e.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created\"\nI0525 16:19:07.657626    6417 controller.go:395] spec member_count:1 etcd_version:\"3.4.13\" \nI0525 16:19:07.657699    6417 controller.go:557] controller loop complete\nI0525 16:19:17.659110    6417 controller.go:189] starting controller iteration\nI0525 16:19:17.659143    6417 controller.go:266] Broadcasting leadership assertion with token \"LUZ_MEyKc2YFDoaKt2IVVQ\"\nI0525 16:19:17.659362    6417 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.44.17:3997\" > leadership_token:\"LUZ_MEyKc2YFDoaKt2IVVQ\" healthy:<id:\"etcd-events-a\" endpoints:\"172.20.44.17:3997\" > > \nI0525 16:19:17.659481    6417 controller.go:295] I am leader with token \"LUZ_MEyKc2YFDoaKt2IVVQ\"\nI0525 16:19:17.659842    6417 controller.go:705] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4002]\nI0525 16:19:17.673065    6417 controller.go:302] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4002\"],\"ID\":\"5590227730515158361\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"172.20.44.17:3997\" }, 
info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:3995\" > etcd_state:<cluster:<cluster_token:\"cRBz9Vos1TemEFFcXLXVJA\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0525 16:19:17.673155    6417 controller.go:303] etcd cluster members: map[5590227730515158361:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4002\"],\"ID\":\"5590227730515158361\"}]\nI0525 16:19:17.673171    6417 controller.go:641] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io\" addresses:\"172.20.44.17\" > \nI0525 16:19:17.673604    6417 etcdserver.go:248] updating hosts: map[172.20.44.17:[etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io]]\nI0525 16:19:17.673644    6417 hosts.go:84] hosts update: primary=map[172.20.44.17:[etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:[172.20.44.17 172.20.44.17]], final=map[172.20.44.17:[etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io]]\nI0525 16:19:17.673725    6417 hosts.go:181] skipping update of unchanged /etc/hosts\nI0525 16:19:17.673884    6417 
commands.go:38] not refreshing commands - TTL not hit\nI0525 16:19:17.673928    6417 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-62197fe92d-da63e.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created\"\nI0525 16:19:18.267185    6417 controller.go:395] spec member_count:1 etcd_version:\"3.4.13\" \nI0525 16:19:18.267387    6417 controller.go:557] controller loop complete\nI0525 16:19:18.518584    6417 volumes.go:86] AWS API Request: ec2/DescribeVolumes\nI0525 16:19:18.571188    6417 volumes.go:86] AWS API Request: ec2/DescribeInstances\nI0525 16:19:18.635320    6417 hosts.go:84] hosts update: primary=map[172.20.44.17:[etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:[172.20.44.17 172.20.44.17]], final=map[172.20.44.17:[etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io]]\nI0525 16:19:18.635394    6417 hosts.go:181] skipping update of unchanged /etc/hosts\nI0525 16:19:28.268800    6417 controller.go:189] starting controller iteration\nI0525 16:19:28.268925    6417 controller.go:266] Broadcasting leadership assertion with token \"LUZ_MEyKc2YFDoaKt2IVVQ\"\nI0525 16:19:28.269230    6417 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.44.17:3997\" > leadership_token:\"LUZ_MEyKc2YFDoaKt2IVVQ\" healthy:<id:\"etcd-events-a\" endpoints:\"172.20.44.17:3997\" > > \nI0525 16:19:28.269431    6417 controller.go:295] I am leader with token \"LUZ_MEyKc2YFDoaKt2IVVQ\"\nI0525 16:19:28.270451    6417 controller.go:705] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4002]\nI0525 16:19:28.282781    6417 controller.go:302] etcd cluster state: etcdClusterState\n  members:\n    
{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4002\"],\"ID\":\"5590227730515158361\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"172.20.44.17:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:3995\" > etcd_state:<cluster:<cluster_token:\"cRBz9Vos1TemEFFcXLXVJA\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0525 16:19:28.282859    6417 controller.go:303] etcd cluster members: map[5590227730515158361:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4002\"],\"ID\":\"5590227730515158361\"}]\nI0525 16:19:28.282877    6417 controller.go:641] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io\" addresses:\"172.20.44.17\" > \nI0525 16:19:28.283213    6417 etcdserver.go:248] updating hosts: map[172.20.44.17:[etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io]]\nI0525 16:19:28.283230    6417 hosts.go:84] hosts update: 
primary=map[172.20.44.17:[etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:[172.20.44.17 172.20.44.17]], final=map[172.20.44.17:[etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io]]\nI0525 16:19:28.283284    6417 hosts.go:181] skipping update of unchanged /etc/hosts\nI0525 16:19:28.283367    6417 commands.go:38] not refreshing commands - TTL not hit\nI0525 16:19:28.283379    6417 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-62197fe92d-da63e.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created\"\nI0525 16:19:28.880811    6417 controller.go:395] spec member_count:1 etcd_version:\"3.4.13\" \nI0525 16:19:28.880987    6417 controller.go:557] controller loop complete\nI0525 16:19:38.882434    6417 controller.go:189] starting controller iteration\nI0525 16:19:38.882512    6417 controller.go:266] Broadcasting leadership assertion with token \"LUZ_MEyKc2YFDoaKt2IVVQ\"\nI0525 16:19:38.882835    6417 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.44.17:3997\" > leadership_token:\"LUZ_MEyKc2YFDoaKt2IVVQ\" healthy:<id:\"etcd-events-a\" endpoints:\"172.20.44.17:3997\" > > \nI0525 16:19:38.883090    6417 controller.go:295] I am leader with token \"LUZ_MEyKc2YFDoaKt2IVVQ\"\nI0525 16:19:38.883731    6417 controller.go:705] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4002]\nI0525 16:19:38.895784    6417 controller.go:302] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4002\"],\"ID\":\"5590227730515158361\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"172.20.44.17:3997\" }, 
info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:3995\" > etcd_state:<cluster:<cluster_token:\"cRBz9Vos1TemEFFcXLXVJA\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0525 16:19:38.895878    6417 controller.go:303] etcd cluster members: map[5590227730515158361:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4002\"],\"ID\":\"5590227730515158361\"}]\nI0525 16:19:38.895893    6417 controller.go:641] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io\" addresses:\"172.20.44.17\" > \nI0525 16:19:38.896122    6417 etcdserver.go:248] updating hosts: map[172.20.44.17:[etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io]]\nI0525 16:19:38.896135    6417 hosts.go:84] hosts update: primary=map[172.20.44.17:[etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:[172.20.44.17 172.20.44.17]], final=map[172.20.44.17:[etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io]]\nI0525 16:19:38.896191    6417 hosts.go:181] skipping update of unchanged /etc/hosts\nI0525 16:19:38.896271    6417 
commands.go:38] not refreshing commands - TTL not hit\nI0525 16:19:38.896284    6417 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-62197fe92d-da63e.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created\"\nI0525 16:19:39.491989    6417 controller.go:395] spec member_count:1 etcd_version:\"3.4.13\" \nI0525 16:19:39.492236    6417 controller.go:557] controller loop complete\nI0525 16:19:49.493827    6417 controller.go:189] starting controller iteration\nI0525 16:19:49.493956    6417 controller.go:266] Broadcasting leadership assertion with token \"LUZ_MEyKc2YFDoaKt2IVVQ\"\nI0525 16:19:49.494256    6417 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.44.17:3997\" > leadership_token:\"LUZ_MEyKc2YFDoaKt2IVVQ\" healthy:<id:\"etcd-events-a\" endpoints:\"172.20.44.17:3997\" > > \nI0525 16:19:49.494484    6417 controller.go:295] I am leader with token \"LUZ_MEyKc2YFDoaKt2IVVQ\"\nI0525 16:19:49.495506    6417 controller.go:705] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4002]\nI0525 16:19:49.508628    6417 controller.go:302] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4002\"],\"ID\":\"5590227730515158361\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"172.20.44.17:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:3995\" > 
etcd_state:<cluster:<cluster_token:\"cRBz9Vos1TemEFFcXLXVJA\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0525 16:19:49.508709    6417 controller.go:303] etcd cluster members: map[5590227730515158361:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4002\"],\"ID\":\"5590227730515158361\"}]\nI0525 16:19:49.508728    6417 controller.go:641] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io\" addresses:\"172.20.44.17\" > \nI0525 16:19:49.508927    6417 etcdserver.go:248] updating hosts: map[172.20.44.17:[etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io]]\nI0525 16:19:49.508940    6417 hosts.go:84] hosts update: primary=map[172.20.44.17:[etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:[172.20.44.17 172.20.44.17]], final=map[172.20.44.17:[etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io]]\nI0525 16:19:49.508995    6417 hosts.go:181] skipping update of unchanged /etc/hosts\nI0525 16:19:49.509076    6417 commands.go:38] not refreshing commands - TTL not hit\nI0525 16:19:49.509088    6417 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-62197fe92d-da63e.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created\"\nI0525 16:19:50.101113    6417 controller.go:395] spec member_count:1 etcd_version:\"3.4.13\" \nI0525 16:19:50.101191    6417 controller.go:557] 
controller loop complete\nI0525 16:20:00.103086    6417 controller.go:189] starting controller iteration\nI0525 16:20:00.103320    6417 controller.go:266] Broadcasting leadership assertion with token \"LUZ_MEyKc2YFDoaKt2IVVQ\"\nI0525 16:20:00.103625    6417 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.44.17:3997\" > leadership_token:\"LUZ_MEyKc2YFDoaKt2IVVQ\" healthy:<id:\"etcd-events-a\" endpoints:\"172.20.44.17:3997\" > > \nI0525 16:20:00.103764    6417 controller.go:295] I am leader with token \"LUZ_MEyKc2YFDoaKt2IVVQ\"\nI0525 16:20:00.104204    6417 controller.go:705] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4002]\nI0525 16:20:00.117176    6417 controller.go:302] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4002\"],\"ID\":\"5590227730515158361\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"172.20.44.17:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:3995\" > etcd_state:<cluster:<cluster_token:\"cRBz9Vos1TemEFFcXLXVJA\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > 
etcd_version:\"3.4.13\" > }\nI0525 16:20:00.117884    6417 controller.go:303] etcd cluster members: map[5590227730515158361:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4002\"],\"ID\":\"5590227730515158361\"}]\nI0525 16:20:00.118027    6417 controller.go:641] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io\" addresses:\"172.20.44.17\" > \nI0525 16:20:00.118316    6417 etcdserver.go:248] updating hosts: map[172.20.44.17:[etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io]]\nI0525 16:20:00.118333    6417 hosts.go:84] hosts update: primary=map[172.20.44.17:[etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:[172.20.44.17 172.20.44.17]], final=map[172.20.44.17:[etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io]]\nI0525 16:20:00.118389    6417 hosts.go:181] skipping update of unchanged /etc/hosts\nI0525 16:20:00.118483    6417 commands.go:38] not refreshing commands - TTL not hit\nI0525 16:20:00.118494    6417 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-62197fe92d-da63e.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created\"\nI0525 16:20:00.708876    6417 controller.go:395] spec member_count:1 etcd_version:\"3.4.13\" \nI0525 16:20:00.708958    6417 controller.go:557] controller loop complete\nI0525 16:20:10.710569    6417 controller.go:189] starting controller iteration\nI0525 16:20:10.710598    6417 controller.go:266] Broadcasting leadership assertion with token \"LUZ_MEyKc2YFDoaKt2IVVQ\"\nI0525 16:20:10.710927    6417 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.44.17:3997\" > leadership_token:\"LUZ_MEyKc2YFDoaKt2IVVQ\" 
healthy:<id:\"etcd-events-a\" endpoints:\"172.20.44.17:3997\" > > \nI0525 16:20:10.711159    6417 controller.go:295] I am leader with token \"LUZ_MEyKc2YFDoaKt2IVVQ\"\nI0525 16:20:10.715356    6417 controller.go:705] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4002]\nI0525 16:20:10.727176    6417 controller.go:302] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4002\"],\"ID\":\"5590227730515158361\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"172.20.44.17:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:3995\" > etcd_state:<cluster:<cluster_token:\"cRBz9Vos1TemEFFcXLXVJA\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0525 16:20:10.727255    6417 controller.go:303] etcd cluster members: map[5590227730515158361:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4002\"],\"ID\":\"5590227730515158361\"}]\nI0525 16:20:10.727331    6417 
controller.go:641] sending member map to all peers: members:<name:"etcd-events-a" dns:"etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io" addresses:"172.20.44.17" >
I0525 16:20:10.727611    6417 etcdserver.go:248] updating hosts: map[172.20.44.17:[etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io]]
I0525 16:20:10.727641    6417 hosts.go:84] hosts update: primary=map[172.20.44.17:[etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:[172.20.44.17 172.20.44.17]], final=map[172.20.44.17:[etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io]]
I0525 16:20:10.727728    6417 hosts.go:181] skipping update of unchanged /etc/hosts
I0525 16:20:10.727853    6417 commands.go:38] not refreshing commands - TTL not hit
I0525 16:20:10.727869    6417 s3fs.go:290] Reading file "s3://k8s-kops-prow/e2e-62197fe92d-da63e.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created"
I0525 16:20:11.330265    6417 controller.go:395] spec member_count:1 etcd_version:"3.4.13"
I0525 16:20:11.330336    6417 controller.go:557] controller loop complete
I0525 16:20:18.636148    6417 volumes.go:86] AWS API Request: ec2/DescribeVolumes
I0525 16:20:18.702096    6417 volumes.go:86] AWS API Request: ec2/DescribeInstances
I0525 16:20:18.753019    6417 hosts.go:84] hosts update: primary=map[172.20.44.17:[etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:[172.20.44.17 172.20.44.17]], final=map[172.20.44.17:[etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io]]
I0525 16:20:18.753098    6417 hosts.go:181] skipping update of unchanged /etc/hosts
I0525 16:20:21.331690    6417 controller.go:189] starting controller iteration
I0525 16:20:21.331720    6417 controller.go:266] Broadcasting leadership assertion with token "LUZ_MEyKc2YFDoaKt2IVVQ"
I0525 16:20:21.332020    6417 leadership.go:37] Got LeaderNotification view:<leader:<id:"etcd-events-a" endpoints:"172.20.44.17:3997" > leadership_token:"LUZ_MEyKc2YFDoaKt2IVVQ" healthy:<id:"etcd-events-a" endpoints:"172.20.44.17:3997" > >
I0525 16:20:21.332175    6417 controller.go:295] I am leader with token "LUZ_MEyKc2YFDoaKt2IVVQ"
I0525 16:20:21.332654    6417 controller.go:705] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4002]
I0525 16:20:21.346665    6417 controller.go:302] etcd cluster state: etcdClusterState
  members:
    {"name":"etcd-events-a","peerURLs":["https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2381"],"endpoints":["https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4002"],"ID":"5590227730515158361"}
  peers:
    etcdClusterPeerInfo{peer=peer{id:"etcd-events-a" endpoints:"172.20.44.17:3997" }, info=cluster_name:"etcd-events" node_configuration:<name:"etcd-events-a" peer_urls:"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2381" client_urls:"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4002" quarantined_client_urls:"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:3995" > etcd_state:<cluster:<cluster_token:"cRBz9Vos1TemEFFcXLXVJA" nodes:<name:"etcd-events-a" peer_urls:"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2381" client_urls:"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4002" quarantined_client_urls:"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:3995" tls_enabled:true > > etcd_version:"3.4.13" > }
I0525 16:20:21.346777    6417 controller.go:303] etcd cluster members: map[5590227730515158361:{"name":"etcd-events-a","peerURLs":["https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2381"],"endpoints":["https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4002"],"ID":"5590227730515158361"}]
I0525 16:20:21.346795    6417 controller.go:641] sending member map to all peers: members:<name:"etcd-events-a" dns:"etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io" addresses:"172.20.44.17" >
I0525 16:20:21.347708    6417 etcdserver.go:248] updating hosts: map[172.20.44.17:[etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io]]
I0525 16:20:21.347777    6417 hosts.go:84] hosts update: primary=map[172.20.44.17:[etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:[172.20.44.17 172.20.44.17]], final=map[172.20.44.17:[etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io]]
I0525 16:20:21.347868    6417 hosts.go:181] skipping update of unchanged /etc/hosts
I0525 16:20:21.348305    6417 commands.go:38] not refreshing commands - TTL not hit
I0525 16:20:21.348345    6417 s3fs.go:290] Reading file "s3://k8s-kops-prow/e2e-62197fe92d-da63e.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created"
I0525 16:20:21.937809    6417 controller.go:395] spec member_count:1 etcd_version:"3.4.13"
I0525 16:20:21.938026    6417 controller.go:557] controller loop complete
I0525 16:20:31.939380    6417 controller.go:189] starting controller iteration
I0525 16:20:31.939410    6417 controller.go:266] Broadcasting leadership assertion with token "LUZ_MEyKc2YFDoaKt2IVVQ"
I0525 16:20:31.939809    6417 leadership.go:37] Got LeaderNotification view:<leader:<id:"etcd-events-a" endpoints:"172.20.44.17:3997" > leadership_token:"LUZ_MEyKc2YFDoaKt2IVVQ" healthy:<id:"etcd-events-a" endpoints:"172.20.44.17:3997" > >
I0525 16:20:31.940053    6417
controller.go:295] I am leader with token "LUZ_MEyKc2YFDoaKt2IVVQ"
I0525 16:20:31.940651    6417 controller.go:705] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4002]
I0525 16:20:31.953574    6417 controller.go:302] etcd cluster state: etcdClusterState
  members:
    {"name":"etcd-events-a","peerURLs":["https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2381"],"endpoints":["https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4002"],"ID":"5590227730515158361"}
  peers:
    etcdClusterPeerInfo{peer=peer{id:"etcd-events-a" endpoints:"172.20.44.17:3997" }, info=cluster_name:"etcd-events" node_configuration:<name:"etcd-events-a" peer_urls:"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2381" client_urls:"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4002" quarantined_client_urls:"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:3995" > etcd_state:<cluster:<cluster_token:"cRBz9Vos1TemEFFcXLXVJA" nodes:<name:"etcd-events-a" peer_urls:"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2381" client_urls:"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4002" quarantined_client_urls:"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:3995" tls_enabled:true > > etcd_version:"3.4.13" > }
I0525 16:20:31.953829    6417 controller.go:303] etcd cluster members: map[5590227730515158361:{"name":"etcd-events-a","peerURLs":["https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2381"],"endpoints":["https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4002"],"ID":"5590227730515158361"}]
I0525 16:20:31.953890    6417 controller.go:641] sending member map to all peers: members:<name:"etcd-events-a" dns:"etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io" addresses:"172.20.44.17" >
I0525 16:20:31.954067    6417 etcdserver.go:248] updating hosts: map[172.20.44.17:[etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io]]
I0525 16:20:31.954147    6417 hosts.go:84] hosts update: primary=map[172.20.44.17:[etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:[172.20.44.17 172.20.44.17]], final=map[172.20.44.17:[etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io]]
I0525 16:20:31.954226    6417 hosts.go:181] skipping update of unchanged /etc/hosts
I0525 16:20:31.954349    6417 commands.go:38] not refreshing commands - TTL not hit
I0525 16:20:31.954366    6417 s3fs.go:290] Reading file "s3://k8s-kops-prow/e2e-62197fe92d-da63e.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created"
I0525 16:20:32.552277    6417 controller.go:395] spec member_count:1 etcd_version:"3.4.13"
I0525 16:20:32.552706    6417 controller.go:557] controller loop complete
I0525 16:20:42.554067    6417 controller.go:189] starting controller iteration
I0525 16:20:42.554299    6417 controller.go:266] Broadcasting leadership assertion with token "LUZ_MEyKc2YFDoaKt2IVVQ"
I0525 16:20:42.554591    6417 leadership.go:37] Got LeaderNotification view:<leader:<id:"etcd-events-a" endpoints:"172.20.44.17:3997" > leadership_token:"LUZ_MEyKc2YFDoaKt2IVVQ" healthy:<id:"etcd-events-a" endpoints:"172.20.44.17:3997" > >
I0525 16:20:42.554726    6417 controller.go:295] I am leader with token "LUZ_MEyKc2YFDoaKt2IVVQ"
I0525 16:20:42.555427    6417 controller.go:705] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4002]
I0525 16:20:42.573637    6417 controller.go:302] etcd cluster state: etcdClusterState
  members:
    {"name":"etcd-events-a","peerURLs":["https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2381"],"endpoints":["https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4002"],"ID":"5590227730515158361"}
  peers:
    etcdClusterPeerInfo{peer=peer{id:"etcd-events-a" endpoints:"172.20.44.17:3997" }, info=cluster_name:"etcd-events" node_configuration:<name:"etcd-events-a" peer_urls:"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2381" client_urls:"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4002" quarantined_client_urls:"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:3995" > etcd_state:<cluster:<cluster_token:"cRBz9Vos1TemEFFcXLXVJA" nodes:<name:"etcd-events-a" peer_urls:"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2381" client_urls:"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4002" quarantined_client_urls:"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:3995" tls_enabled:true > > etcd_version:"3.4.13" > }
I0525 16:20:42.573833    6417 controller.go:303] etcd cluster members: map[5590227730515158361:{"name":"etcd-events-a","peerURLs":["https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2381"],"endpoints":["https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4002"],"ID":"5590227730515158361"}]
I0525 16:20:42.573861    6417 controller.go:641] sending member map to all peers: members:<name:"etcd-events-a" dns:"etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io" addresses:"172.20.44.17" >
I0525 16:20:42.574130    6417 etcdserver.go:248] updating hosts: map[172.20.44.17:[etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io]]
I0525 16:20:42.574149    6417 hosts.go:84] hosts update:
primary=map[172.20.44.17:[etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:[172.20.44.17 172.20.44.17]], final=map[172.20.44.17:[etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io]]
I0525 16:20:42.574326    6417 hosts.go:181] skipping update of unchanged /etc/hosts
I0525 16:20:42.574432    6417 commands.go:38] not refreshing commands - TTL not hit
I0525 16:20:42.574448    6417 s3fs.go:290] Reading file "s3://k8s-kops-prow/e2e-62197fe92d-da63e.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created"
I0525 16:20:43.161956    6417 controller.go:395] spec member_count:1 etcd_version:"3.4.13"
I0525 16:20:43.162076    6417 controller.go:557] controller loop complete
I0525 16:20:53.163975    6417 controller.go:189] starting controller iteration
I0525 16:20:53.164005    6417 controller.go:266] Broadcasting leadership assertion with token "LUZ_MEyKc2YFDoaKt2IVVQ"
I0525 16:20:53.164234    6417 leadership.go:37] Got LeaderNotification view:<leader:<id:"etcd-events-a" endpoints:"172.20.44.17:3997" > leadership_token:"LUZ_MEyKc2YFDoaKt2IVVQ" healthy:<id:"etcd-events-a" endpoints:"172.20.44.17:3997" > >
I0525 16:20:53.164356    6417 controller.go:295] I am leader with token "LUZ_MEyKc2YFDoaKt2IVVQ"
I0525 16:20:53.164755    6417 controller.go:705] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4002]
I0525 16:20:53.177084    6417 controller.go:302] etcd cluster state: etcdClusterState
  members:
    {"name":"etcd-events-a","peerURLs":["https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2381"],"endpoints":["https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4002"],"ID":"5590227730515158361"}
  peers:
    etcdClusterPeerInfo{peer=peer{id:"etcd-events-a" endpoints:"172.20.44.17:3997" }, info=cluster_name:"etcd-events" node_configuration:<name:"etcd-events-a" peer_urls:"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2381" client_urls:"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4002" quarantined_client_urls:"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:3995" > etcd_state:<cluster:<cluster_token:"cRBz9Vos1TemEFFcXLXVJA" nodes:<name:"etcd-events-a" peer_urls:"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2381" client_urls:"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4002" quarantined_client_urls:"https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:3995" tls_enabled:true > > etcd_version:"3.4.13" > }
I0525 16:20:53.177169    6417 controller.go:303] etcd cluster members: map[5590227730515158361:{"name":"etcd-events-a","peerURLs":["https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2381"],"endpoints":["https://etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4002"],"ID":"5590227730515158361"}]
I0525 16:20:53.177185    6417 controller.go:641] sending member map to all peers: members:<name:"etcd-events-a" dns:"etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io" addresses:"172.20.44.17" >
I0525 16:20:53.177448    6417 etcdserver.go:248] updating hosts: map[172.20.44.17:[etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io]]
I0525 16:20:53.177464    6417 hosts.go:84] hosts update: primary=map[172.20.44.17:[etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:[172.20.44.17 172.20.44.17]], final=map[172.20.44.17:[etcd-events-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io]]
I0525 16:20:53.177517    6417 hosts.go:181] skipping update of unchanged /etc/hosts
I0525 16:20:53.177609    6417 commands.go:38] not refreshing commands - TTL not hit
I0525 16:20:53.177622    6417 s3fs.go:290] Reading file "s3://k8s-kops-prow/e2e-62197fe92d-da63e.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created"
I0525 16:20:53.774896    6417 controller.go:395] spec member_count:1 etcd_version:"3.4.13"
I0525 16:20:53.774969    6417 controller.go:557] controller loop complete
==== END logs for container etcd-manager of pod kube-system/etcd-manager-events-ip-172-20-44-17.eu-west-3.compute.internal ====
==== START logs for container etcd-manager of pod kube-system/etcd-manager-main-ip-172-20-44-17.eu-west-3.compute.internal ====
etcd-manager
I0525 16:15:15.436126    6450 volumes.go:86] AWS API Request: ec2metadata/GetToken
I0525 16:15:15.438994    6450 volumes.go:86] AWS API Request: ec2metadata/GetDynamicData
I0525 16:15:15.440753    6450 volumes.go:86] AWS API Request: ec2metadata/GetMetadata
I0525 16:15:15.441611    6450 volumes.go:86] AWS API Request: ec2metadata/GetMetadata
I0525 16:15:15.442306    6450 volumes.go:86] AWS API Request: ec2metadata/GetMetadata
I0525 16:15:15.443126    6450 main.go:305] Mounting available etcd volumes matching tags [k8s.io/etcd/main k8s.io/role/master=1 kubernetes.io/cluster/e2e-62197fe92d-da63e.test-cncf-aws.k8s.io=owned]; nameTag=k8s.io/etcd/main
I0525 16:15:15.445790    6450 volumes.go:86] AWS API Request: ec2/DescribeVolumes
I0525 16:15:15.560825    6450 mounter.go:304] Trying to mount master volume: "vol-0f0842ab5764ff968"
I0525 16:15:15.560843    6450 volumes.go:331] Trying to attach volume "vol-0f0842ab5764ff968" at "/dev/xvdu"
I0525 16:15:15.560962    6450 volumes.go:86] AWS API Request: ec2/AttachVolume
W0525 16:15:15.783331    6450 volumes.go:343] Invalid value '/dev/xvdu' for unixDevice.
Attachment point /dev/xvdu is already in use
I0525 16:15:15.783349    6450 volumes.go:331] Trying to attach volume "vol-0f0842ab5764ff968" at "/dev/xvdv"
I0525 16:15:15.783494    6450 volumes.go:86] AWS API Request: ec2/AttachVolume
I0525 16:15:16.166448    6450 volumes.go:349] AttachVolume request returned {
  AttachTime: 2021-05-25 16:15:16.153 +0000 UTC,
  Device: "/dev/xvdv",
  InstanceId: "i-056f852792822a839",
  State: "attaching",
  VolumeId: "vol-0f0842ab5764ff968"
}
I0525 16:15:16.166594    6450 volumes.go:86] AWS API Request: ec2/DescribeVolumes
I0525 16:15:16.270012    6450 mounter.go:318] Currently attached volumes: [0xc000324080]
I0525 16:15:16.270071    6450 mounter.go:72] Master volume "vol-0f0842ab5764ff968" is attached at "/dev/xvdv"
I0525 16:15:16.270103    6450 mounter.go:86] Doing safe-format-and-mount of /dev/xvdv to /mnt/master-vol-0f0842ab5764ff968
I0525 16:15:16.270135    6450 volumes.go:234] volume vol-0f0842ab5764ff968 not mounted at /rootfs/dev/xvdv
I0525 16:15:16.270168    6450 volumes.go:263] nvme path not found "/rootfs/dev/disk/by-id/nvme-Amazon_Elastic_Block_Store_vol0f0842ab5764ff968"
I0525 16:15:16.270191    6450 volumes.go:251] volume vol-0f0842ab5764ff968 not mounted at nvme-Amazon_Elastic_Block_Store_vol0f0842ab5764ff968
I0525 16:15:16.270211    6450 mounter.go:121] Waiting for volume "vol-0f0842ab5764ff968" to be mounted
I0525 16:15:17.270304    6450 volumes.go:234] volume vol-0f0842ab5764ff968 not mounted at /rootfs/dev/xvdv
I0525 16:15:17.270349    6450 volumes.go:248] found nvme volume "nvme-Amazon_Elastic_Block_Store_vol0f0842ab5764ff968" at "/dev/nvme2n1"
I0525 16:15:17.270359    6450 mounter.go:125] Found volume "vol-0f0842ab5764ff968" mounted at device "/dev/nvme2n1"
I0525 16:15:17.270876    6450 mounter.go:171] Creating mount directory "/rootfs/mnt/master-vol-0f0842ab5764ff968"
I0525 16:15:17.270955    6450 mounter.go:176] Mounting device "/dev/nvme2n1" on "/mnt/master-vol-0f0842ab5764ff968"
I0525 16:15:17.270968    6450 mount_linux.go:446] Attempting to determine if disk "/dev/nvme2n1" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/nvme2n1])
I0525 16:15:17.270992    6450 nsenter.go:132] Running nsenter command: nsenter [--mount=/rootfs/proc/1/ns/mnt -- blkid -p -s TYPE -s PTTYPE -o export /dev/nvme2n1]
I0525 16:15:17.288546    6450 mount_linux.go:449] Output: ""
I0525 16:15:17.288604    6450 mount_linux.go:408] Disk "/dev/nvme2n1" appears to be unformatted, attempting to format as type: "ext4" with options: [-F -m0 /dev/nvme2n1]
I0525 16:15:17.288632    6450 nsenter.go:132] Running nsenter command: nsenter [--mount=/rootfs/proc/1/ns/mnt -- mkfs.ext4 -F -m0 /dev/nvme2n1]
I0525 16:15:17.530290    6450 mount_linux.go:418] Disk successfully formatted (mkfs): ext4 - /dev/nvme2n1 /mnt/master-vol-0f0842ab5764ff968
I0525 16:15:17.530308    6450 mount_linux.go:436] Attempting to mount disk /dev/nvme2n1 in ext4 format at /mnt/master-vol-0f0842ab5764ff968
I0525 16:15:17.530321    6450 nsenter.go:80] nsenter mount /dev/nvme2n1 /mnt/master-vol-0f0842ab5764ff968 ext4 [defaults]
I0525 16:15:17.530345    6450 nsenter.go:132] Running nsenter command: nsenter [--mount=/rootfs/proc/1/ns/mnt -- /bin/systemd-run --description=Kubernetes transient mount for /mnt/master-vol-0f0842ab5764ff968 --scope -- /bin/mount -t ext4 -o defaults /dev/nvme2n1 /mnt/master-vol-0f0842ab5764ff968]
I0525 16:15:17.554009    6450 nsenter.go:84] Output of mounting /dev/nvme2n1 to /mnt/master-vol-0f0842ab5764ff968: Running scope as unit: run-r99ee18b78ef242dd875d233f7b4655de.scope
I0525 16:15:17.554033    6450 mount_linux.go:446] Attempting to determine if disk "/dev/nvme2n1" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/nvme2n1])
I0525 16:15:17.554053    6450 nsenter.go:132] Running nsenter command: nsenter [--mount=/rootfs/proc/1/ns/mnt -- blkid -p -s TYPE -s PTTYPE -o export /dev/nvme2n1]
I0525 16:15:17.576487    6450 mount_linux.go:449] Output: "DEVNAME=/dev/nvme2n1\nTYPE=ext4\n"
I0525 16:15:17.576520    6450 resizefs_linux.go:53] ResizeFS.Resize - Expanding mounted volume /dev/nvme2n1
I0525 16:15:17.576534    6450 nsenter.go:132] Running nsenter command: nsenter [--mount=/rootfs/proc/1/ns/mnt -- resize2fs /dev/nvme2n1]
I0525 16:15:17.579943    6450 resizefs_linux.go:68] Device /dev/nvme2n1 resized successfully
I0525 16:15:17.594006    6450 mount_linux.go:206] Detected OS with systemd
I0525 16:15:17.595100    6450 mounter.go:262] device "/dev/nvme2n1" did not evaluate as a symlink: lstat /dev/nvme2n1: no such file or directory
I0525 16:15:17.595123    6450 mounter.go:262] device "/dev/nvme2n1" did not evaluate as a symlink: lstat /dev/nvme2n1: no such file or directory
I0525 16:15:17.595130    6450 mounter.go:242] matched device "/dev/nvme2n1" and "/dev/nvme2n1" via '\x00'
I0525 16:15:17.595142    6450 mounter.go:94] mounted master volume "vol-0f0842ab5764ff968" on /mnt/master-vol-0f0842ab5764ff968
I0525 16:15:17.595152    6450 main.go:320] discovered IP address: 172.20.44.17
I0525 16:15:17.595156    6450 main.go:325] Setting data dir to /rootfs/mnt/master-vol-0f0842ab5764ff968
I0525 16:15:17.994827    6450 certs.go:183] generating certificate for "etcd-manager-server-etcd-a"
I0525 16:15:18.142405    6450 certs.go:183] generating certificate for "etcd-manager-client-etcd-a"
I0525 16:15:18.145723    6450 server.go:87] starting GRPC server using TLS, ServerName="etcd-manager-server-etcd-a"
I0525 16:15:18.146392    6450 main.go:474] peerClientIPs: [172.20.44.17]
I0525 16:15:18.529268    6450 certs.go:183] generating certificate for "etcd-manager-etcd-a"
I0525 16:15:18.531180    6450 server.go:105] GRPC server listening on "172.20.44.17:3996"
I0525 16:15:18.531475    6450 volumes.go:86] AWS API Request: ec2/DescribeVolumes
I0525 16:15:18.616531    6450 volumes.go:86] AWS API Request:
ec2/DescribeInstances
I0525 16:15:18.657995    6450 peers.go:115] found new candidate peer from discovery: etcd-a [{172.20.44.17 0} {172.20.44.17 0}]
I0525 16:15:18.658032    6450 hosts.go:84] hosts update: primary=map[], fallbacks=map[etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:[172.20.44.17 172.20.44.17]], final=map[172.20.44.17:[etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io]]
I0525 16:15:18.658222    6450 peers.go:295] connecting to peer "etcd-a" with TLS policy, servername="etcd-manager-server-etcd-a"
I0525 16:15:20.531965    6450 controller.go:189] starting controller iteration
I0525 16:15:20.532372    6450 leadership.go:37] Got LeaderNotification view:<leader:<id:"etcd-a" endpoints:"172.20.44.17:3996" > leadership_token:"52LrWjZ0-Jw_MPw1LSUSzw" healthy:<id:"etcd-a" endpoints:"172.20.44.17:3996" > >
I0525 16:15:20.532544    6450 commands.go:41] refreshing commands
I0525 16:15:20.532643    6450 s3context.go:334] product_uuid is "ec2fff1a-06e0-467e-bca7-1d39ac3352bc", assuming running on EC2
I0525 16:15:20.534363    6450 s3context.go:166] got region from metadata: "eu-west-3"
I0525 16:15:20.560106    6450 s3context.go:213] found bucket in region "us-west-1"
I0525 16:15:21.154022    6450 vfs.go:120] listed commands in s3://k8s-kops-prow/e2e-62197fe92d-da63e.test-cncf-aws.k8s.io/backups/etcd/main/control: 0 commands
I0525 16:15:21.154047    6450 s3fs.go:290] Reading file "s3://k8s-kops-prow/e2e-62197fe92d-da63e.test-cncf-aws.k8s.io/backups/etcd/main/control/etcd-cluster-spec"
I0525 16:15:31.312475    6450 controller.go:189] starting controller iteration
I0525 16:15:31.312505    6450 controller.go:266] Broadcasting leadership assertion with token "52LrWjZ0-Jw_MPw1LSUSzw"
I0525 16:15:31.312779    6450 leadership.go:37] Got LeaderNotification view:<leader:<id:"etcd-a" endpoints:"172.20.44.17:3996" > leadership_token:"52LrWjZ0-Jw_MPw1LSUSzw" healthy:<id:"etcd-a" endpoints:"172.20.44.17:3996" > >
I0525 16:15:31.312934    6450 controller.go:295] I am leader with token "52LrWjZ0-Jw_MPw1LSUSzw"
I0525 16:15:31.313345    6450 controller.go:302] etcd cluster state: etcdClusterState
  members:
  peers:
    etcdClusterPeerInfo{peer=peer{id:"etcd-a" endpoints:"172.20.44.17:3996" }, info=cluster_name:"etcd" node_configuration:<name:"etcd-a" peer_urls:"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2380" client_urls:"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4001" quarantined_client_urls:"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:3994" > }
I0525 16:15:31.313421    6450 controller.go:303] etcd cluster members: map[]
I0525 16:15:31.313431    6450 controller.go:641] sending member map to all peers:
I0525 16:15:31.313723    6450 commands.go:38] not refreshing commands - TTL not hit
I0525 16:15:31.313756    6450 s3fs.go:290] Reading file "s3://k8s-kops-prow/e2e-62197fe92d-da63e.test-cncf-aws.k8s.io/backups/etcd/main/control/etcd-cluster-created"
I0525 16:15:31.918663    6450 controller.go:359] detected that there is no existing cluster
I0525 16:15:31.918676    6450 commands.go:41] refreshing commands
I0525 16:15:32.153223    6450 vfs.go:120] listed commands in s3://k8s-kops-prow/e2e-62197fe92d-da63e.test-cncf-aws.k8s.io/backups/etcd/main/control: 0 commands
I0525 16:15:32.153289    6450 s3fs.go:290] Reading file "s3://k8s-kops-prow/e2e-62197fe92d-da63e.test-cncf-aws.k8s.io/backups/etcd/main/control/etcd-cluster-spec"
I0525 16:15:32.306382    6450 controller.go:641] sending member map to all peers: members:<name:"etcd-a" dns:"etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io" addresses:"172.20.44.17" >
I0525 16:15:32.306961    6450 etcdserver.go:248] updating hosts: map[172.20.44.17:[etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io]]
I0525 16:15:32.307101    6450 hosts.go:84] hosts update: primary=map[172.20.44.17:[etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io]], fallbacks=map[etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:[172.20.44.17 172.20.44.17]], final=map[172.20.44.17:[etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io]]
I0525 16:15:32.307271    6450 hosts.go:181] skipping update of unchanged /etc/hosts
I0525 16:15:32.307492    6450 newcluster.go:136] starting new etcd cluster with [etcdClusterPeerInfo{peer=peer{id:"etcd-a" endpoints:"172.20.44.17:3996" }, info=cluster_name:"etcd" node_configuration:<name:"etcd-a" peer_urls:"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2380" client_urls:"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4001" quarantined_client_urls:"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:3994" > }]
I0525 16:15:32.308165    6450 newcluster.go:153] JoinClusterResponse:
I0525 16:15:32.308925    6450 etcdserver.go:556] starting etcd with state new_cluster:true cluster:<cluster_token:"YDiWJXhaXWSx1pSDaAhTZg" nodes:<name:"etcd-a" peer_urls:"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2380" client_urls:"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4001" quarantined_client_urls:"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:3994" tls_enabled:true > > etcd_version:"3.4.13" quarantined:true
I0525 16:15:32.309067    6450 etcdserver.go:565] starting etcd with datadir /rootfs/mnt/master-vol-0f0842ab5764ff968/data/YDiWJXhaXWSx1pSDaAhTZg
I0525 16:15:32.309911    6450 pki.go:59] adding peerClientIPs [172.20.44.17]
I0525 16:15:32.310013    6450 pki.go:67] generating peer keypair for etcd: {CommonName:etcd-a Organization:[] AltNames:{DNSNames:[etcd-a etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io] IPs:[172.20.44.17 127.0.0.1]} Usages:[2 1]}
I0525 16:15:32.480904    6450 certs.go:183] generating certificate for "etcd-a"
I0525 16:15:32.484785
   6450 pki.go:110] building client-serving certificate: {CommonName:etcd-a Organization:[] AltNames:{DNSNames:[etcd-a etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io] IPs:[127.0.0.1]} Usages:[1 2]}
I0525 16:15:32.647024    6450 certs.go:183] generating certificate for "etcd-a"
I0525 16:15:32.810131    6450 certs.go:183] generating certificate for "etcd-a"
I0525 16:15:32.812074    6450 etcdprocess.go:203] executing command /opt/etcd-v3.4.13-linux-amd64/etcd [/opt/etcd-v3.4.13-linux-amd64/etcd]
I0525 16:15:32.812601    6450 newcluster.go:171] JoinClusterResponse:
I0525 16:15:32.812655    6450 s3fs.go:199] Writing file "s3://k8s-kops-prow/e2e-62197fe92d-da63e.test-cncf-aws.k8s.io/backups/etcd/main/control/etcd-cluster-spec"
I0525 16:15:32.812669    6450 s3context.go:241] Checking default bucket encryption for "k8s-kops-prow"
2021-05-25 16:15:32.818641 I | pkg/flags: recognized and used environment variable ETCD_ADVERTISE_CLIENT_URLS=https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:3994
2021-05-25 16:15:32.818670 I | pkg/flags: recognized and used environment variable ETCD_CERT_FILE=/rootfs/mnt/master-vol-0f0842ab5764ff968/pki/YDiWJXhaXWSx1pSDaAhTZg/clients/server.crt
2021-05-25 16:15:32.818746 I | pkg/flags: recognized and used environment variable ETCD_CLIENT_CERT_AUTH=true
2021-05-25 16:15:32.818759 I | pkg/flags: recognized and used environment variable ETCD_DATA_DIR=/rootfs/mnt/master-vol-0f0842ab5764ff968/data/YDiWJXhaXWSx1pSDaAhTZg
2021-05-25 16:15:32.818775 I | pkg/flags: recognized and used environment variable ETCD_ENABLE_V2=false
2021-05-25 16:15:32.818838 I | pkg/flags: recognized and used environment variable ETCD_INITIAL_ADVERTISE_PEER_URLS=https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2380
2021-05-25 16:15:32.818847 I | pkg/flags: recognized and used environment variable ETCD_INITIAL_CLUSTER=etcd-a=https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2380
2021-05-25 16:15:32.818852 I | pkg/flags: recognized and used environment variable ETCD_INITIAL_CLUSTER_STATE=new
2021-05-25 16:15:32.818888 I | pkg/flags: recognized and used environment variable ETCD_INITIAL_CLUSTER_TOKEN=YDiWJXhaXWSx1pSDaAhTZg
2021-05-25 16:15:32.818897 I | pkg/flags: recognized and used environment variable ETCD_KEY_FILE=/rootfs/mnt/master-vol-0f0842ab5764ff968/pki/YDiWJXhaXWSx1pSDaAhTZg/clients/server.key
2021-05-25 16:15:32.818905 I | pkg/flags: recognized and used environment variable ETCD_LISTEN_CLIENT_URLS=https://0.0.0.0:3994
2021-05-25 16:15:32.818913 I | pkg/flags: recognized and used environment variable ETCD_LISTEN_PEER_URLS=https://0.0.0.0:2380
2021-05-25 16:15:32.818920 I | pkg/flags: recognized and used environment variable ETCD_LOG_OUTPUTS=stdout
2021-05-25 16:15:32.818927 I | pkg/flags: recognized and used environment variable ETCD_LOGGER=zap
2021-05-25 16:15:32.818966 I | pkg/flags: recognized and used environment variable ETCD_NAME=etcd-a
2021-05-25 16:15:32.818982 I | pkg/flags: recognized and used environment variable ETCD_PEER_CERT_FILE=/rootfs/mnt/master-vol-0f0842ab5764ff968/pki/YDiWJXhaXWSx1pSDaAhTZg/peers/me.crt
2021-05-25 16:15:32.818987 I | pkg/flags: recognized and used environment variable ETCD_PEER_CLIENT_CERT_AUTH=true
2021-05-25 16:15:32.818994 I | pkg/flags: recognized and used environment variable ETCD_PEER_KEY_FILE=/rootfs/mnt/master-vol-0f0842ab5764ff968/pki/YDiWJXhaXWSx1pSDaAhTZg/peers/me.key
2021-05-25 16:15:32.818998 I | pkg/flags: recognized and used environment variable ETCD_PEER_TRUSTED_CA_FILE=/rootfs/mnt/master-vol-0f0842ab5764ff968/pki/YDiWJXhaXWSx1pSDaAhTZg/peers/ca.crt
2021-05-25 16:15:32.819018 I | pkg/flags: recognized and used environment variable ETCD_TRUSTED_CA_FILE=/rootfs/mnt/master-vol-0f0842ab5764ff968/pki/YDiWJXhaXWSx1pSDaAhTZg/clients/ca.crt
2021-05-25 16:15:32.819058 W | pkg/flags: unrecognized environment variable ETCD_LISTEN_METRICS_URLS=
{"level":"info","ts":"2021-05-25T16:15:32.819Z","caller":"embed/etcd.go:117","msg":"configuring peer listeners","listen-peer-urls":["https://0.0.0.0:2380"]}
{"level":"info","ts":"2021-05-25T16:15:32.819Z","caller":"embed/etcd.go:468","msg":"starting with peer TLS","tls-info":"cert = /rootfs/mnt/master-vol-0f0842ab5764ff968/pki/YDiWJXhaXWSx1pSDaAhTZg/peers/me.crt, key = /rootfs/mnt/master-vol-0f0842ab5764ff968/pki/YDiWJXhaXWSx1pSDaAhTZg/peers/me.key, trusted-ca = /rootfs/mnt/master-vol-0f0842ab5764ff968/pki/YDiWJXhaXWSx1pSDaAhTZg/peers/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"info","ts":"2021-05-25T16:15:32.820Z","caller":"embed/etcd.go:127","msg":"configuring client listeners","listen-client-urls":["https://0.0.0.0:3994"]}
{"level":"info","ts":"2021-05-25T16:15:32.820Z","caller":"embed/etcd.go:302","msg":"starting an etcd
server","etcd-version":"3.4.13","git-sha":"ae9734ed2","go-version":"go1.12.17","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":false,"name":"etcd-a","data-dir":"/rootfs/mnt/master-vol-0f0842ab5764ff968/data/YDiWJXhaXWSx1pSDaAhTZg","wal-dir":"","wal-dir-dedicated":"","member-dir":"/rootfs/mnt/master-vol-0f0842ab5764ff968/data/YDiWJXhaXWSx1pSDaAhTZg/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":100000,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2380"],"listen-peer-urls":["https://0.0.0.0:2380"],"advertise-client-urls":["https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:3994"],"listen-client-urls":["https://0.0.0.0:3994"],"listen-metrics-urls":[],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"etcd-a=https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2380","initial-cluster-state":"new","initial-cluster-token":"YDiWJXhaXWSx1pSDaAhTZg","quota-size-bytes":2147483648,"pre-vote":false,"initial-corrupt-check":false,"corrupt-check-time-interval":"0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":""}
{"level":"info","ts":"2021-05-25T16:15:32.823Z","caller":"etcdserver/backend.go:80","msg":"opened backend db","path":"/rootfs/mnt/master-vol-0f0842ab5764ff968/data/YDiWJXhaXWSx1pSDaAhTZg/member/snap/db","took":"3.021993ms"}
{"level":"info","ts":"2021-05-25T16:15:32.824Z","caller":"netutil/netutil.go:112","msg":"resolved URL Host","url":"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2380","host":"etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2380","resolved-addr":"172.20.44.17:2380"}
{"level":"info","ts":"2021-05-25T16:15:32.824Z","caller":"netutil/netutil.go:112","msg":"resolved URL Host","url":"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2380","host":"etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2380","resolved-addr":"172.20.44.17:2380"}
{"level":"info","ts":"2021-05-25T16:15:32.829Z","caller":"etcdserver/raft.go:486","msg":"starting local member","local-member-id":"1588356cecf6d48e","cluster-id":"fab34f02fca49a8b"}
{"level":"info","ts":"2021-05-25T16:15:32.829Z","caller":"raft/raft.go:1530","msg":"1588356cecf6d48e switched to configuration voters=()"}
{"level":"info","ts":"2021-05-25T16:15:32.829Z","caller":"raft/raft.go:700","msg":"1588356cecf6d48e became follower at term 0"}
{"level":"info","ts":"2021-05-25T16:15:32.829Z","caller":"raft/raft.go:383","msg":"newRaft 1588356cecf6d48e [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]"}
{"level":"info","ts":"2021-05-25T16:15:32.829Z","caller":"raft/raft.go:700","msg":"1588356cecf6d48e became follower at term 1"}
{"level":"info","ts":"2021-05-25T16:15:32.829Z","caller":"raft/raft.go:1530","msg":"1588356cecf6d48e switched to configuration voters=(1551548813577475214)"}
{"level":"warn","ts":"2021-05-25T16:15:32.832Z","caller":"auth/store.go:1366","msg":"simple token is not cryptographically signed"}
{"level":"info","ts":"2021-05-25T16:15:32.835Z","caller":"etcdserver/quota.go:98","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
{"level":"info","ts":"2021-05-25T16:15:32.838Z","caller":"etcdserver/server.go:803","msg":"starting etcd server","local-member-id":"1588356cecf6d48e","local-server-version":"3.4.13","cluster-version":"to_be_decided"}
{"level":"info","ts":"2021-05-25T16:15:32.839Z","caller":"etcdserver/server.go:669","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"1588356cecf6d48e","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
{"level":"info","ts":"2021-05-25T16:15:32.839Z","caller":"raft/raft.go:1530","msg":"1588356cecf6d48e switched to configuration voters=(1551548813577475214)"}
{"level":"info","ts":"2021-05-25T16:15:32.839Z","caller":"membership/cluster.go:392","msg":"added member","cluster-id":"fab34f02fca49a8b","local-member-id":"1588356cecf6d48e","added-peer-id":"1588356cecf6d48e","added-peer-peer-urls":["https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2380"]}
{"level":"info","ts":"2021-05-25T16:15:32.840Z","caller":"embed/etcd.go:711","msg":"starting with client TLS","tls-info":"cert = /rootfs/mnt/master-vol-0f0842ab5764ff968/pki/YDiWJXhaXWSx1pSDaAhTZg/clients/server.crt, key = /rootfs/mnt/master-vol-0f0842ab5764ff968/pki/YDiWJXhaXWSx1pSDaAhTZg/clients/server.key, trusted-ca = /rootfs/mnt/master-vol-0f0842ab5764ff968/pki/YDiWJXhaXWSx1pSDaAhTZg/clients/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"info","ts":"2021-05-25T16:15:32.840Z","caller":"embed/etcd.go:244","msg":"now serving 
peer/client/metrics\",\"local-member-id\":\"1588356cecf6d48e\",\"initial-advertise-peer-urls\":[\"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2380\"],\"listen-peer-urls\":[\"https://0.0.0.0:2380\"],\"advertise-client-urls\":[\"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:3994\"],\"listen-client-urls\":[\"https://0.0.0.0:3994\"],\"listen-metrics-urls\":[]}\n{\"level\":\"info\",\"ts\":\"2021-05-25T16:15:32.840Z\",\"caller\":\"embed/etcd.go:579\",\"msg\":\"serving peer traffic\",\"address\":\"[::]:2380\"}\nI0525 16:15:33.141024    6450 s3fs.go:199] Writing file \"s3://k8s-kops-prow/e2e-62197fe92d-da63e.test-cncf-aws.k8s.io/backups/etcd/main/control/etcd-cluster-created\"\nI0525 16:15:33.313426    6450 controller.go:189] starting controller iteration\nI0525 16:15:33.313449    6450 controller.go:266] Broadcasting leadership assertion with token \"52LrWjZ0-Jw_MPw1LSUSzw\"\nI0525 16:15:33.313806    6450 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-a\" endpoints:\"172.20.44.17:3996\" > leadership_token:\"52LrWjZ0-Jw_MPw1LSUSzw\" healthy:<id:\"etcd-a\" endpoints:\"172.20.44.17:3996\" > > \nI0525 16:15:33.313933    6450 controller.go:295] I am leader with token \"52LrWjZ0-Jw_MPw1LSUSzw\"\nI0525 16:15:33.314316    6450 controller.go:705] base client OK for etcd for client urls [https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:3994]\n{\"level\":\"info\",\"ts\":\"2021-05-25T16:15:33.729Z\",\"caller\":\"raft/raft.go:923\",\"msg\":\"1588356cecf6d48e is starting a new election at term 1\"}\n{\"level\":\"info\",\"ts\":\"2021-05-25T16:15:33.729Z\",\"caller\":\"raft/raft.go:713\",\"msg\":\"1588356cecf6d48e became candidate at term 2\"}\n{\"level\":\"info\",\"ts\":\"2021-05-25T16:15:33.729Z\",\"caller\":\"raft/raft.go:824\",\"msg\":\"1588356cecf6d48e received MsgVoteResp from 1588356cecf6d48e at term 
2\"}\n{\"level\":\"info\",\"ts\":\"2021-05-25T16:15:33.729Z\",\"caller\":\"raft/raft.go:765\",\"msg\":\"1588356cecf6d48e became leader at term 2\"}\n{\"level\":\"info\",\"ts\":\"2021-05-25T16:15:33.729Z\",\"caller\":\"raft/node.go:325\",\"msg\":\"raft.node: 1588356cecf6d48e elected leader 1588356cecf6d48e at term 2\"}\n{\"level\":\"info\",\"ts\":\"2021-05-25T16:15:33.730Z\",\"caller\":\"etcdserver/server.go:2037\",\"msg\":\"published local member to cluster through raft\",\"local-member-id\":\"1588356cecf6d48e\",\"local-member-attributes\":\"{Name:etcd-a ClientURLs:[https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:3994]}\",\"request-path\":\"/0/members/1588356cecf6d48e/attributes\",\"cluster-id\":\"fab34f02fca49a8b\",\"publish-timeout\":\"7s\"}\n{\"level\":\"info\",\"ts\":\"2021-05-25T16:15:33.730Z\",\"caller\":\"etcdserver/server.go:2528\",\"msg\":\"setting up initial cluster version\",\"cluster-version\":\"3.4\"}\n{\"level\":\"info\",\"ts\":\"2021-05-25T16:15:33.731Z\",\"caller\":\"membership/cluster.go:558\",\"msg\":\"set initial cluster version\",\"cluster-id\":\"fab34f02fca49a8b\",\"local-member-id\":\"1588356cecf6d48e\",\"cluster-version\":\"3.4\"}\n{\"level\":\"info\",\"ts\":\"2021-05-25T16:15:33.731Z\",\"caller\":\"api/capability.go:76\",\"msg\":\"enabled capabilities for version\",\"cluster-version\":\"3.4\"}\n{\"level\":\"info\",\"ts\":\"2021-05-25T16:15:33.731Z\",\"caller\":\"etcdserver/server.go:2560\",\"msg\":\"cluster version is updated\",\"cluster-version\":\"3.4\"}\n{\"level\":\"info\",\"ts\":\"2021-05-25T16:15:33.731Z\",\"caller\":\"embed/serve.go:191\",\"msg\":\"serving client traffic securely\",\"address\":\"[::]:3994\"}\nI0525 16:15:33.751025    6450 controller.go:302] etcd cluster state: etcdClusterState\n  members:\n    
{\"name\":\"etcd-a\",\"peerURLs\":[\"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2380\"],\"endpoints\":[\"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:3994\"],\"ID\":\"1551548813577475214\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-a\" endpoints:\"172.20.44.17:3996\" }, info=cluster_name:\"etcd\" node_configuration:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2380\" client_urls:\"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:3994\" > etcd_state:<cluster:<cluster_token:\"YDiWJXhaXWSx1pSDaAhTZg\" nodes:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2380\" client_urls:\"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:3994\" tls_enabled:true > > etcd_version:\"3.4.13\" quarantined:true > }\nI0525 16:15:33.751156    6450 controller.go:303] etcd cluster members: map[1551548813577475214:{\"name\":\"etcd-a\",\"peerURLs\":[\"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2380\"],\"endpoints\":[\"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:3994\"],\"ID\":\"1551548813577475214\"}]\nI0525 16:15:33.751303    6450 controller.go:641] sending member map to all peers: members:<name:\"etcd-a\" dns:\"etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io\" addresses:\"172.20.44.17\" > \nI0525 16:15:33.751570    6450 etcdserver.go:248] updating hosts: map[172.20.44.17:[etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io]]\nI0525 16:15:33.751587    6450 hosts.go:84] hosts update: primary=map[172.20.44.17:[etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io]], fallbacks=map[etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:[172.20.44.17 
172.20.44.17]], final=map[172.20.44.17:[etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io]]\nI0525 16:15:33.751694    6450 hosts.go:181] skipping update of unchanged /etc/hosts\nI0525 16:15:33.751821    6450 commands.go:38] not refreshing commands - TTL not hit\nI0525 16:15:33.751858    6450 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-62197fe92d-da63e.test-cncf-aws.k8s.io/backups/etcd/main/control/etcd-cluster-created\"\nI0525 16:15:33.909373    6450 controller.go:395] spec member_count:1 etcd_version:\"3.4.13\" \nI0525 16:15:33.910070    6450 backup.go:134] performing snapshot save to /tmp/405265840/snapshot.db.gz\n{\"level\":\"info\",\"ts\":\"2021-05-25T16:15:33.915Z\",\"caller\":\"clientv3/maintenance.go:200\",\"msg\":\"opened snapshot stream; downloading\"}\n{\"level\":\"info\",\"ts\":\"2021-05-25T16:15:33.916Z\",\"caller\":\"v3rpc/maintenance.go:139\",\"msg\":\"sending database snapshot to client\",\"total-bytes\":20480,\"size\":\"20 kB\"}\n{\"level\":\"info\",\"ts\":\"2021-05-25T16:15:33.916Z\",\"caller\":\"v3rpc/maintenance.go:177\",\"msg\":\"sending database sha256 checksum to client\",\"total-bytes\":20480,\"checksum-size\":32}\n{\"level\":\"info\",\"ts\":\"2021-05-25T16:15:33.916Z\",\"caller\":\"v3rpc/maintenance.go:191\",\"msg\":\"successfully sent database snapshot to client\",\"total-bytes\":20480,\"size\":\"20 kB\",\"took\":\"now\"}\n{\"level\":\"info\",\"ts\":\"2021-05-25T16:15:33.917Z\",\"caller\":\"clientv3/maintenance.go:208\",\"msg\":\"completed snapshot read; closing\"}\nI0525 16:15:33.918054    6450 s3fs.go:199] Writing file \"s3://k8s-kops-prow/e2e-62197fe92d-da63e.test-cncf-aws.k8s.io/backups/etcd/main/2021-05-25T16:15:33Z-000001/etcd.backup.gz\"\nI0525 16:15:34.086729    6450 s3fs.go:199] Writing file \"s3://k8s-kops-prow/e2e-62197fe92d-da63e.test-cncf-aws.k8s.io/backups/etcd/main/2021-05-25T16:15:33Z-000001/_etcd_backup.meta\"\nI0525 16:15:34.255001    6450 backup.go:159] backup complete: name:\"2021-05-25T16:15:33Z-000001\" 
\nI0525 16:15:34.255458    6450 controller.go:937] backup response: name:\"2021-05-25T16:15:33Z-000001\" \nI0525 16:15:34.255674    6450 controller.go:576] took backup: name:\"2021-05-25T16:15:33Z-000001\" \nI0525 16:15:34.427438    6450 vfs.go:118] listed backups in s3://k8s-kops-prow/e2e-62197fe92d-da63e.test-cncf-aws.k8s.io/backups/etcd/main: [2021-05-25T16:15:33Z-000001]\nI0525 16:15:34.427461    6450 cleanup.go:166] retaining backup \"2021-05-25T16:15:33Z-000001\"\nI0525 16:15:34.427487    6450 restore.go:98] Setting quarantined state to false\nI0525 16:15:34.428003    6450 etcdserver.go:393] Reconfigure request: header:<leadership_token:\"52LrWjZ0-Jw_MPw1LSUSzw\" cluster_name:\"etcd\" > \nI0525 16:15:34.428096    6450 etcdserver.go:436] Stopping etcd for reconfigure request: header:<leadership_token:\"52LrWjZ0-Jw_MPw1LSUSzw\" cluster_name:\"etcd\" > \nI0525 16:15:34.428123    6450 etcdserver.go:640] killing etcd with datadir /rootfs/mnt/master-vol-0f0842ab5764ff968/data/YDiWJXhaXWSx1pSDaAhTZg\nI0525 16:15:34.429544    6450 etcdprocess.go:131] Waiting for etcd to exit\nI0525 16:15:34.529810    6450 etcdprocess.go:131] Waiting for etcd to exit\nI0525 16:15:34.529828    6450 etcdprocess.go:136] Exited etcd: signal: killed\nI0525 16:15:34.530006    6450 etcdserver.go:443] updated cluster state: cluster:<cluster_token:\"YDiWJXhaXWSx1pSDaAhTZg\" nodes:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2380\" client_urls:\"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:3994\" tls_enabled:true > > etcd_version:\"3.4.13\" \nI0525 16:15:34.530168    6450 etcdserver.go:448] Starting etcd version \"3.4.13\"\nI0525 16:15:34.530184    6450 etcdserver.go:556] starting etcd with state cluster:<cluster_token:\"YDiWJXhaXWSx1pSDaAhTZg\" nodes:<name:\"etcd-a\" 
peer_urls:\"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2380\" client_urls:\"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:3994\" tls_enabled:true > > etcd_version:\"3.4.13\" \nI0525 16:15:34.530365    6450 etcdserver.go:565] starting etcd with datadir /rootfs/mnt/master-vol-0f0842ab5764ff968/data/YDiWJXhaXWSx1pSDaAhTZg\nI0525 16:15:34.530478    6450 pki.go:59] adding peerClientIPs [172.20.44.17]\nI0525 16:15:34.530517    6450 pki.go:67] generating peer keypair for etcd: {CommonName:etcd-a Organization:[] AltNames:{DNSNames:[etcd-a etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io] IPs:[172.20.44.17 127.0.0.1]} Usages:[2 1]}\nI0525 16:15:34.530805    6450 certs.go:122] existing certificate not valid after 2023-05-25T16:15:32Z; will regenerate\nI0525 16:15:34.530816    6450 certs.go:183] generating certificate for \"etcd-a\"\nI0525 16:15:34.533216    6450 pki.go:110] building client-serving certificate: {CommonName:etcd-a Organization:[] AltNames:{DNSNames:[etcd-a etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io] IPs:[127.0.0.1]} Usages:[1 2]}\nI0525 16:15:34.533468    6450 certs.go:122] existing certificate not valid after 2023-05-25T16:15:32Z; will regenerate\nI0525 16:15:34.533476    6450 certs.go:183] generating certificate for \"etcd-a\"\nI0525 16:15:34.780318    6450 certs.go:183] generating certificate for \"etcd-a\"\nI0525 16:15:34.782215    6450 etcdprocess.go:203] executing command /opt/etcd-v3.4.13-linux-amd64/etcd [/opt/etcd-v3.4.13-linux-amd64/etcd]\nI0525 16:15:34.782714    6450 restore.go:116] ReconfigureResponse: \nI0525 16:15:34.783815    6450 controller.go:189] starting controller iteration\nI0525 16:15:34.783840    6450 controller.go:266] Broadcasting leadership assertion with token \"52LrWjZ0-Jw_MPw1LSUSzw\"\nI0525 16:15:34.784153    
6450 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-a\" endpoints:\"172.20.44.17:3996\" > leadership_token:\"52LrWjZ0-Jw_MPw1LSUSzw\" healthy:<id:\"etcd-a\" endpoints:\"172.20.44.17:3996\" > > \nI0525 16:15:34.784329    6450 controller.go:295] I am leader with token \"52LrWjZ0-Jw_MPw1LSUSzw\"\nI0525 16:15:34.784841    6450 controller.go:705] base client OK for etcd for client urls [https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4001]\n2021-05-25 16:15:34.789397 I | pkg/flags: recognized and used environment variable ETCD_ADVERTISE_CLIENT_URLS=https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4001\n2021-05-25 16:15:34.789478 I | pkg/flags: recognized and used environment variable ETCD_CERT_FILE=/rootfs/mnt/master-vol-0f0842ab5764ff968/pki/YDiWJXhaXWSx1pSDaAhTZg/clients/server.crt\n2021-05-25 16:15:34.789488 I | pkg/flags: recognized and used environment variable ETCD_CLIENT_CERT_AUTH=true\n2021-05-25 16:15:34.789526 I | pkg/flags: recognized and used environment variable ETCD_DATA_DIR=/rootfs/mnt/master-vol-0f0842ab5764ff968/data/YDiWJXhaXWSx1pSDaAhTZg\n2021-05-25 16:15:34.789556 I | pkg/flags: recognized and used environment variable ETCD_ENABLE_V2=false\n2021-05-25 16:15:34.789581 I | pkg/flags: recognized and used environment variable ETCD_INITIAL_ADVERTISE_PEER_URLS=https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2380\n2021-05-25 16:15:34.789586 I | pkg/flags: recognized and used environment variable ETCD_INITIAL_CLUSTER=etcd-a=https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2380\n2021-05-25 16:15:34.789596 I | pkg/flags: recognized and used environment variable ETCD_INITIAL_CLUSTER_STATE=existing\n2021-05-25 16:15:34.789603 I | pkg/flags: recognized and used environment variable ETCD_INITIAL_CLUSTER_TOKEN=YDiWJXhaXWSx1pSDaAhTZg\n2021-05-25 16:15:34.789608 I | pkg/flags: recognized and used environment variable 
ETCD_KEY_FILE=/rootfs/mnt/master-vol-0f0842ab5764ff968/pki/YDiWJXhaXWSx1pSDaAhTZg/clients/server.key\n2021-05-25 16:15:34.789635 I | pkg/flags: recognized and used environment variable ETCD_LISTEN_CLIENT_URLS=https://0.0.0.0:4001\n2021-05-25 16:15:34.789644 I | pkg/flags: recognized and used environment variable ETCD_LISTEN_PEER_URLS=https://0.0.0.0:2380\n2021-05-25 16:15:34.789651 I | pkg/flags: recognized and used environment variable ETCD_LOG_OUTPUTS=stdout\n2021-05-25 16:15:34.789659 I | pkg/flags: recognized and used environment variable ETCD_LOGGER=zap\n2021-05-25 16:15:34.789668 I | pkg/flags: recognized and used environment variable ETCD_NAME=etcd-a\n2021-05-25 16:15:34.789680 I | pkg/flags: recognized and used environment variable ETCD_PEER_CERT_FILE=/rootfs/mnt/master-vol-0f0842ab5764ff968/pki/YDiWJXhaXWSx1pSDaAhTZg/peers/me.crt\n2021-05-25 16:15:34.789685 I | pkg/flags: recognized and used environment variable ETCD_PEER_CLIENT_CERT_AUTH=true\n2021-05-25 16:15:34.789709 I | pkg/flags: recognized and used environment variable ETCD_PEER_KEY_FILE=/rootfs/mnt/master-vol-0f0842ab5764ff968/pki/YDiWJXhaXWSx1pSDaAhTZg/peers/me.key\n2021-05-25 16:15:34.789714 I | pkg/flags: recognized and used environment variable ETCD_PEER_TRUSTED_CA_FILE=/rootfs/mnt/master-vol-0f0842ab5764ff968/pki/YDiWJXhaXWSx1pSDaAhTZg/peers/ca.crt\n2021-05-25 16:15:34.789729 I | pkg/flags: recognized and used environment variable ETCD_TRUSTED_CA_FILE=/rootfs/mnt/master-vol-0f0842ab5764ff968/pki/YDiWJXhaXWSx1pSDaAhTZg/clients/ca.crt\n2021-05-25 16:15:34.789741 W | pkg/flags: unrecognized environment variable ETCD_LISTEN_METRICS_URLS=\n{\"level\":\"info\",\"ts\":\"2021-05-25T16:15:34.789Z\",\"caller\":\"etcdmain/etcd.go:134\",\"msg\":\"server has been already initialized\",\"data-dir\":\"/rootfs/mnt/master-vol-0f0842ab5764ff968/data/YDiWJXhaXWSx1pSDaAhTZg\",\"dir-type\":\"member\"}\n{\"level\":\"info\",\"ts\":\"2021-05-25T16:15:34.790Z\",\"caller\":\"embed/etcd.go:117\",\"msg\":\"configuring 
peer listeners\",\"listen-peer-urls\":[\"https://0.0.0.0:2380\"]}\n{\"level\":\"info\",\"ts\":\"2021-05-25T16:15:34.790Z\",\"caller\":\"embed/etcd.go:468\",\"msg\":\"starting with peer TLS\",\"tls-info\":\"cert = /rootfs/mnt/master-vol-0f0842ab5764ff968/pki/YDiWJXhaXWSx1pSDaAhTZg/peers/me.crt, key = /rootfs/mnt/master-vol-0f0842ab5764ff968/pki/YDiWJXhaXWSx1pSDaAhTZg/peers/me.key, trusted-ca = /rootfs/mnt/master-vol-0f0842ab5764ff968/pki/YDiWJXhaXWSx1pSDaAhTZg/peers/ca.crt, client-cert-auth = true, crl-file = \",\"cipher-suites\":[]}\n{\"level\":\"info\",\"ts\":\"2021-05-25T16:15:34.790Z\",\"caller\":\"embed/etcd.go:127\",\"msg\":\"configuring client listeners\",\"listen-client-urls\":[\"https://0.0.0.0:4001\"]}\n{\"level\":\"info\",\"ts\":\"2021-05-25T16:15:34.790Z\",\"caller\":\"embed/etcd.go:302\",\"msg\":\"starting an etcd server\",\"etcd-version\":\"3.4.13\",\"git-sha\":\"ae9734ed2\",\"go-version\":\"go1.12.17\",\"go-os\":\"linux\",\"go-arch\":\"amd64\",\"max-cpu-set\":2,\"max-cpu-available\":2,\"member-initialized\":true,\"name\":\"etcd-a\",\"data-dir\":\"/rootfs/mnt/master-vol-0f0842ab5764ff968/data/YDiWJXhaXWSx1pSDaAhTZg\",\"wal-dir\":\"\",\"wal-dir-dedicated\":\"\",\"member-dir\":\"/rootfs/mnt/master-vol-0f0842ab5764ff968/data/YDiWJXhaXWSx1pSDaAhTZg/member\",\"force-new-cluster\":false,\"heartbeat-interval\":\"100ms\",\"election-timeout\":\"1s\",\"initial-election-tick-advance\":true,\"snapshot-count\":100000,\"snapshot-catchup-entries\":5000,\"initial-advertise-peer-urls\":[\"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2380\"],\"listen-peer-urls\":[\"https://0.0.0.0:2380\"],\"advertise-client-urls\":[\"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4001\"],\"listen-client-urls\":[\"https://0.0.0.0:4001\"],\"listen-metrics-urls\":[],\"cors\":[\"*\"],\"host-whitelist\":[\"*\"],\"initial-cluster\":\"\",\"initial-cluster-state\":\"existing\",\"initial-cluster-token\":\"\",\"quota-size-bytes\":2147483648,\"pre-vote\":fals
e,\"initial-corrupt-check\":false,\"corrupt-check-time-interval\":\"0s\",\"auto-compaction-mode\":\"periodic\",\"auto-compaction-retention\":\"0s\",\"auto-compaction-interval\":\"0s\",\"discovery-url\":\"\",\"discovery-proxy\":\"\"}\n{\"level\":\"info\",\"ts\":\"2021-05-25T16:15:34.791Z\",\"caller\":\"etcdserver/backend.go:80\",\"msg\":\"opened backend db\",\"path\":\"/rootfs/mnt/master-vol-0f0842ab5764ff968/data/YDiWJXhaXWSx1pSDaAhTZg/member/snap/db\",\"took\":\"129.36µs\"}\n{\"level\":\"info\",\"ts\":\"2021-05-25T16:15:34.791Z\",\"caller\":\"etcdserver/raft.go:536\",\"msg\":\"restarting local member\",\"cluster-id\":\"fab34f02fca49a8b\",\"local-member-id\":\"1588356cecf6d48e\",\"commit-index\":4}\n{\"level\":\"info\",\"ts\":\"2021-05-25T16:15:34.792Z\",\"caller\":\"raft/raft.go:1530\",\"msg\":\"1588356cecf6d48e switched to configuration voters=()\"}\n{\"level\":\"info\",\"ts\":\"2021-05-25T16:15:34.792Z\",\"caller\":\"raft/raft.go:700\",\"msg\":\"1588356cecf6d48e became follower at term 2\"}\n{\"level\":\"info\",\"ts\":\"2021-05-25T16:15:34.792Z\",\"caller\":\"raft/raft.go:383\",\"msg\":\"newRaft 1588356cecf6d48e [peers: [], term: 2, commit: 4, applied: 0, lastindex: 4, lastterm: 2]\"}\n{\"level\":\"warn\",\"ts\":\"2021-05-25T16:15:34.793Z\",\"caller\":\"auth/store.go:1366\",\"msg\":\"simple token is not cryptographically signed\"}\n{\"level\":\"info\",\"ts\":\"2021-05-25T16:15:34.794Z\",\"caller\":\"etcdserver/quota.go:98\",\"msg\":\"enabled backend quota with default value\",\"quota-name\":\"v3-applier\",\"quota-size-bytes\":2147483648,\"quota-size\":\"2.1 GB\"}\n{\"level\":\"info\",\"ts\":\"2021-05-25T16:15:34.796Z\",\"caller\":\"etcdserver/server.go:803\",\"msg\":\"starting etcd server\",\"local-member-id\":\"1588356cecf6d48e\",\"local-server-version\":\"3.4.13\",\"cluster-version\":\"to_be_decided\"}\n{\"level\":\"info\",\"ts\":\"2021-05-25T16:15:34.797Z\",\"caller\":\"etcdserver/server.go:691\",\"msg\":\"starting initial election tick 
advance\",\"election-ticks\":10}\n{\"level\":\"info\",\"ts\":\"2021-05-25T16:15:34.798Z\",\"caller\":\"raft/raft.go:1530\",\"msg\":\"1588356cecf6d48e switched to configuration voters=(1551548813577475214)\"}\n{\"level\":\"info\",\"ts\":\"2021-05-25T16:15:34.798Z\",\"caller\":\"membership/cluster.go:392\",\"msg\":\"added member\",\"cluster-id\":\"fab34f02fca49a8b\",\"local-member-id\":\"1588356cecf6d48e\",\"added-peer-id\":\"1588356cecf6d48e\",\"added-peer-peer-urls\":[\"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2380\"]}\n{\"level\":\"info\",\"ts\":\"2021-05-25T16:15:34.798Z\",\"caller\":\"membership/cluster.go:558\",\"msg\":\"set initial cluster version\",\"cluster-id\":\"fab34f02fca49a8b\",\"local-member-id\":\"1588356cecf6d48e\",\"cluster-version\":\"3.4\"}\n{\"level\":\"info\",\"ts\":\"2021-05-25T16:15:34.798Z\",\"caller\":\"api/capability.go:76\",\"msg\":\"enabled capabilities for version\",\"cluster-version\":\"3.4\"}\n{\"level\":\"info\",\"ts\":\"2021-05-25T16:15:34.799Z\",\"caller\":\"embed/etcd.go:711\",\"msg\":\"starting with client TLS\",\"tls-info\":\"cert = /rootfs/mnt/master-vol-0f0842ab5764ff968/pki/YDiWJXhaXWSx1pSDaAhTZg/clients/server.crt, key = /rootfs/mnt/master-vol-0f0842ab5764ff968/pki/YDiWJXhaXWSx1pSDaAhTZg/clients/server.key, trusted-ca = /rootfs/mnt/master-vol-0f0842ab5764ff968/pki/YDiWJXhaXWSx1pSDaAhTZg/clients/ca.crt, client-cert-auth = true, crl-file = \",\"cipher-suites\":[]}\n{\"level\":\"info\",\"ts\":\"2021-05-25T16:15:34.799Z\",\"caller\":\"embed/etcd.go:244\",\"msg\":\"now serving 
peer/client/metrics\",\"local-member-id\":\"1588356cecf6d48e\",\"initial-advertise-peer-urls\":[\"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2380\"],\"listen-peer-urls\":[\"https://0.0.0.0:2380\"],\"advertise-client-urls\":[\"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4001\"],\"listen-client-urls\":[\"https://0.0.0.0:4001\"],\"listen-metrics-urls\":[]}\n{\"level\":\"info\",\"ts\":\"2021-05-25T16:15:34.799Z\",\"caller\":\"embed/etcd.go:579\",\"msg\":\"serving peer traffic\",\"address\":\"[::]:2380\"}\n{\"level\":\"info\",\"ts\":\"2021-05-25T16:15:35.792Z\",\"caller\":\"raft/raft.go:923\",\"msg\":\"1588356cecf6d48e is starting a new election at term 2\"}\n{\"level\":\"info\",\"ts\":\"2021-05-25T16:15:35.792Z\",\"caller\":\"raft/raft.go:713\",\"msg\":\"1588356cecf6d48e became candidate at term 3\"}\n{\"level\":\"info\",\"ts\":\"2021-05-25T16:15:35.792Z\",\"caller\":\"raft/raft.go:824\",\"msg\":\"1588356cecf6d48e received MsgVoteResp from 1588356cecf6d48e at term 3\"}\n{\"level\":\"info\",\"ts\":\"2021-05-25T16:15:35.792Z\",\"caller\":\"raft/raft.go:765\",\"msg\":\"1588356cecf6d48e became leader at term 3\"}\n{\"level\":\"info\",\"ts\":\"2021-05-25T16:15:35.792Z\",\"caller\":\"raft/node.go:325\",\"msg\":\"raft.node: 1588356cecf6d48e elected leader 1588356cecf6d48e at term 3\"}\n{\"level\":\"info\",\"ts\":\"2021-05-25T16:15:35.793Z\",\"caller\":\"etcdserver/server.go:2037\",\"msg\":\"published local member to cluster through raft\",\"local-member-id\":\"1588356cecf6d48e\",\"local-member-attributes\":\"{Name:etcd-a ClientURLs:[https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4001]}\",\"request-path\":\"/0/members/1588356cecf6d48e/attributes\",\"cluster-id\":\"fab34f02fca49a8b\",\"publish-timeout\":\"7s\"}\n{\"level\":\"info\",\"ts\":\"2021-05-25T16:15:35.794Z\",\"caller\":\"embed/serve.go:191\",\"msg\":\"serving client traffic securely\",\"address\":\"[::]:4001\"}\nI0525 16:15:35.811392    6450 
controller.go:302] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-a\",\"peerURLs\":[\"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2380\"],\"endpoints\":[\"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4001\"],\"ID\":\"1551548813577475214\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-a\" endpoints:\"172.20.44.17:3996\" }, info=cluster_name:\"etcd\" node_configuration:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2380\" client_urls:\"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:3994\" > etcd_state:<cluster:<cluster_token:\"YDiWJXhaXWSx1pSDaAhTZg\" nodes:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2380\" client_urls:\"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:3994\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0525 16:15:35.811484    6450 controller.go:303] etcd cluster members: map[1551548813577475214:{\"name\":\"etcd-a\",\"peerURLs\":[\"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2380\"],\"endpoints\":[\"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4001\"],\"ID\":\"1551548813577475214\"}]\nI0525 16:15:35.811651    6450 controller.go:641] sending member map to all peers: members:<name:\"etcd-a\" dns:\"etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io\" addresses:\"172.20.44.17\" > \nI0525 16:15:35.811949    6450 etcdserver.go:248] updating hosts: map[172.20.44.17:[etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io]]\nI0525 16:15:35.811970    6450 hosts.go:84] hosts update: primary=map[172.20.44.17:[etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io]], 
fallbacks=map[etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:[172.20.44.17 172.20.44.17]], final=map[172.20.44.17:[etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io]]
I0525 16:15:35.812132    6450 hosts.go:181] skipping update of unchanged /etc/hosts
I0525 16:15:35.812276    6450 commands.go:38] not refreshing commands - TTL not hit
I0525 16:15:35.812324    6450 s3fs.go:290] Reading file "s3://k8s-kops-prow/e2e-62197fe92d-da63e.test-cncf-aws.k8s.io/backups/etcd/main/control/etcd-cluster-created"
I0525 16:15:35.963607    6450 controller.go:395] spec member_count:1 etcd_version:"3.4.13" 
I0525 16:15:35.963794    6450 controller.go:557] controller loop complete
I0525 16:15:45.964988    6450 controller.go:189] starting controller iteration
I0525 16:15:45.965017    6450 controller.go:266] Broadcasting leadership assertion with token "52LrWjZ0-Jw_MPw1LSUSzw"
I0525 16:15:45.966093    6450 leadership.go:37] Got LeaderNotification view:<leader:<id:"etcd-a" endpoints:"172.20.44.17:3996" > leadership_token:"52LrWjZ0-Jw_MPw1LSUSzw" healthy:<id:"etcd-a" endpoints:"172.20.44.17:3996" > > 
I0525 16:15:45.966524    6450 controller.go:295] I am leader with token "52LrWjZ0-Jw_MPw1LSUSzw"
I0525 16:15:45.966956    6450 controller.go:705] base client OK for etcd for client urls [https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4001]
I0525 16:15:45.984653    6450 controller.go:302] etcd cluster state: etcdClusterState
  members:
    {"name":"etcd-a","peerURLs":["https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2380"],"endpoints":["https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4001"],"ID":"1551548813577475214"}
  peers:
    etcdClusterPeerInfo{peer=peer{id:"etcd-a" endpoints:"172.20.44.17:3996" }, info=cluster_name:"etcd" node_configuration:<name:"etcd-a" peer_urls:"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2380" client_urls:"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4001" quarantined_client_urls:"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:3994" > etcd_state:<cluster:<cluster_token:"YDiWJXhaXWSx1pSDaAhTZg" nodes:<name:"etcd-a" peer_urls:"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2380" client_urls:"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4001" quarantined_client_urls:"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:3994" tls_enabled:true > > etcd_version:"3.4.13" > }
I0525 16:15:45.984833    6450 controller.go:303] etcd cluster members: map[1551548813577475214:{"name":"etcd-a","peerURLs":["https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2380"],"endpoints":["https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4001"],"ID":"1551548813577475214"}]
I0525 16:15:45.984913    6450 controller.go:641] sending member map to all peers: members:<name:"etcd-a" dns:"etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io" addresses:"172.20.44.17" > 
I0525 16:15:45.985319    6450 etcdserver.go:248] updating hosts: map[172.20.44.17:[etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io]]
I0525 16:15:45.985338    6450 hosts.go:84] hosts update: primary=map[172.20.44.17:[etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io]], fallbacks=map[etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:[172.20.44.17 172.20.44.17]], final=map[172.20.44.17:[etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io]]
I0525 16:15:45.985395    6450 hosts.go:181] skipping update of unchanged /etc/hosts
I0525 16:15:45.985540    6450 commands.go:38] not refreshing commands - TTL not hit
I0525 16:15:45.985663    6450 s3fs.go:290] Reading file "s3://k8s-kops-prow/e2e-62197fe92d-da63e.test-cncf-aws.k8s.io/backups/etcd/main/control/etcd-cluster-created"
I0525 16:15:46.588024    6450 controller.go:395] spec member_count:1 etcd_version:"3.4.13" 
I0525 16:15:46.588243    6450 controller.go:557] controller loop complete
I0525 16:15:56.589894    6450 controller.go:189] starting controller iteration
I0525 16:15:56.589920    6450 controller.go:266] Broadcasting leadership assertion with token "52LrWjZ0-Jw_MPw1LSUSzw"
I0525 16:15:56.590278    6450 leadership.go:37] Got LeaderNotification view:<leader:<id:"etcd-a" endpoints:"172.20.44.17:3996" > leadership_token:"52LrWjZ0-Jw_MPw1LSUSzw" healthy:<id:"etcd-a" endpoints:"172.20.44.17:3996" > > 
I0525 16:15:56.590497    6450 controller.go:295] I am leader with token "52LrWjZ0-Jw_MPw1LSUSzw"
I0525 16:15:56.591136    6450 controller.go:705] base client OK for etcd for client urls [https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4001]
I0525 16:15:56.603067    6450 controller.go:302] etcd cluster state: etcdClusterState
  members:
    {"name":"etcd-a","peerURLs":["https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2380"],"endpoints":["https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4001"],"ID":"1551548813577475214"}
  peers:
    etcdClusterPeerInfo{peer=peer{id:"etcd-a" endpoints:"172.20.44.17:3996" }, info=cluster_name:"etcd" node_configuration:<name:"etcd-a" peer_urls:"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2380" client_urls:"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4001" quarantined_client_urls:"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:3994" > etcd_state:<cluster:<cluster_token:"YDiWJXhaXWSx1pSDaAhTZg" nodes:<name:"etcd-a" peer_urls:"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2380" client_urls:"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4001" quarantined_client_urls:"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:3994" tls_enabled:true > > etcd_version:"3.4.13" > }
I0525 16:15:56.603546    6450 controller.go:303] etcd cluster members: map[1551548813577475214:{"name":"etcd-a","peerURLs":["https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2380"],"endpoints":["https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4001"],"ID":"1551548813577475214"}]
I0525 16:15:56.603708    6450 controller.go:641] sending member map to all peers: members:<name:"etcd-a" dns:"etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io" addresses:"172.20.44.17" > 
I0525 16:15:56.604321    6450 etcdserver.go:248] updating hosts: map[172.20.44.17:[etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io]]
I0525 16:15:56.604372    6450 hosts.go:84] hosts update: primary=map[172.20.44.17:[etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io]], fallbacks=map[etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:[172.20.44.17 172.20.44.17]], final=map[172.20.44.17:[etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io]]
I0525 16:15:56.604453    6450 hosts.go:181] skipping update of unchanged /etc/hosts
I0525 16:15:56.604575    6450 commands.go:38] not refreshing commands - TTL not hit
I0525 16:15:56.604613    6450 s3fs.go:290] Reading file "s3://k8s-kops-prow/e2e-62197fe92d-da63e.test-cncf-aws.k8s.io/backups/etcd/main/control/etcd-cluster-created"
I0525 16:15:57.210833    6450 controller.go:395] spec member_count:1 etcd_version:"3.4.13" 
I0525 16:15:57.211047    6450 controller.go:557] controller loop complete
I0525 16:16:07.212218    6450 controller.go:189] starting controller iteration
I0525 16:16:07.212247    6450 controller.go:266] Broadcasting leadership assertion with token "52LrWjZ0-Jw_MPw1LSUSzw"
I0525 16:16:07.212561    6450 leadership.go:37] Got LeaderNotification view:<leader:<id:"etcd-a" endpoints:"172.20.44.17:3996" > leadership_token:"52LrWjZ0-Jw_MPw1LSUSzw" healthy:<id:"etcd-a" endpoints:"172.20.44.17:3996" > > 
I0525 16:16:07.212778    6450 controller.go:295] I am leader with token "52LrWjZ0-Jw_MPw1LSUSzw"
I0525 16:16:07.213263    6450 controller.go:705] base client OK for etcd for client urls [https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4001]
I0525 16:16:07.224537    6450 controller.go:302] etcd cluster state: etcdClusterState
  members:
    {"name":"etcd-a","peerURLs":["https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2380"],"endpoints":["https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4001"],"ID":"1551548813577475214"}
  peers:
    etcdClusterPeerInfo{peer=peer{id:"etcd-a" endpoints:"172.20.44.17:3996" }, info=cluster_name:"etcd" node_configuration:<name:"etcd-a" peer_urls:"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2380" client_urls:"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4001" quarantined_client_urls:"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:3994" > etcd_state:<cluster:<cluster_token:"YDiWJXhaXWSx1pSDaAhTZg" nodes:<name:"etcd-a" peer_urls:"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2380" client_urls:"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4001" quarantined_client_urls:"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:3994" tls_enabled:true > > etcd_version:"3.4.13" > }
I0525 16:16:07.224647    6450 controller.go:303] etcd cluster members: map[1551548813577475214:{"name":"etcd-a","peerURLs":["https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2380"],"endpoints":["https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4001"],"ID":"1551548813577475214"}]
I0525 16:16:07.224692    6450 controller.go:641] sending member map to all peers: members:<name:"etcd-a" dns:"etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io" addresses:"172.20.44.17" > 
I0525 16:16:07.224968    6450 etcdserver.go:248] updating hosts: map[172.20.44.17:[etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io]]
I0525 16:16:07.224985    6450 hosts.go:84] hosts update: primary=map[172.20.44.17:[etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io]], fallbacks=map[etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:[172.20.44.17 172.20.44.17]], final=map[172.20.44.17:[etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io]]
I0525 16:16:07.225035    6450 hosts.go:181] skipping update of unchanged /etc/hosts
I0525 16:16:07.225109    6450 commands.go:38] not refreshing commands - TTL not hit
I0525 16:16:07.225121    6450 s3fs.go:290] Reading file "s3://k8s-kops-prow/e2e-62197fe92d-da63e.test-cncf-aws.k8s.io/backups/etcd/main/control/etcd-cluster-created"
I0525 16:16:07.818096    6450 controller.go:395] spec member_count:1 etcd_version:"3.4.13" 
I0525 16:16:07.818216    6450 controller.go:557] controller loop complete
I0525 16:16:17.819582    6450 controller.go:189] starting controller iteration
I0525 16:16:17.819609    6450 controller.go:266] Broadcasting leadership assertion with token "52LrWjZ0-Jw_MPw1LSUSzw"
I0525 16:16:17.819987    6450 leadership.go:37] Got LeaderNotification view:<leader:<id:"etcd-a" endpoints:"172.20.44.17:3996" > leadership_token:"52LrWjZ0-Jw_MPw1LSUSzw" healthy:<id:"etcd-a" endpoints:"172.20.44.17:3996" > > 
I0525 16:16:17.820211    6450 controller.go:295] I am leader with token "52LrWjZ0-Jw_MPw1LSUSzw"
I0525 16:16:17.820783    6450 controller.go:705] base client OK for etcd for client urls [https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4001]
I0525 16:16:17.835551    6450 controller.go:302] etcd cluster state: etcdClusterState
  members:
    {"name":"etcd-a","peerURLs":["https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2380"],"endpoints":["https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4001"],"ID":"1551548813577475214"}
  peers:
    etcdClusterPeerInfo{peer=peer{id:"etcd-a" endpoints:"172.20.44.17:3996" }, info=cluster_name:"etcd" node_configuration:<name:"etcd-a" peer_urls:"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2380" client_urls:"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4001" quarantined_client_urls:"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:3994" > etcd_state:<cluster:<cluster_token:"YDiWJXhaXWSx1pSDaAhTZg" nodes:<name:"etcd-a" peer_urls:"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2380" client_urls:"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4001" quarantined_client_urls:"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:3994" tls_enabled:true > > etcd_version:"3.4.13" > }
I0525 16:16:17.835628    6450 controller.go:303] etcd cluster members: map[1551548813577475214:{"name":"etcd-a","peerURLs":["https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2380"],"endpoints":["https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4001"],"ID":"1551548813577475214"}]
I0525 16:16:17.835839    6450 controller.go:641] sending member map to all peers: members:<name:"etcd-a" dns:"etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io" addresses:"172.20.44.17" > 
I0525 16:16:17.836540    6450 etcdserver.go:248] updating hosts: map[172.20.44.17:[etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io]]
I0525 16:16:17.836552    6450 hosts.go:84] hosts update: primary=map[172.20.44.17:[etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io]], fallbacks=map[etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:[172.20.44.17 172.20.44.17]], final=map[172.20.44.17:[etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io]]
I0525 16:16:17.836594    6450 hosts.go:181] skipping update of unchanged /etc/hosts
I0525 16:16:17.836658    6450 commands.go:38] not refreshing commands - TTL not hit
I0525 16:16:17.836666    6450 s3fs.go:290] Reading file "s3://k8s-kops-prow/e2e-62197fe92d-da63e.test-cncf-aws.k8s.io/backups/etcd/main/control/etcd-cluster-created"
I0525 16:16:18.431482    6450 controller.go:395] spec member_count:1 etcd_version:"3.4.13" 
I0525 16:16:18.431703    6450 controller.go:557] controller loop complete
I0525 16:16:18.661878    6450 volumes.go:86] AWS API Request: ec2/DescribeVolumes
I0525 16:16:18.715173    6450 volumes.go:86] AWS API Request: ec2/DescribeInstances
I0525 16:16:18.760549    6450 hosts.go:84] hosts update: primary=map[172.20.44.17:[etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io]], fallbacks=map[etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:[172.20.44.17 172.20.44.17]], final=map[172.20.44.17:[etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io]]
I0525 16:16:18.760628    6450 hosts.go:181] skipping update of unchanged /etc/hosts
I0525 16:16:28.433703    6450 controller.go:189] starting controller iteration
I0525 16:16:28.433766    6450 controller.go:266] Broadcasting leadership assertion with token "52LrWjZ0-Jw_MPw1LSUSzw"
I0525 16:16:28.434051    6450 leadership.go:37] Got LeaderNotification view:<leader:<id:"etcd-a" endpoints:"172.20.44.17:3996" > leadership_token:"52LrWjZ0-Jw_MPw1LSUSzw" healthy:<id:"etcd-a" endpoints:"172.20.44.17:3996" > > 
I0525 16:16:28.434295    6450 controller.go:295] I am leader with token "52LrWjZ0-Jw_MPw1LSUSzw"
I0525 16:16:28.434825    6450 controller.go:705] base client OK for etcd for client urls [https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4001]
I0525 16:16:28.446279    6450 controller.go:302] etcd cluster state: etcdClusterState
  members:
    {"name":"etcd-a","peerURLs":["https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2380"],"endpoints":["https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4001"],"ID":"1551548813577475214"}
  peers:
    etcdClusterPeerInfo{peer=peer{id:"etcd-a" endpoints:"172.20.44.17:3996" }, info=cluster_name:"etcd" node_configuration:<name:"etcd-a" peer_urls:"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2380" client_urls:"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4001" quarantined_client_urls:"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:3994" > etcd_state:<cluster:<cluster_token:"YDiWJXhaXWSx1pSDaAhTZg" nodes:<name:"etcd-a" peer_urls:"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2380" client_urls:"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4001" quarantined_client_urls:"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:3994" tls_enabled:true > > etcd_version:"3.4.13" > }
I0525 16:16:28.446356    6450 controller.go:303] etcd cluster members: map[1551548813577475214:{"name":"etcd-a","peerURLs":["https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2380"],"endpoints":["https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4001"],"ID":"1551548813577475214"}]
I0525 16:16:28.446482    6450 controller.go:641] sending member map to all peers: members:<name:"etcd-a" dns:"etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io" addresses:"172.20.44.17" > 
I0525 16:16:28.446761    6450 etcdserver.go:248] updating hosts: map[172.20.44.17:[etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io]]
I0525 16:16:28.446778    6450 hosts.go:84] hosts update: primary=map[172.20.44.17:[etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io]], fallbacks=map[etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:[172.20.44.17 172.20.44.17]], final=map[172.20.44.17:[etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io]]
I0525 16:16:28.446853    6450 hosts.go:181] skipping update of unchanged /etc/hosts
I0525 16:16:28.446987    6450 commands.go:38] not refreshing commands - TTL not hit
I0525 16:16:28.447011    6450 s3fs.go:290] Reading file "s3://k8s-kops-prow/e2e-62197fe92d-da63e.test-cncf-aws.k8s.io/backups/etcd/main/control/etcd-cluster-created"
I0525 16:16:29.041773    6450 controller.go:395] spec member_count:1 etcd_version:"3.4.13" 
I0525 16:16:29.041846    6450 controller.go:557] controller loop complete
I0525 16:16:39.043012    6450 controller.go:189] starting controller iteration
I0525 16:16:39.043038    6450 controller.go:266] Broadcasting leadership assertion with token "52LrWjZ0-Jw_MPw1LSUSzw"
I0525 16:16:39.043389    6450 leadership.go:37] Got LeaderNotification view:<leader:<id:"etcd-a" endpoints:"172.20.44.17:3996" > leadership_token:"52LrWjZ0-Jw_MPw1LSUSzw" healthy:<id:"etcd-a" endpoints:"172.20.44.17:3996" > > 
I0525 16:16:39.043568    6450 controller.go:295] I am leader with token "52LrWjZ0-Jw_MPw1LSUSzw"
I0525 16:16:39.044121    6450 controller.go:705] base client OK for etcd for client urls [https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4001]
I0525 16:16:39.067049    6450 controller.go:302] etcd cluster state: etcdClusterState
  members:
    {"name":"etcd-a","peerURLs":["https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2380"],"endpoints":["https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4001"],"ID":"1551548813577475214"}
  peers:
    etcdClusterPeerInfo{peer=peer{id:"etcd-a" endpoints:"172.20.44.17:3996" }, info=cluster_name:"etcd" node_configuration:<name:"etcd-a" peer_urls:"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2380" client_urls:"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4001" quarantined_client_urls:"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:3994" > etcd_state:<cluster:<cluster_token:"YDiWJXhaXWSx1pSDaAhTZg" nodes:<name:"etcd-a" peer_urls:"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2380" client_urls:"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4001" quarantined_client_urls:"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:3994" tls_enabled:true > > etcd_version:"3.4.13" > }
I0525 16:16:39.067404    6450 controller.go:303] etcd cluster members: map[1551548813577475214:{"name":"etcd-a","peerURLs":["https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2380"],"endpoints":["https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4001"],"ID":"1551548813577475214"}]
I0525 16:16:39.067455    6450 controller.go:641] sending member map to all peers: members:<name:"etcd-a" dns:"etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io" addresses:"172.20.44.17" > 
I0525 16:16:39.067656    6450 etcdserver.go:248] updating hosts: map[172.20.44.17:[etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io]]
I0525 16:16:39.067694    6450 hosts.go:84] hosts update: primary=map[172.20.44.17:[etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io]], fallbacks=map[etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:[172.20.44.17 172.20.44.17]], final=map[172.20.44.17:[etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io]]
I0525 16:16:39.067764    6450 hosts.go:181] skipping update of unchanged /etc/hosts
I0525 16:16:39.067868    6450 commands.go:38] not refreshing commands - TTL not hit
I0525 16:16:39.067895    6450 s3fs.go:290] Reading file "s3://k8s-kops-prow/e2e-62197fe92d-da63e.test-cncf-aws.k8s.io/backups/etcd/main/control/etcd-cluster-created"
I0525 16:16:39.662448    6450 controller.go:395] spec member_count:1 etcd_version:"3.4.13" 
I0525 16:16:39.662641    6450 controller.go:557] controller loop complete
I0525 16:16:49.664699    6450 controller.go:189] starting controller iteration
I0525 16:16:49.664726    6450 controller.go:266] Broadcasting leadership assertion with token "52LrWjZ0-Jw_MPw1LSUSzw"
I0525 16:16:49.665094    6450 leadership.go:37] Got LeaderNotification view:<leader:<id:"etcd-a" endpoints:"172.20.44.17:3996" > leadership_token:"52LrWjZ0-Jw_MPw1LSUSzw" healthy:<id:"etcd-a" endpoints:"172.20.44.17:3996" > > 
I0525 16:16:49.665277    6450 controller.go:295] I am leader with token "52LrWjZ0-Jw_MPw1LSUSzw"
I0525 16:16:49.665755    6450 controller.go:705] base client OK for etcd for client urls [https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4001]
I0525 16:16:49.682712    6450 controller.go:302] etcd cluster state: etcdClusterState
  members:
    {"name":"etcd-a","peerURLs":["https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2380"],"endpoints":["https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4001"],"ID":"1551548813577475214"}
  peers:
    etcdClusterPeerInfo{peer=peer{id:"etcd-a" endpoints:"172.20.44.17:3996" }, info=cluster_name:"etcd" node_configuration:<name:"etcd-a" peer_urls:"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2380" client_urls:"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4001" quarantined_client_urls:"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:3994" > etcd_state:<cluster:<cluster_token:"YDiWJXhaXWSx1pSDaAhTZg" nodes:<name:"etcd-a" peer_urls:"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2380" client_urls:"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4001" quarantined_client_urls:"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:3994" tls_enabled:true > > etcd_version:"3.4.13" > }
I0525 16:16:49.682803    6450 controller.go:303] etcd cluster members: map[1551548813577475214:{"name":"etcd-a","peerURLs":["https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2380"],"endpoints":["https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4001"],"ID":"1551548813577475214"}]
I0525 16:16:49.682821    6450 controller.go:641] sending member map to all peers: members:<name:"etcd-a" dns:"etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io" addresses:"172.20.44.17" > 
I0525 16:16:49.683050    6450 etcdserver.go:248] updating hosts: map[172.20.44.17:[etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io]]
I0525 16:16:49.683064    6450 hosts.go:84] hosts update: primary=map[172.20.44.17:[etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io]], fallbacks=map[etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:[172.20.44.17 172.20.44.17]], final=map[172.20.44.17:[etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io]]
I0525 16:16:49.683153    6450 hosts.go:181] skipping update of unchanged /etc/hosts
I0525 16:16:49.683301    6450 commands.go:38] not refreshing commands - TTL not hit
I0525 16:16:49.683315    6450 s3fs.go:290] Reading file "s3://k8s-kops-prow/e2e-62197fe92d-da63e.test-cncf-aws.k8s.io/backups/etcd/main/control/etcd-cluster-created"
I0525 16:16:50.287077    6450 controller.go:395] spec member_count:1 etcd_version:"3.4.13" 
I0525 16:16:50.287145    6450 controller.go:557] controller loop complete
I0525 16:17:00.288340    6450 controller.go:189] starting controller iteration
I0525 16:17:00.288368    6450 controller.go:266] Broadcasting leadership assertion with token "52LrWjZ0-Jw_MPw1LSUSzw"
I0525 16:17:00.288627    6450 leadership.go:37] Got LeaderNotification view:<leader:<id:"etcd-a" endpoints:"172.20.44.17:3996" > leadership_token:"52LrWjZ0-Jw_MPw1LSUSzw" healthy:<id:"etcd-a" endpoints:"172.20.44.17:3996" > > 
I0525 16:17:00.288789    6450 controller.go:295] I am leader with token "52LrWjZ0-Jw_MPw1LSUSzw"
I0525 16:17:00.289259    6450 controller.go:705] base client OK for etcd for client urls [https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4001]
I0525 16:17:00.304807    6450 controller.go:302] etcd cluster state: etcdClusterState
  members:
    {"name":"etcd-a","peerURLs":["https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2380"],"endpoints":["https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4001"],"ID":"1551548813577475214"}
  peers:
    etcdClusterPeerInfo{peer=peer{id:"etcd-a" endpoints:"172.20.44.17:3996" }, info=cluster_name:"etcd" node_configuration:<name:"etcd-a" peer_urls:"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2380" client_urls:"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4001" quarantined_client_urls:"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:3994" > etcd_state:<cluster:<cluster_token:"YDiWJXhaXWSx1pSDaAhTZg" nodes:<name:"etcd-a" peer_urls:"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2380" client_urls:"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4001" quarantined_client_urls:"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:3994" tls_enabled:true > > etcd_version:"3.4.13" > }
I0525 16:17:00.304983    6450 controller.go:303] etcd cluster members: map[1551548813577475214:{"name":"etcd-a","peerURLs":["https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2380"],"endpoints":["https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4001"],"ID":"1551548813577475214"}]
I0525 16:17:00.305036    6450 controller.go:641] sending member map to all peers: members:<name:"etcd-a" dns:"etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io" addresses:"172.20.44.17" > 
I0525 16:17:00.305375    6450 etcdserver.go:248] updating hosts: map[172.20.44.17:[etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io]]
I0525 16:17:00.305425    6450 hosts.go:84] hosts update: primary=map[172.20.44.17:[etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io]], fallbacks=map[etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:[172.20.44.17 172.20.44.17]], final=map[172.20.44.17:[etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io]]
I0525 16:17:00.305517    6450 hosts.go:181] skipping update of unchanged /etc/hosts
I0525 16:17:00.305685    6450 commands.go:38] not refreshing commands - TTL not hit
I0525 16:17:00.305730    6450 s3fs.go:290] Reading file "s3://k8s-kops-prow/e2e-62197fe92d-da63e.test-cncf-aws.k8s.io/backups/etcd/main/control/etcd-cluster-created"
I0525 16:17:00.911985    6450 controller.go:395] spec member_count:1 etcd_version:"3.4.13" 
I0525 16:17:00.912221    6450 controller.go:557] controller loop complete
I0525 16:17:10.913424    6450 controller.go:189] starting controller iteration
I0525 16:17:10.913512    6450 controller.go:266] Broadcasting leadership assertion with token "52LrWjZ0-Jw_MPw1LSUSzw"
I0525 16:17:10.913849    6450 leadership.go:37] Got LeaderNotification view:<leader:<id:"etcd-a" endpoints:"172.20.44.17:3996" > leadership_token:"52LrWjZ0-Jw_MPw1LSUSzw" healthy:<id:"etcd-a" endpoints:"172.20.44.17:3996" > > 
I0525 16:17:10.914151    6450 controller.go:295] I am leader with token "52LrWjZ0-Jw_MPw1LSUSzw"
I0525 16:17:10.914660    6450 controller.go:705] base client OK for etcd for client urls [https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4001]
I0525 16:17:10.926187    6450 controller.go:302] etcd cluster state: etcdClusterState
  members:
    {"name":"etcd-a","peerURLs":["https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2380"],"endpoints":["https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4001"],"ID":"1551548813577475214"}
  peers:
    etcdClusterPeerInfo{peer=peer{id:"etcd-a" endpoints:"172.20.44.17:3996" }, info=cluster_name:"etcd" node_configuration:<name:"etcd-a" peer_urls:"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2380" client_urls:"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4001" quarantined_client_urls:"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:3994" > etcd_state:<cluster:<cluster_token:"YDiWJXhaXWSx1pSDaAhTZg" nodes:<name:"etcd-a" peer_urls:"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2380" client_urls:"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4001" quarantined_client_urls:"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:3994" tls_enabled:true > > etcd_version:"3.4.13" > }
I0525 16:17:10.926292    6450 controller.go:303] etcd cluster members: map[1551548813577475214:{"name":"etcd-a","peerURLs":["https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2380"],"endpoints":["https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4001"],"ID":"1551548813577475214"}]
I0525 16:17:10.926370    6450 controller.go:641] sending member map to all peers: members:<name:"etcd-a" dns:"etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io" addresses:"172.20.44.17" > 
I0525 16:17:10.926626    6450 etcdserver.go:248] updating hosts: map[172.20.44.17:[etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io]]
I0525 16:17:10.926654    6450 hosts.go:84] hosts update: primary=map[172.20.44.17:[etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io]], fallbacks=map[etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:[172.20.44.17 172.20.44.17]], final=map[172.20.44.17:[etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io]]
I0525 16:17:10.926729    6450 hosts.go:181] skipping update of unchanged /etc/hosts
I0525 16:17:10.926865    6450 commands.go:38] not refreshing commands - TTL not hit
I0525 16:17:10.926881    6450 s3fs.go:290] Reading file "s3://k8s-kops-prow/e2e-62197fe92d-da63e.test-cncf-aws.k8s.io/backups/etcd/main/control/etcd-cluster-created"
I0525 16:17:11.517831    6450 controller.go:395] spec member_count:1 etcd_version:"3.4.13" 
I0525 16:17:11.517921    6450 controller.go:557] controller loop complete
I0525 16:17:18.761823    6450 volumes.go:86] AWS API Request: ec2/DescribeVolumes
I0525 16:17:18.814644    6450 volumes.go:86] AWS API Request: ec2/DescribeInstances
I0525 16:17:18.858161    6450 hosts.go:84] hosts update: primary=map[172.20.44.17:[etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io]], fallbacks=map[etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:[172.20.44.17 172.20.44.17]], final=map[172.20.44.17:[etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io]]
I0525 16:17:18.858317    6450 hosts.go:181] skipping update of unchanged /etc/hosts
I0525 16:17:21.519354    6450 controller.go:189] starting controller iteration
I0525 16:17:21.519382    6450 controller.go:266] Broadcasting leadership assertion with token "52LrWjZ0-Jw_MPw1LSUSzw"
I0525 16:17:21.519754    6450 leadership.go:37] Got LeaderNotification view:<leader:<id:"etcd-a" endpoints:"172.20.44.17:3996" > leadership_token:"52LrWjZ0-Jw_MPw1LSUSzw" healthy:<id:"etcd-a" endpoints:"172.20.44.17:3996" > > 
I0525 16:17:21.519876    6450 controller.go:295] I am leader with token "52LrWjZ0-Jw_MPw1LSUSzw"
I0525 16:17:21.520497    6450 controller.go:705] base client OK for etcd for client urls [https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4001]
I0525 16:17:21.533631    6450 controller.go:302] etcd cluster state: etcdClusterState
  members:
    {"name":"etcd-a","peerURLs":["https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2380"],"endpoints":["https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4001"],"ID":"1551548813577475214"}
  peers:
    etcdClusterPeerInfo{peer=peer{id:"etcd-a" endpoints:"172.20.44.17:3996" }, info=cluster_name:"etcd" node_configuration:<name:"etcd-a" peer_urls:"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2380" client_urls:"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4001" quarantined_client_urls:"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:3994" > etcd_state:<cluster:<cluster_token:"YDiWJXhaXWSx1pSDaAhTZg" nodes:<name:"etcd-a" peer_urls:"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2380" client_urls:"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4001" quarantined_client_urls:"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:3994" tls_enabled:true > > etcd_version:"3.4.13" > }
I0525 16:17:21.533709    6450 controller.go:303] etcd cluster members: map[1551548813577475214:{"name":"etcd-a","peerURLs":["https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2380"],"endpoints":["https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4001"],"ID":"1551548813577475214"}]
I0525 16:17:21.533725    6450 controller.go:641] sending member map to all peers: members:<name:"etcd-a" dns:"etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io" addresses:"172.20.44.17" > 
I0525 16:17:21.534026    6450 etcdserver.go:248] updating hosts: map[172.20.44.17:[etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io]]
I0525 16:17:21.534044    6450 hosts.go:84] hosts update: primary=map[172.20.44.17:[etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io]], fallbacks=map[etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:[172.20.44.17 172.20.44.17]], final=map[172.20.44.17:[etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io]]
I0525 16:17:21.534154    6450 hosts.go:181] skipping update of unchanged /etc/hosts
I0525 16:17:21.534283    6450 commands.go:38] not refreshing commands - TTL not hit
I0525 16:17:21.534298    6450 s3fs.go:290] Reading file "s3://k8s-kops-prow/e2e-62197fe92d-da63e.test-cncf-aws.k8s.io/backups/etcd/main/control/etcd-cluster-created"
I0525 16:17:22.140467    6450 controller.go:395] spec member_count:1 etcd_version:"3.4.13" 
I0525 16:17:22.140558    6450 controller.go:557] controller loop complete
I0525 16:17:32.142211    6450 controller.go:189] starting controller iteration
I0525 16:17:32.142256    6450 controller.go:266] Broadcasting leadership assertion with token "52LrWjZ0-Jw_MPw1LSUSzw"
I0525 16:17:32.142522    6450 leadership.go:37] Got LeaderNotification view:<leader:<id:"etcd-a" endpoints:"172.20.44.17:3996" > leadership_token:"52LrWjZ0-Jw_MPw1LSUSzw" healthy:<id:"etcd-a" endpoints:"172.20.44.17:3996" > > 
I0525 16:17:32.142639    6450 controller.go:295] I am leader with token "52LrWjZ0-Jw_MPw1LSUSzw"
I0525 16:17:32.142967    6450 controller.go:705] base client OK for etcd for client urls [https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4001]
I0525 16:17:32.154546    6450 controller.go:302] etcd cluster state: etcdClusterState
  members:
    {"name":"etcd-a","peerURLs":["https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2380"],"endpoints":["https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4001"],"ID":"1551548813577475214"}
  peers:
    etcdClusterPeerInfo{peer=peer{id:"etcd-a" endpoints:"172.20.44.17:3996" }, info=cluster_name:"etcd" node_configuration:<name:"etcd-a" peer_urls:"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2380" client_urls:"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4001" quarantined_client_urls:"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:3994" > etcd_state:<cluster:<cluster_token:"YDiWJXhaXWSx1pSDaAhTZg" nodes:<name:"etcd-a" peer_urls:"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2380" client_urls:"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4001" quarantined_client_urls:"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:3994" tls_enabled:true > > etcd_version:"3.4.13" > }
I0525 16:17:32.154649    6450 controller.go:303] etcd cluster members: map[1551548813577475214:{"name":"etcd-a","peerURLs":["https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2380"],"endpoints":["https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4001"],"ID":"1551548813577475214"}]
I0525 16:17:32.154667    6450 controller.go:641] sending member map to all peers: members:<name:"etcd-a" dns:"etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io" addresses:"172.20.44.17" > 
I0525 16:17:32.154886    6450 etcdserver.go:248] updating hosts: map[172.20.44.17:[etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io]]
I0525 16:17:32.154901    6450 hosts.go:84] hosts update: primary=map[172.20.44.17:[etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io]], fallbacks=map[etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:[172.20.44.17 172.20.44.17]], final=map[172.20.44.17:[etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io]]
I0525 16:17:32.154980    6450 hosts.go:181] skipping update of unchanged /etc/hosts
I0525 16:17:32.155102    6450 commands.go:38] not refreshing commands - TTL not hit
I0525 16:17:32.155162    6450 s3fs.go:290] Reading file "s3://k8s-kops-prow/e2e-62197fe92d-da63e.test-cncf-aws.k8s.io/backups/etcd/main/control/etcd-cluster-created"
I0525 16:17:32.747855    6450 controller.go:395] spec member_count:1 etcd_version:"3.4.13" 
I0525 16:17:32.748083    6450 controller.go:557] controller loop complete
I0525 16:17:42.749256    6450 controller.go:189] starting controller iteration
I0525 16:17:42.749281    6450 controller.go:266] Broadcasting leadership assertion with token "52LrWjZ0-Jw_MPw1LSUSzw"
I0525 16:17:42.749636    6450 leadership.go:37] Got LeaderNotification view:<leader:<id:"etcd-a" endpoints:"172.20.44.17:3996" > leadership_token:"52LrWjZ0-Jw_MPw1LSUSzw" healthy:<id:"etcd-a" endpoints:"172.20.44.17:3996" > > 
I0525 16:17:42.749765    6450 controller.go:295] I am leader with token "52LrWjZ0-Jw_MPw1LSUSzw"
I0525 16:17:42.750869    6450 controller.go:705] base client OK for etcd for client urls [https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4001]
I0525 16:17:42.765817    6450 controller.go:302] etcd cluster state: etcdClusterState
  members:
    {"name":"etcd-a","peerURLs":["https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2380"],"endpoints":["https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4001"],"ID":"1551548813577475214"}
  peers:
    etcdClusterPeerInfo{peer=peer{id:"etcd-a" endpoints:"172.20.44.17:3996" }, info=cluster_name:"etcd" node_configuration:<name:"etcd-a" peer_urls:"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2380" client_urls:"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4001" quarantined_client_urls:"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:3994" > etcd_state:<cluster:<cluster_token:"YDiWJXhaXWSx1pSDaAhTZg" nodes:<name:"etcd-a" peer_urls:"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2380" client_urls:"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4001" quarantined_client_urls:"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:3994" tls_enabled:true > > etcd_version:"3.4.13" > }
I0525 16:17:42.765986    6450 controller.go:303] etcd cluster members: map[1551548813577475214:{"name":"etcd-a","peerURLs":["https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2380"],"endpoints":["https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4001"],"ID":"1551548813577475214"}]
I0525 16:17:42.766153    6450 controller.go:641] sending member map to all peers: members:<name:"etcd-a" dns:"etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io" addresses:"172.20.44.17" > 
I0525 16:17:42.766334    6450 etcdserver.go:248] updating hosts: 
map[172.20.44.17:[etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io]]\nI0525 16:17:42.766352    6450 hosts.go:84] hosts update: primary=map[172.20.44.17:[etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io]], fallbacks=map[etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:[172.20.44.17 172.20.44.17]], final=map[172.20.44.17:[etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io]]\nI0525 16:17:42.766406    6450 hosts.go:181] skipping update of unchanged /etc/hosts\nI0525 16:17:42.766492    6450 commands.go:38] not refreshing commands - TTL not hit\nI0525 16:17:42.766504    6450 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-62197fe92d-da63e.test-cncf-aws.k8s.io/backups/etcd/main/control/etcd-cluster-created\"\nI0525 16:17:43.423257    6450 controller.go:395] spec member_count:1 etcd_version:\"3.4.13\" \nI0525 16:17:43.423328    6450 controller.go:557] controller loop complete\nI0525 16:17:53.425422    6450 controller.go:189] starting controller iteration\nI0525 16:17:53.425450    6450 controller.go:266] Broadcasting leadership assertion with token \"52LrWjZ0-Jw_MPw1LSUSzw\"\nI0525 16:17:53.425792    6450 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-a\" endpoints:\"172.20.44.17:3996\" > leadership_token:\"52LrWjZ0-Jw_MPw1LSUSzw\" healthy:<id:\"etcd-a\" endpoints:\"172.20.44.17:3996\" > > \nI0525 16:17:53.425915    6450 controller.go:295] I am leader with token \"52LrWjZ0-Jw_MPw1LSUSzw\"\nI0525 16:17:53.426408    6450 controller.go:705] base client OK for etcd for client urls [https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4001]\nI0525 16:17:53.439287    6450 controller.go:302] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-a\",\"peerURLs\":[\"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2380\"],\"endpoints\":[\"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4001\"],\"ID\":\"1551548813577475214\"}\n  peers:\n    
etcdClusterPeerInfo{peer=peer{id:\"etcd-a\" endpoints:\"172.20.44.17:3996\" }, info=cluster_name:\"etcd\" node_configuration:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2380\" client_urls:\"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:3994\" > etcd_state:<cluster:<cluster_token:\"YDiWJXhaXWSx1pSDaAhTZg\" nodes:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2380\" client_urls:\"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:3994\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0525 16:17:53.439371    6450 controller.go:303] etcd cluster members: map[1551548813577475214:{\"name\":\"etcd-a\",\"peerURLs\":[\"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2380\"],\"endpoints\":[\"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4001\"],\"ID\":\"1551548813577475214\"}]\nI0525 16:17:53.439390    6450 controller.go:641] sending member map to all peers: members:<name:\"etcd-a\" dns:\"etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io\" addresses:\"172.20.44.17\" > \nI0525 16:17:53.439774    6450 etcdserver.go:248] updating hosts: map[172.20.44.17:[etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io]]\nI0525 16:17:53.439801    6450 hosts.go:84] hosts update: primary=map[172.20.44.17:[etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io]], fallbacks=map[etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:[172.20.44.17 172.20.44.17]], final=map[172.20.44.17:[etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io]]\nI0525 16:17:53.439918    6450 hosts.go:181] skipping update of unchanged /etc/hosts\nI0525 16:17:53.440078    6450 commands.go:38] not refreshing commands - TTL not 
hit\nI0525 16:17:53.440095    6450 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-62197fe92d-da63e.test-cncf-aws.k8s.io/backups/etcd/main/control/etcd-cluster-created\"\nI0525 16:17:54.031932    6450 controller.go:395] spec member_count:1 etcd_version:\"3.4.13\" \nI0525 16:17:54.032019    6450 controller.go:557] controller loop complete\nI0525 16:18:04.034002    6450 controller.go:189] starting controller iteration\nI0525 16:18:04.034031    6450 controller.go:266] Broadcasting leadership assertion with token \"52LrWjZ0-Jw_MPw1LSUSzw\"\nI0525 16:18:04.034386    6450 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-a\" endpoints:\"172.20.44.17:3996\" > leadership_token:\"52LrWjZ0-Jw_MPw1LSUSzw\" healthy:<id:\"etcd-a\" endpoints:\"172.20.44.17:3996\" > > \nI0525 16:18:04.034510    6450 controller.go:295] I am leader with token \"52LrWjZ0-Jw_MPw1LSUSzw\"\nI0525 16:18:04.035427    6450 controller.go:705] base client OK for etcd for client urls [https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4001]\nI0525 16:18:04.053484    6450 controller.go:302] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-a\",\"peerURLs\":[\"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2380\"],\"endpoints\":[\"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4001\"],\"ID\":\"1551548813577475214\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-a\" endpoints:\"172.20.44.17:3996\" }, info=cluster_name:\"etcd\" node_configuration:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2380\" client_urls:\"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:3994\" > etcd_state:<cluster:<cluster_token:\"YDiWJXhaXWSx1pSDaAhTZg\" nodes:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2380\" 
client_urls:\"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:3994\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0525 16:18:04.053573    6450 controller.go:303] etcd cluster members: map[1551548813577475214:{\"name\":\"etcd-a\",\"peerURLs\":[\"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2380\"],\"endpoints\":[\"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4001\"],\"ID\":\"1551548813577475214\"}]\nI0525 16:18:04.053591    6450 controller.go:641] sending member map to all peers: members:<name:\"etcd-a\" dns:\"etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io\" addresses:\"172.20.44.17\" > \nI0525 16:18:04.053784    6450 etcdserver.go:248] updating hosts: map[172.20.44.17:[etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io]]\nI0525 16:18:04.053797    6450 hosts.go:84] hosts update: primary=map[172.20.44.17:[etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io]], fallbacks=map[etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:[172.20.44.17 172.20.44.17]], final=map[172.20.44.17:[etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io]]\nI0525 16:18:04.053854    6450 hosts.go:181] skipping update of unchanged /etc/hosts\nI0525 16:18:04.053934    6450 commands.go:38] not refreshing commands - TTL not hit\nI0525 16:18:04.054034    6450 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-62197fe92d-da63e.test-cncf-aws.k8s.io/backups/etcd/main/control/etcd-cluster-created\"\nI0525 16:18:04.641332    6450 controller.go:395] spec member_count:1 etcd_version:\"3.4.13\" \nI0525 16:18:04.641404    6450 controller.go:557] controller loop complete\nI0525 16:18:14.642761    6450 controller.go:189] starting controller iteration\nI0525 16:18:14.642934    6450 controller.go:266] Broadcasting leadership assertion with token \"52LrWjZ0-Jw_MPw1LSUSzw\"\nI0525 16:18:14.643197    6450 
leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-a\" endpoints:\"172.20.44.17:3996\" > leadership_token:\"52LrWjZ0-Jw_MPw1LSUSzw\" healthy:<id:\"etcd-a\" endpoints:\"172.20.44.17:3996\" > > \nI0525 16:18:14.643373    6450 controller.go:295] I am leader with token \"52LrWjZ0-Jw_MPw1LSUSzw\"\nI0525 16:18:14.643819    6450 controller.go:705] base client OK for etcd for client urls [https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4001]\nI0525 16:18:14.659366    6450 controller.go:302] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-a\",\"peerURLs\":[\"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2380\"],\"endpoints\":[\"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4001\"],\"ID\":\"1551548813577475214\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-a\" endpoints:\"172.20.44.17:3996\" }, info=cluster_name:\"etcd\" node_configuration:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2380\" client_urls:\"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:3994\" > etcd_state:<cluster:<cluster_token:\"YDiWJXhaXWSx1pSDaAhTZg\" nodes:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:2380\" client_urls:\"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-62197fe92d-da63e.test-cncf-aws.k8s.io:3994\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0525 16:18:14.6594