Result: FAILURE
Tests: 0 failed / 0 succeeded
Started: 2021-09-08 02:49
Elapsed: 34m44s
Revision: master

No Test Failures!


Error lines from build-log.txt

... skipping 124 lines ...
I0908 02:50:03.180962    4098 http.go:37] curl https://storage.googleapis.com/kops-ci/markers/release-1.22/latest-ci-updown-green.txt
I0908 02:50:03.207371    4098 http.go:37] curl https://storage.googleapis.com/k8s-staging-kops/kops/releases/1.22.0-beta.2+v1.22.0-beta.1-38-g85f98ed240/linux/amd64/kops
I0908 02:50:08.299893    4098 up.go:43] Cleaning up any leaked resources from previous cluster
I0908 02:50:08.299928    4098 dumplogs.go:38] /logs/artifacts/4a672587-104f-11ec-816d-469f625e385c/kops toolbox dump --name e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /etc/aws-ssh/aws-ssh-private --ssh-user ec2-user
I0908 02:50:08.319481    4118 featureflag.go:166] FeatureFlag "SpecOverrideFlag"=true
I0908 02:50:08.319561    4118 featureflag.go:166] FeatureFlag "AlphaAllowGCE"=true
Error: Cluster.kops.k8s.io "e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io" not found
W0908 02:50:08.787382    4098 down.go:34] Dumping cluster logs at the start of Down() failed: exit status 1
I0908 02:50:08.787448    4098 down.go:48] /logs/artifacts/4a672587-104f-11ec-816d-469f625e385c/kops delete cluster --name e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io --yes
I0908 02:50:08.804898    4128 featureflag.go:166] FeatureFlag "SpecOverrideFlag"=true
I0908 02:50:08.804980    4128 featureflag.go:166] FeatureFlag "AlphaAllowGCE"=true
Error: error reading cluster configuration: Cluster.kops.k8s.io "e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io" not found
I0908 02:50:09.320283    4098 http.go:37] curl http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip
2021/09/08 02:50:09 failed to get external ip from metadata service: http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip returned 404
I0908 02:50:09.327216    4098 http.go:37] curl https://ip.jsb.workers.dev
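
The 404 above is expected when the build is not running on GCE: the harness first asks the GCE metadata server for its external IP and, failing that, falls back to a public what-is-my-IP endpoint. A standalone equivalent of those two lookups (the Metadata-Flavor header is required by GCE; that the harness sets it too is an assumption):

# On a GCE instance this returns the external IP; elsewhere it 404s or fails to resolve.
curl -H "Metadata-Flavor: Google" http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip

# Fallback used here; the address it returns is presumably how 35.223.250.100/32
# ends up as --admin-access in the create command below.
curl https://ip.jsb.workers.dev
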
I0908 02:50:09.427300    4098 up.go:144] /logs/artifacts/4a672587-104f-11ec-816d-469f625e385c/kops create cluster --name e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io --cloud aws --kubernetes-version https://storage.googleapis.com/kubernetes-release/release/v1.21.4 --ssh-public-key /etc/aws-ssh/aws-ssh-public --override cluster.spec.nodePortAccess=0.0.0.0/0 --yes --image=309956199498/RHEL-7.9_HVM_GA-20200917-x86_64-0-Hourly2-GP2 --channel=alpha --networking=kubenet --container-runtime=containerd --admin-access 35.223.250.100/32 --master-count 1 --master-volume-size 48 --node-count 4 --node-volume-size 48 --zones eu-west-3a --master-size c5.large
I0908 02:50:09.446208    4138 featureflag.go:166] FeatureFlag "SpecOverrideFlag"=true
I0908 02:50:09.446292    4138 featureflag.go:166] FeatureFlag "AlphaAllowGCE"=true
I0908 02:50:09.468852    4138 create_cluster.go:827] Using SSH public key: /etc/aws-ssh/aws-ssh-public
I0908 02:50:10.127067    4138 new_cluster.go:1055]  Cloud Provider ID = aws
... skipping 41 lines ...
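For readability, here is the same create invocation reflowed across lines; every flag is copied verbatim from the log entry above (the harness runs it via the downloaded kops binary path shown there), nothing is added:

kops create cluster \
  --name e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io \
  --cloud aws \
  --kubernetes-version https://storage.googleapis.com/kubernetes-release/release/v1.21.4 \
  --ssh-public-key /etc/aws-ssh/aws-ssh-public \
  --override cluster.spec.nodePortAccess=0.0.0.0/0 \
  --image=309956199498/RHEL-7.9_HVM_GA-20200917-x86_64-0-Hourly2-GP2 \
  --channel=alpha --networking=kubenet --container-runtime=containerd \
  --admin-access 35.223.250.100/32 \
  --master-count 1 --master-size c5.large --master-volume-size 48 \
  --node-count 4 --node-volume-size 48 \
  --zones eu-west-3a \
  --yes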

I0908 02:50:35.481687    4098 up.go:181] /logs/artifacts/4a672587-104f-11ec-816d-469f625e385c/kops validate cluster --name e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io --count 10 --wait 15m0s
I0908 02:50:35.498825    4157 featureflag.go:166] FeatureFlag "SpecOverrideFlag"=true
I0908 02:50:35.498965    4157 featureflag.go:166] FeatureFlag "AlphaAllowGCE"=true
Validating cluster e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io

W0908 02:50:36.930417    4157 validate_cluster.go:184] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
W0908 02:50:46.977896    4157 validate_cluster.go:184] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
W0908 02:50:57.007130    4157 validate_cluster.go:184] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-3a	Master	c5.large	1	1	eu-west-3a
nodes-eu-west-3a	Node	t3.medium	4	4	eu-west-3a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
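
The message above is kops's standard DNS bootstrap explanation: api.<cluster> is created pointing at the placeholder 203.0.113.123, and dns-controller rewrites it once the control plane is running. A minimal way to watch that happen by hand, using only standard tools (a hypothetical manual session, not part of the harness):

# NXDOMAIN or 203.0.113.123 means dns-controller has not yet published the real address.
dig +short api.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io

# Once the record resolves (and kubectl can therefore reach the API), the controller's
# logs show the DNS updates it applied.
kubectl -n kube-system logs deployment/dns-controller
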
W0908 02:51:07.044733    4157 validate_cluster.go:232] (will retry): cluster not yet healthy
... skipping 309 lines (the DNS validation failure block and "no such host" lookup warnings above repeat with later timestamps, through 02:55:08) ...
W0908 02:55:18.271769    4157 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-3a	Master	c5.large	1	1	eu-west-3a
nodes-eu-west-3a	Node	t3.medium	4	4	eu-west-3a

... skipping 9 lines ...
Machine	i-0bc64cfbdc7348ecd								machine "i-0bc64cfbdc7348ecd" has not yet joined cluster
Node	ip-172-20-56-43.eu-west-3.compute.internal					master "ip-172-20-56-43.eu-west-3.compute.internal" is missing kube-controller-manager pod
Pod	kube-system/coredns-5dc785954d-vw7tg						system-cluster-critical pod "coredns-5dc785954d-vw7tg" is pending
Pod	kube-system/coredns-autoscaler-84d4cfd89c-p9hhk					system-cluster-critical pod "coredns-autoscaler-84d4cfd89c-p9hhk" is pending
Pod	kube-system/kube-controller-manager-ip-172-20-56-43.eu-west-3.compute.internal	system-cluster-critical pod "kube-controller-manager-ip-172-20-56-43.eu-west-3.compute.internal" is pending

Validation Failed
W0908 02:55:31.049048    4157 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-3a	Master	c5.large	1	1	eu-west-3a
nodes-eu-west-3a	Node	t3.medium	4	4	eu-west-3a

... skipping 7 lines ...
VALIDATION ERRORS
KIND	NAME					MESSAGE
Machine	i-0bc64cfbdc7348ecd			machine "i-0bc64cfbdc7348ecd" has not yet joined cluster
Pod	kube-system/coredns-5dc785954d-vw7tg	system-cluster-critical pod "coredns-5dc785954d-vw7tg" is pending
Pod	kube-system/coredns-5dc785954d-zmnzn	system-cluster-critical pod "coredns-5dc785954d-zmnzn" is pending

Validation Failed
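
By this point DNS resolves and validation is talking to the API server; the remaining errors are ordinary bring-up, an instance that has not yet registered as a node and pending kube-system pods. A sketch of inspecting the same state manually (plain kubectl against this cluster; not part of the harness output):

# Instances appear here as nodes once kubelet registers; i-0bc64cfbdc7348ecd had not yet.
kubectl get nodes -o wide

# Pending system-critical pods such as coredns normally schedule once a node is Ready.
kubectl -n kube-system get pods -o wide
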
W0908 02:55:43.040179    4157 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-3a	Master	c5.large	1	1	eu-west-3a
nodes-eu-west-3a	Node	t3.medium	4	4	eu-west-3a

... skipping 36 lines ...
ip-172-20-56-43.eu-west-3.compute.internal	master	True

VALIDATION ERRORS
KIND	NAME									MESSAGE
Pod	kube-system/kube-proxy-ip-172-20-36-200.eu-west-3.compute.internal	system-node-critical pod "kube-proxy-ip-172-20-36-200.eu-west-3.compute.internal" is pending

Validation Failed
W0908 02:56:18.729942    4157 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-3a	Master	c5.large	1	1	eu-west-3a
nodes-eu-west-3a	Node	t3.medium	4	4	eu-west-3a

... skipping 6 lines ...
ip-172-20-56-43.eu-west-3.compute.internal	master	True

VALIDATION ERRORS
KIND	NAME									MESSAGE
Pod	kube-system/kube-proxy-ip-172-20-51-126.eu-west-3.compute.internal	system-node-critical pod "kube-proxy-ip-172-20-51-126.eu-west-3.compute.internal" is pending

Validation Failed
W0908 02:56:30.644923    4157 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-3a	Master	c5.large	1	1	eu-west-3a
nodes-eu-west-3a	Node	t3.medium	4	4	eu-west-3a

... skipping 6 lines ...
ip-172-20-56-43.eu-west-3.compute.internal	master	True

VALIDATION ERRORS
KIND	NAME									MESSAGE
Pod	kube-system/kube-proxy-ip-172-20-36-148.eu-west-3.compute.internal	system-node-critical pod "kube-proxy-ip-172-20-36-148.eu-west-3.compute.internal" is pending

Validation Failed
W0908 02:56:42.860375    4157 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-3a	Master	c5.large	1	1	eu-west-3a
nodes-eu-west-3a	Node	t3.medium	4	4	eu-west-3a

... skipping 568 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 188 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-bindmounted]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 200 lines ...
STEP: Creating a kubernetes client
Sep  8 02:59:09.647: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
W0908 02:59:11.224974    4746 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Sep  8 02:59:11.225: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap that has name configmap-test-emptyKey-a93b6c7d-b21c-4649-90fc-239f1e3ee715
[AfterEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep  8 02:59:11.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3435" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":-1,"completed":1,"skipped":10,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 02:59:11.864: INFO: Only supported for providers [vsphere] (not aws)
... skipping 51 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep  8 02:59:12.646: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "ingress-3556" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] Ingress API should support creating Ingress API operations [Conformance]","total":-1,"completed":1,"skipped":0,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 02:59:12.971: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 38 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep  8 02:59:13.532: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "disruption-5071" for this suite.

•
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController should create a PodDisruptionBudget [Conformance]","total":-1,"completed":2,"skipped":22,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 02:59:13.762: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 34 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep  8 02:59:14.225: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "apf-7678" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] API priority and fairness should ensure that requests can be classified by adding FlowSchema and PriorityLevelConfiguration","total":-1,"completed":1,"skipped":17,"failed":0}

S
------------------------------
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 36 lines ...
[It] should allow exec of files on the volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
Sep  8 02:59:09.964: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics
Sep  8 02:59:09.964: INFO: Creating resource for inline volume
STEP: Creating pod exec-volume-test-inlinevolume-dxss
STEP: Creating a pod to test exec-volume-test
Sep  8 02:59:10.071: INFO: Waiting up to 5m0s for pod "exec-volume-test-inlinevolume-dxss" in namespace "volume-6527" to be "Succeeded or Failed"
Sep  8 02:59:10.175: INFO: Pod "exec-volume-test-inlinevolume-dxss": Phase="Pending", Reason="", readiness=false. Elapsed: 103.756668ms
Sep  8 02:59:12.279: INFO: Pod "exec-volume-test-inlinevolume-dxss": Phase="Pending", Reason="", readiness=false. Elapsed: 2.208032636s
Sep  8 02:59:14.387: INFO: Pod "exec-volume-test-inlinevolume-dxss": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.315959448s
STEP: Saw pod success
Sep  8 02:59:14.387: INFO: Pod "exec-volume-test-inlinevolume-dxss" satisfied condition "Succeeded or Failed"
Sep  8 02:59:14.494: INFO: Trying to get logs from node ip-172-20-36-148.eu-west-3.compute.internal pod exec-volume-test-inlinevolume-dxss container exec-container-inlinevolume-dxss: <nil>
STEP: delete the pod
Sep  8 02:59:14.726: INFO: Waiting for pod exec-volume-test-inlinevolume-dxss to disappear
Sep  8 02:59:14.830: INFO: Pod exec-volume-test-inlinevolume-dxss no longer exists
STEP: Deleting pod exec-volume-test-inlinevolume-dxss
Sep  8 02:59:14.831: INFO: Deleting pod "exec-volume-test-inlinevolume-dxss" in namespace "volume-6527"
... skipping 10 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":1,"skipped":4,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 02:59:15.173: INFO: Only supported for providers [azure] (not aws)
... skipping 94 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Sep  8 02:59:10.434: INFO: Waiting up to 5m0s for pod "downwardapi-volume-93ff920d-6a8e-4ace-87ba-c1a9da09a537" in namespace "projected-4465" to be "Succeeded or Failed"
Sep  8 02:59:10.537: INFO: Pod "downwardapi-volume-93ff920d-6a8e-4ace-87ba-c1a9da09a537": Phase="Pending", Reason="", readiness=false. Elapsed: 102.89285ms
Sep  8 02:59:12.640: INFO: Pod "downwardapi-volume-93ff920d-6a8e-4ace-87ba-c1a9da09a537": Phase="Pending", Reason="", readiness=false. Elapsed: 2.206730485s
Sep  8 02:59:14.745: INFO: Pod "downwardapi-volume-93ff920d-6a8e-4ace-87ba-c1a9da09a537": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.311127473s
STEP: Saw pod success
Sep  8 02:59:14.745: INFO: Pod "downwardapi-volume-93ff920d-6a8e-4ace-87ba-c1a9da09a537" satisfied condition "Succeeded or Failed"
Sep  8 02:59:14.850: INFO: Trying to get logs from node ip-172-20-49-112.eu-west-3.compute.internal pod downwardapi-volume-93ff920d-6a8e-4ace-87ba-c1a9da09a537 container client-container: <nil>
STEP: delete the pod
Sep  8 02:59:15.079: INFO: Waiting for pod downwardapi-volume-93ff920d-6a8e-4ace-87ba-c1a9da09a537 to disappear
Sep  8 02:59:15.183: INFO: Pod downwardapi-volume-93ff920d-6a8e-4ace-87ba-c1a9da09a537 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:5.893 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":1,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 02:59:15.515: INFO: Only supported for providers [gce gke] (not aws)
... skipping 161 lines ...
W0908 02:59:10.482019    4821 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Sep  8 02:59:10.482: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward api env vars
Sep  8 02:59:10.796: INFO: Waiting up to 5m0s for pod "downward-api-c94ad54f-2cef-4b9e-b5ff-4cb92a1a8713" in namespace "downward-api-7715" to be "Succeeded or Failed"
Sep  8 02:59:10.900: INFO: Pod "downward-api-c94ad54f-2cef-4b9e-b5ff-4cb92a1a8713": Phase="Pending", Reason="", readiness=false. Elapsed: 104.165076ms
Sep  8 02:59:13.005: INFO: Pod "downward-api-c94ad54f-2cef-4b9e-b5ff-4cb92a1a8713": Phase="Pending", Reason="", readiness=false. Elapsed: 2.209418225s
Sep  8 02:59:15.110: INFO: Pod "downward-api-c94ad54f-2cef-4b9e-b5ff-4cb92a1a8713": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.313767051s
STEP: Saw pod success
Sep  8 02:59:15.110: INFO: Pod "downward-api-c94ad54f-2cef-4b9e-b5ff-4cb92a1a8713" satisfied condition "Succeeded or Failed"
Sep  8 02:59:15.214: INFO: Trying to get logs from node ip-172-20-36-148.eu-west-3.compute.internal pod downward-api-c94ad54f-2cef-4b9e-b5ff-4cb92a1a8713 container dapi-container: <nil>
STEP: delete the pod
Sep  8 02:59:15.443: INFO: Waiting for pod downward-api-c94ad54f-2cef-4b9e-b5ff-4cb92a1a8713 to disappear
Sep  8 02:59:15.547: INFO: Pod downward-api-c94ad54f-2cef-4b9e-b5ff-4cb92a1a8713 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:6.252 seconds]
[sig-node] Downward API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":2,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 02:59:15.872: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 18 lines ...
Sep  8 02:59:13.772: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test substitution in container's args
Sep  8 02:59:14.402: INFO: Waiting up to 5m0s for pod "var-expansion-0060b3c3-d669-46fc-9d9c-31f146e30f76" in namespace "var-expansion-3068" to be "Succeeded or Failed"
Sep  8 02:59:14.506: INFO: Pod "var-expansion-0060b3c3-d669-46fc-9d9c-31f146e30f76": Phase="Pending", Reason="", readiness=false. Elapsed: 103.582707ms
Sep  8 02:59:16.612: INFO: Pod "var-expansion-0060b3c3-d669-46fc-9d9c-31f146e30f76": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.209869268s
STEP: Saw pod success
Sep  8 02:59:16.612: INFO: Pod "var-expansion-0060b3c3-d669-46fc-9d9c-31f146e30f76" satisfied condition "Succeeded or Failed"
Sep  8 02:59:16.730: INFO: Trying to get logs from node ip-172-20-36-148.eu-west-3.compute.internal pod var-expansion-0060b3c3-d669-46fc-9d9c-31f146e30f76 container dapi-container: <nil>
STEP: delete the pod
Sep  8 02:59:16.950: INFO: Waiting for pod var-expansion-0060b3c3-d669-46fc-9d9c-31f146e30f76 to disappear
Sep  8 02:59:17.053: INFO: Pod var-expansion-0060b3c3-d669-46fc-9d9c-31f146e30f76 no longer exists
[AfterEach] [sig-node] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep  8 02:59:17.053: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-3068" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":25,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 02:59:17.291: INFO: Only supported for providers [vsphere] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 35 lines ...
• [SLOW TEST:8.122 seconds]
[sig-apps] DisruptionController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  evictions: no PDB => should allow an eviction
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:267
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController evictions: no PDB =\u003e should allow an eviction","total":-1,"completed":1,"skipped":8,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 02:59:17.764: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
... skipping 49 lines ...
Sep  8 02:59:10.278: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-test-volume-map-fdb1251a-39ce-47e3-9705-1d2dc6ab619a
STEP: Creating a pod to test consume configMaps
Sep  8 02:59:10.692: INFO: Waiting up to 5m0s for pod "pod-configmaps-356c66e5-8867-4d81-a0d7-7a12ab3f3f39" in namespace "configmap-2512" to be "Succeeded or Failed"
Sep  8 02:59:10.795: INFO: Pod "pod-configmaps-356c66e5-8867-4d81-a0d7-7a12ab3f3f39": Phase="Pending", Reason="", readiness=false. Elapsed: 103.377881ms
Sep  8 02:59:12.901: INFO: Pod "pod-configmaps-356c66e5-8867-4d81-a0d7-7a12ab3f3f39": Phase="Pending", Reason="", readiness=false. Elapsed: 2.209770073s
Sep  8 02:59:15.005: INFO: Pod "pod-configmaps-356c66e5-8867-4d81-a0d7-7a12ab3f3f39": Phase="Pending", Reason="", readiness=false. Elapsed: 4.31300257s
Sep  8 02:59:17.108: INFO: Pod "pod-configmaps-356c66e5-8867-4d81-a0d7-7a12ab3f3f39": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.416547391s
STEP: Saw pod success
Sep  8 02:59:17.108: INFO: Pod "pod-configmaps-356c66e5-8867-4d81-a0d7-7a12ab3f3f39" satisfied condition "Succeeded or Failed"
Sep  8 02:59:17.211: INFO: Trying to get logs from node ip-172-20-36-200.eu-west-3.compute.internal pod pod-configmaps-356c66e5-8867-4d81-a0d7-7a12ab3f3f39 container agnhost-container: <nil>
STEP: delete the pod
Sep  8 02:59:17.434: INFO: Waiting for pod pod-configmaps-356c66e5-8867-4d81-a0d7-7a12ab3f3f39 to disappear
Sep  8 02:59:17.536: INFO: Pod pod-configmaps-356c66e5-8867-4d81-a0d7-7a12ab3f3f39 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:8.240 seconds]
[sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":3,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 35 lines ...
W0908 02:59:10.016392    4766 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Sep  8 02:59:10.016: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0777 on tmpfs
Sep  8 02:59:10.330: INFO: Waiting up to 5m0s for pod "pod-c011c02c-ea11-46dc-a8a5-09b9b68795f7" in namespace "emptydir-8780" to be "Succeeded or Failed"
Sep  8 02:59:10.434: INFO: Pod "pod-c011c02c-ea11-46dc-a8a5-09b9b68795f7": Phase="Pending", Reason="", readiness=false. Elapsed: 103.152072ms
Sep  8 02:59:12.538: INFO: Pod "pod-c011c02c-ea11-46dc-a8a5-09b9b68795f7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.207372498s
Sep  8 02:59:14.642: INFO: Pod "pod-c011c02c-ea11-46dc-a8a5-09b9b68795f7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.311303105s
Sep  8 02:59:16.756: INFO: Pod "pod-c011c02c-ea11-46dc-a8a5-09b9b68795f7": Phase="Running", Reason="", readiness=true. Elapsed: 6.425172889s
Sep  8 02:59:18.886: INFO: Pod "pod-c011c02c-ea11-46dc-a8a5-09b9b68795f7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.555650462s
STEP: Saw pod success
Sep  8 02:59:18.886: INFO: Pod "pod-c011c02c-ea11-46dc-a8a5-09b9b68795f7" satisfied condition "Succeeded or Failed"
Sep  8 02:59:19.025: INFO: Trying to get logs from node ip-172-20-51-126.eu-west-3.compute.internal pod pod-c011c02c-ea11-46dc-a8a5-09b9b68795f7 container test-container: <nil>
STEP: delete the pod
Sep  8 02:59:19.268: INFO: Waiting for pod pod-c011c02c-ea11-46dc-a8a5-09b9b68795f7 to disappear
Sep  8 02:59:19.379: INFO: Pod pod-c011c02c-ea11-46dc-a8a5-09b9b68795f7 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:10.171 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":19,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 02:59:19.785: INFO: Only supported for providers [openstack] (not aws)
... skipping 71 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241
[It] should check if cluster-info dump succeeds
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1078
STEP: running cluster-info dump
Sep  8 02:59:15.755: INFO: Running '/tmp/kubectl3689818531/kubectl --server=https://api.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-9391 cluster-info dump'
Sep  8 02:59:19.980: INFO: stderr: ""
Sep  8 02:59:19.981: INFO: stdout: "{\n    \"kind\": \"NodeList\",\n    \"apiVersion\": \"v1\",\n    \"metadata\": {\n        \"resourceVersion\": \"2041\"\n    },\n    \"items\": [\n        {\n            \"metadata\": {\n                \"name\": \"ip-172-20-36-148.eu-west-3.compute.internal\",\n                \"uid\": \"5e811f22-fe16-44e6-8983-0e058e7bab4a\",\n                \"resourceVersion\": \"2034\",\n                \"creationTimestamp\": \"2021-09-08T02:55:45Z\",\n                \"labels\": {\n                    \"beta.kubernetes.io/arch\": \"amd64\",\n                    \"beta.kubernetes.io/instance-type\": \"t3.medium\",\n                    \"beta.kubernetes.io/os\": \"linux\",\n                    \"failure-domain.beta.kubernetes.io/region\": \"eu-west-3\",\n                    \"failure-domain.beta.kubernetes.io/zone\": \"eu-west-3a\",\n                    \"kops.k8s.io/instancegroup\": \"nodes-eu-west-3a\",\n                    \"kubernetes.io/arch\": \"amd64\",\n                    \"kubernetes.io/hostname\": \"ip-172-20-36-148.eu-west-3.compute.internal\",\n                    \"kubernetes.io/os\": \"linux\",\n                    \"kubernetes.io/role\": \"node\",\n                    \"node-role.kubernetes.io/node\": \"\",\n                    \"node.kubernetes.io/instance-type\": \"t3.medium\",\n                    \"topology.kubernetes.io/region\": \"eu-west-3\",\n                    \"topology.kubernetes.io/zone\": \"eu-west-3a\"\n                },\n                \"annotations\": {\n                    \"node.alpha.kubernetes.io/ttl\": \"0\",\n                    \"volumes.kubernetes.io/controller-managed-attach-detach\": \"true\"\n                }\n            },\n            \"spec\": {\n                \"podCIDR\": \"100.96.4.0/24\",\n                \"podCIDRs\": [\n                    \"100.96.4.0/24\"\n                ],\n                \"providerID\": \"aws:///eu-west-3a/i-0bc64cfbdc7348ecd\"\n            },\n            \"status\": {\n                \"capacity\": {\n                    \"attachable-volumes-aws-ebs\": \"25\",\n                    \"cpu\": \"2\",\n                    \"ephemeral-storage\": \"50319340Ki\",\n                    \"hugepages-1Gi\": \"0\",\n                    \"hugepages-2Mi\": \"0\",\n                    \"memory\": \"3818792Ki\",\n                    \"pods\": \"110\"\n                },\n                \"allocatable\": {\n                    \"attachable-volumes-aws-ebs\": \"25\",\n                    \"cpu\": \"2\",\n                    \"ephemeral-storage\": \"46374303668\",\n                    \"hugepages-1Gi\": \"0\",\n                    \"hugepages-2Mi\": \"0\",\n                    \"memory\": \"3716392Ki\",\n                    \"pods\": \"110\"\n                },\n                \"conditions\": [\n                    {\n                        \"type\": \"NetworkUnavailable\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-09-08T02:55:46Z\",\n                        \"lastTransitionTime\": \"2021-09-08T02:55:46Z\",\n                        \"reason\": \"RouteCreated\",\n                        \"message\": \"RouteController created a route\"\n                    },\n                    {\n                        \"type\": \"MemoryPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-09-08T02:59:16Z\",\n                        \"lastTransitionTime\": \"2021-09-08T02:55:45Z\",\n                 
       \"reason\": \"KubeletHasSufficientMemory\",\n                        \"message\": \"kubelet has sufficient memory available\"\n                    },\n                    {\n                        \"type\": \"DiskPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-09-08T02:59:16Z\",\n                        \"lastTransitionTime\": \"2021-09-08T02:55:45Z\",\n                        \"reason\": \"KubeletHasNoDiskPressure\",\n                        \"message\": \"kubelet has no disk pressure\"\n                    },\n                    {\n                        \"type\": \"PIDPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-09-08T02:59:16Z\",\n                        \"lastTransitionTime\": \"2021-09-08T02:55:45Z\",\n                        \"reason\": \"KubeletHasSufficientPID\",\n                        \"message\": \"kubelet has sufficient PID available\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastHeartbeatTime\": \"2021-09-08T02:59:16Z\",\n                        \"lastTransitionTime\": \"2021-09-08T02:55:46Z\",\n                        \"reason\": \"KubeletReady\",\n                        \"message\": \"kubelet is posting ready status\"\n                    }\n                ],\n                \"addresses\": [\n                    {\n                        \"type\": \"InternalIP\",\n                        \"address\": \"172.20.36.148\"\n                    },\n                    {\n                        \"type\": \"ExternalIP\",\n                        \"address\": \"15.188.144.216\"\n                    },\n                    {\n                        \"type\": \"Hostname\",\n                        \"address\": \"ip-172-20-36-148.eu-west-3.compute.internal\"\n                    },\n                    {\n                        \"type\": \"InternalDNS\",\n                        \"address\": \"ip-172-20-36-148.eu-west-3.compute.internal\"\n                    },\n                    {\n                        \"type\": \"ExternalDNS\",\n                        \"address\": \"ec2-15-188-144-216.eu-west-3.compute.amazonaws.com\"\n                    }\n                ],\n                \"daemonEndpoints\": {\n                    \"kubeletEndpoint\": {\n                        \"Port\": 10250\n                    }\n                },\n                \"nodeInfo\": {\n                    \"machineID\": \"ec2dd1da383bfbef14ae930b941294d5\",\n                    \"systemUUID\": \"EC2DD1DA-383B-FBEF-14AE-930B941294D5\",\n                    \"bootID\": \"92339691-4ce1-40bb-a597-5246fb6de3be\",\n                    \"kernelVersion\": \"3.10.0-1160.el7.x86_64\",\n                    \"osImage\": \"Red Hat Enterprise Linux Server 7.9 (Maipo)\",\n                    \"containerRuntimeVersion\": \"containerd://1.4.9\",\n                    \"kubeletVersion\": \"v1.21.4\",\n                    \"kubeProxyVersion\": \"v1.21.4\",\n                    \"operatingSystem\": \"linux\",\n                    \"architecture\": \"amd64\"\n                },\n                \"images\": [\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kube-proxy-amd64:v1.21.4\"\n                        ],\n                        \"sizeBytes\": 105127625\n                    },\n              
      {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b\",\n                            \"k8s.gcr.io/e2e-test-images/nginx:1.14-1\"\n                        ],\n                        \"sizeBytes\": 6979365\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592\",\n                            \"k8s.gcr.io/e2e-test-images/busybox:1.29-1\"\n                        ],\n                        \"sizeBytes\": 732746\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810\",\n                            \"k8s.gcr.io/pause:3.4.1\"\n                        ],\n                        \"sizeBytes\": 301268\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f\",\n                            \"k8s.gcr.io/pause:3.2\"\n                        ],\n                        \"sizeBytes\": 299513\n                    }\n                ]\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"ip-172-20-36-200.eu-west-3.compute.internal\",\n                \"uid\": \"5be6f3a5-dd21-4786-b13f-92d8bbfc369b\",\n                \"resourceVersion\": \"738\",\n                \"creationTimestamp\": \"2021-09-08T02:55:30Z\",\n                \"labels\": {\n                    \"beta.kubernetes.io/arch\": \"amd64\",\n                    \"beta.kubernetes.io/instance-type\": \"t3.medium\",\n                    \"beta.kubernetes.io/os\": \"linux\",\n                    \"failure-domain.beta.kubernetes.io/region\": \"eu-west-3\",\n                    \"failure-domain.beta.kubernetes.io/zone\": \"eu-west-3a\",\n                    \"kops.k8s.io/instancegroup\": \"nodes-eu-west-3a\",\n                    \"kubernetes.io/arch\": \"amd64\",\n                    \"kubernetes.io/hostname\": \"ip-172-20-36-200.eu-west-3.compute.internal\",\n                    \"kubernetes.io/os\": \"linux\",\n                    \"kubernetes.io/role\": \"node\",\n                    \"node-role.kubernetes.io/node\": \"\",\n                    \"node.kubernetes.io/instance-type\": \"t3.medium\",\n                    \"topology.kubernetes.io/region\": \"eu-west-3\",\n                    \"topology.kubernetes.io/zone\": \"eu-west-3a\"\n                },\n                \"annotations\": {\n                    \"node.alpha.kubernetes.io/ttl\": \"0\",\n                    \"volumes.kubernetes.io/controller-managed-attach-detach\": \"true\"\n                }\n            },\n            \"spec\": {\n                \"podCIDR\": \"100.96.1.0/24\",\n                \"podCIDRs\": [\n                    \"100.96.1.0/24\"\n                ],\n                \"providerID\": \"aws:///eu-west-3a/i-088d4e8dd2388c3a5\"\n            },\n            \"status\": {\n                \"capacity\": {\n                    \"attachable-volumes-aws-ebs\": \"25\",\n                    \"cpu\": \"2\",\n                    \"ephemeral-storage\": \"50319340Ki\",\n                    \"hugepages-1Gi\": \"0\",\n                    
\"hugepages-2Mi\": \"0\",\n                    \"memory\": \"3818792Ki\",\n                    \"pods\": \"110\"\n                },\n                \"allocatable\": {\n                    \"attachable-volumes-aws-ebs\": \"25\",\n                    \"cpu\": \"2\",\n                    \"ephemeral-storage\": \"46374303668\",\n                    \"hugepages-1Gi\": \"0\",\n                    \"hugepages-2Mi\": \"0\",\n                    \"memory\": \"3716392Ki\",\n                    \"pods\": \"110\"\n                },\n                \"conditions\": [\n                    {\n                        \"type\": \"NetworkUnavailable\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-09-08T02:55:36Z\",\n                        \"lastTransitionTime\": \"2021-09-08T02:55:36Z\",\n                        \"reason\": \"RouteCreated\",\n                        \"message\": \"RouteController created a route\"\n                    },\n                    {\n                        \"type\": \"MemoryPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-09-08T02:56:00Z\",\n                        \"lastTransitionTime\": \"2021-09-08T02:55:29Z\",\n                        \"reason\": \"KubeletHasSufficientMemory\",\n                        \"message\": \"kubelet has sufficient memory available\"\n                    },\n                    {\n                        \"type\": \"DiskPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-09-08T02:56:00Z\",\n                        \"lastTransitionTime\": \"2021-09-08T02:55:29Z\",\n                        \"reason\": \"KubeletHasNoDiskPressure\",\n                        \"message\": \"kubelet has no disk pressure\"\n                    },\n                    {\n                        \"type\": \"PIDPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-09-08T02:56:00Z\",\n                        \"lastTransitionTime\": \"2021-09-08T02:55:29Z\",\n                        \"reason\": \"KubeletHasSufficientPID\",\n                        \"message\": \"kubelet has sufficient PID available\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastHeartbeatTime\": \"2021-09-08T02:56:00Z\",\n                        \"lastTransitionTime\": \"2021-09-08T02:55:30Z\",\n                        \"reason\": \"KubeletReady\",\n                        \"message\": \"kubelet is posting ready status\"\n                    }\n                ],\n                \"addresses\": [\n                    {\n                        \"type\": \"InternalIP\",\n                        \"address\": \"172.20.36.200\"\n                    },\n                    {\n                        \"type\": \"ExternalIP\",\n                        \"address\": \"15.237.46.246\"\n                    },\n                    {\n                        \"type\": \"Hostname\",\n                        \"address\": \"ip-172-20-36-200.eu-west-3.compute.internal\"\n                    },\n                    {\n                        \"type\": \"InternalDNS\",\n                        \"address\": \"ip-172-20-36-200.eu-west-3.compute.internal\"\n                    },\n                    {\n                        \"type\": \"ExternalDNS\",\n           
             \"address\": \"ec2-15-237-46-246.eu-west-3.compute.amazonaws.com\"\n                    }\n                ],\n                \"daemonEndpoints\": {\n                    \"kubeletEndpoint\": {\n                        \"Port\": 10250\n                    }\n                },\n                \"nodeInfo\": {\n                    \"machineID\": \"ec275798da5a1181db982eacb222e73f\",\n                    \"systemUUID\": \"EC275798-DA5A-1181-DB98-2EACB222E73F\",\n                    \"bootID\": \"dc087a3d-f310-4e2c-92a2-13d0a49b1700\",\n                    \"kernelVersion\": \"3.10.0-1160.el7.x86_64\",\n                    \"osImage\": \"Red Hat Enterprise Linux Server 7.9 (Maipo)\",\n                    \"containerRuntimeVersion\": \"containerd://1.4.9\",\n                    \"kubeletVersion\": \"v1.21.4\",\n                    \"kubeProxyVersion\": \"v1.21.4\",\n                    \"operatingSystem\": \"linux\",\n                    \"architecture\": \"amd64\"\n                },\n                \"images\": [\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kube-proxy-amd64:v1.21.4\"\n                        ],\n                        \"sizeBytes\": 105127625\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def\",\n                            \"k8s.gcr.io/cpa/cluster-proportional-autoscaler:1.8.4\"\n                        ],\n                        \"sizeBytes\": 15209393\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/coredns/coredns@sha256:6e5a02c21641597998b4be7cb5eb1e7b02c0d8d23cce4dd09f4682d463798890\",\n                            \"k8s.gcr.io/coredns/coredns:v1.8.4\"\n                        ],\n                        \"sizeBytes\": 13707249\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f\",\n                            \"k8s.gcr.io/pause:3.2\"\n                        ],\n                        \"sizeBytes\": 299513\n                    }\n                ]\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"ip-172-20-49-112.eu-west-3.compute.internal\",\n                \"uid\": \"db998c43-75b6-4398-a69d-614993c3677e\",\n                \"resourceVersion\": \"692\",\n                \"creationTimestamp\": \"2021-09-08T02:55:40Z\",\n                \"labels\": {\n                    \"beta.kubernetes.io/arch\": \"amd64\",\n                    \"beta.kubernetes.io/instance-type\": \"t3.medium\",\n                    \"beta.kubernetes.io/os\": \"linux\",\n                    \"failure-domain.beta.kubernetes.io/region\": \"eu-west-3\",\n                    \"failure-domain.beta.kubernetes.io/zone\": \"eu-west-3a\",\n                    \"kops.k8s.io/instancegroup\": \"nodes-eu-west-3a\",\n                    \"kubernetes.io/arch\": \"amd64\",\n                    \"kubernetes.io/hostname\": \"ip-172-20-49-112.eu-west-3.compute.internal\",\n                    \"kubernetes.io/os\": \"linux\",\n                    \"kubernetes.io/role\": \"node\",\n                    \"node-role.kubernetes.io/node\": \"\",\n                    
\"node.kubernetes.io/instance-type\": \"t3.medium\",\n                    \"topology.kubernetes.io/region\": \"eu-west-3\",\n                    \"topology.kubernetes.io/zone\": \"eu-west-3a\"\n                },\n                \"annotations\": {\n                    \"node.alpha.kubernetes.io/ttl\": \"0\",\n                    \"volumes.kubernetes.io/controller-managed-attach-detach\": \"true\"\n                }\n            },\n            \"spec\": {\n                \"podCIDR\": \"100.96.3.0/24\",\n                \"podCIDRs\": [\n                    \"100.96.3.0/24\"\n                ],\n                \"providerID\": \"aws:///eu-west-3a/i-01b8c5d48e3791963\"\n            },\n            \"status\": {\n                \"capacity\": {\n                    \"attachable-volumes-aws-ebs\": \"25\",\n                    \"cpu\": \"2\",\n                    \"ephemeral-storage\": \"50319340Ki\",\n                    \"hugepages-1Gi\": \"0\",\n                    \"hugepages-2Mi\": \"0\",\n                    \"memory\": \"3818792Ki\",\n                    \"pods\": \"110\"\n                },\n                \"allocatable\": {\n                    \"attachable-volumes-aws-ebs\": \"25\",\n                    \"cpu\": \"2\",\n                    \"ephemeral-storage\": \"46374303668\",\n                    \"hugepages-1Gi\": \"0\",\n                    \"hugepages-2Mi\": \"0\",\n                    \"memory\": \"3716392Ki\",\n                    \"pods\": \"110\"\n                },\n                \"conditions\": [\n                    {\n                        \"type\": \"NetworkUnavailable\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-09-08T02:55:46Z\",\n                        \"lastTransitionTime\": \"2021-09-08T02:55:46Z\",\n                        \"reason\": \"RouteCreated\",\n                        \"message\": \"RouteController created a route\"\n                    },\n                    {\n                        \"type\": \"MemoryPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-09-08T02:55:40Z\",\n                        \"lastTransitionTime\": \"2021-09-08T02:55:40Z\",\n                        \"reason\": \"KubeletHasSufficientMemory\",\n                        \"message\": \"kubelet has sufficient memory available\"\n                    },\n                    {\n                        \"type\": \"DiskPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-09-08T02:55:40Z\",\n                        \"lastTransitionTime\": \"2021-09-08T02:55:40Z\",\n                        \"reason\": \"KubeletHasNoDiskPressure\",\n                        \"message\": \"kubelet has no disk pressure\"\n                    },\n                    {\n                        \"type\": \"PIDPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-09-08T02:55:40Z\",\n                        \"lastTransitionTime\": \"2021-09-08T02:55:40Z\",\n                        \"reason\": \"KubeletHasSufficientPID\",\n                        \"message\": \"kubelet has sufficient PID available\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastHeartbeatTime\": \"2021-09-08T02:55:40Z\",\n                        \"lastTransitionTime\": 
\"2021-09-08T02:55:40Z\",\n                        \"reason\": \"KubeletReady\",\n                        \"message\": \"kubelet is posting ready status\"\n                    }\n                ],\n                \"addresses\": [\n                    {\n                        \"type\": \"InternalIP\",\n                        \"address\": \"172.20.49.112\"\n                    },\n                    {\n                        \"type\": \"ExternalIP\",\n                        \"address\": \"13.36.234.230\"\n                    },\n                    {\n                        \"type\": \"Hostname\",\n                        \"address\": \"ip-172-20-49-112.eu-west-3.compute.internal\"\n                    },\n                    {\n                        \"type\": \"InternalDNS\",\n                        \"address\": \"ip-172-20-49-112.eu-west-3.compute.internal\"\n                    },\n                    {\n                        \"type\": \"ExternalDNS\",\n                        \"address\": \"ec2-13-36-234-230.eu-west-3.compute.amazonaws.com\"\n                    }\n                ],\n                \"daemonEndpoints\": {\n                    \"kubeletEndpoint\": {\n                        \"Port\": 10250\n                    }\n                },\n                \"nodeInfo\": {\n                    \"machineID\": \"ec2b81982adb90bf1b35a1874df5c4a2\",\n                    \"systemUUID\": \"EC2B8198-2ADB-90BF-1B35-A1874DF5C4A2\",\n                    \"bootID\": \"53dacba6-b7e2-4cff-8399-1f8c822d63d7\",\n                    \"kernelVersion\": \"3.10.0-1160.el7.x86_64\",\n                    \"osImage\": \"Red Hat Enterprise Linux Server 7.9 (Maipo)\",\n                    \"containerRuntimeVersion\": \"containerd://1.4.9\",\n                    \"kubeletVersion\": \"v1.21.4\",\n                    \"kubeProxyVersion\": \"v1.21.4\",\n                    \"operatingSystem\": \"linux\",\n                    \"architecture\": \"amd64\"\n                },\n                \"images\": [\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kube-proxy-amd64:v1.21.4\"\n                        ],\n                        \"sizeBytes\": 105127625\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f\",\n                            \"k8s.gcr.io/pause:3.2\"\n                        ],\n                        \"sizeBytes\": 299513\n                    }\n                ]\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"ip-172-20-51-126.eu-west-3.compute.internal\",\n                \"uid\": \"2bdba3e8-3273-4a9f-85c8-e8d02c410652\",\n                \"resourceVersion\": \"763\",\n                \"creationTimestamp\": \"2021-09-08T02:55:40Z\",\n                \"labels\": {\n                    \"beta.kubernetes.io/arch\": \"amd64\",\n                    \"beta.kubernetes.io/instance-type\": \"t3.medium\",\n                    \"beta.kubernetes.io/os\": \"linux\",\n                    \"failure-domain.beta.kubernetes.io/region\": \"eu-west-3\",\n                    \"failure-domain.beta.kubernetes.io/zone\": \"eu-west-3a\",\n                    \"kops.k8s.io/instancegroup\": \"nodes-eu-west-3a\",\n                    \"kubernetes.io/arch\": \"amd64\",\n                    \"kubernetes.io/hostname\": 
\"ip-172-20-51-126.eu-west-3.compute.internal\",\n                    \"kubernetes.io/os\": \"linux\",\n                    \"kubernetes.io/role\": \"node\",\n                    \"node-role.kubernetes.io/node\": \"\",\n                    \"node.kubernetes.io/instance-type\": \"t3.medium\",\n                    \"topology.kubernetes.io/region\": \"eu-west-3\",\n                    \"topology.kubernetes.io/zone\": \"eu-west-3a\"\n                },\n                \"annotations\": {\n                    \"node.alpha.kubernetes.io/ttl\": \"0\",\n                    \"volumes.kubernetes.io/controller-managed-attach-detach\": \"true\"\n                }\n            },\n            \"spec\": {\n                \"podCIDR\": \"100.96.2.0/24\",\n                \"podCIDRs\": [\n                    \"100.96.2.0/24\"\n                ],\n                \"providerID\": \"aws:///eu-west-3a/i-03f0dffb56925a230\"\n            },\n            \"status\": {\n                \"capacity\": {\n                    \"attachable-volumes-aws-ebs\": \"25\",\n                    \"cpu\": \"2\",\n                    \"ephemeral-storage\": \"50319340Ki\",\n                    \"hugepages-1Gi\": \"0\",\n                    \"hugepages-2Mi\": \"0\",\n                    \"memory\": \"3818792Ki\",\n                    \"pods\": \"110\"\n                },\n                \"allocatable\": {\n                    \"attachable-volumes-aws-ebs\": \"25\",\n                    \"cpu\": \"2\",\n                    \"ephemeral-storage\": \"46374303668\",\n                    \"hugepages-1Gi\": \"0\",\n                    \"hugepages-2Mi\": \"0\",\n                    \"memory\": \"3716392Ki\",\n                    \"pods\": \"110\"\n                },\n                \"conditions\": [\n                    {\n                        \"type\": \"NetworkUnavailable\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-09-08T02:55:46Z\",\n                        \"lastTransitionTime\": \"2021-09-08T02:55:46Z\",\n                        \"reason\": \"RouteCreated\",\n                        \"message\": \"RouteController created a route\"\n                    },\n                    {\n                        \"type\": \"MemoryPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-09-08T02:56:10Z\",\n                        \"lastTransitionTime\": \"2021-09-08T02:55:40Z\",\n                        \"reason\": \"KubeletHasSufficientMemory\",\n                        \"message\": \"kubelet has sufficient memory available\"\n                    },\n                    {\n                        \"type\": \"DiskPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-09-08T02:56:10Z\",\n                        \"lastTransitionTime\": \"2021-09-08T02:55:40Z\",\n                        \"reason\": \"KubeletHasNoDiskPressure\",\n                        \"message\": \"kubelet has no disk pressure\"\n                    },\n                    {\n                        \"type\": \"PIDPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-09-08T02:56:10Z\",\n                        \"lastTransitionTime\": \"2021-09-08T02:55:40Z\",\n                        \"reason\": \"KubeletHasSufficientPID\",\n                        \"message\": \"kubelet has sufficient PID available\"\n                    },\n             
       {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastHeartbeatTime\": \"2021-09-08T02:56:10Z\",\n                        \"lastTransitionTime\": \"2021-09-08T02:55:40Z\",\n                        \"reason\": \"KubeletReady\",\n                        \"message\": \"kubelet is posting ready status\"\n                    }\n                ],\n                \"addresses\": [\n                    {\n                        \"type\": \"InternalIP\",\n                        \"address\": \"172.20.51.126\"\n                    },\n                    {\n                        \"type\": \"ExternalIP\",\n                        \"address\": \"35.181.160.177\"\n                    },\n                    {\n                        \"type\": \"Hostname\",\n                        \"address\": \"ip-172-20-51-126.eu-west-3.compute.internal\"\n                    },\n                    {\n                        \"type\": \"InternalDNS\",\n                        \"address\": \"ip-172-20-51-126.eu-west-3.compute.internal\"\n                    },\n                    {\n                        \"type\": \"ExternalDNS\",\n                        \"address\": \"ec2-35-181-160-177.eu-west-3.compute.amazonaws.com\"\n                    }\n                ],\n                \"daemonEndpoints\": {\n                    \"kubeletEndpoint\": {\n                        \"Port\": 10250\n                    }\n                },\n                \"nodeInfo\": {\n                    \"machineID\": \"ec2c64344be8a48581db407d2df44deb\",\n                    \"systemUUID\": \"EC2C6434-4BE8-A485-81DB-407D2DF44DEB\",\n                    \"bootID\": \"a09546f0-91f5-4f94-aa30-284e62bc8d44\",\n                    \"kernelVersion\": \"3.10.0-1160.el7.x86_64\",\n                    \"osImage\": \"Red Hat Enterprise Linux Server 7.9 (Maipo)\",\n                    \"containerRuntimeVersion\": \"containerd://1.4.9\",\n                    \"kubeletVersion\": \"v1.21.4\",\n                    \"kubeProxyVersion\": \"v1.21.4\",\n                    \"operatingSystem\": \"linux\",\n                    \"architecture\": \"amd64\"\n                },\n                \"images\": [\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kube-proxy-amd64:v1.21.4\"\n                        ],\n                        \"sizeBytes\": 105127625\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/coredns/coredns@sha256:6e5a02c21641597998b4be7cb5eb1e7b02c0d8d23cce4dd09f4682d463798890\",\n                            \"k8s.gcr.io/coredns/coredns:v1.8.4\"\n                        ],\n                        \"sizeBytes\": 13707249\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f\",\n                            \"k8s.gcr.io/pause:3.2\"\n                        ],\n                        \"sizeBytes\": 299513\n                    }\n                ]\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"ip-172-20-56-43.eu-west-3.compute.internal\",\n                \"uid\": \"ca6d0760-5ed3-4072-adb7-df925f431870\",\n                \"resourceVersion\": \"465\",\n                \"creationTimestamp\": \"2021-09-08T02:54:04Z\",\n                
\"labels\": {\n                    \"beta.kubernetes.io/arch\": \"amd64\",\n                    \"beta.kubernetes.io/instance-type\": \"c5.large\",\n                    \"beta.kubernetes.io/os\": \"linux\",\n                    \"failure-domain.beta.kubernetes.io/region\": \"eu-west-3\",\n                    \"failure-domain.beta.kubernetes.io/zone\": \"eu-west-3a\",\n                    \"kops.k8s.io/instancegroup\": \"master-eu-west-3a\",\n                    \"kops.k8s.io/kops-controller-pki\": \"\",\n                    \"kubernetes.io/arch\": \"amd64\",\n                    \"kubernetes.io/hostname\": \"ip-172-20-56-43.eu-west-3.compute.internal\",\n                    \"kubernetes.io/os\": \"linux\",\n                    \"kubernetes.io/role\": \"master\",\n                    \"node-role.kubernetes.io/control-plane\": \"\",\n                    \"node-role.kubernetes.io/master\": \"\",\n                    \"node.kubernetes.io/exclude-from-external-load-balancers\": \"\",\n                    \"node.kubernetes.io/instance-type\": \"c5.large\",\n                    \"topology.kubernetes.io/region\": \"eu-west-3\",\n                    \"topology.kubernetes.io/zone\": \"eu-west-3a\"\n                },\n                \"annotations\": {\n                    \"node.alpha.kubernetes.io/ttl\": \"0\",\n                    \"volumes.kubernetes.io/controller-managed-attach-detach\": \"true\"\n                }\n            },\n            \"spec\": {\n                \"podCIDR\": \"100.96.0.0/24\",\n                \"podCIDRs\": [\n                    \"100.96.0.0/24\"\n                ],\n                \"providerID\": \"aws:///eu-west-3a/i-0b4df721753a52316\",\n                \"taints\": [\n                    {\n                        \"key\": \"node-role.kubernetes.io/master\",\n                        \"effect\": \"NoSchedule\"\n                    }\n                ]\n            },\n            \"status\": {\n                \"capacity\": {\n                    \"attachable-volumes-aws-ebs\": \"25\",\n                    \"cpu\": \"2\",\n                    \"ephemeral-storage\": \"50319340Ki\",\n                    \"hugepages-1Gi\": \"0\",\n                    \"hugepages-2Mi\": \"0\",\n                    \"memory\": \"3634476Ki\",\n                    \"pods\": \"110\"\n                },\n                \"allocatable\": {\n                    \"attachable-volumes-aws-ebs\": \"25\",\n                    \"cpu\": \"2\",\n                    \"ephemeral-storage\": \"46374303668\",\n                    \"hugepages-1Gi\": \"0\",\n                    \"hugepages-2Mi\": \"0\",\n                    \"memory\": \"3532076Ki\",\n                    \"pods\": \"110\"\n                },\n                \"conditions\": [\n                    {\n                        \"type\": \"NetworkUnavailable\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-09-08T02:54:36Z\",\n                        \"lastTransitionTime\": \"2021-09-08T02:54:36Z\",\n                        \"reason\": \"RouteCreated\",\n                        \"message\": \"RouteController created a route\"\n                    },\n                    {\n                        \"type\": \"MemoryPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-09-08T02:54:34Z\",\n                        \"lastTransitionTime\": \"2021-09-08T02:53:57Z\",\n                        \"reason\": \"KubeletHasSufficientMemory\",\n  
                      \"message\": \"kubelet has sufficient memory available\"\n                    },\n                    {\n                        \"type\": \"DiskPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-09-08T02:54:34Z\",\n                        \"lastTransitionTime\": \"2021-09-08T02:53:57Z\",\n                        \"reason\": \"KubeletHasNoDiskPressure\",\n                        \"message\": \"kubelet has no disk pressure\"\n                    },\n                    {\n                        \"type\": \"PIDPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-09-08T02:54:34Z\",\n                        \"lastTransitionTime\": \"2021-09-08T02:53:57Z\",\n                        \"reason\": \"KubeletHasSufficientPID\",\n                        \"message\": \"kubelet has sufficient PID available\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastHeartbeatTime\": \"2021-09-08T02:54:34Z\",\n                        \"lastTransitionTime\": \"2021-09-08T02:54:34Z\",\n                        \"reason\": \"KubeletReady\",\n                        \"message\": \"kubelet is posting ready status\"\n                    }\n                ],\n                \"addresses\": [\n                    {\n                        \"type\": \"InternalIP\",\n                        \"address\": \"172.20.56.43\"\n                    },\n                    {\n                        \"type\": \"ExternalIP\",\n                        \"address\": \"15.237.113.83\"\n                    },\n                    {\n                        \"type\": \"Hostname\",\n                        \"address\": \"ip-172-20-56-43.eu-west-3.compute.internal\"\n                    },\n                    {\n                        \"type\": \"InternalDNS\",\n                        \"address\": \"ip-172-20-56-43.eu-west-3.compute.internal\"\n                    },\n                    {\n                        \"type\": \"ExternalDNS\",\n                        \"address\": \"ec2-15-237-113-83.eu-west-3.compute.amazonaws.com\"\n                    }\n                ],\n                \"daemonEndpoints\": {\n                    \"kubeletEndpoint\": {\n                        \"Port\": 10250\n                    }\n                },\n                \"nodeInfo\": {\n                    \"machineID\": \"ec2eb5d989b3c8a25fce864d2d240e9e\",\n                    \"systemUUID\": \"EC2EB5D9-89B3-C8A2-5FCE-864D2D240E9E\",\n                    \"bootID\": \"d369a2ff-3768-40bd-868a-3b4e94998752\",\n                    \"kernelVersion\": \"3.10.0-1160.el7.x86_64\",\n                    \"osImage\": \"Red Hat Enterprise Linux Server 7.9 (Maipo)\",\n                    \"containerRuntimeVersion\": \"containerd://1.4.9\",\n                    \"kubeletVersion\": \"v1.21.4\",\n                    \"kubeProxyVersion\": \"v1.21.4\",\n                    \"operatingSystem\": \"linux\",\n                    \"architecture\": \"amd64\"\n                },\n                \"images\": [\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/etcdadm/etcd-manager@sha256:17c07a22ebd996b93f6484437c684244219e325abeb70611cbaceb78c0f2d5d4\",\n                            \"k8s.gcr.io/etcdadm/etcd-manager:3.0.20210707\"\n                        
],\n                        \"sizeBytes\": 172004323\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kube-apiserver-amd64@sha256:f29008c0c91003edb5e5d87c6e7242e31f7bb814af98c7b885e75aa96f5c37de\",\n                            \"k8s.gcr.io/kube-apiserver-amd64:v1.21.4\"\n                        ],\n                        \"sizeBytes\": 126880221\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kube-controller-manager-amd64:v1.21.4\"\n                        ],\n                        \"sizeBytes\": 121092419\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kops/dns-controller:1.22.0-beta.1\"\n                        ],\n                        \"sizeBytes\": 114167318\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kops/kops-controller:1.22.0-beta.1\"\n                        ],\n                        \"sizeBytes\": 113235479\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kube-proxy-amd64:v1.21.4\"\n                        ],\n                        \"sizeBytes\": 105127625\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kube-scheduler-amd64:v1.21.4\"\n                        ],\n                        \"sizeBytes\": 51890488\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kops/kube-apiserver-healthcheck:1.22.0-beta.1\"\n                        ],\n                        \"sizeBytes\": 25622039\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f\",\n                            \"k8s.gcr.io/pause:3.2\"\n                        ],\n                        \"sizeBytes\": 299513\n                    }\n                ]\n            }\n        }\n    ]\n}\n{\n    \"kind\": \"EventList\",\n    \"apiVersion\": \"v1\",\n    \"metadata\": {\n        \"resourceVersion\": \"334\"\n    },\n    \"items\": [\n        {\n            \"metadata\": {\n                \"name\": \"coredns-5dc785954d-vw7tg.16a2b93ab8b5accd\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"8dc92b84-fea7-4785-8af0-8e60e201c1c3\",\n                \"resourceVersion\": \"81\",\n                \"creationTimestamp\": \"2021-09-08T02:54:36Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-5dc785954d-vw7tg\",\n                \"uid\": \"8a2d0ab5-240f-40ef-a9ce-ff6e5be41b36\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"439\"\n            },\n            \"reason\": \"FailedScheduling\",\n            \"message\": \"0/1 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2021-09-08T02:54:36Z\",\n            \"lastTimestamp\": \"2021-09-08T02:54:40Z\",\n         
   \"count\": 3,\n            \"type\": \"Warning\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-5dc785954d-vw7tg.16a2b94744e952b4\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"235369e7-0887-4f00-ba72-f8da66fa7165\",\n                \"resourceVersion\": \"83\",\n                \"creationTimestamp\": \"2021-09-08T02:55:30Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-5dc785954d-vw7tg\",\n                \"uid\": \"8a2d0ab5-240f-40ef-a9ce-ff6e5be41b36\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"448\"\n            },\n            \"reason\": \"FailedScheduling\",\n            \"message\": \"0/2 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2021-09-08T02:55:30Z\",\n            \"lastTimestamp\": \"2021-09-08T02:55:30Z\",\n            \"count\": 1,\n            \"type\": \"Warning\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-5dc785954d-vw7tg.16a2b94930934863\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"61622c03-47c1-4649-bd60-7ae51b4d0e57\",\n                \"resourceVersion\": \"106\",\n                \"creationTimestamp\": \"2021-09-08T02:55:39Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-5dc785954d-vw7tg\",\n                \"uid\": \"8a2d0ab5-240f-40ef-a9ce-ff6e5be41b36\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"591\"\n            },\n            \"reason\": \"Scheduled\",\n            \"message\": \"Successfully assigned kube-system/coredns-5dc785954d-vw7tg to ip-172-20-36-200.eu-west-3.compute.internal\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2021-09-08T02:55:39Z\",\n            \"lastTimestamp\": \"2021-09-08T02:55:39Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-5dc785954d-vw7tg.16a2b94956e037a0\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"b7a9c06a-eb40-4e46-a1f5-3e9943f7871d\",\n                \"resourceVersion\": \"117\",\n                \"creationTimestamp\": \"2021-09-08T02:55:41Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-5dc785954d-vw7tg\",\n                \"uid\": \"8a2d0ab5-240f-40ef-a9ce-ff6e5be41b36\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"621\",\n   
             \"fieldPath\": \"spec.containers{coredns}\"\n            },\n            \"reason\": \"Pulling\",\n            \"message\": \"Pulling image \\\"k8s.gcr.io/coredns/coredns:v1.8.4\\\"\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-36-200.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-08T02:55:39Z\",\n            \"lastTimestamp\": \"2021-09-08T02:55:39Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-5dc785954d-vw7tg.16a2b94a038e313a\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"a0444600-f8ab-46b6-8649-366f142032f3\",\n                \"resourceVersion\": \"139\",\n                \"creationTimestamp\": \"2021-09-08T02:55:42Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-5dc785954d-vw7tg\",\n                \"uid\": \"8a2d0ab5-240f-40ef-a9ce-ff6e5be41b36\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"621\",\n                \"fieldPath\": \"spec.containers{coredns}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Successfully pulled image \\\"k8s.gcr.io/coredns/coredns:v1.8.4\\\" in 2.897067991s\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-36-200.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-08T02:55:42Z\",\n            \"lastTimestamp\": \"2021-09-08T02:55:42Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-5dc785954d-vw7tg.16a2b94a0938416c\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"4c7fcacc-cdae-4321-add1-ca739f5f7e54\",\n                \"resourceVersion\": \"140\",\n                \"creationTimestamp\": \"2021-09-08T02:55:42Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-5dc785954d-vw7tg\",\n                \"uid\": \"8a2d0ab5-240f-40ef-a9ce-ff6e5be41b36\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"621\",\n                \"fieldPath\": \"spec.containers{coredns}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container coredns\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-36-200.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-08T02:55:42Z\",\n            \"lastTimestamp\": \"2021-09-08T02:55:42Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-5dc785954d-vw7tg.16a2b94a0f5df52a\",\n                \"namespace\": 
\"kube-system\",\n                \"uid\": \"4bb1539a-de1f-4080-b1f3-19e1dfa3028b\",\n                \"resourceVersion\": \"142\",\n                \"creationTimestamp\": \"2021-09-08T02:55:42Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-5dc785954d-vw7tg\",\n                \"uid\": \"8a2d0ab5-240f-40ef-a9ce-ff6e5be41b36\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"621\",\n                \"fieldPath\": \"spec.containers{coredns}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container coredns\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-36-200.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-08T02:55:42Z\",\n            \"lastTimestamp\": \"2021-09-08T02:55:42Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-5dc785954d-zmnzn.16a2b949cc9a4f49\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"497caebb-ef75-4c6d-aff7-c86dd3de92c6\",\n                \"resourceVersion\": \"125\",\n                \"creationTimestamp\": \"2021-09-08T02:55:41Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-5dc785954d-zmnzn\",\n                \"uid\": \"41a71a54-2d28-4885-a955-f01dd5a60fbb\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"649\"\n            },\n            \"reason\": \"Scheduled\",\n            \"message\": \"Successfully assigned kube-system/coredns-5dc785954d-zmnzn to ip-172-20-51-126.eu-west-3.compute.internal\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2021-09-08T02:55:41Z\",\n            \"lastTimestamp\": \"2021-09-08T02:55:41Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-5dc785954d-zmnzn.16a2b949f27e52de\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"74c479ed-5eb6-44af-b6e9-c26dc261a29c\",\n                \"resourceVersion\": \"189\",\n                \"creationTimestamp\": \"2021-09-08T02:55:49Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-5dc785954d-zmnzn\",\n                \"uid\": \"41a71a54-2d28-4885-a955-f01dd5a60fbb\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"654\",\n                \"fieldPath\": \"spec.containers{coredns}\"\n            },\n            \"reason\": \"Pulling\",\n            \"message\": \"Pulling image \\\"k8s.gcr.io/coredns/coredns:v1.8.4\\\"\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-51-126.eu-west-3.compute.internal\"\n            },\n            
\"firstTimestamp\": \"2021-09-08T02:55:42Z\",\n            \"lastTimestamp\": \"2021-09-08T02:55:42Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-5dc785954d-zmnzn.16a2b94a4adaa5c5\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"0b24af7f-10ab-4d74-98a2-268399e8fca7\",\n                \"resourceVersion\": \"190\",\n                \"creationTimestamp\": \"2021-09-08T02:55:49Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-5dc785954d-zmnzn\",\n                \"uid\": \"41a71a54-2d28-4885-a955-f01dd5a60fbb\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"654\",\n                \"fieldPath\": \"spec.containers{coredns}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Successfully pulled image \\\"k8s.gcr.io/coredns/coredns:v1.8.4\\\" in 1.482429906s\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-51-126.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-08T02:55:43Z\",\n            \"lastTimestamp\": \"2021-09-08T02:55:43Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-5dc785954d-zmnzn.16a2b94a51b93927\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"f1fffb5d-2452-4ccd-b1cf-77eb5e1427bd\",\n                \"resourceVersion\": \"191\",\n                \"creationTimestamp\": \"2021-09-08T02:55:49Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-5dc785954d-zmnzn\",\n                \"uid\": \"41a71a54-2d28-4885-a955-f01dd5a60fbb\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"654\",\n                \"fieldPath\": \"spec.containers{coredns}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container coredns\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-51-126.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-08T02:55:43Z\",\n            \"lastTimestamp\": \"2021-09-08T02:55:43Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-5dc785954d-zmnzn.16a2b94a57f1b1ba\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"a90dad91-0b91-4db9-81a6-169b97a6de49\",\n                \"resourceVersion\": \"192\",\n                \"creationTimestamp\": \"2021-09-08T02:55:50Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": 
\"coredns-5dc785954d-zmnzn\",\n                \"uid\": \"41a71a54-2d28-4885-a955-f01dd5a60fbb\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"654\",\n                \"fieldPath\": \"spec.containers{coredns}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container coredns\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-51-126.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-08T02:55:44Z\",\n            \"lastTimestamp\": \"2021-09-08T02:55:44Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-5dc785954d-zmnzn.16a2b94a5a9341bc\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"87eaa96f-de82-411c-bc2f-82c24bb51b38\",\n                \"resourceVersion\": \"193\",\n                \"creationTimestamp\": \"2021-09-08T02:55:50Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-5dc785954d-zmnzn\",\n                \"uid\": \"41a71a54-2d28-4885-a955-f01dd5a60fbb\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"654\",\n                \"fieldPath\": \"spec.containers{coredns}\"\n            },\n            \"reason\": \"Unhealthy\",\n            \"message\": \"Readiness probe failed: HTTP probe failed with statuscode: 503\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-51-126.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-08T02:55:44Z\",\n            \"lastTimestamp\": \"2021-09-08T02:55:44Z\",\n            \"count\": 1,\n            \"type\": \"Warning\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-5dc785954d.16a2b93ab74759a9\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"0bcb3e85-54d8-4b2f-8357-2393adca56ea\",\n                \"resourceVersion\": \"66\",\n                \"creationTimestamp\": \"2021-09-08T02:54:36Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"ReplicaSet\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-5dc785954d\",\n                \"uid\": \"6a9b7235-29b4-4a6d-93a3-8086486ac4f9\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"404\"\n            },\n            \"reason\": \"SuccessfulCreate\",\n            \"message\": \"Created pod: coredns-5dc785954d-vw7tg\",\n            \"source\": {\n                \"component\": \"replicaset-controller\"\n            },\n            \"firstTimestamp\": \"2021-09-08T02:54:36Z\",\n            \"lastTimestamp\": \"2021-09-08T02:54:36Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-5dc785954d.16a2b949cbe00951\",\n        
        \"namespace\": \"kube-system\",\n                \"uid\": \"f4ff39dd-3d64-4696-94fd-5914a81fc337\",\n                \"resourceVersion\": \"124\",\n                \"creationTimestamp\": \"2021-09-08T02:55:41Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"ReplicaSet\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-5dc785954d\",\n                \"uid\": \"6a9b7235-29b4-4a6d-93a3-8086486ac4f9\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"647\"\n            },\n            \"reason\": \"SuccessfulCreate\",\n            \"message\": \"Created pod: coredns-5dc785954d-zmnzn\",\n            \"source\": {\n                \"component\": \"replicaset-controller\"\n            },\n            \"firstTimestamp\": \"2021-09-08T02:55:41Z\",\n            \"lastTimestamp\": \"2021-09-08T02:55:41Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-autoscaler-84d4cfd89c-p9hhk.16a2b93ab78cc644\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"81833f00-9d33-4b87-b806-3622ab2b54e1\",\n                \"resourceVersion\": \"80\",\n                \"creationTimestamp\": \"2021-09-08T02:54:36Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-autoscaler-84d4cfd89c-p9hhk\",\n                \"uid\": \"d2adafee-40d4-4fbb-9c44-e63fc005842d\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"437\"\n            },\n            \"reason\": \"FailedScheduling\",\n            \"message\": \"0/1 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2021-09-08T02:54:36Z\",\n            \"lastTimestamp\": \"2021-09-08T02:54:40Z\",\n            \"count\": 3,\n            \"type\": \"Warning\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-autoscaler-84d4cfd89c-p9hhk.16a2b947435e90ed\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"1402819f-b9d2-47e6-8a37-d5109cb02027\",\n                \"resourceVersion\": \"82\",\n                \"creationTimestamp\": \"2021-09-08T02:55:30Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-autoscaler-84d4cfd89c-p9hhk\",\n                \"uid\": \"d2adafee-40d4-4fbb-9c44-e63fc005842d\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"446\"\n            },\n            \"reason\": \"FailedScheduling\",\n            \"message\": \"0/2 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            
\"firstTimestamp\": \"2021-09-08T02:55:30Z\",\n            \"lastTimestamp\": \"2021-09-08T02:55:30Z\",\n            \"count\": 1,\n            \"type\": \"Warning\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-autoscaler-84d4cfd89c-p9hhk.16a2b949303d2a26\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"630c9371-354c-43ae-a03f-c6ad7ee7c924\",\n                \"resourceVersion\": \"105\",\n                \"creationTimestamp\": \"2021-09-08T02:55:39Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-autoscaler-84d4cfd89c-p9hhk\",\n                \"uid\": \"d2adafee-40d4-4fbb-9c44-e63fc005842d\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"589\"\n            },\n            \"reason\": \"Scheduled\",\n            \"message\": \"Successfully assigned kube-system/coredns-autoscaler-84d4cfd89c-p9hhk to ip-172-20-36-200.eu-west-3.compute.internal\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2021-09-08T02:55:39Z\",\n            \"lastTimestamp\": \"2021-09-08T02:55:39Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-autoscaler-84d4cfd89c-p9hhk.16a2b94956d86310\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"374febd9-8981-478d-a26e-f0392b7b4269\",\n                \"resourceVersion\": \"116\",\n                \"creationTimestamp\": \"2021-09-08T02:55:40Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-autoscaler-84d4cfd89c-p9hhk\",\n                \"uid\": \"d2adafee-40d4-4fbb-9c44-e63fc005842d\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"620\",\n                \"fieldPath\": \"spec.containers{autoscaler}\"\n            },\n            \"reason\": \"Pulling\",\n            \"message\": \"Pulling image \\\"k8s.gcr.io/cpa/cluster-proportional-autoscaler:1.8.4\\\"\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-36-200.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-08T02:55:39Z\",\n            \"lastTimestamp\": \"2021-09-08T02:55:39Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-autoscaler-84d4cfd89c-p9hhk.16a2b949ac7fe613\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"db567010-2744-42bd-87ca-0ab0a1eb5b1f\",\n                \"resourceVersion\": \"120\",\n                \"creationTimestamp\": \"2021-09-08T02:55:41Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": 
\"coredns-autoscaler-84d4cfd89c-p9hhk\",\n                \"uid\": \"d2adafee-40d4-4fbb-9c44-e63fc005842d\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"620\",\n                \"fieldPath\": \"spec.containers{autoscaler}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Successfully pulled image \\\"k8s.gcr.io/cpa/cluster-proportional-autoscaler:1.8.4\\\" in 1.437027443s\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-36-200.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-08T02:55:41Z\",\n            \"lastTimestamp\": \"2021-09-08T02:55:41Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-autoscaler-84d4cfd89c-p9hhk.16a2b949b4dce880\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"52c5255f-78f9-4809-a765-b0cd27ee494e\",\n                \"resourceVersion\": \"121\",\n                \"creationTimestamp\": \"2021-09-08T02:55:41Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-autoscaler-84d4cfd89c-p9hhk\",\n                \"uid\": \"d2adafee-40d4-4fbb-9c44-e63fc005842d\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"620\",\n                \"fieldPath\": \"spec.containers{autoscaler}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container autoscaler\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-36-200.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-08T02:55:41Z\",\n            \"lastTimestamp\": \"2021-09-08T02:55:41Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-autoscaler-84d4cfd89c-p9hhk.16a2b949bad4b561\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"0fc84857-2868-4394-9140-d476c8ed1dd3\",\n                \"resourceVersion\": \"123\",\n                \"creationTimestamp\": \"2021-09-08T02:55:41Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-autoscaler-84d4cfd89c-p9hhk\",\n                \"uid\": \"d2adafee-40d4-4fbb-9c44-e63fc005842d\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"620\",\n                \"fieldPath\": \"spec.containers{autoscaler}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container autoscaler\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-36-200.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-08T02:55:41Z\",\n            \"lastTimestamp\": \"2021-09-08T02:55:41Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n  
          \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-autoscaler-84d4cfd89c.16a2b93ab717f01a\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"4c7cba65-7bb1-4a3a-8a5f-3ef7852d9b51\",\n                \"resourceVersion\": \"65\",\n                \"creationTimestamp\": \"2021-09-08T02:54:36Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"ReplicaSet\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-autoscaler-84d4cfd89c\",\n                \"uid\": \"f25f8884-f9b3-4be0-afc9-4bf4df481ec2\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"403\"\n            },\n            \"reason\": \"SuccessfulCreate\",\n            \"message\": \"Created pod: coredns-autoscaler-84d4cfd89c-p9hhk\",\n            \"source\": {\n                \"component\": \"replicaset-controller\"\n            },\n            \"firstTimestamp\": \"2021-09-08T02:54:36Z\",\n            \"lastTimestamp\": \"2021-09-08T02:54:36Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-autoscaler.16a2b93a9c896689\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"38bbc24e-6877-42b7-87b4-2a0dafacaa8f\",\n                \"resourceVersion\": \"59\",\n                \"creationTimestamp\": \"2021-09-08T02:54:36Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Deployment\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-autoscaler\",\n                \"uid\": \"df822c84-b22b-4ccc-8e9f-cfc16d54bea4\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"240\"\n            },\n            \"reason\": \"ScalingReplicaSet\",\n            \"message\": \"Scaled up replica set coredns-autoscaler-84d4cfd89c to 1\",\n            \"source\": {\n                \"component\": \"deployment-controller\"\n            },\n            \"firstTimestamp\": \"2021-09-08T02:54:36Z\",\n            \"lastTimestamp\": \"2021-09-08T02:54:36Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns.16a2b93a9ccf91a7\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"2f925fbc-06a9-46ac-b0e4-df249f95c8ad\",\n                \"resourceVersion\": \"60\",\n                \"creationTimestamp\": \"2021-09-08T02:54:36Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Deployment\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns\",\n                \"uid\": \"c3cd11cc-a8a6-4a16-a4b9-50b3195b644e\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"233\"\n            },\n            \"reason\": \"ScalingReplicaSet\",\n            \"message\": \"Scaled up replica set coredns-5dc785954d to 1\",\n            \"source\": {\n                \"component\": \"deployment-controller\"\n            },\n            \"firstTimestamp\": 
\"2021-09-08T02:54:36Z\",\n            \"lastTimestamp\": \"2021-09-08T02:54:36Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns.16a2b949cb120063\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"af122835-c42f-4df5-875f-2a636762bd7f\",\n                \"resourceVersion\": \"122\",\n                \"creationTimestamp\": \"2021-09-08T02:55:41Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Deployment\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns\",\n                \"uid\": \"c3cd11cc-a8a6-4a16-a4b9-50b3195b644e\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"646\"\n            },\n            \"reason\": \"ScalingReplicaSet\",\n            \"message\": \"Scaled up replica set coredns-5dc785954d to 2\",\n            \"source\": {\n                \"component\": \"deployment-controller\"\n            },\n            \"firstTimestamp\": \"2021-09-08T02:55:41Z\",\n            \"lastTimestamp\": \"2021-09-08T02:55:41Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"dns-controller-59b7d7865d-jp8tb.16a2b93ab990fca5\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"f3558e26-60c7-4f3e-8489-5be7182b1298\",\n                \"resourceVersion\": \"70\",\n                \"creationTimestamp\": \"2021-09-08T02:54:36Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"dns-controller-59b7d7865d-jp8tb\",\n                \"uid\": \"3edd1abb-4144-4a63-9180-c7422252d68b\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"438\"\n            },\n            \"reason\": \"Scheduled\",\n            \"message\": \"Successfully assigned kube-system/dns-controller-59b7d7865d-jp8tb to ip-172-20-56-43.eu-west-3.compute.internal\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2021-09-08T02:54:36Z\",\n            \"lastTimestamp\": \"2021-09-08T02:54:36Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"dns-controller-59b7d7865d-jp8tb.16a2b93ad570bdc6\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"75f661a3-8201-4bb1-a1cc-27f95469dfc6\",\n                \"resourceVersion\": \"71\",\n                \"creationTimestamp\": \"2021-09-08T02:54:37Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"dns-controller-59b7d7865d-jp8tb\",\n                \"uid\": \"3edd1abb-4144-4a63-9180-c7422252d68b\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"449\",\n                \"fieldPath\": 
\"spec.containers{dns-controller}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"k8s.gcr.io/kops/dns-controller:1.22.0-beta.1\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-56-43.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-08T02:54:37Z\",\n            \"lastTimestamp\": \"2021-09-08T02:54:37Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"dns-controller-59b7d7865d-jp8tb.16a2b93ad70517c2\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"1e1dbfb1-1e30-4ca7-8d94-25196d51ab06\",\n                \"resourceVersion\": \"72\",\n                \"creationTimestamp\": \"2021-09-08T02:54:37Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"dns-controller-59b7d7865d-jp8tb\",\n                \"uid\": \"3edd1abb-4144-4a63-9180-c7422252d68b\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"449\",\n                \"fieldPath\": \"spec.containers{dns-controller}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container dns-controller\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-56-43.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-08T02:54:37Z\",\n            \"lastTimestamp\": \"2021-09-08T02:54:37Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"dns-controller-59b7d7865d-jp8tb.16a2b93adf4c9f5d\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"8325d345-7267-4f0b-b953-f5d8c2050fd4\",\n                \"resourceVersion\": \"75\",\n                \"creationTimestamp\": \"2021-09-08T02:54:37Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"dns-controller-59b7d7865d-jp8tb\",\n                \"uid\": \"3edd1abb-4144-4a63-9180-c7422252d68b\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"449\",\n                \"fieldPath\": \"spec.containers{dns-controller}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container dns-controller\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-56-43.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-08T02:54:37Z\",\n            \"lastTimestamp\": \"2021-09-08T02:54:37Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"dns-controller-59b7d7865d.16a2b93ab74cea66\",\n                
\"namespace\": \"kube-system\",\n                \"uid\": \"45e78366-6d2f-4bc5-8c4f-c48cce5acbc2\",\n                \"resourceVersion\": \"68\",\n                \"creationTimestamp\": \"2021-09-08T02:54:36Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"ReplicaSet\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"dns-controller-59b7d7865d\",\n                \"uid\": \"589127aa-e7a1-4d9a-ae8b-4a4d533c9fe6\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"405\"\n            },\n            \"reason\": \"SuccessfulCreate\",\n            \"message\": \"Created pod: dns-controller-59b7d7865d-jp8tb\",\n            \"source\": {\n                \"component\": \"replicaset-controller\"\n            },\n            \"firstTimestamp\": \"2021-09-08T02:54:36Z\",\n            \"lastTimestamp\": \"2021-09-08T02:54:36Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"dns-controller.16a2b93a9cd3f7d3\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"b52148cc-d894-4bb8-897f-99b5239b2f67\",\n                \"resourceVersion\": \"61\",\n                \"creationTimestamp\": \"2021-09-08T02:54:36Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Deployment\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"dns-controller\",\n                \"uid\": \"8b31f319-f979-4445-a034-797f98bb88c5\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"246\"\n            },\n            \"reason\": \"ScalingReplicaSet\",\n            \"message\": \"Scaled up replica set dns-controller-59b7d7865d to 1\",\n            \"source\": {\n                \"component\": \"deployment-controller\"\n            },\n            \"firstTimestamp\": \"2021-09-08T02:54:36Z\",\n            \"lastTimestamp\": \"2021-09-08T02:54:36Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"etcd-manager-events-ip-172-20-56-43.eu-west-3.compute.internal.16a2b9292e3a4144\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"19f3e0df-e938-465b-9f64-d229346812aa\",\n                \"resourceVersion\": \"19\",\n                \"creationTimestamp\": \"2021-09-08T02:54:09Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"etcd-manager-events-ip-172-20-56-43.eu-west-3.compute.internal\",\n                \"uid\": \"e7230eaa1c7c77446fb6a703f28db2e7\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{etcd-manager}\"\n            },\n            \"reason\": \"Pulling\",\n            \"message\": \"Pulling image \\\"k8s.gcr.io/etcdadm/etcd-manager:3.0.20210707\\\"\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-56-43.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-08T02:53:21Z\",\n            \"lastTimestamp\": 
\"2021-09-08T02:53:21Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"etcd-manager-events-ip-172-20-56-43.eu-west-3.compute.internal.16a2b92cfb6c0442\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"6d735e4e-1cb6-4ae3-be5b-e155071a52e8\",\n                \"resourceVersion\": \"36\",\n                \"creationTimestamp\": \"2021-09-08T02:54:13Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"etcd-manager-events-ip-172-20-56-43.eu-west-3.compute.internal\",\n                \"uid\": \"e7230eaa1c7c77446fb6a703f28db2e7\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{etcd-manager}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Successfully pulled image \\\"k8s.gcr.io/etcdadm/etcd-manager:3.0.20210707\\\" in 16.3274779s\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-56-43.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-08T02:53:37Z\",\n            \"lastTimestamp\": \"2021-09-08T02:53:37Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"etcd-manager-events-ip-172-20-56-43.eu-west-3.compute.internal.16a2b92d41c2ad13\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"3297d6d6-5a21-403b-a3ba-05bd30f15b21\",\n                \"resourceVersion\": \"38\",\n                \"creationTimestamp\": \"2021-09-08T02:54:13Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"etcd-manager-events-ip-172-20-56-43.eu-west-3.compute.internal\",\n                \"uid\": \"e7230eaa1c7c77446fb6a703f28db2e7\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{etcd-manager}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container etcd-manager\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-56-43.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-08T02:53:39Z\",\n            \"lastTimestamp\": \"2021-09-08T02:53:39Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"etcd-manager-events-ip-172-20-56-43.eu-west-3.compute.internal.16a2b92d4b3bf7b5\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"c253326d-f249-43f4-a16c-d9ebe2318c1c\",\n                \"resourceVersion\": \"41\",\n                \"creationTimestamp\": \"2021-09-08T02:54:14Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                
\"name\": \"etcd-manager-events-ip-172-20-56-43.eu-west-3.compute.internal\",\n                \"uid\": \"e7230eaa1c7c77446fb6a703f28db2e7\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{etcd-manager}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container etcd-manager\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-56-43.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-08T02:53:39Z\",\n            \"lastTimestamp\": \"2021-09-08T02:53:39Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"etcd-manager-main-ip-172-20-56-43.eu-west-3.compute.internal.16a2b929311359a2\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"2b63ebba-3975-4faf-a057-609224e7888f\",\n                \"resourceVersion\": \"20\",\n                \"creationTimestamp\": \"2021-09-08T02:54:10Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"etcd-manager-main-ip-172-20-56-43.eu-west-3.compute.internal\",\n                \"uid\": \"2cc1dc3fa8b1012e62d8abe98d85888c\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{etcd-manager}\"\n            },\n            \"reason\": \"Pulling\",\n            \"message\": \"Pulling image \\\"k8s.gcr.io/etcdadm/etcd-manager:3.0.20210707\\\"\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-56-43.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-08T02:53:21Z\",\n            \"lastTimestamp\": \"2021-09-08T02:53:21Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"etcd-manager-main-ip-172-20-56-43.eu-west-3.compute.internal.16a2b92d41842961\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"ee2e57be-7c7b-48c0-af3d-b71dd3aa8086\",\n                \"resourceVersion\": \"37\",\n                \"creationTimestamp\": \"2021-09-08T02:54:13Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"etcd-manager-main-ip-172-20-56-43.eu-west-3.compute.internal\",\n                \"uid\": \"2cc1dc3fa8b1012e62d8abe98d85888c\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{etcd-manager}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Successfully pulled image \\\"k8s.gcr.io/etcdadm/etcd-manager:3.0.20210707\\\" in 17.455684367s\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-56-43.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-08T02:53:39Z\",\n            \"lastTimestamp\": \"2021-09-08T02:53:39Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            
\"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"etcd-manager-main-ip-172-20-56-43.eu-west-3.compute.internal.16a2b92d439f0068\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"77453d16-ca00-40d0-a1c1-b29b68c6f6b3\",\n                \"resourceVersion\": \"39\",\n                \"creationTimestamp\": \"2021-09-08T02:54:13Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"etcd-manager-main-ip-172-20-56-43.eu-west-3.compute.internal\",\n                \"uid\": \"2cc1dc3fa8b1012e62d8abe98d85888c\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{etcd-manager}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container etcd-manager\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-56-43.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-08T02:53:39Z\",\n            \"lastTimestamp\": \"2021-09-08T02:53:39Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"etcd-manager-main-ip-172-20-56-43.eu-west-3.compute.internal.16a2b92d4b3165bd\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"45243e63-6f9d-40cb-8f3b-eaa2744d7d3f\",\n                \"resourceVersion\": \"40\",\n                \"creationTimestamp\": \"2021-09-08T02:54:13Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"etcd-manager-main-ip-172-20-56-43.eu-west-3.compute.internal\",\n                \"uid\": \"2cc1dc3fa8b1012e62d8abe98d85888c\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{etcd-manager}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container etcd-manager\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-56-43.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-08T02:53:39Z\",\n            \"lastTimestamp\": \"2021-09-08T02:53:39Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kops-controller-44znn.16a2b93a98075996\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"2895b96c-8917-4218-897f-b74dc66316c5\",\n                \"resourceVersion\": \"58\",\n                \"creationTimestamp\": \"2021-09-08T02:54:36Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kops-controller-44znn\",\n                \"uid\": \"6f33f969-879a-43f3-9490-d9ac24c5b5ba\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"399\"\n            },\n       
     \"reason\": \"Scheduled\",\n            \"message\": \"Successfully assigned kube-system/kops-controller-44znn to ip-172-20-56-43.eu-west-3.compute.internal\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2021-09-08T02:54:36Z\",\n            \"lastTimestamp\": \"2021-09-08T02:54:36Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kops-controller-44znn.16a2b93aa7cbc7ac\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"d415df18-64b7-4cf7-90aa-cdd00d216292\",\n                \"resourceVersion\": \"62\",\n                \"creationTimestamp\": \"2021-09-08T02:54:36Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kops-controller-44znn\",\n                \"uid\": \"6f33f969-879a-43f3-9490-d9ac24c5b5ba\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"401\"\n            },\n            \"reason\": \"FailedMount\",\n            \"message\": \"MountVolume.SetUp failed for volume \\\"kube-api-access-xfszs\\\" : configmap \\\"kube-root-ca.crt\\\" not found\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-56-43.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-08T02:54:36Z\",\n            \"lastTimestamp\": \"2021-09-08T02:54:36Z\",\n            \"count\": 1,\n            \"type\": \"Warning\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kops-controller-44znn.16a2b93ad8c7c729\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"d7d9ae55-3910-4439-9359-a88977418e41\",\n                \"resourceVersion\": \"73\",\n                \"creationTimestamp\": \"2021-09-08T02:54:37Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kops-controller-44znn\",\n                \"uid\": \"6f33f969-879a-43f3-9490-d9ac24c5b5ba\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"401\",\n                \"fieldPath\": \"spec.containers{kops-controller}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"k8s.gcr.io/kops/kops-controller:1.22.0-beta.1\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-56-43.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-08T02:54:37Z\",\n            \"lastTimestamp\": \"2021-09-08T02:54:37Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kops-controller-44znn.16a2b93adb195f60\",\n                \"namespace\": \"kube-system\",\n                \"uid\": 
\"89702214-7542-4b71-a897-4bce794fdfc9\",\n                \"resourceVersion\": \"74\",\n                \"creationTimestamp\": \"2021-09-08T02:54:37Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kops-controller-44znn\",\n                \"uid\": \"6f33f969-879a-43f3-9490-d9ac24c5b5ba\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"401\",\n                \"fieldPath\": \"spec.containers{kops-controller}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container kops-controller\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-56-43.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-08T02:54:37Z\",\n            \"lastTimestamp\": \"2021-09-08T02:54:37Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kops-controller-44znn.16a2b93ae26dcec4\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"fcb81a74-d0c3-417c-8f20-ddae94880396\",\n                \"resourceVersion\": \"76\",\n                \"creationTimestamp\": \"2021-09-08T02:54:37Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kops-controller-44znn\",\n                \"uid\": \"6f33f969-879a-43f3-9490-d9ac24c5b5ba\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"401\",\n                \"fieldPath\": \"spec.containers{kops-controller}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container kops-controller\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-56-43.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-08T02:54:37Z\",\n            \"lastTimestamp\": \"2021-09-08T02:54:37Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kops-controller-leader.16a2b93b193450e8\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"705b6fc8-1e81-4df5-b516-52e053a0c886\",\n                \"resourceVersion\": \"79\",\n                \"creationTimestamp\": \"2021-09-08T02:54:38Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"ConfigMap\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kops-controller-leader\",\n                \"uid\": \"fa4aa89f-26fc-4939-ad3b-0d8ddf1861d5\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"460\"\n            },\n            \"reason\": \"LeaderElection\",\n            \"message\": \"ip-172-20-56-43.eu-west-3.compute.internal_b38abbeb-1904-47fb-ae1c-35dcefba1fd6 became leader\",\n            \"source\": {\n                \"component\": \"ip-172-20-56-43.eu-west-3.compute.internal_b38abbeb-1904-47fb-ae1c-35dcefba1fd6\"\n            },\n            
\"firstTimestamp\": \"2021-09-08T02:54:38Z\",\n            \"lastTimestamp\": \"2021-09-08T02:54:38Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kops-controller.16a2b93a96eb53e2\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"0230c7e7-b291-4895-939c-2474347b4a85\",\n                \"resourceVersion\": \"57\",\n                \"creationTimestamp\": \"2021-09-08T02:54:36Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"DaemonSet\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kops-controller\",\n                \"uid\": \"f91d23fe-65be-4fb8-a26a-757fd2bb4003\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"220\"\n            },\n            \"reason\": \"SuccessfulCreate\",\n            \"message\": \"Created pod: kops-controller-44znn\",\n            \"source\": {\n                \"component\": \"daemonset-controller\"\n            },\n            \"firstTimestamp\": \"2021-09-08T02:54:36Z\",\n            \"lastTimestamp\": \"2021-09-08T02:54:36Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-apiserver-ip-172-20-56-43.eu-west-3.compute.internal.16a2b9292dc0a8de\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"40b19d60-ecf8-4f59-93f5-e0336f164dca\",\n                \"resourceVersion\": \"18\",\n                \"creationTimestamp\": \"2021-09-08T02:54:09Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-apiserver-ip-172-20-56-43.eu-west-3.compute.internal\",\n                \"uid\": \"6eb91c7695f2cab718b4fb928c30a90e\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-apiserver}\"\n            },\n            \"reason\": \"Pulling\",\n            \"message\": \"Pulling image \\\"k8s.gcr.io/kube-apiserver-amd64:v1.21.4\\\"\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-56-43.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-08T02:53:21Z\",\n            \"lastTimestamp\": \"2021-09-08T02:53:21Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-apiserver-ip-172-20-56-43.eu-west-3.compute.internal.16a2b92b072506a5\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"c6b0c6f3-ae01-49bc-83de-34d59bd683d7\",\n                \"resourceVersion\": \"28\",\n                \"creationTimestamp\": \"2021-09-08T02:54:11Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-apiserver-ip-172-20-56-43.eu-west-3.compute.internal\",\n                \"uid\": 
\"6eb91c7695f2cab718b4fb928c30a90e\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-apiserver}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Successfully pulled image \\\"k8s.gcr.io/kube-apiserver-amd64:v1.21.4\\\" in 7.942185379s\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-56-43.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-08T02:53:29Z\",\n            \"lastTimestamp\": \"2021-09-08T02:53:29Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-apiserver-ip-172-20-56-43.eu-west-3.compute.internal.16a2b92b0f744292\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"d7f7027d-526f-480a-9749-369d247bee40\",\n                \"resourceVersion\": \"46\",\n                \"creationTimestamp\": \"2021-09-08T02:54:11Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-apiserver-ip-172-20-56-43.eu-west-3.compute.internal\",\n                \"uid\": \"6eb91c7695f2cab718b4fb928c30a90e\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-apiserver}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container kube-apiserver\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-56-43.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-08T02:53:29Z\",\n            \"lastTimestamp\": \"2021-09-08T02:53:51Z\",\n            \"count\": 2,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-apiserver-ip-172-20-56-43.eu-west-3.compute.internal.16a2b92b1d11311a\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"19217737-0e8c-44a6-ad13-ed019d1079bc\",\n                \"resourceVersion\": \"47\",\n                \"creationTimestamp\": \"2021-09-08T02:54:12Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-apiserver-ip-172-20-56-43.eu-west-3.compute.internal\",\n                \"uid\": \"6eb91c7695f2cab718b4fb928c30a90e\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-apiserver}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container kube-apiserver\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-56-43.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-08T02:53:29Z\",\n            \"lastTimestamp\": \"2021-09-08T02:53:51Z\",\n            \"count\": 2,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            
\"metadata\": {\n                \"name\": \"kube-apiserver-ip-172-20-56-43.eu-west-3.compute.internal.16a2b92b1d19a311\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"3482376a-904a-4851-ab1b-f72ba8f9cb5f\",\n                \"resourceVersion\": \"33\",\n                \"creationTimestamp\": \"2021-09-08T02:54:12Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-apiserver-ip-172-20-56-43.eu-west-3.compute.internal\",\n                \"uid\": \"6eb91c7695f2cab718b4fb928c30a90e\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{healthcheck}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"k8s.gcr.io/kops/kube-apiserver-healthcheck:1.22.0-beta.1\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-56-43.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-08T02:53:29Z\",\n            \"lastTimestamp\": \"2021-09-08T02:53:29Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-apiserver-ip-172-20-56-43.eu-west-3.compute.internal.16a2b92b27949902\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"5173fcdf-b7e7-4433-90c1-268e58cd0719\",\n                \"resourceVersion\": \"34\",\n                \"creationTimestamp\": \"2021-09-08T02:54:12Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-apiserver-ip-172-20-56-43.eu-west-3.compute.internal\",\n                \"uid\": \"6eb91c7695f2cab718b4fb928c30a90e\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{healthcheck}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container healthcheck\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-56-43.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-08T02:53:30Z\",\n            \"lastTimestamp\": \"2021-09-08T02:53:30Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-apiserver-ip-172-20-56-43.eu-west-3.compute.internal.16a2b92b6f211766\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"bbafe162-dbf9-44e2-84b5-90206e44b7a4\",\n                \"resourceVersion\": \"35\",\n                \"creationTimestamp\": \"2021-09-08T02:54:12Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-apiserver-ip-172-20-56-43.eu-west-3.compute.internal\",\n                \"uid\": \"6eb91c7695f2cab718b4fb928c30a90e\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{healthcheck}\"\n            
},\n            \"reason\": \"Started\",\n            \"message\": \"Started container healthcheck\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-56-43.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-08T02:53:31Z\",\n            \"lastTimestamp\": \"2021-09-08T02:53:31Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-apiserver-ip-172-20-56-43.eu-west-3.compute.internal.16a2b930215b793d\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"ad668fdc-7c25-48b2-ac2f-ab2ece30beed\",\n                \"resourceVersion\": \"45\",\n                \"creationTimestamp\": \"2021-09-08T02:54:14Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-apiserver-ip-172-20-56-43.eu-west-3.compute.internal\",\n                \"uid\": \"6eb91c7695f2cab718b4fb928c30a90e\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-apiserver}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"k8s.gcr.io/kube-apiserver-amd64:v1.21.4\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-56-43.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-08T02:53:51Z\",\n            \"lastTimestamp\": \"2021-09-08T02:53:51Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-controller-manager-ip-172-20-56-43.eu-west-3.compute.internal.16a2b9293af90cf8\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"e2c629fa-f718-418c-9ee3-e20a36a5550a\",\n                \"resourceVersion\": \"52\",\n                \"creationTimestamp\": \"2021-09-08T02:54:10Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-controller-manager-ip-172-20-56-43.eu-west-3.compute.internal\",\n                \"uid\": \"c6fc018e1a7e016998b01becb04eb833\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-controller-manager}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"k8s.gcr.io/kube-controller-manager-amd64:v1.21.4\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-56-43.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-08T02:53:21Z\",\n            \"lastTimestamp\": \"2021-09-08T02:54:21Z\",\n            \"count\": 3,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": 
\"kube-controller-manager-ip-172-20-56-43.eu-west-3.compute.internal.16a2b92b09093447\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"9cfa8eff-a84c-4d51-b7ba-f69d15de5743\",\n                \"resourceVersion\": \"53\",\n                \"creationTimestamp\": \"2021-09-08T02:54:11Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-controller-manager-ip-172-20-56-43.eu-west-3.compute.internal\",\n                \"uid\": \"c6fc018e1a7e016998b01becb04eb833\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-controller-manager}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container kube-controller-manager\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-56-43.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-08T02:53:29Z\",\n            \"lastTimestamp\": \"2021-09-08T02:54:21Z\",\n            \"count\": 3,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-controller-manager-ip-172-20-56-43.eu-west-3.compute.internal.16a2b92b1325c3e4\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"b8bb541b-1dc3-45d1-964b-828b71dac0c8\",\n                \"resourceVersion\": \"54\",\n                \"creationTimestamp\": \"2021-09-08T02:54:12Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-controller-manager-ip-172-20-56-43.eu-west-3.compute.internal\",\n                \"uid\": \"c6fc018e1a7e016998b01becb04eb833\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-controller-manager}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container kube-controller-manager\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-56-43.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-08T02:53:29Z\",\n            \"lastTimestamp\": \"2021-09-08T02:54:21Z\",\n            \"count\": 3,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-controller-manager-ip-172-20-56-43.eu-west-3.compute.internal.16a2b93282d7da79\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"d6644b86-c73c-480d-a81b-e98c13046579\",\n                \"resourceVersion\": \"48\",\n                \"creationTimestamp\": \"2021-09-08T02:54:15Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-controller-manager-ip-172-20-56-43.eu-west-3.compute.internal\",\n                \"uid\": \"c6fc018e1a7e016998b01becb04eb833\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-controller-manager}\"\n            },\n      
      \"reason\": \"Unhealthy\",\n            \"message\": \"Liveness probe failed: Get \\\"https://127.0.0.1:10257/healthz\\\": read tcp 127.0.0.1:39116-\\u003e127.0.0.1:10257: read: connection reset by peer\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-56-43.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-08T02:54:01Z\",\n            \"lastTimestamp\": \"2021-09-08T02:54:01Z\",\n            \"count\": 1,\n            \"type\": \"Warning\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-controller-manager-ip-172-20-56-43.eu-west-3.compute.internal.16a2b932b24a1194\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"618d2b21-928f-4b83-8c1f-9f4680dc5143\",\n                \"resourceVersion\": \"50\",\n                \"creationTimestamp\": \"2021-09-08T02:54:15Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-controller-manager-ip-172-20-56-43.eu-west-3.compute.internal\",\n                \"uid\": \"c6fc018e1a7e016998b01becb04eb833\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-controller-manager}\"\n            },\n            \"reason\": \"BackOff\",\n            \"message\": \"Back-off restarting failed container\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-56-43.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-08T02:54:02Z\",\n            \"lastTimestamp\": \"2021-09-08T02:54:08Z\",\n            \"count\": 2,\n            \"type\": \"Warning\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-controller-manager.16a2b9374c3ce35c\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"5e1b0dfb-d444-458a-b8b3-69e3bd0794ca\",\n                \"resourceVersion\": \"55\",\n                \"creationTimestamp\": \"2021-09-08T02:54:22Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Lease\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-controller-manager\",\n                \"uid\": \"2e8c12cc-9e16-4b62-892e-1ed9ba1e7a54\",\n                \"apiVersion\": \"coordination.k8s.io/v1\",\n                \"resourceVersion\": \"263\"\n            },\n            \"reason\": \"LeaderElection\",\n            \"message\": \"ip-172-20-56-43.eu-west-3.compute.internal_3642bc9d-b3b3-4e28-b708-a58670d9c57a became leader\",\n            \"source\": {\n                \"component\": \"kube-controller-manager\"\n            },\n            \"firstTimestamp\": \"2021-09-08T02:54:22Z\",\n            \"lastTimestamp\": \"2021-09-08T02:54:22Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-dns.16a2b93a7b08d86d\",\n                \"namespace\": \"kube-system\",\n                
\"uid\": \"65b3f05e-ef7b-45dc-aefb-cf27ce36410b\",\n                \"resourceVersion\": \"64\",\n                \"creationTimestamp\": \"2021-09-08T02:54:36Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"PodDisruptionBudget\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-dns\",\n                \"uid\": \"6cce27d5-75f4-48eb-bb68-6955af9ca695\",\n                \"apiVersion\": \"policy/v1\",\n                \"resourceVersion\": \"236\"\n            },\n            \"reason\": \"NoPods\",\n            \"message\": \"No matching pods found\",\n            \"source\": {\n                \"component\": \"controllermanager\"\n            },\n            \"firstTimestamp\": \"2021-09-08T02:54:35Z\",\n            \"lastTimestamp\": \"2021-09-08T02:54:36Z\",\n            \"count\": 2,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-ip-172-20-36-148.eu-west-3.compute.internal.16a2b944027b0639\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"aca68452-847c-4463-a0b9-688209a12aaf\",\n                \"resourceVersion\": \"209\",\n                \"creationTimestamp\": \"2021-09-08T02:55:54Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-ip-172-20-36-148.eu-west-3.compute.internal\",\n                \"uid\": \"2aa857d5c134d1cb3ebf26e969d16a08\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"k8s.gcr.io/kube-proxy-amd64:v1.21.4\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-36-148.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-08T02:55:16Z\",\n            \"lastTimestamp\": \"2021-09-08T02:55:16Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-ip-172-20-36-148.eu-west-3.compute.internal.16a2b9440501de7a\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"6a04f0b4-192d-44dd-8ec8-d6484ae3a4eb\",\n                \"resourceVersion\": \"210\",\n                \"creationTimestamp\": \"2021-09-08T02:55:54Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-ip-172-20-36-148.eu-west-3.compute.internal\",\n                \"uid\": \"2aa857d5c134d1cb3ebf26e969d16a08\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container kube-proxy\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-36-148.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-08T02:55:16Z\",\n            
\"lastTimestamp\": \"2021-09-08T02:55:16Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-ip-172-20-36-148.eu-west-3.compute.internal.16a2b9440a4baa43\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"7f5e55f3-4464-4c3c-9207-51cd0c2ac738\",\n                \"resourceVersion\": \"211\",\n                \"creationTimestamp\": \"2021-09-08T02:55:55Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-ip-172-20-36-148.eu-west-3.compute.internal\",\n                \"uid\": \"2aa857d5c134d1cb3ebf26e969d16a08\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container kube-proxy\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-36-148.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-08T02:55:16Z\",\n            \"lastTimestamp\": \"2021-09-08T02:55:16Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-ip-172-20-36-200.eu-west-3.compute.internal.16a2b94057085f88\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"7bf400a7-3bd3-4a90-84e1-c3f056752cc4\",\n                \"resourceVersion\": \"101\",\n                \"creationTimestamp\": \"2021-09-08T02:55:38Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-ip-172-20-36-200.eu-west-3.compute.internal\",\n                \"uid\": \"aa3a51376ce0f88669bd30458cd4e672\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"k8s.gcr.io/kube-proxy-amd64:v1.21.4\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-36-200.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-08T02:55:01Z\",\n            \"lastTimestamp\": \"2021-09-08T02:55:01Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-ip-172-20-36-200.eu-west-3.compute.internal.16a2b940595908f4\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"4680ac00-6050-41aa-bf31-b7a90feb3701\",\n                \"resourceVersion\": \"102\",\n                \"creationTimestamp\": \"2021-09-08T02:55:38Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": 
\"kube-proxy-ip-172-20-36-200.eu-west-3.compute.internal\",\n                \"uid\": \"aa3a51376ce0f88669bd30458cd4e672\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container kube-proxy\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-36-200.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-08T02:55:01Z\",\n            \"lastTimestamp\": \"2021-09-08T02:55:01Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-ip-172-20-36-200.eu-west-3.compute.internal.16a2b9405f6f4598\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"ebe7c669-ca65-4566-98ba-4f4382ec0281\",\n                \"resourceVersion\": \"103\",\n                \"creationTimestamp\": \"2021-09-08T02:55:38Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-ip-172-20-36-200.eu-west-3.compute.internal\",\n                \"uid\": \"aa3a51376ce0f88669bd30458cd4e672\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container kube-proxy\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-36-200.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-08T02:55:01Z\",\n            \"lastTimestamp\": \"2021-09-08T02:55:01Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-ip-172-20-49-112.eu-west-3.compute.internal.16a2b942c96b756d\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"b2f38333-65ec-495f-b844-b6657e333776\",\n                \"resourceVersion\": \"146\",\n                \"creationTimestamp\": \"2021-09-08T02:55:43Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-ip-172-20-49-112.eu-west-3.compute.internal\",\n                \"uid\": \"8049007ca81c3ddcfc26fc24935e9bd5\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"k8s.gcr.io/kube-proxy-amd64:v1.21.4\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-49-112.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-08T02:55:11Z\",\n            \"lastTimestamp\": \"2021-09-08T02:55:11Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            
\"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-ip-172-20-49-112.eu-west-3.compute.internal.16a2b942cb9fe367\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"114bc3ae-e0cf-4eff-8650-6832df47e9c9\",\n                \"resourceVersion\": \"147\",\n                \"creationTimestamp\": \"2021-09-08T02:55:43Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-ip-172-20-49-112.eu-west-3.compute.internal\",\n                \"uid\": \"8049007ca81c3ddcfc26fc24935e9bd5\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container kube-proxy\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-49-112.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-08T02:55:11Z\",\n            \"lastTimestamp\": \"2021-09-08T02:55:11Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-ip-172-20-49-112.eu-west-3.compute.internal.16a2b942d47f3915\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"3db64cd9-d9e6-492d-a762-0bb2d11f364f\",\n                \"resourceVersion\": \"148\",\n                \"creationTimestamp\": \"2021-09-08T02:55:43Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-ip-172-20-49-112.eu-west-3.compute.internal\",\n                \"uid\": \"8049007ca81c3ddcfc26fc24935e9bd5\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container kube-proxy\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-49-112.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-08T02:55:11Z\",\n            \"lastTimestamp\": \"2021-09-08T02:55:11Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-ip-172-20-51-126.eu-west-3.compute.internal.16a2b942c0c876e7\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"e2cbfbda-1098-4abc-84f6-cea59c0cbd2e\",\n                \"resourceVersion\": \"175\",\n                \"creationTimestamp\": \"2021-09-08T02:55:46Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-ip-172-20-51-126.eu-west-3.compute.internal\",\n                \"uid\": \"545c7dd8f5042d8c547eda8748d9f873\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": 
\"Pulled\",\n            \"message\": \"Container image \\\"k8s.gcr.io/kube-proxy-amd64:v1.21.4\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-51-126.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-08T02:55:11Z\",\n            \"lastTimestamp\": \"2021-09-08T02:55:11Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-ip-172-20-51-126.eu-west-3.compute.internal.16a2b942c393cbb1\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"6c56f22b-1f6f-4407-bb65-5d9b6d61e366\",\n                \"resourceVersion\": \"176\",\n                \"creationTimestamp\": \"2021-09-08T02:55:47Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-ip-172-20-51-126.eu-west-3.compute.internal\",\n                \"uid\": \"545c7dd8f5042d8c547eda8748d9f873\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container kube-proxy\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-51-126.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-08T02:55:11Z\",\n            \"lastTimestamp\": \"2021-09-08T02:55:11Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-ip-172-20-51-126.eu-west-3.compute.internal.16a2b942ce0aa5ad\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"e895d2cd-1dc0-42be-a0c2-88ca4f3e6e98\",\n                \"resourceVersion\": \"177\",\n                \"creationTimestamp\": \"2021-09-08T02:55:47Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-ip-172-20-51-126.eu-west-3.compute.internal\",\n                \"uid\": \"545c7dd8f5042d8c547eda8748d9f873\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container kube-proxy\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-51-126.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-08T02:55:11Z\",\n            \"lastTimestamp\": \"2021-09-08T02:55:11Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-ip-172-20-56-43.eu-west-3.compute.internal.16a2b929279ccd7e\",\n                \"namespace\": \"kube-system\",\n                \"uid\": 
\"356c65b9-a3cd-4253-b8f2-194bc842f746\",\n                \"resourceVersion\": \"17\",\n                \"creationTimestamp\": \"2021-09-08T02:54:09Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-ip-172-20-56-43.eu-west-3.compute.internal\",\n                \"uid\": \"aee5a60ea74b5fe4eaaf551b14d85ff2\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"k8s.gcr.io/kube-proxy-amd64:v1.21.4\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-56-43.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-08T02:53:21Z\",\n            \"lastTimestamp\": \"2021-09-08T02:53:21Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-ip-172-20-56-43.eu-west-3.compute.internal.16a2b92a5f77fa98\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"985e694d-0326-41c4-afd2-fa076837f5e5\",\n                \"resourceVersion\": \"25\",\n                \"creationTimestamp\": \"2021-09-08T02:54:10Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-ip-172-20-56-43.eu-west-3.compute.internal\",\n                \"uid\": \"aee5a60ea74b5fe4eaaf551b14d85ff2\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container kube-proxy\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-56-43.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-08T02:53:26Z\",\n            \"lastTimestamp\": \"2021-09-08T02:53:26Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-ip-172-20-56-43.eu-west-3.compute.internal.16a2b92a7d81d778\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"823f3c95-f1b3-4f9f-b69c-268c76c899f3\",\n                \"resourceVersion\": \"27\",\n                \"creationTimestamp\": \"2021-09-08T02:54:11Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-ip-172-20-56-43.eu-west-3.compute.internal\",\n                \"uid\": \"aee5a60ea74b5fe4eaaf551b14d85ff2\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container kube-proxy\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": 
\"ip-172-20-56-43.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-08T02:53:27Z\",\n            \"lastTimestamp\": \"2021-09-08T02:53:27Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-scheduler-ip-172-20-56-43.eu-west-3.compute.internal.16a2b92931fdea9b\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"98ef5de3-10f4-4049-8698-3d263a2a43f2\",\n                \"resourceVersion\": \"21\",\n                \"creationTimestamp\": \"2021-09-08T02:54:10Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-scheduler-ip-172-20-56-43.eu-west-3.compute.internal\",\n                \"uid\": \"fa2b43aec7f242d6d5862430b4bee203\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-scheduler}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"k8s.gcr.io/kube-scheduler-amd64:v1.21.4\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-56-43.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-08T02:53:21Z\",\n            \"lastTimestamp\": \"2021-09-08T02:53:21Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-scheduler-ip-172-20-56-43.eu-west-3.compute.internal.16a2b92a4c7821b2\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"a86f84fb-47a0-4059-96e9-3e50d6d9f93e\",\n                \"resourceVersion\": \"23\",\n                \"creationTimestamp\": \"2021-09-08T02:54:10Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-scheduler-ip-172-20-56-43.eu-west-3.compute.internal\",\n                \"uid\": \"fa2b43aec7f242d6d5862430b4bee203\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-scheduler}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container kube-scheduler\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-56-43.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-08T02:53:26Z\",\n            \"lastTimestamp\": \"2021-09-08T02:53:26Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-scheduler-ip-172-20-56-43.eu-west-3.compute.internal.16a2b92a62dcc684\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"6aafc7f5-33ec-4bd6-8441-a63140c28112\",\n                \"resourceVersion\": \"26\",\n                \"creationTimestamp\": \"2021-09-08T02:54:11Z\"\n            },\n        
    \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-scheduler-ip-172-20-56-43.eu-west-3.compute.internal\",\n                \"uid\": \"fa2b43aec7f242d6d5862430b4bee203\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-scheduler}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container kube-scheduler\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-56-43.eu-west-3.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-08T02:53:26Z\",\n            \"lastTimestamp\": \"2021-09-08T02:53:26Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-scheduler.16a2b936c7711a4f\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"457f9b95-afb4-4fbf-925b-402d932e1de7\",\n                \"resourceVersion\": \"51\",\n                \"creationTimestamp\": \"2021-09-08T02:54:19Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Lease\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-scheduler\",\n                \"uid\": \"e0d42a73-f9c2-40aa-9eb4-21cf7b95dd6b\",\n                \"apiVersion\": \"coordination.k8s.io/v1\",\n                \"resourceVersion\": \"259\"\n            },\n            \"reason\": \"LeaderElection\",\n            \"message\": \"ip-172-20-56-43.eu-west-3.compute.internal_ac290fa5-ab45-4d50-a77f-bfba50ccdac4 became leader\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2021-09-08T02:54:19Z\",\n            \"lastTimestamp\": \"2021-09-08T02:54:19Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        }\n    ]\n}\n{\n    \"kind\": \"ReplicationControllerList\",\n    \"apiVersion\": \"v1\",\n    \"metadata\": {\n        \"resourceVersion\": \"2092\"\n    },\n    \"items\": []\n}\n{\n    \"kind\": \"ServiceList\",\n    \"apiVersion\": \"v1\",\n    \"metadata\": {\n        \"resourceVersion\": \"2104\"\n    },\n    \"items\": [\n        {\n            \"metadata\": {\n                \"name\": \"kube-dns\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"05e46510-b131-468e-be33-2b682a783dae\",\n                \"resourceVersion\": \"235\",\n                \"creationTimestamp\": \"2021-09-08T02:54:09Z\",\n                \"labels\": {\n                    \"addon.kops.k8s.io/name\": \"coredns.addons.k8s.io\",\n                    \"app.kubernetes.io/managed-by\": \"kops\",\n                    \"k8s-addon\": \"coredns.addons.k8s.io\",\n                    \"k8s-app\": \"kube-dns\",\n                    \"kubernetes.io/cluster-service\": \"true\",\n                    \"kubernetes.io/name\": \"CoreDNS\"\n                },\n                \"annotations\": {\n                    \"kubectl.kubernetes.io/last-applied-configuration\": 
\"{\\\"apiVersion\\\":\\\"v1\\\",\\\"kind\\\":\\\"Service\\\",\\\"metadata\\\":{\\\"annotations\\\":{\\\"prometheus.io/port\\\":\\\"9153\\\",\\\"prometheus.io/scrape\\\":\\\"true\\\"},\\\"creationTimestamp\\\":null,\\\"labels\\\":{\\\"addon.kops.k8s.io/name\\\":\\\"coredns.addons.k8s.io\\\",\\\"app.kubernetes.io/managed-by\\\":\\\"kops\\\",\\\"k8s-addon\\\":\\\"coredns.addons.k8s.io\\\",\\\"k8s-app\\\":\\\"kube-dns\\\",\\\"kubernetes.io/cluster-service\\\":\\\"true\\\",\\\"kubernetes.io/name\\\":\\\"CoreDNS\\\"},\\\"name\\\":\\\"kube-dns\\\",\\\"namespace\\\":\\\"kube-system\\\",\\\"resourceVersion\\\":\\\"0\\\"},\\\"spec\\\":{\\\"clusterIP\\\":\\\"100.64.0.10\\\",\\\"ports\\\":[{\\\"name\\\":\\\"dns\\\",\\\"port\\\":53,\\\"protocol\\\":\\\"UDP\\\"},{\\\"name\\\":\\\"dns-tcp\\\",\\\"port\\\":53,\\\"protocol\\\":\\\"TCP\\\"},{\\\"name\\\":\\\"metrics\\\",\\\"port\\\":9153,\\\"protocol\\\":\\\"TCP\\\"}],\\\"selector\\\":{\\\"k8s-app\\\":\\\"kube-dns\\\"}}}\\n\",\n                    \"prometheus.io/port\": \"9153\",\n                    \"prometheus.io/scrape\": \"true\"\n                }\n            },\n            \"spec\": {\n                \"ports\": [\n                    {\n                        \"name\": \"dns\",\n                        \"protocol\": \"UDP\",\n                        \"port\": 53,\n                        \"targetPort\": 53\n                    },\n                    {\n                        \"name\": \"dns-tcp\",\n                        \"protocol\": \"TCP\",\n                        \"port\": 53,\n                        \"targetPort\": 53\n                    },\n                    {\n                        \"name\": \"metrics\",\n                        \"protocol\": \"TCP\",\n                        \"port\": 9153,\n                        \"targetPort\": 9153\n                    }\n                ],\n                \"selector\": {\n                    \"k8s-app\": \"kube-dns\"\n                },\n                \"clusterIP\": \"100.64.0.10\",\n                \"clusterIPs\": [\n                    \"100.64.0.10\"\n                ],\n                \"type\": \"ClusterIP\",\n                \"sessionAffinity\": \"None\",\n                \"ipFamilies\": [\n                    \"IPv4\"\n                ],\n                \"ipFamilyPolicy\": \"SingleStack\"\n            },\n            \"status\": {\n                \"loadBalancer\": {}\n            }\n        }\n    ]\n}\n{\n    \"kind\": \"DaemonSetList\",\n    \"apiVersion\": \"apps/v1\",\n    \"metadata\": {\n        \"resourceVersion\": \"2112\"\n    },\n    \"items\": [\n        {\n            \"metadata\": {\n                \"name\": \"kops-controller\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"f91d23fe-65be-4fb8-a26a-757fd2bb4003\",\n                \"resourceVersion\": \"461\",\n                \"generation\": 1,\n                \"creationTimestamp\": \"2021-09-08T02:54:08Z\",\n                \"labels\": {\n                    \"addon.kops.k8s.io/name\": \"kops-controller.addons.k8s.io\",\n                    \"app.kubernetes.io/managed-by\": \"kops\",\n                    \"k8s-addon\": \"kops-controller.addons.k8s.io\",\n                    \"k8s-app\": \"kops-controller\",\n                    \"version\": \"v1.22.0-beta.1\"\n                },\n                \"annotations\": {\n                    \"deprecated.daemonset.template.generation\": \"1\",\n                    \"kubectl.kubernetes.io/last-applied-configuration\": 
\"{\\\"apiVersion\\\":\\\"apps/v1\\\",\\\"kind\\\":\\\"DaemonSet\\\",\\\"metadata\\\":{\\\"annotations\\\":{},\\\"creationTimestamp\\\":null,\\\"labels\\\":{\\\"addon.kops.k8s.io/name\\\":\\\"kops-controller.addons.k8s.io\\\",\\\"app.kubernetes.io/managed-by\\\":\\\"kops\\\",\\\"k8s-addon\\\":\\\"kops-controller.addons.k8s.io\\\",\\\"k8s-app\\\":\\\"kops-controller\\\",\\\"version\\\":\\\"v1.22.0-beta.1\\\"},\\\"name\\\":\\\"kops-controller\\\",\\\"namespace\\\":\\\"kube-system\\\"},\\\"spec\\\":{\\\"selector\\\":{\\\"matchLabels\\\":{\\\"k8s-app\\\":\\\"kops-controller\\\"}},\\\"template\\\":{\\\"metadata\\\":{\\\"annotations\\\":{\\\"dns.alpha.kubernetes.io/internal\\\":\\\"kops-controller.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io\\\"},\\\"labels\\\":{\\\"k8s-addon\\\":\\\"kops-controller.addons.k8s.io\\\",\\\"k8s-app\\\":\\\"kops-controller\\\",\\\"version\\\":\\\"v1.22.0-beta.1\\\"}},\\\"spec\\\":{\\\"containers\\\":[{\\\"command\\\":[\\\"/kops-controller\\\",\\\"--v=2\\\",\\\"--conf=/etc/kubernetes/kops-controller/config/config.yaml\\\"],\\\"env\\\":[{\\\"name\\\":\\\"KUBERNETES_SERVICE_HOST\\\",\\\"value\\\":\\\"127.0.0.1\\\"}],\\\"image\\\":\\\"k8s.gcr.io/kops/kops-controller:1.22.0-beta.1\\\",\\\"name\\\":\\\"kops-controller\\\",\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"50m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"securityContext\\\":{\\\"runAsNonRoot\\\":true},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/kops-controller/config/\\\",\\\"name\\\":\\\"kops-controller-config\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/kops-controller/pki/\\\",\\\"name\\\":\\\"kops-controller-pki\\\"}]}],\\\"dnsPolicy\\\":\\\"Default\\\",\\\"hostNetwork\\\":true,\\\"nodeSelector\\\":{\\\"kops.k8s.io/kops-controller-pki\\\":\\\"\\\",\\\"node-role.kubernetes.io/master\\\":\\\"\\\"},\\\"priorityClassName\\\":\\\"system-node-critical\\\",\\\"serviceAccount\\\":\\\"kops-controller\\\",\\\"tolerations\\\":[{\\\"key\\\":\\\"node-role.kubernetes.io/master\\\",\\\"operator\\\":\\\"Exists\\\"}],\\\"volumes\\\":[{\\\"configMap\\\":{\\\"name\\\":\\\"kops-controller\\\"},\\\"name\\\":\\\"kops-controller-config\\\"},{\\\"hostPath\\\":{\\\"path\\\":\\\"/etc/kubernetes/kops-controller/\\\",\\\"type\\\":\\\"Directory\\\"},\\\"name\\\":\\\"kops-controller-pki\\\"}]}},\\\"updateStrategy\\\":{\\\"type\\\":\\\"OnDelete\\\"}}}\\n\"\n                }\n            },\n            \"spec\": {\n                \"selector\": {\n                    \"matchLabels\": {\n                        \"k8s-app\": \"kops-controller\"\n                    }\n                },\n                \"template\": {\n                    \"metadata\": {\n                        \"creationTimestamp\": null,\n                        \"labels\": {\n                            \"k8s-addon\": \"kops-controller.addons.k8s.io\",\n                            \"k8s-app\": \"kops-controller\",\n                            \"version\": \"v1.22.0-beta.1\"\n                        },\n                        \"annotations\": {\n                            \"dns.alpha.kubernetes.io/internal\": \"kops-controller.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io\"\n                        }\n                    },\n                    \"spec\": {\n                        \"volumes\": [\n                            {\n                                \"name\": \"kops-controller-config\",\n                                \"configMap\": {\n                                    \"name\": \"kops-controller\",\n                            
        \"defaultMode\": 420\n                                }\n                            },\n                            {\n                                \"name\": \"kops-controller-pki\",\n                                \"hostPath\": {\n                                    \"path\": \"/etc/kubernetes/kops-controller/\",\n                                    \"type\": \"Directory\"\n                                }\n                            }\n                        ],\n                        \"containers\": [\n                            {\n                                \"name\": \"kops-controller\",\n                                \"image\": \"k8s.gcr.io/kops/kops-controller:1.22.0-beta.1\",\n                                \"command\": [\n                                    \"/kops-controller\",\n                                    \"--v=2\",\n                                    \"--conf=/etc/kubernetes/kops-controller/config/config.yaml\"\n                                ],\n                                \"env\": [\n                                    {\n                                        \"name\": \"KUBERNETES_SERVICE_HOST\",\n                                        \"value\": \"127.0.0.1\"\n                                    }\n                                ],\n                                \"resources\": {\n                                    \"requests\": {\n                                        \"cpu\": \"50m\",\n                                        \"memory\": \"50Mi\"\n                                    }\n                                },\n                                \"volumeMounts\": [\n                                    {\n                                        \"name\": \"kops-controller-config\",\n                                        \"mountPath\": \"/etc/kubernetes/kops-controller/config/\"\n                                    },\n                                    {\n                                        \"name\": \"kops-controller-pki\",\n                                        \"mountPath\": \"/etc/kubernetes/kops-controller/pki/\"\n                                    }\n                                ],\n                                \"terminationMessagePath\": \"/dev/termination-log\",\n                                \"terminationMessagePolicy\": \"File\",\n                                \"imagePullPolicy\": \"IfNotPresent\",\n                                \"securityContext\": {\n                                    \"runAsNonRoot\": true\n                                }\n                            }\n                        ],\n                        \"restartPolicy\": \"Always\",\n                        \"terminationGracePeriodSeconds\": 30,\n                        \"dnsPolicy\": \"Default\",\n                        \"nodeSelector\": {\n                            \"kops.k8s.io/kops-controller-pki\": \"\",\n                            \"node-role.kubernetes.io/master\": \"\"\n                        },\n                        \"serviceAccountName\": \"kops-controller\",\n                        \"serviceAccount\": \"kops-controller\",\n                        \"hostNetwork\": true,\n                        \"securityContext\": {},\n                        \"schedulerName\": \"default-scheduler\",\n                        \"tolerations\": [\n                            {\n                                \"key\": \"node-role.kubernetes.io/master\",\n                                \"operator\": \"Exists\"\n    
                        }\n                        ],\n                        \"priorityClassName\": \"system-node-critical\"\n                    }\n                },\n                \"updateStrategy\": {\n                    \"type\": \"OnDelete\"\n                },\n                \"revisionHistoryLimit\": 10\n            },\n            \"status\": {\n                \"currentNumberScheduled\": 1,\n                \"numberMisscheduled\": 0,\n                \"desiredNumberScheduled\": 1,\n                \"numberReady\": 1,\n                \"observedGeneration\": 1,\n                \"updatedNumberScheduled\": 1,\n                \"numberAvailable\": 1\n            }\n        }\n    ]\n}\n
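
The DaemonSetList above shows kops-controller fully scheduled: desiredNumberScheduled, numberReady, and numberAvailable are all 1, matching the single master this job requested. Run against a live cluster rather than a dump, a quick equivalent health check might look like this (a sketch; the jsonpath fields mirror the status block above):

  # Wait for the kops-controller DaemonSet rollout, then print ready/desired counts
  kubectl -n kube-system rollout status daemonset/kops-controller
  kubectl -n kube-system get daemonset kops-controller -o jsonpath='{.status.numberReady}/{.status.desiredNumberScheduled}'
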
\"{\\\"apiVersion\\\":\\\"apps/v1\\\",\\\"kind\\\":\\\"Deployment\\\",\\\"metadata\\\":{\\\"annotations\\\":{},\\\"creationTimestamp\\\":null,\\\"labels\\\":{\\\"addon.kops.k8s.io/name\\\":\\\"coredns.addons.k8s.io\\\",\\\"app.kubernetes.io/managed-by\\\":\\\"kops\\\",\\\"k8s-addon\\\":\\\"coredns.addons.k8s.io\\\",\\\"k8s-app\\\":\\\"kube-dns\\\",\\\"kubernetes.io/cluster-service\\\":\\\"true\\\",\\\"kubernetes.io/name\\\":\\\"CoreDNS\\\"},\\\"name\\\":\\\"coredns\\\",\\\"namespace\\\":\\\"kube-system\\\"},\\\"spec\\\":{\\\"selector\\\":{\\\"matchLabels\\\":{\\\"k8s-app\\\":\\\"kube-dns\\\"}},\\\"strategy\\\":{\\\"rollingUpdate\\\":{\\\"maxSurge\\\":\\\"10%\\\",\\\"maxUnavailable\\\":1},\\\"type\\\":\\\"RollingUpdate\\\"},\\\"template\\\":{\\\"metadata\\\":{\\\"labels\\\":{\\\"k8s-app\\\":\\\"kube-dns\\\"}},\\\"spec\\\":{\\\"affinity\\\":{\\\"podAntiAffinity\\\":{\\\"preferredDuringSchedulingIgnoredDuringExecution\\\":[{\\\"podAffinityTerm\\\":{\\\"labelSelector\\\":{\\\"matchExpressions\\\":[{\\\"key\\\":\\\"k8s-app\\\",\\\"operator\\\":\\\"In\\\",\\\"values\\\":[\\\"kube-dns\\\"]}]},\\\"topologyKey\\\":\\\"kubernetes.io/hostname\\\"},\\\"weight\\\":100}]}},\\\"containers\\\":[{\\\"args\\\":[\\\"-conf\\\",\\\"/etc/coredns/Corefile\\\"],\\\"image\\\":\\\"k8s.gcr.io/coredns/coredns:v1.8.4\\\",\\\"imagePullPolicy\\\":\\\"IfNotPresent\\\",\\\"livenessProbe\\\":{\\\"failureThreshold\\\":5,\\\"httpGet\\\":{\\\"path\\\":\\\"/health\\\",\\\"port\\\":8080,\\\"scheme\\\":\\\"HTTP\\\"},\\\"initialDelaySeconds\\\":60,\\\"successThreshold\\\":1,\\\"timeoutSeconds\\\":5},\\\"name\\\":\\\"coredns\\\",\\\"ports\\\":[{\\\"containerPort\\\":53,\\\"name\\\":\\\"dns\\\",\\\"protocol\\\":\\\"UDP\\\"},{\\\"containerPort\\\":53,\\\"name\\\":\\\"dns-tcp\\\",\\\"protocol\\\":\\\"TCP\\\"},{\\\"containerPort\\\":9153,\\\"name\\\":\\\"metrics\\\",\\\"protocol\\\":\\\"TCP\\\"}],\\\"readinessProbe\\\":{\\\"httpGet\\\":{\\\"path\\\":\\\"/ready\\\",\\\"port\\\":8181,\\\"scheme\\\":\\\"HTTP\\\"}},\\\"resources\\\":{\\\"limits\\\":{\\\"memory\\\":\\\"170Mi\\\"},\\\"requests\\\":{\\\"cpu\\\":\\\"100m\\\",\\\"memory\\\":\\\"70Mi\\\"}},\\\"securityContext\\\":{\\\"allowPrivilegeEscalation\\\":false,\\\"capabilities\\\":{\\\"add\\\":[\\\"NET_BIND_SERVICE\\\"],\\\"drop\\\":[\\\"all\\\"]},\\\"readOnlyRootFilesystem\\\":true},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/coredns\\\",\\\"name\\\":\\\"config-volume\\\",\\\"readOnly\\\":true}]}],\\\"dnsPolicy\\\":\\\"Default\\\",\\\"nodeSelector\\\":{\\\"kubernetes.io/os\\\":\\\"linux\\\"},\\\"priorityClassName\\\":\\\"system-cluster-critical\\\",\\\"serviceAccountName\\\":\\\"coredns\\\",\\\"tolerations\\\":[{\\\"key\\\":\\\"CriticalAddonsOnly\\\",\\\"operator\\\":\\\"Exists\\\"}],\\\"volumes\\\":[{\\\"configMap\\\":{\\\"items\\\":[{\\\"key\\\":\\\"Corefile\\\",\\\"path\\\":\\\"Corefile\\\"}],\\\"name\\\":\\\"coredns\\\"},\\\"name\\\":\\\"config-volume\\\"}]}}}}\\n\"\n                }\n            },\n            \"spec\": {\n                \"replicas\": 2,\n                \"selector\": {\n                    \"matchLabels\": {\n                        \"k8s-app\": \"kube-dns\"\n                    }\n                },\n                \"template\": {\n                    \"metadata\": {\n                        \"creationTimestamp\": null,\n                        \"labels\": {\n                            \"k8s-app\": \"kube-dns\"\n                        }\n                    },\n                    \"spec\": {\n                        \"volumes\": [\n                 
           {\n                                \"name\": \"config-volume\",\n                                \"configMap\": {\n                                    \"name\": \"coredns\",\n                                    \"items\": [\n                                        {\n                                            \"key\": \"Corefile\",\n                                            \"path\": \"Corefile\"\n                                        }\n                                    ],\n                                    \"defaultMode\": 420\n                                }\n                            }\n                        ],\n                        \"containers\": [\n                            {\n                                \"name\": \"coredns\",\n                                \"image\": \"k8s.gcr.io/coredns/coredns:v1.8.4\",\n                                \"args\": [\n                                    \"-conf\",\n                                    \"/etc/coredns/Corefile\"\n                                ],\n                                \"ports\": [\n                                    {\n                                        \"name\": \"dns\",\n                                        \"containerPort\": 53,\n                                        \"protocol\": \"UDP\"\n                                    },\n                                    {\n                                        \"name\": \"dns-tcp\",\n                                        \"containerPort\": 53,\n                                        \"protocol\": \"TCP\"\n                                    },\n                                    {\n                                        \"name\": \"metrics\",\n                                        \"containerPort\": 9153,\n                                        \"protocol\": \"TCP\"\n                                    }\n                                ],\n                                \"resources\": {\n                                    \"limits\": {\n                                        \"memory\": \"170Mi\"\n                                    },\n                                    \"requests\": {\n                                        \"cpu\": \"100m\",\n                                        \"memory\": \"70Mi\"\n                                    }\n                                },\n                                \"volumeMounts\": [\n                                    {\n                                        \"name\": \"config-volume\",\n                                        \"readOnly\": true,\n                                        \"mountPath\": \"/etc/coredns\"\n                                    }\n                                ],\n                                \"livenessProbe\": {\n                                    \"httpGet\": {\n                                        \"path\": \"/health\",\n                                        \"port\": 8080,\n                                        \"scheme\": \"HTTP\"\n                                    },\n                                    \"initialDelaySeconds\": 60,\n                                    \"timeoutSeconds\": 5,\n                                    \"periodSeconds\": 10,\n                                    \"successThreshold\": 1,\n                                    \"failureThreshold\": 5\n                                },\n                                \"readinessProbe\": {\n                                    \"httpGet\": {\n     
                                   \"path\": \"/ready\",\n                                        \"port\": 8181,\n                                        \"scheme\": \"HTTP\"\n                                    },\n                                    \"timeoutSeconds\": 1,\n                                    \"periodSeconds\": 10,\n                                    \"successThreshold\": 1,\n                                    \"failureThreshold\": 3\n                                },\n                                \"terminationMessagePath\": \"/dev/termination-log\",\n                                \"terminationMessagePolicy\": \"File\",\n                                \"imagePullPolicy\": \"IfNotPresent\",\n                                \"securityContext\": {\n                                    \"capabilities\": {\n                                        \"add\": [\n                                            \"NET_BIND_SERVICE\"\n                                        ],\n                                        \"drop\": [\n                                            \"all\"\n                                        ]\n                                    },\n                                    \"readOnlyRootFilesystem\": true,\n                                    \"allowPrivilegeEscalation\": false\n                                }\n                            }\n                        ],\n                        \"restartPolicy\": \"Always\",\n                        \"terminationGracePeriodSeconds\": 30,\n                        \"dnsPolicy\": \"Default\",\n                        \"nodeSelector\": {\n                            \"kubernetes.io/os\": \"linux\"\n                        },\n                        \"serviceAccountName\": \"coredns\",\n                        \"serviceAccount\": \"coredns\",\n                        \"securityContext\": {},\n                        \"affinity\": {\n                            \"podAntiAffinity\": {\n                                \"preferredDuringSchedulingIgnoredDuringExecution\": [\n                                    {\n                                        \"weight\": 100,\n                                        \"podAffinityTerm\": {\n                                            \"labelSelector\": {\n                                                \"matchExpressions\": [\n                                                    {\n                                                        \"key\": \"k8s-app\",\n                                                        \"operator\": \"In\",\n                                                        \"values\": [\n                                                            \"kube-dns\"\n                                                        ]\n                                                    }\n                                                ]\n                                            },\n                                            \"topologyKey\": \"kubernetes.io/hostname\"\n                                        }\n                                    }\n                                ]\n                            }\n                        },\n                        \"schedulerName\": \"default-scheduler\",\n                        \"tolerations\": [\n                            {\n                                \"key\": \"CriticalAddonsOnly\",\n                                \"operator\": \"Exists\"\n                            }\n                        ],\n      
                  \"priorityClassName\": \"system-cluster-critical\"\n                    }\n                },\n                \"strategy\": {\n                    \"type\": \"RollingUpdate\",\n                    \"rollingUpdate\": {\n                        \"maxUnavailable\": 1,\n                        \"maxSurge\": \"10%\"\n                    }\n                },\n                \"revisionHistoryLimit\": 10,\n                \"progressDeadlineSeconds\": 600\n            },\n            \"status\": {\n                \"observedGeneration\": 2,\n                \"replicas\": 2,\n                \"updatedReplicas\": 2,\n                \"readyReplicas\": 2,\n                \"availableReplicas\": 2,\n                \"conditions\": [\n                    {\n                        \"type\": \"Available\",\n                        \"status\": \"True\",\n                        \"lastUpdateTime\": \"2021-09-08T02:55:43Z\",\n                        \"lastTransitionTime\": \"2021-09-08T02:55:43Z\",\n                        \"reason\": \"MinimumReplicasAvailable\",\n                        \"message\": \"Deployment has minimum availability.\"\n                    },\n                    {\n                        \"type\": \"Progressing\",\n                        \"status\": \"True\",\n                        \"lastUpdateTime\": \"2021-09-08T02:55:51Z\",\n                        \"lastTransitionTime\": \"2021-09-08T02:54:36Z\",\n                        \"reason\": \"NewReplicaSetAvailable\",\n                        \"message\": \"ReplicaSet \\\"coredns-5dc785954d\\\" has successfully progressed.\"\n                    }\n                ]\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-autoscaler\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"df822c84-b22b-4ccc-8e9f-cfc16d54bea4\",\n                \"resourceVersion\": \"659\",\n                \"generation\": 1,\n                \"creationTimestamp\": \"2021-09-08T02:54:09Z\",\n                \"labels\": {\n                    \"addon.kops.k8s.io/name\": \"coredns.addons.k8s.io\",\n                    \"app.kubernetes.io/managed-by\": \"kops\",\n                    \"k8s-addon\": \"coredns.addons.k8s.io\",\n                    \"k8s-app\": \"coredns-autoscaler\",\n                    \"kubernetes.io/cluster-service\": \"true\"\n                },\n                \"annotations\": {\n                    \"deployment.kubernetes.io/revision\": \"1\",\n                    \"kubectl.kubernetes.io/last-applied-configuration\": 
\"{\\\"apiVersion\\\":\\\"apps/v1\\\",\\\"kind\\\":\\\"Deployment\\\",\\\"metadata\\\":{\\\"annotations\\\":{},\\\"creationTimestamp\\\":null,\\\"labels\\\":{\\\"addon.kops.k8s.io/name\\\":\\\"coredns.addons.k8s.io\\\",\\\"app.kubernetes.io/managed-by\\\":\\\"kops\\\",\\\"k8s-addon\\\":\\\"coredns.addons.k8s.io\\\",\\\"k8s-app\\\":\\\"coredns-autoscaler\\\",\\\"kubernetes.io/cluster-service\\\":\\\"true\\\"},\\\"name\\\":\\\"coredns-autoscaler\\\",\\\"namespace\\\":\\\"kube-system\\\"},\\\"spec\\\":{\\\"selector\\\":{\\\"matchLabels\\\":{\\\"k8s-app\\\":\\\"coredns-autoscaler\\\"}},\\\"template\\\":{\\\"metadata\\\":{\\\"annotations\\\":{\\\"scheduler.alpha.kubernetes.io/critical-pod\\\":\\\"\\\"},\\\"labels\\\":{\\\"k8s-app\\\":\\\"coredns-autoscaler\\\"}},\\\"spec\\\":{\\\"containers\\\":[{\\\"command\\\":[\\\"/cluster-proportional-autoscaler\\\",\\\"--namespace=kube-system\\\",\\\"--configmap=coredns-autoscaler\\\",\\\"--target=Deployment/coredns\\\",\\\"--default-params={\\\\\\\"linear\\\\\\\":{\\\\\\\"coresPerReplica\\\\\\\":256,\\\\\\\"nodesPerReplica\\\\\\\":16,\\\\\\\"preventSinglePointFailure\\\\\\\":true}}\\\",\\\"--logtostderr=true\\\",\\\"--v=2\\\"],\\\"image\\\":\\\"k8s.gcr.io/cpa/cluster-proportional-autoscaler:1.8.4\\\",\\\"name\\\":\\\"autoscaler\\\",\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"10Mi\\\"}}}],\\\"nodeSelector\\\":{\\\"kubernetes.io/os\\\":\\\"linux\\\"},\\\"priorityClassName\\\":\\\"system-cluster-critical\\\",\\\"serviceAccountName\\\":\\\"coredns-autoscaler\\\",\\\"tolerations\\\":[{\\\"key\\\":\\\"CriticalAddonsOnly\\\",\\\"operator\\\":\\\"Exists\\\"}]}}}}\\n\"\n                }\n            },\n            \"spec\": {\n                \"replicas\": 1,\n                \"selector\": {\n                    \"matchLabels\": {\n                        \"k8s-app\": \"coredns-autoscaler\"\n                    }\n                },\n                \"template\": {\n                    \"metadata\": {\n                        \"creationTimestamp\": null,\n                        \"labels\": {\n                            \"k8s-app\": \"coredns-autoscaler\"\n                        },\n                        \"annotations\": {\n                            \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n                        }\n                    },\n                    \"spec\": {\n                        \"containers\": [\n                            {\n                                \"name\": \"autoscaler\",\n                                \"image\": \"k8s.gcr.io/cpa/cluster-proportional-autoscaler:1.8.4\",\n                                \"command\": [\n                                    \"/cluster-proportional-autoscaler\",\n                                    \"--namespace=kube-system\",\n                                    \"--configmap=coredns-autoscaler\",\n                                    \"--target=Deployment/coredns\",\n                                    \"--default-params={\\\"linear\\\":{\\\"coresPerReplica\\\":256,\\\"nodesPerReplica\\\":16,\\\"preventSinglePointFailure\\\":true}}\",\n                                    \"--logtostderr=true\",\n                                    \"--v=2\"\n                                ],\n                                \"resources\": {\n                                    \"requests\": {\n                                        \"cpu\": \"20m\",\n                                        \"memory\": \"10Mi\"\n                                    }\n             
                   },\n                                \"terminationMessagePath\": \"/dev/termination-log\",\n                                \"terminationMessagePolicy\": \"File\",\n                                \"imagePullPolicy\": \"IfNotPresent\"\n                            }\n                        ],\n                        \"restartPolicy\": \"Always\",\n                        \"terminationGracePeriodSeconds\": 30,\n                        \"dnsPolicy\": \"ClusterFirst\",\n                        \"nodeSelector\": {\n                            \"kubernetes.io/os\": \"linux\"\n                        },\n                        \"serviceAccountName\": \"coredns-autoscaler\",\n                        \"serviceAccount\": \"coredns-autoscaler\",\n                        \"securityContext\": {},\n                        \"schedulerName\": \"default-scheduler\",\n                        \"tolerations\": [\n                            {\n                                \"key\": \"CriticalAddonsOnly\",\n                                \"operator\": \"Exists\"\n                            }\n                        ],\n                        \"priorityClassName\": \"system-cluster-critical\"\n                    }\n                },\n                \"strategy\": {\n                    \"type\": \"RollingUpdate\",\n                    \"rollingUpdate\": {\n                        \"maxUnavailable\": \"25%\",\n                        \"maxSurge\": \"25%\"\n                    }\n                },\n                \"revisionHistoryLimit\": 10,\n                \"progressDeadlineSeconds\": 600\n            },\n            \"status\": {\n                \"observedGeneration\": 1,\n                \"replicas\": 1,\n                \"updatedReplicas\": 1,\n                \"readyReplicas\": 1,\n                \"availableReplicas\": 1,\n                \"conditions\": [\n                    {\n                        \"type\": \"Available\",\n                        \"status\": \"True\",\n                        \"lastUpdateTime\": \"2021-09-08T02:55:41Z\",\n                        \"lastTransitionTime\": \"2021-09-08T02:55:41Z\",\n                        \"reason\": \"MinimumReplicasAvailable\",\n                        \"message\": \"Deployment has minimum availability.\"\n                    },\n                    {\n                        \"type\": \"Progressing\",\n                        \"status\": \"True\",\n                        \"lastUpdateTime\": \"2021-09-08T02:55:41Z\",\n                        \"lastTransitionTime\": \"2021-09-08T02:54:36Z\",\n                        \"reason\": \"NewReplicaSetAvailable\",\n                        \"message\": \"ReplicaSet \\\"coredns-autoscaler-84d4cfd89c\\\" has successfully progressed.\"\n                    }\n                ]\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"dns-controller\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"8b31f319-f979-4445-a034-797f98bb88c5\",\n                \"resourceVersion\": \"462\",\n                \"generation\": 1,\n                \"creationTimestamp\": \"2021-09-08T02:54:10Z\",\n                \"labels\": {\n                    \"addon.kops.k8s.io/name\": \"dns-controller.addons.k8s.io\",\n                    \"app.kubernetes.io/managed-by\": \"kops\",\n                    \"k8s-addon\": \"dns-controller.addons.k8s.io\",\n                    \"k8s-app\": \"dns-controller\",\n                    \"version\": 
\"v1.22.0-beta.1\"\n                },\n                \"annotations\": {\n                    \"deployment.kubernetes.io/revision\": \"1\",\n                    \"kubectl.kubernetes.io/last-applied-configuration\": \"{\\\"apiVersion\\\":\\\"apps/v1\\\",\\\"kind\\\":\\\"Deployment\\\",\\\"metadata\\\":{\\\"annotations\\\":{},\\\"creationTimestamp\\\":null,\\\"labels\\\":{\\\"addon.kops.k8s.io/name\\\":\\\"dns-controller.addons.k8s.io\\\",\\\"app.kubernetes.io/managed-by\\\":\\\"kops\\\",\\\"k8s-addon\\\":\\\"dns-controller.addons.k8s.io\\\",\\\"k8s-app\\\":\\\"dns-controller\\\",\\\"version\\\":\\\"v1.22.0-beta.1\\\"},\\\"name\\\":\\\"dns-controller\\\",\\\"namespace\\\":\\\"kube-system\\\"},\\\"spec\\\":{\\\"replicas\\\":1,\\\"selector\\\":{\\\"matchLabels\\\":{\\\"k8s-app\\\":\\\"dns-controller\\\"}},\\\"strategy\\\":{\\\"type\\\":\\\"Recreate\\\"},\\\"template\\\":{\\\"metadata\\\":{\\\"annotations\\\":{\\\"scheduler.alpha.kubernetes.io/critical-pod\\\":\\\"\\\"},\\\"labels\\\":{\\\"k8s-addon\\\":\\\"dns-controller.addons.k8s.io\\\",\\\"k8s-app\\\":\\\"dns-controller\\\",\\\"version\\\":\\\"v1.22.0-beta.1\\\"}},\\\"spec\\\":{\\\"containers\\\":[{\\\"command\\\":[\\\"/dns-controller\\\",\\\"--watch-ingress=false\\\",\\\"--dns=aws-route53\\\",\\\"--zone=*/ZEMLNXIIWQ0RV\\\",\\\"--zone=*/*\\\",\\\"-v=2\\\"],\\\"env\\\":[{\\\"name\\\":\\\"KUBERNETES_SERVICE_HOST\\\",\\\"value\\\":\\\"127.0.0.1\\\"}],\\\"image\\\":\\\"k8s.gcr.io/kops/dns-controller:1.22.0-beta.1\\\",\\\"name\\\":\\\"dns-controller\\\",\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"50m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"securityContext\\\":{\\\"runAsNonRoot\\\":true}}],\\\"dnsPolicy\\\":\\\"Default\\\",\\\"hostNetwork\\\":true,\\\"nodeSelector\\\":{\\\"node-role.kubernetes.io/master\\\":\\\"\\\"},\\\"priorityClassName\\\":\\\"system-cluster-critical\\\",\\\"serviceAccount\\\":\\\"dns-controller\\\",\\\"tolerations\\\":[{\\\"operator\\\":\\\"Exists\\\"}]}}}}\\n\"\n                }\n            },\n            \"spec\": {\n                \"replicas\": 1,\n                \"selector\": {\n                    \"matchLabels\": {\n                        \"k8s-app\": \"dns-controller\"\n                    }\n                },\n                \"template\": {\n                    \"metadata\": {\n                        \"creationTimestamp\": null,\n                        \"labels\": {\n                            \"k8s-addon\": \"dns-controller.addons.k8s.io\",\n                            \"k8s-app\": \"dns-controller\",\n                            \"version\": \"v1.22.0-beta.1\"\n                        },\n                        \"annotations\": {\n                            \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n                        }\n                    },\n                    \"spec\": {\n                        \"containers\": [\n                            {\n                                \"name\": \"dns-controller\",\n                                \"image\": \"k8s.gcr.io/kops/dns-controller:1.22.0-beta.1\",\n                                \"command\": [\n                                    \"/dns-controller\",\n                                    \"--watch-ingress=false\",\n                                    \"--dns=aws-route53\",\n                                    \"--zone=*/ZEMLNXIIWQ0RV\",\n                                    \"--zone=*/*\",\n                                    \"-v=2\"\n                                ],\n                                \"env\": [\n           
                         {\n                                        \"name\": \"KUBERNETES_SERVICE_HOST\",\n                                        \"value\": \"127.0.0.1\"\n                                    }\n                                ],\n                                \"resources\": {\n                                    \"requests\": {\n                                        \"cpu\": \"50m\",\n                                        \"memory\": \"50Mi\"\n                                    }\n                                },\n                                \"terminationMessagePath\": \"/dev/termination-log\",\n                                \"terminationMessagePolicy\": \"File\",\n                                \"imagePullPolicy\": \"IfNotPresent\",\n                                \"securityContext\": {\n                                    \"runAsNonRoot\": true\n                                }\n                            }\n                        ],\n                        \"restartPolicy\": \"Always\",\n                        \"terminationGracePeriodSeconds\": 30,\n                        \"dnsPolicy\": \"Default\",\n                        \"nodeSelector\": {\n                            \"node-role.kubernetes.io/master\": \"\"\n                        },\n                        \"serviceAccountName\": \"dns-controller\",\n                        \"serviceAccount\": \"dns-controller\",\n                        \"hostNetwork\": true,\n                        \"securityContext\": {},\n                        \"schedulerName\": \"default-scheduler\",\n                        \"tolerations\": [\n                            {\n                                \"operator\": \"Exists\"\n                            }\n                        ],\n                        \"priorityClassName\": \"system-cluster-critical\"\n                    }\n                },\n                \"strategy\": {\n                    \"type\": \"Recreate\"\n                },\n                \"revisionHistoryLimit\": 10,\n                \"progressDeadlineSeconds\": 600\n            },\n            \"status\": {\n                \"observedGeneration\": 1,\n                \"replicas\": 1,\n                \"updatedReplicas\": 1,\n                \"readyReplicas\": 1,\n                \"availableReplicas\": 1,\n                \"conditions\": [\n                    {\n                        \"type\": \"Available\",\n                        \"status\": \"True\",\n                        \"lastUpdateTime\": \"2021-09-08T02:54:38Z\",\n                        \"lastTransitionTime\": \"2021-09-08T02:54:38Z\",\n                        \"reason\": \"MinimumReplicasAvailable\",\n                        \"message\": \"Deployment has minimum availability.\"\n                    },\n                    {\n                        \"type\": \"Progressing\",\n                        \"status\": \"True\",\n                        \"lastUpdateTime\": \"2021-09-08T02:54:38Z\",\n                        \"lastTransitionTime\": \"2021-09-08T02:54:36Z\",\n                        \"reason\": \"NewReplicaSetAvailable\",\n                        \"message\": \"ReplicaSet \\\"dns-controller-59b7d7865d\\\" has successfully progressed.\"\n                    }\n                ]\n            }\n        }\n    ]\n}\n{\n    \"kind\": \"ReplicaSetList\",\n    \"apiVersion\": \"apps/v1\",\n    \"metadata\": {\n        \"resourceVersion\": \"2140\"\n    },\n    \"items\": [\n        {\n            \"metadata\": {\n     
           \"name\": \"coredns-5dc785954d\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"6a9b7235-29b4-4a6d-93a3-8086486ac4f9\",\n                \"resourceVersion\": \"715\",\n                \"generation\": 2,\n                \"creationTimestamp\": \"2021-09-08T02:54:36Z\",\n                \"labels\": {\n                    \"k8s-app\": \"kube-dns\",\n                    \"pod-template-hash\": \"5dc785954d\"\n                },\n                \"annotations\": {\n                    \"deployment.kubernetes.io/desired-replicas\": \"2\",\n                    \"deployment.kubernetes.io/max-replicas\": \"3\",\n                    \"deployment.kubernetes.io/revision\": \"1\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"apps/v1\",\n                        \"kind\": \"Deployment\",\n                        \"name\": \"coredns\",\n                        \"uid\": \"c3cd11cc-a8a6-4a16-a4b9-50b3195b644e\",\n                        \"controller\": true,\n                        \"blockOwnerDeletion\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"replicas\": 2,\n                \"selector\": {\n                    \"matchLabels\": {\n                        \"k8s-app\": \"kube-dns\",\n                        \"pod-template-hash\": \"5dc785954d\"\n                    }\n                },\n                \"template\": {\n                    \"metadata\": {\n                        \"creationTimestamp\": null,\n                        \"labels\": {\n                            \"k8s-app\": \"kube-dns\",\n                            \"pod-template-hash\": \"5dc785954d\"\n                        }\n                    },\n                    \"spec\": {\n                        \"volumes\": [\n                            {\n                                \"name\": \"config-volume\",\n                                \"configMap\": {\n                                    \"name\": \"coredns\",\n                                    \"items\": [\n                                        {\n                                            \"key\": \"Corefile\",\n                                            \"path\": \"Corefile\"\n                                        }\n                                    ],\n                                    \"defaultMode\": 420\n                                }\n                            }\n                        ],\n                        \"containers\": [\n                            {\n                                \"name\": \"coredns\",\n                                \"image\": \"k8s.gcr.io/coredns/coredns:v1.8.4\",\n                                \"args\": [\n                                    \"-conf\",\n                                    \"/etc/coredns/Corefile\"\n                                ],\n                                \"ports\": [\n                                    {\n                                        \"name\": \"dns\",\n                                        \"containerPort\": 53,\n                                        \"protocol\": \"UDP\"\n                                    },\n                                    {\n                                        \"name\": \"dns-tcp\",\n                                        \"containerPort\": 53,\n                                        \"protocol\": \"TCP\"\n                                    },\n            
                        {\n                                        \"name\": \"metrics\",\n                                        \"containerPort\": 9153,\n                                        \"protocol\": \"TCP\"\n                                    }\n                                ],\n                                \"resources\": {\n                                    \"limits\": {\n                                        \"memory\": \"170Mi\"\n                                    },\n                                    \"requests\": {\n                                        \"cpu\": \"100m\",\n                                        \"memory\": \"70Mi\"\n                                    }\n                                },\n                                \"volumeMounts\": [\n                                    {\n                                        \"name\": \"config-volume\",\n                                        \"readOnly\": true,\n                                        \"mountPath\": \"/etc/coredns\"\n                                    }\n                                ],\n                                \"livenessProbe\": {\n                                    \"httpGet\": {\n                                        \"path\": \"/health\",\n                                        \"port\": 8080,\n                                        \"scheme\": \"HTTP\"\n                                    },\n                                    \"initialDelaySeconds\": 60,\n                                    \"timeoutSeconds\": 5,\n                                    \"periodSeconds\": 10,\n                                    \"successThreshold\": 1,\n                                    \"failureThreshold\": 5\n                                },\n                                \"readinessProbe\": {\n                                    \"httpGet\": {\n                                        \"path\": \"/ready\",\n                                        \"port\": 8181,\n                                        \"scheme\": \"HTTP\"\n                                    },\n                                    \"timeoutSeconds\": 1,\n                                    \"periodSeconds\": 10,\n                                    \"successThreshold\": 1,\n                                    \"failureThreshold\": 3\n                                },\n                                \"terminationMessagePath\": \"/dev/termination-log\",\n                                \"terminationMessagePolicy\": \"File\",\n                                \"imagePullPolicy\": \"IfNotPresent\",\n                                \"securityContext\": {\n                                    \"capabilities\": {\n                                        \"add\": [\n                                            \"NET_BIND_SERVICE\"\n                                        ],\n                                        \"drop\": [\n                                            \"all\"\n                                        ]\n                                    },\n                                    \"readOnlyRootFilesystem\": true,\n                                    \"allowPrivilegeEscalation\": false\n                                }\n                            }\n                        ],\n                        \"restartPolicy\": \"Always\",\n                        \"terminationGracePeriodSeconds\": 30,\n                        \"dnsPolicy\": \"Default\",\n                        \"nodeSelector\": {\n 
                           \"kubernetes.io/os\": \"linux\"\n                        },\n                        \"serviceAccountName\": \"coredns\",\n                        \"serviceAccount\": \"coredns\",\n                        \"securityContext\": {},\n                        \"affinity\": {\n                            \"podAntiAffinity\": {\n                                \"preferredDuringSchedulingIgnoredDuringExecution\": [\n                                    {\n                                        \"weight\": 100,\n                                        \"podAffinityTerm\": {\n                                            \"labelSelector\": {\n                                                \"matchExpressions\": [\n                                                    {\n                                                        \"key\": \"k8s-app\",\n                                                        \"operator\": \"In\",\n                                                        \"values\": [\n                                                            \"kube-dns\"\n                                                        ]\n                                                    }\n                                                ]\n                                            },\n                                            \"topologyKey\": \"kubernetes.io/hostname\"\n                                        }\n                                    }\n                                ]\n                            }\n                        },\n                        \"schedulerName\": \"default-scheduler\",\n                        \"tolerations\": [\n                            {\n                                \"key\": \"CriticalAddonsOnly\",\n                                \"operator\": \"Exists\"\n                            }\n                        ],\n                        \"priorityClassName\": \"system-cluster-critical\"\n                    }\n                }\n            },\n            \"status\": {\n                \"replicas\": 2,\n                \"fullyLabeledReplicas\": 2,\n                \"readyReplicas\": 2,\n                \"availableReplicas\": 2,\n                \"observedGeneration\": 2\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-autoscaler-84d4cfd89c\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"f25f8884-f9b3-4be0-afc9-4bf4df481ec2\",\n                \"resourceVersion\": \"658\",\n                \"generation\": 1,\n                \"creationTimestamp\": \"2021-09-08T02:54:36Z\",\n                \"labels\": {\n                    \"k8s-app\": \"coredns-autoscaler\",\n                    \"pod-template-hash\": \"84d4cfd89c\"\n                },\n                \"annotations\": {\n                    \"deployment.kubernetes.io/desired-replicas\": \"1\",\n                    \"deployment.kubernetes.io/max-replicas\": \"2\",\n                    \"deployment.kubernetes.io/revision\": \"1\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"apps/v1\",\n                        \"kind\": \"Deployment\",\n                        \"name\": \"coredns-autoscaler\",\n                        \"uid\": \"df822c84-b22b-4ccc-8e9f-cfc16d54bea4\",\n                        \"controller\": true,\n                        \"blockOwnerDeletion\": true\n                    }\n                ]\n            
},\n            \"spec\": {\n                \"replicas\": 1,\n                \"selector\": {\n                    \"matchLabels\": {\n                        \"k8s-app\": \"coredns-autoscaler\",\n                        \"pod-template-hash\": \"84d4cfd89c\"\n                    }\n                },\n                \"template\": {\n                    \"metadata\": {\n                        \"creationTimestamp\": null,\n                        \"labels\": {\n                            \"k8s-app\": \"coredns-autoscaler\",\n                            \"pod-template-hash\": \"84d4cfd89c\"\n                        },\n                        \"annotations\": {\n                            \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n                        }\n                    },\n                    \"spec\": {\n                        \"containers\": [\n                            {\n                                \"name\": \"autoscaler\",\n                                \"image\": \"k8s.gcr.io/cpa/cluster-proportional-autoscaler:1.8.4\",\n                                \"command\": [\n                                    \"/cluster-proportional-autoscaler\",\n                                    \"--namespace=kube-system\",\n                                    \"--configmap=coredns-autoscaler\",\n                                    \"--target=Deployment/coredns\",\n                                    \"--default-params={\\\"linear\\\":{\\\"coresPerReplica\\\":256,\\\"nodesPerReplica\\\":16,\\\"preventSinglePointFailure\\\":true}}\",\n                                    \"--logtostderr=true\",\n                                    \"--v=2\"\n                                ],\n                                \"resources\": {\n                                    \"requests\": {\n                                        \"cpu\": \"20m\",\n                                        \"memory\": \"10Mi\"\n                                    }\n                                },\n                                \"terminationMessagePath\": \"/dev/termination-log\",\n                                \"terminationMessagePolicy\": \"File\",\n                                \"imagePullPolicy\": \"IfNotPresent\"\n                            }\n                        ],\n                        \"restartPolicy\": \"Always\",\n                        \"terminationGracePeriodSeconds\": 30,\n                        \"dnsPolicy\": \"ClusterFirst\",\n                        \"nodeSelector\": {\n                            \"kubernetes.io/os\": \"linux\"\n                        },\n                        \"serviceAccountName\": \"coredns-autoscaler\",\n                        \"serviceAccount\": \"coredns-autoscaler\",\n                        \"securityContext\": {},\n                        \"schedulerName\": \"default-scheduler\",\n                        \"tolerations\": [\n                            {\n                                \"key\": \"CriticalAddonsOnly\",\n                                \"operator\": \"Exists\"\n                            }\n                        ],\n                        \"priorityClassName\": \"system-cluster-critical\"\n                    }\n                }\n            },\n            \"status\": {\n                \"replicas\": 1,\n                \"fullyLabeledReplicas\": 1,\n                \"readyReplicas\": 1,\n                \"availableReplicas\": 1,\n                \"observedGeneration\": 1\n            }\n        },\n        {\n            
\"metadata\": {\n                \"name\": \"dns-controller-59b7d7865d\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"589127aa-e7a1-4d9a-ae8b-4a4d533c9fe6\",\n                \"resourceVersion\": \"458\",\n                \"generation\": 1,\n                \"creationTimestamp\": \"2021-09-08T02:54:36Z\",\n                \"labels\": {\n                    \"k8s-addon\": \"dns-controller.addons.k8s.io\",\n                    \"k8s-app\": \"dns-controller\",\n                    \"pod-template-hash\": \"59b7d7865d\",\n                    \"version\": \"v1.22.0-beta.1\"\n                },\n                \"annotations\": {\n                    \"deployment.kubernetes.io/desired-replicas\": \"1\",\n                    \"deployment.kubernetes.io/max-replicas\": \"1\",\n                    \"deployment.kubernetes.io/revision\": \"1\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"apps/v1\",\n                        \"kind\": \"Deployment\",\n                        \"name\": \"dns-controller\",\n                        \"uid\": \"8b31f319-f979-4445-a034-797f98bb88c5\",\n                        \"controller\": true,\n                        \"blockOwnerDeletion\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"replicas\": 1,\n                \"selector\": {\n                    \"matchLabels\": {\n                        \"k8s-app\": \"dns-controller\",\n                        \"pod-template-hash\": \"59b7d7865d\"\n                    }\n                },\n                \"template\": {\n                    \"metadata\": {\n                        \"creationTimestamp\": null,\n                        \"labels\": {\n                            \"k8s-addon\": \"dns-controller.addons.k8s.io\",\n                            \"k8s-app\": \"dns-controller\",\n                            \"pod-template-hash\": \"59b7d7865d\",\n                            \"version\": \"v1.22.0-beta.1\"\n                        },\n                        \"annotations\": {\n                            \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n                        }\n                    },\n                    \"spec\": {\n                        \"containers\": [\n                            {\n                                \"name\": \"dns-controller\",\n                                \"image\": \"k8s.gcr.io/kops/dns-controller:1.22.0-beta.1\",\n                                \"command\": [\n                                    \"/dns-controller\",\n                                    \"--watch-ingress=false\",\n                                    \"--dns=aws-route53\",\n                                    \"--zone=*/ZEMLNXIIWQ0RV\",\n                                    \"--zone=*/*\",\n                                    \"-v=2\"\n                                ],\n                                \"env\": [\n                                    {\n                                        \"name\": \"KUBERNETES_SERVICE_HOST\",\n                                        \"value\": \"127.0.0.1\"\n                                    }\n                                ],\n                                \"resources\": {\n                                    \"requests\": {\n                                        \"cpu\": \"50m\",\n                                        \"memory\": \"50Mi\"\n                                    }\n      
                          },\n                                \"terminationMessagePath\": \"/dev/termination-log\",\n                                \"terminationMessagePolicy\": \"File\",\n                                \"imagePullPolicy\": \"IfNotPresent\",\n                                \"securityContext\": {\n                                    \"runAsNonRoot\": true\n                                }\n                            }\n                        ],\n                        \"restartPolicy\": \"Always\",\n                        \"terminationGracePeriodSeconds\": 30,\n                        \"dnsPolicy\": \"Default\",\n                        \"nodeSelector\": {\n                            \"node-role.kubernetes.io/master\": \"\"\n                        },\n                        \"serviceAccountName\": \"dns-controller\",\n                        \"serviceAccount\": \"dns-controller\",\n                        \"hostNetwork\": true,\n                        \"securityContext\": {},\n                        \"schedulerName\": \"default-scheduler\",\n                        \"tolerations\": [\n                            {\n                                \"operator\": \"Exists\"\n                            }\n                        ],\n                        \"priorityClassName\": \"system-cluster-critical\"\n                    }\n                }\n            },\n            \"status\": {\n                \"replicas\": 1,\n                \"fullyLabeledReplicas\": 1,\n                \"readyReplicas\": 1,\n                \"availableReplicas\": 1,\n                \"observedGeneration\": 1\n            }\n        }\n    ]\n}\n{\n    \"kind\": \"PodList\",\n    \"apiVersion\": \"v1\",\n    \"metadata\": {\n        \"resourceVersion\": \"2151\"\n    },\n    \"items\": [\n        {\n            \"metadata\": {\n                \"name\": \"coredns-5dc785954d-vw7tg\",\n                \"generateName\": \"coredns-5dc785954d-\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"8a2d0ab5-240f-40ef-a9ce-ff6e5be41b36\",\n                \"resourceVersion\": \"669\",\n                \"creationTimestamp\": \"2021-09-08T02:54:36Z\",\n                \"labels\": {\n                    \"k8s-app\": \"kube-dns\",\n                    \"pod-template-hash\": \"5dc785954d\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"apps/v1\",\n                        \"kind\": \"ReplicaSet\",\n                        \"name\": \"coredns-5dc785954d\",\n                        \"uid\": \"6a9b7235-29b4-4a6d-93a3-8086486ac4f9\",\n                        \"controller\": true,\n                        \"blockOwnerDeletion\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"config-volume\",\n                        \"configMap\": {\n                            \"name\": \"coredns\",\n                            \"items\": [\n                                {\n                                    \"key\": \"Corefile\",\n                                    \"path\": \"Corefile\"\n                                }\n                            ],\n                            \"defaultMode\": 420\n                        }\n                    },\n                    {\n                        \"name\": \"kube-api-access-9z6fq\",\n                        \"projected\": {\n 
                           \"sources\": [\n                                {\n                                    \"serviceAccountToken\": {\n                                        \"expirationSeconds\": 3607,\n                                        \"path\": \"token\"\n                                    }\n                                },\n                                {\n                                    \"configMap\": {\n                                        \"name\": \"kube-root-ca.crt\",\n                                        \"items\": [\n                                            {\n                                                \"key\": \"ca.crt\",\n                                                \"path\": \"ca.crt\"\n                                            }\n                                        ]\n                                    }\n                                },\n                                {\n                                    \"downwardAPI\": {\n                                        \"items\": [\n                                            {\n                                                \"path\": \"namespace\",\n                                                \"fieldRef\": {\n                                                    \"apiVersion\": \"v1\",\n                                                    \"fieldPath\": \"metadata.namespace\"\n                                                }\n                                            }\n                                        ]\n                                    }\n                                }\n                            ],\n                            \"defaultMode\": 420\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"coredns\",\n                        \"image\": \"k8s.gcr.io/coredns/coredns:v1.8.4\",\n                        \"args\": [\n                            \"-conf\",\n                            \"/etc/coredns/Corefile\"\n                        ],\n                        \"ports\": [\n                            {\n                                \"name\": \"dns\",\n                                \"containerPort\": 53,\n                                \"protocol\": \"UDP\"\n                            },\n                            {\n                                \"name\": \"dns-tcp\",\n                                \"containerPort\": 53,\n                                \"protocol\": \"TCP\"\n                            },\n                            {\n                                \"name\": \"metrics\",\n                                \"containerPort\": 9153,\n                                \"protocol\": \"TCP\"\n                            }\n                        ],\n                        \"resources\": {\n                            \"limits\": {\n                                \"memory\": \"170Mi\"\n                            },\n                            \"requests\": {\n                                \"cpu\": \"100m\",\n                                \"memory\": \"70Mi\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"config-volume\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/etc/coredns\"\n                            },\n         
                   {\n                                \"name\": \"kube-api-access-9z6fq\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n                            }\n                        ],\n                        \"livenessProbe\": {\n                            \"httpGet\": {\n                                \"path\": \"/health\",\n                                \"port\": 8080,\n                                \"scheme\": \"HTTP\"\n                            },\n                            \"initialDelaySeconds\": 60,\n                            \"timeoutSeconds\": 5,\n                            \"periodSeconds\": 10,\n                            \"successThreshold\": 1,\n                            \"failureThreshold\": 5\n                        },\n                        \"readinessProbe\": {\n                            \"httpGet\": {\n                                \"path\": \"/ready\",\n                                \"port\": 8181,\n                                \"scheme\": \"HTTP\"\n                            },\n                            \"timeoutSeconds\": 1,\n                            \"periodSeconds\": 10,\n                            \"successThreshold\": 1,\n                            \"failureThreshold\": 3\n                        },\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\",\n                        \"securityContext\": {\n                            \"capabilities\": {\n                                \"add\": [\n                                    \"NET_BIND_SERVICE\"\n                                ],\n                                \"drop\": [\n                                    \"all\"\n                                ]\n                            },\n                            \"readOnlyRootFilesystem\": true,\n                            \"allowPrivilegeEscalation\": false\n                        }\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"Default\",\n                \"nodeSelector\": {\n                    \"kubernetes.io/os\": \"linux\"\n                },\n                \"serviceAccountName\": \"coredns\",\n                \"serviceAccount\": \"coredns\",\n                \"nodeName\": \"ip-172-20-36-200.eu-west-3.compute.internal\",\n                \"securityContext\": {},\n                \"affinity\": {\n                    \"podAntiAffinity\": {\n                        \"preferredDuringSchedulingIgnoredDuringExecution\": [\n                            {\n                                \"weight\": 100,\n                                \"podAffinityTerm\": {\n                                    \"labelSelector\": {\n                                        \"matchExpressions\": [\n                                            {\n                                                \"key\": \"k8s-app\",\n                                                \"operator\": \"In\",\n                                                \"values\": [\n                                                    \"kube-dns\"\n                                                ]\n                                            }\n                                       
 ]\n                                    },\n                                    \"topologyKey\": \"kubernetes.io/hostname\"\n                                }\n                            }\n                        ]\n                    }\n                },\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"key\": \"CriticalAddonsOnly\",\n                        \"operator\": \"Exists\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/not-ready\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\",\n                        \"tolerationSeconds\": 300\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/unreachable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\",\n                        \"tolerationSeconds\": 300\n                    }\n                ],\n                \"priorityClassName\": \"system-cluster-critical\",\n                \"priority\": 2000000000,\n                \"enableServiceLinks\": true,\n                \"preemptionPolicy\": \"PreemptLowerPriority\"\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-09-08T02:55:39Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-09-08T02:55:43Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-09-08T02:55:43Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-09-08T02:55:39Z\"\n                    }\n                ],\n                \"hostIP\": \"172.20.36.200\",\n                \"podIP\": \"100.96.1.3\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"100.96.1.3\"\n                    }\n                ],\n                \"startTime\": \"2021-09-08T02:55:39Z\",\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"coredns\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2021-09-08T02:55:42Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"k8s.gcr.io/coredns/coredns:v1.8.4\",\n                        \"imageID\": \"k8s.gcr.io/coredns/coredns@sha256:6e5a02c21641597998b4be7cb5eb1e7b02c0d8d23cce4dd09f4682d463798890\",\n                        \"containerID\": 
\"containerd://427caa50b2b006c312950b3ee4ef0db6af15a160787d439161da590b27dc1b30\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Burstable\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-5dc785954d-zmnzn\",\n                \"generateName\": \"coredns-5dc785954d-\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"41a71a54-2d28-4885-a955-f01dd5a60fbb\",\n                \"resourceVersion\": \"711\",\n                \"creationTimestamp\": \"2021-09-08T02:55:41Z\",\n                \"labels\": {\n                    \"k8s-app\": \"kube-dns\",\n                    \"pod-template-hash\": \"5dc785954d\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"apps/v1\",\n                        \"kind\": \"ReplicaSet\",\n                        \"name\": \"coredns-5dc785954d\",\n                        \"uid\": \"6a9b7235-29b4-4a6d-93a3-8086486ac4f9\",\n                        \"controller\": true,\n                        \"blockOwnerDeletion\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"config-volume\",\n                        \"configMap\": {\n                            \"name\": \"coredns\",\n                            \"items\": [\n                                {\n                                    \"key\": \"Corefile\",\n                                    \"path\": \"Corefile\"\n                                }\n                            ],\n                            \"defaultMode\": 420\n                        }\n                    },\n                    {\n                        \"name\": \"kube-api-access-bvknv\",\n                        \"projected\": {\n                            \"sources\": [\n                                {\n                                    \"serviceAccountToken\": {\n                                        \"expirationSeconds\": 3607,\n                                        \"path\": \"token\"\n                                    }\n                                },\n                                {\n                                    \"configMap\": {\n                                        \"name\": \"kube-root-ca.crt\",\n                                        \"items\": [\n                                            {\n                                                \"key\": \"ca.crt\",\n                                                \"path\": \"ca.crt\"\n                                            }\n                                        ]\n                                    }\n                                },\n                                {\n                                    \"downwardAPI\": {\n                                        \"items\": [\n                                            {\n                                                \"path\": \"namespace\",\n                                                \"fieldRef\": {\n                                                    \"apiVersion\": \"v1\",\n                                                    \"fieldPath\": \"metadata.namespace\"\n                                                }\n                                            }\n                                        ]\n                             
       }\n                                }\n                            ],\n                            \"defaultMode\": 420\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"coredns\",\n                        \"image\": \"k8s.gcr.io/coredns/coredns:v1.8.4\",\n                        \"args\": [\n                            \"-conf\",\n                            \"/etc/coredns/Corefile\"\n                        ],\n                        \"ports\": [\n                            {\n                                \"name\": \"dns\",\n                                \"containerPort\": 53,\n                                \"protocol\": \"UDP\"\n                            },\n                            {\n                                \"name\": \"dns-tcp\",\n                                \"containerPort\": 53,\n                                \"protocol\": \"TCP\"\n                            },\n                            {\n                                \"name\": \"metrics\",\n                                \"containerPort\": 9153,\n                                \"protocol\": \"TCP\"\n                            }\n                        ],\n                        \"resources\": {\n                            \"limits\": {\n                                \"memory\": \"170Mi\"\n                            },\n                            \"requests\": {\n                                \"cpu\": \"100m\",\n                                \"memory\": \"70Mi\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"config-volume\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/etc/coredns\"\n                            },\n                            {\n                                \"name\": \"kube-api-access-bvknv\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n                            }\n                        ],\n                        \"livenessProbe\": {\n                            \"httpGet\": {\n                                \"path\": \"/health\",\n                                \"port\": 8080,\n                                \"scheme\": \"HTTP\"\n                            },\n                            \"initialDelaySeconds\": 60,\n                            \"timeoutSeconds\": 5,\n                            \"periodSeconds\": 10,\n                            \"successThreshold\": 1,\n                            \"failureThreshold\": 5\n                        },\n                        \"readinessProbe\": {\n                            \"httpGet\": {\n                                \"path\": \"/ready\",\n                                \"port\": 8181,\n                                \"scheme\": \"HTTP\"\n                            },\n                            \"timeoutSeconds\": 1,\n                            \"periodSeconds\": 10,\n                            \"successThreshold\": 1,\n                            \"failureThreshold\": 3\n                        },\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        
\"imagePullPolicy\": \"IfNotPresent\",\n                        \"securityContext\": {\n                            \"capabilities\": {\n                                \"add\": [\n                                    \"NET_BIND_SERVICE\"\n                                ],\n                                \"drop\": [\n                                    \"all\"\n                                ]\n                            },\n                            \"readOnlyRootFilesystem\": true,\n                            \"allowPrivilegeEscalation\": false\n                        }\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"Default\",\n                \"nodeSelector\": {\n                    \"kubernetes.io/os\": \"linux\"\n                },\n                \"serviceAccountName\": \"coredns\",\n                \"serviceAccount\": \"coredns\",\n                \"nodeName\": \"ip-172-20-51-126.eu-west-3.compute.internal\",\n                \"securityContext\": {},\n                \"affinity\": {\n                    \"podAntiAffinity\": {\n                        \"preferredDuringSchedulingIgnoredDuringExecution\": [\n                            {\n                                \"weight\": 100,\n                                \"podAffinityTerm\": {\n                                    \"labelSelector\": {\n                                        \"matchExpressions\": [\n                                            {\n                                                \"key\": \"k8s-app\",\n                                                \"operator\": \"In\",\n                                                \"values\": [\n                                                    \"kube-dns\"\n                                                ]\n                                            }\n                                        ]\n                                    },\n                                    \"topologyKey\": \"kubernetes.io/hostname\"\n                                }\n                            }\n                        ]\n                    }\n                },\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"key\": \"CriticalAddonsOnly\",\n                        \"operator\": \"Exists\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/not-ready\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\",\n                        \"tolerationSeconds\": 300\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/unreachable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\",\n                        \"tolerationSeconds\": 300\n                    }\n                ],\n                \"priorityClassName\": \"system-cluster-critical\",\n                \"priority\": 2000000000,\n                \"enableServiceLinks\": true,\n                \"preemptionPolicy\": \"PreemptLowerPriority\"\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        
\"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-09-08T02:55:41Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-09-08T02:55:51Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-09-08T02:55:51Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-09-08T02:55:41Z\"\n                    }\n                ],\n                \"hostIP\": \"172.20.51.126\",\n                \"podIP\": \"100.96.2.2\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"100.96.2.2\"\n                    }\n                ],\n                \"startTime\": \"2021-09-08T02:55:41Z\",\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"coredns\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2021-09-08T02:55:44Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"k8s.gcr.io/coredns/coredns:v1.8.4\",\n                        \"imageID\": \"k8s.gcr.io/coredns/coredns@sha256:6e5a02c21641597998b4be7cb5eb1e7b02c0d8d23cce4dd09f4682d463798890\",\n                        \"containerID\": \"containerd://3549c1562e049d7b2c3cc7f52805b00b13947faa80e9f6c858b7d948b0094f4b\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Burstable\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-autoscaler-84d4cfd89c-p9hhk\",\n                \"generateName\": \"coredns-autoscaler-84d4cfd89c-\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"d2adafee-40d4-4fbb-9c44-e63fc005842d\",\n                \"resourceVersion\": \"657\",\n                \"creationTimestamp\": \"2021-09-08T02:54:36Z\",\n                \"labels\": {\n                    \"k8s-app\": \"coredns-autoscaler\",\n                    \"pod-template-hash\": \"84d4cfd89c\"\n                },\n                \"annotations\": {\n                    \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"apps/v1\",\n                        \"kind\": \"ReplicaSet\",\n                        \"name\": \"coredns-autoscaler-84d4cfd89c\",\n                        \"uid\": \"f25f8884-f9b3-4be0-afc9-4bf4df481ec2\",\n                        \"controller\": true,\n                        \"blockOwnerDeletion\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"kube-api-access-rfc5r\",\n                        
\"projected\": {\n                            \"sources\": [\n                                {\n                                    \"serviceAccountToken\": {\n                                        \"expirationSeconds\": 3607,\n                                        \"path\": \"token\"\n                                    }\n                                },\n                                {\n                                    \"configMap\": {\n                                        \"name\": \"kube-root-ca.crt\",\n                                        \"items\": [\n                                            {\n                                                \"key\": \"ca.crt\",\n                                                \"path\": \"ca.crt\"\n                                            }\n                                        ]\n                                    }\n                                },\n                                {\n                                    \"downwardAPI\": {\n                                        \"items\": [\n                                            {\n                                                \"path\": \"namespace\",\n                                                \"fieldRef\": {\n                                                    \"apiVersion\": \"v1\",\n                                                    \"fieldPath\": \"metadata.namespace\"\n                                                }\n                                            }\n                                        ]\n                                    }\n                                }\n                            ],\n                            \"defaultMode\": 420\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"autoscaler\",\n                        \"image\": \"k8s.gcr.io/cpa/cluster-proportional-autoscaler:1.8.4\",\n                        \"command\": [\n                            \"/cluster-proportional-autoscaler\",\n                            \"--namespace=kube-system\",\n                            \"--configmap=coredns-autoscaler\",\n                            \"--target=Deployment/coredns\",\n                            \"--default-params={\\\"linear\\\":{\\\"coresPerReplica\\\":256,\\\"nodesPerReplica\\\":16,\\\"preventSinglePointFailure\\\":true}}\",\n                            \"--logtostderr=true\",\n                            \"--v=2\"\n                        ],\n                        \"resources\": {\n                            \"requests\": {\n                                \"cpu\": \"20m\",\n                                \"memory\": \"10Mi\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"kube-api-access-rfc5r\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n                            }\n                        ],\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\"\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n              
  \"dnsPolicy\": \"ClusterFirst\",\n                \"nodeSelector\": {\n                    \"kubernetes.io/os\": \"linux\"\n                },\n                \"serviceAccountName\": \"coredns-autoscaler\",\n                \"serviceAccount\": \"coredns-autoscaler\",\n                \"nodeName\": \"ip-172-20-36-200.eu-west-3.compute.internal\",\n                \"securityContext\": {},\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"key\": \"CriticalAddonsOnly\",\n                        \"operator\": \"Exists\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/not-ready\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\",\n                        \"tolerationSeconds\": 300\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/unreachable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\",\n                        \"tolerationSeconds\": 300\n                    }\n                ],\n                \"priorityClassName\": \"system-cluster-critical\",\n                \"priority\": 2000000000,\n                \"enableServiceLinks\": true,\n                \"preemptionPolicy\": \"PreemptLowerPriority\"\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-09-08T02:55:39Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-09-08T02:55:41Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-09-08T02:55:41Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-09-08T02:55:39Z\"\n                    }\n                ],\n                \"hostIP\": \"172.20.36.200\",\n                \"podIP\": \"100.96.1.2\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"100.96.1.2\"\n                    }\n                ],\n                \"startTime\": \"2021-09-08T02:55:39Z\",\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"autoscaler\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2021-09-08T02:55:41Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"k8s.gcr.io/cpa/cluster-proportional-autoscaler:1.8.4\",\n                        \"imageID\": 
\"k8s.gcr.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def\",\n                        \"containerID\": \"containerd://e8ed0c6f4cf3df05b1fb98c1a0dd8f9a418a240fb374acbc6aa07b846ce430b2\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Burstable\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"dns-controller-59b7d7865d-jp8tb\",\n                \"generateName\": \"dns-controller-59b7d7865d-\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"3edd1abb-4144-4a63-9180-c7422252d68b\",\n                \"resourceVersion\": \"457\",\n                \"creationTimestamp\": \"2021-09-08T02:54:36Z\",\n                \"labels\": {\n                    \"k8s-addon\": \"dns-controller.addons.k8s.io\",\n                    \"k8s-app\": \"dns-controller\",\n                    \"pod-template-hash\": \"59b7d7865d\",\n                    \"version\": \"v1.22.0-beta.1\"\n                },\n                \"annotations\": {\n                    \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"apps/v1\",\n                        \"kind\": \"ReplicaSet\",\n                        \"name\": \"dns-controller-59b7d7865d\",\n                        \"uid\": \"589127aa-e7a1-4d9a-ae8b-4a4d533c9fe6\",\n                        \"controller\": true,\n                        \"blockOwnerDeletion\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"kube-api-access-cgbkn\",\n                        \"projected\": {\n                            \"sources\": [\n                                {\n                                    \"serviceAccountToken\": {\n                                        \"expirationSeconds\": 3607,\n                                        \"path\": \"token\"\n                                    }\n                                },\n                                {\n                                    \"configMap\": {\n                                        \"name\": \"kube-root-ca.crt\",\n                                        \"items\": [\n                                            {\n                                                \"key\": \"ca.crt\",\n                                                \"path\": \"ca.crt\"\n                                            }\n                                        ]\n                                    }\n                                },\n                                {\n                                    \"downwardAPI\": {\n                                        \"items\": [\n                                            {\n                                                \"path\": \"namespace\",\n                                                \"fieldRef\": {\n                                                    \"apiVersion\": \"v1\",\n                                                    \"fieldPath\": \"metadata.namespace\"\n                                                }\n                                            }\n                                        ]\n                                    }\n                                }\n                            ],\n            
                \"defaultMode\": 420\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"dns-controller\",\n                        \"image\": \"k8s.gcr.io/kops/dns-controller:1.22.0-beta.1\",\n                        \"command\": [\n                            \"/dns-controller\",\n                            \"--watch-ingress=false\",\n                            \"--dns=aws-route53\",\n                            \"--zone=*/ZEMLNXIIWQ0RV\",\n                            \"--zone=*/*\",\n                            \"-v=2\"\n                        ],\n                        \"env\": [\n                            {\n                                \"name\": \"KUBERNETES_SERVICE_HOST\",\n                                \"value\": \"127.0.0.1\"\n                            }\n                        ],\n                        \"resources\": {\n                            \"requests\": {\n                                \"cpu\": \"50m\",\n                                \"memory\": \"50Mi\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"kube-api-access-cgbkn\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n                            }\n                        ],\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\",\n                        \"securityContext\": {\n                            \"runAsNonRoot\": true\n                        }\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"Default\",\n                \"nodeSelector\": {\n                    \"node-role.kubernetes.io/master\": \"\"\n                },\n                \"serviceAccountName\": \"dns-controller\",\n                \"serviceAccount\": \"dns-controller\",\n                \"nodeName\": \"ip-172-20-56-43.eu-west-3.compute.internal\",\n                \"hostNetwork\": true,\n                \"securityContext\": {},\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"operator\": \"Exists\"\n                    }\n                ],\n                \"priorityClassName\": \"system-cluster-critical\",\n                \"priority\": 2000000000,\n                \"enableServiceLinks\": true,\n                \"preemptionPolicy\": \"PreemptLowerPriority\"\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-09-08T02:54:36Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-09-08T02:54:38Z\"\n                    },\n       
             {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-09-08T02:54:38Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-09-08T02:54:36Z\"\n                    }\n                ],\n                \"hostIP\": \"172.20.56.43\",\n                \"podIP\": \"172.20.56.43\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"172.20.56.43\"\n                    }\n                ],\n                \"startTime\": \"2021-09-08T02:54:36Z\",\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"dns-controller\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2021-09-08T02:54:37Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"k8s.gcr.io/kops/dns-controller:1.22.0-beta.1\",\n                        \"imageID\": \"sha256:2babc7e3f10c2c20ad7fd8cc592d6de686b248a7801de5192198db8ca008ec60\",\n                        \"containerID\": \"containerd://490a12a6e789468c2ef5c1967d6ab5fda28786efe5ecbd60dedc3fc8c209a298\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Burstable\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"etcd-manager-events-ip-172-20-56-43.eu-west-3.compute.internal\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"96105768-908e-43cf-8ef7-24f5d3b4ef0e\",\n                \"resourceVersion\": \"549\",\n                \"creationTimestamp\": \"2021-09-08T02:55:06Z\",\n                \"labels\": {\n                    \"k8s-app\": \"etcd-manager-events\"\n                },\n                \"annotations\": {\n                    \"kubernetes.io/config.hash\": \"e7230eaa1c7c77446fb6a703f28db2e7\",\n                    \"kubernetes.io/config.mirror\": \"e7230eaa1c7c77446fb6a703f28db2e7\",\n                    \"kubernetes.io/config.seen\": \"2021-09-08T02:53:03.322355854Z\",\n                    \"kubernetes.io/config.source\": \"file\",\n                    \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"v1\",\n                        \"kind\": \"Node\",\n                        \"name\": \"ip-172-20-56-43.eu-west-3.compute.internal\",\n                        \"uid\": \"ca6d0760-5ed3-4072-adb7-df925f431870\",\n                        \"controller\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"rootfs\",\n                        \"hostPath\": {\n                            \"path\": \"/\",\n                            \"type\": \"Directory\"\n                        }\n                    },\n                    {\n                        \"name\": \"run\",\n       
                 \"hostPath\": {\n                            \"path\": \"/run\",\n                            \"type\": \"DirectoryOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": \"pki\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/kubernetes/pki/etcd-manager-events\",\n                            \"type\": \"DirectoryOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": \"varlogetcd\",\n                        \"hostPath\": {\n                            \"path\": \"/var/log/etcd-events.log\",\n                            \"type\": \"FileOrCreate\"\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"etcd-manager\",\n                        \"image\": \"k8s.gcr.io/etcdadm/etcd-manager:3.0.20210707\",\n                        \"command\": [\n                            \"/bin/sh\",\n                            \"-c\",\n                            \"mkfifo /tmp/pipe; (tee -a /var/log/etcd.log \\u003c /tmp/pipe \\u0026 ) ; exec /etcd-manager --backup-store=s3://k8s-kops-prow/e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io/backups/etcd/events --client-urls=https://__name__:4002 --cluster-name=etcd-events --containerized=true --dns-suffix=.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io --grpc-port=3997 --peer-urls=https://__name__:2381 --quarantine-client-urls=https://__name__:3995 --v=6 --volume-name-tag=k8s.io/etcd/events --volume-provider=aws --volume-tag=k8s.io/etcd/events --volume-tag=k8s.io/role/master=1 --volume-tag=kubernetes.io/cluster/e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io=owned \\u003e /tmp/pipe 2\\u003e\\u00261\"\n                        ],\n                        \"resources\": {\n                            \"requests\": {\n                                \"cpu\": \"100m\",\n                                \"memory\": \"100Mi\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"rootfs\",\n                                \"mountPath\": \"/rootfs\"\n                            },\n                            {\n                                \"name\": \"run\",\n                                \"mountPath\": \"/run\"\n                            },\n                            {\n                                \"name\": \"pki\",\n                                \"mountPath\": \"/etc/kubernetes/pki/etcd-manager\"\n                            },\n                            {\n                                \"name\": \"varlogetcd\",\n                                \"mountPath\": \"/var/log/etcd.log\"\n                            }\n                        ],\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\",\n                        \"securityContext\": {\n                            \"privileged\": true\n                        }\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"ClusterFirst\",\n                \"nodeName\": \"ip-172-20-56-43.eu-west-3.compute.internal\",\n        
        \"hostNetwork\": true,\n                \"hostPID\": true,\n                \"securityContext\": {},\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"key\": \"CriticalAddonsOnly\",\n                        \"operator\": \"Exists\"\n                    },\n                    {\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    }\n                ],\n                \"priorityClassName\": \"system-cluster-critical\",\n                \"priority\": 2000000000,\n                \"enableServiceLinks\": true,\n                \"preemptionPolicy\": \"PreemptLowerPriority\"\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-09-08T02:53:04Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-09-08T02:53:39Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-09-08T02:53:39Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-09-08T02:53:04Z\"\n                    }\n                ],\n                \"hostIP\": \"172.20.56.43\",\n                \"podIP\": \"172.20.56.43\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"172.20.56.43\"\n                    }\n                ],\n                \"startTime\": \"2021-09-08T02:53:04Z\",\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"etcd-manager\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2021-09-08T02:53:39Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"k8s.gcr.io/etcdadm/etcd-manager:3.0.20210707\",\n                        \"imageID\": \"k8s.gcr.io/etcdadm/etcd-manager@sha256:17c07a22ebd996b93f6484437c684244219e325abeb70611cbaceb78c0f2d5d4\",\n                        \"containerID\": \"containerd://2f35b730690b0230bbb81709421a14b6a0145f584818c7c34c7300b4f323f688\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Burstable\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"etcd-manager-main-ip-172-20-56-43.eu-west-3.compute.internal\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"5abe47c2-ccff-47d0-b919-437716df2977\",\n                \"resourceVersion\": \"550\",\n            
    \"creationTimestamp\": \"2021-09-08T02:55:08Z\",\n                \"labels\": {\n                    \"k8s-app\": \"etcd-manager-main\"\n                },\n                \"annotations\": {\n                    \"kubernetes.io/config.hash\": \"2cc1dc3fa8b1012e62d8abe98d85888c\",\n                    \"kubernetes.io/config.mirror\": \"2cc1dc3fa8b1012e62d8abe98d85888c\",\n                    \"kubernetes.io/config.seen\": \"2021-09-08T02:53:03.322357630Z\",\n                    \"kubernetes.io/config.source\": \"file\",\n                    \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"v1\",\n                        \"kind\": \"Node\",\n                        \"name\": \"ip-172-20-56-43.eu-west-3.compute.internal\",\n                        \"uid\": \"ca6d0760-5ed3-4072-adb7-df925f431870\",\n                        \"controller\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"rootfs\",\n                        \"hostPath\": {\n                            \"path\": \"/\",\n                            \"type\": \"Directory\"\n                        }\n                    },\n                    {\n                        \"name\": \"run\",\n                        \"hostPath\": {\n                            \"path\": \"/run\",\n                            \"type\": \"DirectoryOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": \"pki\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/kubernetes/pki/etcd-manager-main\",\n                            \"type\": \"DirectoryOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": \"varlogetcd\",\n                        \"hostPath\": {\n                            \"path\": \"/var/log/etcd.log\",\n                            \"type\": \"FileOrCreate\"\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"etcd-manager\",\n                        \"image\": \"k8s.gcr.io/etcdadm/etcd-manager:3.0.20210707\",\n                        \"command\": [\n                            \"/bin/sh\",\n                            \"-c\",\n                            \"mkfifo /tmp/pipe; (tee -a /var/log/etcd.log \\u003c /tmp/pipe \\u0026 ) ; exec /etcd-manager --backup-store=s3://k8s-kops-prow/e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io/backups/etcd/main --client-urls=https://__name__:4001 --cluster-name=etcd --containerized=true --dns-suffix=.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io --grpc-port=3996 --peer-urls=https://__name__:2380 --quarantine-client-urls=https://__name__:3994 --v=6 --volume-name-tag=k8s.io/etcd/main --volume-provider=aws --volume-tag=k8s.io/etcd/main --volume-tag=k8s.io/role/master=1 --volume-tag=kubernetes.io/cluster/e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io=owned \\u003e /tmp/pipe 2\\u003e\\u00261\"\n                        ],\n                        \"resources\": {\n                            \"requests\": {\n                                \"cpu\": \"200m\",\n                                \"memory\": \"100Mi\"\n                            }\n                        },\n                
        \"volumeMounts\": [\n                            {\n                                \"name\": \"rootfs\",\n                                \"mountPath\": \"/rootfs\"\n                            },\n                            {\n                                \"name\": \"run\",\n                                \"mountPath\": \"/run\"\n                            },\n                            {\n                                \"name\": \"pki\",\n                                \"mountPath\": \"/etc/kubernetes/pki/etcd-manager\"\n                            },\n                            {\n                                \"name\": \"varlogetcd\",\n                                \"mountPath\": \"/var/log/etcd.log\"\n                            }\n                        ],\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\",\n                        \"securityContext\": {\n                            \"privileged\": true\n                        }\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"ClusterFirst\",\n                \"nodeName\": \"ip-172-20-56-43.eu-west-3.compute.internal\",\n                \"hostNetwork\": true,\n                \"hostPID\": true,\n                \"securityContext\": {},\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"key\": \"CriticalAddonsOnly\",\n                        \"operator\": \"Exists\"\n                    },\n                    {\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    }\n                ],\n                \"priorityClassName\": \"system-cluster-critical\",\n                \"priority\": 2000000000,\n                \"enableServiceLinks\": true,\n                \"preemptionPolicy\": \"PreemptLowerPriority\"\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-09-08T02:53:04Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-09-08T02:53:39Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-09-08T02:53:39Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-09-08T02:53:04Z\"\n                    }\n                ],\n                \"hostIP\": \"172.20.56.43\",\n                \"podIP\": \"172.20.56.43\",\n                \"podIPs\": [\n                    {\n                        
\"ip\": \"172.20.56.43\"\n                    }\n                ],\n                \"startTime\": \"2021-09-08T02:53:04Z\",\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"etcd-manager\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2021-09-08T02:53:39Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"k8s.gcr.io/etcdadm/etcd-manager:3.0.20210707\",\n                        \"imageID\": \"k8s.gcr.io/etcdadm/etcd-manager@sha256:17c07a22ebd996b93f6484437c684244219e325abeb70611cbaceb78c0f2d5d4\",\n                        \"containerID\": \"containerd://608b4adccf5a6633c98c908c5d44b41358891a440d82a5ba7ba17dcded5264e2\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Burstable\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kops-controller-44znn\",\n                \"generateName\": \"kops-controller-\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"6f33f969-879a-43f3-9490-d9ac24c5b5ba\",\n                \"resourceVersion\": \"459\",\n                \"creationTimestamp\": \"2021-09-08T02:54:36Z\",\n                \"labels\": {\n                    \"controller-revision-hash\": \"6c875f66c8\",\n                    \"k8s-addon\": \"kops-controller.addons.k8s.io\",\n                    \"k8s-app\": \"kops-controller\",\n                    \"pod-template-generation\": \"1\",\n                    \"version\": \"v1.22.0-beta.1\"\n                },\n                \"annotations\": {\n                    \"dns.alpha.kubernetes.io/internal\": \"kops-controller.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"apps/v1\",\n                        \"kind\": \"DaemonSet\",\n                        \"name\": \"kops-controller\",\n                        \"uid\": \"f91d23fe-65be-4fb8-a26a-757fd2bb4003\",\n                        \"controller\": true,\n                        \"blockOwnerDeletion\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"kops-controller-config\",\n                        \"configMap\": {\n                            \"name\": \"kops-controller\",\n                            \"defaultMode\": 420\n                        }\n                    },\n                    {\n                        \"name\": \"kops-controller-pki\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/kubernetes/kops-controller/\",\n                            \"type\": \"Directory\"\n                        }\n                    },\n                    {\n                        \"name\": \"kube-api-access-xfszs\",\n                        \"projected\": {\n                            \"sources\": [\n                                {\n                                    \"serviceAccountToken\": {\n                                        \"expirationSeconds\": 3607,\n                                        \"path\": \"token\"\n                 
                   }\n                                },\n                                {\n                                    \"configMap\": {\n                                        \"name\": \"kube-root-ca.crt\",\n                                        \"items\": [\n                                            {\n                                                \"key\": \"ca.crt\",\n                                                \"path\": \"ca.crt\"\n                                            }\n                                        ]\n                                    }\n                                },\n                                {\n                                    \"downwardAPI\": {\n                                        \"items\": [\n                                            {\n                                                \"path\": \"namespace\",\n                                                \"fieldRef\": {\n                                                    \"apiVersion\": \"v1\",\n                                                    \"fieldPath\": \"metadata.namespace\"\n                                                }\n                                            }\n                                        ]\n                                    }\n                                }\n                            ],\n                            \"defaultMode\": 420\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"kops-controller\",\n                        \"image\": \"k8s.gcr.io/kops/kops-controller:1.22.0-beta.1\",\n                        \"command\": [\n                            \"/kops-controller\",\n                            \"--v=2\",\n                            \"--conf=/etc/kubernetes/kops-controller/config/config.yaml\"\n                        ],\n                        \"env\": [\n                            {\n                                \"name\": \"KUBERNETES_SERVICE_HOST\",\n                                \"value\": \"127.0.0.1\"\n                            }\n                        ],\n                        \"resources\": {\n                            \"requests\": {\n                                \"cpu\": \"50m\",\n                                \"memory\": \"50Mi\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"kops-controller-config\",\n                                \"mountPath\": \"/etc/kubernetes/kops-controller/config/\"\n                            },\n                            {\n                                \"name\": \"kops-controller-pki\",\n                                \"mountPath\": \"/etc/kubernetes/kops-controller/pki/\"\n                            },\n                            {\n                                \"name\": \"kube-api-access-xfszs\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n                            }\n                        ],\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\",\n                        \"securityContext\": {\n                            \"runAsNonRoot\": 
true\n                        }\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"Default\",\n                \"nodeSelector\": {\n                    \"kops.k8s.io/kops-controller-pki\": \"\",\n                    \"node-role.kubernetes.io/master\": \"\"\n                },\n                \"serviceAccountName\": \"kops-controller\",\n                \"serviceAccount\": \"kops-controller\",\n                \"nodeName\": \"ip-172-20-56-43.eu-west-3.compute.internal\",\n                \"hostNetwork\": true,\n                \"securityContext\": {},\n                \"affinity\": {\n                    \"nodeAffinity\": {\n                        \"requiredDuringSchedulingIgnoredDuringExecution\": {\n                            \"nodeSelectorTerms\": [\n                                {\n                                    \"matchFields\": [\n                                        {\n                                            \"key\": \"metadata.name\",\n                                            \"operator\": \"In\",\n                                            \"values\": [\n                                                \"ip-172-20-56-43.eu-west-3.compute.internal\"\n                                            ]\n                                        }\n                                    ]\n                                }\n                            ]\n                        }\n                    }\n                },\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"key\": \"node-role.kubernetes.io/master\",\n                        \"operator\": \"Exists\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/not-ready\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/unreachable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/disk-pressure\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/memory-pressure\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/pid-pressure\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/unschedulable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/network-unavailable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    }\n                ],\n                \"priorityClassName\": \"system-node-critical\",\n                \"priority\": 2000001000,\n                
\"enableServiceLinks\": true,\n                \"preemptionPolicy\": \"PreemptLowerPriority\"\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-09-08T02:54:36Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-09-08T02:54:38Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-09-08T02:54:38Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-09-08T02:54:36Z\"\n                    }\n                ],\n                \"hostIP\": \"172.20.56.43\",\n                \"podIP\": \"172.20.56.43\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"172.20.56.43\"\n                    }\n                ],\n                \"startTime\": \"2021-09-08T02:54:36Z\",\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"kops-controller\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2021-09-08T02:54:37Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"k8s.gcr.io/kops/kops-controller:1.22.0-beta.1\",\n                        \"imageID\": \"sha256:62287707a8723aba9b071745df906504f59d9c9a340a0224903ada29be5f0d91\",\n                        \"containerID\": \"containerd://535d0c41b0ed4f6875032fbf5636b4ffb2a84e68551105e3893aca8f4e9a36f6\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Burstable\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-apiserver-ip-172-20-56-43.eu-west-3.compute.internal\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"ed57a76f-f4d3-42e5-b8ab-857d9d5b67ae\",\n                \"resourceVersion\": \"548\",\n                \"creationTimestamp\": \"2021-09-08T02:55:07Z\",\n                \"labels\": {\n                    \"k8s-app\": \"kube-apiserver\"\n                },\n                \"annotations\": {\n                    \"dns.alpha.kubernetes.io/external\": \"api.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io\",\n                    \"dns.alpha.kubernetes.io/internal\": \"api.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io\",\n                    \"kubectl.kubernetes.io/default-container\": \"kube-apiserver\",\n                    \"kubernetes.io/config.hash\": \"6eb91c7695f2cab718b4fb928c30a90e\",\n                    \"kubernetes.io/config.mirror\": 
\"6eb91c7695f2cab718b4fb928c30a90e\",\n                    \"kubernetes.io/config.seen\": \"2021-09-08T02:53:03.307343333Z\",\n                    \"kubernetes.io/config.source\": \"file\",\n                    \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"v1\",\n                        \"kind\": \"Node\",\n                        \"name\": \"ip-172-20-56-43.eu-west-3.compute.internal\",\n                        \"uid\": \"ca6d0760-5ed3-4072-adb7-df925f431870\",\n                        \"controller\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"logfile\",\n                        \"hostPath\": {\n                            \"path\": \"/var/log/kube-apiserver.log\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"etcssl\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/ssl\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"etcpkitls\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/pki/tls\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"etcpkica-trust\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/pki/ca-trust\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"usrsharessl\",\n                        \"hostPath\": {\n                            \"path\": \"/usr/share/ssl\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"usrssl\",\n                        \"hostPath\": {\n                            \"path\": \"/usr/ssl\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"usrlibssl\",\n                        \"hostPath\": {\n                            \"path\": \"/usr/lib/ssl\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"usrlocalopenssl\",\n                        \"hostPath\": {\n                            \"path\": \"/usr/local/openssl\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"varssl\",\n                        \"hostPath\": {\n                            \"path\": \"/var/ssl\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"etcopenssl\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/openssl\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"cloudconfig\",\n                        \"hostPath\": 
{\n                            \"path\": \"/etc/kubernetes/cloud.config\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"kubernetesca\",\n                        \"hostPath\": {\n                            \"path\": \"/srv/kubernetes/ca.crt\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"srvkapi\",\n                        \"hostPath\": {\n                            \"path\": \"/srv/kubernetes/kube-apiserver\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"srvsshproxy\",\n                        \"hostPath\": {\n                            \"path\": \"/srv/sshproxy\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"healthcheck-secrets\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/kubernetes/kube-apiserver-healthcheck/secrets\",\n                            \"type\": \"Directory\"\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"kube-apiserver\",\n                        \"image\": \"k8s.gcr.io/kube-apiserver-amd64:v1.21.4\",\n                        \"command\": [\n                            \"/usr/local/bin/kube-apiserver\"\n                        ],\n                        \"args\": [\n                            \"--allow-privileged=true\",\n                            \"--anonymous-auth=false\",\n                            \"--api-audiences=kubernetes.svc.default\",\n                            \"--apiserver-count=1\",\n                            \"--authorization-mode=Node,RBAC\",\n                            \"--bind-address=0.0.0.0\",\n                            \"--client-ca-file=/srv/kubernetes/ca.crt\",\n                            \"--cloud-config=/etc/kubernetes/cloud.config\",\n                            \"--cloud-provider=aws\",\n                            \"--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,NodeRestriction,ResourceQuota\",\n                            \"--etcd-cafile=/srv/kubernetes/kube-apiserver/etcd-ca.crt\",\n                            \"--etcd-certfile=/srv/kubernetes/kube-apiserver/etcd-client.crt\",\n                            \"--etcd-keyfile=/srv/kubernetes/kube-apiserver/etcd-client.key\",\n                            \"--etcd-servers-overrides=/events#https://127.0.0.1:4002\",\n                            \"--etcd-servers=https://127.0.0.1:4001\",\n                            \"--insecure-port=0\",\n                            \"--kubelet-client-certificate=/srv/kubernetes/kube-apiserver/kubelet-api.crt\",\n                            \"--kubelet-client-key=/srv/kubernetes/kube-apiserver/kubelet-api.key\",\n                            \"--kubelet-preferred-address-types=InternalIP,Hostname,ExternalIP\",\n                            \"--proxy-client-cert-file=/srv/kubernetes/kube-apiserver/apiserver-aggregator.crt\",\n                            
\"--proxy-client-key-file=/srv/kubernetes/kube-apiserver/apiserver-aggregator.key\",\n                            \"--requestheader-allowed-names=aggregator\",\n                            \"--requestheader-client-ca-file=/srv/kubernetes/kube-apiserver/apiserver-aggregator-ca.crt\",\n                            \"--requestheader-extra-headers-prefix=X-Remote-Extra-\",\n                            \"--requestheader-group-headers=X-Remote-Group\",\n                            \"--requestheader-username-headers=X-Remote-User\",\n                            \"--secure-port=443\",\n                            \"--service-account-issuer=https://api.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io\",\n                            \"--service-account-jwks-uri=https://api.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io/openid/v1/jwks\",\n                            \"--service-account-key-file=/srv/kubernetes/kube-apiserver/service-account.pub\",\n                            \"--service-account-signing-key-file=/srv/kubernetes/kube-apiserver/service-account.key\",\n                            \"--service-cluster-ip-range=100.64.0.0/13\",\n                            \"--storage-backend=etcd3\",\n                            \"--tls-cert-file=/srv/kubernetes/kube-apiserver/server.crt\",\n                            \"--tls-private-key-file=/srv/kubernetes/kube-apiserver/server.key\",\n                            \"--v=2\",\n                            \"--logtostderr=false\",\n                            \"--alsologtostderr\",\n                            \"--log-file=/var/log/kube-apiserver.log\"\n                        ],\n                        \"ports\": [\n                            {\n                                \"name\": \"https\",\n                                \"hostPort\": 443,\n                                \"containerPort\": 443,\n                                \"protocol\": \"TCP\"\n                            }\n                        ],\n                        \"resources\": {\n                            \"requests\": {\n                                \"cpu\": \"150m\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"logfile\",\n                                \"mountPath\": \"/var/log/kube-apiserver.log\"\n                            },\n                            {\n                                \"name\": \"etcssl\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/etc/ssl\"\n                            },\n                            {\n                                \"name\": \"etcpkitls\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/etc/pki/tls\"\n                            },\n                            {\n                                \"name\": \"etcpkica-trust\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/etc/pki/ca-trust\"\n                            },\n                            {\n                                \"name\": \"usrsharessl\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/usr/share/ssl\"\n                            },\n                            {\n                                \"name\": \"usrssl\",\n                                \"readOnly\": true,\n                   
             \"mountPath\": \"/usr/ssl\"\n                            },\n                            {\n                                \"name\": \"usrlibssl\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/usr/lib/ssl\"\n                            },\n                            {\n                                \"name\": \"usrlocalopenssl\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/usr/local/openssl\"\n                            },\n                            {\n                                \"name\": \"varssl\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/ssl\"\n                            },\n                            {\n                                \"name\": \"etcopenssl\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/etc/openssl\"\n                            },\n                            {\n                                \"name\": \"cloudconfig\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/etc/kubernetes/cloud.config\"\n                            },\n                            {\n                                \"name\": \"kubernetesca\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/srv/kubernetes/ca.crt\"\n                            },\n                            {\n                                \"name\": \"srvkapi\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/srv/kubernetes/kube-apiserver\"\n                            },\n                            {\n                                \"name\": \"srvsshproxy\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/srv/sshproxy\"\n                            }\n                        ],\n                        \"livenessProbe\": {\n                            \"httpGet\": {\n                                \"path\": \"/healthz\",\n                                \"port\": 3990,\n                                \"host\": \"127.0.0.1\",\n                                \"scheme\": \"HTTP\"\n                            },\n                            \"initialDelaySeconds\": 45,\n                            \"timeoutSeconds\": 15,\n                            \"periodSeconds\": 10,\n                            \"successThreshold\": 1,\n                            \"failureThreshold\": 3\n                        },\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\"\n                    },\n                    {\n                        \"name\": \"healthcheck\",\n                        \"image\": \"k8s.gcr.io/kops/kube-apiserver-healthcheck:1.22.0-beta.1\",\n                        \"command\": [\n                            \"/kube-apiserver-healthcheck\"\n                        ],\n                        \"args\": [\n                            \"--ca-cert=/secrets/ca.crt\",\n                            \"--client-cert=/secrets/client.crt\",\n                            \"--client-key=/secrets/client.key\"\n                        ],\n                        \"resources\": {},\n            
            \"volumeMounts\": [\n                            {\n                                \"name\": \"healthcheck-secrets\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/secrets\"\n                            }\n                        ],\n                        \"livenessProbe\": {\n                            \"httpGet\": {\n                                \"path\": \"/.kube-apiserver-healthcheck/healthz\",\n                                \"port\": 3990,\n                                \"host\": \"127.0.0.1\",\n                                \"scheme\": \"HTTP\"\n                            },\n                            \"initialDelaySeconds\": 5,\n                            \"timeoutSeconds\": 5,\n                            \"periodSeconds\": 10,\n                            \"successThreshold\": 1,\n                            \"failureThreshold\": 3\n                        },\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\"\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"ClusterFirst\",\n                \"nodeName\": \"ip-172-20-56-43.eu-west-3.compute.internal\",\n                \"hostNetwork\": true,\n                \"securityContext\": {},\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"key\": \"CriticalAddonsOnly\",\n                        \"operator\": \"Exists\"\n                    },\n                    {\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    }\n                ],\n                \"priorityClassName\": \"system-cluster-critical\",\n                \"priority\": 2000000000,\n                \"enableServiceLinks\": true,\n                \"preemptionPolicy\": \"PreemptLowerPriority\"\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-09-08T02:53:04Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-09-08T02:53:52Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-09-08T02:53:52Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-09-08T02:53:04Z\"\n                    }\n                ],\n                \"hostIP\": \"172.20.56.43\",\n                \"podIP\": \"172.20.56.43\",\n                \"podIPs\": [\n                    {\n                
        \"ip\": \"172.20.56.43\"\n                    }\n                ],\n                \"startTime\": \"2021-09-08T02:53:04Z\",\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"healthcheck\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2021-09-08T02:53:31Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"k8s.gcr.io/kops/kube-apiserver-healthcheck:1.22.0-beta.1\",\n                        \"imageID\": \"sha256:d87f873ca639a672612d5da5e4d1a77910a7605c9c830937982f9bb05206d0c8\",\n                        \"containerID\": \"containerd://992637a8e75c425f70ffcf01525037fb3088494deeee70852a70d4148df76d07\",\n                        \"started\": true\n                    },\n                    {\n                        \"name\": \"kube-apiserver\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2021-09-08T02:53:51Z\"\n                            }\n                        },\n                        \"lastState\": {\n                            \"terminated\": {\n                                \"exitCode\": 1,\n                                \"reason\": \"Error\",\n                                \"startedAt\": \"2021-09-08T02:53:29Z\",\n                                \"finishedAt\": \"2021-09-08T02:53:50Z\",\n                                \"containerID\": \"containerd://7def00f508b909b603838b961a4452285490ad9ff61a79bb977549eea873804b\"\n                            }\n                        },\n                        \"ready\": true,\n                        \"restartCount\": 1,\n                        \"image\": \"k8s.gcr.io/kube-apiserver-amd64:v1.21.4\",\n                        \"imageID\": \"k8s.gcr.io/kube-apiserver-amd64@sha256:f29008c0c91003edb5e5d87c6e7242e31f7bb814af98c7b885e75aa96f5c37de\",\n                        \"containerID\": \"containerd://35bcb6328975fa83e6f8c499196abca771511b143906042d1c57119f16503e27\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Burstable\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-controller-manager-ip-172-20-56-43.eu-west-3.compute.internal\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"0b690622-9807-4772-9b21-6b18a0b76c69\",\n                \"resourceVersion\": \"607\",\n                \"creationTimestamp\": \"2021-09-08T02:55:25Z\",\n                \"labels\": {\n                    \"k8s-app\": \"kube-controller-manager\"\n                },\n                \"annotations\": {\n                    \"kubernetes.io/config.hash\": \"c6fc018e1a7e016998b01becb04eb833\",\n                    \"kubernetes.io/config.mirror\": \"c6fc018e1a7e016998b01becb04eb833\",\n                    \"kubernetes.io/config.seen\": \"2021-09-08T02:53:03.322347379Z\",\n                    \"kubernetes.io/config.source\": \"file\",\n                    \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"v1\",\n                        \"kind\": \"Node\",\n                       
 \"name\": \"ip-172-20-56-43.eu-west-3.compute.internal\",\n                        \"uid\": \"ca6d0760-5ed3-4072-adb7-df925f431870\",\n                        \"controller\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"logfile\",\n                        \"hostPath\": {\n                            \"path\": \"/var/log/kube-controller-manager.log\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"etcssl\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/ssl\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"etcpkitls\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/pki/tls\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"etcpkica-trust\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/pki/ca-trust\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"usrsharessl\",\n                        \"hostPath\": {\n                            \"path\": \"/usr/share/ssl\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"usrssl\",\n                        \"hostPath\": {\n                            \"path\": \"/usr/ssl\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"usrlibssl\",\n                        \"hostPath\": {\n                            \"path\": \"/usr/lib/ssl\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"usrlocalopenssl\",\n                        \"hostPath\": {\n                            \"path\": \"/usr/local/openssl\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"varssl\",\n                        \"hostPath\": {\n                            \"path\": \"/var/ssl\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"etcopenssl\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/openssl\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"cloudconfig\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/kubernetes/cloud.config\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"cabundle\",\n                        \"hostPath\": {\n                            \"path\": \"/srv/kubernetes/ca.crt\",\n                            \"type\": \"\"\n                        }\n                    },\n           
         {\n                        \"name\": \"srvkcm\",\n                        \"hostPath\": {\n                            \"path\": \"/srv/kubernetes/kube-controller-manager\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"varlibkcm\",\n                        \"hostPath\": {\n                            \"path\": \"/var/lib/kube-controller-manager\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"volplugins\",\n                        \"hostPath\": {\n                            \"path\": \"/usr/libexec/kubernetes/kubelet-plugins/volume/exec/\",\n                            \"type\": \"\"\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"kube-controller-manager\",\n                        \"image\": \"k8s.gcr.io/kube-controller-manager-amd64:v1.21.4\",\n                        \"command\": [\n                            \"/usr/local/bin/kube-controller-manager\"\n                        ],\n                        \"args\": [\n                            \"--allocate-node-cidrs=true\",\n                            \"--attach-detach-reconcile-sync-period=1m0s\",\n                            \"--authentication-kubeconfig=/var/lib/kube-controller-manager/kubeconfig\",\n                            \"--authorization-kubeconfig=/var/lib/kube-controller-manager/kubeconfig\",\n                            \"--cloud-config=/etc/kubernetes/cloud.config\",\n                            \"--cloud-provider=aws\",\n                            \"--cluster-cidr=100.96.0.0/11\",\n                            \"--cluster-name=e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io\",\n                            \"--cluster-signing-cert-file=/srv/kubernetes/kube-controller-manager/ca.crt\",\n                            \"--cluster-signing-key-file=/srv/kubernetes/kube-controller-manager/ca.key\",\n                            \"--configure-cloud-routes=true\",\n                            \"--flex-volume-plugin-dir=/usr/libexec/kubernetes/kubelet-plugins/volume/exec/\",\n                            \"--kubeconfig=/var/lib/kube-controller-manager/kubeconfig\",\n                            \"--leader-elect=true\",\n                            \"--root-ca-file=/srv/kubernetes/ca.crt\",\n                            \"--service-account-private-key-file=/srv/kubernetes/kube-controller-manager/service-account.key\",\n                            \"--tls-cert-file=/srv/kubernetes/kube-controller-manager/server.crt\",\n                            \"--tls-private-key-file=/srv/kubernetes/kube-controller-manager/server.key\",\n                            \"--use-service-account-credentials=true\",\n                            \"--v=2\",\n                            \"--logtostderr=false\",\n                            \"--alsologtostderr\",\n                            \"--log-file=/var/log/kube-controller-manager.log\"\n                        ],\n                        \"resources\": {\n                            \"requests\": {\n                                \"cpu\": \"100m\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"logfile\",\n                                
\"mountPath\": \"/var/log/kube-controller-manager.log\"\n                            },\n                            {\n                                \"name\": \"etcssl\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/etc/ssl\"\n                            },\n                            {\n                                \"name\": \"etcpkitls\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/etc/pki/tls\"\n                            },\n                            {\n                                \"name\": \"etcpkica-trust\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/etc/pki/ca-trust\"\n                            },\n                            {\n                                \"name\": \"usrsharessl\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/usr/share/ssl\"\n                            },\n                            {\n                                \"name\": \"usrssl\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/usr/ssl\"\n                            },\n                            {\n                                \"name\": \"usrlibssl\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/usr/lib/ssl\"\n                            },\n                            {\n                                \"name\": \"usrlocalopenssl\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/usr/local/openssl\"\n                            },\n                            {\n                                \"name\": \"varssl\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/ssl\"\n                            },\n                            {\n                                \"name\": \"etcopenssl\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/etc/openssl\"\n                            },\n                            {\n                                \"name\": \"cloudconfig\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/etc/kubernetes/cloud.config\"\n                            },\n                            {\n                                \"name\": \"cabundle\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/srv/kubernetes/ca.crt\"\n                            },\n                            {\n                                \"name\": \"srvkcm\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/srv/kubernetes/kube-controller-manager\"\n                            },\n                            {\n                                \"name\": \"varlibkcm\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/lib/kube-controller-manager\"\n                            },\n                            {\n                                \"name\": \"volplugins\",\n                                \"mountPath\": \"/usr/libexec/kubernetes/kubelet-plugins/volume/exec/\"\n                            }\n                        ],\n                        
\"livenessProbe\": {\n                            \"httpGet\": {\n                                \"path\": \"/healthz\",\n                                \"port\": 10257,\n                                \"host\": \"127.0.0.1\",\n                                \"scheme\": \"HTTPS\"\n                            },\n                            \"initialDelaySeconds\": 15,\n                            \"timeoutSeconds\": 15,\n                            \"periodSeconds\": 10,\n                            \"successThreshold\": 1,\n                            \"failureThreshold\": 3\n                        },\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\"\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"ClusterFirst\",\n                \"nodeName\": \"ip-172-20-56-43.eu-west-3.compute.internal\",\n                \"hostNetwork\": true,\n                \"securityContext\": {},\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"key\": \"CriticalAddonsOnly\",\n                        \"operator\": \"Exists\"\n                    },\n                    {\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    }\n                ],\n                \"priorityClassName\": \"system-cluster-critical\",\n                \"priority\": 2000000000,\n                \"enableServiceLinks\": true,\n                \"preemptionPolicy\": \"PreemptLowerPriority\"\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-09-08T02:53:04Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-09-08T02:54:22Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-09-08T02:54:22Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-09-08T02:53:04Z\"\n                    }\n                ],\n                \"hostIP\": \"172.20.56.43\",\n                \"podIP\": \"172.20.56.43\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"172.20.56.43\"\n                    }\n                ],\n                \"startTime\": \"2021-09-08T02:53:04Z\",\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"kube-controller-manager\",\n                        \"state\": {\n                            \"running\": {\n       
                         \"startedAt\": \"2021-09-08T02:54:21Z\"\n                            }\n                        },\n                        \"lastState\": {\n                            \"terminated\": {\n                                \"exitCode\": 1,\n                                \"reason\": \"Error\",\n                                \"startedAt\": \"2021-09-08T02:53:41Z\",\n                                \"finishedAt\": \"2021-09-08T02:54:01Z\",\n                                \"containerID\": \"containerd://fc593f5a34a92e59d160daa9ffaf28ea5d9fd7f3c2e4439bd852334e4f15b87a\"\n                            }\n                        },\n                        \"ready\": true,\n                        \"restartCount\": 2,\n                        \"image\": \"k8s.gcr.io/kube-controller-manager-amd64:v1.21.4\",\n                        \"imageID\": \"sha256:2c25d0f89db7a9dba5ed71b692b65e86b0ad9fcab1a9f94e946c05db18776ab3\",\n                        \"containerID\": \"containerd://ae828ccc694605fca35d82037b930ff8f77acbe6b0f355b44fce5f833bd3e0c6\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Burstable\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-ip-172-20-36-148.eu-west-3.compute.internal\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"f01da170-643a-4b7c-b8f9-767ed20e1ac3\",\n                \"resourceVersion\": \"862\",\n                \"creationTimestamp\": \"2021-09-08T02:56:37Z\",\n                \"labels\": {\n                    \"k8s-app\": \"kube-proxy\",\n                    \"tier\": \"node\"\n                },\n                \"annotations\": {\n                    \"kubernetes.io/config.hash\": \"2aa857d5c134d1cb3ebf26e969d16a08\",\n                    \"kubernetes.io/config.mirror\": \"2aa857d5c134d1cb3ebf26e969d16a08\",\n                    \"kubernetes.io/config.seen\": \"2021-09-08T02:55:14.909690143Z\",\n                    \"kubernetes.io/config.source\": \"file\",\n                    \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"v1\",\n                        \"kind\": \"Node\",\n                        \"name\": \"ip-172-20-36-148.eu-west-3.compute.internal\",\n                        \"uid\": \"5e811f22-fe16-44e6-8983-0e058e7bab4a\",\n                        \"controller\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"logfile\",\n                        \"hostPath\": {\n                            \"path\": \"/var/log/kube-proxy.log\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"kubeconfig\",\n                        \"hostPath\": {\n                            \"path\": \"/var/lib/kube-proxy/kubeconfig\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"modules\",\n                        \"hostPath\": {\n                            \"path\": \"/lib/modules\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        
\"name\": \"ssl-certs-hosts\",\n                        \"hostPath\": {\n                            \"path\": \"/usr/share/ca-certificates\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"iptableslock\",\n                        \"hostPath\": {\n                            \"path\": \"/run/xtables.lock\",\n                            \"type\": \"FileOrCreate\"\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"kube-proxy\",\n                        \"image\": \"k8s.gcr.io/kube-proxy-amd64:v1.21.4\",\n                        \"command\": [\n                            \"/usr/local/bin/kube-proxy\"\n                        ],\n                        \"args\": [\n                            \"--cluster-cidr=100.96.0.0/11\",\n                            \"--conntrack-max-per-core=131072\",\n                            \"--hostname-override=ip-172-20-36-148.eu-west-3.compute.internal\",\n                            \"--kubeconfig=/var/lib/kube-proxy/kubeconfig\",\n                            \"--master=https://api.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io\",\n                            \"--oom-score-adj=-998\",\n                            \"--v=2\",\n                            \"--logtostderr=false\",\n                            \"--alsologtostderr\",\n                            \"--log-file=/var/log/kube-proxy.log\"\n                        ],\n                        \"resources\": {\n                            \"requests\": {\n                                \"cpu\": \"100m\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"logfile\",\n                                \"mountPath\": \"/var/log/kube-proxy.log\"\n                            },\n                            {\n                                \"name\": \"kubeconfig\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/lib/kube-proxy/kubeconfig\"\n                            },\n                            {\n                                \"name\": \"modules\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/lib/modules\"\n                            },\n                            {\n                                \"name\": \"ssl-certs-hosts\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/etc/ssl/certs\"\n                            },\n                            {\n                                \"name\": \"iptableslock\",\n                                \"mountPath\": \"/run/xtables.lock\"\n                            }\n                        ],\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\",\n                        \"securityContext\": {\n                            \"privileged\": true\n                        }\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"ClusterFirst\",\n                
\"nodeName\": \"ip-172-20-36-148.eu-west-3.compute.internal\",\n                \"hostNetwork\": true,\n                \"securityContext\": {},\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"key\": \"CriticalAddonsOnly\",\n                        \"operator\": \"Exists\"\n                    },\n                    {\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    }\n                ],\n                \"priorityClassName\": \"system-node-critical\",\n                \"priority\": 2000001000,\n                \"enableServiceLinks\": true,\n                \"preemptionPolicy\": \"PreemptLowerPriority\"\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-09-08T02:55:15Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-09-08T02:55:17Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-09-08T02:55:17Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-09-08T02:55:15Z\"\n                    }\n                ],\n                \"hostIP\": \"172.20.36.148\",\n                \"podIP\": \"172.20.36.148\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"172.20.36.148\"\n                    }\n                ],\n                \"startTime\": \"2021-09-08T02:55:15Z\",\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"kube-proxy\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2021-09-08T02:55:16Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"k8s.gcr.io/kube-proxy-amd64:v1.21.4\",\n                        \"imageID\": \"sha256:ef4bce0a7569b4fa83a559717c608c076a2c9d30361eb059ea4e1b7a55424d68\",\n                        \"containerID\": \"containerd://589180a7734ab194214adae6c79c652693074247ff79d6f20613d82e7db3966e\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Burstable\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-ip-172-20-36-200.eu-west-3.compute.internal\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"b7f84c55-d1c0-47bc-a06a-643e3b8e6d67\",\n                \"resourceVersion\": \"791\",\n                
\"creationTimestamp\": \"2021-09-08T02:56:10Z\",\n                \"labels\": {\n                    \"k8s-app\": \"kube-proxy\",\n                    \"tier\": \"node\"\n                },\n                \"annotations\": {\n                    \"kubernetes.io/config.hash\": \"aa3a51376ce0f88669bd30458cd4e672\",\n                    \"kubernetes.io/config.mirror\": \"aa3a51376ce0f88669bd30458cd4e672\",\n                    \"kubernetes.io/config.seen\": \"2021-09-08T02:54:59.423870151Z\",\n                    \"kubernetes.io/config.source\": \"file\",\n                    \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"v1\",\n                        \"kind\": \"Node\",\n                        \"name\": \"ip-172-20-36-200.eu-west-3.compute.internal\",\n                        \"uid\": \"5be6f3a5-dd21-4786-b13f-92d8bbfc369b\",\n                        \"controller\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"logfile\",\n                        \"hostPath\": {\n                            \"path\": \"/var/log/kube-proxy.log\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"kubeconfig\",\n                        \"hostPath\": {\n                            \"path\": \"/var/lib/kube-proxy/kubeconfig\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"modules\",\n                        \"hostPath\": {\n                            \"path\": \"/lib/modules\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"ssl-certs-hosts\",\n                        \"hostPath\": {\n                            \"path\": \"/usr/share/ca-certificates\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"iptableslock\",\n                        \"hostPath\": {\n                            \"path\": \"/run/xtables.lock\",\n                            \"type\": \"FileOrCreate\"\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"kube-proxy\",\n                        \"image\": \"k8s.gcr.io/kube-proxy-amd64:v1.21.4\",\n                        \"command\": [\n                            \"/usr/local/bin/kube-proxy\"\n                        ],\n                        \"args\": [\n                            \"--cluster-cidr=100.96.0.0/11\",\n                            \"--conntrack-max-per-core=131072\",\n                            \"--hostname-override=ip-172-20-36-200.eu-west-3.compute.internal\",\n                            \"--kubeconfig=/var/lib/kube-proxy/kubeconfig\",\n                            \"--master=https://api.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io\",\n                            \"--oom-score-adj=-998\",\n                            \"--v=2\",\n                            \"--logtostderr=false\",\n                            \"--alsologtostderr\",\n                            
\"--log-file=/var/log/kube-proxy.log\"\n                        ],\n                        \"resources\": {\n                            \"requests\": {\n                                \"cpu\": \"100m\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"logfile\",\n                                \"mountPath\": \"/var/log/kube-proxy.log\"\n                            },\n                            {\n                                \"name\": \"kubeconfig\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/lib/kube-proxy/kubeconfig\"\n                            },\n                            {\n                                \"name\": \"modules\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/lib/modules\"\n                            },\n                            {\n                                \"name\": \"ssl-certs-hosts\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/etc/ssl/certs\"\n                            },\n                            {\n                                \"name\": \"iptableslock\",\n                                \"mountPath\": \"/run/xtables.lock\"\n                            }\n                        ],\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\",\n                        \"securityContext\": {\n                            \"privileged\": true\n                        }\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"ClusterFirst\",\n                \"nodeName\": \"ip-172-20-36-200.eu-west-3.compute.internal\",\n                \"hostNetwork\": true,\n                \"securityContext\": {},\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"key\": \"CriticalAddonsOnly\",\n                        \"operator\": \"Exists\"\n                    },\n                    {\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    }\n                ],\n                \"priorityClassName\": \"system-node-critical\",\n                \"priority\": 2000001000,\n                \"enableServiceLinks\": true,\n                \"preemptionPolicy\": \"PreemptLowerPriority\"\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-09-08T02:54:59Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-09-08T02:55:01Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        
\"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-09-08T02:55:01Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-09-08T02:54:59Z\"\n                    }\n                ],\n                \"hostIP\": \"172.20.36.200\",\n                \"podIP\": \"172.20.36.200\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"172.20.36.200\"\n                    }\n                ],\n                \"startTime\": \"2021-09-08T02:54:59Z\",\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"kube-proxy\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2021-09-08T02:55:01Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"k8s.gcr.io/kube-proxy-amd64:v1.21.4\",\n                        \"imageID\": \"sha256:ef4bce0a7569b4fa83a559717c608c076a2c9d30361eb059ea4e1b7a55424d68\",\n                        \"containerID\": \"containerd://70cb5e2c6dedabe927b6ac25fd17e740d5f267f38de37585505b44485abb6de2\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Burstable\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-ip-172-20-49-112.eu-west-3.compute.internal\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"f2d95a87-de69-42de-acd5-8f1fb5fff7c7\",\n                \"resourceVersion\": \"875\",\n                \"creationTimestamp\": \"2021-09-08T02:56:43Z\",\n                \"labels\": {\n                    \"k8s-app\": \"kube-proxy\",\n                    \"tier\": \"node\"\n                },\n                \"annotations\": {\n                    \"kubernetes.io/config.hash\": \"8049007ca81c3ddcfc26fc24935e9bd5\",\n                    \"kubernetes.io/config.mirror\": \"8049007ca81c3ddcfc26fc24935e9bd5\",\n                    \"kubernetes.io/config.seen\": \"2021-09-08T02:55:09.805334599Z\",\n                    \"kubernetes.io/config.source\": \"file\",\n                    \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"v1\",\n                        \"kind\": \"Node\",\n                        \"name\": \"ip-172-20-49-112.eu-west-3.compute.internal\",\n                        \"uid\": \"db998c43-75b6-4398-a69d-614993c3677e\",\n                        \"controller\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"logfile\",\n                        \"hostPath\": {\n                            \"path\": \"/var/log/kube-proxy.log\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"kubeconfig\",\n                        \"hostPath\": {\n                          
  \"path\": \"/var/lib/kube-proxy/kubeconfig\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"modules\",\n                        \"hostPath\": {\n                            \"path\": \"/lib/modules\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"ssl-certs-hosts\",\n                        \"hostPath\": {\n                            \"path\": \"/usr/share/ca-certificates\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"iptableslock\",\n                        \"hostPath\": {\n                            \"path\": \"/run/xtables.lock\",\n                            \"type\": \"FileOrCreate\"\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"kube-proxy\",\n                        \"image\": \"k8s.gcr.io/kube-proxy-amd64:v1.21.4\",\n                        \"command\": [\n                            \"/usr/local/bin/kube-proxy\"\n                        ],\n                        \"args\": [\n                            \"--cluster-cidr=100.96.0.0/11\",\n                            \"--conntrack-max-per-core=131072\",\n                            \"--hostname-override=ip-172-20-49-112.eu-west-3.compute.internal\",\n                            \"--kubeconfig=/var/lib/kube-proxy/kubeconfig\",\n                            \"--master=https://api.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io\",\n                            \"--oom-score-adj=-998\",\n                            \"--v=2\",\n                            \"--logtostderr=false\",\n                            \"--alsologtostderr\",\n                            \"--log-file=/var/log/kube-proxy.log\"\n                        ],\n                        \"resources\": {\n                            \"requests\": {\n                                \"cpu\": \"100m\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"logfile\",\n                                \"mountPath\": \"/var/log/kube-proxy.log\"\n                            },\n                            {\n                                \"name\": \"kubeconfig\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/lib/kube-proxy/kubeconfig\"\n                            },\n                            {\n                                \"name\": \"modules\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/lib/modules\"\n                            },\n                            {\n                                \"name\": \"ssl-certs-hosts\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/etc/ssl/certs\"\n                            },\n                            {\n                                \"name\": \"iptableslock\",\n                                \"mountPath\": \"/run/xtables.lock\"\n                            }\n                        ],\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                   
     \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\",\n                        \"securityContext\": {\n                            \"privileged\": true\n                        }\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"ClusterFirst\",\n                \"nodeName\": \"ip-172-20-49-112.eu-west-3.compute.internal\",\n                \"hostNetwork\": true,\n                \"securityContext\": {},\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"key\": \"CriticalAddonsOnly\",\n                        \"operator\": \"Exists\"\n                    },\n                    {\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    }\n                ],\n                \"priorityClassName\": \"system-node-critical\",\n                \"priority\": 2000001000,\n                \"enableServiceLinks\": true,\n                \"preemptionPolicy\": \"PreemptLowerPriority\"\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-09-08T02:55:10Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-09-08T02:55:12Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-09-08T02:55:12Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-09-08T02:55:10Z\"\n                    }\n                ],\n                \"hostIP\": \"172.20.49.112\",\n                \"podIP\": \"172.20.49.112\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"172.20.49.112\"\n                    }\n                ],\n                \"startTime\": \"2021-09-08T02:55:10Z\",\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"kube-proxy\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2021-09-08T02:55:11Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"k8s.gcr.io/kube-proxy-amd64:v1.21.4\",\n                        \"imageID\": \"sha256:ef4bce0a7569b4fa83a559717c608c076a2c9d30361eb059ea4e1b7a55424d68\",\n                        \"containerID\": \"containerd://418fd75e91ea9c96a3dc25c9403fe261f4862b796f0e1cf0b89e9a40f3f6d93e\",\n                        
\"started\": true\n                    }\n                ],\n                \"qosClass\": \"Burstable\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-ip-172-20-51-126.eu-west-3.compute.internal\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"08aba115-5788-4c07-8efa-cbb8d2f770ff\",\n                \"resourceVersion\": \"846\",\n                \"creationTimestamp\": \"2021-09-08T02:56:30Z\",\n                \"labels\": {\n                    \"k8s-app\": \"kube-proxy\",\n                    \"tier\": \"node\"\n                },\n                \"annotations\": {\n                    \"kubernetes.io/config.hash\": \"545c7dd8f5042d8c547eda8748d9f873\",\n                    \"kubernetes.io/config.mirror\": \"545c7dd8f5042d8c547eda8748d9f873\",\n                    \"kubernetes.io/config.seen\": \"2021-09-08T02:55:09.755902843Z\",\n                    \"kubernetes.io/config.source\": \"file\",\n                    \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"v1\",\n                        \"kind\": \"Node\",\n                        \"name\": \"ip-172-20-51-126.eu-west-3.compute.internal\",\n                        \"uid\": \"2bdba3e8-3273-4a9f-85c8-e8d02c410652\",\n                        \"controller\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"logfile\",\n                        \"hostPath\": {\n                            \"path\": \"/var/log/kube-proxy.log\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"kubeconfig\",\n                        \"hostPath\": {\n                            \"path\": \"/var/lib/kube-proxy/kubeconfig\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"modules\",\n                        \"hostPath\": {\n                            \"path\": \"/lib/modules\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"ssl-certs-hosts\",\n                        \"hostPath\": {\n                            \"path\": \"/usr/share/ca-certificates\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"iptableslock\",\n                        \"hostPath\": {\n                            \"path\": \"/run/xtables.lock\",\n                            \"type\": \"FileOrCreate\"\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"kube-proxy\",\n                        \"image\": \"k8s.gcr.io/kube-proxy-amd64:v1.21.4\",\n                        \"command\": [\n                            \"/usr/local/bin/kube-proxy\"\n                        ],\n                        \"args\": [\n                            \"--cluster-cidr=100.96.0.0/11\",\n                            \"--conntrack-max-per-core=131072\",\n                            
\"--hostname-override=ip-172-20-51-126.eu-west-3.compute.internal\",\n                            \"--kubeconfig=/var/lib/kube-proxy/kubeconfig\",\n                            \"--master=https://api.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io\",\n                            \"--oom-score-adj=-998\",\n                            \"--v=2\",\n                            \"--logtostderr=false\",\n                            \"--alsologtostderr\",\n                            \"--log-file=/var/log/kube-proxy.log\"\n                        ],\n                        \"resources\": {\n                            \"requests\": {\n                                \"cpu\": \"100m\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"logfile\",\n                                \"mountPath\": \"/var/log/kube-proxy.log\"\n                            },\n                            {\n                                \"name\": \"kubeconfig\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/lib/kube-proxy/kubeconfig\"\n                            },\n                            {\n                                \"name\": \"modules\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/lib/modules\"\n                            },\n                            {\n                                \"name\": \"ssl-certs-hosts\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/etc/ssl/certs\"\n                            },\n                            {\n                                \"name\": \"iptableslock\",\n                                \"mountPath\": \"/run/xtables.lock\"\n                            }\n                        ],\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\",\n                        \"securityContext\": {\n                            \"privileged\": true\n                        }\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"ClusterFirst\",\n                \"nodeName\": \"ip-172-20-51-126.eu-west-3.compute.internal\",\n                \"hostNetwork\": true,\n                \"securityContext\": {},\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"key\": \"CriticalAddonsOnly\",\n                        \"operator\": \"Exists\"\n                    },\n                    {\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    }\n                ],\n                \"priorityClassName\": \"system-node-critical\",\n                \"priority\": 2000001000,\n                \"enableServiceLinks\": true,\n                \"preemptionPolicy\": \"PreemptLowerPriority\"\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        
\"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-09-08T02:55:10Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-09-08T02:55:12Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-09-08T02:55:12Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-09-08T02:55:10Z\"\n                    }\n                ],\n                \"hostIP\": \"172.20.51.126\",\n                \"podIP\": \"172.20.51.126\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"172.20.51.126\"\n                    }\n                ],\n                \"startTime\": \"2021-09-08T02:55:10Z\",\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"kube-proxy\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2021-09-08T02:55:11Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"k8s.gcr.io/kube-proxy-amd64:v1.21.4\",\n                        \"imageID\": \"sha256:ef4bce0a7569b4fa83a559717c608c076a2c9d30361eb059ea4e1b7a55424d68\",\n                        \"containerID\": \"containerd://487e85cd2b199368094a0df096c4a29a7c5017d86a8dd22d9fec6b37bb9db275\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Burstable\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-ip-172-20-56-43.eu-west-3.compute.internal\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"ddbaeb6a-7e46-4ebc-91d3-50c15df46938\",\n                \"resourceVersion\": \"344\",\n                \"creationTimestamp\": \"2021-09-08T02:54:31Z\",\n                \"labels\": {\n                    \"k8s-app\": \"kube-proxy\",\n                    \"tier\": \"node\"\n                },\n                \"annotations\": {\n                    \"kubernetes.io/config.hash\": \"aee5a60ea74b5fe4eaaf551b14d85ff2\",\n                    \"kubernetes.io/config.mirror\": \"aee5a60ea74b5fe4eaaf551b14d85ff2\",\n                    \"kubernetes.io/config.seen\": \"2021-09-08T02:53:03.322352217Z\",\n                    \"kubernetes.io/config.source\": \"file\",\n                    \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"v1\",\n                        \"kind\": \"Node\",\n                        \"name\": \"ip-172-20-56-43.eu-west-3.compute.internal\",\n                        \"uid\": \"ca6d0760-5ed3-4072-adb7-df925f431870\",\n                        \"controller\": true\n                    }\n                
]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"logfile\",\n                        \"hostPath\": {\n                            \"path\": \"/var/log/kube-proxy.log\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"kubeconfig\",\n                        \"hostPath\": {\n                            \"path\": \"/var/lib/kube-proxy/kubeconfig\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"modules\",\n                        \"hostPath\": {\n                            \"path\": \"/lib/modules\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"ssl-certs-hosts\",\n                        \"hostPath\": {\n                            \"path\": \"/usr/share/ca-certificates\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"iptableslock\",\n                        \"hostPath\": {\n                            \"path\": \"/run/xtables.lock\",\n                            \"type\": \"FileOrCreate\"\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"kube-proxy\",\n                        \"image\": \"k8s.gcr.io/kube-proxy-amd64:v1.21.4\",\n                        \"command\": [\n                            \"/usr/local/bin/kube-proxy\"\n                        ],\n                        \"args\": [\n                            \"--cluster-cidr=100.96.0.0/11\",\n                            \"--conntrack-max-per-core=131072\",\n                            \"--hostname-override=ip-172-20-56-43.eu-west-3.compute.internal\",\n                            \"--kubeconfig=/var/lib/kube-proxy/kubeconfig\",\n                            \"--master=https://127.0.0.1\",\n                            \"--oom-score-adj=-998\",\n                            \"--v=2\",\n                            \"--logtostderr=false\",\n                            \"--alsologtostderr\",\n                            \"--log-file=/var/log/kube-proxy.log\"\n                        ],\n                        \"resources\": {\n                            \"requests\": {\n                                \"cpu\": \"100m\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"logfile\",\n                                \"mountPath\": \"/var/log/kube-proxy.log\"\n                            },\n                            {\n                                \"name\": \"kubeconfig\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/lib/kube-proxy/kubeconfig\"\n                            },\n                            {\n                                \"name\": \"modules\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/lib/modules\"\n                            },\n                            {\n                                \"name\": \"ssl-certs-hosts\",\n                         
       \"readOnly\": true,\n                                \"mountPath\": \"/etc/ssl/certs\"\n                            },\n                            {\n                                \"name\": \"iptableslock\",\n                                \"mountPath\": \"/run/xtables.lock\"\n                            }\n                        ],\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\",\n                        \"securityContext\": {\n                            \"privileged\": true\n                        }\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"ClusterFirst\",\n                \"nodeName\": \"ip-172-20-56-43.eu-west-3.compute.internal\",\n                \"hostNetwork\": true,\n                \"securityContext\": {},\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"key\": \"CriticalAddonsOnly\",\n                        \"operator\": \"Exists\"\n                    },\n                    {\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    }\n                ],\n                \"priorityClassName\": \"system-node-critical\",\n                \"priority\": 2000001000,\n                \"enableServiceLinks\": true,\n                \"preemptionPolicy\": \"PreemptLowerPriority\"\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-09-08T02:53:04Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-09-08T02:53:27Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-09-08T02:53:27Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-09-08T02:53:04Z\"\n                    }\n                ],\n                \"hostIP\": \"172.20.56.43\",\n                \"podIP\": \"172.20.56.43\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"172.20.56.43\"\n                    }\n                ],\n                \"startTime\": \"2021-09-08T02:53:04Z\",\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"kube-proxy\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2021-09-08T02:53:27Z\"\n                            }\n                        },\n                        
\"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"k8s.gcr.io/kube-proxy-amd64:v1.21.4\",\n                        \"imageID\": \"sha256:ef4bce0a7569b4fa83a559717c608c076a2c9d30361eb059ea4e1b7a55424d68\",\n                        \"containerID\": \"containerd://a49b5bedd92efd4a13f5e2d3a5a7db7fac4d48fedf62dc34d86d41f606029fdc\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Burstable\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-scheduler-ip-172-20-56-43.eu-west-3.compute.internal\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"62d4ae07-f0ad-4fb0-81dc-1ef71db1e4a9\",\n                \"resourceVersion\": \"522\",\n                \"creationTimestamp\": \"2021-09-08T02:54:58Z\",\n                \"labels\": {\n                    \"k8s-app\": \"kube-scheduler\"\n                },\n                \"annotations\": {\n                    \"kubernetes.io/config.hash\": \"fa2b43aec7f242d6d5862430b4bee203\",\n                    \"kubernetes.io/config.mirror\": \"fa2b43aec7f242d6d5862430b4bee203\",\n                    \"kubernetes.io/config.seen\": \"2021-09-08T02:53:03.322354274Z\",\n                    \"kubernetes.io/config.source\": \"file\",\n                    \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"v1\",\n                        \"kind\": \"Node\",\n                        \"name\": \"ip-172-20-56-43.eu-west-3.compute.internal\",\n                        \"uid\": \"ca6d0760-5ed3-4072-adb7-df925f431870\",\n                        \"controller\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"varlibkubescheduler\",\n                        \"hostPath\": {\n                            \"path\": \"/var/lib/kube-scheduler\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"srvscheduler\",\n                        \"hostPath\": {\n                            \"path\": \"/srv/kubernetes/kube-scheduler\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"logfile\",\n                        \"hostPath\": {\n                            \"path\": \"/var/log/kube-scheduler.log\",\n                            \"type\": \"\"\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"kube-scheduler\",\n                        \"image\": \"k8s.gcr.io/kube-scheduler-amd64:v1.21.4\",\n                        \"command\": [\n                            \"/usr/local/bin/kube-scheduler\"\n                        ],\n                        \"args\": [\n                            \"--authentication-kubeconfig=/var/lib/kube-scheduler/kubeconfig\",\n                            \"--authorization-kubeconfig=/var/lib/kube-scheduler/kubeconfig\",\n                            \"--config=/var/lib/kube-scheduler/config.yaml\",\n                            \"--leader-elect=true\",\n       
                     \"--tls-cert-file=/srv/kubernetes/kube-scheduler/server.crt\",\n                            \"--tls-private-key-file=/srv/kubernetes/kube-scheduler/server.key\",\n                            \"--v=2\",\n                            \"--logtostderr=false\",\n                            \"--alsologtostderr\",\n                            \"--log-file=/var/log/kube-scheduler.log\"\n                        ],\n                        \"resources\": {\n                            \"requests\": {\n                                \"cpu\": \"100m\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"varlibkubescheduler\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/lib/kube-scheduler\"\n                            },\n                            {\n                                \"name\": \"srvscheduler\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/srv/kubernetes/kube-scheduler\"\n                            },\n                            {\n                                \"name\": \"logfile\",\n                                \"mountPath\": \"/var/log/kube-scheduler.log\"\n                            }\n                        ],\n                        \"livenessProbe\": {\n                            \"httpGet\": {\n                                \"path\": \"/healthz\",\n                                \"port\": 10251,\n                                \"host\": \"127.0.0.1\",\n                                \"scheme\": \"HTTP\"\n                            },\n                            \"initialDelaySeconds\": 15,\n                            \"timeoutSeconds\": 15,\n                            \"periodSeconds\": 10,\n                            \"successThreshold\": 1,\n                            \"failureThreshold\": 3\n                        },\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\"\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"ClusterFirst\",\n                \"nodeName\": \"ip-172-20-56-43.eu-west-3.compute.internal\",\n                \"hostNetwork\": true,\n                \"securityContext\": {},\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"key\": \"CriticalAddonsOnly\",\n                        \"operator\": \"Exists\"\n                    },\n                    {\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    }\n                ],\n                \"priorityClassName\": \"system-cluster-critical\",\n                \"priority\": 2000000000,\n                \"enableServiceLinks\": true,\n                \"preemptionPolicy\": \"PreemptLowerPriority\"\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n 
                       \"lastTransitionTime\": \"2021-09-08T02:53:04Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-09-08T02:53:27Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-09-08T02:53:27Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-09-08T02:53:04Z\"\n                    }\n                ],\n                \"hostIP\": \"172.20.56.43\",\n                \"podIP\": \"172.20.56.43\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"172.20.56.43\"\n                    }\n                ],\n                \"startTime\": \"2021-09-08T02:53:04Z\",\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"kube-scheduler\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2021-09-08T02:53:26Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"k8s.gcr.io/kube-scheduler-amd64:v1.21.4\",\n                        \"imageID\": \"sha256:993d3ec13feb2e7b7e9bd6ac4831fb0cdae7329a8e8f1e285d9f2790004b2fe3\",\n                        \"containerID\": \"containerd://76eaad3055de69eb2298a85d9b9729a87a613c48dd6e75aeea4448c4dda4b273\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Burstable\"\n            }\n        }\n    ]\n}\n==== START logs for container coredns of pod kube-system/coredns-5dc785954d-vw7tg ====\n.:53\n[INFO] plugin/reload: Running configuration MD5 = 9e3e34ac93d9bb69126337d32f1195e3\nCoreDNS-1.8.4\nlinux/amd64, go1.16.4, 053c4d5\n==== END logs for container coredns of pod kube-system/coredns-5dc785954d-vw7tg ====\n==== START logs for container coredns of pod kube-system/coredns-5dc785954d-zmnzn ====\n[INFO] plugin/ready: Still waiting on: \"kubernetes\"\n.:53\n[INFO] plugin/reload: Running configuration MD5 = 9e3e34ac93d9bb69126337d32f1195e3\nCoreDNS-1.8.4\nlinux/amd64, go1.16.4, 053c4d5\n==== END logs for container coredns of pod kube-system/coredns-5dc785954d-zmnzn ====\n==== START logs for container autoscaler of pod kube-system/coredns-autoscaler-84d4cfd89c-p9hhk ====\nI0908 02:55:41.384184       1 autoscaler.go:49] Scaling Namespace: kube-system, Target: deployment/coredns\nI0908 02:55:41.638361       1 autoscaler_server.go:157] ConfigMap not found: configmaps \"coredns-autoscaler\" not found, will create one with default params\nI0908 02:55:41.641697       1 k8sclient.go:147] Created ConfigMap coredns-autoscaler in namespace kube-system\nI0908 02:55:41.641717       1 plugin.go:50] Set control mode to linear\nI0908 02:55:41.641727       1 linear_controller.go:60] ConfigMap version change (old:  new: 645) - rebuilding params\nI0908 02:55:41.641733       1 
linear_controller.go:61] Params from apiserver: \n{\"coresPerReplica\":256,\"nodesPerReplica\":16,\"preventSinglePointFailure\":true}\nI0908 02:55:41.642220       1 linear_controller.go:80] Defaulting min replicas count to 1 for linear controller\nI0908 02:55:41.644591       1 k8sclient.go:272] Cluster status: SchedulableNodes[4], SchedulableCores[8]\nI0908 02:55:41.644605       1 k8sclient.go:273] Replicas are not as expected : updating replicas from 1 to 2\n==== END logs for container autoscaler of pod kube-system/coredns-autoscaler-84d4cfd89c-p9hhk ====\n==== START logs for container dns-controller of pod kube-system/dns-controller-59b7d7865d-jp8tb ====\ndns-controller version 0.1\nI0908 02:54:37.824847       1 main.go:199] initializing the watch controllers, namespace: \"\"\nI0908 02:54:37.824905       1 main.go:223] Ingress controller disabled\nI0908 02:54:37.824935       1 dnscontroller.go:108] starting DNS controller\nI0908 02:54:37.825095       1 pod.go:60] starting pod controller\nI0908 02:54:37.825486       1 dnscontroller.go:170] scope not yet ready: service\nI0908 02:54:37.825501       1 service.go:60] starting service controller\nI0908 02:54:37.825629       1 node.go:60] starting node controller\nI0908 02:54:37.856139       1 dnscontroller.go:625] Update desired state: node/ip-172-20-56-43.eu-west-3.compute.internal: [{A node/ip-172-20-56-43.eu-west-3.compute.internal/internal 172.20.56.43 true} {A node/ip-172-20-56-43.eu-west-3.compute.internal/external 15.237.113.83 true} {A node/role=master/internal 172.20.56.43 true} {A node/role=master/external 15.237.113.83 true} {A node/role=master/ ip-172-20-56-43.eu-west-3.compute.internal true} {A node/role=master/ ip-172-20-56-43.eu-west-3.compute.internal true} {A node/role=master/ ec2-15-237-113-83.eu-west-3.compute.amazonaws.com true}]\nI0908 02:54:37.863117       1 dnscontroller.go:625] Update desired state: pod/kube-system/kops-controller-44znn: [{A kops-controller.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io. 172.20.56.43 false}]\nI0908 02:54:42.825729       1 dnscache.go:74] querying all DNS zones (no cached results)\nI0908 02:54:43.288183       1 dnscontroller.go:274] Using default TTL of 1m0s\nI0908 02:54:43.288213       1 dnscontroller.go:482] Querying all dnsprovider records for zone \"test-cncf-aws.k8s.io.\"\nI0908 02:54:45.239909       1 dnscontroller.go:585] Adding DNS changes to batch {A kops-controller.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io.} [172.20.56.43]\nI0908 02:54:45.239954       1 dnscontroller.go:323] Applying DNS changeset for zone test-cncf-aws.k8s.io.::ZEMLNXIIWQ0RV\nI0908 02:55:07.472328       1 dnscontroller.go:625] Update desired state: pod/kube-system/kube-apiserver-ip-172-20-56-43.eu-west-3.compute.internal: [{_alias api.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io. 
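The coredns-autoscaler log above records the linear-mode parameters it pulled from the apiserver and the resulting scale-up from 1 to 2 replicas. As an illustration only (not the controller's actual code), here is a minimal Python sketch of cluster-proportional-autoscaler's documented linear rule, using the values from the log:

import math

# Sketch of the "linear" scaling rule, with the parameters logged above.
def linear_replicas(cores, nodes, cores_per_replica=256, nodes_per_replica=16,
                    prevent_single_point_failure=True, min_replicas=1):
    replicas = max(math.ceil(cores / cores_per_replica),
                   math.ceil(nodes / nodes_per_replica))
    if prevent_single_point_failure and nodes > 1:
        replicas = max(2, replicas)  # never a single DNS replica on a multi-node cluster
    return max(min_replicas, replicas)

# "Cluster status: SchedulableNodes[4], SchedulableCores[8]" gives:
print(linear_replicas(cores=8, nodes=4))  # -> 2, matching "updating replicas from 1 to 2"

With 8 cores and 4 nodes both ratios round up to 1, so it is the preventSinglePointFailure floor that produces the second replica.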
node/ip-172-20-56-43.eu-west-3.compute.internal/external false}]\nI0908 02:55:10.523660       1 dnscontroller.go:274] Using default TTL of 1m0s\nI0908 02:55:10.523687       1 dnscontroller.go:482] Querying all dnsprovider records for zone \"test-cncf-aws.k8s.io.\"\nI0908 02:55:12.790924       1 dnscontroller.go:585] Adding DNS changes to batch {A api.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io.} [15.237.113.83]\nI0908 02:55:12.790956       1 dnscontroller.go:323] Applying DNS changeset for zone test-cncf-aws.k8s.io.::ZEMLNXIIWQ0RV\nI0908 02:55:14.479645       1 dnscontroller.go:625] Update desired state: pod/kube-system/kube-apiserver-ip-172-20-56-43.eu-west-3.compute.internal: [{_alias api.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io. node/ip-172-20-56-43.eu-west-3.compute.internal/external false} {A api.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io. 172.20.56.43 false}]\nI0908 02:55:18.037841       1 dnscontroller.go:274] Using default TTL of 1m0s\nI0908 02:55:18.037880       1 dnscontroller.go:482] Querying all dnsprovider records for zone \"test-cncf-aws.k8s.io.\"\nI0908 02:55:19.890884       1 dnscontroller.go:585] Adding DNS changes to batch {A api.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io.} [172.20.56.43]\nI0908 02:55:19.890915       1 dnscontroller.go:323] Applying DNS changeset for zone test-cncf-aws.k8s.io.::ZEMLNXIIWQ0RV\nI0908 02:55:30.786711       1 dnscontroller.go:625] Update desired state: node/ip-172-20-36-200.eu-west-3.compute.internal: [{A node/ip-172-20-36-200.eu-west-3.compute.internal/internal 172.20.36.200 true} {A node/ip-172-20-36-200.eu-west-3.compute.internal/external 15.237.46.246 true} {A node/role=node/internal 172.20.36.200 true} {A node/role=node/external 15.237.46.246 true} {A node/role=node/ ip-172-20-36-200.eu-west-3.compute.internal true} {A node/role=node/ ip-172-20-36-200.eu-west-3.compute.internal true} {A node/role=node/ ec2-15-237-46-246.eu-west-3.compute.amazonaws.com true}]\nI0908 02:55:40.111290       1 dnscontroller.go:625] Update desired state: node/ip-172-20-51-126.eu-west-3.compute.internal: [{A node/ip-172-20-51-126.eu-west-3.compute.internal/internal 172.20.51.126 true} {A node/ip-172-20-51-126.eu-west-3.compute.internal/external 35.181.160.177 true} {A node/role=node/internal 172.20.51.126 true} {A node/role=node/external 35.181.160.177 true} {A node/role=node/ ip-172-20-51-126.eu-west-3.compute.internal true} {A node/role=node/ ip-172-20-51-126.eu-west-3.compute.internal true} {A node/role=node/ ec2-35-181-160-177.eu-west-3.compute.amazonaws.com true}]\nI0908 02:55:40.174739       1 dnscontroller.go:625] Update desired state: node/ip-172-20-49-112.eu-west-3.compute.internal: [{A node/ip-172-20-49-112.eu-west-3.compute.internal/internal 172.20.49.112 true} {A node/ip-172-20-49-112.eu-west-3.compute.internal/external 13.36.234.230 true} {A node/role=node/internal 172.20.49.112 true} {A node/role=node/external 13.36.234.230 true} {A node/role=node/ ip-172-20-49-112.eu-west-3.compute.internal true} {A node/role=node/ ip-172-20-49-112.eu-west-3.compute.internal true} {A node/role=node/ ec2-13-36-234-230.eu-west-3.compute.amazonaws.com true}]\nI0908 02:55:45.310867       1 dnscontroller.go:625] Update desired state: node/ip-172-20-36-148.eu-west-3.compute.internal: [{A node/ip-172-20-36-148.eu-west-3.compute.internal/internal 172.20.36.148 true} {A node/ip-172-20-36-148.eu-west-3.compute.internal/external 15.188.144.216 true} {A node/role=node/internal 172.20.36.148 true} {A node/role=node/external 15.188.144.216 true} {A 
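The dns-controller desired-state entries above pair a record type with either a literal IP or a node-scoped reference; the {_alias ...} entries point at another record (here a node's external address) and are resolved before the changeset is applied with the default 1m0s TTL. A toy Python model of that resolution, with field meanings inferred from the log output rather than taken from dns-controller's real types:

# Toy model of the desired-state entries printed by dnscontroller.go above.
records = {
    "node/ip-172-20-56-43.eu-west-3.compute.internal/external": "15.237.113.83",
}
desired = [
    ("_alias", "api.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io.",
     "node/ip-172-20-56-43.eu-west-3.compute.internal/external"),
]

changeset = []
for rtype, name, value in desired:
    ip = records[value] if rtype == "_alias" else value   # aliases resolve to a node IP
    changeset.append(("A", name, ip, 60))                 # default TTL of 1m0s

print(changeset)  # one A record pointing the api name at the node's external IP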
node/role=node/ ip-172-20-36-148.eu-west-3.compute.internal true} {A node/role=node/ ip-172-20-36-148.eu-west-3.compute.internal true} {A node/role=node/ ec2-15-188-144-216.eu-west-3.compute.amazonaws.com true}]\n==== END logs for container dns-controller of pod kube-system/dns-controller-59b7d7865d-jp8tb ====\n==== START logs for container etcd-manager of pod kube-system/etcd-manager-events-ip-172-20-56-43.eu-west-3.compute.internal ====\netcd-manager\nI0908 02:53:39.253371    9271 volumes.go:86] AWS API Request: ec2metadata/GetToken\nI0908 02:53:39.256002    9271 volumes.go:86] AWS API Request: ec2metadata/GetDynamicData\nI0908 02:53:39.257035    9271 volumes.go:86] AWS API Request: ec2metadata/GetMetadata\nI0908 02:53:39.257553    9271 volumes.go:86] AWS API Request: ec2metadata/GetMetadata\nI0908 02:53:39.258132    9271 volumes.go:86] AWS API Request: ec2metadata/GetMetadata\nI0908 02:53:39.258642    9271 main.go:305] Mounting available etcd volumes matching tags [k8s.io/etcd/events k8s.io/role/master=1 kubernetes.io/cluster/e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io=owned]; nameTag=k8s.io/etcd/events\nI0908 02:53:39.261041    9271 volumes.go:86] AWS API Request: ec2/DescribeVolumes\nI0908 02:53:39.405714    9271 mounter.go:304] Trying to mount master volume: \"vol-0c5c56e6fef933093\"\nI0908 02:53:39.405736    9271 volumes.go:331] Trying to attach volume \"vol-0c5c56e6fef933093\" at \"/dev/xvdu\"\nI0908 02:53:39.405890    9271 volumes.go:86] AWS API Request: ec2/AttachVolume\nI0908 02:53:39.738686    9271 volumes.go:349] AttachVolume request returned {\n  AttachTime: 2021-09-08 02:53:39.733 +0000 UTC,\n  Device: \"/dev/xvdu\",\n  InstanceId: \"i-0b4df721753a52316\",\n  State: \"attaching\",\n  VolumeId: \"vol-0c5c56e6fef933093\"\n}\nI0908 02:53:39.738853    9271 volumes.go:86] AWS API Request: ec2/DescribeVolumes\nI0908 02:53:39.825369    9271 mounter.go:318] Currently attached volumes: [0xc000714380]\nI0908 02:53:39.825390    9271 mounter.go:72] Master volume \"vol-0c5c56e6fef933093\" is attached at \"/dev/xvdu\"\nI0908 02:53:39.825612    9271 mounter.go:86] Doing safe-format-and-mount of /dev/xvdu to /mnt/master-vol-0c5c56e6fef933093\nI0908 02:53:39.825649    9271 volumes.go:234] volume vol-0c5c56e6fef933093 not mounted at /rootfs/dev/xvdu\nI0908 02:53:39.825666    9271 volumes.go:263] nvme path not found \"/rootfs/dev/disk/by-id/nvme-Amazon_Elastic_Block_Store_vol0c5c56e6fef933093\"\nI0908 02:53:39.825676    9271 volumes.go:251] volume vol-0c5c56e6fef933093 not mounted at nvme-Amazon_Elastic_Block_Store_vol0c5c56e6fef933093\nI0908 02:53:39.825686    9271 mounter.go:121] Waiting for volume \"vol-0c5c56e6fef933093\" to be mounted\nI0908 02:53:40.825938    9271 volumes.go:234] volume vol-0c5c56e6fef933093 not mounted at /rootfs/dev/xvdu\nI0908 02:53:40.825999    9271 volumes.go:248] found nvme volume \"nvme-Amazon_Elastic_Block_Store_vol0c5c56e6fef933093\" at \"/dev/nvme1n1\"\nI0908 02:53:40.826011    9271 mounter.go:125] Found volume \"vol-0c5c56e6fef933093\" mounted at device \"/dev/nvme1n1\"\nI0908 02:53:40.826995    9271 mounter.go:171] Creating mount directory \"/rootfs/mnt/master-vol-0c5c56e6fef933093\"\nI0908 02:53:40.827090    9271 mounter.go:176] Mounting device \"/dev/nvme1n1\" on \"/mnt/master-vol-0c5c56e6fef933093\"\nI0908 02:53:40.827114    9271 mount_linux.go:446] Attempting to determine if disk \"/dev/nvme1n1\" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/nvme1n1])\nI0908 02:53:40.827134    9271 nsenter.go:132] Running nsenter command: nsenter 
[--mount=/rootfs/proc/1/ns/mnt -- blkid -p -s TYPE -s PTTYPE -o export /dev/nvme1n1]\nI0908 02:53:40.850997    9271 mount_linux.go:449] Output: \"\"\nI0908 02:53:40.851020    9271 mount_linux.go:408] Disk \"/dev/nvme1n1\" appears to be unformatted, attempting to format as type: \"ext4\" with options: [-F -m0 /dev/nvme1n1]\nI0908 02:53:40.851031    9271 nsenter.go:132] Running nsenter command: nsenter [--mount=/rootfs/proc/1/ns/mnt -- mkfs.ext4 -F -m0 /dev/nvme1n1]\nI0908 02:53:41.295927    9271 mount_linux.go:418] Disk successfully formatted (mkfs): ext4 - /dev/nvme1n1 /mnt/master-vol-0c5c56e6fef933093\nI0908 02:53:41.295945    9271 mount_linux.go:436] Attempting to mount disk /dev/nvme1n1 in ext4 format at /mnt/master-vol-0c5c56e6fef933093\nI0908 02:53:41.295961    9271 nsenter.go:80] nsenter mount /dev/nvme1n1 /mnt/master-vol-0c5c56e6fef933093 ext4 [defaults]\nI0908 02:53:41.295990    9271 nsenter.go:132] Running nsenter command: nsenter [--mount=/rootfs/proc/1/ns/mnt -- /bin/systemd-run --description=Kubernetes transient mount for /mnt/master-vol-0c5c56e6fef933093 --scope -- /bin/mount -t ext4 -o defaults /dev/nvme1n1 /mnt/master-vol-0c5c56e6fef933093]\nI0908 02:53:41.394142    9271 nsenter.go:84] Output of mounting /dev/nvme1n1 to /mnt/master-vol-0c5c56e6fef933093: Running scope as unit run-9332.scope.\nI0908 02:53:41.394166    9271 mount_linux.go:446] Attempting to determine if disk \"/dev/nvme1n1\" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/nvme1n1])\nI0908 02:53:41.394188    9271 nsenter.go:132] Running nsenter command: nsenter [--mount=/rootfs/proc/1/ns/mnt -- blkid -p -s TYPE -s PTTYPE -o export /dev/nvme1n1]\nI0908 02:53:41.416906    9271 mount_linux.go:449] Output: \"DEVNAME=/dev/nvme1n1\\nTYPE=ext4\\n\"\nI0908 02:53:41.416931    9271 resizefs_linux.go:55] ResizeFS.Resize - Expanding mounted volume /dev/nvme1n1\nI0908 02:53:41.416943    9271 nsenter.go:132] Running nsenter command: nsenter [--mount=/rootfs/proc/1/ns/mnt -- resize2fs /dev/nvme1n1]\nI0908 02:53:41.421860    9271 resizefs_linux.go:70] Device /dev/nvme1n1 resized successfully\nI0908 02:53:41.442976    9271 mount_linux.go:202] Cannot run systemd-run, assuming non-systemd OS\nI0908 02:53:41.442988    9271 mount_linux.go:203] systemd-run output: Failed to request invocation ID for scope: Unknown property or interface.\n, failed with: exit status 1\nI0908 02:53:41.445768    9271 mounter.go:224] mounting inside container: /rootfs/dev/nvme1n1 -> /rootfs/mnt/master-vol-0c5c56e6fef933093\nI0908 02:53:41.445791    9271 mount_linux.go:175] Mounting cmd (mount) with arguments ( /rootfs/dev/nvme1n1 /rootfs/mnt/master-vol-0c5c56e6fef933093)\nI0908 02:53:41.451842    9271 mounter.go:94] mounted master volume \"vol-0c5c56e6fef933093\" on /mnt/master-vol-0c5c56e6fef933093\nI0908 02:53:41.451861    9271 main.go:320] discovered IP address: 172.20.56.43\nI0908 02:53:41.451866    9271 main.go:325] Setting data dir to /rootfs/mnt/master-vol-0c5c56e6fef933093\nI0908 02:53:41.820055    9271 certs.go:211] generating certificate for \"etcd-manager-server-etcd-events-a\"\nI0908 02:53:42.437160    9271 certs.go:211] generating certificate for \"etcd-manager-client-etcd-events-a\"\nI0908 02:53:42.441217    9271 server.go:87] starting GRPC server using TLS, ServerName=\"etcd-manager-server-etcd-events-a\"\nI0908 02:53:42.442047    9271 main.go:473] peerClientIPs: [172.20.56.43]\nI0908 02:53:42.625408    9271 certs.go:211] generating certificate for \"etcd-manager-etcd-events-a\"\nI0908 02:53:42.628554    9271 
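The etcd-manager lines above walk through a safe-format-and-mount of the freshly attached EBS volume: probe the device with blkid, format with mkfs.ext4 only when the probe comes back empty, mount it, then grow the filesystem with resize2fs. A rough Python rendering of that sequence, with error handling and the nsenter/rootfs indirection omitted; device and mountpoint are the ones from the log:

import subprocess

def safe_format_and_mount(device, mountpoint):
    # blkid exits non-zero with empty output on an unformatted disk, so no check=True here
    probe = subprocess.run(
        ["blkid", "-p", "-s", "TYPE", "-s", "PTTYPE", "-o", "export", device],
        capture_output=True, text=True)
    if not probe.stdout.strip():  # matches the log's Output: "" => unformatted
        subprocess.run(["mkfs.ext4", "-F", "-m0", device], check=True)
    subprocess.run(["mount", "-t", "ext4", "-o", "defaults", device, mountpoint], check=True)
    subprocess.run(["resize2fs", device], check=True)  # expand the fs to the full volume

safe_format_and_mount("/dev/nvme1n1", "/mnt/master-vol-0c5c56e6fef933093")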
server.go:105] GRPC server listening on \"172.20.56.43:3997\"\nI0908 02:53:42.628804    9271 volumes.go:86] AWS API Request: ec2/DescribeVolumes\nI0908 02:53:42.727085    9271 volumes.go:86] AWS API Request: ec2/DescribeInstances\nI0908 02:53:42.766507    9271 peers.go:115] found new candidate peer from discovery: etcd-events-a [{172.20.56.43 0} {172.20.56.43 0}]\nI0908 02:53:42.766544    9271 hosts.go:84] hosts update: primary=map[], fallbacks=map[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:[172.20.56.43 172.20.56.43]], final=map[172.20.56.43:[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]]\nI0908 02:53:42.766697    9271 peers.go:295] connecting to peer \"etcd-events-a\" with TLS policy, servername=\"etcd-manager-server-etcd-events-a\"\nI0908 02:53:44.628958    9271 controller.go:187] starting controller iteration\nI0908 02:53:44.629301    9271 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.56.43:3997\" > leadership_token:\"lY0NAfXnbdfCHzT-7Dz8mw\" healthy:<id:\"etcd-events-a\" endpoints:\"172.20.56.43:3997\" > > \nI0908 02:53:44.629495    9271 commands.go:41] refreshing commands\nI0908 02:53:44.629579    9271 s3context.go:334] product_uuid is \"ec2eb5d9-89b3-c8a2-5fce-864d2d240e9e\", assuming running on EC2\nI0908 02:53:44.630811    9271 s3context.go:166] got region from metadata: \"eu-west-3\"\nI0908 02:53:44.656957    9271 s3context.go:213] found bucket in region \"us-west-1\"\nI0908 02:53:45.250550    9271 vfs.go:120] listed commands in s3://k8s-kops-prow/e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io/backups/etcd/events/control: 0 commands\nI0908 02:53:45.250575    9271 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-spec\"\nI0908 02:53:55.417796    9271 controller.go:187] starting controller iteration\nI0908 02:53:55.417838    9271 controller.go:264] Broadcasting leadership assertion with token \"lY0NAfXnbdfCHzT-7Dz8mw\"\nI0908 02:53:55.418095    9271 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.56.43:3997\" > leadership_token:\"lY0NAfXnbdfCHzT-7Dz8mw\" healthy:<id:\"etcd-events-a\" endpoints:\"172.20.56.43:3997\" > > \nI0908 02:53:55.418246    9271 controller.go:293] I am leader with token \"lY0NAfXnbdfCHzT-7Dz8mw\"\nI0908 02:53:55.418545    9271 controller.go:300] etcd cluster state: etcdClusterState\n  members:\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"172.20.56.43:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:3995\" > }\nI0908 02:53:55.418608    9271 controller.go:301] etcd cluster members: map[]\nI0908 02:53:55.418625    9271 controller.go:639] sending member map to all peers: \nI0908 02:53:55.418855    9271 commands.go:38] not refreshing commands - TTL not hit\nI0908 02:53:55.418865    9271 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created\"\nI0908 02:53:56.007732    9271 controller.go:357] detected that there is no existing cluster\nI0908 02:53:56.007745    
9271 commands.go:41] refreshing commands\nI0908 02:53:56.234103    9271 vfs.go:120] listed commands in s3://k8s-kops-prow/e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io/backups/etcd/events/control: 0 commands\nI0908 02:53:56.234121    9271 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-spec\"\nI0908 02:53:56.388353    9271 controller.go:639] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io\" addresses:\"172.20.56.43\" > \nI0908 02:53:56.388553    9271 etcdserver.go:248] updating hosts: map[172.20.56.43:[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]]\nI0908 02:53:56.388567    9271 hosts.go:84] hosts update: primary=map[172.20.56.43:[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:[172.20.56.43 172.20.56.43]], final=map[172.20.56.43:[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]]\nI0908 02:53:56.388614    9271 hosts.go:181] skipping update of unchanged /etc/hosts\nI0908 02:53:56.388696    9271 newcluster.go:136] starting new etcd cluster with [etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"172.20.56.43:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:3995\" > }]\nI0908 02:53:56.388995    9271 newcluster.go:153] JoinClusterResponse: \nI0908 02:53:56.389585    9271 etcdserver.go:556] starting etcd with state new_cluster:true cluster:<cluster_token:\"zTgSsun4YIoN9xKnIdeJKA\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > etcd_version:\"3.4.13\" quarantined:true \nI0908 02:53:56.389627    9271 etcdserver.go:565] starting etcd with datadir /rootfs/mnt/master-vol-0c5c56e6fef933093/data/zTgSsun4YIoN9xKnIdeJKA\nI0908 02:53:56.392176    9271 pki.go:58] adding peerClientIPs [172.20.56.43]\nI0908 02:53:56.392197    9271 pki.go:66] generating peer keypair for etcd: {CommonName:etcd-events-a Organization:[] AltNames:{DNSNames:[etcd-events-a etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io] IPs:[172.20.56.43 127.0.0.1]} Usages:[2 1]}\nI0908 02:53:56.796711    9271 certs.go:211] generating certificate for \"etcd-events-a\"\nI0908 02:53:56.801069    9271 pki.go:108] building client-serving certificate: {CommonName:etcd-events-a Organization:[] AltNames:{DNSNames:[etcd-events-a etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io] IPs:[127.0.0.1]} Usages:[1 2]}\nI0908 02:53:57.142488    9271 certs.go:211] generating certificate for \"etcd-events-a\"\nI0908 02:53:57.370153    9271 certs.go:211] generating certificate for \"etcd-events-a\"\nI0908 02:53:57.372029    9271 etcdprocess.go:203] executing command /opt/etcd-v3.4.13-linux-amd64/etcd [/opt/etcd-v3.4.13-linux-amd64/etcd]\nI0908 
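The hosts.go "hosts update" lines above show etcd-manager merging two address maps into /etc/hosts entries. A toy version, with the semantics inferred from the primary=/fallbacks=/final= fields (not etcd-manager's actual code): addresses etcd itself reported ("primary") win, and discovery results ("fallbacks") only fill in hostnames whose IP has no primary entry.

def merge_hosts(primary, fallbacks):
    # primary: ip -> [hostnames]; fallbacks: hostname -> [ips]
    final = {ip: list(names) for ip, names in primary.items()}
    for name, ips in fallbacks.items():
        for ip in ips:
            if ip not in primary:
                final.setdefault(ip, []).append(name)
    return final

name = "etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io"
print(merge_hosts({}, {name: ["172.20.56.43", "172.20.56.43"]}))        # first update above
print(merge_hosts({"172.20.56.43": [name]}, {name: ["172.20.56.43"]}))  # later: primary wins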
02:53:57.372929    9271 newcluster.go:171] JoinClusterResponse: \nI0908 02:53:57.373001    9271 s3fs.go:199] Writing file \"s3://k8s-kops-prow/e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-spec\"\nI0908 02:53:57.373019    9271 s3context.go:241] Checking default bucket encryption for \"k8s-kops-prow\"\n2021-09-08 02:53:57.380245 I | pkg/flags: recognized and used environment variable ETCD_ADVERTISE_CLIENT_URLS=https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:3995\n2021-09-08 02:53:57.380281 I | pkg/flags: recognized and used environment variable ETCD_CERT_FILE=/rootfs/mnt/master-vol-0c5c56e6fef933093/pki/zTgSsun4YIoN9xKnIdeJKA/clients/server.crt\n2021-09-08 02:53:57.380288 I | pkg/flags: recognized and used environment variable ETCD_CLIENT_CERT_AUTH=true\n2021-09-08 02:53:57.380299 I | pkg/flags: recognized and used environment variable ETCD_DATA_DIR=/rootfs/mnt/master-vol-0c5c56e6fef933093/data/zTgSsun4YIoN9xKnIdeJKA\n2021-09-08 02:53:57.380312 I | pkg/flags: recognized and used environment variable ETCD_ENABLE_V2=false\n2021-09-08 02:53:57.380335 I | pkg/flags: recognized and used environment variable ETCD_INITIAL_ADVERTISE_PEER_URLS=https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381\n2021-09-08 02:53:57.380343 I | pkg/flags: recognized and used environment variable ETCD_INITIAL_CLUSTER=etcd-events-a=https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381\n2021-09-08 02:53:57.380348 I | pkg/flags: recognized and used environment variable ETCD_INITIAL_CLUSTER_STATE=new\n2021-09-08 02:53:57.380355 I | pkg/flags: recognized and used environment variable ETCD_INITIAL_CLUSTER_TOKEN=zTgSsun4YIoN9xKnIdeJKA\n2021-09-08 02:53:57.380362 I | pkg/flags: recognized and used environment variable ETCD_KEY_FILE=/rootfs/mnt/master-vol-0c5c56e6fef933093/pki/zTgSsun4YIoN9xKnIdeJKA/clients/server.key\n2021-09-08 02:53:57.380370 I | pkg/flags: recognized and used environment variable ETCD_LISTEN_CLIENT_URLS=https://0.0.0.0:3995\n2021-09-08 02:53:57.380382 I | pkg/flags: recognized and used environment variable ETCD_LISTEN_PEER_URLS=https://0.0.0.0:2381\n2021-09-08 02:53:57.380390 I | pkg/flags: recognized and used environment variable ETCD_LOG_OUTPUTS=stdout\n2021-09-08 02:53:57.380400 I | pkg/flags: recognized and used environment variable ETCD_LOGGER=zap\n2021-09-08 02:53:57.380413 I | pkg/flags: recognized and used environment variable ETCD_NAME=etcd-events-a\n2021-09-08 02:53:57.380421 I | pkg/flags: recognized and used environment variable ETCD_PEER_CERT_FILE=/rootfs/mnt/master-vol-0c5c56e6fef933093/pki/zTgSsun4YIoN9xKnIdeJKA/peers/me.crt\n2021-09-08 02:53:57.380428 I | pkg/flags: recognized and used environment variable ETCD_PEER_CLIENT_CERT_AUTH=true\n2021-09-08 02:53:57.380435 I | pkg/flags: recognized and used environment variable ETCD_PEER_KEY_FILE=/rootfs/mnt/master-vol-0c5c56e6fef933093/pki/zTgSsun4YIoN9xKnIdeJKA/peers/me.key\n2021-09-08 02:53:57.380442 I | pkg/flags: recognized and used environment variable ETCD_PEER_TRUSTED_CA_FILE=/rootfs/mnt/master-vol-0c5c56e6fef933093/pki/zTgSsun4YIoN9xKnIdeJKA/peers/ca.crt\n2021-09-08 02:53:57.380456 I | pkg/flags: recognized and used environment variable ETCD_TRUSTED_CA_FILE=/rootfs/mnt/master-vol-0c5c56e6fef933093/pki/zTgSsun4YIoN9xKnIdeJKA/clients/ca.crt\n2021-09-08 02:53:57.380467 W | pkg/flags: unrecognized environment variable 
ETCD_LISTEN_METRICS_URLS=\n{\"level\":\"info\",\"ts\":\"2021-09-08T02:53:57.380Z\",\"caller\":\"embed/etcd.go:117\",\"msg\":\"configuring peer listeners\",\"listen-peer-urls\":[\"https://0.0.0.0:2381\"]}\n{\"level\":\"info\",\"ts\":\"2021-09-08T02:53:57.380Z\",\"caller\":\"embed/etcd.go:468\",\"msg\":\"starting with peer TLS\",\"tls-info\":\"cert = /rootfs/mnt/master-vol-0c5c56e6fef933093/pki/zTgSsun4YIoN9xKnIdeJKA/peers/me.crt, key = /rootfs/mnt/master-vol-0c5c56e6fef933093/pki/zTgSsun4YIoN9xKnIdeJKA/peers/me.key, trusted-ca = /rootfs/mnt/master-vol-0c5c56e6fef933093/pki/zTgSsun4YIoN9xKnIdeJKA/peers/ca.crt, client-cert-auth = true, crl-file = \",\"cipher-suites\":[]}\n{\"level\":\"info\",\"ts\":\"2021-09-08T02:53:57.381Z\",\"caller\":\"embed/etcd.go:127\",\"msg\":\"configuring client listeners\",\"listen-client-urls\":[\"https://0.0.0.0:3995\"]}\n{\"level\":\"info\",\"ts\":\"2021-09-08T02:53:57.381Z\",\"caller\":\"embed/etcd.go:302\",\"msg\":\"starting an etcd server\",\"etcd-version\":\"3.4.13\",\"git-sha\":\"ae9734ed2\",\"go-version\":\"go1.12.17\",\"go-os\":\"linux\",\"go-arch\":\"amd64\",\"max-cpu-set\":2,\"max-cpu-available\":2,\"member-initialized\":false,\"name\":\"etcd-events-a\",\"data-dir\":\"/rootfs/mnt/master-vol-0c5c56e6fef933093/data/zTgSsun4YIoN9xKnIdeJKA\",\"wal-dir\":\"\",\"wal-dir-dedicated\":\"\",\"member-dir\":\"/rootfs/mnt/master-vol-0c5c56e6fef933093/data/zTgSsun4YIoN9xKnIdeJKA/member\",\"force-new-cluster\":false,\"heartbeat-interval\":\"100ms\",\"election-timeout\":\"1s\",\"initial-election-tick-advance\":true,\"snapshot-count\":100000,\"snapshot-catchup-entries\":5000,\"initial-advertise-peer-urls\":[\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381\"],\"listen-peer-urls\":[\"https://0.0.0.0:2381\"],\"advertise-client-urls\":[\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:3995\"],\"listen-client-urls\":[\"https://0.0.0.0:3995\"],\"listen-metrics-urls\":[],\"cors\":[\"*\"],\"host-whitelist\":[\"*\"],\"initial-cluster\":\"etcd-events-a=https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381\",\"initial-cluster-state\":\"new\",\"initial-cluster-token\":\"zTgSsun4YIoN9xKnIdeJKA\",\"quota-size-bytes\":2147483648,\"pre-vote\":false,\"initial-corrupt-check\":false,\"corrupt-check-time-interval\":\"0s\",\"auto-compaction-mode\":\"periodic\",\"auto-compaction-retention\":\"0s\",\"auto-compaction-interval\":\"0s\",\"discovery-url\":\"\",\"discovery-proxy\":\"\"}\n{\"level\":\"info\",\"ts\":\"2021-09-08T02:53:57.384Z\",\"caller\":\"etcdserver/backend.go:80\",\"msg\":\"opened backend db\",\"path\":\"/rootfs/mnt/master-vol-0c5c56e6fef933093/data/zTgSsun4YIoN9xKnIdeJKA/member/snap/db\",\"took\":\"2.509864ms\"}\n{\"level\":\"info\",\"ts\":\"2021-09-08T02:53:57.385Z\",\"caller\":\"netutil/netutil.go:112\",\"msg\":\"resolved URL Host\",\"url\":\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381\",\"host\":\"etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381\",\"resolved-addr\":\"172.20.56.43:2381\"}\n{\"level\":\"info\",\"ts\":\"2021-09-08T02:53:57.385Z\",\"caller\":\"netutil/netutil.go:112\",\"msg\":\"resolved URL Host\",\"url\":\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381\",\"host\":\"etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381\",\"resolved-addr\":\"172.20.56.43:2381\"}\n{\"level\":\"info\",\"ts\":\"2021-09-08T02:53:57.391Z\",\"caller\":\"etcdserver/raft.go:486\",\"msg\":\"starting local 
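The pkg/flags lines above show that etcd-manager configures the etcd process entirely through ETCD_* environment variables, which etcd maps one-to-one onto its command-line flags (strip the ETCD_ prefix, lowercase, underscores to dashes). A small sketch of that mapping, using a few of the values from this quarantined first start:

env = {
    "ETCD_NAME": "etcd-events-a",
    "ETCD_LISTEN_CLIENT_URLS": "https://0.0.0.0:3995",
    "ETCD_INITIAL_CLUSTER_STATE": "new",
    "ETCD_ENABLE_V2": "false",
}
for key, value in env.items():
    # ETCD_LISTEN_CLIENT_URLS -> --listen-client-urls, etc.
    flag = "--" + key[len("ETCD_"):].lower().replace("_", "-")
    print(f"{flag}={value}")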
member\",\"local-member-id\":\"832c8c0a369078df\",\"cluster-id\":\"61ee3832eff450a5\"}\n{\"level\":\"info\",\"ts\":\"2021-09-08T02:53:57.392Z\",\"caller\":\"raft/raft.go:1530\",\"msg\":\"832c8c0a369078df switched to configuration voters=()\"}\n{\"level\":\"info\",\"ts\":\"2021-09-08T02:53:57.392Z\",\"caller\":\"raft/raft.go:700\",\"msg\":\"832c8c0a369078df became follower at term 0\"}\n{\"level\":\"info\",\"ts\":\"2021-09-08T02:53:57.392Z\",\"caller\":\"raft/raft.go:383\",\"msg\":\"newRaft 832c8c0a369078df [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]\"}\n{\"level\":\"info\",\"ts\":\"2021-09-08T02:53:57.392Z\",\"caller\":\"raft/raft.go:700\",\"msg\":\"832c8c0a369078df became follower at term 1\"}\n{\"level\":\"info\",\"ts\":\"2021-09-08T02:53:57.392Z\",\"caller\":\"raft/raft.go:1530\",\"msg\":\"832c8c0a369078df switched to configuration voters=(9452083693436827871)\"}\n{\"level\":\"warn\",\"ts\":\"2021-09-08T02:53:57.395Z\",\"caller\":\"auth/store.go:1366\",\"msg\":\"simple token is not cryptographically signed\"}\n{\"level\":\"info\",\"ts\":\"2021-09-08T02:53:57.399Z\",\"caller\":\"etcdserver/quota.go:98\",\"msg\":\"enabled backend quota with default value\",\"quota-name\":\"v3-applier\",\"quota-size-bytes\":2147483648,\"quota-size\":\"2.1 GB\"}\n{\"level\":\"info\",\"ts\":\"2021-09-08T02:53:57.401Z\",\"caller\":\"etcdserver/server.go:803\",\"msg\":\"starting etcd server\",\"local-member-id\":\"832c8c0a369078df\",\"local-server-version\":\"3.4.13\",\"cluster-version\":\"to_be_decided\"}\n{\"level\":\"info\",\"ts\":\"2021-09-08T02:53:57.402Z\",\"caller\":\"embed/etcd.go:711\",\"msg\":\"starting with client TLS\",\"tls-info\":\"cert = /rootfs/mnt/master-vol-0c5c56e6fef933093/pki/zTgSsun4YIoN9xKnIdeJKA/clients/server.crt, key = /rootfs/mnt/master-vol-0c5c56e6fef933093/pki/zTgSsun4YIoN9xKnIdeJKA/clients/server.key, trusted-ca = /rootfs/mnt/master-vol-0c5c56e6fef933093/pki/zTgSsun4YIoN9xKnIdeJKA/clients/ca.crt, client-cert-auth = true, crl-file = \",\"cipher-suites\":[]}\n{\"level\":\"info\",\"ts\":\"2021-09-08T02:53:57.402Z\",\"caller\":\"embed/etcd.go:244\",\"msg\":\"now serving peer/client/metrics\",\"local-member-id\":\"832c8c0a369078df\",\"initial-advertise-peer-urls\":[\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381\"],\"listen-peer-urls\":[\"https://0.0.0.0:2381\"],\"advertise-client-urls\":[\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:3995\"],\"listen-client-urls\":[\"https://0.0.0.0:3995\"],\"listen-metrics-urls\":[]}\n{\"level\":\"info\",\"ts\":\"2021-09-08T02:53:57.402Z\",\"caller\":\"embed/etcd.go:579\",\"msg\":\"serving peer traffic\",\"address\":\"[::]:2381\"}\n{\"level\":\"info\",\"ts\":\"2021-09-08T02:53:57.402Z\",\"caller\":\"etcdserver/server.go:669\",\"msg\":\"started as single-node; fast-forwarding election ticks\",\"local-member-id\":\"832c8c0a369078df\",\"forward-ticks\":9,\"forward-duration\":\"900ms\",\"election-ticks\":10,\"election-timeout\":\"1s\"}\n{\"level\":\"info\",\"ts\":\"2021-09-08T02:53:57.403Z\",\"caller\":\"raft/raft.go:1530\",\"msg\":\"832c8c0a369078df switched to configuration voters=(9452083693436827871)\"}\n{\"level\":\"info\",\"ts\":\"2021-09-08T02:53:57.403Z\",\"caller\":\"membership/cluster.go:392\",\"msg\":\"added 
member\",\"cluster-id\":\"61ee3832eff450a5\",\"local-member-id\":\"832c8c0a369078df\",\"added-peer-id\":\"832c8c0a369078df\",\"added-peer-peer-urls\":[\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381\"]}\n{\"level\":\"info\",\"ts\":\"2021-09-08T02:53:57.592Z\",\"caller\":\"raft/raft.go:923\",\"msg\":\"832c8c0a369078df is starting a new election at term 1\"}\n{\"level\":\"info\",\"ts\":\"2021-09-08T02:53:57.592Z\",\"caller\":\"raft/raft.go:713\",\"msg\":\"832c8c0a369078df became candidate at term 2\"}\n{\"level\":\"info\",\"ts\":\"2021-09-08T02:53:57.592Z\",\"caller\":\"raft/raft.go:824\",\"msg\":\"832c8c0a369078df received MsgVoteResp from 832c8c0a369078df at term 2\"}\n{\"level\":\"info\",\"ts\":\"2021-09-08T02:53:57.592Z\",\"caller\":\"raft/raft.go:765\",\"msg\":\"832c8c0a369078df became leader at term 2\"}\n{\"level\":\"info\",\"ts\":\"2021-09-08T02:53:57.592Z\",\"caller\":\"raft/node.go:325\",\"msg\":\"raft.node: 832c8c0a369078df elected leader 832c8c0a369078df at term 2\"}\n{\"level\":\"info\",\"ts\":\"2021-09-08T02:53:57.592Z\",\"caller\":\"etcdserver/server.go:2528\",\"msg\":\"setting up initial cluster version\",\"cluster-version\":\"3.4\"}\n{\"level\":\"info\",\"ts\":\"2021-09-08T02:53:57.593Z\",\"caller\":\"membership/cluster.go:558\",\"msg\":\"set initial cluster version\",\"cluster-id\":\"61ee3832eff450a5\",\"local-member-id\":\"832c8c0a369078df\",\"cluster-version\":\"3.4\"}\n{\"level\":\"info\",\"ts\":\"2021-09-08T02:53:57.593Z\",\"caller\":\"api/capability.go:76\",\"msg\":\"enabled capabilities for version\",\"cluster-version\":\"3.4\"}\n{\"level\":\"info\",\"ts\":\"2021-09-08T02:53:57.593Z\",\"caller\":\"etcdserver/server.go:2560\",\"msg\":\"cluster version is updated\",\"cluster-version\":\"3.4\"}\n{\"level\":\"info\",\"ts\":\"2021-09-08T02:53:57.593Z\",\"caller\":\"etcdserver/server.go:2037\",\"msg\":\"published local member to cluster through raft\",\"local-member-id\":\"832c8c0a369078df\",\"local-member-attributes\":\"{Name:etcd-events-a ClientURLs:[https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:3995]}\",\"request-path\":\"/0/members/832c8c0a369078df/attributes\",\"cluster-id\":\"61ee3832eff450a5\",\"publish-timeout\":\"7s\"}\n{\"level\":\"info\",\"ts\":\"2021-09-08T02:53:57.594Z\",\"caller\":\"embed/serve.go:191\",\"msg\":\"serving client traffic securely\",\"address\":\"[::]:3995\"}\nI0908 02:53:57.701352    9271 s3fs.go:199] Writing file \"s3://k8s-kops-prow/e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created\"\nI0908 02:53:57.872601    9271 controller.go:187] starting controller iteration\nI0908 02:53:57.872622    9271 controller.go:264] Broadcasting leadership assertion with token \"lY0NAfXnbdfCHzT-7Dz8mw\"\nI0908 02:53:57.872888    9271 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.56.43:3997\" > leadership_token:\"lY0NAfXnbdfCHzT-7Dz8mw\" healthy:<id:\"etcd-events-a\" endpoints:\"172.20.56.43:3997\" > > \nI0908 02:53:57.873000    9271 controller.go:293] I am leader with token \"lY0NAfXnbdfCHzT-7Dz8mw\"\nI0908 02:53:57.873620    9271 controller.go:703] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:3995]\nI0908 02:53:57.887513    9271 controller.go:300] etcd cluster state: etcdClusterState\n  members:\n    
{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:3995\"],\"ID\":\"9452083693436827871\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"172.20.56.43:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:3995\" > etcd_state:<cluster:<cluster_token:\"zTgSsun4YIoN9xKnIdeJKA\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > etcd_version:\"3.4.13\" quarantined:true > }\nI0908 02:53:57.887604    9271 controller.go:301] etcd cluster members: map[9452083693436827871:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:3995\"],\"ID\":\"9452083693436827871\"}]\nI0908 02:53:57.887620    9271 controller.go:639] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io\" addresses:\"172.20.56.43\" > \nI0908 02:53:57.887801    9271 etcdserver.go:248] updating hosts: map[172.20.56.43:[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]]\nI0908 02:53:57.887833    9271 hosts.go:84] hosts update: primary=map[172.20.56.43:[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:[172.20.56.43 172.20.56.43]], final=map[172.20.56.43:[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]]\nI0908 02:53:57.887885    9271 hosts.go:181] skipping update of unchanged /etc/hosts\nI0908 02:53:57.887956    9271 commands.go:38] not refreshing commands - TTL not hit\nI0908 02:53:57.887971    9271 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created\"\nI0908 02:53:58.046270    9271 controller.go:393] spec member_count:1 etcd_version:\"3.4.13\" \nI0908 02:53:58.047025    9271 backup.go:128] performing snapshot save to /tmp/920013669/snapshot.db.gz\n{\"level\":\"info\",\"ts\":\"2021-09-08T02:53:58.052Z\",\"logger\":\"etcd-client\",\"caller\":\"v3/maintenance.go:211\",\"msg\":\"opened snapshot stream; downloading\"}\n{\"level\":\"info\",\"ts\":\"2021-09-08T02:53:58.053Z\",\"caller\":\"v3rpc/maintenance.go:139\",\"msg\":\"sending database snapshot to client\",\"total-bytes\":20480,\"size\":\"20 kB\"}\n{\"level\":\"info\",\"ts\":\"2021-09-08T02:53:58.053Z\",\"caller\":\"v3rpc/maintenance.go:177\",\"msg\":\"sending database sha256 checksum to client\",\"total-bytes\":20480,\"checksum-size\":32}\n{\"level\":\"info\",\"ts\":\"2021-09-08T02:53:58.054Z\",\"caller\":\"v3rpc/maintenance.go:191\",\"msg\":\"successfully sent database snapshot to client\",\"total-bytes\":20480,\"size\":\"20 
kB\",\"took\":\"now\"}\n{\"level\":\"info\",\"ts\":\"2021-09-08T02:53:58.056Z\",\"logger\":\"etcd-client\",\"caller\":\"v3/maintenance.go:219\",\"msg\":\"completed snapshot read; closing\"}\nI0908 02:53:58.056551    9271 s3fs.go:199] Writing file \"s3://k8s-kops-prow/e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io/backups/etcd/events/2021-09-08T02:53:58Z-000001/etcd.backup.gz\"\nI0908 02:53:58.224355    9271 s3fs.go:199] Writing file \"s3://k8s-kops-prow/e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io/backups/etcd/events/2021-09-08T02:53:58Z-000001/_etcd_backup.meta\"\nI0908 02:53:58.398960    9271 backup.go:153] backup complete: name:\"2021-09-08T02:53:58Z-000001\" \nI0908 02:53:58.399347    9271 controller.go:935] backup response: name:\"2021-09-08T02:53:58Z-000001\" \nI0908 02:53:58.399364    9271 controller.go:574] took backup: name:\"2021-09-08T02:53:58Z-000001\" \nI0908 02:53:58.564137    9271 vfs.go:118] listed backups in s3://k8s-kops-prow/e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io/backups/etcd/events: [2021-09-08T02:53:58Z-000001]\nI0908 02:53:58.564159    9271 cleanup.go:166] retaining backup \"2021-09-08T02:53:58Z-000001\"\nI0908 02:53:58.564190    9271 restore.go:98] Setting quarantined state to false\nI0908 02:53:58.564441    9271 etcdserver.go:393] Reconfigure request: header:<leadership_token:\"lY0NAfXnbdfCHzT-7Dz8mw\" cluster_name:\"etcd-events\" > \nI0908 02:53:58.564481    9271 etcdserver.go:436] Stopping etcd for reconfigure request: header:<leadership_token:\"lY0NAfXnbdfCHzT-7Dz8mw\" cluster_name:\"etcd-events\" > \nI0908 02:53:58.564494    9271 etcdserver.go:640] killing etcd with datadir /rootfs/mnt/master-vol-0c5c56e6fef933093/data/zTgSsun4YIoN9xKnIdeJKA\nI0908 02:53:58.564523    9271 etcdprocess.go:131] Waiting for etcd to exit\nI0908 02:53:58.664606    9271 etcdprocess.go:131] Waiting for etcd to exit\nI0908 02:53:58.664619    9271 etcdprocess.go:136] Exited etcd: signal: killed\nI0908 02:53:58.664676    9271 etcdserver.go:443] updated cluster state: cluster:<cluster_token:\"zTgSsun4YIoN9xKnIdeJKA\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > etcd_version:\"3.4.13\" \nI0908 02:53:58.664814    9271 etcdserver.go:448] Starting etcd version \"3.4.13\"\nI0908 02:53:58.664842    9271 etcdserver.go:556] starting etcd with state cluster:<cluster_token:\"zTgSsun4YIoN9xKnIdeJKA\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > etcd_version:\"3.4.13\" \nI0908 02:53:58.664885    9271 etcdserver.go:565] starting etcd with datadir /rootfs/mnt/master-vol-0c5c56e6fef933093/data/zTgSsun4YIoN9xKnIdeJKA\nI0908 02:53:58.665061    9271 pki.go:58] adding peerClientIPs [172.20.56.43]\nI0908 02:53:58.665082    9271 pki.go:66] generating peer keypair for etcd: {CommonName:etcd-events-a Organization:[] AltNames:{DNSNames:[etcd-events-a etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io] IPs:[172.20.56.43 127.0.0.1]} Usages:[2 1]}\nI0908 02:53:58.665352    9271 certs.go:151] existing certificate not valid 
after 2023-09-08T02:53:56Z; will regenerate
I0908 02:53:58.665364    9271 certs.go:211] generating certificate for "etcd-events-a"
I0908 02:53:58.667523    9271 pki.go:108] building client-serving certificate: {CommonName:etcd-events-a Organization:[] AltNames:{DNSNames:[etcd-events-a etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io] IPs:[127.0.0.1]} Usages:[1 2]}
I0908 02:53:58.667751    9271 certs.go:151] existing certificate not valid after 2023-09-08T02:53:57Z; will regenerate
I0908 02:53:58.667762    9271 certs.go:211] generating certificate for "etcd-events-a"
I0908 02:53:58.790635    9271 certs.go:211] generating certificate for "etcd-events-a"
I0908 02:53:58.792506    9271 etcdprocess.go:203] executing command /opt/etcd-v3.4.13-linux-amd64/etcd [/opt/etcd-v3.4.13-linux-amd64/etcd]
I0908 02:53:58.794722    9271 restore.go:116] ReconfigureResponse: 
I0908 02:53:58.795850    9271 controller.go:187] starting controller iteration
I0908 02:53:58.795872    9271 controller.go:264] Broadcasting leadership assertion with token "lY0NAfXnbdfCHzT-7Dz8mw"
I0908 02:53:58.796073    9271 leadership.go:37] Got LeaderNotification view:<leader:<id:"etcd-events-a" endpoints:"172.20.56.43:3997" > leadership_token:"lY0NAfXnbdfCHzT-7Dz8mw" healthy:<id:"etcd-events-a" endpoints:"172.20.56.43:3997" > > 
I0908 02:53:58.796210    9271 controller.go:293] I am leader with token "lY0NAfXnbdfCHzT-7Dz8mw"
I0908 02:53:58.796620    9271 controller.go:703] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002]
2021-09-08 02:53:58.800961 I | pkg/flags: recognized and used environment variable ETCD_ADVERTISE_CLIENT_URLS=https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002
2021-09-08 02:53:58.800996 I | pkg/flags: recognized and used environment variable ETCD_CERT_FILE=/rootfs/mnt/master-vol-0c5c56e6fef933093/pki/zTgSsun4YIoN9xKnIdeJKA/clients/server.crt
2021-09-08 02:53:58.801003 I | pkg/flags: recognized and used environment variable ETCD_CLIENT_CERT_AUTH=true
2021-09-08 02:53:58.801013 I | pkg/flags: recognized and used environment variable ETCD_DATA_DIR=/rootfs/mnt/master-vol-0c5c56e6fef933093/data/zTgSsun4YIoN9xKnIdeJKA
2021-09-08 02:53:58.801025 I | pkg/flags: recognized and used environment variable ETCD_ENABLE_V2=false
2021-09-08 02:53:58.801047 I | pkg/flags: recognized and used environment variable ETCD_INITIAL_ADVERTISE_PEER_URLS=https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381
2021-09-08 02:53:58.801052 I | pkg/flags: recognized and used environment variable ETCD_INITIAL_CLUSTER=etcd-events-a=https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381
2021-09-08 02:53:58.801057 I | pkg/flags: recognized and used environment variable ETCD_INITIAL_CLUSTER_STATE=existing
2021-09-08 02:53:58.801064 I | pkg/flags: recognized and used environment variable ETCD_INITIAL_CLUSTER_TOKEN=zTgSsun4YIoN9xKnIdeJKA
2021-09-08 02:53:58.801070 I | pkg/flags: recognized and used environment variable ETCD_KEY_FILE=/rootfs/mnt/master-vol-0c5c56e6fef933093/pki/zTgSsun4YIoN9xKnIdeJKA/clients/server.key
2021-09-08 02:53:58.801078 I | pkg/flags: recognized and used environment variable ETCD_LISTEN_CLIENT_URLS=https://0.0.0.0:4002
2021-09-08 02:53:58.801087 I | pkg/flags: recognized and used environment variable ETCD_LISTEN_PEER_URLS=https://0.0.0.0:2381
2021-09-08 02:53:58.801093 I | pkg/flags: recognized and used environment variable ETCD_LOG_OUTPUTS=stdout
2021-09-08 02:53:58.801101 I | pkg/flags: recognized and used environment variable ETCD_LOGGER=zap
2021-09-08 02:53:58.801111 I | pkg/flags: recognized and used environment variable ETCD_NAME=etcd-events-a
2021-09-08 02:53:58.801120 I | pkg/flags: recognized and used environment variable ETCD_PEER_CERT_FILE=/rootfs/mnt/master-vol-0c5c56e6fef933093/pki/zTgSsun4YIoN9xKnIdeJKA/peers/me.crt
2021-09-08 02:53:58.801125 I | pkg/flags: recognized and used environment variable ETCD_PEER_CLIENT_CERT_AUTH=true
2021-09-08 02:53:58.801131 I | pkg/flags: recognized and used environment variable ETCD_PEER_KEY_FILE=/rootfs/mnt/master-vol-0c5c56e6fef933093/pki/zTgSsun4YIoN9xKnIdeJKA/peers/me.key
2021-09-08 02:53:58.801136 I | pkg/flags: recognized and used environment variable ETCD_PEER_TRUSTED_CA_FILE=/rootfs/mnt/master-vol-0c5c56e6fef933093/pki/zTgSsun4YIoN9xKnIdeJKA/peers/ca.crt
2021-09-08 02:53:58.801150 I | pkg/flags: recognized and used environment variable ETCD_TRUSTED_CA_FILE=/rootfs/mnt/master-vol-0c5c56e6fef933093/pki/zTgSsun4YIoN9xKnIdeJKA/clients/ca.crt
2021-09-08 02:53:58.801160 W | pkg/flags: unrecognized environment variable ETCD_LISTEN_METRICS_URLS=
{"level":"info","ts":"2021-09-08T02:53:58.801Z","caller":"etcdmain/etcd.go:134","msg":"server has been already initialized","data-dir":"/rootfs/mnt/master-vol-0c5c56e6fef933093/data/zTgSsun4YIoN9xKnIdeJKA","dir-type":"member"}
{"level":"info","ts":"2021-09-08T02:53:58.801Z","caller":"embed/etcd.go:117","msg":"configuring peer listeners","listen-peer-urls":["https://0.0.0.0:2381"]}
{"level":"info","ts":"2021-09-08T02:53:58.801Z","caller":"embed/etcd.go:468","msg":"starting with peer TLS","tls-info":"cert = /rootfs/mnt/master-vol-0c5c56e6fef933093/pki/zTgSsun4YIoN9xKnIdeJKA/peers/me.crt, key = /rootfs/mnt/master-vol-0c5c56e6fef933093/pki/zTgSsun4YIoN9xKnIdeJKA/peers/me.key, trusted-ca = /rootfs/mnt/master-vol-0c5c56e6fef933093/pki/zTgSsun4YIoN9xKnIdeJKA/peers/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"info","ts":"2021-09-08T02:53:58.801Z","caller":"embed/etcd.go:127","msg":"configuring client listeners","listen-client-urls":["https://0.0.0.0:4002"]}
{"level":"info","ts":"2021-09-08T02:53:58.802Z","caller":"embed/etcd.go:302","msg":"starting an etcd server","etcd-version":"3.4.13","git-sha":"ae9734ed2","go-version":"go1.12.17","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"etcd-events-a","data-dir":"/rootfs/mnt/master-vol-0c5c56e6fef933093/data/zTgSsun4YIoN9xKnIdeJKA","wal-dir":"","wal-dir-dedicated":"","member-dir":"/rootfs/mnt/master-vol-0c5c56e6fef933093/data/zTgSsun4YIoN9xKnIdeJKA/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":100000,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381"],"listen-peer-urls":["https://0.0.0.0:2381"],"advertise-client-urls":["https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002"],"listen-client-urls":["https://0.0.0.0:4002"],"listen-metrics-urls":[],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"existing","initial-cluster-token":"","quota-size-bytes":2147483648,"pre-vote":false,"initial-corrupt-check":false,"corrupt-check-time-interval":"0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":""}
{"level":"info","ts":"2021-09-08T02:53:58.802Z","caller":"etcdserver/backend.go:80","msg":"opened backend db","path":"/rootfs/mnt/master-vol-0c5c56e6fef933093/data/zTgSsun4YIoN9xKnIdeJKA/member/snap/db","took":"143.511µs"}
{"level":"info","ts":"2021-09-08T02:53:58.803Z","caller":"etcdserver/raft.go:536","msg":"restarting local member","cluster-id":"61ee3832eff450a5","local-member-id":"832c8c0a369078df","commit-index":4}
{"level":"info","ts":"2021-09-08T02:53:58.804Z","caller":"raft/raft.go:1530","msg":"832c8c0a369078df switched to configuration voters=()"}
{"level":"info","ts":"2021-09-08T02:53:58.804Z","caller":"raft/raft.go:700","msg":"832c8c0a369078df became follower at term 2"}
{"level":"info","ts":"2021-09-08T02:53:58.804Z","caller":"raft/raft.go:383","msg":"newRaft 832c8c0a369078df [peers: [], term: 2, commit: 4, applied: 0, lastindex: 4, lastterm: 2]"}
{"level":"warn","ts":"2021-09-08T02:53:58.805Z","caller":"auth/store.go:1366","msg":"simple token is not cryptographically signed"}
{"level":"info","ts":"2021-09-08T02:53:58.807Z","caller":"etcdserver/quota.go:98","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
{"level":"info","ts":"2021-09-08T02:53:58.808Z","caller":"etcdserver/server.go:803","msg":"starting etcd server","local-member-id":"832c8c0a369078df","local-server-version":"3.4.13","cluster-version":"to_be_decided"}
{"level":"info","ts":"2021-09-08T02:53:58.810Z","caller":"embed/etcd.go:711","msg":"starting with client TLS","tls-info":"cert = /rootfs/mnt/master-vol-0c5c56e6fef933093/pki/zTgSsun4YIoN9xKnIdeJKA/clients/server.crt, key = /rootfs/mnt/master-vol-0c5c56e6fef933093/pki/zTgSsun4YIoN9xKnIdeJKA/clients/server.key, trusted-ca = /rootfs/mnt/master-vol-0c5c56e6fef933093/pki/zTgSsun4YIoN9xKnIdeJKA/clients/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"info","ts":"2021-09-08T02:53:58.811Z","caller":"embed/etcd.go:244","msg":"now serving peer/client/metrics","local-member-id":"832c8c0a369078df","initial-advertise-peer-urls":["https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381"],"listen-peer-urls":["https://0.0.0.0:2381"],"advertise-client-urls":["https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002"],"listen-client-urls":["https://0.0.0.0:4002"],"listen-metrics-urls":[]}
{"level":"info","ts":"2021-09-08T02:53:58.811Z","caller":"embed/etcd.go:579","msg":"serving peer traffic","address":"[::]:2381"}
{"level":"info","ts":"2021-09-08T02:53:58.811Z","caller":"raft/raft.go:1530","msg":"832c8c0a369078df switched to configuration voters=(9452083693436827871)"}
{"level":"info","ts":"2021-09-08T02:53:58.811Z","caller":"membership/cluster.go:392","msg":"added member","cluster-id":"61ee3832eff450a5","local-member-id":"832c8c0a369078df","added-peer-id":"832c8c0a369078df","added-peer-peer-urls":["https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381"]}
{"level":"info","ts":"2021-09-08T02:53:58.811Z","caller":"membership/cluster.go:558","msg":"set initial cluster version","cluster-id":"61ee3832eff450a5","local-member-id":"832c8c0a369078df","cluster-version":"3.4"}
{"level":"info","ts":"2021-09-08T02:53:58.811Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.4"}
{"level":"info","ts":"2021-09-08T02:53:58.811Z","caller":"etcdserver/server.go:691","msg":"starting initial election tick advance","election-ticks":10}
{"level":"info","ts":"2021-09-08T02:54:00.204Z","caller":"raft/raft.go:923","msg":"832c8c0a369078df is starting a new election at term 2"}
{"level":"info","ts":"2021-09-08T02:54:00.204Z","caller":"raft/raft.go:713","msg":"832c8c0a369078df became candidate at term 3"}
{"level":"info","ts":"2021-09-08T02:54:00.204Z","caller":"raft/raft.go:824","msg":"832c8c0a369078df received MsgVoteResp from 832c8c0a369078df at term 3"}
{"level":"info","ts":"2021-09-08T02:54:00.204Z","caller":"raft/raft.go:765","msg":"832c8c0a369078df became leader at term 3"}
{"level":"info","ts":"2021-09-08T02:54:00.204Z","caller":"raft/node.go:325","msg":"raft.node: 832c8c0a369078df elected leader 832c8c0a369078df at term 3"}
{"level":"info","ts":"2021-09-08T02:54:00.204Z","caller":"etcdserver/server.go:2037","msg":"published local member to cluster through raft","local-member-id":"832c8c0a369078df","local-member-attributes":"{Name:etcd-events-a ClientURLs:[https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002]}","request-path":"/0/members/832c8c0a369078df/attributes","cluster-id":"61ee3832eff450a5","publish-timeout":"7s"}
{"level":"info","ts":"2021-09-08T02:54:00.205Z","caller":"embed/serve.go:191","msg":"serving client traffic securely","address":"[::]:4002"}
I0908 02:54:00.221765    9271 controller.go:300] etcd cluster state: etcdClusterState
  members:
    {"name":"etcd-events-a","peerURLs":["https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381"],"endpoints":["https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002"],"ID":"9452083693436827871"}
  peers:
    etcdClusterPeerInfo{peer=peer{id:"etcd-events-a" endpoints:"172.20.56.43:3997" }, info=cluster_name:"etcd-events" node_configuration:<name:"etcd-events-a" peer_urls:"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381" client_urls:"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002" quarantined_client_urls:"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:3995" > etcd_state:<cluster:<cluster_token:"zTgSsun4YIoN9xKnIdeJKA" nodes:<name:"etcd-events-a" peer_urls:"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381" client_urls:"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002" quarantined_client_urls:"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:3995" tls_enabled:true > > etcd_version:"3.4.13" > }
I0908 02:54:00.221870    9271 controller.go:301] etcd cluster members: map[9452083693436827871:{"name":"etcd-events-a","peerURLs":["https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381"],"endpoints":["https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002"],"ID":"9452083693436827871"}]
I0908 02:54:00.221895    9271 controller.go:639] sending member map to all peers: members:<name:"etcd-events-a" dns:"etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io" addresses:"172.20.56.43" > 
I0908 02:54:00.222092    9271 etcdserver.go:248] updating hosts: map[172.20.56.43:[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]]
I0908 02:54:00.222105    9271 hosts.go:84] hosts update: primary=map[172.20.56.43:[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:[172.20.56.43 172.20.56.43]], final=map[172.20.56.43:[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]]
I0908 02:54:00.222154    9271 hosts.go:181] skipping update of unchanged /etc/hosts
I0908 02:54:00.222229    9271 commands.go:38] not refreshing commands - TTL not hit
I0908 02:54:00.222241    9271 s3fs.go:290] Reading file "s3://k8s-kops-prow/e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created"
I0908 02:54:00.376779    9271 controller.go:393] spec member_count:1 etcd_version:"3.4.13" 
I0908 02:54:00.376868    9271 controller.go:555] controller loop complete
I0908 02:54:10.378058    9271 controller.go:187] starting controller iteration
I0908 02:54:10.378086    9271 controller.go:264] Broadcasting leadership assertion with token "lY0NAfXnbdfCHzT-7Dz8mw"
I0908 02:54:10.378349    9271 leadership.go:37] Got LeaderNotification view:<leader:<id:"etcd-events-a" endpoints:"172.20.56.43:3997" > leadership_token:"lY0NAfXnbdfCHzT-7Dz8mw" healthy:<id:"etcd-events-a" endpoints:"172.20.56.43:3997" > > 
I0908 02:54:10.378466    9271 controller.go:293] I am leader with token "lY0NAfXnbdfCHzT-7Dz8mw"
I0908 02:54:10.379071    9271 controller.go:703] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002]
I0908 02:54:10.393836    9271 controller.go:300] etcd cluster state: etcdClusterState
  members:
    {"name":"etcd-events-a","peerURLs":["https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381"],"endpoints":["https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002"],"ID":"9452083693436827871"}
  peers:
    etcdClusterPeerInfo{peer=peer{id:"etcd-events-a" endpoints:"172.20.56.43:3997" }, info=cluster_name:"etcd-events" node_configuration:<name:"etcd-events-a" peer_urls:"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381" client_urls:"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002" quarantined_client_urls:"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:3995" > etcd_state:<cluster:<cluster_token:"zTgSsun4YIoN9xKnIdeJKA" nodes:<name:"etcd-events-a" peer_urls:"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381" client_urls:"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002" quarantined_client_urls:"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:3995" tls_enabled:true > > etcd_version:"3.4.13" > }
I0908 02:54:10.393909    9271 controller.go:301] etcd cluster members: map[9452083693436827871:{"name":"etcd-events-a","peerURLs":["https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381"],"endpoints":["https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002"],"ID":"9452083693436827871"}]
I0908 02:54:10.393925    9271 controller.go:639] sending member map to all peers: members:<name:"etcd-events-a" dns:"etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io" addresses:"172.20.56.43" > 
I0908 02:54:10.394110    9271 etcdserver.go:248] updating hosts: map[172.20.56.43:[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]]
I0908 02:54:10.394125    9271 hosts.go:84] hosts update: primary=map[172.20.56.43:[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:[172.20.56.43 172.20.56.43]], final=map[172.20.56.43:[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]]
I0908 02:54:10.394177    9271 hosts.go:181] skipping update of unchanged /etc/hosts
I0908 02:54:10.394236    9271 commands.go:38] not refreshing commands - TTL not hit
I0908 02:54:10.394245    9271 s3fs.go:290] Reading file "s3://k8s-kops-prow/e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created"
I0908 02:54:10.993830    9271 controller.go:393] spec member_count:1 etcd_version:"3.4.13" 
I0908 02:54:10.993910    9271 controller.go:555] controller loop complete
I0908 02:54:20.995763    9271 controller.go:187] starting controller iteration
I0908 02:54:20.995791    9271 controller.go:264] Broadcasting leadership assertion with token "lY0NAfXnbdfCHzT-7Dz8mw"
I0908 02:54:20.996073    9271 leadership.go:37] Got LeaderNotification view:<leader:<id:"etcd-events-a" endpoints:"172.20.56.43:3997" > leadership_token:"lY0NAfXnbdfCHzT-7Dz8mw" healthy:<id:"etcd-events-a" endpoints:"172.20.56.43:3997" > > 
I0908 02:54:20.996213    9271 controller.go:293] I am leader with token "lY0NAfXnbdfCHzT-7Dz8mw"
I0908 02:54:20.996776    9271 controller.go:703] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002]
I0908 02:54:21.008196    9271 controller.go:300] etcd cluster state: etcdClusterState
  members:
    {"name":"etcd-events-a","peerURLs":["https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381"],"endpoints":["https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002"],"ID":"9452083693436827871"}
  peers:
    etcdClusterPeerInfo{peer=peer{id:"etcd-events-a" endpoints:"172.20.56.43:3997" }, info=cluster_name:"etcd-events" node_configuration:<name:"etcd-events-a" peer_urls:"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381" client_urls:"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002" quarantined_client_urls:"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:3995" > etcd_state:<cluster:<cluster_token:"zTgSsun4YIoN9xKnIdeJKA" nodes:<name:"etcd-events-a" peer_urls:"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381" client_urls:"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002" quarantined_client_urls:"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:3995" tls_enabled:true > > etcd_version:"3.4.13" > }
I0908 02:54:21.008263    9271 controller.go:301] etcd cluster members: map[9452083693436827871:{"name":"etcd-events-a","peerURLs":["https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381"],"endpoints":["https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002"],"ID":"9452083693436827871"}]
I0908 02:54:21.008281    9271 controller.go:639] sending member map to all peers: members:<name:"etcd-events-a" dns:"etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io" addresses:"172.20.56.43" > 
I0908 02:54:21.008446    9271 etcdserver.go:248] updating hosts: map[172.20.56.43:[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]]
I0908 02:54:21.008461    9271 hosts.go:84] hosts update: primary=map[172.20.56.43:[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:[172.20.56.43 172.20.56.43]], final=map[172.20.56.43:[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]]
I0908 02:54:21.008508    9271 hosts.go:181] skipping update of unchanged /etc/hosts
I0908 02:54:21.008564    9271 commands.go:38] not refreshing commands - TTL not hit
I0908 02:54:21.008579    9271 s3fs.go:290] Reading file "s3://k8s-kops-prow/e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created"
I0908 02:54:21.617526    9271 controller.go:393] spec member_count:1 etcd_version:"3.4.13" 
I0908 02:54:21.617592    9271 controller.go:555] controller loop complete
I0908 02:54:31.619013    9271 controller.go:187] starting controller iteration
I0908 02:54:31.619042    9271 controller.go:264] Broadcasting leadership assertion with token "lY0NAfXnbdfCHzT-7Dz8mw"
I0908 02:54:31.619288    9271 leadership.go:37] Got LeaderNotification view:<leader:<id:"etcd-events-a" endpoints:"172.20.56.43:3997" > leadership_token:"lY0NAfXnbdfCHzT-7Dz8mw" healthy:<id:"etcd-events-a" endpoints:"172.20.56.43:3997" > > 
I0908 02:54:31.619423    9271 controller.go:293] I am leader with token "lY0NAfXnbdfCHzT-7Dz8mw"
I0908 02:54:31.620006    9271 controller.go:703] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002]
I0908 02:54:31.635167    9271 controller.go:300] etcd cluster state: etcdClusterState
  members:
    {"name":"etcd-events-a","peerURLs":["https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381"],"endpoints":["https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002"],"ID":"9452083693436827871"}
  peers:
    etcdClusterPeerInfo{peer=peer{id:"etcd-events-a" endpoints:"172.20.56.43:3997" }, info=cluster_name:"etcd-events" node_configuration:<name:"etcd-events-a" peer_urls:"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381" client_urls:"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002" quarantined_client_urls:"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:3995" > etcd_state:<cluster:<cluster_token:"zTgSsun4YIoN9xKnIdeJKA" nodes:<name:"etcd-events-a" peer_urls:"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381" client_urls:"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002" quarantined_client_urls:"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:3995" tls_enabled:true > > etcd_version:"3.4.13" > }
I0908 02:54:31.635236    9271 controller.go:301] etcd cluster members: map[9452083693436827871:{"name":"etcd-events-a","peerURLs":["https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381"],"endpoints":["https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002"],"ID":"9452083693436827871"}]
I0908 02:54:31.635252    9271 controller.go:639] sending member map to all peers: members:<name:"etcd-events-a" dns:"etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io" addresses:"172.20.56.43" > 
I0908 02:54:31.635436    9271 etcdserver.go:248] updating hosts: map[172.20.56.43:[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]]
I0908 02:54:31.635449    9271 hosts.go:84] hosts update: primary=map[172.20.56.43:[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:[172.20.56.43 172.20.56.43]], final=map[172.20.56.43:[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]]
I0908 02:54:31.635493    9271 hosts.go:181] skipping update of unchanged /etc/hosts
I0908 02:54:31.635547    9271 commands.go:38] not refreshing commands - TTL not hit
I0908 02:54:31.635557    9271 s3fs.go:290] Reading file "s3://k8s-kops-prow/e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created"
I0908 02:54:32.239040    9271 controller.go:393] spec member_count:1 etcd_version:"3.4.13" 
I0908 02:54:32.239121    9271 controller.go:555] controller loop complete
I0908 02:54:42.240293    9271 controller.go:187] starting controller iteration
I0908 02:54:42.240323    9271 controller.go:264] Broadcasting leadership assertion with token "lY0NAfXnbdfCHzT-7Dz8mw"
I0908 02:54:42.240582    9271 leadership.go:37] Got LeaderNotification view:<leader:<id:"etcd-events-a" endpoints:"172.20.56.43:3997" > leadership_token:"lY0NAfXnbdfCHzT-7Dz8mw" healthy:<id:"etcd-events-a" endpoints:"172.20.56.43:3997" > > 
I0908 02:54:42.240724    9271 controller.go:293] I am leader with token "lY0NAfXnbdfCHzT-7Dz8mw"
I0908 02:54:42.241286    9271 controller.go:703] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002]
I0908 02:54:42.255021    9271 controller.go:300] etcd cluster state: etcdClusterState
  members:
    {"name":"etcd-events-a","peerURLs":["https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381"],"endpoints":["https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002"],"ID":"9452083693436827871"}
  peers:
    etcdClusterPeerInfo{peer=peer{id:"etcd-events-a" endpoints:"172.20.56.43:3997" }, info=cluster_name:"etcd-events" node_configuration:<name:"etcd-events-a" peer_urls:"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381" client_urls:"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002" quarantined_client_urls:"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:3995" > etcd_state:<cluster:<cluster_token:"zTgSsun4YIoN9xKnIdeJKA" nodes:<name:"etcd-events-a" peer_urls:"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381" client_urls:"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002" quarantined_client_urls:"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:3995" tls_enabled:true > > etcd_version:"3.4.13" > }
I0908 02:54:42.255107    9271 controller.go:301] etcd cluster members: map[9452083693436827871:{"name":"etcd-events-a","peerURLs":["https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381"],"endpoints":["https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002"],"ID":"9452083693436827871"}]
I0908 02:54:42.255124    9271 controller.go:639] sending member map to all peers: members:<name:"etcd-events-a" dns:"etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io" addresses:"172.20.56.43" > 
I0908 02:54:42.255278    9271 etcdserver.go:248] updating hosts: map[172.20.56.43:[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]]
I0908 02:54:42.255292    9271 hosts.go:84] hosts update: primary=map[172.20.56.43:[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:[172.20.56.43 172.20.56.43]], final=map[172.20.56.43:[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]]
I0908 02:54:42.255346    9271 hosts.go:181] skipping update of unchanged /etc/hosts
I0908 02:54:42.255487    9271 commands.go:38] not refreshing commands - TTL not hit
I0908 02:54:42.255503    9271 s3fs.go:290] Reading file "s3://k8s-kops-prow/e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created"
I0908 02:54:42.770383    9271 volumes.go:86] AWS API Request: ec2/DescribeVolumes
I0908 02:54:42.858948    9271 controller.go:393] spec member_count:1 etcd_version:"3.4.13" 
I0908 02:54:42.859013    9271 controller.go:555] controller loop complete
I0908 02:54:42.881936    9271 volumes.go:86] AWS API Request: ec2/DescribeInstances
I0908 02:54:42.921456    9271 hosts.go:84] hosts update: primary=map[172.20.56.43:[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:[172.20.56.43 172.20.56.43]], final=map[172.20.56.43:[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]]
I0908 02:54:42.921513    9271 hosts.go:181] skipping update of unchanged /etc/hosts
I0908 02:54:52.860279    9271 controller.go:187] starting controller iteration
I0908 02:54:52.860308    9271 controller.go:264] Broadcasting leadership assertion with token "lY0NAfXnbdfCHzT-7Dz8mw"
I0908 02:54:52.860545    9271 leadership.go:37] Got LeaderNotification view:<leader:<id:"etcd-events-a" endpoints:"172.20.56.43:3997" > leadership_token:"lY0NAfXnbdfCHzT-7Dz8mw" healthy:<id:"etcd-events-a" endpoints:"172.20.56.43:3997" > > 
I0908 02:54:52.861357    9271 controller.go:293] I am leader with token "lY0NAfXnbdfCHzT-7Dz8mw"
I0908 02:54:52.862429    9271 controller.go:703] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002]
I0908 02:54:52.874643    9271 controller.go:300] etcd cluster state: etcdClusterState
  members:
    {"name":"etcd-events-a","peerURLs":["https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381"],"endpoints":["https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002"],"ID":"9452083693436827871"}
  peers:
    etcdClusterPeerInfo{peer=peer{id:"etcd-events-a" endpoints:"172.20.56.43:3997" }, info=cluster_name:"etcd-events" node_configuration:<name:"etcd-events-a" peer_urls:"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381" client_urls:"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002" quarantined_client_urls:"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:3995" > etcd_state:<cluster:<cluster_token:"zTgSsun4YIoN9xKnIdeJKA" nodes:<name:"etcd-events-a" peer_urls:"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381" client_urls:"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002" quarantined_client_urls:"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:3995" tls_enabled:true > > etcd_version:"3.4.13" > }
I0908 02:54:52.874714    9271 controller.go:301] etcd cluster members: map[9452083693436827871:{"name":"etcd-events-a","peerURLs":["https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381"],"endpoints":["https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002"],"ID":"9452083693436827871"}]
I0908 02:54:52.874731    9271 controller.go:639] sending member map to all peers: members:<name:"etcd-events-a" dns:"etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io" addresses:"172.20.56.43" > 
I0908 02:54:52.874925    9271 etcdserver.go:248] updating hosts: map[172.20.56.43:[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]]
I0908 02:54:52.874939    9271 hosts.go:84] hosts update: primary=map[172.20.56.43:[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:[172.20.56.43 172.20.56.43]], final=map[172.20.56.43:[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]]
I0908 02:54:52.874992    9271 hosts.go:181] skipping update of unchanged /etc/hosts
I0908 02:54:52.875067    9271 commands.go:38] not refreshing commands - TTL not hit
I0908 02:54:52.875079    9271 s3fs.go:290] Reading file "s3://k8s-kops-prow/e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created"
I0908 02:54:53.490090    9271 controller.go:393] spec member_count:1 etcd_version:"3.4.13" 
I0908 02:54:53.490148    9271 controller.go:555] controller loop complete
I0908 02:55:03.491864    9271 controller.go:187] starting controller iteration
I0908 02:55:03.491893    9271 controller.go:264] Broadcasting leadership assertion with token "lY0NAfXnbdfCHzT-7Dz8mw"
I0908 02:55:03.492137    9271 leadership.go:37] Got LeaderNotification view:<leader:<id:"etcd-events-a" endpoints:"172.20.56.43:3997" > leadership_token:"lY0NAfXnbdfCHzT-7Dz8mw" healthy:<id:"etcd-events-a" endpoints:"172.20.56.43:3997" > > 
I0908 02:55:03.492287    9271 controller.go:293] I am leader with token "lY0NAfXnbdfCHzT-7Dz8mw"
I0908 02:55:03.492597    9271 controller.go:703] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002]
I0908 02:55:03.506266    9271 controller.go:300] etcd cluster state: etcdClusterState
  members:
    {"name":"etcd-events-a","peerURLs":["https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381"],"endpoints":["https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002"],"ID":"9452083693436827871"}
  peers:
    etcdClusterPeerInfo{peer=peer{id:"etcd-events-a" endpoints:"172.20.56.43:3997" }, info=cluster_name:"etcd-events" node_configuration:<name:"etcd-events-a" peer_urls:"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381" client_urls:"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002" quarantined_client_urls:"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:3995" > etcd_state:<cluster:<cluster_token:"zTgSsun4YIoN9xKnIdeJKA" nodes:<name:"etcd-events-a" peer_urls:"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381" client_urls:"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002" quarantined_client_urls:"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:3995" tls_enabled:true > > etcd_version:"3.4.13" > }
I0908 02:55:03.506333    9271 controller.go:301] etcd cluster members: map[9452083693436827871:{"name":"etcd-events-a","peerURLs":["https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381"],"endpoints":["https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002"],"ID":"9452083693436827871"}]
I0908 02:55:03.506348    9271 controller.go:639] sending member map to all peers: members:<name:"etcd-events-a" dns:"etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io" addresses:"172.20.56.43" > 
I0908 02:55:03.506509    9271 etcdserver.go:248] updating hosts: map[172.20.56.43:[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]]
I0908 02:55:03.506524    9271 hosts.go:84] hosts update: primary=map[172.20.56.43:[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:[172.20.56.43 172.20.56.43]], final=map[172.20.56.43:[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]]
I0908 02:55:03.506577    9271 hosts.go:181] skipping update of unchanged /etc/hosts
I0908 02:55:03.506639    9271 commands.go:38] not refreshing commands - TTL not hit
I0908 02:55:03.506655    9271 s3fs.go:290] Reading file "s3://k8s-kops-prow/e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created"
I0908 02:55:04.102689    9271 controller.go:393] spec member_count:1 etcd_version:"3.4.13" 
I0908 02:55:04.102765    9271 controller.go:555] controller loop complete
I0908 02:55:14.104948    9271 controller.go:187] starting controller iteration
I0908 02:55:14.104979    9271 controller.go:264] Broadcasting leadership assertion with token "lY0NAfXnbdfCHzT-7Dz8mw"
I0908 02:55:14.105269    9271 leadership.go:37] Got LeaderNotification view:<leader:<id:"etcd-events-a" endpoints:"172.20.56.43:3997" > leadership_token:"lY0NAfXnbdfCHzT-7Dz8mw" healthy:<id:"etcd-events-a" endpoints:"172.20.56.43:3997" > > 
I0908 02:55:14.105394    9271 controller.go:293] I am leader with token "lY0NAfXnbdfCHzT-7Dz8mw"
I0908 02:55:14.106114    9271 controller.go:703] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002]
I0908 02:55:14.120213    9271 controller.go:300] etcd cluster state: etcdClusterState
  members:
    {"name":"etcd-events-a","peerURLs":["https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381"],"endpoints":["https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002"],"ID":"9452083693436827871"}
  peers:
    etcdClusterPeerInfo{peer=peer{id:"etcd-events-a" endpoints:"172.20.56.43:3997" }, info=cluster_name:"etcd-events" node_configuration:<name:"etcd-events-a" peer_urls:"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381" client_urls:"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002" quarantined_client_urls:"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:3995" > etcd_state:<cluster:<cluster_token:"zTgSsun4YIoN9xKnIdeJKA" nodes:<name:"etcd-events-a" peer_urls:"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381" client_urls:"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002" quarantined_client_urls:"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:3995" tls_enabled:true > > etcd_version:"3.4.13" > }
I0908 02:55:14.120300    9271 controller.go:301] etcd cluster members: map[9452083693436827871:{"name":"etcd-events-a","peerURLs":["https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381"],"endpoints":["https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002"],"ID":"9452083693436827871"}]
I0908 02:55:14.120315    9271 controller.go:639] sending member map to all peers: members:<name:"etcd-events-a" dns:"etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io" addresses:"172.20.56.43" > 
I0908 02:55:14.120484    9271 etcdserver.go:248] updating hosts: map[172.20.56.43:[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]]
I0908 02:55:14.120499    9271 hosts.go:84] hosts update: primary=map[172.20.56.43:[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:[172.20.56.43 172.20.56.43]], final=map[172.20.56.43:[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]]
I0908 02:55:14.120550    9271 hosts.go:181] skipping update of unchanged /etc/hosts
I0908 02:55:14.120614    9271 commands.go:38] not refreshing commands - TTL not hit
I0908 02:55:14.120629    9271 s3fs.go:290] Reading file "s3://k8s-kops-prow/e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created"
I0908 02:55:14.990100    9271 controller.go:393] spec member_count:1 etcd_version:"3.4.13" 
I0908 02:55:14.990169    9271 controller.go:555] controller loop complete
I0908 02:55:24.991392    9271 controller.go:187] starting controller iteration
I0908 02:55:24.991422    9271 controller.go:264] Broadcasting leadership assertion with token "lY0NAfXnbdfCHzT-7Dz8mw"
I0908 02:55:24.991650    9271 leadership.go:37] Got LeaderNotification view:<leader:<id:"etcd-events-a" endpoints:"172.20.56.43:3997" > leadership_token:"lY0NAfXnbdfCHzT-7Dz8mw" healthy:<id:"etcd-events-a" endpoints:"172.20.56.43:3997" > > 
I0908 02:55:24.991769    9271 controller.go:293] I am leader with token "lY0NAfXnbdfCHzT-7Dz8mw"
I0908 02:55:24.992336    9271 controller.go:703] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002]
I0908 02:55:25.009260    9271 controller.go:300] etcd cluster state: etcdClusterState
  members:
    {"name":"etcd-events-a","peerURLs":["https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381"],"endpoints":["https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002"],"ID":"9452083693436827871"}
  peers:
    etcdClusterPeerInfo{peer=peer{id:"etcd-events-a" endpoints:"172.20.56.43:3997" }, info=cluster_name:"etcd-events" node_configuration:<name:"etcd-events-a" peer_urls:"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381" client_urls:"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002" quarantined_client_urls:"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:3995" > etcd_state:<cluster:<cluster_token:"zTgSsun4YIoN9xKnIdeJKA" nodes:<name:"etcd-events-a" peer_urls:"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381" client_urls:"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002" quarantined_client_urls:"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:3995" tls_enabled:true > > etcd_version:"3.4.13" > }
I0908 02:55:25.009342    9271 controller.go:301] etcd cluster members: map[9452083693436827871:{"name":"etcd-events-a","peerURLs":["https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381"],"endpoints":["https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002"],"ID":"9452083693436827871"}]
I0908 02:55:25.009357    9271 controller.go:639] sending member map to all peers: members:<name:"etcd-events-a" dns:"etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io" addresses:"172.20.56.43" > 
I0908 02:55:25.009539    9271 etcdserver.go:248] updating hosts: map[172.20.56.43:[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]]
I0908 02:55:25.009551    9271 hosts.go:84] hosts update: primary=map[172.20.56.43:[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:[172.20.56.43 172.20.56.43]], final=map[172.20.56.43:[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]]
I0908 02:55:25.009602    9271 hosts.go:181] skipping update of unchanged /etc/hosts
I0908 02:55:25.009662    9271 commands.go:38] not refreshing commands - TTL not hit
I0908 02:55:25.009673    9271 s3fs.go:290] Reading file "s3://k8s-kops-prow/e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created"
I0908 02:55:25.609867    9271 controller.go:393] spec member_count:1 etcd_version:"3.4.13" 
I0908 02:55:25.609934    9271 controller.go:555] controller loop complete
I0908 02:55:35.611185    9271 controller.go:187] starting controller iteration
I0908 02:55:35.611212    9271 controller.go:264] Broadcasting leadership assertion with token "lY0NAfXnbdfCHzT-7Dz8mw"
I0908 02:55:35.611521    9271 leadership.go:37] Got LeaderNotification view:<leader:<id:"etcd-events-a" endpoints:"172.20.56.43:3997" > leadership_token:"lY0NAfXnbdfCHzT-7Dz8mw" healthy:<id:"etcd-events-a" endpoints:"172.20.56.43:3997" > > 
I0908 02:55:35.611668    9271 controller.go:293] I am leader with token "lY0NAfXnbdfCHzT-7Dz8mw"
I0908 02:55:35.612663    9271 controller.go:703] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002]
I0908 02:55:35.628423    9271 controller.go:300] etcd cluster state: etcdClusterState
  members:
    {"name":"etcd-events-a","peerURLs":["https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381"],"endpoints":["https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002"],"ID":"9452083693436827871"}
  peers:
    etcdClusterPeerInfo{peer=peer{id:"etcd-events-a" endpoints:"172.20.56.43:3997" }, info=cluster_name:"etcd-events" node_configuration:<name:"etcd-events-a" peer_urls:"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381" client_urls:"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002" quarantined_client_urls:"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:3995" > etcd_state:<cluster:<cluster_token:"zTgSsun4YIoN9xKnIdeJKA" nodes:<name:"etcd-events-a" peer_urls:"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381" client_urls:"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002" quarantined_client_urls:"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:3995" tls_enabled:true > > etcd_version:"3.4.13" > }
I0908 02:55:35.628494    9271 controller.go:301] etcd cluster members: map[9452083693436827871:{"name":"etcd-events-a","peerURLs":["https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381"],"endpoints":["https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002"],"ID":"9452083693436827871"}]
I0908 02:55:35.628509    9271 controller.go:639] sending member map to all peers: members:<name:"etcd-events-a" dns:"etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io" addresses:"172.20.56.43" > 
I0908 02:55:35.628700    9271 etcdserver.go:248] updating hosts: map[172.20.56.43:[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]]
I0908 02:55:35.628719    9271 hosts.go:84] hosts update: primary=map[172.20.56.43:[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:[172.20.56.43 172.20.56.43]], final=map[172.20.56.43:[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]]
I0908 02:55:35.628771    9271 hosts.go:181] skipping update of unchanged /etc/hosts
I0908 02:55:35.628866    9271 commands.go:38] not refreshing commands - TTL not hit
I0908 02:55:35.628876    9271 s3fs.go:290] Reading file "s3://k8s-kops-prow/e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created"
I0908 02:55:36.224154    9271 controller.go:393] spec member_count:1 etcd_version:"3.4.13" 
I0908 02:55:36.224228    9271 controller.go:555] controller loop complete
I0908 02:55:42.922026    9271 volumes.go:86] AWS API Request: ec2/DescribeVolumes
I0908 02:55:43.028004    9271 volumes.go:86] AWS API Request: ec2/DescribeInstances
I0908 02:55:43.065425    9271 hosts.go:84] hosts update: primary=map[172.20.56.43:[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:[172.20.56.43 172.20.56.43]], final=map[172.20.56.43:[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]]
I0908 02:55:43.065494    9271 hosts.go:181] skipping update of unchanged /etc/hosts
I0908 02:55:46.225740    9271 controller.go:187] starting controller iteration
I0908 02:55:46.225770    9271 controller.go:264] Broadcasting leadership assertion with token "lY0NAfXnbdfCHzT-7Dz8mw"
I0908 02:55:46.226073    9271 leadership.go:37] Got LeaderNotification view:<leader:<id:"etcd-events-a" endpoints:"172.20.56.43:3997" > leadership_token:"lY0NAfXnbdfCHzT-7Dz8mw" healthy:<id:"etcd-events-a" endpoints:"172.20.56.43:3997" > > 
I0908 02:55:46.226212    9271 controller.go:293] I am leader with token "lY0NAfXnbdfCHzT-7Dz8mw"
I0908 02:55:46.226653    9271 controller.go:703] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002]
I0908 02:55:46.238503    9271 controller.go:300] etcd cluster state: etcdClusterState
  members:
    {"name":"etcd-events-a","peerURLs":["https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381"],"endpoints":["https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002"],"ID":"9452083693436827871"}
  peers:
    etcdClusterPeerInfo{peer=peer{id:"etcd-events-a" endpoints:"172.20.56.43:3997" }, info=cluster_name:"etcd-events" node_configuration:<name:"etcd-events-a" peer_urls:"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381" client_urls:"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002" quarantined_client_urls:"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:3995" > etcd_state:<cluster:<cluster_token:"zTgSsun4YIoN9xKnIdeJKA" nodes:<name:"etcd-events-a" peer_urls:"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381" client_urls:"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002" quarantined_client_urls:"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:3995" tls_enabled:true > > etcd_version:"3.4.13" > }
I0908 02:55:46.238573    9271 controller.go:301] etcd cluster members: map[9452083693436827871:{"name":"etcd-events-a","peerURLs":["https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381"],"endpoints":["https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002"],"ID":"9452083693436827871"}]
I0908 02:55:46.238590    9271 controller.go:639] sending member map to all peers: members:<name:"etcd-events-a" dns:"etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io" addresses:"172.20.56.43" > 
I0908 02:55:46.238790    9271 etcdserver.go:248] updating hosts: map[172.20.56.43:[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]]
I0908 02:55:46.238805    9271 hosts.go:84] hosts update: primary=map[172.20.56.43:[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:[172.20.56.43 172.20.56.43]], final=map[172.20.56.43:[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]]
I0908 02:55:46.238870    9271 hosts.go:181] skipping update of unchanged /etc/hosts
I0908 02:55:46.238947    9271 commands.go:38] not refreshing commands - TTL not hit
I0908 02:55:46.238960    9271 s3fs.go:290] Reading file "s3://k8s-kops-prow/e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created"
I0908 02:55:46.830891    9271 controller.go:393] spec member_count:1 etcd_version:"3.4.13" 
I0908 02:55:46.830964    9271 controller.go:555] controller loop complete
I0908 02:55:56.832910    9271 controller.go:187] starting controller iteration
I0908 02:55:56.832942    9271 controller.go:264] Broadcasting leadership assertion with token "lY0NAfXnbdfCHzT-7Dz8mw"
I0908 02:55:56.833179    9271 leadership.go:37] Got LeaderNotification view:<leader:<id:"etcd-events-a" endpoints:"172.20.56.43:3997" > leadership_token:"lY0NAfXnbdfCHzT-7Dz8mw" healthy:<id:"etcd-events-a" endpoints:"172.20.56.43:3997" > > 
I0908 02:55:56.833326    9271 controller.go:293] I am leader with token "lY0NAfXnbdfCHzT-7Dz8mw"
I0908 02:55:56.834034    9271 controller.go:703] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002]
I0908 02:55:56.847389    9271 controller.go:300] etcd cluster state: etcdClusterState
  members:
    {"name":"etcd-events-a","peerURLs":["https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381"],"endpoints":["https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002"],"ID":"9452083693436827871"}
  peers:
    etcdClusterPeerInfo{peer=peer{id:"etcd-events-a" endpoints:"172.20.56.43:3997" }, info=cluster_name:"etcd-events" node_configuration:<name:"etcd-events-a" peer_urls:"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381" client_urls:"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002" quarantined_client_urls:"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:3995" > etcd_state:<cluster:<cluster_token:"zTgSsun4YIoN9xKnIdeJKA" nodes:<name:"etcd-events-a" peer_urls:"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381" client_urls:"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002" quarantined_client_urls:"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:3995" tls_enabled:true > > etcd_version:"3.4.13" > }
I0908 02:55:56.847467    9271 controller.go:301] etcd cluster members: map[9452083693436827871:{"name":"etcd-events-a","peerURLs":["https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381"],"endpoints":["https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002"],"ID":"9452083693436827871"}]
I0908 02:55:56.847482    9271 controller.go:639] sending member map to all peers: members:<name:"etcd-events-a" dns:"etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io" addresses:"172.20.56.43" > 
I0908 02:55:56.847658    9271 etcdserver.go:248] updating hosts: map[172.20.56.43:[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]]
I0908 02:55:56.847675    9271 hosts.go:84] hosts update: primary=map[172.20.56.43:[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:[172.20.56.43 172.20.56.43]], final=map[172.20.56.43:[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]]
I0908 02:55:56.847723    9271 hosts.go:181] skipping update of unchanged /etc/hosts
I0908 02:55:56.847787    9271 commands.go:38] not refreshing commands - TTL not hit
I0908 02:55:56.847802    9271 s3fs.go:290] Reading file "s3://k8s-kops-prow/e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created"
I0908 02:55:57.453682    9271 controller.go:393] spec member_count:1 etcd_version:"3.4.13" 
I0908 02:55:57.453756    9271 controller.go:555] controller loop complete
I0908 02:56:07.455553    9271 controller.go:187] starting controller iteration
I0908 02:56:07.455581    9271 controller.go:264] Broadcasting leadership assertion with token "lY0NAfXnbdfCHzT-7Dz8mw"
I0908 02:56:07.455897    9271 leadership.go:37] Got LeaderNotification view:<leader:<id:"etcd-events-a" endpoints:"172.20.56.43:3997" > leadership_token:"lY0NAfXnbdfCHzT-7Dz8mw" healthy:<id:"etcd-events-a" endpoints:"172.20.56.43:3997" > > 
I0908 02:56:07.456035    9271 controller.go:293] I am leader with token "lY0NAfXnbdfCHzT-7Dz8mw"
I0908 02:56:07.456576    9271 controller.go:703] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002]
I0908 02:56:07.470226    9271 controller.go:300] etcd cluster state: etcdClusterState
  members:
    {"name":"etcd-events-a","peerURLs":["https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381"],"endpoints":["https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002"],"ID":"9452083693436827871"}
  peers:
    etcdClusterPeerInfo{peer=peer{id:"etcd-events-a" endpoints:"172.20.56.43:3997" }, info=cluster_name:"etcd-events" node_configuration:<name:"etcd-events-a" peer_urls:"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381" client_urls:"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002" quarantined_client_urls:"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:3995" > etcd_state:<cluster:<cluster_token:"zTgSsun4YIoN9xKnIdeJKA" nodes:<name:"etcd-events-a" peer_urls:"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381" client_urls:"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002" quarantined_client_urls:"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:3995" tls_enabled:true > > etcd_version:"3.4.13" > }
I0908 02:56:07.470291    9271 controller.go:301] etcd cluster members: map[9452083693436827871:{"name":"etcd-events-a","peerURLs":["https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381"],"endpoints":["https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002"],"ID":"9452083693436827871"}]
I0908 02:56:07.470305    9271 controller.go:639] sending member map to all peers: members:<name:"etcd-events-a" dns:"etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io" addresses:"172.20.56.43" > 
I0908 02:56:07.470503    9271 etcdserver.go:248] updating hosts: map[172.20.56.43:[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]]
I0908 02:56:07.470517    9271 hosts.go:84] hosts update: primary=map[172.20.56.43:[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:[172.20.56.43 172.20.56.43]], final=map[172.20.56.43:[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]]
I0908 02:56:07.470572    9271 hosts.go:181] skipping update of unchanged /etc/hosts
I0908 02:56:07.470657    9271 commands.go:38] not refreshing commands - TTL not hit
I0908 02:56:07.470671    9271 s3fs.go:290] Reading file "s3://k8s-kops-prow/e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created"
I0908 02:56:08.068395    9271 controller.go:393] spec member_count:1 etcd_version:"3.4.13" 
I0908 02:56:08.068469    9271 controller.go:555] controller loop complete
I0908 02:56:18.069932    9271 controller.go:187] starting controller iteration
I0908 02:56:18.069961    9271 controller.go:264] Broadcasting leadership assertion with token "lY0NAfXnbdfCHzT-7Dz8mw"
I0908 02:56:18.070252    9271 leadership.go:37] Got LeaderNotification view:<leader:<id:"etcd-events-a" endpoints:"172.20.56.43:3997" > leadership_token:"lY0NAfXnbdfCHzT-7Dz8mw" healthy:<id:"etcd-events-a" endpoints:"172.20.56.43:3997" > > 
I0908 02:56:18.070403    9271 controller.go:293] I am leader with token "lY0NAfXnbdfCHzT-7Dz8mw"
I0908 02:56:18.070747    9271 controller.go:703] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002]
I0908 02:56:18.082236    9271 controller.go:300] etcd cluster state: etcdClusterState
  members:
    {"name":"etcd-events-a","peerURLs":["https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381"],"endpoints":["https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002"],"ID":"9452083693436827871"}
  peers:
    etcdClusterPeerInfo{peer=peer{id:"etcd-events-a" endpoints:"172.20.56.43:3997" }, info=cluster_name:"etcd-events" node_configuration:<name:"etcd-events-a" peer_urls:"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381" client_urls:"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002" quarantined_client_urls:"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:3995" > etcd_state:<cluster:<cluster_token:"zTgSsun4YIoN9xKnIdeJKA" nodes:<name:"etcd-events-a" peer_urls:"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381" client_urls:"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002" quarantined_client_urls:"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:3995" tls_enabled:true > > etcd_version:"3.4.13" > }
I0908 02:56:18.082330    9271 controller.go:301] etcd cluster members: map[9452083693436827871:{"name":"etcd-events-a","peerURLs":["https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381"],"endpoints":["https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002"],"ID":"9452083693436827871"}]
I0908 02:56:18.082348    9271 controller.go:639] sending member map to all peers: members:<name:"etcd-events-a" dns:"etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io" addresses:"172.20.56.43" > 
I0908 02:56:18.082537    9271 etcdserver.go:248] updating hosts: map[172.20.56.43:[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]]
I0908 02:56:18.082553    9271 hosts.go:84] hosts update: primary=map[172.20.56.43:[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:[172.20.56.43 172.20.56.43]], final=map[172.20.56.43:[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]]
I0908 02:56:18.082608    9271 hosts.go:181] skipping update of unchanged /etc/hosts
I0908 02:56:18.082676    9271 commands.go:38] not refreshing commands - TTL not hit
I0908 02:56:18.082692    9271 s3fs.go:290] Reading file "s3://k8s-kops-prow/e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created"
I0908 02:56:18.690423    9271 controller.go:393] spec member_count:1 etcd_version:"3.4.13" 
I0908 02:56:18.690494    9271 controller.go:555] controller loop complete
I0908 02:56:28.692652    9271 controller.go:187] starting controller iteration
I0908 02:56:28.692681    9271 controller.go:264] Broadcasting leadership assertion with token "lY0NAfXnbdfCHzT-7Dz8mw"
I0908 02:56:28.693008    9271 leadership.go:37] Got LeaderNotification view:<leader:<id:"etcd-events-a" endpoints:"172.20.56.43:3997" > leadership_token:"lY0NAfXnbdfCHzT-7Dz8mw" healthy:<id:"etcd-events-a" endpoints:"172.20.56.43:3997" > > 
I0908 02:56:28.693120    9271 controller.go:293] I am leader with token "lY0NAfXnbdfCHzT-7Dz8mw"
I0908 02:56:28.693689    9271 controller.go:703] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002]
I0908 02:56:28.706999    9271 controller.go:300] etcd cluster state: etcdClusterState
  members:
    {"name":"etcd-events-a","peerURLs":["https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381"],"endpoints":["https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002"],"ID":"9452083693436827871"}
  peers:
    etcdClusterPeerInfo{peer=peer{id:"etcd-events-a" endpoints:"172.20.56.43:3997" }, info=cluster_name:"etcd-events" node_configuration:<name:"etcd-events-a" peer_urls:"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381" client_urls:"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002" quarantined_client_urls:"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:3995" > etcd_state:<cluster:<cluster_token:"zTgSsun4YIoN9xKnIdeJKA" nodes:<name:"etcd-events-a" peer_urls:"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381" client_urls:"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002" quarantined_client_urls:"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:3995" tls_enabled:true > > etcd_version:"3.4.13" > }
I0908 02:56:28.707079    9271 controller.go:301] etcd cluster members: map[9452083693436827871:{"name":"etcd-events-a","peerURLs":["https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381"],"endpoints":["https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002"],"ID":"9452083693436827871"}]
I0908 02:56:28.707097    9271 controller.go:639] sending member map to all peers: members:<name:"etcd-events-a" dns:"etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io" addresses:"172.20.56.43" > 
I0908 02:56:28.707292    9271 etcdserver.go:248] updating hosts: map[172.20.56.43:[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]]
I0908 02:56:28.707307    9271 hosts.go:84] hosts update: primary=map[172.20.56.43:[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:[172.20.56.43 172.20.56.43]], final=map[172.20.56.43:[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]]
I0908 02:56:28.707362    9271 hosts.go:181] skipping update of unchanged /etc/hosts
I0908 02:56:28.707439    9271 commands.go:38] not refreshing commands - TTL not hit
I0908 02:56:28.707454    9271 s3fs.go:290] Reading file "s3://k8s-kops-prow/e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created"
I0908 02:56:29.307265    9271 controller.go:393] spec member_count:1 etcd_version:"3.4.13" 
I0908 02:56:29.307337    9271 controller.go:555] controller loop complete
I0908 02:56:39.308914    9271 controller.go:187] starting controller iteration
I0908 02:56:39.308941    9271 controller.go:264] Broadcasting leadership assertion with token "lY0NAfXnbdfCHzT-7Dz8mw"
I0908 02:56:39.309189    9271 leadership.go:37] Got LeaderNotification view:<leader:<id:"etcd-events-a" endpoints:"172.20.56.43:3997" > leadership_token:"lY0NAfXnbdfCHzT-7Dz8mw" healthy:<id:"etcd-events-a" endpoints:"172.20.56.43:3997" > > 
I0908 02:56:39.309332    9271 controller.go:293] I am leader with token "lY0NAfXnbdfCHzT-7Dz8mw"
I0908 02:56:39.309880    9271 controller.go:703] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002]
I0908 02:56:39.321229    9271 controller.go:300] etcd cluster state: etcdClusterState
  members:
    {"name":"etcd-events-a","peerURLs":["https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381"],"endpoints":["https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002"],"ID":"9452083693436827871"}
  peers:
    etcdClusterPeerInfo{peer=peer{id:"etcd-events-a" endpoints:"172.20.56.43:3997" }, info=cluster_name:"etcd-events" node_configuration:<name:"etcd-events-a" peer_urls:"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381" client_urls:"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002" quarantined_client_urls:"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:3995" > etcd_state:<cluster:<cluster_token:"zTgSsun4YIoN9xKnIdeJKA" nodes:<name:"etcd-events-a" peer_urls:"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381" client_urls:"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002" quarantined_client_urls:"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:3995" tls_enabled:true > > etcd_version:"3.4.13" > }
I0908 02:56:39.321305    9271 controller.go:301] etcd cluster members: map[9452083693436827871:{"name":"etcd-events-a","peerURLs":["https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381"],"endpoints":["https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002"],"ID":"9452083693436827871"}]
I0908 02:56:39.321320    9271 controller.go:639] sending member map to all peers: members:<name:"etcd-events-a" dns:"etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io" addresses:"172.20.56.43" > 
I0908 02:56:39.321511    9271 etcdserver.go:248] updating hosts: map[172.20.56.43:[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]]
I0908 02:56:39.321526    9271 hosts.go:84] hosts update: primary=map[172.20.56.43:[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:[172.20.56.43 172.20.56.43]], final=map[172.20.56.43:[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]]
I0908 02:56:39.321580    9271 hosts.go:181] skipping update of unchanged /etc/hosts
I0908 02:56:39.321656    9271 commands.go:38] not refreshing commands - TTL not hit
I0908 02:56:39.321670    9271 s3fs.go:290] Reading file "s3://k8s-kops-prow/e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created"
I0908 02:56:39.916658    9271 controller.go:393] spec member_count:1 etcd_version:"3.4.13" 
I0908 02:56:39.916728    9271 controller.go:555] controller loop complete
I0908 02:56:43.066040    9271 volumes.go:86] AWS API Request: ec2/DescribeVolumes
I0908 02:56:43.174875    9271 volumes.go:86] AWS API Request: ec2/DescribeInstances
I0908 02:56:43.216557    9271 hosts.go:84] hosts update: primary=map[172.20.56.43:[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:[172.20.56.43 172.20.56.43]], final=map[172.20.56.43:[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]]
I0908 02:56:43.216627    9271 hosts.go:181] skipping update of unchanged /etc/hosts
I0908 02:56:49.917927    9271 controller.go:187] starting controller iteration
I0908 02:56:49.917956    9271 controller.go:264] Broadcasting leadership assertion with token "lY0NAfXnbdfCHzT-7Dz8mw"
I0908 02:56:49.918265    9271 leadership.go:37] Got LeaderNotification view:<leader:<id:"etcd-events-a" endpoints:"172.20.56.43:3997" > leadership_token:"lY0NAfXnbdfCHzT-7Dz8mw" healthy:<id:"etcd-events-a" endpoints:"172.20.56.43:3997" > > 
I0908 02:56:49.918416    9271 controller.go:293] I am leader with token "lY0NAfXnbdfCHzT-7Dz8mw"
I0908 02:56:49.918992    9271 controller.go:703] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002]
I0908 02:56:49.932902    9271 controller.go:300] etcd cluster state: etcdClusterState
  members:
    {"name":"etcd-events-a","peerURLs":["https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381"],"endpoints":["https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002"],"ID":"9452083693436827871"}
  peers:
    etcdClusterPeerInfo{peer=peer{id:"etcd-events-a" endpoints:"172.20.56.43:3997" }, info=cluster_name:"etcd-events" node_configuration:<name:"etcd-events-a" peer_urls:"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381" client_urls:"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002" quarantined_client_urls:"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:3995" > etcd_state:<cluster:<cluster_token:"zTgSsun4YIoN9xKnIdeJKA" nodes:<name:"etcd-events-a" peer_urls:"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381" client_urls:"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002" quarantined_client_urls:"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:3995" tls_enabled:true > > etcd_version:"3.4.13" > }
I0908 
02:56:49.932988    9271 controller.go:301] etcd cluster members: map[9452083693436827871:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002\"],\"ID\":\"9452083693436827871\"}]\nI0908 02:56:49.933003    9271 controller.go:639] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io\" addresses:\"172.20.56.43\" > \nI0908 02:56:49.933167    9271 etcdserver.go:248] updating hosts: map[172.20.56.43:[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]]\nI0908 02:56:49.933183    9271 hosts.go:84] hosts update: primary=map[172.20.56.43:[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:[172.20.56.43 172.20.56.43]], final=map[172.20.56.43:[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]]\nI0908 02:56:49.933235    9271 hosts.go:181] skipping update of unchanged /etc/hosts\nI0908 02:56:49.933333    9271 commands.go:38] not refreshing commands - TTL not hit\nI0908 02:56:49.933347    9271 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created\"\nI0908 02:56:50.523467    9271 controller.go:393] spec member_count:1 etcd_version:\"3.4.13\" \nI0908 02:56:50.523531    9271 controller.go:555] controller loop complete\nI0908 02:57:00.524674    9271 controller.go:187] starting controller iteration\nI0908 02:57:00.524703    9271 controller.go:264] Broadcasting leadership assertion with token \"lY0NAfXnbdfCHzT-7Dz8mw\"\nI0908 02:57:00.524951    9271 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.56.43:3997\" > leadership_token:\"lY0NAfXnbdfCHzT-7Dz8mw\" healthy:<id:\"etcd-events-a\" endpoints:\"172.20.56.43:3997\" > > \nI0908 02:57:00.525096    9271 controller.go:293] I am leader with token \"lY0NAfXnbdfCHzT-7Dz8mw\"\nI0908 02:57:00.525628    9271 controller.go:703] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002]\nI0908 02:57:00.536778    9271 controller.go:300] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002\"],\"ID\":\"9452083693436827871\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"172.20.56.43:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:3995\" > etcd_state:<cluster:<cluster_token:\"zTgSsun4YIoN9xKnIdeJKA\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > 
etcd_version:\"3.4.13\" > }\nI0908 02:57:00.536858    9271 controller.go:301] etcd cluster members: map[9452083693436827871:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002\"],\"ID\":\"9452083693436827871\"}]\nI0908 02:57:00.536873    9271 controller.go:639] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io\" addresses:\"172.20.56.43\" > \nI0908 02:57:00.537059    9271 etcdserver.go:248] updating hosts: map[172.20.56.43:[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]]\nI0908 02:57:00.537075    9271 hosts.go:84] hosts update: primary=map[172.20.56.43:[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:[172.20.56.43 172.20.56.43]], final=map[172.20.56.43:[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]]\nI0908 02:57:00.537129    9271 hosts.go:181] skipping update of unchanged /etc/hosts\nI0908 02:57:00.537205    9271 commands.go:38] not refreshing commands - TTL not hit\nI0908 02:57:00.537218    9271 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created\"\nI0908 02:57:01.141291    9271 controller.go:393] spec member_count:1 etcd_version:\"3.4.13\" \nI0908 02:57:01.141364    9271 controller.go:555] controller loop complete\nI0908 02:57:11.143472    9271 controller.go:187] starting controller iteration\nI0908 02:57:11.143504    9271 controller.go:264] Broadcasting leadership assertion with token \"lY0NAfXnbdfCHzT-7Dz8mw\"\nI0908 02:57:11.143836    9271 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.56.43:3997\" > leadership_token:\"lY0NAfXnbdfCHzT-7Dz8mw\" healthy:<id:\"etcd-events-a\" endpoints:\"172.20.56.43:3997\" > > \nI0908 02:57:11.143969    9271 controller.go:293] I am leader with token \"lY0NAfXnbdfCHzT-7Dz8mw\"\nI0908 02:57:11.144431    9271 controller.go:703] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002]\nI0908 02:57:11.155885    9271 controller.go:300] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002\"],\"ID\":\"9452083693436827871\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"172.20.56.43:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:3995\" > etcd_state:<cluster:<cluster_token:\"zTgSsun4YIoN9xKnIdeJKA\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:3995\" 
tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0908 02:57:11.155957    9271 controller.go:301] etcd cluster members: map[9452083693436827871:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002\"],\"ID\":\"9452083693436827871\"}]\nI0908 02:57:11.155973    9271 controller.go:639] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io\" addresses:\"172.20.56.43\" > \nI0908 02:57:11.156142    9271 etcdserver.go:248] updating hosts: map[172.20.56.43:[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]]\nI0908 02:57:11.156158    9271 hosts.go:84] hosts update: primary=map[172.20.56.43:[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:[172.20.56.43 172.20.56.43]], final=map[172.20.56.43:[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]]\nI0908 02:57:11.156207    9271 hosts.go:181] skipping update of unchanged /etc/hosts\nI0908 02:57:11.156270    9271 commands.go:38] not refreshing commands - TTL not hit\nI0908 02:57:11.156286    9271 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created\"\nI0908 02:57:11.754457    9271 controller.go:393] spec member_count:1 etcd_version:\"3.4.13\" \nI0908 02:57:11.754535    9271 controller.go:555] controller loop complete\nI0908 02:57:21.756214    9271 controller.go:187] starting controller iteration\nI0908 02:57:21.756243    9271 controller.go:264] Broadcasting leadership assertion with token \"lY0NAfXnbdfCHzT-7Dz8mw\"\nI0908 02:57:21.756459    9271 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.56.43:3997\" > leadership_token:\"lY0NAfXnbdfCHzT-7Dz8mw\" healthy:<id:\"etcd-events-a\" endpoints:\"172.20.56.43:3997\" > > \nI0908 02:57:21.756599    9271 controller.go:293] I am leader with token \"lY0NAfXnbdfCHzT-7Dz8mw\"\nI0908 02:57:21.757562    9271 controller.go:703] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002]\nI0908 02:57:21.771763    9271 controller.go:300] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002\"],\"ID\":\"9452083693436827871\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"172.20.56.43:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:3995\" > etcd_state:<cluster:<cluster_token:\"zTgSsun4YIoN9xKnIdeJKA\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002\" 
quarantined_client_urls:\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0908 02:57:21.771855    9271 controller.go:301] etcd cluster members: map[9452083693436827871:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002\"],\"ID\":\"9452083693436827871\"}]\nI0908 02:57:21.771874    9271 controller.go:639] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io\" addresses:\"172.20.56.43\" > \nI0908 02:57:21.772048    9271 etcdserver.go:248] updating hosts: map[172.20.56.43:[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]]\nI0908 02:57:21.772059    9271 hosts.go:84] hosts update: primary=map[172.20.56.43:[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:[172.20.56.43 172.20.56.43]], final=map[172.20.56.43:[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]]\nI0908 02:57:21.772097    9271 hosts.go:181] skipping update of unchanged /etc/hosts\nI0908 02:57:21.772145    9271 commands.go:38] not refreshing commands - TTL not hit\nI0908 02:57:21.772154    9271 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created\"\nI0908 02:57:22.373852    9271 controller.go:393] spec member_count:1 etcd_version:\"3.4.13\" \nI0908 02:57:22.373925    9271 controller.go:555] controller loop complete\nI0908 02:57:32.375378    9271 controller.go:187] starting controller iteration\nI0908 02:57:32.375408    9271 controller.go:264] Broadcasting leadership assertion with token \"lY0NAfXnbdfCHzT-7Dz8mw\"\nI0908 02:57:32.375678    9271 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.56.43:3997\" > leadership_token:\"lY0NAfXnbdfCHzT-7Dz8mw\" healthy:<id:\"etcd-events-a\" endpoints:\"172.20.56.43:3997\" > > \nI0908 02:57:32.375809    9271 controller.go:293] I am leader with token \"lY0NAfXnbdfCHzT-7Dz8mw\"\nI0908 02:57:32.376339    9271 controller.go:703] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002]\nI0908 02:57:32.387771    9271 controller.go:300] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002\"],\"ID\":\"9452083693436827871\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"172.20.56.43:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:3995\" > etcd_state:<cluster:<cluster_token:\"zTgSsun4YIoN9xKnIdeJKA\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381\" 
client_urls:\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0908 02:57:32.387855    9271 controller.go:301] etcd cluster members: map[9452083693436827871:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002\"],\"ID\":\"9452083693436827871\"}]\nI0908 02:57:32.387871    9271 controller.go:639] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io\" addresses:\"172.20.56.43\" > \nI0908 02:57:32.388059    9271 etcdserver.go:248] updating hosts: map[172.20.56.43:[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]]\nI0908 02:57:32.388075    9271 hosts.go:84] hosts update: primary=map[172.20.56.43:[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:[172.20.56.43 172.20.56.43]], final=map[172.20.56.43:[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]]\nI0908 02:57:32.388129    9271 hosts.go:181] skipping update of unchanged /etc/hosts\nI0908 02:57:32.388203    9271 commands.go:38] not refreshing commands - TTL not hit\nI0908 02:57:32.388216    9271 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created\"\nI0908 02:57:32.989408    9271 controller.go:393] spec member_count:1 etcd_version:\"3.4.13\" \nI0908 02:57:32.989480    9271 controller.go:555] controller loop complete\nI0908 02:57:42.991021    9271 controller.go:187] starting controller iteration\nI0908 02:57:42.991048    9271 controller.go:264] Broadcasting leadership assertion with token \"lY0NAfXnbdfCHzT-7Dz8mw\"\nI0908 02:57:42.991245    9271 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.56.43:3997\" > leadership_token:\"lY0NAfXnbdfCHzT-7Dz8mw\" healthy:<id:\"etcd-events-a\" endpoints:\"172.20.56.43:3997\" > > \nI0908 02:57:42.991372    9271 controller.go:293] I am leader with token \"lY0NAfXnbdfCHzT-7Dz8mw\"\nI0908 02:57:42.991928    9271 controller.go:703] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002]\nI0908 02:57:43.006647    9271 controller.go:300] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002\"],\"ID\":\"9452083693436827871\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"172.20.56.43:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:3995\" > etcd_state:<cluster:<cluster_token:\"zTgSsun4YIoN9xKnIdeJKA\" nodes:<name:\"etcd-events-a\" 
peer_urls:\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0908 02:57:43.006716    9271 controller.go:301] etcd cluster members: map[9452083693436827871:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002\"],\"ID\":\"9452083693436827871\"}]\nI0908 02:57:43.006732    9271 controller.go:639] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io\" addresses:\"172.20.56.43\" > \nI0908 02:57:43.006943    9271 etcdserver.go:248] updating hosts: map[172.20.56.43:[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]]\nI0908 02:57:43.006961    9271 hosts.go:84] hosts update: primary=map[172.20.56.43:[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:[172.20.56.43 172.20.56.43]], final=map[172.20.56.43:[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]]\nI0908 02:57:43.007011    9271 hosts.go:181] skipping update of unchanged /etc/hosts\nI0908 02:57:43.007095    9271 commands.go:38] not refreshing commands - TTL not hit\nI0908 02:57:43.007110    9271 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created\"\nI0908 02:57:43.217135    9271 volumes.go:86] AWS API Request: ec2/DescribeVolumes\nI0908 02:57:43.326506    9271 volumes.go:86] AWS API Request: ec2/DescribeInstances\nI0908 02:57:43.381184    9271 hosts.go:84] hosts update: primary=map[172.20.56.43:[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:[172.20.56.43 172.20.56.43]], final=map[172.20.56.43:[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]]\nI0908 02:57:43.381256    9271 hosts.go:181] skipping update of unchanged /etc/hosts\nI0908 02:57:43.603912    9271 controller.go:393] spec member_count:1 etcd_version:\"3.4.13\" \nI0908 02:57:43.603985    9271 controller.go:555] controller loop complete\nI0908 02:57:53.605175    9271 controller.go:187] starting controller iteration\nI0908 02:57:53.605206    9271 controller.go:264] Broadcasting leadership assertion with token \"lY0NAfXnbdfCHzT-7Dz8mw\"\nI0908 02:57:53.605453    9271 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.56.43:3997\" > leadership_token:\"lY0NAfXnbdfCHzT-7Dz8mw\" healthy:<id:\"etcd-events-a\" endpoints:\"172.20.56.43:3997\" > > \nI0908 02:57:53.605602    9271 controller.go:293] I am leader with token \"lY0NAfXnbdfCHzT-7Dz8mw\"\nI0908 02:57:53.606013    9271 controller.go:703] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002]\nI0908 02:57:53.617440    9271 controller.go:300] etcd cluster state: etcdClusterState\n  members:\n    
{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002\"],\"ID\":\"9452083693436827871\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"172.20.56.43:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:3995\" > etcd_state:<cluster:<cluster_token:\"zTgSsun4YIoN9xKnIdeJKA\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0908 02:57:53.617508    9271 controller.go:301] etcd cluster members: map[9452083693436827871:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002\"],\"ID\":\"9452083693436827871\"}]\nI0908 02:57:53.617524    9271 controller.go:639] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io\" addresses:\"172.20.56.43\" > \nI0908 02:57:53.617710    9271 etcdserver.go:248] updating hosts: map[172.20.56.43:[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]]\nI0908 02:57:53.617725    9271 hosts.go:84] hosts update: primary=map[172.20.56.43:[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:[172.20.56.43 172.20.56.43]], final=map[172.20.56.43:[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]]\nI0908 02:57:53.617780    9271 hosts.go:181] skipping update of unchanged /etc/hosts\nI0908 02:57:53.617868    9271 commands.go:38] not refreshing commands - TTL not hit\nI0908 02:57:53.617883    9271 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created\"\nI0908 02:57:54.224510    9271 controller.go:393] spec member_count:1 etcd_version:\"3.4.13\" \nI0908 02:57:54.224581    9271 controller.go:555] controller loop complete\nI0908 02:58:04.226088    9271 controller.go:187] starting controller iteration\nI0908 02:58:04.226116    9271 controller.go:264] Broadcasting leadership assertion with token \"lY0NAfXnbdfCHzT-7Dz8mw\"\nI0908 02:58:04.226350    9271 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.56.43:3997\" > leadership_token:\"lY0NAfXnbdfCHzT-7Dz8mw\" healthy:<id:\"etcd-events-a\" endpoints:\"172.20.56.43:3997\" > > \nI0908 02:58:04.226487    9271 controller.go:293] I am leader with token \"lY0NAfXnbdfCHzT-7Dz8mw\"\nI0908 02:58:04.227010    9271 controller.go:703] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002]\nI0908 02:58:04.238590    9271 controller.go:300] etcd cluster state: 
etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002\"],\"ID\":\"9452083693436827871\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"172.20.56.43:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:3995\" > etcd_state:<cluster:<cluster_token:\"zTgSsun4YIoN9xKnIdeJKA\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0908 02:58:04.238658    9271 controller.go:301] etcd cluster members: map[9452083693436827871:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002\"],\"ID\":\"9452083693436827871\"}]\nI0908 02:58:04.238673    9271 controller.go:639] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io\" addresses:\"172.20.56.43\" > \nI0908 02:58:04.238904    9271 etcdserver.go:248] updating hosts: map[172.20.56.43:[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]]\nI0908 02:58:04.238918    9271 hosts.go:84] hosts update: primary=map[172.20.56.43:[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:[172.20.56.43 172.20.56.43]], final=map[172.20.56.43:[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]]\nI0908 02:58:04.238970    9271 hosts.go:181] skipping update of unchanged /etc/hosts\nI0908 02:58:04.239055    9271 commands.go:38] not refreshing commands - TTL not hit\nI0908 02:58:04.239069    9271 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created\"\nI0908 02:58:04.841561    9271 controller.go:393] spec member_count:1 etcd_version:\"3.4.13\" \nI0908 02:58:04.841634    9271 controller.go:555] controller loop complete\nI0908 02:58:14.843362    9271 controller.go:187] starting controller iteration\nI0908 02:58:14.843400    9271 controller.go:264] Broadcasting leadership assertion with token \"lY0NAfXnbdfCHzT-7Dz8mw\"\nI0908 02:58:14.843618    9271 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.56.43:3997\" > leadership_token:\"lY0NAfXnbdfCHzT-7Dz8mw\" healthy:<id:\"etcd-events-a\" endpoints:\"172.20.56.43:3997\" > > \nI0908 02:58:14.843716    9271 controller.go:293] I am leader with token \"lY0NAfXnbdfCHzT-7Dz8mw\"\nI0908 02:58:14.844231    9271 controller.go:703] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002]\nI0908 02:58:14.857666    9271 controller.go:300] 
etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002\"],\"ID\":\"9452083693436827871\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"172.20.56.43:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:3995\" > etcd_state:<cluster:<cluster_token:\"zTgSsun4YIoN9xKnIdeJKA\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0908 02:58:14.857743    9271 controller.go:301] etcd cluster members: map[9452083693436827871:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002\"],\"ID\":\"9452083693436827871\"}]\nI0908 02:58:14.857759    9271 controller.go:639] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io\" addresses:\"172.20.56.43\" > \nI0908 02:58:14.857964    9271 etcdserver.go:248] updating hosts: map[172.20.56.43:[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]]\nI0908 02:58:14.857989    9271 hosts.go:84] hosts update: primary=map[172.20.56.43:[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:[172.20.56.43 172.20.56.43]], final=map[172.20.56.43:[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]]\nI0908 02:58:14.858050    9271 hosts.go:181] skipping update of unchanged /etc/hosts\nI0908 02:58:14.858139    9271 commands.go:38] not refreshing commands - TTL not hit\nI0908 02:58:14.858156    9271 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created\"\nI0908 02:58:15.453261    9271 controller.go:393] spec member_count:1 etcd_version:\"3.4.13\" \nI0908 02:58:15.453333    9271 controller.go:555] controller loop complete\nI0908 02:58:25.455261    9271 controller.go:187] starting controller iteration\nI0908 02:58:25.455292    9271 controller.go:264] Broadcasting leadership assertion with token \"lY0NAfXnbdfCHzT-7Dz8mw\"\nI0908 02:58:25.455553    9271 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.56.43:3997\" > leadership_token:\"lY0NAfXnbdfCHzT-7Dz8mw\" healthy:<id:\"etcd-events-a\" endpoints:\"172.20.56.43:3997\" > > \nI0908 02:58:25.455708    9271 controller.go:293] I am leader with token \"lY0NAfXnbdfCHzT-7Dz8mw\"\nI0908 02:58:25.456100    9271 controller.go:703] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002]\nI0908 02:58:25.467393    9271 
controller.go:300] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002\"],\"ID\":\"9452083693436827871\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"172.20.56.43:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:3995\" > etcd_state:<cluster:<cluster_token:\"zTgSsun4YIoN9xKnIdeJKA\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0908 02:58:25.467465    9271 controller.go:301] etcd cluster members: map[9452083693436827871:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002\"],\"ID\":\"9452083693436827871\"}]\nI0908 02:58:25.467482    9271 controller.go:639] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io\" addresses:\"172.20.56.43\" > \nI0908 02:58:25.467669    9271 etcdserver.go:248] updating hosts: map[172.20.56.43:[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]]\nI0908 02:58:25.467685    9271 hosts.go:84] hosts update: primary=map[172.20.56.43:[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:[172.20.56.43 172.20.56.43]], final=map[172.20.56.43:[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]]\nI0908 02:58:25.467740    9271 hosts.go:181] skipping update of unchanged /etc/hosts\nI0908 02:58:25.467815    9271 commands.go:38] not refreshing commands - TTL not hit\nI0908 02:58:25.467846    9271 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created\"\nI0908 02:58:26.067643    9271 controller.go:393] spec member_count:1 etcd_version:\"3.4.13\" \nI0908 02:58:26.067716    9271 controller.go:555] controller loop complete\nI0908 02:58:36.069813    9271 controller.go:187] starting controller iteration\nI0908 02:58:36.069852    9271 controller.go:264] Broadcasting leadership assertion with token \"lY0NAfXnbdfCHzT-7Dz8mw\"\nI0908 02:58:36.070081    9271 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.56.43:3997\" > leadership_token:\"lY0NAfXnbdfCHzT-7Dz8mw\" healthy:<id:\"etcd-events-a\" endpoints:\"172.20.56.43:3997\" > > \nI0908 02:58:36.070204    9271 controller.go:293] I am leader with token \"lY0NAfXnbdfCHzT-7Dz8mw\"\nI0908 02:58:36.070835    9271 controller.go:703] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002]\nI0908 
02:58:36.082442    9271 controller.go:300] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002\"],\"ID\":\"9452083693436827871\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"172.20.56.43:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:3995\" > etcd_state:<cluster:<cluster_token:\"zTgSsun4YIoN9xKnIdeJKA\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0908 02:58:36.082508    9271 controller.go:301] etcd cluster members: map[9452083693436827871:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002\"],\"ID\":\"9452083693436827871\"}]\nI0908 02:58:36.082523    9271 controller.go:639] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io\" addresses:\"172.20.56.43\" > \nI0908 02:58:36.082696    9271 etcdserver.go:248] updating hosts: map[172.20.56.43:[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]]\nI0908 02:58:36.082713    9271 hosts.go:84] hosts update: primary=map[172.20.56.43:[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:[172.20.56.43 172.20.56.43]], final=map[172.20.56.43:[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]]\nI0908 02:58:36.082768    9271 hosts.go:181] skipping update of unchanged /etc/hosts\nI0908 02:58:36.082852    9271 commands.go:38] not refreshing commands - TTL not hit\nI0908 02:58:36.082870    9271 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created\"\nI0908 02:58:36.680304    9271 controller.go:393] spec member_count:1 etcd_version:\"3.4.13\" \nI0908 02:58:36.680377    9271 controller.go:555] controller loop complete\nI0908 02:58:43.381911    9271 volumes.go:86] AWS API Request: ec2/DescribeVolumes\nI0908 02:58:43.500786    9271 volumes.go:86] AWS API Request: ec2/DescribeInstances\nI0908 02:58:43.539702    9271 hosts.go:84] hosts update: primary=map[172.20.56.43:[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:[172.20.56.43 172.20.56.43]], final=map[172.20.56.43:[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]]\nI0908 02:58:43.539768    9271 hosts.go:181] skipping update of unchanged /etc/hosts\nI0908 02:58:46.681834    9271 controller.go:187] starting controller iteration\nI0908 02:58:46.681863    
9271 controller.go:264] Broadcasting leadership assertion with token \"lY0NAfXnbdfCHzT-7Dz8mw\"\nI0908 02:58:46.682135    9271 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.56.43:3997\" > leadership_token:\"lY0NAfXnbdfCHzT-7Dz8mw\" healthy:<id:\"etcd-events-a\" endpoints:\"172.20.56.43:3997\" > > \nI0908 02:58:46.682312    9271 controller.go:293] I am leader with token \"lY0NAfXnbdfCHzT-7Dz8mw\"\nI0908 02:58:46.682721    9271 controller.go:703] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002]\nI0908 02:58:46.694999    9271 controller.go:300] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002\"],\"ID\":\"9452083693436827871\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"172.20.56.43:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:3995\" > etcd_state:<cluster:<cluster_token:\"zTgSsun4YIoN9xKnIdeJKA\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0908 02:58:46.695069    9271 controller.go:301] etcd cluster members: map[9452083693436827871:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002\"],\"ID\":\"9452083693436827871\"}]\nI0908 02:58:46.695085    9271 controller.go:639] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io\" addresses:\"172.20.56.43\" > \nI0908 02:58:46.695276    9271 etcdserver.go:248] updating hosts: map[172.20.56.43:[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]]\nI0908 02:58:46.695293    9271 hosts.go:84] hosts update: primary=map[172.20.56.43:[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:[172.20.56.43 172.20.56.43]], final=map[172.20.56.43:[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]]\nI0908 02:58:46.695356    9271 hosts.go:181] skipping update of unchanged /etc/hosts\nI0908 02:58:46.695438    9271 commands.go:38] not refreshing commands - TTL not hit\nI0908 02:58:46.695454    9271 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created\"\nI0908 02:58:47.297617    9271 controller.go:393] spec member_count:1 etcd_version:\"3.4.13\" \nI0908 02:58:47.297694    9271 controller.go:555] controller loop complete\nI0908 02:58:57.298871    9271 controller.go:187] starting controller iteration\nI0908 
02:58:57.298900    9271 controller.go:264] Broadcasting leadership assertion with token \"lY0NAfXnbdfCHzT-7Dz8mw\"\nI0908 02:58:57.299130    9271 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.56.43:3997\" > leadership_token:\"lY0NAfXnbdfCHzT-7Dz8mw\" healthy:<id:\"etcd-events-a\" endpoints:\"172.20.56.43:3997\" > > \nI0908 02:58:57.299269    9271 controller.go:293] I am leader with token \"lY0NAfXnbdfCHzT-7Dz8mw\"\nI0908 02:58:57.299893    9271 controller.go:703] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002]\nI0908 02:58:57.313617    9271 controller.go:300] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002\"],\"ID\":\"9452083693436827871\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"172.20.56.43:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:3995\" > etcd_state:<cluster:<cluster_token:\"zTgSsun4YIoN9xKnIdeJKA\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0908 02:58:57.313689    9271 controller.go:301] etcd cluster members: map[9452083693436827871:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002\"],\"ID\":\"9452083693436827871\"}]\nI0908 02:58:57.313704    9271 controller.go:639] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io\" addresses:\"172.20.56.43\" > \nI0908 02:58:57.313925    9271 etcdserver.go:248] updating hosts: map[172.20.56.43:[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]]\nI0908 02:58:57.313940    9271 hosts.go:84] hosts update: primary=map[172.20.56.43:[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:[172.20.56.43 172.20.56.43]], final=map[172.20.56.43:[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]]\nI0908 02:58:57.314000    9271 hosts.go:181] skipping update of unchanged /etc/hosts\nI0908 02:58:57.314083    9271 commands.go:38] not refreshing commands - TTL not hit\nI0908 02:58:57.314097    9271 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created\"\nI0908 02:58:57.907243    9271 controller.go:393] spec member_count:1 etcd_version:\"3.4.13\" \nI0908 02:58:57.907315    9271 controller.go:555] controller loop complete\nI0908 02:59:07.909323    9271 controller.go:187] starting controller 
iteration\nI0908 02:59:07.909352    9271 controller.go:264] Broadcasting leadership assertion with token \"lY0NAfXnbdfCHzT-7Dz8mw\"\nI0908 02:59:07.909569    9271 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.56.43:3997\" > leadership_token:\"lY0NAfXnbdfCHzT-7Dz8mw\" healthy:<id:\"etcd-events-a\" endpoints:\"172.20.56.43:3997\" > > \nI0908 02:59:07.909701    9271 controller.go:293] I am leader with token \"lY0NAfXnbdfCHzT-7Dz8mw\"\nI0908 02:59:07.910119    9271 controller.go:703] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002]\nI0908 02:59:07.921691    9271 controller.go:300] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002\"],\"ID\":\"9452083693436827871\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"172.20.56.43:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:3995\" > etcd_state:<cluster:<cluster_token:\"zTgSsun4YIoN9xKnIdeJKA\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0908 02:59:07.921760    9271 controller.go:301] etcd cluster members: map[9452083693436827871:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4002\"],\"ID\":\"9452083693436827871\"}]\nI0908 02:59:07.921780    9271 controller.go:639] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io\" addresses:\"172.20.56.43\" > \nI0908 02:59:07.921978    9271 etcdserver.go:248] updating hosts: map[172.20.56.43:[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]]\nI0908 02:59:07.921993    9271 hosts.go:84] hosts update: primary=map[172.20.56.43:[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:[172.20.56.43 172.20.56.43]], final=map[172.20.56.43:[etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]]\nI0908 02:59:07.922047    9271 hosts.go:181] skipping update of unchanged /etc/hosts\nI0908 02:59:07.922121    9271 commands.go:38] not refreshing commands - TTL not hit\nI0908 02:59:07.922140    9271 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created\"\nI0908 02:59:08.511307    9271 controller.go:393] spec member_count:1 etcd_version:\"3.4.13\" \nI0908 02:59:08.511380    9271 controller.go:555] controller loop complete\n==== END logs for container etcd-manager of pod 
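The hosts.go:84 / hosts.go:181 pairs repeated throughout the loop above are etcd-manager reconciling /etc/hosts from two inputs: an authoritative peer map keyed by IP ("primary") and discovery results keyed by hostname ("fallbacks"). A minimal Go sketch of that merge, under the assumption (inferred from the log lines, not taken from the kops source) that fallback entries are used only for hostnames the primary map does not already cover:

package main

import "fmt"

// mergeHosts is a hypothetical reconstruction of the "hosts update:
// primary=..., fallbacks=..., final=..." computation logged above.
// primary maps IP -> hostnames; fallbacks maps hostname -> candidate IPs.
// The result maps IP -> hostnames, ready to be rendered into /etc/hosts.
func mergeHosts(primary, fallbacks map[string][]string) map[string][]string {
	final := map[string][]string{}
	covered := map[string]bool{}

	// Primary (authoritative) mappings win outright.
	for ip, names := range primary {
		final[ip] = append(final[ip], names...)
		for _, name := range names {
			covered[name] = true
		}
	}

	// Fallback (discovery) mappings fill in only uncovered hostnames.
	for name, ips := range fallbacks {
		if covered[name] {
			continue
		}
		for _, ip := range ips {
			final[ip] = append(final[ip], name)
		}
	}
	return final
}

func main() {
	// Values taken from the log lines above.
	primary := map[string][]string{
		"172.20.56.43": {"etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io"},
	}
	fallbacks := map[string][]string{
		"etcd-events-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io": {"172.20.56.43", "172.20.56.43"},
	}
	// Prints the merged map keyed by IP, matching the final=... value in the
	// log; since that value never changes, hosts.go:181 keeps skipping the
	// rewrite of the unchanged /etc/hosts.
	fmt.Println(mergeHosts(primary, fallbacks))
}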
==== START logs for container etcd-manager of pod kube-system/etcd-manager-main-ip-172-20-56-43.eu-west-3.compute.internal ====
etcd-manager
I0908 02:53:39.257894    9278 volumes.go:86] AWS API Request: ec2metadata/GetToken
I0908 02:53:39.258905    9278 volumes.go:86] AWS API Request: ec2metadata/GetDynamicData
I0908 02:53:39.259524    9278 volumes.go:86] AWS API Request: ec2metadata/GetMetadata
I0908 02:53:39.260059    9278 volumes.go:86] AWS API Request: ec2metadata/GetMetadata
I0908 02:53:39.260659    9278 volumes.go:86] AWS API Request: ec2metadata/GetMetadata
I0908 02:53:39.261297    9278 main.go:305] Mounting available etcd volumes matching tags [k8s.io/etcd/main k8s.io/role/master=1 kubernetes.io/cluster/e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io=owned]; nameTag=k8s.io/etcd/main
I0908 02:53:39.263432    9278 volumes.go:86] AWS API Request: ec2/DescribeVolumes
I0908 02:53:39.415536    9278 mounter.go:304] Trying to mount master volume: "vol-03f80f1c2112e07dd"
I0908 02:53:39.415553    9278 volumes.go:331] Trying to attach volume "vol-03f80f1c2112e07dd" at "/dev/xvdu"
I0908 02:53:39.415664    9278 volumes.go:86] AWS API Request: ec2/AttachVolume
W0908 02:53:39.670089    9278 volumes.go:343] Invalid value '/dev/xvdu' for unixDevice. Attachment point /dev/xvdu is already in use
I0908 02:53:39.670106    9278 volumes.go:331] Trying to attach volume "vol-03f80f1c2112e07dd" at "/dev/xvdv"
I0908 02:53:39.670227    9278 volumes.go:86] AWS API Request: ec2/AttachVolume
I0908 02:53:40.050502    9278 volumes.go:349] AttachVolume request returned {
  AttachTime: 2021-09-08 02:53:40.041 +0000 UTC,
  Device: "/dev/xvdv",
  InstanceId: "i-0b4df721753a52316",
  State: "attaching",
  VolumeId: "vol-03f80f1c2112e07dd"
}
I0908 02:53:40.050669    9278 volumes.go:86] AWS API Request: ec2/DescribeVolumes
I0908 02:53:40.162724    9278 mounter.go:318] Currently attached volumes: [0xc00038bc80]
I0908 02:53:40.162746    9278 mounter.go:72] Master volume "vol-03f80f1c2112e07dd" is attached at "/dev/xvdv"
I0908 02:53:40.162771    9278 mounter.go:86] Doing safe-format-and-mount of /dev/xvdv to /mnt/master-vol-03f80f1c2112e07dd
I0908 02:53:40.162792    9278 volumes.go:234] volume vol-03f80f1c2112e07dd not mounted at /rootfs/dev/xvdv
I0908 02:53:40.162806    9278 volumes.go:263] nvme path not found "/rootfs/dev/disk/by-id/nvme-Amazon_Elastic_Block_Store_vol03f80f1c2112e07dd"
I0908 02:53:40.162813    9278 volumes.go:251] volume vol-03f80f1c2112e07dd not mounted at nvme-Amazon_Elastic_Block_Store_vol03f80f1c2112e07dd
I0908 02:53:40.162834    9278 mounter.go:121] Waiting for volume "vol-03f80f1c2112e07dd" to be mounted
I0908 02:53:41.162928    9278 volumes.go:234] volume vol-03f80f1c2112e07dd not mounted at /rootfs/dev/xvdv
I0908 02:53:41.162978    9278 volumes.go:248] found nvme volume "nvme-Amazon_Elastic_Block_Store_vol03f80f1c2112e07dd" at "/dev/nvme2n1"
I0908 02:53:41.162988    9278 mounter.go:125] Found volume "vol-03f80f1c2112e07dd" mounted at device "/dev/nvme2n1"
I0908 02:53:41.163536    9278 mounter.go:171] Creating mount directory "/rootfs/mnt/master-vol-03f80f1c2112e07dd"
I0908 02:53:41.163639    9278 mounter.go:176] Mounting device "/dev/nvme2n1" on "/mnt/master-vol-03f80f1c2112e07dd"
I0908 02:53:41.163662    9278 mount_linux.go:446] Attempting to determine if disk "/dev/nvme2n1" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/nvme2n1])
/dev/nvme2n1])\nI0908 02:53:41.163681    9278 nsenter.go:132] Running nsenter command: nsenter [--mount=/rootfs/proc/1/ns/mnt -- blkid -p -s TYPE -s PTTYPE -o export /dev/nvme2n1]\nI0908 02:53:41.185044    9278 mount_linux.go:449] Output: \"\"\nI0908 02:53:41.185072    9278 mount_linux.go:408] Disk \"/dev/nvme2n1\" appears to be unformatted, attempting to format as type: \"ext4\" with options: [-F -m0 /dev/nvme2n1]\nI0908 02:53:41.185090    9278 nsenter.go:132] Running nsenter command: nsenter [--mount=/rootfs/proc/1/ns/mnt -- mkfs.ext4 -F -m0 /dev/nvme2n1]\nI0908 02:53:41.566879    9278 mount_linux.go:418] Disk successfully formatted (mkfs): ext4 - /dev/nvme2n1 /mnt/master-vol-03f80f1c2112e07dd\nI0908 02:53:41.566895    9278 mount_linux.go:436] Attempting to mount disk /dev/nvme2n1 in ext4 format at /mnt/master-vol-03f80f1c2112e07dd\nI0908 02:53:41.566910    9278 nsenter.go:80] nsenter mount /dev/nvme2n1 /mnt/master-vol-03f80f1c2112e07dd ext4 [defaults]\nI0908 02:53:41.566935    9278 nsenter.go:132] Running nsenter command: nsenter [--mount=/rootfs/proc/1/ns/mnt -- /bin/systemd-run --description=Kubernetes transient mount for /mnt/master-vol-03f80f1c2112e07dd --scope -- /bin/mount -t ext4 -o defaults /dev/nvme2n1 /mnt/master-vol-03f80f1c2112e07dd]\nI0908 02:53:41.646549    9278 nsenter.go:84] Output of mounting /dev/nvme2n1 to /mnt/master-vol-03f80f1c2112e07dd: Running scope as unit run-9370.scope.\nI0908 02:53:41.646582    9278 mount_linux.go:446] Attempting to determine if disk \"/dev/nvme2n1\" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/nvme2n1])\nI0908 02:53:41.646606    9278 nsenter.go:132] Running nsenter command: nsenter [--mount=/rootfs/proc/1/ns/mnt -- blkid -p -s TYPE -s PTTYPE -o export /dev/nvme2n1]\nI0908 02:53:41.649844    9278 mount_linux.go:449] Output: \"DEVNAME=/dev/nvme2n1\\nTYPE=ext4\\n\"\nI0908 02:53:41.649863    9278 resizefs_linux.go:55] ResizeFS.Resize - Expanding mounted volume /dev/nvme2n1\nI0908 02:53:41.649875    9278 nsenter.go:132] Running nsenter command: nsenter [--mount=/rootfs/proc/1/ns/mnt -- resize2fs /dev/nvme2n1]\nI0908 02:53:41.652288    9278 resizefs_linux.go:70] Device /dev/nvme2n1 resized successfully\nI0908 02:53:41.687733    9278 mount_linux.go:202] Cannot run systemd-run, assuming non-systemd OS\nI0908 02:53:41.687749    9278 mount_linux.go:203] systemd-run output: Failed to request invocation ID for scope: Unknown property or interface.\n, failed with: exit status 1\nI0908 02:53:41.689603    9278 mounter.go:224] mounting inside container: /rootfs/dev/nvme2n1 -> /rootfs/mnt/master-vol-03f80f1c2112e07dd\nI0908 02:53:41.689625    9278 mount_linux.go:175] Mounting cmd (mount) with arguments ( /rootfs/dev/nvme2n1 /rootfs/mnt/master-vol-03f80f1c2112e07dd)\nI0908 02:53:41.698286    9278 mounter.go:94] mounted master volume \"vol-03f80f1c2112e07dd\" on /mnt/master-vol-03f80f1c2112e07dd\nI0908 02:53:41.698311    9278 main.go:320] discovered IP address: 172.20.56.43\nI0908 02:53:41.698316    9278 main.go:325] Setting data dir to /rootfs/mnt/master-vol-03f80f1c2112e07dd\nI0908 02:53:41.998262    9278 certs.go:211] generating certificate for \"etcd-manager-server-etcd-a\"\nI0908 02:53:42.195775    9278 certs.go:211] generating certificate for \"etcd-manager-client-etcd-a\"\nI0908 02:53:42.203778    9278 server.go:87] starting GRPC server using TLS, ServerName=\"etcd-manager-server-etcd-a\"\nI0908 02:53:42.204620    9278 main.go:473] peerClientIPs: [172.20.56.43]\nI0908 02:53:42.400260    9278 certs.go:211] generating 
certificate for \"etcd-manager-etcd-a\"\nI0908 02:53:42.403464    9278 server.go:105] GRPC server listening on \"172.20.56.43:3996\"\nI0908 02:53:42.403911    9278 volumes.go:86] AWS API Request: ec2/DescribeVolumes\nI0908 02:53:42.537908    9278 volumes.go:86] AWS API Request: ec2/DescribeInstances\nI0908 02:53:42.577984    9278 peers.go:115] found new candidate peer from discovery: etcd-a [{172.20.56.43 0} {172.20.56.43 0}]\nI0908 02:53:42.578021    9278 hosts.go:84] hosts update: primary=map[], fallbacks=map[etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:[172.20.56.43 172.20.56.43]], final=map[172.20.56.43:[etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]]\nI0908 02:53:42.578186    9278 peers.go:295] connecting to peer \"etcd-a\" with TLS policy, servername=\"etcd-manager-server-etcd-a\"\nI0908 02:53:44.403838    9278 controller.go:187] starting controller iteration\nI0908 02:53:44.404235    9278 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-a\" endpoints:\"172.20.56.43:3996\" > leadership_token:\"MVypN_VwZ5W5I36x6S7Bhg\" healthy:<id:\"etcd-a\" endpoints:\"172.20.56.43:3996\" > > \nI0908 02:53:44.404457    9278 commands.go:41] refreshing commands\nI0908 02:53:44.404856    9278 s3context.go:334] product_uuid is \"ec2eb5d9-89b3-c8a2-5fce-864d2d240e9e\", assuming running on EC2\nI0908 02:53:44.406230    9278 s3context.go:166] got region from metadata: \"eu-west-3\"\nI0908 02:53:44.432626    9278 s3context.go:213] found bucket in region \"us-west-1\"\nI0908 02:53:45.113981    9278 vfs.go:120] listed commands in s3://k8s-kops-prow/e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io/backups/etcd/main/control: 0 commands\nI0908 02:53:45.114007    9278 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io/backups/etcd/main/control/etcd-cluster-spec\"\nI0908 02:53:55.273844    9278 controller.go:187] starting controller iteration\nI0908 02:53:55.273872    9278 controller.go:264] Broadcasting leadership assertion with token \"MVypN_VwZ5W5I36x6S7Bhg\"\nI0908 02:53:55.274121    9278 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-a\" endpoints:\"172.20.56.43:3996\" > leadership_token:\"MVypN_VwZ5W5I36x6S7Bhg\" healthy:<id:\"etcd-a\" endpoints:\"172.20.56.43:3996\" > > \nI0908 02:53:55.274272    9278 controller.go:293] I am leader with token \"MVypN_VwZ5W5I36x6S7Bhg\"\nI0908 02:53:55.274528    9278 controller.go:300] etcd cluster state: etcdClusterState\n  members:\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-a\" endpoints:\"172.20.56.43:3996\" }, info=cluster_name:\"etcd\" node_configuration:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2380\" client_urls:\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:3994\" > }\nI0908 02:53:55.274987    9278 controller.go:301] etcd cluster members: map[]\nI0908 02:53:55.275009    9278 controller.go:639] sending member map to all peers: \nI0908 02:53:55.275669    9278 commands.go:38] not refreshing commands - TTL not hit\nI0908 02:53:55.275685    9278 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io/backups/etcd/main/control/etcd-cluster-created\"\nI0908 02:53:55.879271    9278 controller.go:357] detected that there is no existing cluster\nI0908 02:53:55.879286    9278 commands.go:41] refreshing commands\nI0908 
02:53:56.113770    9278 vfs.go:120] listed commands in s3://k8s-kops-prow/e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io/backups/etcd/main/control: 0 commands\nI0908 02:53:56.113791    9278 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io/backups/etcd/main/control/etcd-cluster-spec\"\nI0908 02:53:56.267860    9278 controller.go:639] sending member map to all peers: members:<name:\"etcd-a\" dns:\"etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io\" addresses:\"172.20.56.43\" > \nI0908 02:53:56.268111    9278 etcdserver.go:248] updating hosts: map[172.20.56.43:[etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]]\nI0908 02:53:56.268134    9278 hosts.go:84] hosts update: primary=map[172.20.56.43:[etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]], fallbacks=map[etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:[172.20.56.43 172.20.56.43]], final=map[172.20.56.43:[etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]]\nI0908 02:53:56.268194    9278 hosts.go:181] skipping update of unchanged /etc/hosts\nI0908 02:53:56.268286    9278 newcluster.go:136] starting new etcd cluster with [etcdClusterPeerInfo{peer=peer{id:\"etcd-a\" endpoints:\"172.20.56.43:3996\" }, info=cluster_name:\"etcd\" node_configuration:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2380\" client_urls:\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:3994\" > }]\nI0908 02:53:56.268552    9278 newcluster.go:153] JoinClusterResponse: \nI0908 02:53:56.269720    9278 etcdserver.go:556] starting etcd with state new_cluster:true cluster:<cluster_token:\"WXVnfJEHTzHvFx4g6ALsig\" nodes:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2380\" client_urls:\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:3994\" tls_enabled:true > > etcd_version:\"3.4.13\" quarantined:true \nI0908 02:53:56.269773    9278 etcdserver.go:565] starting etcd with datadir /rootfs/mnt/master-vol-03f80f1c2112e07dd/data/WXVnfJEHTzHvFx4g6ALsig\nI0908 02:53:56.270523    9278 pki.go:58] adding peerClientIPs [172.20.56.43]\nI0908 02:53:56.270546    9278 pki.go:66] generating peer keypair for etcd: {CommonName:etcd-a Organization:[] AltNames:{DNSNames:[etcd-a etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io] IPs:[172.20.56.43 127.0.0.1]} Usages:[2 1]}\nI0908 02:53:56.497965    9278 certs.go:211] generating certificate for \"etcd-a\"\nI0908 02:53:56.501356    9278 pki.go:108] building client-serving certificate: {CommonName:etcd-a Organization:[] AltNames:{DNSNames:[etcd-a etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io] IPs:[127.0.0.1]} Usages:[1 2]}\nI0908 02:53:56.964281    9278 certs.go:211] generating certificate for \"etcd-a\"\nI0908 02:53:57.211689    9278 certs.go:211] generating certificate for \"etcd-a\"\nI0908 02:53:57.214511    9278 etcdprocess.go:203] executing command /opt/etcd-v3.4.13-linux-amd64/etcd [/opt/etcd-v3.4.13-linux-amd64/etcd]\nI0908 02:53:57.221929    9278 newcluster.go:171] JoinClusterResponse: \nI0908 02:53:57.221991    9278 s3fs.go:199] Writing file \"s3://k8s-kops-prow/e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io/backups/etcd/main/control/etcd-cluster-spec\"\nI0908 
02:53:57.222006    9278 s3context.go:241] Checking default bucket encryption for \"k8s-kops-prow\"\n2021-09-08 02:53:57.225071 I | pkg/flags: recognized and used environment variable ETCD_ADVERTISE_CLIENT_URLS=https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:3994\n2021-09-08 02:53:57.225108 I | pkg/flags: recognized and used environment variable ETCD_CERT_FILE=/rootfs/mnt/master-vol-03f80f1c2112e07dd/pki/WXVnfJEHTzHvFx4g6ALsig/clients/server.crt\n2021-09-08 02:53:57.225116 I | pkg/flags: recognized and used environment variable ETCD_CLIENT_CERT_AUTH=true\n2021-09-08 02:53:57.225125 I | pkg/flags: recognized and used environment variable ETCD_DATA_DIR=/rootfs/mnt/master-vol-03f80f1c2112e07dd/data/WXVnfJEHTzHvFx4g6ALsig\n2021-09-08 02:53:57.225138 I | pkg/flags: recognized and used environment variable ETCD_ENABLE_V2=false\n2021-09-08 02:53:57.225159 I | pkg/flags: recognized and used environment variable ETCD_INITIAL_ADVERTISE_PEER_URLS=https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2380\n2021-09-08 02:53:57.225164 I | pkg/flags: recognized and used environment variable ETCD_INITIAL_CLUSTER=etcd-a=https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2380\n2021-09-08 02:53:57.225169 I | pkg/flags: recognized and used environment variable ETCD_INITIAL_CLUSTER_STATE=new\n2021-09-08 02:53:57.225175 I | pkg/flags: recognized and used environment variable ETCD_INITIAL_CLUSTER_TOKEN=WXVnfJEHTzHvFx4g6ALsig\n2021-09-08 02:53:57.225181 I | pkg/flags: recognized and used environment variable ETCD_KEY_FILE=/rootfs/mnt/master-vol-03f80f1c2112e07dd/pki/WXVnfJEHTzHvFx4g6ALsig/clients/server.key\n2021-09-08 02:53:57.225187 I | pkg/flags: recognized and used environment variable ETCD_LISTEN_CLIENT_URLS=https://0.0.0.0:3994\n2021-09-08 02:53:57.225195 I | pkg/flags: recognized and used environment variable ETCD_LISTEN_PEER_URLS=https://0.0.0.0:2380\n2021-09-08 02:53:57.225202 I | pkg/flags: recognized and used environment variable ETCD_LOG_OUTPUTS=stdout\n2021-09-08 02:53:57.225209 I | pkg/flags: recognized and used environment variable ETCD_LOGGER=zap\n2021-09-08 02:53:57.225218 I | pkg/flags: recognized and used environment variable ETCD_NAME=etcd-a\n2021-09-08 02:53:57.225224 I | pkg/flags: recognized and used environment variable ETCD_PEER_CERT_FILE=/rootfs/mnt/master-vol-03f80f1c2112e07dd/pki/WXVnfJEHTzHvFx4g6ALsig/peers/me.crt\n2021-09-08 02:53:57.225229 I | pkg/flags: recognized and used environment variable ETCD_PEER_CLIENT_CERT_AUTH=true\n2021-09-08 02:53:57.225236 I | pkg/flags: recognized and used environment variable ETCD_PEER_KEY_FILE=/rootfs/mnt/master-vol-03f80f1c2112e07dd/pki/WXVnfJEHTzHvFx4g6ALsig/peers/me.key\n2021-09-08 02:53:57.225241 I | pkg/flags: recognized and used environment variable ETCD_PEER_TRUSTED_CA_FILE=/rootfs/mnt/master-vol-03f80f1c2112e07dd/pki/WXVnfJEHTzHvFx4g6ALsig/peers/ca.crt\n2021-09-08 02:53:57.225255 I | pkg/flags: recognized and used environment variable ETCD_TRUSTED_CA_FILE=/rootfs/mnt/master-vol-03f80f1c2112e07dd/pki/WXVnfJEHTzHvFx4g6ALsig/clients/ca.crt\n2021-09-08 02:53:57.225264 W | pkg/flags: unrecognized environment variable ETCD_LISTEN_METRICS_URLS=\n{\"level\":\"info\",\"ts\":\"2021-09-08T02:53:57.225Z\",\"caller\":\"embed/etcd.go:117\",\"msg\":\"configuring peer listeners\",\"listen-peer-urls\":[\"https://0.0.0.0:2380\"]}\n{\"level\":\"info\",\"ts\":\"2021-09-08T02:53:57.225Z\",\"caller\":\"embed/etcd.go:468\",\"msg\":\"starting with peer TLS\",\"tls-info\":\"cert = 
/rootfs/mnt/master-vol-03f80f1c2112e07dd/pki/WXVnfJEHTzHvFx4g6ALsig/peers/me.crt, key = /rootfs/mnt/master-vol-03f80f1c2112e07dd/pki/WXVnfJEHTzHvFx4g6ALsig/peers/me.key, trusted-ca = /rootfs/mnt/master-vol-03f80f1c2112e07dd/pki/WXVnfJEHTzHvFx4g6ALsig/peers/ca.crt, client-cert-auth = true, crl-file = \",\"cipher-suites\":[]}\n{\"level\":\"info\",\"ts\":\"2021-09-08T02:53:57.226Z\",\"caller\":\"embed/etcd.go:127\",\"msg\":\"configuring client listeners\",\"listen-client-urls\":[\"https://0.0.0.0:3994\"]}\n{\"level\":\"info\",\"ts\":\"2021-09-08T02:53:57.226Z\",\"caller\":\"embed/etcd.go:302\",\"msg\":\"starting an etcd server\",\"etcd-version\":\"3.4.13\",\"git-sha\":\"ae9734ed2\",\"go-version\":\"go1.12.17\",\"go-os\":\"linux\",\"go-arch\":\"amd64\",\"max-cpu-set\":2,\"max-cpu-available\":2,\"member-initialized\":false,\"name\":\"etcd-a\",\"data-dir\":\"/rootfs/mnt/master-vol-03f80f1c2112e07dd/data/WXVnfJEHTzHvFx4g6ALsig\",\"wal-dir\":\"\",\"wal-dir-dedicated\":\"\",\"member-dir\":\"/rootfs/mnt/master-vol-03f80f1c2112e07dd/data/WXVnfJEHTzHvFx4g6ALsig/member\",\"force-new-cluster\":false,\"heartbeat-interval\":\"100ms\",\"election-timeout\":\"1s\",\"initial-election-tick-advance\":true,\"snapshot-count\":100000,\"snapshot-catchup-entries\":5000,\"initial-advertise-peer-urls\":[\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2380\"],\"listen-peer-urls\":[\"https://0.0.0.0:2380\"],\"advertise-client-urls\":[\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:3994\"],\"listen-client-urls\":[\"https://0.0.0.0:3994\"],\"listen-metrics-urls\":[],\"cors\":[\"*\"],\"host-whitelist\":[\"*\"],\"initial-cluster\":\"etcd-a=https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2380\",\"initial-cluster-state\":\"new\",\"initial-cluster-token\":\"WXVnfJEHTzHvFx4g6ALsig\",\"quota-size-bytes\":2147483648,\"pre-vote\":false,\"initial-corrupt-check\":false,\"corrupt-check-time-interval\":\"0s\",\"auto-compaction-mode\":\"periodic\",\"auto-compaction-retention\":\"0s\",\"auto-compaction-interval\":\"0s\",\"discovery-url\":\"\",\"discovery-proxy\":\"\"}\n{\"level\":\"info\",\"ts\":\"2021-09-08T02:53:57.232Z\",\"caller\":\"etcdserver/backend.go:80\",\"msg\":\"opened backend db\",\"path\":\"/rootfs/mnt/master-vol-03f80f1c2112e07dd/data/WXVnfJEHTzHvFx4g6ALsig/member/snap/db\",\"took\":\"4.515099ms\"}\n{\"level\":\"info\",\"ts\":\"2021-09-08T02:53:57.233Z\",\"caller\":\"netutil/netutil.go:112\",\"msg\":\"resolved URL Host\",\"url\":\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2380\",\"host\":\"etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2380\",\"resolved-addr\":\"172.20.56.43:2380\"}\n{\"level\":\"info\",\"ts\":\"2021-09-08T02:53:57.233Z\",\"caller\":\"netutil/netutil.go:112\",\"msg\":\"resolved URL Host\",\"url\":\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2380\",\"host\":\"etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2380\",\"resolved-addr\":\"172.20.56.43:2380\"}\n{\"level\":\"info\",\"ts\":\"2021-09-08T02:53:57.241Z\",\"caller\":\"etcdserver/raft.go:486\",\"msg\":\"starting local member\",\"local-member-id\":\"fd18cb622346e984\",\"cluster-id\":\"cf6254bfb508adc5\"}\n{\"level\":\"info\",\"ts\":\"2021-09-08T02:53:57.241Z\",\"caller\":\"raft/raft.go:1530\",\"msg\":\"fd18cb622346e984 switched to configuration voters=()\"}\n{\"level\":\"info\",\"ts\":\"2021-09-08T02:53:57.241Z\",\"caller\":\"raft/raft.go:700\",\"msg\":\"fd18cb622346e984 became follower at term 
0\"}\n{\"level\":\"info\",\"ts\":\"2021-09-08T02:53:57.241Z\",\"caller\":\"raft/raft.go:383\",\"msg\":\"newRaft fd18cb622346e984 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]\"}\n{\"level\":\"info\",\"ts\":\"2021-09-08T02:53:57.241Z\",\"caller\":\"raft/raft.go:700\",\"msg\":\"fd18cb622346e984 became follower at term 1\"}\n{\"level\":\"info\",\"ts\":\"2021-09-08T02:53:57.241Z\",\"caller\":\"raft/raft.go:1530\",\"msg\":\"fd18cb622346e984 switched to configuration voters=(18237550313395906948)\"}\n{\"level\":\"warn\",\"ts\":\"2021-09-08T02:53:57.247Z\",\"caller\":\"auth/store.go:1366\",\"msg\":\"simple token is not cryptographically signed\"}\n{\"level\":\"info\",\"ts\":\"2021-09-08T02:53:57.250Z\",\"caller\":\"etcdserver/quota.go:98\",\"msg\":\"enabled backend quota with default value\",\"quota-name\":\"v3-applier\",\"quota-size-bytes\":2147483648,\"quota-size\":\"2.1 GB\"}\n{\"level\":\"info\",\"ts\":\"2021-09-08T02:53:57.253Z\",\"caller\":\"etcdserver/server.go:803\",\"msg\":\"starting etcd server\",\"local-member-id\":\"fd18cb622346e984\",\"local-server-version\":\"3.4.13\",\"cluster-version\":\"to_be_decided\"}\n{\"level\":\"info\",\"ts\":\"2021-09-08T02:53:57.257Z\",\"caller\":\"embed/etcd.go:711\",\"msg\":\"starting with client TLS\",\"tls-info\":\"cert = /rootfs/mnt/master-vol-03f80f1c2112e07dd/pki/WXVnfJEHTzHvFx4g6ALsig/clients/server.crt, key = /rootfs/mnt/master-vol-03f80f1c2112e07dd/pki/WXVnfJEHTzHvFx4g6ALsig/clients/server.key, trusted-ca = /rootfs/mnt/master-vol-03f80f1c2112e07dd/pki/WXVnfJEHTzHvFx4g6ALsig/clients/ca.crt, client-cert-auth = true, crl-file = \",\"cipher-suites\":[]}\n{\"level\":\"info\",\"ts\":\"2021-09-08T02:53:57.257Z\",\"caller\":\"embed/etcd.go:244\",\"msg\":\"now serving peer/client/metrics\",\"local-member-id\":\"fd18cb622346e984\",\"initial-advertise-peer-urls\":[\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2380\"],\"listen-peer-urls\":[\"https://0.0.0.0:2380\"],\"advertise-client-urls\":[\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:3994\"],\"listen-client-urls\":[\"https://0.0.0.0:3994\"],\"listen-metrics-urls\":[]}\n{\"level\":\"info\",\"ts\":\"2021-09-08T02:53:57.257Z\",\"caller\":\"embed/etcd.go:579\",\"msg\":\"serving peer traffic\",\"address\":\"[::]:2380\"}\n{\"level\":\"info\",\"ts\":\"2021-09-08T02:53:57.257Z\",\"caller\":\"etcdserver/server.go:669\",\"msg\":\"started as single-node; fast-forwarding election ticks\",\"local-member-id\":\"fd18cb622346e984\",\"forward-ticks\":9,\"forward-duration\":\"900ms\",\"election-ticks\":10,\"election-timeout\":\"1s\"}\n{\"level\":\"info\",\"ts\":\"2021-09-08T02:53:57.258Z\",\"caller\":\"raft/raft.go:1530\",\"msg\":\"fd18cb622346e984 switched to configuration voters=(18237550313395906948)\"}\n{\"level\":\"info\",\"ts\":\"2021-09-08T02:53:57.258Z\",\"caller\":\"membership/cluster.go:392\",\"msg\":\"added member\",\"cluster-id\":\"cf6254bfb508adc5\",\"local-member-id\":\"fd18cb622346e984\",\"added-peer-id\":\"fd18cb622346e984\",\"added-peer-peer-urls\":[\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2380\"]}\nI0908 02:53:57.537610    9278 s3fs.go:199] Writing file \"s3://k8s-kops-prow/e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io/backups/etcd/main/control/etcd-cluster-created\"\nI0908 02:53:57.708777    9278 controller.go:187] starting controller iteration\nI0908 02:53:57.708797    9278 controller.go:264] Broadcasting leadership assertion with token \"MVypN_VwZ5W5I36x6S7Bhg\"\nI0908 02:53:57.709051    9278 leadership.go:37] 
Got LeaderNotification view:<leader:<id:\"etcd-a\" endpoints:\"172.20.56.43:3996\" > leadership_token:\"MVypN_VwZ5W5I36x6S7Bhg\" healthy:<id:\"etcd-a\" endpoints:\"172.20.56.43:3996\" > > \nI0908 02:53:57.709191    9278 controller.go:293] I am leader with token \"MVypN_VwZ5W5I36x6S7Bhg\"\nI0908 02:53:57.709945    9278 controller.go:703] base client OK for etcd for client urls [https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:3994]\n{\"level\":\"info\",\"ts\":\"2021-09-08T02:53:58.242Z\",\"caller\":\"raft/raft.go:923\",\"msg\":\"fd18cb622346e984 is starting a new election at term 1\"}\n{\"level\":\"info\",\"ts\":\"2021-09-08T02:53:58.242Z\",\"caller\":\"raft/raft.go:713\",\"msg\":\"fd18cb622346e984 became candidate at term 2\"}\n{\"level\":\"info\",\"ts\":\"2021-09-08T02:53:58.242Z\",\"caller\":\"raft/raft.go:824\",\"msg\":\"fd18cb622346e984 received MsgVoteResp from fd18cb622346e984 at term 2\"}\n{\"level\":\"info\",\"ts\":\"2021-09-08T02:53:58.242Z\",\"caller\":\"raft/raft.go:765\",\"msg\":\"fd18cb622346e984 became leader at term 2\"}\n{\"level\":\"info\",\"ts\":\"2021-09-08T02:53:58.242Z\",\"caller\":\"raft/node.go:325\",\"msg\":\"raft.node: fd18cb622346e984 elected leader fd18cb622346e984 at term 2\"}\n{\"level\":\"info\",\"ts\":\"2021-09-08T02:53:58.242Z\",\"caller\":\"etcdserver/server.go:2528\",\"msg\":\"setting up initial cluster version\",\"cluster-version\":\"3.4\"}\n{\"level\":\"info\",\"ts\":\"2021-09-08T02:53:58.243Z\",\"caller\":\"membership/cluster.go:558\",\"msg\":\"set initial cluster version\",\"cluster-id\":\"cf6254bfb508adc5\",\"local-member-id\":\"fd18cb622346e984\",\"cluster-version\":\"3.4\"}\n{\"level\":\"info\",\"ts\":\"2021-09-08T02:53:58.243Z\",\"caller\":\"api/capability.go:76\",\"msg\":\"enabled capabilities for version\",\"cluster-version\":\"3.4\"}\n{\"level\":\"info\",\"ts\":\"2021-09-08T02:53:58.243Z\",\"caller\":\"etcdserver/server.go:2560\",\"msg\":\"cluster version is updated\",\"cluster-version\":\"3.4\"}\n{\"level\":\"info\",\"ts\":\"2021-09-08T02:53:58.243Z\",\"caller\":\"etcdserver/server.go:2037\",\"msg\":\"published local member to cluster through raft\",\"local-member-id\":\"fd18cb622346e984\",\"local-member-attributes\":\"{Name:etcd-a ClientURLs:[https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:3994]}\",\"request-path\":\"/0/members/fd18cb622346e984/attributes\",\"cluster-id\":\"cf6254bfb508adc5\",\"publish-timeout\":\"7s\"}\n{\"level\":\"info\",\"ts\":\"2021-09-08T02:53:58.244Z\",\"caller\":\"embed/serve.go:191\",\"msg\":\"serving client traffic securely\",\"address\":\"[::]:3994\"}\nI0908 02:53:58.258904    9278 controller.go:300] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-a\",\"peerURLs\":[\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2380\"],\"endpoints\":[\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:3994\"],\"ID\":\"18237550313395906948\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-a\" endpoints:\"172.20.56.43:3996\" }, info=cluster_name:\"etcd\" node_configuration:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2380\" client_urls:\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:3994\" > etcd_state:<cluster:<cluster_token:\"WXVnfJEHTzHvFx4g6ALsig\" nodes:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2380\" 
client_urls:\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:3994\" tls_enabled:true > > etcd_version:\"3.4.13\" quarantined:true > }\nI0908 02:53:58.259007    9278 controller.go:301] etcd cluster members: map[18237550313395906948:{\"name\":\"etcd-a\",\"peerURLs\":[\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2380\"],\"endpoints\":[\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:3994\"],\"ID\":\"18237550313395906948\"}]\nI0908 02:53:58.259024    9278 controller.go:639] sending member map to all peers: members:<name:\"etcd-a\" dns:\"etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io\" addresses:\"172.20.56.43\" > \nI0908 02:53:58.259239    9278 etcdserver.go:248] updating hosts: map[172.20.56.43:[etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]]\nI0908 02:53:58.259254    9278 hosts.go:84] hosts update: primary=map[172.20.56.43:[etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]], fallbacks=map[etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:[172.20.56.43 172.20.56.43]], final=map[172.20.56.43:[etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]]\nI0908 02:53:58.259307    9278 hosts.go:181] skipping update of unchanged /etc/hosts\nI0908 02:53:58.259400    9278 commands.go:38] not refreshing commands - TTL not hit\nI0908 02:53:58.259414    9278 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io/backups/etcd/main/control/etcd-cluster-created\"\nI0908 02:53:58.428264    9278 controller.go:393] spec member_count:1 etcd_version:\"3.4.13\" \nI0908 02:53:58.429093    9278 backup.go:128] performing snapshot save to /tmp/314525251/snapshot.db.gz\n{\"level\":\"info\",\"ts\":\"2021-09-08T02:53:58.434Z\",\"logger\":\"etcd-client\",\"caller\":\"v3/maintenance.go:211\",\"msg\":\"opened snapshot stream; downloading\"}\n{\"level\":\"info\",\"ts\":\"2021-09-08T02:53:58.434Z\",\"caller\":\"v3rpc/maintenance.go:139\",\"msg\":\"sending database snapshot to client\",\"total-bytes\":20480,\"size\":\"20 kB\"}\n{\"level\":\"info\",\"ts\":\"2021-09-08T02:53:58.435Z\",\"caller\":\"v3rpc/maintenance.go:177\",\"msg\":\"sending database sha256 checksum to client\",\"total-bytes\":20480,\"checksum-size\":32}\n{\"level\":\"info\",\"ts\":\"2021-09-08T02:53:58.435Z\",\"caller\":\"v3rpc/maintenance.go:191\",\"msg\":\"successfully sent database snapshot to client\",\"total-bytes\":20480,\"size\":\"20 kB\",\"took\":\"now\"}\n{\"level\":\"info\",\"ts\":\"2021-09-08T02:53:58.436Z\",\"logger\":\"etcd-client\",\"caller\":\"v3/maintenance.go:219\",\"msg\":\"completed snapshot read; closing\"}\nI0908 02:53:58.436764    9278 s3fs.go:199] Writing file \"s3://k8s-kops-prow/e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io/backups/etcd/main/2021-09-08T02:53:58Z-000001/etcd.backup.gz\"\nI0908 02:53:58.627794    9278 s3fs.go:199] Writing file \"s3://k8s-kops-prow/e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io/backups/etcd/main/2021-09-08T02:53:58Z-000001/_etcd_backup.meta\"\nI0908 02:53:58.806361    9278 backup.go:153] backup complete: name:\"2021-09-08T02:53:58Z-000001\" \nI0908 02:53:58.806729    9278 controller.go:935] backup response: name:\"2021-09-08T02:53:58Z-000001\" \nI0908 02:53:58.806746    9278 controller.go:574] took backup: name:\"2021-09-08T02:53:58Z-000001\" \nI0908 02:53:58.973713    9278 vfs.go:118] listed backups in s3://k8s-kops-prow/e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io/backups/etcd/main: 
[2021-09-08T02:53:58Z-000001]\nI0908 02:53:58.973733    9278 cleanup.go:166] retaining backup \"2021-09-08T02:53:58Z-000001\"\nI0908 02:53:58.973759    9278 restore.go:98] Setting quarantined state to false\nI0908 02:53:58.974050    9278 etcdserver.go:393] Reconfigure request: header:<leadership_token:\"MVypN_VwZ5W5I36x6S7Bhg\" cluster_name:\"etcd\" > \nI0908 02:53:58.974093    9278 etcdserver.go:436] Stopping etcd for reconfigure request: header:<leadership_token:\"MVypN_VwZ5W5I36x6S7Bhg\" cluster_name:\"etcd\" > \nI0908 02:53:58.974110    9278 etcdserver.go:640] killing etcd with datadir /rootfs/mnt/master-vol-03f80f1c2112e07dd/data/WXVnfJEHTzHvFx4g6ALsig\nI0908 02:53:58.974134    9278 etcdprocess.go:131] Waiting for etcd to exit\nI0908 02:53:59.074912    9278 etcdprocess.go:131] Waiting for etcd to exit\nI0908 02:53:59.074924    9278 etcdprocess.go:136] Exited etcd: signal: killed\nI0908 02:53:59.074977    9278 etcdserver.go:443] updated cluster state: cluster:<cluster_token:\"WXVnfJEHTzHvFx4g6ALsig\" nodes:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2380\" client_urls:\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:3994\" tls_enabled:true > > etcd_version:\"3.4.13\" \nI0908 02:53:59.075093    9278 etcdserver.go:448] Starting etcd version \"3.4.13\"\nI0908 02:53:59.075105    9278 etcdserver.go:556] starting etcd with state cluster:<cluster_token:\"WXVnfJEHTzHvFx4g6ALsig\" nodes:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2380\" client_urls:\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:3994\" tls_enabled:true > > etcd_version:\"3.4.13\" \nI0908 02:53:59.075139    9278 etcdserver.go:565] starting etcd with datadir /rootfs/mnt/master-vol-03f80f1c2112e07dd/data/WXVnfJEHTzHvFx4g6ALsig\nI0908 02:53:59.075310    9278 pki.go:58] adding peerClientIPs [172.20.56.43]\nI0908 02:53:59.075332    9278 pki.go:66] generating peer keypair for etcd: {CommonName:etcd-a Organization:[] AltNames:{DNSNames:[etcd-a etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io] IPs:[172.20.56.43 127.0.0.1]} Usages:[2 1]}\nI0908 02:53:59.075576    9278 certs.go:151] existing certificate not valid after 2023-09-08T02:53:56Z; will regenerate\nI0908 02:53:59.075599    9278 certs.go:211] generating certificate for \"etcd-a\"\nI0908 02:53:59.077817    9278 pki.go:108] building client-serving certificate: {CommonName:etcd-a Organization:[] AltNames:{DNSNames:[etcd-a etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io] IPs:[127.0.0.1]} Usages:[1 2]}\nI0908 02:53:59.078005    9278 certs.go:151] existing certificate not valid after 2023-09-08T02:53:56Z; will regenerate\nI0908 02:53:59.078017    9278 certs.go:211] generating certificate for \"etcd-a\"\nI0908 02:53:59.330965    9278 certs.go:211] generating certificate for \"etcd-a\"\nI0908 02:53:59.332830    9278 etcdprocess.go:203] executing command /opt/etcd-v3.4.13-linux-amd64/etcd [/opt/etcd-v3.4.13-linux-amd64/etcd]\nI0908 02:53:59.334120    9278 restore.go:116] ReconfigureResponse: \nI0908 02:53:59.335287    9278 controller.go:187] starting controller iteration\nI0908 02:53:59.335306    9278 controller.go:264] Broadcasting leadership assertion with token 
\"MVypN_VwZ5W5I36x6S7Bhg\"\nI0908 02:53:59.335514    9278 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-a\" endpoints:\"172.20.56.43:3996\" > leadership_token:\"MVypN_VwZ5W5I36x6S7Bhg\" healthy:<id:\"etcd-a\" endpoints:\"172.20.56.43:3996\" > > \nI0908 02:53:59.335619    9278 controller.go:293] I am leader with token \"MVypN_VwZ5W5I36x6S7Bhg\"\nI0908 02:53:59.336013    9278 controller.go:703] base client OK for etcd for client urls [https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4001]\n2021-09-08 02:53:59.341199 I | pkg/flags: recognized and used environment variable ETCD_ADVERTISE_CLIENT_URLS=https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4001\n2021-09-08 02:53:59.341232 I | pkg/flags: recognized and used environment variable ETCD_CERT_FILE=/rootfs/mnt/master-vol-03f80f1c2112e07dd/pki/WXVnfJEHTzHvFx4g6ALsig/clients/server.crt\n2021-09-08 02:53:59.341240 I | pkg/flags: recognized and used environment variable ETCD_CLIENT_CERT_AUTH=true\n2021-09-08 02:53:59.341250 I | pkg/flags: recognized and used environment variable ETCD_DATA_DIR=/rootfs/mnt/master-vol-03f80f1c2112e07dd/data/WXVnfJEHTzHvFx4g6ALsig\n2021-09-08 02:53:59.341262 I | pkg/flags: recognized and used environment variable ETCD_ENABLE_V2=false\n2021-09-08 02:53:59.341285 I | pkg/flags: recognized and used environment variable ETCD_INITIAL_ADVERTISE_PEER_URLS=https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2380\n2021-09-08 02:53:59.341290 I | pkg/flags: recognized and used environment variable ETCD_INITIAL_CLUSTER=etcd-a=https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2380\n2021-09-08 02:53:59.341295 I | pkg/flags: recognized and used environment variable ETCD_INITIAL_CLUSTER_STATE=existing\n2021-09-08 02:53:59.341301 I | pkg/flags: recognized and used environment variable ETCD_INITIAL_CLUSTER_TOKEN=WXVnfJEHTzHvFx4g6ALsig\n2021-09-08 02:53:59.341307 I | pkg/flags: recognized and used environment variable ETCD_KEY_FILE=/rootfs/mnt/master-vol-03f80f1c2112e07dd/pki/WXVnfJEHTzHvFx4g6ALsig/clients/server.key\n2021-09-08 02:53:59.341314 I | pkg/flags: recognized and used environment variable ETCD_LISTEN_CLIENT_URLS=https://0.0.0.0:4001\n2021-09-08 02:53:59.341321 I | pkg/flags: recognized and used environment variable ETCD_LISTEN_PEER_URLS=https://0.0.0.0:2380\n2021-09-08 02:53:59.341328 I | pkg/flags: recognized and used environment variable ETCD_LOG_OUTPUTS=stdout\n2021-09-08 02:53:59.341336 I | pkg/flags: recognized and used environment variable ETCD_LOGGER=zap\n2021-09-08 02:53:59.341347 I | pkg/flags: recognized and used environment variable ETCD_NAME=etcd-a\n2021-09-08 02:53:59.341354 I | pkg/flags: recognized and used environment variable ETCD_PEER_CERT_FILE=/rootfs/mnt/master-vol-03f80f1c2112e07dd/pki/WXVnfJEHTzHvFx4g6ALsig/peers/me.crt\n2021-09-08 02:53:59.341359 I | pkg/flags: recognized and used environment variable ETCD_PEER_CLIENT_CERT_AUTH=true\n2021-09-08 02:53:59.341366 I | pkg/flags: recognized and used environment variable ETCD_PEER_KEY_FILE=/rootfs/mnt/master-vol-03f80f1c2112e07dd/pki/WXVnfJEHTzHvFx4g6ALsig/peers/me.key\n2021-09-08 02:53:59.341372 I | pkg/flags: recognized and used environment variable ETCD_PEER_TRUSTED_CA_FILE=/rootfs/mnt/master-vol-03f80f1c2112e07dd/pki/WXVnfJEHTzHvFx4g6ALsig/peers/ca.crt\n2021-09-08 02:53:59.341385 I | pkg/flags: recognized and used environment variable ETCD_TRUSTED_CA_FILE=/rootfs/mnt/master-vol-03f80f1c2112e07dd/pki/WXVnfJEHTzHvFx4g6ALsig/clients/ca.crt\n2021-09-08 02:53:59.341395 W | 
pkg/flags: unrecognized environment variable ETCD_LISTEN_METRICS_URLS=\n{\"level\":\"info\",\"ts\":\"2021-09-08T02:53:59.341Z\",\"caller\":\"etcdmain/etcd.go:134\",\"msg\":\"server has been already initialized\",\"data-dir\":\"/rootfs/mnt/master-vol-03f80f1c2112e07dd/data/WXVnfJEHTzHvFx4g6ALsig\",\"dir-type\":\"member\"}\n{\"level\":\"info\",\"ts\":\"2021-09-08T02:53:59.341Z\",\"caller\":\"embed/etcd.go:117\",\"msg\":\"configuring peer listeners\",\"listen-peer-urls\":[\"https://0.0.0.0:2380\"]}\n{\"level\":\"info\",\"ts\":\"2021-09-08T02:53:59.341Z\",\"caller\":\"embed/etcd.go:468\",\"msg\":\"starting with peer TLS\",\"tls-info\":\"cert = /rootfs/mnt/master-vol-03f80f1c2112e07dd/pki/WXVnfJEHTzHvFx4g6ALsig/peers/me.crt, key = /rootfs/mnt/master-vol-03f80f1c2112e07dd/pki/WXVnfJEHTzHvFx4g6ALsig/peers/me.key, trusted-ca = /rootfs/mnt/master-vol-03f80f1c2112e07dd/pki/WXVnfJEHTzHvFx4g6ALsig/peers/ca.crt, client-cert-auth = true, crl-file = \",\"cipher-suites\":[]}\n{\"level\":\"info\",\"ts\":\"2021-09-08T02:53:59.342Z\",\"caller\":\"embed/etcd.go:127\",\"msg\":\"configuring client listeners\",\"listen-client-urls\":[\"https://0.0.0.0:4001\"]}\n{\"level\":\"info\",\"ts\":\"2021-09-08T02:53:59.342Z\",\"caller\":\"embed/etcd.go:302\",\"msg\":\"starting an etcd server\",\"etcd-version\":\"3.4.13\",\"git-sha\":\"ae9734ed2\",\"go-version\":\"go1.12.17\",\"go-os\":\"linux\",\"go-arch\":\"amd64\",\"max-cpu-set\":2,\"max-cpu-available\":2,\"member-initialized\":true,\"name\":\"etcd-a\",\"data-dir\":\"/rootfs/mnt/master-vol-03f80f1c2112e07dd/data/WXVnfJEHTzHvFx4g6ALsig\",\"wal-dir\":\"\",\"wal-dir-dedicated\":\"\",\"member-dir\":\"/rootfs/mnt/master-vol-03f80f1c2112e07dd/data/WXVnfJEHTzHvFx4g6ALsig/member\",\"force-new-cluster\":false,\"heartbeat-interval\":\"100ms\",\"election-timeout\":\"1s\",\"initial-election-tick-advance\":true,\"snapshot-count\":100000,\"snapshot-catchup-entries\":5000,\"initial-advertise-peer-urls\":[\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2380\"],\"listen-peer-urls\":[\"https://0.0.0.0:2380\"],\"advertise-client-urls\":[\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4001\"],\"listen-client-urls\":[\"https://0.0.0.0:4001\"],\"listen-metrics-urls\":[],\"cors\":[\"*\"],\"host-whitelist\":[\"*\"],\"initial-cluster\":\"\",\"initial-cluster-state\":\"existing\",\"initial-cluster-token\":\"\",\"quota-size-bytes\":2147483648,\"pre-vote\":false,\"initial-corrupt-check\":false,\"corrupt-check-time-interval\":\"0s\",\"auto-compaction-mode\":\"periodic\",\"auto-compaction-retention\":\"0s\",\"auto-compaction-interval\":\"0s\",\"discovery-url\":\"\",\"discovery-proxy\":\"\"}\n{\"level\":\"info\",\"ts\":\"2021-09-08T02:53:59.342Z\",\"caller\":\"etcdserver/backend.go:80\",\"msg\":\"opened backend db\",\"path\":\"/rootfs/mnt/master-vol-03f80f1c2112e07dd/data/WXVnfJEHTzHvFx4g6ALsig/member/snap/db\",\"took\":\"133.133µs\"}\n{\"level\":\"info\",\"ts\":\"2021-09-08T02:53:59.343Z\",\"caller\":\"etcdserver/raft.go:536\",\"msg\":\"restarting local member\",\"cluster-id\":\"cf6254bfb508adc5\",\"local-member-id\":\"fd18cb622346e984\",\"commit-index\":4}\n{\"level\":\"info\",\"ts\":\"2021-09-08T02:53:59.344Z\",\"caller\":\"raft/raft.go:1530\",\"msg\":\"fd18cb622346e984 switched to configuration voters=()\"}\n{\"level\":\"info\",\"ts\":\"2021-09-08T02:53:59.344Z\",\"caller\":\"raft/raft.go:700\",\"msg\":\"fd18cb622346e984 became follower at term 2\"}\n{\"level\":\"info\",\"ts\":\"2021-09-08T02:53:59.344Z\",\"caller\":\"raft/raft.go:383\",\"msg\":\"newRaft 
fd18cb622346e984 [peers: [], term: 2, commit: 4, applied: 0, lastindex: 4, lastterm: 2]\"}\n{\"level\":\"warn\",\"ts\":\"2021-09-08T02:53:59.345Z\",\"caller\":\"auth/store.go:1366\",\"msg\":\"simple token is not cryptographically signed\"}\n{\"level\":\"info\",\"ts\":\"2021-09-08T02:53:59.346Z\",\"caller\":\"etcdserver/quota.go:98\",\"msg\":\"enabled backend quota with default value\",\"quota-name\":\"v3-applier\",\"quota-size-bytes\":2147483648,\"quota-size\":\"2.1 GB\"}\n{\"level\":\"info\",\"ts\":\"2021-09-08T02:53:59.348Z\",\"caller\":\"etcdserver/server.go:803\",\"msg\":\"starting etcd server\",\"local-member-id\":\"fd18cb622346e984\",\"local-server-version\":\"3.4.13\",\"cluster-version\":\"to_be_decided\"}\n{\"level\":\"info\",\"ts\":\"2021-09-08T02:53:59.348Z\",\"caller\":\"etcdserver/server.go:691\",\"msg\":\"starting initial election tick advance\",\"election-ticks\":10}\n{\"level\":\"info\",\"ts\":\"2021-09-08T02:53:59.348Z\",\"caller\":\"raft/raft.go:1530\",\"msg\":\"fd18cb622346e984 switched to configuration voters=(18237550313395906948)\"}\n{\"level\":\"info\",\"ts\":\"2021-09-08T02:53:59.348Z\",\"caller\":\"membership/cluster.go:392\",\"msg\":\"added member\",\"cluster-id\":\"cf6254bfb508adc5\",\"local-member-id\":\"fd18cb622346e984\",\"added-peer-id\":\"fd18cb622346e984\",\"added-peer-peer-urls\":[\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2380\"]}\n{\"level\":\"info\",\"ts\":\"2021-09-08T02:53:59.348Z\",\"caller\":\"membership/cluster.go:558\",\"msg\":\"set initial cluster version\",\"cluster-id\":\"cf6254bfb508adc5\",\"local-member-id\":\"fd18cb622346e984\",\"cluster-version\":\"3.4\"}\n{\"level\":\"info\",\"ts\":\"2021-09-08T02:53:59.348Z\",\"caller\":\"api/capability.go:76\",\"msg\":\"enabled capabilities for version\",\"cluster-version\":\"3.4\"}\n{\"level\":\"info\",\"ts\":\"2021-09-08T02:53:59.350Z\",\"caller\":\"embed/etcd.go:711\",\"msg\":\"starting with client TLS\",\"tls-info\":\"cert = /rootfs/mnt/master-vol-03f80f1c2112e07dd/pki/WXVnfJEHTzHvFx4g6ALsig/clients/server.crt, key = /rootfs/mnt/master-vol-03f80f1c2112e07dd/pki/WXVnfJEHTzHvFx4g6ALsig/clients/server.key, trusted-ca = /rootfs/mnt/master-vol-03f80f1c2112e07dd/pki/WXVnfJEHTzHvFx4g6ALsig/clients/ca.crt, client-cert-auth = true, crl-file = \",\"cipher-suites\":[]}\n{\"level\":\"info\",\"ts\":\"2021-09-08T02:53:59.350Z\",\"caller\":\"embed/etcd.go:244\",\"msg\":\"now serving peer/client/metrics\",\"local-member-id\":\"fd18cb622346e984\",\"initial-advertise-peer-urls\":[\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2380\"],\"listen-peer-urls\":[\"https://0.0.0.0:2380\"],\"advertise-client-urls\":[\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4001\"],\"listen-client-urls\":[\"https://0.0.0.0:4001\"],\"listen-metrics-urls\":[]}\n{\"level\":\"info\",\"ts\":\"2021-09-08T02:53:59.350Z\",\"caller\":\"embed/etcd.go:579\",\"msg\":\"serving peer traffic\",\"address\":\"[::]:2380\"}\n{\"level\":\"info\",\"ts\":\"2021-09-08T02:54:00.844Z\",\"caller\":\"raft/raft.go:923\",\"msg\":\"fd18cb622346e984 is starting a new election at term 2\"}\n{\"level\":\"info\",\"ts\":\"2021-09-08T02:54:00.844Z\",\"caller\":\"raft/raft.go:713\",\"msg\":\"fd18cb622346e984 became candidate at term 3\"}\n{\"level\":\"info\",\"ts\":\"2021-09-08T02:54:00.844Z\",\"caller\":\"raft/raft.go:824\",\"msg\":\"fd18cb622346e984 received MsgVoteResp from fd18cb622346e984 at term 
3\"}\n{\"level\":\"info\",\"ts\":\"2021-09-08T02:54:00.844Z\",\"caller\":\"raft/raft.go:765\",\"msg\":\"fd18cb622346e984 became leader at term 3\"}\n{\"level\":\"info\",\"ts\":\"2021-09-08T02:54:00.844Z\",\"caller\":\"raft/node.go:325\",\"msg\":\"raft.node: fd18cb622346e984 elected leader fd18cb622346e984 at term 3\"}\n{\"level\":\"info\",\"ts\":\"2021-09-08T02:54:00.844Z\",\"caller\":\"etcdserver/server.go:2037\",\"msg\":\"published local member to cluster through raft\",\"local-member-id\":\"fd18cb622346e984\",\"local-member-attributes\":\"{Name:etcd-a ClientURLs:[https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4001]}\",\"request-path\":\"/0/members/fd18cb622346e984/attributes\",\"cluster-id\":\"cf6254bfb508adc5\",\"publish-timeout\":\"7s\"}\n{\"level\":\"info\",\"ts\":\"2021-09-08T02:54:00.846Z\",\"caller\":\"embed/serve.go:191\",\"msg\":\"serving client traffic securely\",\"address\":\"[::]:4001\"}\nI0908 02:54:00.862128    9278 controller.go:300] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-a\",\"peerURLs\":[\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2380\"],\"endpoints\":[\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4001\"],\"ID\":\"18237550313395906948\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-a\" endpoints:\"172.20.56.43:3996\" }, info=cluster_name:\"etcd\" node_configuration:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2380\" client_urls:\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:3994\" > etcd_state:<cluster:<cluster_token:\"WXVnfJEHTzHvFx4g6ALsig\" nodes:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2380\" client_urls:\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:3994\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0908 02:54:00.862246    9278 controller.go:301] etcd cluster members: map[18237550313395906948:{\"name\":\"etcd-a\",\"peerURLs\":[\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2380\"],\"endpoints\":[\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4001\"],\"ID\":\"18237550313395906948\"}]\nI0908 02:54:00.862260    9278 controller.go:639] sending member map to all peers: members:<name:\"etcd-a\" dns:\"etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io\" addresses:\"172.20.56.43\" > \nI0908 02:54:00.862452    9278 etcdserver.go:248] updating hosts: map[172.20.56.43:[etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]]\nI0908 02:54:00.862467    9278 hosts.go:84] hosts update: primary=map[172.20.56.43:[etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]], fallbacks=map[etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:[172.20.56.43 172.20.56.43]], final=map[172.20.56.43:[etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]]\nI0908 02:54:00.862516    9278 hosts.go:181] skipping update of unchanged /etc/hosts\nI0908 02:54:00.862583    9278 commands.go:38] not refreshing commands - TTL not hit\nI0908 02:54:00.862599    9278 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io/backups/etcd/main/control/etcd-cluster-created\"\nI0908 02:54:01.016792    9278 controller.go:393] spec member_count:1 etcd_version:\"3.4.13\" 
\nI0908 02:54:01.016872    9278 controller.go:555] controller loop complete\nI0908 02:54:11.018070    9278 controller.go:187] starting controller iteration\nI0908 02:54:11.018096    9278 controller.go:264] Broadcasting leadership assertion with token \"MVypN_VwZ5W5I36x6S7Bhg\"\nI0908 02:54:11.018341    9278 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-a\" endpoints:\"172.20.56.43:3996\" > leadership_token:\"MVypN_VwZ5W5I36x6S7Bhg\" healthy:<id:\"etcd-a\" endpoints:\"172.20.56.43:3996\" > > \nI0908 02:54:11.018540    9278 controller.go:293] I am leader with token \"MVypN_VwZ5W5I36x6S7Bhg\"\nI0908 02:54:11.019201    9278 controller.go:703] base client OK for etcd for client urls [https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4001]\nI0908 02:54:11.032949    9278 controller.go:300] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-a\",\"peerURLs\":[\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2380\"],\"endpoints\":[\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4001\"],\"ID\":\"18237550313395906948\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-a\" endpoints:\"172.20.56.43:3996\" }, info=cluster_name:\"etcd\" node_configuration:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2380\" client_urls:\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:3994\" > etcd_state:<cluster:<cluster_token:\"WXVnfJEHTzHvFx4g6ALsig\" nodes:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2380\" client_urls:\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:3994\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0908 02:54:11.033017    9278 controller.go:301] etcd cluster members: map[18237550313395906948:{\"name\":\"etcd-a\",\"peerURLs\":[\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2380\"],\"endpoints\":[\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4001\"],\"ID\":\"18237550313395906948\"}]\nI0908 02:54:11.033032    9278 controller.go:639] sending member map to all peers: members:<name:\"etcd-a\" dns:\"etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io\" addresses:\"172.20.56.43\" > \nI0908 02:54:11.033215    9278 etcdserver.go:248] updating hosts: map[172.20.56.43:[etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]]\nI0908 02:54:11.033230    9278 hosts.go:84] hosts update: primary=map[172.20.56.43:[etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]], fallbacks=map[etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:[172.20.56.43 172.20.56.43]], final=map[172.20.56.43:[etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]]\nI0908 02:54:11.033281    9278 hosts.go:181] skipping update of unchanged /etc/hosts\nI0908 02:54:11.033355    9278 commands.go:38] not refreshing commands - TTL not hit\nI0908 02:54:11.033369    9278 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io/backups/etcd/main/control/etcd-cluster-created\"\nI0908 02:54:11.632309    9278 controller.go:393] spec member_count:1 etcd_version:\"3.4.13\" \nI0908 02:54:11.632384    9278 controller.go:555] controller loop complete\nI0908 02:54:21.634838    9278 controller.go:187] starting controller 
iteration\nI0908 02:54:21.634868    9278 controller.go:264] Broadcasting leadership assertion with token \"MVypN_VwZ5W5I36x6S7Bhg\"\nI0908 02:54:21.635103    9278 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-a\" endpoints:\"172.20.56.43:3996\" > leadership_token:\"MVypN_VwZ5W5I36x6S7Bhg\" healthy:<id:\"etcd-a\" endpoints:\"172.20.56.43:3996\" > > \nI0908 02:54:21.635230    9278 controller.go:293] I am leader with token \"MVypN_VwZ5W5I36x6S7Bhg\"\nI0908 02:54:21.635643    9278 controller.go:703] base client OK for etcd for client urls [https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4001]\nI0908 02:54:21.653934    9278 controller.go:300] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-a\",\"peerURLs\":[\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2380\"],\"endpoints\":[\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4001\"],\"ID\":\"18237550313395906948\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-a\" endpoints:\"172.20.56.43:3996\" }, info=cluster_name:\"etcd\" node_configuration:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2380\" client_urls:\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:3994\" > etcd_state:<cluster:<cluster_token:\"WXVnfJEHTzHvFx4g6ALsig\" nodes:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2380\" client_urls:\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:3994\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0908 02:54:21.654010    9278 controller.go:301] etcd cluster members: map[18237550313395906948:{\"name\":\"etcd-a\",\"peerURLs\":[\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2380\"],\"endpoints\":[\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4001\"],\"ID\":\"18237550313395906948\"}]\nI0908 02:54:21.654025    9278 controller.go:639] sending member map to all peers: members:<name:\"etcd-a\" dns:\"etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io\" addresses:\"172.20.56.43\" > \nI0908 02:54:21.654212    9278 etcdserver.go:248] updating hosts: map[172.20.56.43:[etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]]\nI0908 02:54:21.654228    9278 hosts.go:84] hosts update: primary=map[172.20.56.43:[etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]], fallbacks=map[etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:[172.20.56.43 172.20.56.43]], final=map[172.20.56.43:[etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]]\nI0908 02:54:21.654275    9278 hosts.go:181] skipping update of unchanged /etc/hosts\nI0908 02:54:21.654341    9278 commands.go:38] not refreshing commands - TTL not hit\nI0908 02:54:21.654355    9278 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io/backups/etcd/main/control/etcd-cluster-created\"\nI0908 02:54:22.253589    9278 controller.go:393] spec member_count:1 etcd_version:\"3.4.13\" \nI0908 02:54:22.253660    9278 controller.go:555] controller loop complete\nI0908 02:54:32.255622    9278 controller.go:187] starting controller iteration\nI0908 02:54:32.255649    9278 controller.go:264] Broadcasting leadership assertion with token \"MVypN_VwZ5W5I36x6S7Bhg\"\nI0908 02:54:32.255927   
 9278 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-a\" endpoints:\"172.20.56.43:3996\" > leadership_token:\"MVypN_VwZ5W5I36x6S7Bhg\" healthy:<id:\"etcd-a\" endpoints:\"172.20.56.43:3996\" > > \nI0908 02:54:32.256045    9278 controller.go:293] I am leader with token \"MVypN_VwZ5W5I36x6S7Bhg\"\nI0908 02:54:32.256660    9278 controller.go:703] base client OK for etcd for client urls [https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4001]\nI0908 02:54:32.269204    9278 controller.go:300] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-a\",\"peerURLs\":[\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2380\"],\"endpoints\":[\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4001\"],\"ID\":\"18237550313395906948\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-a\" endpoints:\"172.20.56.43:3996\" }, info=cluster_name:\"etcd\" node_configuration:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2380\" client_urls:\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:3994\" > etcd_state:<cluster:<cluster_token:\"WXVnfJEHTzHvFx4g6ALsig\" nodes:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2380\" client_urls:\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:3994\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0908 02:54:32.269280    9278 controller.go:301] etcd cluster members: map[18237550313395906948:{\"name\":\"etcd-a\",\"peerURLs\":[\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2380\"],\"endpoints\":[\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4001\"],\"ID\":\"18237550313395906948\"}]\nI0908 02:54:32.269297    9278 controller.go:639] sending member map to all peers: members:<name:\"etcd-a\" dns:\"etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io\" addresses:\"172.20.56.43\" > \nI0908 02:54:32.269498    9278 etcdserver.go:248] updating hosts: map[172.20.56.43:[etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]]\nI0908 02:54:32.269513    9278 hosts.go:84] hosts update: primary=map[172.20.56.43:[etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]], fallbacks=map[etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:[172.20.56.43 172.20.56.43]], final=map[172.20.56.43:[etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]]\nI0908 02:54:32.269571    9278 hosts.go:181] skipping update of unchanged /etc/hosts\nI0908 02:54:32.269649    9278 commands.go:38] not refreshing commands - TTL not hit\nI0908 02:54:32.269663    9278 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io/backups/etcd/main/control/etcd-cluster-created\"\nI0908 02:54:32.874718    9278 controller.go:393] spec member_count:1 etcd_version:\"3.4.13\" \nI0908 02:54:32.874785    9278 controller.go:555] controller loop complete\nI0908 02:54:42.582708    9278 volumes.go:86] AWS API Request: ec2/DescribeVolumes\nI0908 02:54:42.692363    9278 volumes.go:86] AWS API Request: ec2/DescribeInstances\nI0908 02:54:42.731872    9278 hosts.go:84] hosts update: primary=map[172.20.56.43:[etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]], 
fallbacks=map[etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:[172.20.56.43 172.20.56.43]], final=map[172.20.56.43:[etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]]
I0908 02:54:42.731946    9278 hosts.go:181] skipping update of unchanged /etc/hosts
I0908 02:54:42.876154    9278 controller.go:187] starting controller iteration
I0908 02:54:42.876178    9278 controller.go:264] Broadcasting leadership assertion with token "MVypN_VwZ5W5I36x6S7Bhg"
I0908 02:54:42.876410    9278 leadership.go:37] Got LeaderNotification view:<leader:<id:"etcd-a" endpoints:"172.20.56.43:3996" > leadership_token:"MVypN_VwZ5W5I36x6S7Bhg" healthy:<id:"etcd-a" endpoints:"172.20.56.43:3996" > >
I0908 02:54:42.876529    9278 controller.go:293] I am leader with token "MVypN_VwZ5W5I36x6S7Bhg"
I0908 02:54:42.877036    9278 controller.go:703] base client OK for etcd for client urls [https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4001]
I0908 02:54:42.888618    9278 controller.go:300] etcd cluster state: etcdClusterState
  members:
    {"name":"etcd-a","peerURLs":["https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2380"],"endpoints":["https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4001"],"ID":"18237550313395906948"}
  peers:
    etcdClusterPeerInfo{peer=peer{id:"etcd-a" endpoints:"172.20.56.43:3996" }, info=cluster_name:"etcd" node_configuration:<name:"etcd-a" peer_urls:"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2380" client_urls:"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4001" quarantined_client_urls:"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:3994" > etcd_state:<cluster:<cluster_token:"WXVnfJEHTzHvFx4g6ALsig" nodes:<name:"etcd-a" peer_urls:"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2380" client_urls:"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4001" quarantined_client_urls:"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:3994" tls_enabled:true > > etcd_version:"3.4.13" > }
I0908 02:54:42.888692    9278 controller.go:301] etcd cluster members: map[18237550313395906948:{"name":"etcd-a","peerURLs":["https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2380"],"endpoints":["https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4001"],"ID":"18237550313395906948"}]
I0908 02:54:42.888708    9278 controller.go:639] sending member map to all peers: members:<name:"etcd-a" dns:"etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io" addresses:"172.20.56.43" >
I0908 02:54:42.888912    9278 etcdserver.go:248] updating hosts: map[172.20.56.43:[etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]]
I0908 02:54:42.888929    9278 hosts.go:84] hosts update: primary=map[172.20.56.43:[etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]], fallbacks=map[etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:[172.20.56.43 172.20.56.43]], final=map[172.20.56.43:[etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]]
I0908 02:54:42.888978    9278 hosts.go:181] skipping update of unchanged /etc/hosts
I0908 02:54:42.889057    9278 commands.go:38] not refreshing commands - TTL not hit
I0908 02:54:42.889070    9278 s3fs.go:290] Reading file "s3://k8s-kops-prow/e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io/backups/etcd/main/control/etcd-cluster-created"
I0908 02:54:43.484503    9278 controller.go:393] spec member_count:1 etcd_version:"3.4.13"
I0908 02:54:43.484581    9278 controller.go:555] controller loop complete
... skipping 373 lines ...
[https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4001]\nI0908 02:58:04.585188    9278 controller.go:300] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-a\",\"peerURLs\":[\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2380\"],\"endpoints\":[\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4001\"],\"ID\":\"18237550313395906948\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-a\" endpoints:\"172.20.56.43:3996\" }, info=cluster_name:\"etcd\" node_configuration:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2380\" client_urls:\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:3994\" > etcd_state:<cluster:<cluster_token:\"WXVnfJEHTzHvFx4g6ALsig\" nodes:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2380\" client_urls:\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:3994\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0908 02:58:04.585272    9278 controller.go:301] etcd cluster members: map[18237550313395906948:{\"name\":\"etcd-a\",\"peerURLs\":[\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2380\"],\"endpoints\":[\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4001\"],\"ID\":\"18237550313395906948\"}]\nI0908 02:58:04.585288    9278 controller.go:639] sending member map to all peers: members:<name:\"etcd-a\" dns:\"etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io\" addresses:\"172.20.56.43\" > \nI0908 02:58:04.585480    9278 etcdserver.go:248] updating hosts: map[172.20.56.43:[etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]]\nI0908 02:58:04.585495    9278 hosts.go:84] hosts update: primary=map[172.20.56.43:[etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]], fallbacks=map[etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:[172.20.56.43 172.20.56.43]], final=map[172.20.56.43:[etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]]\nI0908 02:58:04.585547    9278 hosts.go:181] skipping update of unchanged /etc/hosts\nI0908 02:58:04.585634    9278 commands.go:38] not refreshing commands - TTL not hit\nI0908 02:58:04.585648    9278 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io/backups/etcd/main/control/etcd-cluster-created\"\nI0908 02:58:05.176209    9278 controller.go:393] spec member_count:1 etcd_version:\"3.4.13\" \nI0908 02:58:05.176277    9278 controller.go:555] controller loop complete\nI0908 02:58:15.177945    9278 controller.go:187] starting controller iteration\nI0908 02:58:15.177973    9278 controller.go:264] Broadcasting leadership assertion with token \"MVypN_VwZ5W5I36x6S7Bhg\"\nI0908 02:58:15.178226    9278 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-a\" endpoints:\"172.20.56.43:3996\" > leadership_token:\"MVypN_VwZ5W5I36x6S7Bhg\" healthy:<id:\"etcd-a\" endpoints:\"172.20.56.43:3996\" > > \nI0908 02:58:15.178371    9278 controller.go:293] I am leader with token \"MVypN_VwZ5W5I36x6S7Bhg\"\nI0908 02:58:15.178736    9278 controller.go:703] base client OK for etcd for client urls [https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4001]\nI0908 02:58:15.190270    9278 controller.go:300] etcd cluster state: 
etcdClusterState\n  members:\n    {\"name\":\"etcd-a\",\"peerURLs\":[\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2380\"],\"endpoints\":[\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4001\"],\"ID\":\"18237550313395906948\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-a\" endpoints:\"172.20.56.43:3996\" }, info=cluster_name:\"etcd\" node_configuration:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2380\" client_urls:\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:3994\" > etcd_state:<cluster:<cluster_token:\"WXVnfJEHTzHvFx4g6ALsig\" nodes:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2380\" client_urls:\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:3994\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0908 02:58:15.190364    9278 controller.go:301] etcd cluster members: map[18237550313395906948:{\"name\":\"etcd-a\",\"peerURLs\":[\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2380\"],\"endpoints\":[\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4001\"],\"ID\":\"18237550313395906948\"}]\nI0908 02:58:15.190383    9278 controller.go:639] sending member map to all peers: members:<name:\"etcd-a\" dns:\"etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io\" addresses:\"172.20.56.43\" > \nI0908 02:58:15.190586    9278 etcdserver.go:248] updating hosts: map[172.20.56.43:[etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]]\nI0908 02:58:15.190601    9278 hosts.go:84] hosts update: primary=map[172.20.56.43:[etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]], fallbacks=map[etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:[172.20.56.43 172.20.56.43]], final=map[172.20.56.43:[etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]]\nI0908 02:58:15.190663    9278 hosts.go:181] skipping update of unchanged /etc/hosts\nI0908 02:58:15.190735    9278 commands.go:38] not refreshing commands - TTL not hit\nI0908 02:58:15.190749    9278 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io/backups/etcd/main/control/etcd-cluster-created\"\nI0908 02:58:15.791268    9278 controller.go:393] spec member_count:1 etcd_version:\"3.4.13\" \nI0908 02:58:15.791335    9278 controller.go:555] controller loop complete\nI0908 02:58:25.794587    9278 controller.go:187] starting controller iteration\nI0908 02:58:25.794614    9278 controller.go:264] Broadcasting leadership assertion with token \"MVypN_VwZ5W5I36x6S7Bhg\"\nI0908 02:58:25.794871    9278 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-a\" endpoints:\"172.20.56.43:3996\" > leadership_token:\"MVypN_VwZ5W5I36x6S7Bhg\" healthy:<id:\"etcd-a\" endpoints:\"172.20.56.43:3996\" > > \nI0908 02:58:25.794995    9278 controller.go:293] I am leader with token \"MVypN_VwZ5W5I36x6S7Bhg\"\nI0908 02:58:25.795402    9278 controller.go:703] base client OK for etcd for client urls [https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4001]\nI0908 02:58:25.824403    9278 controller.go:300] etcd cluster state: etcdClusterState\n  members:\n    
{\"name\":\"etcd-a\",\"peerURLs\":[\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2380\"],\"endpoints\":[\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4001\"],\"ID\":\"18237550313395906948\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-a\" endpoints:\"172.20.56.43:3996\" }, info=cluster_name:\"etcd\" node_configuration:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2380\" client_urls:\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:3994\" > etcd_state:<cluster:<cluster_token:\"WXVnfJEHTzHvFx4g6ALsig\" nodes:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2380\" client_urls:\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:3994\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0908 02:58:25.824486    9278 controller.go:301] etcd cluster members: map[18237550313395906948:{\"name\":\"etcd-a\",\"peerURLs\":[\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2380\"],\"endpoints\":[\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4001\"],\"ID\":\"18237550313395906948\"}]\nI0908 02:58:25.824502    9278 controller.go:639] sending member map to all peers: members:<name:\"etcd-a\" dns:\"etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io\" addresses:\"172.20.56.43\" > \nI0908 02:58:25.824704    9278 etcdserver.go:248] updating hosts: map[172.20.56.43:[etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]]\nI0908 02:58:25.824716    9278 hosts.go:84] hosts update: primary=map[172.20.56.43:[etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]], fallbacks=map[etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:[172.20.56.43 172.20.56.43]], final=map[172.20.56.43:[etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]]\nI0908 02:58:25.824764    9278 hosts.go:181] skipping update of unchanged /etc/hosts\nI0908 02:58:25.824846    9278 commands.go:38] not refreshing commands - TTL not hit\nI0908 02:58:25.824857    9278 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io/backups/etcd/main/control/etcd-cluster-created\"\nI0908 02:58:26.428918    9278 controller.go:393] spec member_count:1 etcd_version:\"3.4.13\" \nI0908 02:58:26.428987    9278 controller.go:555] controller loop complete\nI0908 02:58:36.431331    9278 controller.go:187] starting controller iteration\nI0908 02:58:36.431357    9278 controller.go:264] Broadcasting leadership assertion with token \"MVypN_VwZ5W5I36x6S7Bhg\"\nI0908 02:58:36.431659    9278 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-a\" endpoints:\"172.20.56.43:3996\" > leadership_token:\"MVypN_VwZ5W5I36x6S7Bhg\" healthy:<id:\"etcd-a\" endpoints:\"172.20.56.43:3996\" > > \nI0908 02:58:36.431869    9278 controller.go:293] I am leader with token \"MVypN_VwZ5W5I36x6S7Bhg\"\nI0908 02:58:36.432366    9278 controller.go:703] base client OK for etcd for client urls [https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4001]\nI0908 02:58:36.443893    9278 controller.go:300] etcd cluster state: etcdClusterState\n  members:\n    
{\"name\":\"etcd-a\",\"peerURLs\":[\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2380\"],\"endpoints\":[\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4001\"],\"ID\":\"18237550313395906948\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-a\" endpoints:\"172.20.56.43:3996\" }, info=cluster_name:\"etcd\" node_configuration:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2380\" client_urls:\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:3994\" > etcd_state:<cluster:<cluster_token:\"WXVnfJEHTzHvFx4g6ALsig\" nodes:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2380\" client_urls:\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:3994\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0908 02:58:36.443973    9278 controller.go:301] etcd cluster members: map[18237550313395906948:{\"name\":\"etcd-a\",\"peerURLs\":[\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2380\"],\"endpoints\":[\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4001\"],\"ID\":\"18237550313395906948\"}]\nI0908 02:58:36.443989    9278 controller.go:639] sending member map to all peers: members:<name:\"etcd-a\" dns:\"etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io\" addresses:\"172.20.56.43\" > \nI0908 02:58:36.444154    9278 etcdserver.go:248] updating hosts: map[172.20.56.43:[etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]]\nI0908 02:58:36.444169    9278 hosts.go:84] hosts update: primary=map[172.20.56.43:[etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]], fallbacks=map[etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:[172.20.56.43 172.20.56.43]], final=map[172.20.56.43:[etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]]\nI0908 02:58:36.444216    9278 hosts.go:181] skipping update of unchanged /etc/hosts\nI0908 02:58:36.444274    9278 commands.go:38] not refreshing commands - TTL not hit\nI0908 02:58:36.444290    9278 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io/backups/etcd/main/control/etcd-cluster-created\"\nI0908 02:58:37.035421    9278 controller.go:393] spec member_count:1 etcd_version:\"3.4.13\" \nI0908 02:58:37.035489    9278 controller.go:555] controller loop complete\nI0908 02:58:43.216280    9278 volumes.go:86] AWS API Request: ec2/DescribeVolumes\nI0908 02:58:43.322605    9278 volumes.go:86] AWS API Request: ec2/DescribeInstances\nI0908 02:58:43.360613    9278 hosts.go:84] hosts update: primary=map[172.20.56.43:[etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]], fallbacks=map[etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:[172.20.56.43 172.20.56.43]], final=map[172.20.56.43:[etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]]\nI0908 02:58:43.360682    9278 hosts.go:181] skipping update of unchanged /etc/hosts\nI0908 02:58:47.037115    9278 controller.go:187] starting controller iteration\nI0908 02:58:47.037140    9278 controller.go:264] Broadcasting leadership assertion with token \"MVypN_VwZ5W5I36x6S7Bhg\"\nI0908 02:58:47.037350    9278 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-a\" endpoints:\"172.20.56.43:3996\" > leadership_token:\"MVypN_VwZ5W5I36x6S7Bhg\" 
healthy:<id:\"etcd-a\" endpoints:\"172.20.56.43:3996\" > > \nI0908 02:58:47.037459    9278 controller.go:293] I am leader with token \"MVypN_VwZ5W5I36x6S7Bhg\"\nI0908 02:58:47.037788    9278 controller.go:703] base client OK for etcd for client urls [https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4001]\nI0908 02:58:47.052967    9278 controller.go:300] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-a\",\"peerURLs\":[\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2380\"],\"endpoints\":[\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4001\"],\"ID\":\"18237550313395906948\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-a\" endpoints:\"172.20.56.43:3996\" }, info=cluster_name:\"etcd\" node_configuration:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2380\" client_urls:\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:3994\" > etcd_state:<cluster:<cluster_token:\"WXVnfJEHTzHvFx4g6ALsig\" nodes:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2380\" client_urls:\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:3994\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0908 02:58:47.053042    9278 controller.go:301] etcd cluster members: map[18237550313395906948:{\"name\":\"etcd-a\",\"peerURLs\":[\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2380\"],\"endpoints\":[\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4001\"],\"ID\":\"18237550313395906948\"}]\nI0908 02:58:47.053058    9278 controller.go:639] sending member map to all peers: members:<name:\"etcd-a\" dns:\"etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io\" addresses:\"172.20.56.43\" > \nI0908 02:58:47.053299    9278 etcdserver.go:248] updating hosts: map[172.20.56.43:[etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]]\nI0908 02:58:47.053316    9278 hosts.go:84] hosts update: primary=map[172.20.56.43:[etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]], fallbacks=map[etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:[172.20.56.43 172.20.56.43]], final=map[172.20.56.43:[etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]]\nI0908 02:58:47.053368    9278 hosts.go:181] skipping update of unchanged /etc/hosts\nI0908 02:58:47.053449    9278 commands.go:38] not refreshing commands - TTL not hit\nI0908 02:58:47.053465    9278 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io/backups/etcd/main/control/etcd-cluster-created\"\nI0908 02:58:47.648584    9278 controller.go:393] spec member_count:1 etcd_version:\"3.4.13\" \nI0908 02:58:47.648654    9278 controller.go:555] controller loop complete\nI0908 02:58:57.649953    9278 controller.go:187] starting controller iteration\nI0908 02:58:57.649975    9278 controller.go:264] Broadcasting leadership assertion with token \"MVypN_VwZ5W5I36x6S7Bhg\"\nI0908 02:58:57.650191    9278 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-a\" endpoints:\"172.20.56.43:3996\" > leadership_token:\"MVypN_VwZ5W5I36x6S7Bhg\" healthy:<id:\"etcd-a\" endpoints:\"172.20.56.43:3996\" > > \nI0908 02:58:57.650308    9278 controller.go:293] I am leader with token 
\"MVypN_VwZ5W5I36x6S7Bhg\"\nI0908 02:58:57.650646    9278 controller.go:703] base client OK for etcd for client urls [https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4001]\nI0908 02:58:57.661791    9278 controller.go:300] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-a\",\"peerURLs\":[\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2380\"],\"endpoints\":[\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4001\"],\"ID\":\"18237550313395906948\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-a\" endpoints:\"172.20.56.43:3996\" }, info=cluster_name:\"etcd\" node_configuration:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2380\" client_urls:\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:3994\" > etcd_state:<cluster:<cluster_token:\"WXVnfJEHTzHvFx4g6ALsig\" nodes:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2380\" client_urls:\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:3994\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0908 02:58:57.661891    9278 controller.go:301] etcd cluster members: map[18237550313395906948:{\"name\":\"etcd-a\",\"peerURLs\":[\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2380\"],\"endpoints\":[\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4001\"],\"ID\":\"18237550313395906948\"}]\nI0908 02:58:57.661911    9278 controller.go:639] sending member map to all peers: members:<name:\"etcd-a\" dns:\"etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io\" addresses:\"172.20.56.43\" > \nI0908 02:58:57.662074    9278 etcdserver.go:248] updating hosts: map[172.20.56.43:[etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]]\nI0908 02:58:57.662089    9278 hosts.go:84] hosts update: primary=map[172.20.56.43:[etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]], fallbacks=map[etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:[172.20.56.43 172.20.56.43]], final=map[172.20.56.43:[etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]]\nI0908 02:58:57.662140    9278 hosts.go:181] skipping update of unchanged /etc/hosts\nI0908 02:58:57.662202    9278 commands.go:38] not refreshing commands - TTL not hit\nI0908 02:58:57.662217    9278 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io/backups/etcd/main/control/etcd-cluster-created\"\nI0908 02:58:58.246014    9278 controller.go:393] spec member_count:1 etcd_version:\"3.4.13\" \nI0908 02:58:58.246081    9278 controller.go:555] controller loop complete\nI0908 02:59:08.247665    9278 controller.go:187] starting controller iteration\nI0908 02:59:08.247698    9278 controller.go:264] Broadcasting leadership assertion with token \"MVypN_VwZ5W5I36x6S7Bhg\"\nI0908 02:59:08.247995    9278 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-a\" endpoints:\"172.20.56.43:3996\" > leadership_token:\"MVypN_VwZ5W5I36x6S7Bhg\" healthy:<id:\"etcd-a\" endpoints:\"172.20.56.43:3996\" > > \nI0908 02:59:08.248148    9278 controller.go:293] I am leader with token \"MVypN_VwZ5W5I36x6S7Bhg\"\nI0908 02:59:08.248801    9278 controller.go:703] base client OK for etcd for client urls 
[https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4001]\nI0908 02:59:08.260861    9278 controller.go:300] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-a\",\"peerURLs\":[\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2380\"],\"endpoints\":[\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4001\"],\"ID\":\"18237550313395906948\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-a\" endpoints:\"172.20.56.43:3996\" }, info=cluster_name:\"etcd\" node_configuration:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2380\" client_urls:\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:3994\" > etcd_state:<cluster:<cluster_token:\"WXVnfJEHTzHvFx4g6ALsig\" nodes:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2380\" client_urls:\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:3994\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0908 02:59:08.260953    9278 controller.go:301] etcd cluster members: map[18237550313395906948:{\"name\":\"etcd-a\",\"peerURLs\":[\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:2380\"],\"endpoints\":[\"https://etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:4001\"],\"ID\":\"18237550313395906948\"}]\nI0908 02:59:08.260969    9278 controller.go:639] sending member map to all peers: members:<name:\"etcd-a\" dns:\"etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io\" addresses:\"172.20.56.43\" > \nI0908 02:59:08.261167    9278 etcdserver.go:248] updating hosts: map[172.20.56.43:[etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]]\nI0908 02:59:08.261184    9278 hosts.go:84] hosts update: primary=map[172.20.56.43:[etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]], fallbacks=map[etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io:[172.20.56.43 172.20.56.43]], final=map[172.20.56.43:[etcd-a.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io]]\nI0908 02:59:08.261240    9278 hosts.go:181] skipping update of unchanged /etc/hosts\nI0908 02:59:08.261311    9278 commands.go:38] not refreshing commands - TTL not hit\nI0908 02:59:08.261327    9278 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io/backups/etcd/main/control/etcd-cluster-created\"\nI0908 02:59:08.860671    9278 controller.go:393] spec member_count:1 etcd_version:\"3.4.13\" \nI0908 02:59:08.860740    9278 controller.go:555] controller loop complete\n==== END logs for container etcd-manager of pod kube-system/etcd-manager-main-ip-172-20-56-43.eu-west-3.compute.internal ====\n==== START logs for container kops-controller of pod kube-system/kops-controller-44znn ====\nI0908 02:54:38.512285       1 deleg.go:130] controller-runtime/metrics \"msg\"=\"metrics server is starting to listen\"  \"addr\"=\":0\"\nI0908 02:54:38.515690       1 deleg.go:130] setup \"msg\"=\"starting manager\"  \nI0908 02:54:38.515895       1 leaderelection.go:248] attempting to acquire leader lease kube-system/kops-controller-leader...\nI0908 02:54:38.516268       1 internal.go:383]  \"msg\"=\"starting metrics server\"  \"path\"=\"/metrics\"\nE0908 02:54:38.538631       1 event.go:329] Could not construct reference to: 
'&v1.Lease{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kops-controller-leader\", GenerateName:\"\", Namespace:\"kube-system\", SelfLink:\"\", UID:\"2f86bc2d-6dc7-48cd-ad54-cde4f88c7a9c\", ResourceVersion:\"463\", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63766666478, loc:(*time.Location)(0x46bbbc0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:\"\", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kops-controller\", Operation:\"Update\", APIVersion:\"coordination.k8s.io/v1\", Time:(*v1.Time)(0xc00027d8c0), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc00027d8d8), Subresource:\"\"}}}, Spec:v1.LeaseSpec{HolderIdentity:(*string)(nil), LeaseDurationSeconds:(*int32)(nil), AcquireTime:(*v1.MicroTime)(nil), RenewTime:(*v1.MicroTime)(nil), LeaseTransitions:(*int32)(nil)}}' due to: 'no kind is registered for the type v1.Lease in scheme \"cmd/kops-controller/main.go:48\"'. Will not report event: 'Normal' 'LeaderElection' 'ip-172-20-56-43.eu-west-3.compute.internal_b38abbeb-1904-47fb-ae1c-35dcefba1fd6 became leader'\nI0908 02:54:38.538730       1 leaderelection.go:258] successfully acquired lease kube-system/kops-controller-leader\nI0908 02:54:38.539092       1 recorder.go:104] events \"msg\"=\"Normal\"  \"message\"=\"ip-172-20-56-43.eu-west-3.compute.internal_b38abbeb-1904-47fb-ae1c-35dcefba1fd6 became leader\" \"object\"={\"kind\":\"ConfigMap\",\"namespace\":\"kube-system\",\"name\":\"kops-controller-leader\",\"uid\":\"fa4aa89f-26fc-4939-ad3b-0d8ddf1861d5\",\"apiVersion\":\"v1\",\"resourceVersion\":\"460\"} \"reason\"=\"LeaderElection\"\nI0908 02:54:38.540091       1 controller.go:165] controller/node \"msg\"=\"Starting EventSource\" \"reconciler group\"=\"\" \"reconciler kind\"=\"Node\" \"source\"={\"Type\":{\"metadata\":{\"creationTimestamp\":null},\"spec\":{},\"status\":{\"daemonEndpoints\":{\"kubeletEndpoint\":{\"Port\":0}},\"nodeInfo\":{\"machineID\":\"\",\"systemUUID\":\"\",\"bootID\":\"\",\"kernelVersion\":\"\",\"osImage\":\"\",\"containerRuntimeVersion\":\"\",\"kubeletVersion\":\"\",\"kubeProxyVersion\":\"\",\"operatingSystem\":\"\",\"architecture\":\"\"}}}}\nI0908 02:54:38.540120       1 controller.go:173] controller/node \"msg\"=\"Starting Controller\" \"reconciler group\"=\"\" \"reconciler kind\"=\"Node\" \nI0908 02:54:38.641748       1 controller.go:207] controller/node \"msg\"=\"Starting workers\" \"reconciler group\"=\"\" \"reconciler kind\"=\"Node\" \"worker count\"=1\nI0908 02:54:38.729230       1 node_controller.go:142] sending patch for node \"ip-172-20-56-43.eu-west-3.compute.internal\": \"{\\\"metadata\\\":{\\\"labels\\\":{\\\"kops.k8s.io/instancegroup\\\":\\\"master-eu-west-3a\\\"}}}\"\nI0908 02:54:53.718864       1 server.go:168] bootstrap 172.20.36.200:34204 ip-172-20-36-200.eu-west-3.compute.internal success\nI0908 02:55:03.690699       1 server.go:168] bootstrap 172.20.49.112:33826 ip-172-20-49-112.eu-west-3.compute.internal success\nI0908 02:55:03.992176       1 server.go:168] bootstrap 172.20.51.126:48834 ip-172-20-51-126.eu-west-3.compute.internal success\nI0908 02:55:09.065793       1 server.go:168] bootstrap 172.20.36.148:41814 ip-172-20-36-148.eu-west-3.compute.internal success\nI0908 02:55:30.868366       1 node_controller.go:142] sending patch for node 
\"ip-172-20-36-200.eu-west-3.compute.internal\": \"{\\\"metadata\\\":{\\\"labels\\\":{\\\"kops.k8s.io/instancegroup\\\":\\\"nodes-eu-west-3a\\\",\\\"kubernetes.io/role\\\":\\\"node\\\",\\\"node-role.kubernetes.io/node\\\":\\\"\\\"}}}\"\nI0908 02:55:40.198537       1 node_controller.go:142] sending patch for node \"ip-172-20-51-126.eu-west-3.compute.internal\": \"{\\\"metadata\\\":{\\\"labels\\\":{\\\"kops.k8s.io/instancegroup\\\":\\\"nodes-eu-west-3a\\\",\\\"kubernetes.io/role\\\":\\\"node\\\",\\\"node-role.kubernetes.io/node\\\":\\\"\\\"}}}\"\nI0908 02:55:40.275570       1 node_controller.go:142] sending patch for node \"ip-172-20-49-112.eu-west-3.compute.internal\": \"{\\\"metadata\\\":{\\\"labels\\\":{\\\"kops.k8s.io/instancegroup\\\":\\\"nodes-eu-west-3a\\\",\\\"kubernetes.io/role\\\":\\\"node\\\",\\\"node-role.kubernetes.io/node\\\":\\\"\\\"}}}\"\nI0908 02:55:45.365894       1 node_controller.go:142] sending patch for node \"ip-172-20-36-148.eu-west-3.compute.internal\": \"{\\\"metadata\\\":{\\\"labels\\\":{\\\"kops.k8s.io/instancegroup\\\":\\\"nodes-eu-west-3a\\\",\\\"kubernetes.io/role\\\":\\\"node\\\",\\\"node-role.kubernetes.io/node\\\":\\\"\\\"}}}\"\n==== END logs for container kops-controller of pod kube-system/kops-controller-44znn ====\n==== START logs for container kube-apiserver of pod kube-system/kube-apiserver-ip-172-20-56-43.eu-west-3.compute.internal ====\nFlag --insecure-port has been deprecated, This flag has no effect now and will be removed in v1.24.\nI0908 02:53:51.610069       1 flags.go:59] FLAG: --add-dir-header=\"false\"\nI0908 02:53:51.610905       1 flags.go:59] FLAG: --address=\"127.0.0.1\"\nI0908 02:53:51.610915       1 flags.go:59] FLAG: --admission-control=\"[]\"\nI0908 02:53:51.610922       1 flags.go:59] FLAG: --admission-control-config-file=\"\"\nI0908 02:53:51.610927       1 flags.go:59] FLAG: --advertise-address=\"<nil>\"\nI0908 02:53:51.610938       1 flags.go:59] FLAG: --allow-metric-labels=\"[]\"\nI0908 02:53:51.610946       1 flags.go:59] FLAG: --allow-privileged=\"true\"\nI0908 02:53:51.610952       1 flags.go:59] FLAG: --alsologtostderr=\"true\"\nI0908 02:53:51.610957       1 flags.go:59] FLAG: --anonymous-auth=\"false\"\nI0908 02:53:51.610961       1 flags.go:59] FLAG: --api-audiences=\"[kubernetes.svc.default]\"\nI0908 02:53:51.610968       1 flags.go:59] FLAG: --apiserver-count=\"1\"\nI0908 02:53:51.610973       1 flags.go:59] FLAG: --audit-log-batch-buffer-size=\"10000\"\nI0908 02:53:51.610982       1 flags.go:59] FLAG: --audit-log-batch-max-size=\"1\"\nI0908 02:53:51.610986       1 flags.go:59] FLAG: --audit-log-batch-max-wait=\"0s\"\nI0908 02:53:51.610991       1 flags.go:59] FLAG: --audit-log-batch-throttle-burst=\"0\"\nI0908 02:53:51.610996       1 flags.go:59] FLAG: --audit-log-batch-throttle-enable=\"false\"\nI0908 02:53:51.611000       1 flags.go:59] FLAG: --audit-log-batch-throttle-qps=\"0\"\nI0908 02:53:51.611006       1 flags.go:59] FLAG: --audit-log-compress=\"false\"\nI0908 02:53:51.611010       1 flags.go:59] FLAG: --audit-log-format=\"json\"\nI0908 02:53:51.611018       1 flags.go:59] FLAG: --audit-log-maxage=\"0\"\nI0908 02:53:51.611022       1 flags.go:59] FLAG: --audit-log-maxbackup=\"0\"\nI0908 02:53:51.611026       1 flags.go:59] FLAG: --audit-log-maxsize=\"0\"\nI0908 02:53:51.611030       1 flags.go:59] FLAG: --audit-log-mode=\"blocking\"\nI0908 02:53:51.611035       1 flags.go:59] FLAG: --audit-log-path=\"\"\nI0908 02:53:51.611039       1 flags.go:59] FLAG: --audit-log-truncate-enabled=\"false\"\nI0908 02:53:51.611043    
   1 flags.go:59] FLAG: --audit-log-truncate-max-batch-size=\"10485760\"\nI0908 02:53:51.611053       1 flags.go:59] FLAG: --audit-log-truncate-max-event-size=\"102400\"\nI0908 02:53:51.611059       1 flags.go:59] FLAG: --audit-log-version=\"audit.k8s.io/v1\"\nI0908 02:53:51.611063       1 flags.go:59] FLAG: --audit-policy-file=\"\"\nI0908 02:53:51.611067       1 flags.go:59] FLAG: --audit-webhook-batch-buffer-size=\"10000\"\nI0908 02:53:51.611071       1 flags.go:59] FLAG: --audit-webhook-batch-initial-backoff=\"10s\"\nI0908 02:53:51.611075       1 flags.go:59] FLAG: --audit-webhook-batch-max-size=\"400\"\nI0908 02:53:51.611080       1 flags.go:59] FLAG: --audit-webhook-batch-max-wait=\"30s\"\nI0908 02:53:51.611088       1 flags.go:59] FLAG: --audit-webhook-batch-throttle-burst=\"15\"\nI0908 02:53:51.611092       1 flags.go:59] FLAG: --audit-webhook-batch-throttle-enable=\"true\"\nI0908 02:53:51.611096       1 flags.go:59] FLAG: --audit-webhook-batch-throttle-qps=\"10\"\nI0908 02:53:51.611101       1 flags.go:59] FLAG: --audit-webhook-config-file=\"\"\nI0908 02:53:51.611105       1 flags.go:59] FLAG: --audit-webhook-initial-backoff=\"10s\"\nI0908 02:53:51.611109       1 flags.go:59] FLAG: --audit-webhook-mode=\"batch\"\nI0908 02:53:51.611113       1 flags.go:59] FLAG: --audit-webhook-truncate-enabled=\"false\"\nI0908 02:53:51.611121       1 flags.go:59] FLAG: --audit-webhook-truncate-max-batch-size=\"10485760\"\nI0908 02:53:51.611126       1 flags.go:59] FLAG: --audit-webhook-truncate-max-event-size=\"102400\"\nI0908 02:53:51.611130       1 flags.go:59] FLAG: --audit-webhook-version=\"audit.k8s.io/v1\"\nI0908 02:53:51.611134       1 flags.go:59] FLAG: --authentication-token-webhook-cache-ttl=\"2m0s\"\nI0908 02:53:51.611139       1 flags.go:59] FLAG: --authentication-token-webhook-config-file=\"\"\nI0908 02:53:51.611143       1 flags.go:59] FLAG: --authentication-token-webhook-version=\"v1beta1\"\nI0908 02:53:51.611147       1 flags.go:59] FLAG: --authorization-mode=\"[Node,RBAC]\"\nI0908 02:53:51.611155       1 flags.go:59] FLAG: --authorization-policy-file=\"\"\nI0908 02:53:51.611163       1 flags.go:59] FLAG: --authorization-webhook-cache-authorized-ttl=\"5m0s\"\nI0908 02:53:51.611168       1 flags.go:59] FLAG: --authorization-webhook-cache-unauthorized-ttl=\"30s\"\nI0908 02:53:51.611172       1 flags.go:59] FLAG: --authorization-webhook-config-file=\"\"\nI0908 02:53:51.611176       1 flags.go:59] FLAG: --authorization-webhook-version=\"v1beta1\"\nI0908 02:53:51.611180       1 flags.go:59] FLAG: --bind-address=\"0.0.0.0\"\nI0908 02:53:51.611185       1 flags.go:59] FLAG: --cert-dir=\"/var/run/kubernetes\"\nI0908 02:53:51.611189       1 flags.go:59] FLAG: --client-ca-file=\"/srv/kubernetes/ca.crt\"\nI0908 02:53:51.611198       1 flags.go:59] FLAG: --cloud-config=\"/etc/kubernetes/cloud.config\"\nI0908 02:53:51.611203       1 flags.go:59] FLAG: --cloud-provider=\"aws\"\nI0908 02:53:51.611207       1 flags.go:59] FLAG: --cloud-provider-gce-l7lb-src-cidrs=\"130.211.0.0/22,35.191.0.0/16\"\nI0908 02:53:51.611214       1 flags.go:59] FLAG: --cloud-provider-gce-lb-src-cidrs=\"130.211.0.0/22,209.85.152.0/22,209.85.204.0/22,35.191.0.0/16\"\nI0908 02:53:51.611220       1 flags.go:59] FLAG: --contention-profiling=\"false\"\nI0908 02:53:51.611225       1 flags.go:59] FLAG: --cors-allowed-origins=\"[]\"\nI0908 02:53:51.611234       1 flags.go:59] FLAG: --default-not-ready-toleration-seconds=\"300\"\nI0908 02:53:51.611238       1 flags.go:59] FLAG: 
--default-unreachable-toleration-seconds=\"300\"\nI0908 02:53:51.611243       1 flags.go:59] FLAG: --default-watch-cache-size=\"100\"\nI0908 02:53:51.611247       1 flags.go:59] FLAG: --delete-collection-workers=\"1\"\nI0908 02:53:51.611252       1 flags.go:59] FLAG: --deserialization-cache-size=\"0\"\nI0908 02:53:51.611257       1 flags.go:59] FLAG: --disable-admission-plugins=\"[]\"\nI0908 02:53:51.611263       1 flags.go:59] FLAG: --disabled-metrics=\"[]\"\nI0908 02:53:51.611271       1 flags.go:59] FLAG: --egress-selector-config-file=\"\"\nI0908 02:53:51.611275       1 flags.go:59] FLAG: --enable-admission-plugins=\"[NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,NodeRestriction,ResourceQuota]\"\nI0908 02:53:51.611287       1 flags.go:59] FLAG: --enable-aggregator-routing=\"false\"\nI0908 02:53:51.611292       1 flags.go:59] FLAG: --enable-bootstrap-token-auth=\"false\"\nI0908 02:53:51.611296       1 flags.go:59] FLAG: --enable-garbage-collector=\"true\"\nI0908 02:53:51.611300       1 flags.go:59] FLAG: --enable-logs-handler=\"true\"\nI0908 02:53:51.611304       1 flags.go:59] FLAG: --enable-priority-and-fairness=\"true\"\nI0908 02:53:51.611308       1 flags.go:59] FLAG: --enable-swagger-ui=\"false\"\nI0908 02:53:51.611317       1 flags.go:59] FLAG: --encryption-provider-config=\"\"\nI0908 02:53:51.611321       1 flags.go:59] FLAG: --endpoint-reconciler-type=\"lease\"\nI0908 02:53:51.611325       1 flags.go:59] FLAG: --etcd-cafile=\"/srv/kubernetes/kube-apiserver/etcd-ca.crt\"\nI0908 02:53:51.611330       1 flags.go:59] FLAG: --etcd-certfile=\"/srv/kubernetes/kube-apiserver/etcd-client.crt\"\nI0908 02:53:51.611335       1 flags.go:59] FLAG: --etcd-compaction-interval=\"5m0s\"\nI0908 02:53:51.611339       1 flags.go:59] FLAG: --etcd-count-metric-poll-period=\"1m0s\"\nI0908 02:53:51.611343       1 flags.go:59] FLAG: --etcd-db-metric-poll-interval=\"30s\"\nI0908 02:53:51.611351       1 flags.go:59] FLAG: --etcd-healthcheck-timeout=\"2s\"\nI0908 02:53:51.611355       1 flags.go:59] FLAG: --etcd-keyfile=\"/srv/kubernetes/kube-apiserver/etcd-client.key\"\nI0908 02:53:51.611360       1 flags.go:59] FLAG: --etcd-prefix=\"/registry\"\nI0908 02:53:51.611364       1 flags.go:59] FLAG: --etcd-servers=\"[https://127.0.0.1:4001]\"\nI0908 02:53:51.611370       1 flags.go:59] FLAG: --etcd-servers-overrides=\"[/events#https://127.0.0.1:4002]\"\nI0908 02:53:51.611377       1 flags.go:59] FLAG: --event-ttl=\"1h0m0s\"\nI0908 02:53:51.611381       1 flags.go:59] FLAG: --experimental-encryption-provider-config=\"\"\nI0908 02:53:51.611389       1 flags.go:59] FLAG: --experimental-logging-sanitization=\"false\"\nI0908 02:53:51.611394       1 flags.go:59] FLAG: --external-hostname=\"\"\nI0908 02:53:51.611398       1 flags.go:59] FLAG: --feature-gates=\"\"\nI0908 02:53:51.611404       1 flags.go:59] FLAG: --goaway-chance=\"0\"\nI0908 02:53:51.611409       1 flags.go:59] FLAG: --help=\"false\"\nI0908 02:53:51.611414       1 flags.go:59] FLAG: --http2-max-streams-per-connection=\"0\"\nI0908 02:53:51.611418       1 flags.go:59] FLAG: --identity-lease-duration-seconds=\"3600\"\nI0908 02:53:51.611426       1 flags.go:59] FLAG: --identity-lease-renew-interval-seconds=\"10\"\nI0908 02:53:51.611430       1 flags.go:59] FLAG: --insecure-bind-address=\"127.0.0.1\"\nI0908 02:53:51.611435       1 flags.go:59] FLAG: --insecure-port=\"0\"\nI0908 02:53:51.611439       1 flags.go:59] FLAG: 
--kubelet-certificate-authority=\"\"\nI0908 02:53:51.611443       1 flags.go:59] FLAG: --kubelet-client-certificate=\"/srv/kubernetes/kube-apiserver/kubelet-api.crt\"\nI0908 02:53:51.611448       1 flags.go:59] FLAG: --kubelet-client-key=\"/srv/kubernetes/kube-apiserver/kubelet-api.key\"\nI0908 02:53:51.611453       1 flags.go:59] FLAG: --kubelet-https=\"true\"\nI0908 02:53:51.611457       1 flags.go:59] FLAG: --kubelet-port=\"10250\"\nI0908 02:53:51.611468       1 flags.go:59] FLAG: --kubelet-preferred-address-types=\"[InternalIP,Hostname,ExternalIP]\"\nI0908 02:53:51.611474       1 flags.go:59] FLAG: --kubelet-read-only-port=\"10255\"\nI0908 02:53:51.611478       1 flags.go:59] FLAG: --kubelet-timeout=\"5s\"\nI0908 02:53:51.611482       1 flags.go:59] FLAG: --kubernetes-service-node-port=\"0\"\nI0908 02:53:51.611486       1 flags.go:59] FLAG: --lease-reuse-duration-seconds=\"60\"\nI0908 02:53:51.611490       1 flags.go:59] FLAG: --livez-grace-period=\"0s\"\nI0908 02:53:51.611494       1 flags.go:59] FLAG: --log-backtrace-at=\":0\"\nI0908 02:53:51.611505       1 flags.go:59] FLAG: --log-dir=\"\"\nI0908 02:53:51.611509       1 flags.go:59] FLAG: --log-file=\"/var/log/kube-apiserver.log\"\nI0908 02:53:51.611514       1 flags.go:59] FLAG: --log-file-max-size=\"1800\"\nI0908 02:53:51.611518       1 flags.go:59] FLAG: --log-flush-frequency=\"5s\"\nI0908 02:53:51.611523       1 flags.go:59] FLAG: --logging-format=\"text\"\nI0908 02:53:51.611527       1 flags.go:59] FLAG: --logtostderr=\"false\"\nI0908 02:53:51.611535       1 flags.go:59] FLAG: --master-service-namespace=\"default\"\nI0908 02:53:51.611540       1 flags.go:59] FLAG: --max-connection-bytes-per-sec=\"0\"\nI0908 02:53:51.611544       1 flags.go:59] FLAG: --max-mutating-requests-inflight=\"200\"\nI0908 02:53:51.611548       1 flags.go:59] FLAG: --max-requests-inflight=\"400\"\nI0908 02:53:51.611552       1 flags.go:59] FLAG: --min-request-timeout=\"1800\"\nI0908 02:53:51.611557       1 flags.go:59] FLAG: --oidc-ca-file=\"\"\nI0908 02:53:51.611561       1 flags.go:59] FLAG: --oidc-client-id=\"\"\nI0908 02:53:51.611565       1 flags.go:59] FLAG: --oidc-groups-claim=\"\"\nI0908 02:53:51.611573       1 flags.go:59] FLAG: --oidc-groups-prefix=\"\"\nI0908 02:53:51.611577       1 flags.go:59] FLAG: --oidc-issuer-url=\"\"\nI0908 02:53:51.611581       1 flags.go:59] FLAG: --oidc-required-claim=\"\"\nI0908 02:53:51.611587       1 flags.go:59] FLAG: --oidc-signing-algs=\"[RS256]\"\nI0908 02:53:51.611593       1 flags.go:59] FLAG: --oidc-username-claim=\"sub\"\nI0908 02:53:51.611597       1 flags.go:59] FLAG: --oidc-username-prefix=\"\"\nI0908 02:53:51.611601       1 flags.go:59] FLAG: --one-output=\"false\"\nI0908 02:53:51.611609       1 flags.go:59] FLAG: --permit-address-sharing=\"false\"\nI0908 02:53:51.611613       1 flags.go:59] FLAG: --permit-port-sharing=\"false\"\nI0908 02:53:51.611617       1 flags.go:59] FLAG: --port=\"0\"\nI0908 02:53:51.611621       1 flags.go:59] FLAG: --profiling=\"true\"\nI0908 02:53:51.611625       1 flags.go:59] FLAG: --proxy-client-cert-file=\"/srv/kubernetes/kube-apiserver/apiserver-aggregator.crt\"\nI0908 02:53:51.611631       1 flags.go:59] FLAG: --proxy-client-key-file=\"/srv/kubernetes/kube-apiserver/apiserver-aggregator.key\"\nI0908 02:53:51.611636       1 flags.go:59] FLAG: --request-timeout=\"1m0s\"\nI0908 02:53:51.611644       1 flags.go:59] FLAG: --requestheader-allowed-names=\"[aggregator]\"\nI0908 02:53:51.611649       1 flags.go:59] FLAG: 
--requestheader-client-ca-file=\"/srv/kubernetes/kube-apiserver/apiserver-aggregator-ca.crt\"\nI0908 02:53:51.611655       1 flags.go:59] FLAG: --requestheader-extra-headers-prefix=\"[X-Remote-Extra-]\"\nI0908 02:53:51.611662       1 flags.go:59] FLAG: --requestheader-group-headers=\"[X-Remote-Group]\"\nI0908 02:53:51.611668       1 flags.go:59] FLAG: --requestheader-username-headers=\"[X-Remote-User]\"\nI0908 02:53:51.611674       1 flags.go:59] FLAG: --runtime-config=\"\"\nI0908 02:53:51.611680       1 flags.go:59] FLAG: --secure-port=\"443\"\nI0908 02:53:51.611688       1 flags.go:59] FLAG: --service-account-api-audiences=\"[kubernetes.svc.default]\"\nI0908 02:53:51.611693       1 flags.go:59] FLAG: --service-account-extend-token-expiration=\"true\"\nI0908 02:53:51.611697       1 flags.go:59] FLAG: --service-account-issuer=\"https://api.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io\"\nI0908 02:53:51.611702       1 flags.go:59] FLAG: --service-account-jwks-uri=\"https://api.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io/openid/v1/jwks\"\nI0908 02:53:51.611709       1 flags.go:59] FLAG: --service-account-key-file=\"[/srv/kubernetes/kube-apiserver/service-account.pub]\"\nI0908 02:53:51.611718       1 flags.go:59] FLAG: --service-account-lookup=\"true\"\nI0908 02:53:51.611722       1 flags.go:59] FLAG: --service-account-max-token-expiration=\"0s\"\nI0908 02:53:51.611729       1 flags.go:59] FLAG: --service-account-signing-key-file=\"/srv/kubernetes/kube-apiserver/service-account.key\"\nI0908 02:53:51.611735       1 flags.go:59] FLAG: --service-cluster-ip-range=\"100.64.0.0/13\"\nI0908 02:53:51.611739       1 flags.go:59] FLAG: --service-node-port-range=\"30000-32767\"\nI0908 02:53:51.611746       1 flags.go:59] FLAG: --show-hidden-metrics-for-version=\"\"\nI0908 02:53:51.611750       1 flags.go:59] FLAG: --shutdown-delay-duration=\"0s\"\nI0908 02:53:51.611755       1 flags.go:59] FLAG: --skip-headers=\"false\"\nI0908 02:53:51.611759       1 flags.go:59] FLAG: --skip-log-headers=\"false\"\nI0908 02:53:51.611767       1 flags.go:59] FLAG: --ssh-keyfile=\"\"\nI0908 02:53:51.611771       1 flags.go:59] FLAG: --ssh-user=\"\"\nI0908 02:53:51.611775       1 flags.go:59] FLAG: --stderrthreshold=\"2\"\nI0908 02:53:51.611779       1 flags.go:59] FLAG: --storage-backend=\"etcd3\"\nI0908 02:53:51.611783       1 flags.go:59] FLAG: --storage-media-type=\"application/vnd.kubernetes.protobuf\"\nI0908 02:53:51.611788       1 flags.go:59] FLAG: --strict-transport-security-directives=\"[]\"\nI0908 02:53:51.611793       1 flags.go:59] FLAG: --target-ram-mb=\"0\"\nI0908 02:53:51.611800       1 flags.go:59] FLAG: --tls-cert-file=\"/srv/kubernetes/kube-apiserver/server.crt\"\nI0908 02:53:51.611805       1 flags.go:59] FLAG: --tls-cipher-suites=\"[]\"\nI0908 02:53:51.611811       1 flags.go:59] FLAG: --tls-min-version=\"\"\nI0908 02:53:51.611815       1 flags.go:59] FLAG: --tls-private-key-file=\"/srv/kubernetes/kube-apiserver/server.key\"\nI0908 02:53:51.611835       1 flags.go:59] FLAG: --tls-sni-cert-key=\"[]\"\nI0908 02:53:51.611841       1 flags.go:59] FLAG: --token-auth-file=\"\"\nI0908 02:53:51.611845       1 flags.go:59] FLAG: --v=\"2\"\nI0908 02:53:51.611853       1 flags.go:59] FLAG: --version=\"false\"\nI0908 02:53:51.611859       1 flags.go:59] FLAG: --vmodule=\"\"\nI0908 02:53:51.611864       1 flags.go:59] FLAG: --watch-cache=\"true\"\nI0908 02:53:51.611868       1 flags.go:59] FLAG: --watch-cache-sizes=\"[]\"\nI0908 02:53:51.612199       1 server.go:629] external host was not specified, 
using 172.20.56.43\nI0908 02:53:51.613630       1 server.go:181] Version: v1.21.4\nI0908 02:53:51.615079       1 dynamic_serving_content.go:111] Loaded a new cert/key pair for \"serving-cert::/srv/kubernetes/kube-apiserver/server.crt::/srv/kubernetes/kube-apiserver/server.key\"\nI0908 02:53:52.054270       1 dynamic_cafile_content.go:129] Loaded a new CA Bundle and Verifier for \"client-ca-bundle::/srv/kubernetes/ca.crt\"\nI0908 02:53:52.054383       1 dynamic_cafile_content.go:129] Loaded a new CA Bundle and Verifier for \"request-header::/srv/kubernetes/kube-apiserver/apiserver-aggregator-ca.crt\"\nI0908 02:53:52.054783       1 shared_informer.go:240] Waiting for caches to sync for node_authorizer\nW0908 02:53:52.055314       1 admission.go:78] PersistentVolumeLabel admission controller is deprecated. Please remove this controller from your configuration files and scripts.\nI0908 02:53:52.055749       1 plugins.go:158] Loaded 13 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,PersistentVolumeLabel,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.\nI0908 02:53:52.055760       1 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.\nI0908 02:53:52.055790       1 apf_controller.go:195] NewTestableController \"Controller\" with serverConcurrencyLimit=600, requestWaitLimit=15s, name=Controller, asFieldManager=\"api-priority-and-fairness-config-consumer-v1\"\nI0908 02:53:52.056033       1 dynamic_cafile_content.go:129] Loaded a new CA Bundle and Verifier for \"client-ca-bundle::/srv/kubernetes/ca.crt\"\nI0908 02:53:52.056160       1 dynamic_cafile_content.go:129] Loaded a new CA Bundle and Verifier for \"request-header::/srv/kubernetes/kube-apiserver/apiserver-aggregator-ca.crt\"\nW0908 02:53:52.056523       1 admission.go:78] PersistentVolumeLabel admission controller is deprecated. Please remove this controller from your configuration files and scripts.\nI0908 02:53:52.056791       1 plugins.go:158] Loaded 13 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,PersistentVolumeLabel,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.\nI0908 02:53:52.056804       1 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.\nI0908 02:53:52.058730       1 client.go:360] parsed scheme: \"endpoint\"\nI0908 02:53:52.058774       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:4001  <nil> 0 <nil>}]\nW0908 02:53:52.059076       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:4001  <nil> 0 <nil>}. Err :connection error: desc = \"transport: Error while dialing dial tcp 127.0.0.1:4001: connect: connection refused\". 
Reconnecting...\nI0908 02:53:53.054632       1 client.go:360] parsed scheme: \"endpoint\"\nI0908 02:53:53.054667       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:4001  <nil> 0 <nil>}]\n
... skipping 7 repeated grpc "connection refused" dial retries to https://127.0.0.1:4001 (02:53:53 through 02:53:58) ...
I0908 02:54:01.659696       1 client.go:360] parsed scheme: \"endpoint\"\nI0908 02:54:01.659728       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:4001  <nil> 0 <nil>}]\nI0908 02:54:01.668601       1 client.go:360] parsed scheme: \"passthrough\"\nI0908 02:54:01.668651       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:4001  <nil> 0 <nil>}] <nil> <nil>}\nI0908 02:54:01.668662       1 clientconn.go:948] ClientConn switching balancer to \"pick_first\"\nI0908 02:54:01.668857       1 balancer_conn_wrappers.go:78] pickfirstBalancer: HandleSubConnStateChange: 0xc0007ab2b0, {CONNECTING <nil>}\nI0908 02:54:01.676247       1 balancer_conn_wrappers.go:78] pickfirstBalancer: HandleSubConnStateChange: 0xc0007ab2b0, {READY <nil>}\nI0908 02:54:01.678047       1 controlbuf.go:508] transport: loopyWriter.run returning. 
connection error: desc = \"transport is closing\"\nI0908 02:54:01.678182       1 store.go:1428] Monitoring customresourcedefinitions.apiextensions.k8s.io count at <storage-prefix>//apiextensions.k8s.io/customresourcedefinitions\nI0908 02:54:01.678533       1 client.go:360] parsed scheme: \"endpoint\"\nI0908 02:54:01.678554       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:4001  <nil> 0 <nil>}]\nI0908 02:54:01.682770       1 cacher.go:405] cacher (*apiextensions.CustomResourceDefinition): initialized\nI0908 02:54:01.689180       1 store.go:1428] Monitoring customresourcedefinitions.apiextensions.k8s.io count at <storage-prefix>//apiextensions.k8s.io/customresourcedefinitions\nI0908 02:54:01.704014       1 cacher.go:405] cacher (*apiextensions.CustomResourceDefinition): initialized\nI0908 02:54:01.728716       1 instance.go:283] Using reconciler: lease\nI0908 02:54:01.729548       1 client.go:360] parsed scheme: \"endpoint\"\nI0908 02:54:01.729583       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:4001  <nil> 0 <nil>}]\nI0908 02:54:01.741592       1 client.go:360] parsed scheme: \"endpoint\"\nI0908 02:54:01.741620       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:4001  <nil> 0 <nil>}]\nI0908 02:54:01.748816       1 store.go:1428] Monitoring podtemplates count at <storage-prefix>//podtemplates\nI0908 02:54:01.749310       1 client.go:360] parsed scheme: \"endpoint\"\nI0908 02:54:01.749332       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:4002  <nil> 0 <nil>}]\nI0908 02:54:01.749846       1 cacher.go:405] cacher (*core.PodTemplate): initialized\nI0908 02:54:01.755427       1 client.go:360] parsed scheme: \"endpoint\"\nI0908 02:54:01.755448       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:4002  <nil> 0 <nil>}]\nI0908 02:54:01.760762       1 store.go:1428] Monitoring events count at <storage-prefix>//events\nI0908 02:54:01.760931       1 client.go:360] parsed scheme: \"passthrough\"\nI0908 02:54:01.760963       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:4002  <nil> 0 <nil>}] <nil> <nil>}\nI0908 02:54:01.760972       1 clientconn.go:948] ClientConn switching balancer to \"pick_first\"\nI0908 02:54:01.761103       1 balancer_conn_wrappers.go:78] pickfirstBalancer: HandleSubConnStateChange: 0xc0010901f0, {CONNECTING <nil>}\nI0908 02:54:01.761113       1 client.go:360] parsed scheme: \"endpoint\"\nI0908 02:54:01.761128       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:4001  <nil> 0 <nil>}]\nI0908 02:54:01.770080       1 balancer_conn_wrappers.go:78] pickfirstBalancer: HandleSubConnStateChange: 0xc0010901f0, {READY <nil>}\nI0908 02:54:01.770217       1 store.go:1428] Monitoring limitranges count at <storage-prefix>//limitranges\nI0908 02:54:01.770699       1 client.go:360] parsed scheme: \"endpoint\"\nI0908 02:54:01.770718       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:4001  <nil> 0 <nil>}]\nI0908 02:54:01.771066       1 controlbuf.go:508] transport: loopyWriter.run returning. 
connection error: desc = \"transport is closing\"\nI0908 02:54:01.776895       1 cacher.go:405] cacher (*core.LimitRange): initialized\nI0908 02:54:01.777353       1 store.go:1428] Monitoring resourcequotas count at <storage-prefix>//resourcequotas\nI0908 02:54:01.777728       1 client.go:360] parsed scheme: \"endpoint\"\nI0908 02:54:01.777752       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:4001  <nil> 0 <nil>}]\nI0908 02:54:01.778921       1 cacher.go:405] cacher (*core.ResourceQuota): initialized\nI0908 02:54:01.783567       1 store.go:1428] Monitoring secrets count at <storage-prefix>//secrets\nI0908 02:54:01.783965       1 client.go:360] parsed scheme: \"endpoint\"\nI0908 02:54:01.783985       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:4001  <nil> 0 <nil>}]\nI0908 02:54:01.785121       1 cacher.go:405] cacher (*core.Secret): initialized\nI0908 02:54:01.790136       1 store.go:1428] Monitoring persistentvolumes count at <storage-prefix>//persistentvolumes\nI0908 02:54:01.790619       1 client.go:360] parsed scheme: \"endpoint\"\nI0908 02:54:01.790638       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:4001  <nil> 0 <nil>}]\nI0908 02:54:01.791080       1 cacher.go:405] cacher (*core.PersistentVolume): initialized\nI0908 02:54:01.796269       1 store.go:1428] Monitoring persistentvolumeclaims count at <storage-prefix>//persistentvolumeclaims\nI0908 02:54:01.796683       1 client.go:360] parsed scheme: \"endpoint\"\nI0908 02:54:01.796704       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:4001  <nil> 0 <nil>}]\nI0908 02:54:01.797540       1 cacher.go:405] cacher (*core.PersistentVolumeClaim): initialized\nI0908 02:54:01.803582       1 store.go:1428] Monitoring configmaps count at <storage-prefix>//configmaps\nI0908 02:54:01.804031       1 client.go:360] parsed scheme: \"endpoint\"\nI0908 02:54:01.804050       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:4001  <nil> 0 <nil>}]\nI0908 02:54:01.804496       1 cacher.go:405] cacher (*core.ConfigMap): initialized\nI0908 02:54:01.812172       1 store.go:1428] Monitoring namespaces count at <storage-prefix>//namespaces\nI0908 02:54:01.812579       1 client.go:360] parsed scheme: \"endpoint\"\nI0908 02:54:01.812598       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:4001  <nil> 0 <nil>}]\nI0908 02:54:01.813844       1 cacher.go:405] cacher (*core.Namespace): initialized\nI0908 02:54:01.818360       1 store.go:1428] Monitoring endpoints count at <storage-prefix>//services/endpoints\nI0908 02:54:01.819088       1 client.go:360] parsed scheme: \"endpoint\"\nI0908 02:54:01.819117       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:4001  <nil> 0 <nil>}]\nI0908 02:54:01.826658       1 cacher.go:405] cacher (*core.Endpoints): initialized\nI0908 02:54:01.830486       1 store.go:1428] Monitoring nodes count at <storage-prefix>//minions\nI0908 02:54:01.831428       1 cacher.go:405] cacher (*core.Node): initialized\nI0908 02:54:01.832392       1 client.go:360] parsed scheme: \"endpoint\"\nI0908 02:54:01.832414       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:4001  <nil> 0 <nil>}]\nI0908 02:54:01.837473       1 store.go:1428] Monitoring pods count at <storage-prefix>//pods\nI0908 02:54:01.837816       1 client.go:360] parsed scheme: 
\"endpoint\"\nI0908 02:54:01.837868       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:4001  <nil> 0 <nil>}]\nI0908 02:54:01.840689       1 cacher.go:405] cacher (*core.Pod): initialized\nI0908 02:54:01.843627       1 store.go:1428] Monitoring serviceaccounts count at <storage-prefix>//serviceaccounts\nI0908 02:54:01.843953       1 client.go:360] parsed scheme: \"endpoint\"\nI0908 02:54:01.843971       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:4001  <nil> 0 <nil>}]\nI0908 02:54:01.849189       1 cacher.go:405] cacher (*core.ServiceAccount): initialized\nI0908 02:54:01.849999       1 client.go:360] parsed scheme: \"endpoint\"\nI0908 02:54:01.850019       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:4001  <nil> 0 <nil>}]\nI0908 02:54:01.855547       1 client.go:360] parsed scheme: \"endpoint\"\nI0908 02:54:01.855569       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:4001  <nil> 0 <nil>}]\nI0908 02:54:01.860649       1 store.go:1428] Monitoring replicationcontrollers count at <storage-prefix>//controllers\nI0908 02:54:01.861041       1 client.go:360] parsed scheme: \"endpoint\"\nI0908 02:54:01.861064       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:4001  <nil> 0 <nil>}]\nI0908 02:54:01.862087       1 cacher.go:405] cacher (*core.ReplicationController): initialized\nI0908 02:54:01.866501       1 store.go:1428] Monitoring services count at <storage-prefix>//services/specs\nI0908 02:54:01.866544       1 rest.go:130] the default service ipfamily for this cluster is: IPv4\nI0908 02:54:01.872210       1 cacher.go:405] cacher (*core.Service): initialized\nI0908 02:54:01.929913       1 instance.go:586] Skipping disabled API group \"internal.apiserver.k8s.io\".\nI0908 02:54:01.929974       1 instance.go:607] Enabling API group \"authentication.k8s.io\".\nI0908 02:54:01.930568       1 instance.go:607] Enabling API group \"authorization.k8s.io\".\nI0908 02:54:01.931057       1 client.go:360] parsed scheme: \"endpoint\"\nI0908 02:54:01.931089       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:4001  <nil> 0 <nil>}]\nI0908 02:54:01.936578       1 store.go:1428] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers\nI0908 02:54:01.936986       1 client.go:360] parsed scheme: \"endpoint\"\nI0908 02:54:01.937015       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:4001  <nil> 0 <nil>}]\nI0908 02:54:01.938199       1 cacher.go:405] cacher (*autoscaling.HorizontalPodAutoscaler): initialized\nI0908 02:54:01.943137       1 store.go:1428] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers\nI0908 02:54:01.943536       1 client.go:360] parsed scheme: \"endpoint\"\nI0908 02:54:01.943555       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:4001  <nil> 0 <nil>}]\nI0908 02:54:01.947177       1 cacher.go:405] cacher (*autoscaling.HorizontalPodAutoscaler): initialized\nI0908 02:54:01.950467       1 store.go:1428] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers\nI0908 02:54:01.950545       1 instance.go:607] Enabling API group \"autoscaling\".\nI0908 02:54:01.950949       1 client.go:360] parsed scheme: \"endpoint\"\nI0908 02:54:01.950976       1 
endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:4001  <nil> 0 <nil>}]\nI0908 02:54:01.952005       1 cacher.go:405] cacher (*autoscaling.HorizontalPodAutoscaler): initialized\nI0908 02:54:01.957185       1 store.go:1428] Monitoring jobs.batch count at <storage-prefix>//jobs\nI0908 02:54:01.957587       1 client.go:360] parsed scheme: \"endpoint\"\nI0908 02:54:01.957606       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:4001  <nil> 0 <nil>}]\nI0908 02:54:01.958777       1 cacher.go:405] cacher (*batch.Job): initialized\nI0908 02:54:01.963553       1 store.go:1428] Monitoring cronjobs.batch count at <storage-prefix>//cronjobs\nI0908 02:54:01.963911       1 client.go:360] parsed scheme: \"endpoint\"\nI0908 02:54:01.963937       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:4001  <nil> 0 <nil>}]\nI0908 02:54:01.964936       1 cacher.go:405] cacher (*batch.CronJob): initialized\nI0908 02:54:01.969994       1 store.go:1428] Monitoring cronjobs.batch count at <storage-prefix>//cronjobs\nI0908 02:54:01.970081       1 instance.go:607] Enabling API group \"batch\".\nI0908 02:54:01.970488       1 client.go:360] parsed scheme: \"endpoint\"\nI0908 02:54:01.970507       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:4001  <nil> 0 <nil>}]\nI0908 02:54:01.971565       1 cacher.go:405] cacher (*batch.CronJob): initialized\nI0908 02:54:01.979711       1 store.go:1428] Monitoring certificatesigningrequests.certificates.k8s.io count at <storage-prefix>//certificatesigningrequests\nI0908 02:54:01.980082       1 client.go:360] parsed scheme: \"endpoint\"\nI0908 02:54:01.980095       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:4001  <nil> 0 <nil>}]\nI0908 02:54:01.981268       1 cacher.go:405] cacher (*certificates.CertificateSigningRequest): initialized\nI0908 02:54:01.985892       1 store.go:1428] Monitoring certificatesigningrequests.certificates.k8s.io count at <storage-prefix>//certificatesigningrequests\nI0908 02:54:01.985947       1 instance.go:607] Enabling API group \"certificates.k8s.io\".\nI0908 02:54:01.986328       1 client.go:360] parsed scheme: \"endpoint\"\nI0908 02:54:01.986347       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:4001  <nil> 0 <nil>}]\nI0908 02:54:01.987238       1 cacher.go:405] cacher (*certificates.CertificateSigningRequest): initialized\nI0908 02:54:01.992124       1 store.go:1428] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases\nI0908 02:54:01.992505       1 client.go:360] parsed scheme: \"endpoint\"\nI0908 02:54:01.992523       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:4001  <nil> 0 <nil>}]\nI0908 02:54:01.993695       1 cacher.go:405] cacher (*coordination.Lease): initialized\nI0908 02:54:01.998259       1 store.go:1428] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases\nI0908 02:54:01.998297       1 instance.go:607] Enabling API group \"coordination.k8s.io\".\nI0908 02:54:01.999089       1 client.go:360] parsed scheme: \"endpoint\"\nI0908 02:54:01.999118       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:4001  <nil> 0 <nil>}]\nI0908 02:54:02.006185       1 cacher.go:405] cacher (*coordination.Lease): initialized\nI0908 02:54:02.013277       1 store.go:1428] Monitoring endpointslices.discovery.k8s.io count at 
<storage-prefix>//endpointslices\nI0908 02:54:02.013883       1 client.go:360] parsed scheme: \"endpoint\"\nI0908 02:54:02.013909       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:4001  <nil> 0 <nil>}]\nI0908 02:54:02.014120       1 cacher.go:405] cacher (*discovery.EndpointSlice): initialized\nI0908 02:54:02.021240       1 store.go:1428] Monitoring endpointslices.discovery.k8s.io count at <storage-prefix>//endpointslices\nI0908 02:54:02.021285       1 instance.go:607] Enabling API group \"discovery.k8s.io\".\nI0908 02:54:02.021699       1 client.go:360] parsed scheme: \"endpoint\"\nI0908 02:54:02.021719       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:4001  <nil> 0 <nil>}]\nI0908 02:54:02.024766       1 cacher.go:405] cacher (*discovery.EndpointSlice): initialized\nI0908 02:54:02.027733       1 store.go:1428] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress\nI0908 02:54:02.027785       1 instance.go:607] Enabling API group \"extensions\".\nI0908 02:54:02.028212       1 client.go:360] parsed scheme: \"endpoint\"\nI0908 02:54:02.028236       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:4001  <nil> 0 <nil>}]\nI0908 02:54:02.029225       1 cacher.go:405] cacher (*networking.Ingress): initialized\nI0908 02:54:02.033912       1 store.go:1428] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies\nI0908 02:54:02.034417       1 client.go:360] parsed scheme: \"endpoint\"\nI0908 02:54:02.034437       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:4001  <nil> 0 <nil>}]\nI0908 02:54:02.034867       1 cacher.go:405] cacher (*networking.NetworkPolicy): initialized\nI0908 02:54:02.040275       1 store.go:1428] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress\nI0908 02:54:02.040682       1 client.go:360] parsed scheme: \"endpoint\"\nI0908 02:54:02.040702       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:4001  <nil> 0 <nil>}]\nI0908 02:54:02.041659       1 cacher.go:405] cacher (*networking.Ingress): initialized\nI0908 02:54:02.046531       1 store.go:1428] Monitoring ingressclasses.networking.k8s.io count at <storage-prefix>//ingressclasses\nI0908 02:54:02.046996       1 client.go:360] parsed scheme: \"endpoint\"\nI0908 02:54:02.047013       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:4001  <nil> 0 <nil>}]\nI0908 02:54:02.047974       1 cacher.go:405] cacher (*networking.IngressClass): initialized\nI0908 02:54:02.052765       1 store.go:1428] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress\nI0908 02:54:02.053181       1 client.go:360] parsed scheme: \"endpoint\"\nI0908 02:54:02.053203       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:4001  <nil> 0 <nil>}]\nI0908 02:54:02.054166       1 cacher.go:405] cacher (*networking.Ingress): initialized\nI0908 02:54:02.060001       1 store.go:1428] Monitoring ingressclasses.networking.k8s.io count at <storage-prefix>//ingressclasses\nI0908 02:54:02.060073       1 instance.go:607] Enabling API group \"networking.k8s.io\".\nI0908 02:54:02.060611       1 client.go:360] parsed scheme: \"endpoint\"\nI0908 02:54:02.060632       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:4001  <nil> 0 <nil>}]\nI0908 02:54:02.061754       1 cacher.go:405] 
cacher (*networking.IngressClass): initialized\nI0908 02:54:02.066168       1 store.go:1428] Monitoring runtimeclasses.node.k8s.io count at <storage-prefix>//runtimeclasses\nI0908 02:54:02.066554       1 client.go:360] parsed scheme: \"endpoint\"\nI0908 02:54:02.066576       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:4001  <nil> 0 <nil>}]\nI0908 02:54:02.067557       1 cacher.go:405] cacher (*node.RuntimeClass): initialized\nI0908 02:54:02.076120       1 store.go:1428] Monitoring runtimeclasses.node.k8s.io count at <storage-prefix>//runtimeclasses\nI0908 02:54:02.076157       1 instance.go:607] Enabling API group \"node.k8s.io\".\nI0908 02:54:02.076600       1 client.go:360] parsed scheme: \"endpoint\"\nI0908 02:54:02.076616       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:4001  <nil> 0 <nil>}]\nI0908 02:54:02.079368       1 cacher.go:405] cacher (*node.RuntimeClass): initialized\nI0908 02:54:02.082303       1 store.go:1428] Monitoring poddisruptionbudgets.policy count at <storage-prefix>//poddisruptionbudgets\nI0908 02:54:02.082667       1 client.go:360] parsed scheme: \"endpoint\"\nI0908 02:54:02.082687       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:4001  <nil> 0 <nil>}]\nI0908 02:54:02.083676       1 cacher.go:405] cacher (*policy.PodDisruptionBudget): initialized\nI0908 02:54:02.088535       1 store.go:1428] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicy\nI0908 02:54:02.088902       1 client.go:360] parsed scheme: \"endpoint\"\nI0908 02:54:02.088922       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:4001  <nil> 0 <nil>}]\nI0908 02:54:02.089926       1 cacher.go:405] cacher (*policy.PodSecurityPolicy): initialized\nI0908 02:54:02.097802       1 store.go:1428] Monitoring poddisruptionbudgets.policy count at <storage-prefix>//poddisruptionbudgets\nI0908 02:54:02.097869       1 instance.go:607] Enabling API group \"policy\".\nI0908 02:54:02.098137       1 client.go:360] parsed scheme: \"endpoint\"\nI0908 02:54:02.098156       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:4001  <nil> 0 <nil>}]\nI0908 02:54:02.100921       1 cacher.go:405] cacher (*policy.PodDisruptionBudget): initialized\nI0908 02:54:02.103863       1 store.go:1428] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles\nI0908 02:54:02.104230       1 client.go:360] parsed scheme: \"endpoint\"\nI0908 02:54:02.104250       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:4001  <nil> 0 <nil>}]\nI0908 02:54:02.109176       1 cacher.go:405] cacher (*rbac.Role): initialized\nI0908 02:54:02.109687       1 store.go:1428] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings\nI0908 02:54:02.110042       1 client.go:360] parsed scheme: \"endpoint\"\nI0908 02:54:02.110054       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:4001  <nil> 0 <nil>}]\nI0908 02:54:02.111970       1 cacher.go:405] cacher (*rbac.RoleBinding): initialized\nI0908 02:54:02.115587       1 store.go:1428] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles\nI0908 02:54:02.115993       1 client.go:360] parsed scheme: \"endpoint\"\nI0908 02:54:02.116011       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:4001  <nil> 
0 <nil>}]\nI0908 02:54:02.116749       1 cacher.go:405] cacher (*rbac.ClusterRole): initialized\nI0908 02:54:02.121909       1 store.go:1428] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings\nI0908 02:54:02.122215       1 client.go:360] parsed scheme: \"endpoint\"\nI0908 02:54:02.122235       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:4001  <nil> 0 <nil>}]\nI0908 02:54:02.123177       1 cacher.go:405] cacher (*rbac.ClusterRoleBinding): initialized\nI0908 02:54:02.127971       1 store.go:1428] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles\nI0908 02:54:02.128313       1 client.go:360] parsed scheme: \"endpoint\"\nI0908 02:54:02.128333       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:4001  <nil> 0 <nil>}]\nI0908 02:54:02.129394       1 cacher.go:405] cacher (*rbac.Role): initialized\nI0908 02:54:02.133930       1 store.go:1428] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings\nI0908 02:54:02.134206       1 client.go:360] parsed scheme: \"endpoint\"\nI0908 02:54:02.134226       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:4001  <nil> 0 <nil>}]\nI0908 02:54:02.135133       1 cacher.go:405] cacher (*rbac.RoleBinding): initialized\nI0908 02:54:02.143456       1 store.go:1428] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles\nI0908 02:54:02.143836       1 client.go:360] parsed scheme: \"endpoint\"\nI0908 02:54:02.143854       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:4001  <nil> 0 <nil>}]\nI0908 02:54:02.146610       1 cacher.go:405] cacher (*rbac.ClusterRole): initialized\nI0908 02:54:02.149696       1 store.go:1428] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings\nI0908 02:54:02.149789       1 instance.go:607] Enabling API group \"rbac.authorization.k8s.io\".\nI0908 02:54:02.151444       1 client.go:360] parsed scheme: \"endpoint\"\nI0908 02:54:02.151467       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:4001  <nil> 0 <nil>}]\nI0908 02:54:02.151633       1 cacher.go:405] cacher (*rbac.ClusterRoleBinding): initialized\nI0908 02:54:02.162165       1 store.go:1428] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses\nI0908 02:54:02.162547       1 client.go:360] parsed scheme: \"endpoint\"\nI0908 02:54:02.162567       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:4001  <nil> 0 <nil>}]\nI0908 02:54:02.163477       1 cacher.go:405] cacher (*scheduling.PriorityClass): initialized\nI0908 02:54:02.168207       1 store.go:1428] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses\nI0908 02:54:02.168245       1 instance.go:607] Enabling API group \"scheduling.k8s.io\".\nI0908 02:54:02.168682       1 client.go:360] parsed scheme: \"endpoint\"\nI0908 02:54:02.168702       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:4001  <nil> 0 <nil>}]\nI0908 02:54:02.169629       1 cacher.go:405] cacher (*scheduling.PriorityClass): initialized\nI0908 02:54:02.174418       1 store.go:1428] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses\nI0908 02:54:02.174750       1 client.go:360] parsed scheme: 
\"endpoint\"\nI0908 02:54:02.174770       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:4001  <nil> 0 <nil>}]\nI0908 02:54:02.179665       1 cacher.go:405] cacher (*storage.StorageClass): initialized\nI0908 02:54:02.180219       1 store.go:1428] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments\nI0908 02:54:02.180640       1 client.go:360] parsed scheme: \"endpoint\"\nI0908 02:54:02.180652       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:4001  <nil> 0 <nil>}]\nI0908 02:54:02.183769       1 cacher.go:405] cacher (*storage.VolumeAttachment): initialized\nI0908 02:54:02.186613       1 store.go:1428] Monitoring csinodes.storage.k8s.io count at <storage-prefix>//csinodes\nI0908 02:54:02.187062       1 client.go:360] parsed scheme: \"endpoint\"\nI0908 02:54:02.187080       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:4001  <nil> 0 <nil>}]\nI0908 02:54:02.187593       1 cacher.go:405] cacher (*storage.CSINode): initialized\nI0908 02:54:02.192711       1 store.go:1428] Monitoring csidrivers.storage.k8s.io count at <storage-prefix>//csidrivers\nI0908 02:54:02.192983       1 client.go:360] parsed scheme: \"endpoint\"\nI0908 02:54:02.193002       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:4001  <nil> 0 <nil>}]\nI0908 02:54:02.194013       1 cacher.go:405] cacher (*storage.CSIDriver): initialized\nI0908 02:54:02.199013       1 store.go:1428] Monitoring csistoragecapacities.storage.k8s.io count at <storage-prefix>//csistoragecapacities\nI0908 02:54:02.199315       1 client.go:360] parsed scheme: \"endpoint\"\nI0908 02:54:02.199335       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:4001  <nil> 0 <nil>}]\nI0908 02:54:02.200361       1 cacher.go:405] cacher (*storage.CSIStorageCapacity): initialized\nI0908 02:54:02.205476       1 store.go:1428] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses\nI0908 02:54:02.205807       1 client.go:360] parsed scheme: \"endpoint\"\nI0908 02:54:02.205845       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:4001  <nil> 0 <nil>}]\nI0908 02:54:02.206812       1 cacher.go:405] cacher (*storage.StorageClass): initialized\nI0908 02:54:02.211707       1 store.go:1428] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments\nI0908 02:54:02.212053       1 client.go:360] parsed scheme: \"endpoint\"\nI0908 02:54:02.212065       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:4001  <nil> 0 <nil>}]\nI0908 02:54:02.213078       1 cacher.go:405] cacher (*storage.VolumeAttachment): initialized\nI0908 02:54:02.218165       1 store.go:1428] Monitoring csinodes.storage.k8s.io count at <storage-prefix>//csinodes\nI0908 02:54:02.218510       1 client.go:360] parsed scheme: \"endpoint\"\nI0908 02:54:02.218532       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:4001  <nil> 0 <nil>}]\nI0908 02:54:02.219814       1 cacher.go:405] cacher (*storage.CSINode): initialized\nI0908 02:54:02.229223       1 store.go:1428] Monitoring csidrivers.storage.k8s.io count at <storage-prefix>//csidrivers\nI0908 02:54:02.229325       1 instance.go:607] Enabling API group \"storage.k8s.io\".\nI0908 02:54:02.229653       1 client.go:360] parsed scheme: \"endpoint\"\nI0908 02:54:02.229679       
1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:4001  <nil> 0 <nil>}]\nI0908 02:54:02.230640       1 cacher.go:405] cacher (*storage.CSIDriver): initialized\nI0908 02:54:02.235487       1 store.go:1428] Monitoring flowschemas.flowcontrol.apiserver.k8s.io count at <storage-prefix>//flowschemas\nI0908 02:54:02.235890       1 client.go:360] parsed scheme: \"endpoint\"\nI0908 02:54:02.235910       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:4001  <nil> 0 <nil>}]\nI0908 02:54:02.236883       1 cacher.go:405] cacher (*flowcontrol.FlowSchema): initialized\nI0908 02:54:02.241708       1 store.go:1428] Monitoring prioritylevelconfigurations.flowcontrol.apiserver.k8s.io count at <storage-prefix>//prioritylevelconfigurations\nI0908 02:54:02.241765       1 instance.go:607] Enabling API group \"flowcontrol.apiserver.k8s.io\".\nI0908 02:54:02.242198       1 client.go:360] parsed scheme: \"endpoint\"\nI0908 02:54:02.242224       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:4001  <nil> 0 <nil>}]\nI0908 02:54:02.243219       1 cacher.go:405] cacher (*flowcontrol.PriorityLevelConfiguration): initialized\nI0908 02:54:02.248310       1 store.go:1428] Monitoring deployments.apps count at <storage-prefix>//deployments\nI0908 02:54:02.248691       1 client.go:360] parsed scheme: \"endpoint\"\nI0908 02:54:02.248710       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:4001  <nil> 0 <nil>}]\nI0908 02:54:02.249896       1 cacher.go:405] cacher (*apps.Deployment): initialized\nI0908 02:54:02.254669       1 store.go:1428] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets\nI0908 02:54:02.255132       1 client.go:360] parsed scheme: \"endpoint\"\nI0908 02:54:02.255160       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:4001  <nil> 0 <nil>}]\nI0908 02:54:02.255588       1 cacher.go:405] cacher (*apps.StatefulSet): initialized\nI0908 02:54:02.261348       1 store.go:1428] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets\nI0908 02:54:02.261689       1 client.go:360] parsed scheme: \"endpoint\"\nI0908 02:54:02.261710       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:4001  <nil> 0 <nil>}]\nI0908 02:54:02.262654       1 cacher.go:405] cacher (*apps.DaemonSet): initialized\nI0908 02:54:02.267485       1 store.go:1428] Monitoring replicasets.apps count at <storage-prefix>//replicasets\nI0908 02:54:02.267858       1 client.go:360] parsed scheme: \"endpoint\"\nI0908 02:54:02.267877       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:4001  <nil> 0 <nil>}]\nI0908 02:54:02.272629       1 cacher.go:405] cacher (*apps.ReplicaSet): initialized\nI0908 02:54:02.273135       1 store.go:1428] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions\nI0908 02:54:02.273266       1 instance.go:607] Enabling API group \"apps\".\nI0908 02:54:02.273646       1 client.go:360] parsed scheme: \"endpoint\"\nI0908 02:54:02.273665       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:4001  <nil> 0 <nil>}]\nI0908 02:54:02.274619       1 cacher.go:405] cacher (*apps.ControllerRevision): initialized\nI0908 02:54:02.282354       1 store.go:1428] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations\nI0908 
02:54:02.287570       1 client.go:360] parsed scheme: \"endpoint\"\nI0908 02:54:02.287597       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:4001  <nil> 0 <nil>}]\nI0908 02:54:02.288972       1 cacher.go:405] cacher (*admissionregistration.ValidatingWebhookConfiguration): initialized\nI0908 02:54:02.300954       1 store.go:1428] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations\nI0908 02:54:02.301312       1 client.go:360] parsed scheme: \"endpoint\"\nI0908 02:54:02.301332       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:4001  <nil> 0 <nil>}]\nI0908 02:54:02.302314       1 cacher.go:405] cacher (*admissionregistration.MutatingWebhookConfiguration): initialized\nI0908 02:54:02.307312       1 store.go:1428] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations\nI0908 02:54:02.307673       1 client.go:360] parsed scheme: \"endpoint\"\nI0908 02:54:02.307695       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:4001  <nil> 0 <nil>}]\nI0908 02:54:02.308618       1 cacher.go:405] cacher (*admissionregistration.ValidatingWebhookConfiguration): initialized\nI0908 02:54:02.313481       1 store.go:1428] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations\nI0908 02:54:02.313546       1 instance.go:607] Enabling API group \"admissionregistration.k8s.io\".\nI0908 02:54:02.313934       1 client.go:360] parsed scheme: \"endpoint\"\nI0908 02:54:02.313954       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:4002  <nil> 0 <nil>}]\nI0908 02:54:02.314879       1 cacher.go:405] cacher (*admissionregistration.MutatingWebhookConfiguration): initialized\nI0908 02:54:02.319905       1 store.go:1428] Monitoring events count at <storage-prefix>//events\nI0908 02:54:02.320260       1 client.go:360] parsed scheme: \"endpoint\"\nI0908 02:54:02.320281       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:4002  <nil> 0 <nil>}]\nI0908 02:54:02.325738       1 store.go:1428] Monitoring events count at <storage-prefix>//events\nI0908 02:54:02.325785       1 instance.go:607] Enabling API group \"events.k8s.io\".\nW0908 02:54:02.490496       1 genericapiserver.go:425] Skipping API node.k8s.io/v1alpha1 because it has no resources.\nW0908 02:54:02.499062       1 genericapiserver.go:425] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.\nW0908 02:54:02.501955       1 genericapiserver.go:425] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.\nW0908 02:54:02.507119       1 genericapiserver.go:425] Skipping API storage.k8s.io/v1alpha1 because it has no resources.\nW0908 02:54:02.509196       1 genericapiserver.go:425] Skipping API flowcontrol.apiserver.k8s.io/v1alpha1 because it has no resources.\nW0908 02:54:02.513083       1 genericapiserver.go:425] Skipping API apps/v1beta2 because it has no resources.\nW0908 02:54:02.513100       1 genericapiserver.go:425] Skipping API apps/v1beta1 because it has no resources.\nW0908 02:54:02.520342       1 admission.go:78] PersistentVolumeLabel admission controller is deprecated. 
Please remove this controller from your configuration files and scripts.\nI0908 02:54:02.520756       1 plugins.go:158] Loaded 13 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,PersistentVolumeLabel,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.\nI0908 02:54:02.520766       1 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.\nI0908 02:54:02.522237       1 client.go:360] parsed scheme: \"endpoint\"\nI0908 02:54:02.522262       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:4001  <nil> 0 <nil>}]\nI0908 02:54:02.527690       1 store.go:1428] Monitoring apiservices.apiregistration.k8s.io count at <storage-prefix>//apiregistration.k8s.io/apiservices\nI0908 02:54:02.528101       1 client.go:360] parsed scheme: \"endpoint\"\nI0908 02:54:02.528126       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:4001  <nil> 0 <nil>}]\nI0908 02:54:02.528687       1 cacher.go:405] cacher (*apiregistration.APIService): initialized\nI0908 02:54:02.533925       1 store.go:1428] Monitoring apiservices.apiregistration.k8s.io count at <storage-prefix>//apiregistration.k8s.io/apiservices\nI0908 02:54:02.535653       1 cacher.go:405] cacher (*apiregistration.APIService): initialized\nI0908 02:54:02.547005       1 dynamic_serving_content.go:111] Loaded a new cert/key pair for \"aggregator-proxy-cert::/srv/kubernetes/kube-apiserver/apiserver-aggregator.crt::/srv/kubernetes/kube-apiserver/apiserver-aggregator.key\"\nI0908 02:54:04.017521       1 aggregator.go:109] Building initial OpenAPI spec\nI0908 02:54:04.577455       1 aggregator.go:112] Finished initial OpenAPI spec generation after 559.904892ms\nI0908 02:54:04.577785       1 dynamic_cafile_content.go:167] Starting request-header::/srv/kubernetes/kube-apiserver/apiserver-aggregator-ca.crt\nI0908 02:54:04.577834       1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/srv/kubernetes/ca.crt\nI0908 02:54:04.578057       1 dynamic_serving_content.go:130] Starting serving-cert::/srv/kubernetes/kube-apiserver/server.crt::/srv/kubernetes/kube-apiserver/server.key\nI0908 02:54:04.578394       1 tlsconfig.go:178] loaded client CA [0/\"client-ca-bundle::/srv/kubernetes/ca.crt,request-header::/srv/kubernetes/kube-apiserver/apiserver-aggregator-ca.crt\"]: \"kubernetes-ca\" [] issuer=\"<self>\" (2021-09-06 02:50:16 +0000 UTC to 2031-09-06 02:50:16 +0000 UTC (now=2021-09-08 02:54:04.577982812 +0000 UTC))\nI0908 02:54:04.578429       1 tlsconfig.go:178] loaded client CA [1/\"client-ca-bundle::/srv/kubernetes/ca.crt,request-header::/srv/kubernetes/kube-apiserver/apiserver-aggregator-ca.crt\"]: \"apiserver-aggregator-ca\" [] issuer=\"<self>\" (2021-09-06 02:50:16 +0000 UTC to 2031-09-06 02:50:16 +0000 UTC (now=2021-09-08 02:54:04.578419253 +0000 UTC))\nI0908 02:54:04.578608       1 tlsconfig.go:200] loaded serving cert [\"serving-cert::/srv/kubernetes/kube-apiserver/server.crt::/srv/kubernetes/kube-apiserver/server.key\"]: \"kubernetes-master\" [serving] 
validServingFor=[100.64.0.1,127.0.0.1,kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster.local,api.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io,api.internal.e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io] issuer=\"kubernetes-ca\" (2021-09-06 02:51:45 +0000 UTC to 2022-12-13 11:51:45 +0000 UTC (now=2021-09-08 02:54:04.578600567 +0000 UTC))\nI0908 02:54:04.578763       1 named_certificates.go:53] loaded SNI cert [0/\"self-signed loopback\"]: \"apiserver-loopback-client@1631069632\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1631069631\" (2021-09-08 01:53:51 +0000 UTC to 2022-09-08 01:53:51 +0000 UTC (now=2021-09-08 02:54:04.578756381 +0000 UTC))\nI0908 02:54:04.578835       1 secure_serving.go:197] Serving securely on [::]:443\nI0908 02:54:04.578863       1 tlsconfig.go:240] Starting DynamicServingCertificateController\nI0908 02:54:04.581834       1 apf_controller.go:294] Starting API Priority and Fairness config controller\nI0908 02:54:04.581893       1 dynamic_serving_content.go:130] Starting aggregator-proxy-cert::/srv/kubernetes/kube-apiserver/apiserver-aggregator.crt::/srv/kubernetes/kube-apiserver/apiserver-aggregator.key\nI0908 02:54:04.581926       1 apiservice_controller.go:97] Starting APIServiceRegistrationController\nI0908 02:54:04.581934       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller\nI0908 02:54:04.581951       1 controller.go:83] Starting OpenAPI AggregationController\nI0908 02:54:04.582013       1 customresource_discovery_controller.go:209] Starting DiscoveryController\nI0908 02:54:04.601963       1 available_controller.go:475] Starting AvailableConditionController\nI0908 02:54:04.601974       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller\nI0908 02:54:04.602003       1 autoregister_controller.go:141] Starting autoregister controller\nI0908 02:54:04.602007       1 cache.go:32] Waiting for caches to sync for autoregister controller\nI0908 02:54:04.626494       1 controller.go:86] Starting OpenAPI controller\nI0908 02:54:04.626533       1 naming_controller.go:291] Starting NamingConditionController\nI0908 02:54:04.626554       1 establishing_controller.go:76] Starting EstablishingController\nI0908 02:54:04.626580       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController\nI0908 02:54:04.626596       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController\nI0908 02:54:04.626613       1 crd_finalizer.go:266] Starting CRDFinalizer\nI0908 02:54:04.652125       1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/srv/kubernetes/ca.crt\nI0908 02:54:04.652154       1 dynamic_cafile_content.go:167] Starting request-header::/srv/kubernetes/kube-apiserver/apiserver-aggregator-ca.crt\nI0908 02:54:04.652184       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller\nI0908 02:54:04.652193       1 shared_informer.go:240] Waiting for caches to sync for cluster_authentication_trust_controller\nI0908 02:54:04.669759       1 crdregistration_controller.go:111] Starting crd-autoregister controller\nI0908 02:54:04.669769       1 shared_informer.go:240] Waiting for caches to sync for crd-autoregister\nE0908 02:54:04.693602       1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/172.20.56.43, ResourceVersion: 
0, AdditionalErrorMsg: \nI0908 02:54:04.739339       1 apf_controller.go:299] Running API Priority and Fairness config worker\nI0908 02:54:04.739371       1 cache.go:39] Caches are synced for AvailableConditionController controller\nI0908 02:54:04.753174       1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller \nI0908 02:54:04.753516       1 cluster_authentication_trust_controller.go:165] writing updated authentication info to  kube-system configmaps/extension-apiserver-authentication\nI0908 02:54:04.754874       1 shared_informer.go:247] Caches are synced for node_authorizer \nI0908 02:54:04.759082       1 controller.go:611] quota admission added evaluator for: namespaces\nI0908 02:54:04.776167       1 shared_informer.go:247] Caches are synced for crd-autoregister \nI0908 02:54:04.783969       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller\nI0908 02:54:04.784508       1 cacher.go:800] cacher (*flowcontrol.FlowSchema): 1 objects queued in incoming channel.\nI0908 02:54:04.790068       1 cacher.go:800] cacher (*flowcontrol.FlowSchema): 2 objects queued in incoming channel.\nI0908 02:54:04.787916       1 healthz.go:244] poststarthook/rbac/bootstrap-roles,poststarthook/scheduling/bootstrap-system-priority-classes,poststarthook/apiservice-registration-controller,autoregister-completion check failed: readyz\n[-]poststarthook/rbac/bootstrap-roles failed: not finished\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished\n[-]poststarthook/apiservice-registration-controller failed: not finished\n[-]autoregister-completion failed: missing APIService: [v1. v1.admissionregistration.k8s.io v1.apiextensions.k8s.io v1.apps v1.authentication.k8s.io v1.authorization.k8s.io v1.autoscaling v1.batch v1.certificates.k8s.io v1.coordination.k8s.io v1.discovery.k8s.io v1.events.k8s.io v1.networking.k8s.io v1.node.k8s.io v1.policy v1.rbac.authorization.k8s.io v1.scheduling.k8s.io v1.storage.k8s.io v1beta1.admissionregistration.k8s.io v1beta1.apiextensions.k8s.io v1beta1.authentication.k8s.io v1beta1.authorization.k8s.io v1beta1.batch v1beta1.certificates.k8s.io v1beta1.coordination.k8s.io v1beta1.discovery.k8s.io v1beta1.events.k8s.io v1beta1.extensions v1beta1.flowcontrol.apiserver.k8s.io v1beta1.networking.k8s.io v1beta1.node.k8s.io v1beta1.policy v1beta1.rbac.authorization.k8s.io v1beta1.scheduling.k8s.io v1beta1.storage.k8s.io v2beta1.autoscaling v2beta2.autoscaling]\nI0908 02:54:04.802044       1 cache.go:39] Caches are synced for autoregister controller\nI0908 02:54:04.814513       1 cacher.go:800] cacher (*apiregistration.APIService): 1 objects queued in incoming channel.\nI0908 02:54:04.814528       1 cacher.go:800] cacher (*apiregistration.APIService): 2 objects queued in incoming channel.\nI0908 02:54:04.814712       1 cacher.go:800] cacher (*apiregistration.APIService): 1 objects queued in incoming channel.\nI0908 02:54:04.814724       1 cacher.go:800] cacher (*apiregistration.APIService): 2 objects queued in incoming channel.\nI0908 02:54:04.814731       1 cacher.go:800] cacher (*apiregistration.APIService): 3 objects queued in incoming channel.\nI0908 02:54:04.814736       1 cacher.go:800] cacher (*apiregistration.APIService): 4 objects queued in incoming channel.\nI0908 02:54:04.832938       1 cacher.go:800] cacher (*apiregistration.APIService): 3 objects queued in incoming channel.\nI0908 02:54:04.832951       1 cacher.go:800] cacher (*apiregistration.APIService): 4 objects queued in incoming 
channel.\nI0908 02:54:04.894335       1 healthz.go:244] poststarthook/rbac/bootstrap-roles,poststarthook/scheduling/bootstrap-system-priority-classes check failed: readyz\n[-]poststarthook/rbac/bootstrap-roles failed: not finished\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished\nI0908 02:54:04.991921       1 healthz.go:244] poststarthook/rbac/bootstrap-roles,poststarthook/scheduling/bootstrap-system-priority-classes check failed: readyz\n[-]poststarthook/rbac/bootstrap-roles failed: not finished\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished\nI0908 02:54:05.092554       1 healthz.go:244] poststarthook/rbac/bootstrap-roles,poststarthook/scheduling/bootstrap-system-priority-classes check failed: readyz\n[-]poststarthook/rbac/bootstrap-roles failed: not finished\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished\nI0908 02:54:05.197722       1 healthz.go:244] poststarthook/rbac/bootstrap-roles,poststarthook/scheduling/bootstrap-system-priority-classes check failed: readyz\n[-]poststarthook/rbac/bootstrap-roles failed: not finished\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished\nI0908 02:54:05.292313       1 healthz.go:244] poststarthook/rbac/bootstrap-roles,poststarthook/scheduling/bootstrap-system-priority-classes check failed: readyz\n[-]poststarthook/rbac/bootstrap-roles failed: not finished\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished\nI0908 02:54:05.391783       1 healthz.go:244] poststarthook/rbac/bootstrap-roles,poststarthook/scheduling/bootstrap-system-priority-classes check failed: readyz\n[-]poststarthook/rbac/bootstrap-roles failed: not finished\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished\nI0908 02:54:05.492600       1 healthz.go:244] poststarthook/rbac/bootstrap-roles,poststarthook/scheduling/bootstrap-system-priority-classes check failed: readyz\n[-]poststarthook/rbac/bootstrap-roles failed: not finished\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished\nI0908 02:54:05.578640       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).\nI0908 02:54:05.578660       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).\nI0908 02:54:05.596410       1 storage_rbac.go:236] created clusterrole.rbac.authorization.k8s.io/cluster-admin\nI0908 02:54:05.599799       1 healthz.go:244] poststarthook/rbac/bootstrap-roles,poststarthook/scheduling/bootstrap-system-priority-classes check failed: readyz\n[-]poststarthook/rbac/bootstrap-roles failed: not finished\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished\nI0908 02:54:05.604404       1 storage_rbac.go:236] created clusterrole.rbac.authorization.k8s.io/system:discovery\nI0908 02:54:05.608153       1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000\nI0908 02:54:05.609192       1 storage_rbac.go:236] created clusterrole.rbac.authorization.k8s.io/system:monitoring\nI0908 02:54:05.611879       1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000\nI0908 02:54:05.611895       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.\nI0908 02:54:05.612282       1 storage_rbac.go:236] created 
clusterrole.rbac.authorization.k8s.io/system:basic-user\nI0908 02:54:05.615066       1 storage_rbac.go:236] created clusterrole.rbac.authorization.k8s.io/system:public-info-viewer\nI0908 02:54:05.617311       1 storage_rbac.go:236] created clusterrole.rbac.authorization.k8s.io/admin\nI0908 02:54:05.619501       1 storage_rbac.go:236] created clusterrole.rbac.authorization.k8s.io/edit\nI0908 02:54:05.621800       1 storage_rbac.go:236] created clusterrole.rbac.authorization.k8s.io/view\nI0908 02:54:05.624045       1 storage_rbac.go:236] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin\nI0908 02:54:05.627949       1 storage_rbac.go:236] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit\nI0908 02:54:05.630263       1 storage_rbac.go:236] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view\nI0908 02:54:05.633471       1 storage_rbac.go:236] created clusterrole.rbac.authorization.k8s.io/system:heapster\nI0908 02:54:05.636013       1 storage_rbac.go:236] created clusterrole.rbac.authorization.k8s.io/system:node\nI0908 02:54:05.638295       1 storage_rbac.go:236] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector\nI0908 02:54:05.641548       1 storage_rbac.go:236] created clusterrole.rbac.authorization.k8s.io/system:kubelet-api-admin\nI0908 02:54:05.644032       1 storage_rbac.go:236] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper\nI0908 02:54:05.647894       1 storage_rbac.go:236] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator\nI0908 02:54:05.650117       1 storage_rbac.go:236] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator\nI0908 02:54:05.652391       1 storage_rbac.go:236] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager\nI0908 02:54:05.655642       1 storage_rbac.go:236] created clusterrole.rbac.authorization.k8s.io/system:kube-dns\nI0908 02:54:05.657892       1 storage_rbac.go:236] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner\nI0908 02:54:05.660159       1 storage_rbac.go:236] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient\nI0908 02:54:05.662396       1 storage_rbac.go:236] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient\nI0908 02:54:05.664621       1 storage_rbac.go:236] created clusterrole.rbac.authorization.k8s.io/system:volume-scheduler\nI0908 02:54:05.669294       1 storage_rbac.go:236] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:legacy-unknown-approver\nI0908 02:54:05.671523       1 storage_rbac.go:236] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:kubelet-serving-approver\nI0908 02:54:05.677861       1 storage_rbac.go:236] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:kube-apiserver-client-approver\nI0908 02:54:05.687269       1 storage_rbac.go:236] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:kube-apiserver-client-kubelet-approver\nI0908 02:54:05.717363       1 healthz.go:244] poststarthook/rbac/bootstrap-roles check failed: readyz\n[-]poststarthook/rbac/bootstrap-roles failed: not finished\nI0908 02:54:05.717366       1 storage_rbac.go:236] created clusterrole.rbac.authorization.k8s.io/system:service-account-issuer-discovery\nI0908 02:54:05.743957       1 storage_rbac.go:236] created 
clusterrole.rbac.authorization.k8s.io/system:node-proxier\nI0908 02:54:05.746695       1 storage_rbac.go:236] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler\nI0908 02:54:05.759342       1 storage_rbac.go:236] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller\nI0908 02:54:05.762325       1 storage_rbac.go:236] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller\nI0908 02:54:05.764742       1 storage_rbac.go:236] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller\nI0908 02:54:05.767136       1 storage_rbac.go:236] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller\nI0908 02:54:05.769320       1 storage_rbac.go:236] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller\nI0908 02:54:05.772479       1 storage_rbac.go:236] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller\nI0908 02:54:05.774704       1 storage_rbac.go:236] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller\nI0908 02:54:05.776872       1 storage_rbac.go:236] created clusterrole.rbac.authorization.k8s.io/system:controller:endpointslice-controller\nI0908 02:54:05.779256       1 storage_rbac.go:236] created clusterrole.rbac.authorization.k8s.io/system:controller:endpointslicemirroring-controller\nI0908 02:54:05.782942       1 storage_rbac.go:236] created clusterrole.rbac.authorization.k8s.io/system:controller:expand-controller\nI0908 02:54:05.785738       1 storage_rbac.go:236] created clusterrole.rbac.authorization.k8s.io/system:controller:ephemeral-volume-controller\nI0908 02:54:05.788585       1 storage_rbac.go:236] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector\nI0908 02:54:05.790988       1 storage_rbac.go:236] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler\nI0908 02:54:05.791874       1 healthz.go:244] poststarthook/rbac/bootstrap-roles check failed: readyz\n[-]poststarthook/rbac/bootstrap-roles failed: not finished\nI0908 02:54:05.793557       1 storage_rbac.go:236] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller\nI0908 02:54:05.796901       1 storage_rbac.go:236] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller\nI0908 02:54:05.800861       1 storage_rbac.go:236] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller\nI0908 02:54:05.803273       1 storage_rbac.go:236] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder\nI0908 02:54:05.805500       1 storage_rbac.go:236] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector\nI0908 02:54:05.809343       1 storage_rbac.go:236] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller\nI0908 02:54:05.813718       1 storage_rbac.go:236] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller\nI0908 02:54:05.816000       1 storage_rbac.go:236] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller\nI0908 02:54:05.818288       1 storage_rbac.go:236] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller\nI0908 02:54:05.821655       1 storage_rbac.go:236] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller\nI0908 
02:54:05.824088       1 storage_rbac.go:236] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller
I0908 02:54:05.826284       1 storage_rbac.go:236] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0908 02:54:05.828523       1 storage_rbac.go:236] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller
I0908 02:54:05.830807       1 storage_rbac.go:236] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller
I0908 02:54:05.833626       1 storage_rbac.go:236] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0908 02:54:05.835922       1 storage_rbac.go:236] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0908 02:54:05.838144       1 storage_rbac.go:236] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-after-finished-controller
I0908 02:54:05.840327       1 storage_rbac.go:236] created clusterrole.rbac.authorization.k8s.io/system:controller:root-ca-cert-publisher
I0908 02:54:05.849291       1 storage_rbac.go:266] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
I0908 02:54:05.852900       1 storage_rbac.go:266] created clusterrolebinding.rbac.authorization.k8s.io/system:monitoring
I0908 02:54:05.855015       1 storage_rbac.go:266] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
I0908 02:54:05.857334       1 storage_rbac.go:266] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
I0908 02:54:05.859481       1 storage_rbac.go:266] created clusterrolebinding.rbac.authorization.k8s.io/system:public-info-viewer
I0908 02:54:05.862404       1 storage_rbac.go:266] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
I0908 02:54:05.864553       1 storage_rbac.go:266] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
I0908 02:54:05.866792       1 storage_rbac.go:266] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
I0908 02:54:05.869969       1 storage_rbac.go:266] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
I0908 02:54:05.873020       1 storage_rbac.go:266] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler
I0908 02:54:05.875276       1 storage_rbac.go:266] created clusterrolebinding.rbac.authorization.k8s.io/system:node
I0908 02:54:05.877382       1 storage_rbac.go:266] created clusterrolebinding.rbac.authorization.k8s.io/system:service-account-issuer-discovery
I0908 02:54:05.879572       1 storage_rbac.go:266] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0908 02:54:05.881738       1 storage_rbac.go:266] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0908 02:54:05.885100       1 storage_rbac.go:266] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0908 02:54:05.887287       1 storage_rbac.go:266] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0908 02:54:05.889608       1 storage_rbac.go:266] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I0908 02:54:05.891559       1 healthz.go:244] poststarthook/rbac/bootstrap-roles check failed: readyz
[-]poststarthook/rbac/bootstrap-roles failed: not finished
I0908 02:54:05.892208       1 storage_rbac.go:266] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I0908 02:54:05.895283       1 storage_rbac.go:266] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0908 02:54:05.897477       1 storage_rbac.go:266] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpointslice-controller
I0908 02:54:05.899707       1 storage_rbac.go:266] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpointslicemirroring-controller
I0908 02:54:05.901970       1 storage_rbac.go:266] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I0908 02:54:05.904644       1 storage_rbac.go:266] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ephemeral-volume-controller
I0908 02:54:05.906753       1 storage_rbac.go:266] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0908 02:54:05.908860       1 storage_rbac.go:266] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0908 02:54:05.911141       1 storage_rbac.go:266] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I0908 02:54:05.916524       1 storage_rbac.go:266] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I0908 02:54:05.918631       1 storage_rbac.go:266] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I0908 02:54:05.921034       1 storage_rbac.go:266] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0908 02:54:05.923305       1 storage_rbac.go:266] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0908 02:54:05.926335       1 storage_rbac.go:266] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0908 02:54:05.928553       1 storage_rbac.go:266] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I0908 02:54:05.930777       1 storage_rbac.go:266] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0908 02:54:05.938340       1 storage_rbac.go:266] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I0908 02:54:05.942309       1 storage_rbac.go:266] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I0908 02:54:05.945499       1 storage_rbac.go:266] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I0908 02:54:05.947749       1 storage_rbac.go:266] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0908 02:54:05.950080       1 storage_rbac.go:266] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I0908 02:54:05.952168       1 storage_rbac.go:266] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I0908 02:54:05.955782       1 storage_rbac.go:266] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0908 02:54:05.957988       1 storage_rbac.go:266] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0908 02:54:05.960213       1 storage_rbac.go:266] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-after-finished-controller
I0908 02:54:05.962540       1 storage_rbac.go:266] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:root-ca-cert-publisher
I0908 02:54:05.965528       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0908 02:54:05.966518       1 storage_rbac.go:299] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
I0908 02:54:05.969629       1 storage_rbac.go:299] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0908 02:54:05.972684       1 storage_rbac.go:299] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0908 02:54:05.975788       1 storage_rbac.go:299] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0908 02:54:05.978743       1 storage_rbac.go:299] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0908 02:54:05.982710       1 storage_rbac.go:299] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0908 02:54:05.985831       1 storage_rbac.go:299] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0908 02:54:05.988170       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I0908 02:54:05.989212       1 storage_rbac.go:331] created rolebinding.rbac.authorization.k8s.io/system::extension-apiserver-authentication-reader in kube-system
I0908 02:54:05.991581       1 healthz.go:244] poststarthook/rbac/bootstrap-roles check failed: readyz
[-]poststarthook/rbac/bootstrap-roles failed: not finished
I0908 02:54:05.993202       1 storage_rbac.go:331] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0908 02:54:05.997638       1 storage_rbac.go:331] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0908 02:54:06.000763       1 storage_rbac.go:331] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0908 02:54:06.003994       1 storage_rbac.go:331] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0908 02:54:06.007114       1 storage_rbac.go:331] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0908 02:54:06.010990       1 storage_rbac.go:331] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
W0908 02:54:06.103123       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [172.20.56.43]
I0908 02:54:06.103806       1 controller.go:611] quota admission added evaluator for: endpoints
I0908 02:54:06.114977       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
I0908 02:54:06.697878       1 controller.go:611] quota admission added evaluator for: serviceaccounts
I0908 02:54:08.518234       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
I0908 02:54:09.272628       1 controller.go:611] quota admission added evaluator for: deployments.apps
I0908 02:54:09.291625       1 controller.go:611] quota admission added evaluator for: poddisruptionbudgets.policy
I0908 02:54:10.033955       1 controller.go:611] quota admission added evaluator for: limitranges
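The storage_rbac.go lines above are the kube-apiserver's RBAC bootstrapper publishing the default system: ClusterRoles, ClusterRoleBindings, Roles, and RoleBindings at startup; the repeated poststarthook/rbac/bootstrap-roles failures just mean readyz is held back until that bootstrap finishes. A minimal client-go sketch for inspecting the result after the fact; the kubeconfig location and the system: prefix filter are assumptions, not anything this job ran:

// Sketch only: list the bootstrap RBAC bindings the apiserver just created.
package main

import (
	"context"
	"fmt"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: a kubeconfig at the default ~/.kube/config location.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// The storage_rbac.go:266 messages correspond to objects like these.
	crbs, err := cs.RbacV1().ClusterRoleBindings().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, crb := range crbs.Items {
		if strings.HasPrefix(crb.Name, "system:") || crb.Name == "cluster-admin" {
			fmt.Println(crb.Name)
		}
	}
}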
I0908 02:54:10.971342       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
I0908 02:54:23.150330       1 cacher.go:800] cacher (*core.Secret): 1 objects queued in incoming channel.
I0908 02:54:23.150345       1 cacher.go:800] cacher (*core.Secret): 2 objects queued in incoming channel.
I0908 02:54:23.158050       1 cacher.go:800] cacher (*core.ServiceAccount): 1 objects queued in incoming channel.
I0908 02:54:23.158062       1 cacher.go:800] cacher (*core.ServiceAccount): 2 objects queued in incoming channel.
I0908 02:54:34.629580       1 cacher.go:800] cacher (*core.ServiceAccount): 3 objects queued in incoming channel.
I0908 02:54:34.629595       1 cacher.go:800] cacher (*core.ServiceAccount): 4 objects queued in incoming channel.
I0908 02:54:36.340495       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
I0908 02:54:36.444035       1 controller.go:611] quota admission added evaluator for: replicasets.apps
I0908 02:54:36.456539       1 cacher.go:800] cacher (*apps.Deployment): 1 objects queued in incoming channel.
I0908 02:54:36.456554       1 cacher.go:800] cacher (*apps.Deployment): 2 objects queued in incoming channel.
I0908 02:54:36.496134       1 cacher.go:800] cacher (*rbac.ClusterRole): 1 objects queued in incoming channel.
I0908 02:54:36.496149       1 cacher.go:800] cacher (*rbac.ClusterRole): 2 objects queued in incoming channel.
I0908 02:54:36.496259       1 cacher.go:800] cacher (*rbac.ClusterRole): 1 objects queued in incoming channel.
I0908 02:54:36.496267       1 cacher.go:800] cacher (*rbac.ClusterRole): 2 objects queued in incoming channel.
I0908 02:54:36.895404       1 cacher.go:800] cacher (*core.Pod): 1 objects queued in incoming channel.
I0908 02:54:36.895420       1 cacher.go:800] cacher (*core.Pod): 2 objects queued in incoming channel.
I0908 02:54:36.895430       1 cacher.go:800] cacher (*core.Pod): 3 objects queued in incoming channel.
I0908 02:54:36.895434       1 cacher.go:800] cacher (*core.Pod): 4 objects queued in incoming channel.
I0908 02:54:45.106504       1 client.go:360] parsed scheme: "passthrough"
I0908 02:54:45.106564       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:4001  <nil> 0 <nil>}] <nil> <nil>}
I0908 02:54:45.106577       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0908 02:54:45.106780       1 balancer_conn_wrappers.go:78] pickfirstBalancer: HandleSubConnStateChange: 0xc00e5fdcb0, {CONNECTING <nil>}
I0908 02:54:45.112157       1 balancer_conn_wrappers.go:78] pickfirstBalancer: HandleSubConnStateChange: 0xc00e5fdcb0, {READY <nil>}
I0908 02:54:45.112869       1 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = "transport is closing"
I0908 02:54:45.502874       1 client.go:360] parsed scheme: "passthrough"
I0908 02:54:45.502913       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:4002  <nil> 0 <nil>}] <nil> <nil>}
I0908 02:54:45.502923       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0908 02:54:45.503097       1 balancer_conn_wrappers.go:78] pickfirstBalancer: HandleSubConnStateChange: 0xc00e5fdfd0, {CONNECTING <nil>}
I0908 02:54:45.508328       1 balancer_conn_wrappers.go:78] pickfirstBalancer: HandleSubConnStateChange: 0xc00e5fdfd0, {READY <nil>}
I0908 02:54:45.508953       1 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = "transport is closing"
I0908 02:55:19.871234       1 client.go:360] parsed scheme: "passthrough"
I0908 02:55:19.871277       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:4001  <nil> 0 <nil>}] <nil> <nil>}
I0908 02:55:19.871287       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0908 02:55:19.871492       1 balancer_conn_wrappers.go:78] pickfirstBalancer: HandleSubConnStateChange: 0xc0085b2880, {CONNECTING <nil>}
I0908 02:55:19.880007       1 balancer_conn_wrappers.go:78] pickfirstBalancer: HandleSubConnStateChange: 0xc0085b2880, {READY <nil>}
I0908 02:55:19.883113       1 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = "transport is closing"
I0908 02:55:29.692844       1 client.go:360] parsed scheme: "passthrough"
I0908 02:55:29.692884       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:4002  <nil> 0 <nil>}] <nil> <nil>}
I0908 02:55:29.692894       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0908 02:55:29.693065       1 balancer_conn_wrappers.go:78] pickfirstBalancer: HandleSubConnStateChange: 0xc008fbbaf0, {CONNECTING <nil>}
I0908 02:55:29.698192       1 balancer_conn_wrappers.go:78] pickfirstBalancer: HandleSubConnStateChange: 0xc008fbbaf0, {READY <nil>}
I0908 02:55:29.698778       1 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = "transport is closing"
I0908 02:55:54.218883       1 client.go:360] parsed scheme: "passthrough"
I0908 02:55:54.218922       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:4001  <nil> 0 <nil>}] <nil> <nil>}
I0908 02:55:54.218932       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0908 02:55:54.219112       1 balancer_conn_wrappers.go:78] pickfirstBalancer: HandleSubConnStateChange: 0xc00e168ea0, {CONNECTING <nil>}
I0908 02:55:54.224241       1 balancer_conn_wrappers.go:78] pickfirstBalancer: HandleSubConnStateChange: 0xc00e168ea0, {READY <nil>}
I0908 02:55:54.224853       1 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = "transport is closing"
I0908 02:56:02.906270       1 client.go:360] parsed scheme: "passthrough"
I0908 02:56:02.906306       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:4002  <nil> 0 <nil>}] <nil> <nil>}
I0908 02:56:02.906316       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0908 02:56:02.906509       1 balancer_conn_wrappers.go:78] pickfirstBalancer: HandleSubConnStateChange: 0xc00e630e40, {CONNECTING <nil>}
I0908 02:56:02.911790       1 balancer_conn_wrappers.go:78] pickfirstBalancer: HandleSubConnStateChange: 0xc00e630e40, {READY <nil>}
I0908 02:56:02.912412       1 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = "transport is closing"
I0908 02:56:31.403030       1 client.go:360] parsed scheme: "passthrough"
I0908 02:56:31.403065       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:4001  <nil> 0 <nil>}] <nil> <nil>}
I0908 02:56:31.403074       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0908 02:56:31.403255       1 balancer_conn_wrappers.go:78] pickfirstBalancer: HandleSubConnStateChange: 0xc00f3d04a0, {CONNECTING <nil>}
I0908 02:56:31.408426       1 balancer_conn_wrappers.go:78] pickfirstBalancer: HandleSubConnStateChange: 0xc00f3d04a0, {READY <nil>}
I0908 02:56:31.408906       1 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = "transport is closing"
I0908 02:56:47.767186       1 client.go:360] parsed scheme: "passthrough"
I0908 02:56:47.767241       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:4002  <nil> 0 <nil>}] <nil> <nil>}
I0908 02:56:47.767253       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0908 02:56:47.767395       1 balancer_conn_wrappers.go:78] pickfirstBalancer: HandleSubConnStateChange: 0xc00fc92970, {CONNECTING <nil>}
I0908 02:56:47.772652       1 balancer_conn_wrappers.go:78] pickfirstBalancer: HandleSubConnStateChange: 0xc00fc92970, {READY <nil>}
I0908 02:56:47.773197       1 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = "transport is closing"
I0908 02:57:00.690118       1 cacher.go:800] cacher (*coordination.Lease): 1 objects queued in incoming channel.
I0908 02:57:00.690135       1 cacher.go:800] cacher (*coordination.Lease): 2 objects queued in incoming channel.
I0908 02:57:00.690405       1 cacher.go:800] cacher (*coordination.Lease): 1 objects queued in incoming channel.
I0908 02:57:00.690417       1 cacher.go:800] cacher (*coordination.Lease): 2 objects queued in incoming channel.
I0908 02:57:09.799298       1 client.go:360] parsed scheme: "passthrough"
I0908 02:57:09.799336       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:4001  <nil> 0 <nil>}] <nil> <nil>}
I0908 02:57:09.799346       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0908 02:57:09.799544       1 balancer_conn_wrappers.go:78] pickfirstBalancer: HandleSubConnStateChange: 0xc009897970, {CONNECTING <nil>}
I0908 02:57:09.804944       1 balancer_conn_wrappers.go:78] pickfirstBalancer: HandleSubConnStateChange: 0xc009897970, {READY <nil>}
I0908 02:57:09.805529       1 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = "transport is closing"
I0908 02:57:27.019982       1 client.go:360] parsed scheme: "passthrough"
I0908 02:57:27.020023       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:4002  <nil> 0 <nil>}] <nil> <nil>}
I0908 02:57:27.020033       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0908 02:57:27.020226       1 balancer_conn_wrappers.go:78] pickfirstBalancer: HandleSubConnStateChange: 0xc008f3ec40, {CONNECTING <nil>}
I0908 02:57:27.025418       1 balancer_conn_wrappers.go:78] pickfirstBalancer: HandleSubConnStateChange: 0xc008f3ec40, {READY <nil>}
I0908 02:57:27.026102       1 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = "transport is closing"
I0908 02:57:40.325268       1 client.go:360] parsed scheme: "passthrough"
I0908 02:57:40.325307       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:4001  <nil> 0 <nil>}] <nil> <nil>}
I0908 02:57:40.325316       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0908 02:57:40.325508       1 balancer_conn_wrappers.go:78] pickfirstBalancer: HandleSubConnStateChange: 0xc00699f350, {CONNECTING <nil>}
I0908 02:57:40.330629       1 balancer_conn_wrappers.go:78] pickfirstBalancer: HandleSubConnStateChange: 0xc00699f350, {READY <nil>}
I0908 02:57:40.331285       1 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = "transport is closing"
I0908 02:58:07.964674       1 client.go:360] parsed scheme: "passthrough"
I0908 02:58:07.964734       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:4002  <nil> 0 <nil>}] <nil> <nil>}
I0908 02:58:07.964744       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0908 02:58:07.964947       1 balancer_conn_wrappers.go:78] pickfirstBalancer: HandleSubConnStateChange: 0xc0091e8510, {CONNECTING <nil>}
I0908 02:58:07.971934       1 balancer_conn_wrappers.go:78] pickfirstBalancer: HandleSubConnStateChange: 0xc0091e8510, {READY <nil>}
I0908 02:58:07.972726       1 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = "transport is closing"
I0908 02:58:19.718037       1 client.go:360] parsed scheme: "passthrough"
I0908 02:58:19.718078       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:4001  <nil> 0 <nil>}] <nil> <nil>}
I0908 02:58:19.718089       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0908 02:58:19.718283       1 balancer_conn_wrappers.go:78] pickfirstBalancer: HandleSubConnStateChange: 0xc009c4d250, {CONNECTING <nil>}
I0908 02:58:19.723478       1 balancer_conn_wrappers.go:78] pickfirstBalancer: HandleSubConnStateChange: 0xc009c4d250, {READY <nil>}
I0908 02:58:19.724064       1 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = "transport is closing"
I0908 02:58:41.753811       1 client.go:360] parsed scheme: "passthrough"
I0908 02:58:41.753870       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:4002  <nil> 0 <nil>}] <nil> <nil>}
I0908 02:58:41.753880       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0908 02:58:41.754095       1 balancer_conn_wrappers.go:78] pickfirstBalancer: HandleSubConnStateChange: 0xc00b94b3a0, {CONNECTING <nil>}
I0908 02:58:41.759225       1 balancer_conn_wrappers.go:78] pickfirstBalancer: HandleSubConnStateChange: 0xc00b94b3a0, {READY <nil>}
I0908 02:58:41.759928       1 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = "transport is closing"
I0908 02:58:57.634936       1 client.go:360] parsed scheme: "passthrough"
I0908 02:58:57.634977       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:4001  <nil> 0 <nil>}] <nil> <nil>}
I0908 02:58:57.634987       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0908 02:58:57.635191       1 balancer_conn_wrappers.go:78] pickfirstBalancer: HandleSubConnStateChange: 0xc00ff2b100, {CONNECTING <nil>}
I0908 02:58:57.640352       1 balancer_conn_wrappers.go:78] pickfirstBalancer: HandleSubConnStateChange: 0xc00ff2b100, {READY <nil>}
I0908 02:58:57.641004       1 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = "transport is closing"
I0908 02:59:09.656305       1 cacher.go:800] cacher (*core.Namespace): 1 objects queued in incoming channel.
I0908 02:59:09.656323       1 cacher.go:800] cacher (*core.Namespace): 2 objects queued in incoming channel.
I0908 02:59:10.295154       1 node_authorizer.go:203] "NODE DENY" err="node 'ip-172-20-51-126.eu-west-3.compute.internal' cannot get unknown configmap emptydir-8780/kube-root-ca.crt"
I0908 02:59:10.387916       1 controller.go:611] quota admission added evaluator for: cronjobs.batch
I0908 02:59:10.664003       1 controller.go:611] quota admission added evaluator for: ingresses.networking.k8s.io
I0908 02:59:12.386215       1 controller.go:611] quota admission added evaluator for: statefulsets.apps
I0908 02:59:17.905674       1 controller.go:189] Updating CRD OpenAPI spec because e2e-test-crd-publish-openapi-5077-crds.crd-publish-openapi-test-multi-to-single-ver.example.com changed
I0908 02:59:17.954069       1 client.go:360] parsed scheme: "passthrough"
I0908 02:59:17.954100       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:4002  <nil> 0 <nil>}] <nil> <nil>}
I0908 02:59:17.954109       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0908 02:59:17.954413       1 balancer_conn_wrappers.go:78] pickfirstBalancer: HandleSubConnStateChange: 0xc009578ad0, {CONNECTING <nil>}
I0908 02:59:17.967111       1 balancer_conn_wrappers.go:78] pickfirstBalancer: HandleSubConnStateChange: 0xc009578ad0, {READY <nil>}
I0908 02:59:17.969673       1 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = "transport is closing"
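The recurring parsed scheme: "passthrough" / pick_first / "transport is closing" groups above are short-lived gRPC connections from the apiserver to its etcd endpoints on 127.0.0.1:4001 and 4002 (the kops etcd-main and etcd-events client ports); each one dials, transitions CONNECTING to READY, and is torn down. A rough grpc-go sketch of that client pattern; the target and the plaintext credentials are illustrative assumptions, not the apiserver's actual TLS setup:

// Sketch only: the dial pattern behind the passthrough/pick_first log lines.
package main

import (
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

func main() {
	// "passthrough:///addr" hands the address to the connection unresolved,
	// which is what the ccResolverWrapper lines above report.
	// Assumption: insecure credentials; the real client uses etcd TLS.
	conn, err := grpc.Dial("passthrough:///127.0.0.1:4001",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	// The default pick_first balancer then walks the SubConn through
	// CONNECTING -> READY, matching the pickfirstBalancer state changes above.
	defer conn.Close()
}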
I0908 02:59:18.014597       1 client.go:360] parsed scheme: "endpoint"
I0908 02:59:18.014623       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:4001  <nil> 0 <nil>}]
I0908 02:59:18.023537       1 store.go:1428] Monitoring e2e-test-crd-publish-openapi-5077-crds.crd-publish-openapi-test-multi-to-single-ver.example.com count at <storage-prefix>//crd-publish-openapi-test-multi-to-single-ver.example.com/e2e-test-crd-publish-openapi-5077-crds
I0908 02:59:18.024474       1 client.go:360] parsed scheme: "endpoint"
I0908 02:59:18.024503       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:4001  <nil> 0 <nil>}]
I0908 02:59:18.031183       1 cacher.go:405] cacher (*unstructured.Unstructured): initialized
I0908 02:59:18.032053       1 store.go:1428] Monitoring e2e-test-crd-publish-openapi-5077-crds.crd-publish-openapi-test-multi-to-single-ver.example.com count at <storage-prefix>//crd-publish-openapi-test-multi-to-single-ver.example.com/e2e-test-crd-publish-openapi-5077-crds
I0908 02:59:18.035113       1 cacher.go:405] cacher (*unstructured.Unstructured): initialized
==== END logs for container kube-apiserver of pod kube-system/kube-apiserver-ip-172-20-56-43.eu-west-3.compute.internal ====
==== START logs for container healthcheck of pod kube-system/kube-apiserver-ip-172-20-56-43.eu-west-3.compute.internal ====
I0908 02:53:30.958856       1 main.go:178] listening on :3990
==== END logs for container healthcheck of pod kube-system/kube-apiserver-ip-172-20-56-43.eu-west-3.compute.internal ====
==== START logs for container kube-controller-manager of pod kube-system/kube-controller-manager-ip-172-20-56-43.eu-west-3.compute.internal ====
I0908 02:54:21.687024       1 flags.go:59] FLAG: --add-dir-header="false"
I0908 02:54:21.687122       1 flags.go:59] FLAG: --address="0.0.0.0"
I0908 02:54:21.687129       1 flags.go:59] FLAG: --allocate-node-cidrs="true"
I0908 02:54:21.687134       1 flags.go:59] FLAG: --allow-metric-labels="[]"
I0908 02:54:21.687144       1 flags.go:59] FLAG: --allow-untagged-cloud="false"
I0908 02:54:21.687148       1 flags.go:59] FLAG: --alsologtostderr="true"
I0908 02:54:21.687152       1 flags.go:59] FLAG: --attach-detach-reconcile-sync-period="1m0s"
I0908 02:54:21.687156       1 flags.go:59] FLAG: --authentication-kubeconfig="/var/lib/kube-controller-manager/kubeconfig"
I0908 02:54:21.687161       1 flags.go:59] FLAG: --authentication-skip-lookup="false"
I0908 02:54:21.687165       1 flags.go:59] FLAG: --authentication-token-webhook-cache-ttl="10s"
I0908 02:54:21.687169       1 flags.go:59] FLAG: --authentication-tolerate-lookup-failure="false"
I0908 02:54:21.687173       1 flags.go:59] FLAG: --authorization-always-allow-paths="[/healthz,/readyz,/livez]"
I0908 02:54:21.687181       1 flags.go:59] FLAG: --authorization-kubeconfig="/var/lib/kube-controller-manager/kubeconfig"
I0908 02:54:21.687186       1 flags.go:59] FLAG: --authorization-webhook-cache-authorized-ttl="10s"
I0908 02:54:21.687197       1 flags.go:59] FLAG: --authorization-webhook-cache-unauthorized-ttl="10s"
I0908 02:54:21.687201       1 flags.go:59] FLAG: --bind-address="0.0.0.0"
I0908 02:54:21.687206       1 flags.go:59] FLAG: --cert-dir=""
I0908 02:54:21.687210       1 flags.go:59] FLAG: --cidr-allocator-type="RangeAllocator"
I0908 02:54:21.687215       1 flags.go:59] FLAG: --client-ca-file=""
I0908 02:54:21.687219       1 flags.go:59] FLAG: --cloud-config="/etc/kubernetes/cloud.config"
I0908 02:54:21.687224       1 flags.go:59] FLAG: --cloud-provider="aws"
I0908 02:54:21.687228       1 flags.go:59] FLAG: --cloud-provider-gce-lb-src-cidrs="130.211.0.0/22,209.85.152.0/22,209.85.204.0/22,35.191.0.0/16"
I0908 02:54:21.687244       1 flags.go:59] FLAG: --cluster-cidr="100.96.0.0/11"
I0908 02:54:21.687249       1 flags.go:59] FLAG: --cluster-name="e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io"
I0908 02:54:21.687254       1 flags.go:59] FLAG: --cluster-signing-cert-file="/srv/kubernetes/kube-controller-manager/ca.crt"
I0908 02:54:21.687259       1 flags.go:59] FLAG: --cluster-signing-duration="8760h0m0s"
I0908 02:54:21.687264       1 flags.go:59] FLAG: --cluster-signing-key-file="/srv/kubernetes/kube-controller-manager/ca.key"
I0908 02:54:21.687269       1 flags.go:59] FLAG: --cluster-signing-kube-apiserver-client-cert-file=""
I0908 02:54:21.687272       1 flags.go:59] FLAG: --cluster-signing-kube-apiserver-client-key-file=""
I0908 02:54:21.687276       1 flags.go:59] FLAG: --cluster-signing-kubelet-client-cert-file=""
I0908 02:54:21.687280       1 flags.go:59] FLAG: --cluster-signing-kubelet-client-key-file=""
I0908 02:54:21.687284       1 flags.go:59] FLAG: --cluster-signing-kubelet-serving-cert-file=""
I0908 02:54:21.687287       1 flags.go:59] FLAG: --cluster-signing-kubelet-serving-key-file=""
I0908 02:54:21.687291       1 flags.go:59] FLAG: --cluster-signing-legacy-unknown-cert-file=""
I0908 02:54:21.687295       1 flags.go:59] FLAG: --cluster-signing-legacy-unknown-key-file=""
I0908 02:54:21.687299       1 flags.go:59] FLAG: --concurrent-deployment-syncs="5"
I0908 02:54:21.687308       1 flags.go:59] FLAG: --concurrent-endpoint-syncs="5"
I0908 02:54:21.687313       1 flags.go:59] FLAG: --concurrent-gc-syncs="20"
I0908 02:54:21.687317       1 flags.go:59] FLAG: --concurrent-namespace-syncs="10"
I0908 02:54:21.687321       1 flags.go:59] FLAG: --concurrent-rc-syncs="5"
I0908 02:54:21.687325       1 flags.go:59] FLAG: --concurrent-replicaset-syncs="5"
I0908 02:54:21.687329       1 flags.go:59] FLAG: --concurrent-resource-quota-syncs="5"
I0908 02:54:21.687334       1 flags.go:59] FLAG: --concurrent-service-endpoint-syncs="5"
I0908 02:54:21.687338       1 flags.go:59] FLAG: --concurrent-service-syncs="1"
I0908 02:54:21.687342       1 flags.go:59] FLAG: --concurrent-serviceaccount-token-syncs="5"
I0908 02:54:21.687349       1 flags.go:59] FLAG: --concurrent-statefulset-syncs="5"
I0908 02:54:21.687353       1 flags.go:59] FLAG: --concurrent-ttl-after-finished-syncs="5"
I0908 02:54:21.687357       1 flags.go:59] FLAG: --configure-cloud-routes="true"
I0908 02:54:21.687362       1 flags.go:59] FLAG: --contention-profiling="false"
I0908 02:54:21.687367       1 flags.go:59] FLAG: --controller-start-interval="0s"
I0908 02:54:21.687372       1 flags.go:59] FLAG: --controllers="[*]"
I0908 02:54:21.687385       1 flags.go:59] FLAG: --deleting-pods-burst="0"
I0908 02:54:21.687389       1 flags.go:59] FLAG: --deleting-pods-qps="0.1"
I0908 02:54:21.687401       1 flags.go:59] FLAG: --deployment-controller-sync-period="30s"
I0908 02:54:21.687405       1 flags.go:59] FLAG: --disable-attach-detach-reconcile-sync="false"
I0908 02:54:21.687409       1 flags.go:59] FLAG: --disabled-metrics="[]"
I0908 02:54:21.687414       1 flags.go:59] FLAG: --enable-dynamic-provisioning="true"
I0908 02:54:21.687418       1 flags.go:59] FLAG: --enable-garbage-collector="true"
I0908 02:54:21.687422       1 flags.go:59] FLAG: --enable-hostpath-provisioner="false"
I0908 02:54:21.687426       1 flags.go:59] FLAG: --enable-leader-migration="false"
I0908 02:54:21.687431       1 flags.go:59] FLAG: --enable-taint-manager="true"
I0908 02:54:21.687435       1 flags.go:59] FLAG: --endpoint-updates-batch-period="0s"
I0908 02:54:21.687439       1 flags.go:59] FLAG: --endpointslice-updates-batch-period="0s"
I0908 02:54:21.687443       1 flags.go:59] FLAG: --experimental-cluster-signing-duration="8760h0m0s"
I0908 02:54:21.687448       1 flags.go:59] FLAG: --experimental-logging-sanitization="false"
I0908 02:54:21.687452       1 flags.go:59] FLAG: --external-cloud-volume-plugin=""
I0908 02:54:21.687456       1 flags.go:59] FLAG: --feature-gates=""
I0908 02:54:21.687463       1 flags.go:59] FLAG: --flex-volume-plugin-dir="/usr/libexec/kubernetes/kubelet-plugins/volume/exec/"
I0908 02:54:21.687468       1 flags.go:59] FLAG: --help="false"
I0908 02:54:21.687472       1 flags.go:59] FLAG: --horizontal-pod-autoscaler-cpu-initialization-period="5m0s"
I0908 02:54:21.687477       1 flags.go:59] FLAG: --horizontal-pod-autoscaler-downscale-delay="5m0s"
I0908 02:54:21.687481       1 flags.go:59] FLAG: --horizontal-pod-autoscaler-downscale-stabilization="5m0s"
I0908 02:54:21.687486       1 flags.go:59] FLAG: --horizontal-pod-autoscaler-initial-readiness-delay="30s"
I0908 02:54:21.687491       1 flags.go:59] FLAG: --horizontal-pod-autoscaler-sync-period="15s"
I0908 02:54:21.687495       1 flags.go:59] FLAG: --horizontal-pod-autoscaler-tolerance="0.1"
I0908 02:54:21.687501       1 flags.go:59] FLAG: --horizontal-pod-autoscaler-upscale-delay="3m0s"
I0908 02:54:21.687505       1 flags.go:59] FLAG: --horizontal-pod-autoscaler-use-rest-clients="true"
I0908 02:54:21.687510       1 flags.go:59] FLAG: --http2-max-streams-per-connection="0"
I0908 02:54:21.687515       1 flags.go:59] FLAG: --kube-api-burst="30"
I0908 02:54:21.687519       1 flags.go:59] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
I0908 02:54:21.687524       1 flags.go:59] FLAG: --kube-api-qps="20"
I0908 02:54:21.687529       1 flags.go:59] FLAG: --kubeconfig="/var/lib/kube-controller-manager/kubeconfig"
I0908 02:54:21.687534       1 flags.go:59] FLAG: --large-cluster-size-threshold="50"
I0908 02:54:21.687538       1 flags.go:59] FLAG: --leader-elect="true"
I0908 02:54:21.687543       1 flags.go:59] FLAG: --leader-elect-lease-duration="15s"
I0908 02:54:21.687547       1 flags.go:59] FLAG: --leader-elect-renew-deadline="10s"
I0908 02:54:21.687552       1 flags.go:59] FLAG: --leader-elect-resource-lock="leases"
I0908 02:54:21.687556       1 flags.go:59] FLAG: --leader-elect-resource-name="kube-controller-manager"
I0908 02:54:21.687561       1 flags.go:59] FLAG: --leader-elect-resource-namespace="kube-system"
I0908 02:54:21.687566       1 flags.go:59] FLAG: --leader-elect-retry-period="2s"
I0908 02:54:21.687571       1 flags.go:59] FLAG: --leader-migration-config=""
I0908 02:54:21.687575       1 flags.go:59] FLAG: --log-backtrace-at=":0"
I0908 02:54:21.687594       1 flags.go:59] FLAG: --log-dir=""
I0908 02:54:21.687599       1 flags.go:59] FLAG: --log-file="/var/log/kube-controller-manager.log"
I0908 02:54:21.687604       1 flags.go:59] FLAG: --log-file-max-size="1800"
I0908 02:54:21.687609       1 flags.go:59] FLAG: --log-flush-frequency="5s"
I0908 02:54:21.687613       1 flags.go:59] FLAG: --logging-format="text"
I0908 02:54:21.687617       1 flags.go:59] FLAG: --logtostderr="false"
I0908 02:54:21.687622       1 flags.go:59] FLAG: --master=""
I0908 02:54:21.687626       1 flags.go:59] FLAG: --max-endpoints-per-slice="100"
I0908 02:54:21.687630       1 flags.go:59] FLAG: --min-resync-period="12h0m0s"
I0908 02:54:21.687635       1 flags.go:59] FLAG: --mirroring-concurrent-service-endpoint-syncs="5"
I0908 02:54:21.687639       1 flags.go:59] FLAG: --mirroring-endpointslice-updates-batch-period="0s"
I0908 02:54:21.687644       1 flags.go:59] FLAG: --mirroring-max-endpoints-per-subset="1000"
I0908 02:54:21.687649       1 flags.go:59] FLAG: --namespace-sync-period="5m0s"
I0908 02:54:21.687653       1 flags.go:59] FLAG: --node-cidr-mask-size="0"
I0908 02:54:21.687657       1 flags.go:59] FLAG: --node-cidr-mask-size-ipv4="0"
I0908 02:54:21.687661       1 flags.go:59] FLAG: --node-cidr-mask-size-ipv6="0"
I0908 02:54:21.687665       1 flags.go:59] FLAG: --node-eviction-rate="0.1"
I0908 02:54:21.687670       1 flags.go:59] FLAG: --node-monitor-grace-period="40s"
I0908 02:54:21.687674       1 flags.go:59] FLAG: --node-monitor-period="5s"
I0908 02:54:21.687678       1 flags.go:59] FLAG: --node-startup-grace-period="1m0s"
I0908 02:54:21.687683       1 flags.go:59] FLAG: --node-sync-period="0s"
I0908 02:54:21.687687       1 flags.go:59] FLAG: --one-output="false"
I0908 02:54:21.687691       1 flags.go:59] FLAG: --permit-address-sharing="false"
I0908 02:54:21.687695       1 flags.go:59] FLAG: --permit-port-sharing="false"
I0908 02:54:21.687699       1 flags.go:59] FLAG: --pod-eviction-timeout="5m0s"
I0908 02:54:21.687703       1 flags.go:59] FLAG: --port="10252"
I0908 02:54:21.687708       1 flags.go:59] FLAG: --profiling="true"
I0908 02:54:21.687712       1 flags.go:59] FLAG: --pv-recycler-increment-timeout-nfs="30"
I0908 02:54:21.687716       1 flags.go:59] FLAG: --pv-recycler-minimum-timeout-hostpath="60"
I0908 02:54:21.687720       1 flags.go:59] FLAG: --pv-recycler-minimum-timeout-nfs="300"
I0908 02:54:21.687725       1 flags.go:59] FLAG: --pv-recycler-pod-template-filepath-hostpath=""
I0908 02:54:21.687729       1 flags.go:59] FLAG: --pv-recycler-pod-template-filepath-nfs=""
I0908 02:54:21.687733       1 flags.go:59] FLAG: --pv-recycler-timeout-increment-hostpath="30"
I0908 02:54:21.687737       1 flags.go:59] FLAG: --pvclaimbinder-sync-period="15s"
I0908 02:54:21.687742       1 flags.go:59] FLAG: --register-retry-count="10"
I0908 02:54:21.687746       1 flags.go:59] FLAG: --requestheader-allowed-names="[]"
I0908 02:54:21.687758       1 flags.go:59] FLAG: --requestheader-client-ca-file=""
I0908 02:54:21.687762       1 flags.go:59] FLAG: --requestheader-extra-headers-prefix="[x-remote-extra-]"
I0908 02:54:21.687779       1 flags.go:59] FLAG: --requestheader-group-headers="[x-remote-group]"
I0908 02:54:21.687791       1 flags.go:59] FLAG: --requestheader-username-headers="[x-remote-user]"
I0908 02:54:21.687797       1 flags.go:59] FLAG: --resource-quota-sync-period="5m0s"
I0908 02:54:21.687802       1 flags.go:59] FLAG: --root-ca-file="/srv/kubernetes/ca.crt"
I0908 02:54:21.687807       1 flags.go:59] FLAG: --route-reconciliation-period="10s"
I0908 02:54:21.687812       1 flags.go:59] FLAG: --secondary-node-eviction-rate="0.01"
I0908 02:54:21.687833       1 flags.go:59] FLAG: --secure-port="10257"
I0908 02:54:21.687839       1 flags.go:59] FLAG: --service-account-private-key-file="/srv/kubernetes/kube-controller-manager/service-account.key"
I0908 02:54:21.687845       1 flags.go:59] FLAG: --service-cluster-ip-range=""
I0908 02:54:21.687850       1 flags.go:59] FLAG: --show-hidden-metrics-for-version=""
I0908 02:54:21.687861       1 flags.go:59] FLAG: --skip-headers="false"
I0908 02:54:21.687867       1 flags.go:59] FLAG: --skip-log-headers="false"
I0908 02:54:21.687871       1 flags.go:59] FLAG: --stderrthreshold="2"
I0908 02:54:21.687876       1 flags.go:59] FLAG: --terminated-pod-gc-threshold="12500"
I0908 02:54:21.687881       1 flags.go:59] FLAG: --tls-cert-file="/srv/kubernetes/kube-controller-manager/server.crt"
I0908 02:54:21.687887       1 flags.go:59] FLAG: --tls-cipher-suites="[]"
I0908 02:54:21.687898       1 flags.go:59] FLAG: --tls-min-version=""
I0908 02:54:21.687902       1 flags.go:59] FLAG: --tls-private-key-file="/srv/kubernetes/kube-controller-manager/server.key"
I0908 02:54:21.687907       1 flags.go:59] FLAG: --tls-sni-cert-key="[]"
I0908 02:54:21.687917       1 flags.go:59] FLAG: --unhealthy-zone-threshold="0.55"
I0908 02:54:21.687923       1 flags.go:59] FLAG: --use-service-account-credentials="true"
I0908 02:54:21.687927       1 flags.go:59] FLAG: --v="2"
I0908 02:54:21.687931       1 flags.go:59] FLAG: --version="false"
I0908 02:54:21.687939       1 flags.go:59] FLAG: --vmodule=""
I0908 02:54:21.687944       1 flags.go:59] FLAG: --volume-host-allow-local-loopback="true"
I0908 02:54:21.687948       1 flags.go:59] FLAG: --volume-host-cidr-denylist="[]"
I0908 02:54:21.690332       1 dynamic_serving_content.go:111] Loaded a new cert/key pair for "serving-cert::/srv/kubernetes/kube-controller-manager/server.crt::/srv/kubernetes/kube-controller-manager/server.key"
I0908 02:54:22.161026       1 requestheader_controller.go:244] Loaded a new request header values for RequestHeaderAuthRequestController
I0908 02:54:22.166609       1 controllermanager.go:175] Version: v1.21.4
I0908 02:54:22.182211       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
I0908 02:54:22.183271       1 tlsconfig.go:200] loaded serving cert ["serving-cert::/srv/kubernetes/kube-controller-manager/server.crt::/srv/kubernetes/kube-controller-manager/server.key"]: "kube-controller-manager" [serving] validServingFor=[kube-controller-manager.kube-system.svc.cluster.local] issuer="kubernetes-ca" (2021-09-06 02:51:46 +0000 UTC to 2022-12-13 11:51:46 +0000 UTC (now=2021-09-08 02:54:22.182505645 +0000 UTC))
I0908 02:54:22.188058       1 named_certificates.go:53] loaded SNI cert [0/"self-signed loopback"]: "apiserver-loopback-client@1631069662" [serving] validServingFor=[apiserver-loopback-client] issuer="apiserver-loopback-client-ca@1631069661" (2021-09-08 01:54:21 +0000 UTC to 2022-09-08 01:54:21 +0000 UTC (now=2021-09-08 02:54:22.188041552 +0000 UTC))
I0908 02:54:22.188852       1 secure_serving.go:197] Serving securely on [::]:10257
I0908 02:54:22.192662       1 dynamic_serving_content.go:130] Starting serving-cert::/srv/kubernetes/kube-controller-manager/server.crt::/srv/kubernetes/kube-controller-manager/server.key
I0908 02:54:22.192710       1 tlsconfig.go:240] Starting DynamicServingCertificateController
I0908 02:54:22.193535       1 deprecated_insecure_serving.go:53] Serving insecurely on [::]:10252
I0908 02:54:22.195291       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
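The long FLAG: --name="value" block above is the standard Kubernetes component startup dump of every parsed flag (flags.go:59). A minimal reconstruction of how such a dump can be produced with spf13/pflag; the two sample flags are placeholders standing in for the real flag set:

// Sketch only: reproducing the FLAG: --name="value" startup dump.
package main

import (
	"fmt"

	"github.com/spf13/pflag"
)

func main() {
	// Assumption: two placeholder flags, not kube-controller-manager's real set.
	fs := pflag.NewFlagSet("kube-controller-manager", pflag.ExitOnError)
	fs.String("cloud-provider", "aws", "cloud provider backend")
	fs.Bool("leader-elect", true, "enable leader election")
	_ = fs.Parse(nil)

	// Walk every registered flag and print it in the same shape as flags.go:59.
	fs.VisitAll(func(f *pflag.Flag) {
		fmt.Printf("FLAG: --%s=%q\n", f.Name, f.Value.String())
	})
}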
I0908 02:54:22.195303       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0908 02:54:22.195930       1 shared_informer.go:240] Waiting for caches to sync for RequestHeaderAuthRequestController
I0908 02:54:22.199195       1 leaderelection.go:243] attempting to acquire leader lease kube-system/kube-controller-manager...
I0908 02:54:22.199322       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0908 02:54:22.199331       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0908 02:54:22.214974       1 leaderelection.go:253] successfully acquired lease kube-system/kube-controller-manager
I0908 02:54:22.216334       1 event.go:291] "Event occurred" object="kube-system/kube-controller-manager" kind="Lease" apiVersion="coordination.k8s.io/v1" type="Normal" reason="LeaderElection" message="ip-172-20-56-43.eu-west-3.compute.internal_3642bc9d-b3b3-4e28-b708-a58670d9c57a became leader"
I0908 02:54:22.296970       1 shared_informer.go:247] Caches are synced for RequestHeaderAuthRequestController 
I0908 02:54:22.300201       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
I0908 02:54:22.300201       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 
I0908 02:54:22.300502       1 tlsconfig.go:178] loaded client CA [0/"client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"]: "kubernetes-ca" [] issuer="<self>" (2021-09-06 02:50:16 +0000 UTC to 2031-09-06 02:50:16 +0000 UTC (now=2021-09-08 02:54:22.300490511 +0000 UTC))
I0908 02:54:22.300519       1 tlsconfig.go:178] loaded client CA [1/"client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"]: "apiserver-aggregator-ca" [] issuer="<self>" (2021-09-06 02:50:16 +0000 UTC to 2031-09-06 02:50:16 +0000 UTC (now=2021-09-08 02:54:22.300514053 +0000 UTC))
I0908 02:54:22.300674       1 tlsconfig.go:200] loaded serving cert ["serving-cert::/srv/kubernetes/kube-controller-manager/server.crt::/srv/kubernetes/kube-controller-manager/server.key"]: "kube-controller-manager" [serving] validServingFor=[kube-controller-manager.kube-system.svc.cluster.local] issuer="kubernetes-ca" (2021-09-06 02:51:46 +0000 UTC to 2022-12-13 11:51:46 +0000 UTC (now=2021-09-08 02:54:22.300667635 +0000 UTC))
I0908 02:54:22.300843       1 named_certificates.go:53] loaded SNI cert [0/"self-signed loopback"]: "apiserver-loopback-client@1631069662" [serving] validServingFor=[apiserver-loopback-client] issuer="apiserver-loopback-client-ca@1631069661" (2021-09-08 01:54:21 +0000 UTC to 2022-09-08 01:54:21 +0000 UTC (now=2021-09-08 02:54:22.300816943 +0000 UTC))
W0908 02:54:22.771125       1 plugins.go:105] WARNING: aws built-in cloud provider is now deprecated. The AWS provider is deprecated and will be removed in a future release
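The leaderelection.go lines above show this kube-controller-manager winning the kube-system/kube-controller-manager Lease. A hedged client-go sketch of the same mechanism; the durations mirror the --leader-elect-* flags from the dump above, while the kubeconfig handling and identity derivation are assumptions:

// Sketch only: the lease-based leader election acquired above.
package main

import (
	"context"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
	"k8s.io/klog/v2"
)

func main() {
	// Assumption: default kubeconfig and a hostname-derived identity.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		klog.Fatal(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	id, _ := os.Hostname()

	// Matches --leader-elect-resource-lock="leases", -name, -namespace.
	lock := &resourcelock.LeaseLock{
		LeaseMeta:  metav1.ObjectMeta{Namespace: "kube-system", Name: "kube-controller-manager"},
		Client:     cs.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: id},
	}
	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second, // --leader-elect-lease-duration
		RenewDeadline: 10 * time.Second, // --leader-elect-renew-deadline
		RetryPeriod:   2 * time.Second,  // --leader-elect-retry-period
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				klog.Info("acquired lease; controllers would start here")
			},
			OnStoppedLeading: func() {
				klog.Info("lost lease")
			},
		},
	})
}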
I0908 02:54:22.797218       1 aws.go:1261] Building AWS cloudprovider
I0908 02:54:22.797301       1 aws.go:1221] Zone not specified in configuration file; querying AWS metadata service
I0908 02:54:23.023290       1 tags.go:79] AWS cloud filtering on ClusterID: e2e-d8d340a84c-b7d46.test-cncf-aws.k8s.io
I0908 02:54:23.023341       1 aws.go:812] Setting up informers for Cloud
I0908 02:54:23.029146       1 shared_informer.go:240] Waiting for caches to sync for tokens
I0908 02:54:23.040742       1 controllermanager.go:559] Starting "statefulset"
I0908 02:54:23.049345       1 controllermanager.go:574] Started "statefulset"
I0908 02:54:23.049361       1 controllermanager.go:559] Starting "ttl"
I0908 02:54:23.050071       1 stateful_set.go:146] Starting stateful set controller
I0908 02:54:23.050083       1 shared_informer.go:240] Waiting for caches to sync for stateful set
I0908 02:54:23.053923       1 controllermanager.go:574] Started "ttl"
I0908 02:54:23.053937       1 controllermanager.go:559] Starting "pvc-protection"
I0908 02:54:23.058040       1 ttl_controller.go:121] Starting TTL controller
I0908 02:54:23.058054       1 shared_informer.go:240] Waiting for caches to sync for TTL
I0908 02:54:23.058069       1 shared_informer.go:247] Caches are synced for TTL 
I0908 02:54:23.060053       1 controllermanager.go:574] Started "pvc-protection"
I0908 02:54:23.060083       1 controllermanager.go:559] Starting "replicaset"
I0908 02:54:23.060210       1 pvc_protection_controller.go:110] "Starting PVC protection controller"
I0908 02:54:23.060222       1 shared_informer.go:240] Waiting for caches to sync for PVC protection
I0908 02:54:23.065378       1 controllermanager.go:574] Started "replicaset"
I0908 02:54:23.065395       1 controllermanager.go:559] Starting "serviceaccount"
I0908 02:54:23.065511       1 replica_set.go:182] Starting replicaset controller
I0908 02:54:23.065523       1 shared_informer.go:240] Waiting for caches to sync for ReplicaSet
I0908 02:54:23.070438       1 controllermanager.go:574] Started "serviceaccount"
I0908 02:54:23.070450       1 controllermanager.go:559] Starting "csrsigning"
I0908 02:54:23.071130       1 serviceaccounts_controller.go:117] Starting service account controller
I0908 02:54:23.071146       1 shared_informer.go:240] Waiting for caches to sync for service account
I0908 02:54:23.075701       1 dynamic_serving_content.go:111] Loaded a new cert/key pair for "csr-controller::/srv/kubernetes/kube-controller-manager/ca.crt::/srv/kubernetes/kube-controller-manager/ca.key"
I0908 02:54:23.076122       1 dynamic_serving_content.go:111] Loaded a new cert/key pair for "csr-controller::/srv/kubernetes/kube-controller-manager/ca.crt::/srv/kubernetes/kube-controller-manager/ca.key"
I0908 02:54:23.076438       1 dynamic_serving_content.go:111] Loaded a new cert/key pair for "csr-controller::/srv/kubernetes/kube-controller-manager/ca.crt::/srv/kubernetes/kube-controller-manager/ca.key"
I0908 02:54:23.076711       1 dynamic_serving_content.go:111] Loaded a new cert/key pair for "csr-controller::/srv/kubernetes/kube-controller-manager/ca.crt::/srv/kubernetes/kube-controller-manager/ca.key"
I0908 02:54:23.076914       1 controllermanager.go:574] Started "csrsigning"
I0908 02:54:23.076929       1 controllermanager.go:559] Starting "persistentvolume-binder"
I0908 02:54:23.078605       1 certificate_controller.go:118] Starting certificate controller "csrsigning-kubelet-serving"
I0908 02:54:23.078618       1 shared_informer.go:240] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
I0908 02:54:23.078653       1 certificate_controller.go:118] Starting certificate controller "csrsigning-kubelet-client"
I0908 02:54:23.078659       1 shared_informer.go:240] Waiting for caches to sync for certificate-csrsigning-kubelet-client
I0908 02:54:23.078673       1 dynamic_serving_content.go:130] Starting csr-controller::/srv/kubernetes/kube-controller-manager/ca.crt::/srv/kubernetes/kube-controller-manager/ca.key
I0908 02:54:23.078701       1 dynamic_serving_content.go:130] Starting csr-controller::/srv/kubernetes/kube-controller-manager/ca.crt::/srv/kubernetes/kube-controller-manager/ca.key
I0908 02:54:23.079175       1 certificate_controller.go:118] Starting certificate controller "csrsigning-kube-apiserver-client"
I0908 02:54:23.079186       1 shared_informer.go:240] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
I0908 02:54:23.079222       1 dynamic_serving_content.go:130] Starting csr-controller::/srv/kubernetes/kube-controller-manager/ca.crt::/srv/kubernetes/kube-controller-manager/ca.key
I0908 02:54:23.079493       1 certificate_controller.go:118] Starting certificate controller "csrsigning-legacy-unknown"
I0908 02:54:23.079503       1 shared_informer.go:240] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
I0908 02:54:23.079520       1 dynamic_serving_content.go:130] Starting csr-controller::/srv/kubernetes/kube-controller-manager/ca.crt::/srv/kubernetes/kube-controller-manager/ca.key
I0908 02:54:23.088795       1 plugins.go:639] Loaded volume plugin "kubernetes.io/host-path"
I0908 02:54:23.089342       1 plugins.go:639] Loaded volume plugin "kubernetes.io/nfs"
I0908 02:54:23.089983       1 plugins.go:639] Loaded volume plugin "kubernetes.io/glusterfs"
I0908 02:54:23.090655       1 plugins.go:639] Loaded volume plugin "kubernetes.io/rbd"
I0908 02:54:23.090669       1 plugins.go:639] Loaded volume plugin "kubernetes.io/quobyte"
I0908 02:54:23.090677       1 plugins.go:639] Loaded volume plugin "kubernetes.io/aws-ebs"
I0908 02:54:23.090685       1 plugins.go:639] Loaded volume plugin "kubernetes.io/gce-pd"
I0908 02:54:23.090694       1 plugins.go:639] Loaded volume plugin "kubernetes.io/cinder"
I0908 02:54:23.090710       1 plugins.go:639] Loaded volume plugin "kubernetes.io/azure-disk"
I0908 02:54:23.090718       1 plugins.go:639] Loaded volume plugin "kubernetes.io/vsphere-volume"
I0908 02:54:23.090726       1 plugins.go:639] Loaded volume plugin "kubernetes.io/azure-file"
I0908 02:54:23.091379       1 plugins.go:639] Loaded volume plugin "kubernetes.io/flocker"
I0908 02:54:23.091927       1 plugins.go:639] Loaded volume plugin "kubernetes.io/portworx-volume"
I0908 02:54:23.093239       1 ttl_controller.go:276] "Changed ttl annotation" node="ip-172-20-56-43.eu-west-3.compute.internal" new_ttl="0s"
I0908 02:54:23.093420       1 plugins.go:639] Loaded volume plugin "kubernetes.io/scaleio"
I0908 02:54:23.093441       1 plugins.go:639] Loaded volume plugin "kubernetes.io/local-volume"
I0908 02:54:23.093454       1 plugins.go:639] Loaded volume plugin "kubernetes.io/storageos"
I0908 02:54:23.094083       1 plugins.go:639] Loaded volume plugin "kubernetes.io/csi"
I0908 02:54:23.094168       1 controllermanager.go:574] Started "persistentvolume-binder"
I0908 02:54:23.094180       1 controllermanager.go:559] Starting "attachdetach"
I0908 02:54:23.094298       1 pv_controller_base.go:308] Starting persistent volume controller
I0908 02:54:23.094313       1 shared_informer.go:240] Waiting for caches to sync for persistent volume
I0908 02:54:23.101975       1 plugins.go:639] Loaded volume plugin "kubernetes.io/azure-disk"
I0908 02:54:23.101992       1 plugins.go:639] Loaded volume plugin "kubernetes.io/vsphere-volume"
I0908 02:54:23.102000       1 plugins.go:639] Loaded volume plugin "kubernetes.io/aws-ebs"
I0908 02:54:23.102008       1 plugins.go:639] Loaded volume plugin "kubernetes.io/gce-pd"
I0908 02:54:23.102017       1 plugins.go:639] Loaded volume plugin "kubernetes.io/cinder"
I0908 02:54:23.102034       1 plugins.go:639] Loaded volume plugin "kubernetes.io/portworx-volume"
I0908 02:54:23.102044       1 plugins.go:639] Loaded volume plugin "kubernetes.io/scaleio"
I0908 02:54:23.102052       1 plugins.go:639] Loaded volume plugin "kubernetes.io/storageos"
I0908 02:54:23.102068       1 plugins.go:639] Loaded volume plugin "kubernetes.io/fc"
I0908 02:54:23.102718       1 plugins.go:639] Loaded volume plugin "kubernetes.io/iscsi"
I0908 02:54:23.102732       1 plugins.go:639] Loaded volume plugin "kubernetes.io/rbd"
I0908 02:54:23.103433       1 plugins.go:639] Loaded volume plugin "kubernetes.io/csi"
I0908 02:54:23.103527       1 controllermanager.go:574] Started "attachdetach"
I0908 02:54:23.103548       1 controllermanager.go:559] Starting "pv-protection"
W0908 02:54:23.104149       1 actual_state_of_world.go:534] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="ip-172-20-56-43.eu-west-3.compute.internal" does not exist
I0908 02:54:23.104941       1 attach_detach_controller.go:328] Starting attach detach controller
I0908 02:54:23.104954       1 shared_informer.go:240] Waiting for caches to sync for attach detach
I0908 02:54:23.130183       1 shared_informer.go:247] Caches are synced for tokens 
I0908 02:54:23.134277       1 controllermanager.go:574] Started "pv-protection"
I0908 02:54:23.134291       1 controllermanager.go:559] Starting "resourcequota"
I0908 02:54:23.134320       1 pv_protection_controller.go:83] Starting PV protection controller
I0908 02:54:23.134325       1 shared_informer.go:240] Waiting for caches to sync for PV protection
I0908 02:54:23.450601       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for replicasets.apps
I0908 02:54:23.451678       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for daemonsets.apps
I0908 02:54:23.451714       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for jobs.batch
I0908 02:54:23.452353       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for roles.rbac.authorization.k8s.io
I0908 02:54:23.452385       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for leases.coordination.k8s.io
I0908 02:54:23.452431       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for limitranges
I0908 02:54:23.452465       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for deployments.apps
I0908 02:54:23.452509       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for csistoragecapacities.storage.k8s.io
I0908 02:54:23.452556       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for rolebindings.rbac.authorization.k8s.io
I0908 02:54:23.452609       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for horizontalpodautoscalers.autoscaling
I0908 02:54:23.452635       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for poddisruptionbudgets.policy
I0908 02:54:23.452665       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for events.events.k8s.io
I0908 02:54:23.452724       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for serviceaccounts
I0908 02:54:23.452741       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for endpoints
I0908 02:54:23.452787       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for networkpolicies.networking.k8s.io
I0908 02:54:23.452811       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for ingresses.networking.k8s.io
I0908 02:54:23.452847       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for controllerrevisions.apps
I0908 02:54:23.452862       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for statefulsets.apps
I0908 02:54:23.452887       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for cronjobs.batch
I0908 02:54:23.452926       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for ingresses.extensions
I0908 02:54:23.452967       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for endpointslices.discovery.k8s.io
I0908 02:54:23.452998       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for podtemplates
I0908 02:54:23.453009       1 controllermanager.go:574] Started "resourcequota"
I0908 02:54:23.453017       1 controllermanager.go:559] Starting "job"
I0908 02:54:23.453230       1 resource_quota_controller.go:273] Starting resource quota controller
I0908 02:54:23.453242       1 shared_informer.go:240] Waiting for caches to sync for resource quota
I0908 02:54:23.453262       1 resource_quota_monitor.go:304] QuotaMonitor running
I0908 02:54:23.463239       1 resource_quota_controller.go:435] syncing resource quota controller with updated resources from discovery: added: [/v1, Resource=configmaps /v1, Resource=endpoints /v1, Resource=events /v1, Resource=limitranges /v1, Resource=persistentvolumeclaims /v1, Resource=pods /v1, Resource=podtemplates /v1, Resource=replicationcontrollers /v1, Resource=resourcequotas /v1, Resource=secrets /v1, Resource=serviceaccounts /v1, Resource=services apps/v1, Resource=controllerrevisions apps/v1, Resource=daemonsets apps/v1, Resource=deployments apps/v1, Resource=replicasets apps/v1, Resource=statefulsets autoscaling/v1, Resource=horizontalpodautoscalers batch/v1, Resource=cronjobs batch/v1, Resource=jobs coordination.k8s.io/v1, Resource=leases discovery.k8s.io/v1, Resource=endpointslices events.k8s.io/v1, Resource=events extensions/v1beta1, Resource=ingresses networking.k8s.io/v1, Resource=ingresses networking.k8s.io/v1, Resource=networkpolicies policy/v1, Resource=poddisruptionbudgets rbac.authorization.k8s.io/v1, Resource=rolebindings rbac.authorization.k8s.io/v1, Resource=roles storage.k8s.io/v1beta1, Resource=csistoragecapacities], removed: []
I0908 02:54:23.582220       1 controllermanager.go:574] Started "job"
I0908 02:54:23.582240       1 controllermanager.go:559] Starting "csrapproving"
I0908 02:54:23.582277       1 job_controller.go:150] Starting job controller
I0908 02:54:23.582287       1 shared_informer.go:240] Waiting for caches to sync for job
I0908 02:54:23.632845       1 controllermanager.go:574] Started "csrapproving"
I0908 02:54:23.632865       1 controllermanager.go:559] Starting "endpointslicemirroring"
I0908 02:54:23.632911       1 certificate_controller.go:118] Starting certificate controller "csrapproving"
I0908 02:54:23.632920       1 shared_informer.go:240] Waiting for caches to sync for certificate-csrapproving
I0908 02:54:23.783417       1 controllermanager.go:574] Started "endpointslicemirroring"
I0908 02:54:23.783440       1 controllermanager.go:559] Starting "horizontalpodautoscaling"
I0908 02:54:23.783480       1 endpointslicemirroring_controller.go:211] Starting EndpointSliceMirroring controller
I0908 02:54:23.783486       1 shared_informer.go:240] Waiting for caches to sync for endpoint_slice_mirroring
I0908 02:54:24.083595       1 controllermanager.go:574] Started "horizontalpodautoscaling"
I0908 02:54:24.083614       1 controllermanager.go:559] Starting "disruption"
I0908 02:54:24.083661       1 horizontal.go:169] Starting HPA controller
I0908 02:54:24.083667       1 shared_informer.go:240] Waiting for caches to sync for HPA
I0908 02:54:24.282976       1 controllermanager.go:574] Started "disruption"
I0908 02:54:24.282995       1 controllermanager.go:559] Starting "endpointslice"
I0908 02:54:24.284707       1 disruption.go:363] Starting disruption controller
I0908 02:54:24.284730       1 shared_informer.go:240] Waiting for caches to sync for disruption
I0908 02:54:24.433504       1 controllermanager.go:574] Started "endpointslice"
I0908 02:54:24.433525       1 controllermanager.go:559] Starting "nodeipam"
I0908 02:54:24.434655       1 endpointslice_controller.go:256] Starting endpoint slice controller
I0908 02:54:24.434669       1 shared_informer.go:240] Waiting for caches to sync for endpoint_slice
I0908 02:54:24.583766       1 node_ipam_controller.go:91] Sending events to api server.
I0908 02:54:34.602623       1 range_allocator.go:82] Sending events to api server.
I0908 02:54:34.603470       1 range_allocator.go:110] No Service CIDR provided. Skipping filtering out service addresses.
I0908 02:54:34.603483       1 range_allocator.go:116] No Secondary Service CIDR provided. Skipping filtering out secondary service addresses.
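Each controller above follows the same startup handshake: Starting "X", Started "X", then "Waiting for caches to sync" until the shared informers report synced. A minimal client-go sketch of that informer handshake; the kubeconfig wiring and the choice of a pod informer are assumptions standing in for the many informers the controller-manager actually shares:

// Sketch only: the "Waiting for caches to sync" / "Caches are synced" handshake.
package main

import (
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/klog/v2"
)

func main() {
	// Assumption: default kubeconfig location.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		klog.Fatal(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	stopCh := make(chan struct{})
	defer close(stopCh)

	// 12h matches the --min-resync-period flag in the dump above.
	factory := informers.NewSharedInformerFactory(cs, 12*time.Hour)
	pods := factory.Core().V1().Pods().Informer()

	factory.Start(stopCh)

	klog.Info("Waiting for caches to sync for pods")
	if !cache.WaitForCacheSync(stopCh, pods.HasSynced) {
		klog.Fatal("failed to sync informer caches")
	}
	klog.Info("Caches are synced for pods")
	// A real controller only launches its workers after this point.
}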
I0908 02:54:34.603519       1 controllermanager.go:574] Started "nodeipam"
I0908 02:54:34.603528       1 controllermanager.go:559] Starting "cloud-node-lifecycle"
I0908 02:54:34.603631       1 node_ipam_controller.go:154] Starting ipam controller
I0908 02:54:34.603641       1 shared_informer.go:240] Waiting for caches to sync for node
I0908 02:54:34.603646       1 shared_informer.go:247] Caches are synced for node 
I0908 02:54:34.603656       1 range_allocator.go:172] Starting range CIDR allocator
I0908 02:54:34.603661       1 shared_informer.go:240] Waiting for caches to sync for cidrallocator
I0908 02:54:34.603665       1 shared_informer.go:247] Caches are synced for cidrallocator 
I0908 02:54:34.605139       1 node_lifecycle_controller.go:76] Sending events to api server
I0908 02:54:34.605177       1 controllermanager.go:574] Started "cloud-node-lifecycle"
I0908 02:54:34.605187       1 controllermanager.go:559] Starting "root-ca-cert-publisher"
I0908 02:54:34.609237       1 range_allocator.go:373] Set node ip-172-20-56-43.eu-west-3.compute.internal PodCIDR to [100.96.0.0/24]
I0908 02:54:34.613179       1 controllermanager.go:574] Started "root-ca-cert-publisher"
I0908 02:54:34.613192       1 controllermanager.go:559] Starting "replicationcontroller"
I0908 02:54:34.613279       1 publisher.go:102] Starting root CA certificate configmap publisher
I0908 02:54:34.613286       1 shared_informer.go:240] Waiting for caches to sync for crt configmap
I0908 02:54:34.618696       1 controllermanager.go:574] Started "replicationcontroller"
I0908 02:54:34.618711       1 controllermanager.go:559] Starting "daemonset"
I0908 02:54:34.618806       1 replica_set.go:182] Starting replicationcontroller controller
I0908 02:54:34.618813       1 shared_informer.go:240] Waiting for caches to sync for ReplicationController
I0908 02:54:34.630911       1 controllermanager.go:574] Started "daemonset"
W0908 02:54:34.630924       1 controllermanager.go:553] "bootstrapsigner" is disabled
W0908 02:54:34.630929       1 controllermanager.go:553] "tokencleaner" is disabled
I0908 02:54:34.630935       1 controllermanager.go:559] Starting "persistentvolume-expander"
I0908 02:54:34.631047       1 daemon_controller.go:285] Starting daemon sets controller
I0908 02:54:34.631053       1 shared_informer.go:240] Waiting for caches to sync for daemon sets
I0908 02:54:34.638134       1 plugins.go:639] Loaded volume plugin "kubernetes.io/aws-ebs"
I0908 02:54:34.638152       1 plugins.go:639] Loaded volume plugin "kubernetes.io/gce-pd"
I0908 02:54:34.638163       1 plugins.go:639] Loaded volume plugin "kubernetes.io/cinder"
I0908 02:54:34.638171       1 plugins.go:639] Loaded volume plugin "kubernetes.io/azure-disk"
I0908 02:54:34.638180       1 plugins.go:639] Loaded volume plugin "kubernetes.io/vsphere-volume"
I0908 02:54:34.638189       1 plugins.go:639] Loaded volume plugin "kubernetes.io/azure-file"
I0908 02:54:34.638234       1 plugins.go:639] Loaded volume plugin "kubernetes.io/portworx-volume"
I0908 02:54:34.638244       1 plugins.go:639] Loaded volume plugin "kubernetes.io/glusterfs"
I0908 02:54:34.638255       1 plugins.go:639] Loaded volume plugin "kubernetes.io/rbd"
I0908 02:54:34.638264       1 plugins.go:639] Loaded volume plugin "kubernetes.io/scaleio"
I0908 02:54:34.638273       1 plugins.go:639] Loaded volume plugin "kubernetes.io/storageos"
I0908 02:54:34.638281       1 plugins.go:639] Loaded volume plugin "kubernetes.io/fc"
I0908 02:54:34.638383       1 controllermanager.go:574] Started "persistentvolume-expander"
I0908 02:54:34.638397       1 controllermanager.go:559] Starting "clusterrole-aggregation"
I0908 02:54:34.638510       1 expand_controller.go:327] Starting expand controller
I0908 02:54:34.638520       1 shared_informer.go:240] Waiting for caches to sync for expand
I0908 02:54:34.644615       1 controllermanager.go:574] Started "clusterrole-aggregation"
I0908 02:54:34.644629       1 controllermanager.go:559] Starting "ephemeral-volume"
I0908 02:54:34.644724       1 clusterroleaggregation_controller.go:194] Starting ClusterRoleAggregator
I0908 02:54:34.644730       1 shared_informer.go:240] Waiting for caches to sync for ClusterRoleAggregator
I0908 02:54:34.662905       1 controllermanager.go:574] Started "ephemeral-volume"
I0908 02:54:34.662921       1 controllermanager.go:559] Starting "podgc"
I0908 02:54:34.663016       1 controller.go:170] Starting ephemeral volume controller
I0908 02:54:34.663027       1 shared_informer.go:240] Waiting for caches to sync for ephemeral
I0908 02:54:34.677752       1 controllermanager.go:574] Started "podgc"
I0908 02:54:34.677769       1 controllermanager.go:559] Starting "namespace"
I0908 02:54:34.677885       1 gc_controller.go:89] Starting GC controller
I0908 02:54:34.677893       1 shared_informer.go:240] Waiting for caches to sync for GC
I0908 02:54:34.708829       1 controllermanager.go:574] Started "namespace"
I0908 02:54:34.708843       1 controllermanager.go:559] Starting "csrcleaner"
I0908 02:54:34.708908       1 namespace_controller.go:200] Starting namespace controller
I0908 02:54:34.708915       1 shared_informer.go:240] Waiting for caches to sync for namespace
I0908 02:54:34.717729       1 controllermanager.go:574] Started "csrcleaner"
I0908 02:54:34.717743       1 controllermanager.go:559] Starting "nodelifecycle"
I0908 02:54:34.717849       1 cleaner.go:82] Starting CSR cleaner controller
I0908 02:54:34.722100       1 node_lifecycle_controller.go:377] Sending events to api server.
I0908 02:54:34.723137       1 taint_manager.go:163] "Sending events to api server"
I0908 02:54:34.723256       1 node_lifecycle_controller.go:505] Controller will reconcile labels.
I0908 02:54:34.723315       1 controllermanager.go:574] Started "nodelifecycle"
I0908 02:54:34.723322       1 controllermanager.go:559] Starting "service"
I0908 02:54:34.723437       1 node_lifecycle_controller.go:539] Starting node controller
I0908 02:54:34.723442       1 shared_informer.go:240] Waiting for caches to sync for taint
I0908 02:54:34.791615       1 controllermanager.go:574] Started "service"
I0908 02:54:34.791636       1 controllermanager.go:559] Starting "ttl-after-finished"
I0908 02:54:34.791690       1 controller.go:230] Starting service controller
I0908 02:54:34.791698       1 shared_informer.go:240] Waiting for caches to sync for service
I0908 02:54:34.938765       1 controllermanager.go:574] Started "ttl-after-finished"
I0908 02:54:34.938786       1 controllermanager.go:559] Starting "endpoint"
I0908 02:54:34.938836       1 ttlafterfinished_controller.go:109] Starting TTL after finished controller
I0908 02:54:34.938843       1 shared_informer.go:240] Waiting for caches to sync for TTL after finished
I0908 02:54:35.088061       1 controllermanager.go:574] Started "endpoint"
I0908 02:54:35.088082       1 controllermanager.go:559] Starting "deployment"
I0908 02:54:35.088124       1 endpoints_controller.go:189] Starting endpoint controller
I0908 02:54:35.088131       1 shared_informer.go:240] Waiting for caches to sync for endpoint
I0908 02:54:35.238528       1 controllermanager.go:574] Started "deployment"
I0908 02:54:35.238551       1 controllermanager.go:559] Starting "cronjob"
I0908 02:54:35.238600       1 deployment_controller.go:153] "Starting controller" controller="deployment"
I0908 02:54:35.238608       1 shared_informer.go:240] Waiting for caches to sync for deployment
I0908 02:54:35.389049       1 controllermanager.go:574] Started "cronjob"
I0908 02:54:35.389069       1 controllermanager.go:559] Starting "route"
I0908 02:54:35.389104       1 cronjob_controllerv2.go:125] Starting cronjob controller v2
I0908 02:54:35.389110       1 shared_informer.go:240] Waiting for caches to sync for cronjob
I0908 02:54:35.539019       1 controllermanager.go:574] Started "route"
I0908 02:54:35.539038       1 controllermanager.go:559] Starting "garbagecollector"
I0908 02:54:35.539070       1 route_controller.go:100] Starting route controller
I0908 02:54:35.539077       1 shared_informer.go:240] Waiting for caches to sync for route
I0908 02:54:35.539083       1 shared_informer.go:247] Caches are synced for route 
I0908 02:54:35.576712       1 route_controller.go:193] Creating route for node ip-172-20-56-43.eu-west-3.compute.internal 100.96.0.0/24 with hint ca6d0760-5ed3-4072-adb7-df925f431870, throttled 243ns
I0908 02:54:35.788148       1 garbagecollector.go:142] Starting garbage collector controller
I0908 02:54:35.788167       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
I0908 02:54:35.788180       1 controllermanager.go:574] Started "garbagecollector"
I0908 02:54:35.788200       1 graph_builder.go:289] GraphBuilder running
I0908 02:54:35.788408       1 shared_informer.go:240] Waiting for caches to sync for resource quota
I0908 02:54:35.831727       1 shared_informer.go:247] Caches are synced for daemon sets 
I0908 02:54:35.834934       1 shared_informer.go:247] Caches are synced for endpoint_slice 
I0908 02:54:35.835052       1 shared_informer.go:247] Caches are synced for PV protection 
I0908 02:54:35.839564       1 shared_informer.go:247] Caches are synced for TTL after finished 
I0908 02:54:35.839599       1 shared_informer.go:247] Caches are synced for deployment 
I0908 02:54:35.845308       1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator 
I0908 02:54:35.850438       1 shared_informer.go:247] Caches are synced for stateful set 
I0908 02:54:35.860728       1 shared_informer.go:247] Caches are synced for PVC protection 
I0908 02:54:35.863876       1 shared_informer.go:247] Caches are synced for ephemeral 
I0908 02:54:35.866023       1 shared_informer.go:247] Caches are synced for ReplicaSet 
I0908 02:54:35.872181       1 shared_informer.go:247] Caches are synced for service account 
I0908 02:54:35.878401       1 shared_informer.go:247] Caches are synced for GC 
I0908 02:54:35.882632       1 shared_informer.go:247] Caches are synced for job 
I0908 02:54:35.883734       1 shared_informer.go:247] Caches are synced for HPA 
I0908 02:54:35.883739       1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring 
I0908 02:54:35.883751       1 endpointslicemirroring_controller.go:218] Starting 5 worker threads
I0908 02:54:35.884880       1 shared_informer.go:247] Caches are synced for disruption 
I0908 
02:54:35.884890       1 disruption.go:371] Sending events to api server.\nI0908 02:54:35.888581       1 shared_informer.go:247] Caches are synced for endpoint \nI0908 02:54:35.889767       1 shared_informer.go:247] Caches are synced for cronjob \nI0908 02:54:35.892627       1 shared_informer.go:247] Caches are synced for service \nI0908 02:54:35.894816       1 shared_informer.go:247] Caches are synced for persistent volume \nI0908 02:54:35.905969       1 shared_informer.go:247] Caches are synced for attach detach \nI0908 02:54:35.909097       1 shared_informer.go:247] Caches are synced for namespace \nI0908 02:54:35.919393       1 shared_informer.go:247] Caches are synced for ReplicationController \nI0908 02:54:35.933627       1 shared_informer.go:247] Caches are synced for certificate-csrapproving \nI0908 02:54:35.939227       1 shared_informer.go:247] Caches are synced for expand \nI0908 02:54:35.979158       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-client \nI0908 02:54:35.979162       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-serving \nI0908 02:54:35.979201       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kube-apiserver-client \nI0908 02:54:35.980239       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-legacy-unknown \nI0908 02:54:36.001166       1 garbagecollector.go:213] syncing garbage collector with updated resources from discovery (attempt 1): added: [/v1, Resource=configmaps /v1, Resource=endpoints /v1, Resource=events /v1, Resource=limitranges /v1, Resource=namespaces /v1, Resource=nodes /v1, Resource=persistentvolumeclaims /v1, Resource=persistentvolumes /v1, Resource=pods /v1, Resource=podtemplates /v1, Resource=replicationcontrollers /v1, Resource=resourcequotas /v1, Resource=secrets /v1, Resource=serviceaccounts /v1, Resource=services admissionregistration.k8s.io/v1, Resource=mutatingwebhookconfigurations admissionregistration.k8s.io/v1, Resource=validatingwebhookconfigurations apiextensions.k8s.io/v1, Resource=customresourcedefinitions apiregistration.k8s.io/v1, Resource=apiservices apps/v1, Resource=controllerrevisions apps/v1, Resource=daemonsets apps/v1, Resource=deployments apps/v1, Resource=replicasets apps/v1, Resource=statefulsets autoscaling/v1, Resource=horizontalpodautoscalers batch/v1, Resource=cronjobs batch/v1, Resource=jobs certificates.k8s.io/v1, Resource=certificatesigningrequests coordination.k8s.io/v1, Resource=leases discovery.k8s.io/v1, Resource=endpointslices events.k8s.io/v1, Resource=events extensions/v1beta1, Resource=ingresses flowcontrol.apiserver.k8s.io/v1beta1, Resource=flowschemas flowcontrol.apiserver.k8s.io/v1beta1, Resource=prioritylevelconfigurations networking.k8s.io/v1, Resource=ingressclasses networking.k8s.io/v1, Resource=ingresses networking.k8s.io/v1, Resource=networkpolicies node.k8s.io/v1, Resource=runtimeclasses policy/v1, Resource=poddisruptionbudgets policy/v1beta1, Resource=podsecuritypolicies rbac.authorization.k8s.io/v1, Resource=clusterrolebindings rbac.authorization.k8s.io/v1, Resource=clusterroles rbac.authorization.k8s.io/v1, Resource=rolebindings rbac.authorization.k8s.io/v1, Resource=roles scheduling.k8s.io/v1, Resource=priorityclasses storage.k8s.io/v1, Resource=csidrivers storage.k8s.io/v1, Resource=csinodes storage.k8s.io/v1, Resource=storageclasses storage.k8s.io/v1, Resource=volumeattachments storage.k8s.io/v1beta1, Resource=csistoragecapacities], removed: []\nI0908 02:54:36.013451       1 
shared_informer.go:247] Caches are synced for crt configmap \nI0908 02:54:36.013541       1 shared_informer.go:240] Waiting for caches to sync for garbage collector\nI0908 02:54:36.053810       1 shared_informer.go:247] Caches are synced for resource quota \nI0908 02:54:36.089300       1 shared_informer.go:247] Caches are synced for resource quota \nI0908 02:54:36.089312       1 resource_quota_controller.go:454] synced quota controller\nI0908 02:54:36.124429       1 shared_informer.go:247] Caches are synced for taint \nI0908 02:54:36.124488       1 node_lifecycle_controller.go:770] Controller observed a new Node: \"ip-172-20-56-43.eu-west-3.compute.internal\"\nI0908 02:54:36.124508       1 controller_utils.go:172] Recording Registered Node ip-172-20-56-43.eu-west-3.compute.internal in Controller event message for node ip-172-20-56-43.eu-west-3.compute.internal\nI0908 02:54:36.124535       1 node_lifecycle_controller.go:1398] Initializing eviction metric for zone: eu-west-3:\x00:eu-west-3a\nW0908 02:54:36.124591       1 node_lifecycle_controller.go:1013] Missing timestamp for Node ip-172-20-56-43.eu-west-3.compute.internal. Assuming now as a timestamp.\nI0908 02:54:36.124847       1 taint_manager.go:187] \"Starting NoExecuteTaintManager\"\nI0908 02:54:36.125273       1 event.go:291] \"Event occurred\" object=\"ip-172-20-56-43.eu-west-3.compute.internal\" kind=\"Node\" apiVersion=\"v1\" type=\"Normal\" reason=\"RegisteredNode\" message=\"Node ip-172-20-56-43.eu-west-3.compute.internal event: Registered Node ip-172-20-56-43.eu-west-3.compute.internal in Controller\"\nI0908 02:54:36.124630       1 node_lifecycle_controller.go:1214] Controller detected that zone eu-west-3:\x00:eu-west-3a is now in state Normal.\nI0908 02:54:36.160857       1 route_controller.go:213] Created route for node ip-172-20-56-43.eu-west-3.compute.internal 100.96.0.0/24 with hint ca6d0760-5ed3-4072-adb7-df925f431870 after 584.143672ms\nI0908 02:54:36.160894       1 route_controller.go:303] Patching node status ip-172-20-56-43.eu-west-3.compute.internal with true previous condition was:nil\nI0908 02:54:36.353069       1 event.go:291] \"Event occurred\" object=\"kube-system/kops-controller\" kind=\"DaemonSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\&#