Result: FAILURE
Tests: 0 failed / 0 succeeded
Started: 2021-08-04 22:59
Elapsed: 29m23s
Revision: master

No Test Failures!


Error lines from build-log.txt

... skipping 126 lines ...
I0804 23:00:33.648529    4108 up.go:43] Cleaning up any leaked resources from previous cluster
I0804 23:00:33.648558    4108 dumplogs.go:38] /logs/artifacts/9011e63b-f577-11eb-9d79-fe40b1711f5a/kops toolbox dump --name e2e-489af555f9-bbb74.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /etc/aws-ssh/aws-ssh-private --ssh-user admin
I0804 23:00:33.663660    4128 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
I0804 23:00:33.663752    4128 featureflag.go:165] FeatureFlag "AlphaAllowGCE"=true

Cluster.kops.k8s.io "e2e-489af555f9-bbb74.test-cncf-aws.k8s.io" not found
W0804 23:00:34.154123    4108 down.go:34] Dumping cluster logs at the start of Down() failed: exit status 1
I0804 23:00:34.154161    4108 down.go:48] /logs/artifacts/9011e63b-f577-11eb-9d79-fe40b1711f5a/kops delete cluster --name e2e-489af555f9-bbb74.test-cncf-aws.k8s.io --yes
I0804 23:00:34.170558    4139 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
I0804 23:00:34.170632    4139 featureflag.go:165] FeatureFlag "AlphaAllowGCE"=true

error reading cluster configuration: Cluster.kops.k8s.io "e2e-489af555f9-bbb74.test-cncf-aws.k8s.io" not found
I0804 23:00:34.724947    4108 http.go:37] curl http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip
2021/08/04 23:00:34 failed to get external ip from metadata service: http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip returned 404
I0804 23:00:34.732974    4108 http.go:37] curl https://ip.jsb.workers.dev
I0804 23:00:34.830222    4108 up.go:144] /logs/artifacts/9011e63b-f577-11eb-9d79-fe40b1711f5a/kops create cluster --name e2e-489af555f9-bbb74.test-cncf-aws.k8s.io --cloud aws --kubernetes-version https://storage.googleapis.com/kubernetes-release/release/v1.21.3 --ssh-public-key /etc/aws-ssh/aws-ssh-public --override cluster.spec.nodePortAccess=0.0.0.0/0 --yes --image=379101102735/debian-stretch-hvm-x86_64-gp2-2021-07-21-65742 --channel=alpha --networking=kubenet --container-runtime=docker --admin-access 35.225.226.150/32 --master-count 1 --master-volume-size 48 --node-count 4 --node-volume-size 48 --zones eu-west-2a --master-size c5.large
I0804 23:00:34.847854    4149 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
I0804 23:00:34.848227    4149 featureflag.go:165] FeatureFlag "AlphaAllowGCE"=true
I0804 23:00:34.901543    4149 create_cluster.go:728] Using SSH public key: /etc/aws-ssh/aws-ssh-public
I0804 23:00:35.448303    4149 new_cluster.go:1011]  Cloud Provider ID = aws
... skipping 42 lines ...

I0804 23:01:00.817460    4108 up.go:181] /logs/artifacts/9011e63b-f577-11eb-9d79-fe40b1711f5a/kops validate cluster --name e2e-489af555f9-bbb74.test-cncf-aws.k8s.io --count 10 --wait 15m0s
I0804 23:01:00.833046    4170 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
I0804 23:01:00.833119    4170 featureflag.go:165] FeatureFlag "AlphaAllowGCE"=true
Validating cluster e2e-489af555f9-bbb74.test-cncf-aws.k8s.io

W0804 23:01:02.076783    4170 validate_cluster.go:173] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.medium	4	4	eu-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0804 23:01:12.107984    4170 validate_cluster.go:221] (will retry): cluster not yet healthy
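The failure mode above is the kops placeholder-DNS symptom: api.<cluster> keeps resolving to 203.0.113.123 until the dns-controller deployment rewrites the record. A minimal way to check by hand (a sketch only, assuming shell access to the same environment with dig, kubectl, and this job's kops binary and kubeconfig available):

  # Does the API record still point at the kops placeholder (203.0.113.123)?
  dig +short api.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io

  # dns-controller (kube-system) is the component responsible for updating that record
  kubectl -n kube-system logs deployment/dns-controller --tail=100

  # Re-run validation once the record propagates (same flags the harness uses)
  kops validate cluster --name e2e-489af555f9-bbb74.test-cncf-aws.k8s.io --wait 15m0s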
... skipping 289 lines: the same INSTANCE GROUPS / VALIDATION ERRORS output above repeats verbatim after each "(will retry): cluster not yet healthy" warning from 23:01:22 through 23:04:23, including two more "no such host" DNS lookup errors (23:02:12, 23:02:42) ...
W0804 23:04:33.145949    4170 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.medium	4	4	eu-west-2a

... skipping 7 lines ...
Machine	i-04f67f418ea3cf920				machine "i-04f67f418ea3cf920" has not yet joined cluster
Machine	i-0999b05a81ef71ddf				machine "i-0999b05a81ef71ddf" has not yet joined cluster
Machine	i-0a9546c770069db7c				machine "i-0a9546c770069db7c" has not yet joined cluster
Pod	kube-system/coredns-5dc785954d-7bjbh		system-cluster-critical pod "coredns-5dc785954d-7bjbh" is pending
Pod	kube-system/coredns-autoscaler-84d4cfd89c-2g6cx	system-cluster-critical pod "coredns-autoscaler-84d4cfd89c-2g6cx" is pending

Validation Failed
W0804 23:04:45.873414    4170 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.medium	4	4	eu-west-2a

... skipping 7 lines ...
Machine	i-04f67f418ea3cf920				machine "i-04f67f418ea3cf920" has not yet joined cluster
Machine	i-0999b05a81ef71ddf				machine "i-0999b05a81ef71ddf" has not yet joined cluster
Machine	i-0a9546c770069db7c				machine "i-0a9546c770069db7c" has not yet joined cluster
Pod	kube-system/coredns-5dc785954d-7bjbh		system-cluster-critical pod "coredns-5dc785954d-7bjbh" is pending
Pod	kube-system/coredns-autoscaler-84d4cfd89c-2g6cx	system-cluster-critical pod "coredns-autoscaler-84d4cfd89c-2g6cx" is pending

Validation Failed
W0804 23:04:57.608325    4170 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.medium	4	4	eu-west-2a

... skipping 12 lines ...
Node	ip-172-20-61-222.eu-west-2.compute.internal				node "ip-172-20-61-222.eu-west-2.compute.internal" of role "node" is not ready
Node	ip-172-20-63-4.eu-west-2.compute.internal				node "ip-172-20-63-4.eu-west-2.compute.internal" of role "node" is not ready
Pod	kube-system/coredns-5dc785954d-7bjbh					system-cluster-critical pod "coredns-5dc785954d-7bjbh" is pending
Pod	kube-system/coredns-autoscaler-84d4cfd89c-2g6cx				system-cluster-critical pod "coredns-autoscaler-84d4cfd89c-2g6cx" is pending
Pod	kube-system/kube-proxy-ip-172-20-61-222.eu-west-2.compute.internal	system-node-critical pod "kube-proxy-ip-172-20-61-222.eu-west-2.compute.internal" is pending

Validation Failed
W0804 23:05:09.504114    4170 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.medium	4	4	eu-west-2a

... skipping 520 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: cinder]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for providers [openstack] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1092
------------------------------
... skipping 144 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: emptydir]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver emptydir doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 184 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: hostPath]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver hostPath doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 166 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Aug  4 23:07:41.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-3956" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":-1,"completed":1,"skipped":9,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug  4 23:07:41.750: INFO: Only supported for providers [gce gke] (not aws)
... skipping 80 lines ...
Aug  4 23:07:39.319: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name secret-test-map-83c2999f-f239-4d60-aa70-5abacbb0f1aa
STEP: Creating a pod to test consume secrets
Aug  4 23:07:39.726: INFO: Waiting up to 5m0s for pod "pod-secrets-7cde89ca-6299-491c-affb-14b1efbffd1c" in namespace "secrets-9864" to be "Succeeded or Failed"
Aug  4 23:07:39.828: INFO: Pod "pod-secrets-7cde89ca-6299-491c-affb-14b1efbffd1c": Phase="Pending", Reason="", readiness=false. Elapsed: 101.834103ms
Aug  4 23:07:41.927: INFO: Pod "pod-secrets-7cde89ca-6299-491c-affb-14b1efbffd1c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.200935043s
Aug  4 23:07:44.026: INFO: Pod "pod-secrets-7cde89ca-6299-491c-affb-14b1efbffd1c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.299875154s
Aug  4 23:07:46.124: INFO: Pod "pod-secrets-7cde89ca-6299-491c-affb-14b1efbffd1c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.398573782s
STEP: Saw pod success
Aug  4 23:07:46.124: INFO: Pod "pod-secrets-7cde89ca-6299-491c-affb-14b1efbffd1c" satisfied condition "Succeeded or Failed"
Aug  4 23:07:46.224: INFO: Trying to get logs from node ip-172-20-63-4.eu-west-2.compute.internal pod pod-secrets-7cde89ca-6299-491c-affb-14b1efbffd1c container secret-volume-test: <nil>
STEP: delete the pod
Aug  4 23:07:46.486: INFO: Waiting for pod pod-secrets-7cde89ca-6299-491c-affb-14b1efbffd1c to disappear
Aug  4 23:07:46.583: INFO: Pod pod-secrets-7cde89ca-6299-491c-affb-14b1efbffd1c no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:7.962 seconds]
[sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":5,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 3 lines ...
Aug  4 23:07:39.965: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name secret-test-9ab685db-9940-420a-bf0a-ac0849416d64
STEP: Creating a pod to test consume secrets
Aug  4 23:07:40.394: INFO: Waiting up to 5m0s for pod "pod-secrets-130045f6-ee0e-47cd-88a2-3d3e5ccd2aaf" in namespace "secrets-6374" to be "Succeeded or Failed"
Aug  4 23:07:40.510: INFO: Pod "pod-secrets-130045f6-ee0e-47cd-88a2-3d3e5ccd2aaf": Phase="Pending", Reason="", readiness=false. Elapsed: 115.044725ms
Aug  4 23:07:42.613: INFO: Pod "pod-secrets-130045f6-ee0e-47cd-88a2-3d3e5ccd2aaf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.218918171s
Aug  4 23:07:44.731: INFO: Pod "pod-secrets-130045f6-ee0e-47cd-88a2-3d3e5ccd2aaf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.336109834s
Aug  4 23:07:46.833: INFO: Pod "pod-secrets-130045f6-ee0e-47cd-88a2-3d3e5ccd2aaf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.438356783s
Aug  4 23:07:48.930: INFO: Pod "pod-secrets-130045f6-ee0e-47cd-88a2-3d3e5ccd2aaf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.535695571s
STEP: Saw pod success
Aug  4 23:07:48.930: INFO: Pod "pod-secrets-130045f6-ee0e-47cd-88a2-3d3e5ccd2aaf" satisfied condition "Succeeded or Failed"
Aug  4 23:07:49.027: INFO: Trying to get logs from node ip-172-20-45-94.eu-west-2.compute.internal pod pod-secrets-130045f6-ee0e-47cd-88a2-3d3e5ccd2aaf container secret-volume-test: <nil>
STEP: delete the pod
Aug  4 23:07:49.483: INFO: Waiting for pod pod-secrets-130045f6-ee0e-47cd-88a2-3d3e5ccd2aaf to disappear
Aug  4 23:07:49.580: INFO: Pod pod-secrets-130045f6-ee0e-47cd-88a2-3d3e5ccd2aaf no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:10.862 seconds]
[sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":6,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug  4 23:07:49.906: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
... skipping 70 lines ...
Aug  4 23:07:41.062: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run the container with uid 0 [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:99
Aug  4 23:07:41.351: INFO: Waiting up to 5m0s for pod "busybox-user-0-baf473f9-a50b-42cc-babb-d809acf1904a" in namespace "security-context-test-3370" to be "Succeeded or Failed"
Aug  4 23:07:41.447: INFO: Pod "busybox-user-0-baf473f9-a50b-42cc-babb-d809acf1904a": Phase="Pending", Reason="", readiness=false. Elapsed: 96.060165ms
Aug  4 23:07:43.545: INFO: Pod "busybox-user-0-baf473f9-a50b-42cc-babb-d809acf1904a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.193595023s
Aug  4 23:07:45.645: INFO: Pod "busybox-user-0-baf473f9-a50b-42cc-babb-d809acf1904a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.294142256s
Aug  4 23:07:47.742: INFO: Pod "busybox-user-0-baf473f9-a50b-42cc-babb-d809acf1904a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.390398608s
Aug  4 23:07:49.839: INFO: Pod "busybox-user-0-baf473f9-a50b-42cc-babb-d809acf1904a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.487404207s
Aug  4 23:07:49.839: INFO: Pod "busybox-user-0-baf473f9-a50b-42cc-babb-d809acf1904a" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Aug  4 23:07:49.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-3370" for this suite.


... skipping 49 lines ...
• [SLOW TEST:11.976 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should serve a basic endpoint from pods  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods  [Conformance]","total":-1,"completed":1,"skipped":3,"failed":0}

S
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 34 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl client-side validation
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:982
    should create/apply a CR with unknown fields for CRD with no validation schema
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:983
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl client-side validation should create/apply a CR with unknown fields for CRD with no validation schema","total":-1,"completed":1,"skipped":3,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug  4 23:07:52.918: INFO: Only supported for providers [openstack] (not aws)
... skipping 44 lines ...
W0804 23:07:39.357202    4863 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Aug  4 23:07:39.357: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0644 on node default medium
Aug  4 23:07:39.650: INFO: Waiting up to 5m0s for pod "pod-3de77b81-c7a0-4c96-9d85-b9f72e89725f" in namespace "emptydir-6237" to be "Succeeded or Failed"
Aug  4 23:07:39.746: INFO: Pod "pod-3de77b81-c7a0-4c96-9d85-b9f72e89725f": Phase="Pending", Reason="", readiness=false. Elapsed: 95.8618ms
Aug  4 23:07:41.843: INFO: Pod "pod-3de77b81-c7a0-4c96-9d85-b9f72e89725f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.192972531s
Aug  4 23:07:43.940: INFO: Pod "pod-3de77b81-c7a0-4c96-9d85-b9f72e89725f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.289870045s
Aug  4 23:07:46.038: INFO: Pod "pod-3de77b81-c7a0-4c96-9d85-b9f72e89725f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.387870164s
Aug  4 23:07:48.139: INFO: Pod "pod-3de77b81-c7a0-4c96-9d85-b9f72e89725f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.489222729s
Aug  4 23:07:50.236: INFO: Pod "pod-3de77b81-c7a0-4c96-9d85-b9f72e89725f": Phase="Pending", Reason="", readiness=false. Elapsed: 10.58614486s
Aug  4 23:07:52.334: INFO: Pod "pod-3de77b81-c7a0-4c96-9d85-b9f72e89725f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.684620725s
STEP: Saw pod success
Aug  4 23:07:52.335: INFO: Pod "pod-3de77b81-c7a0-4c96-9d85-b9f72e89725f" satisfied condition "Succeeded or Failed"
Aug  4 23:07:52.431: INFO: Trying to get logs from node ip-172-20-46-233.eu-west-2.compute.internal pod pod-3de77b81-c7a0-4c96-9d85-b9f72e89725f container test-container: <nil>
STEP: delete the pod
Aug  4 23:07:53.041: INFO: Waiting for pod pod-3de77b81-c7a0-4c96-9d85-b9f72e89725f to disappear
Aug  4 23:07:53.137: INFO: Pod pod-3de77b81-c7a0-4c96-9d85-b9f72e89725f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:14.476 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":5,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-node] AppArmor
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 55 lines ...
• [SLOW TEST:15.129 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":-1,"completed":1,"skipped":5,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 11 lines ...
STEP: Destroying namespace "services-5403" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750

•
------------------------------
{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":-1,"completed":2,"skipped":8,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug  4 23:07:54.295: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 49 lines ...
Aug  4 23:07:41.419: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should allow privilege escalation when true [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:367
Aug  4 23:07:41.709: INFO: Waiting up to 5m0s for pod "alpine-nnp-true-243c14df-b43f-4a34-92cd-7b0a366e6b14" in namespace "security-context-test-7334" to be "Succeeded or Failed"
Aug  4 23:07:41.806: INFO: Pod "alpine-nnp-true-243c14df-b43f-4a34-92cd-7b0a366e6b14": Phase="Pending", Reason="", readiness=false. Elapsed: 96.435259ms
Aug  4 23:07:43.904: INFO: Pod "alpine-nnp-true-243c14df-b43f-4a34-92cd-7b0a366e6b14": Phase="Pending", Reason="", readiness=false. Elapsed: 2.194505262s
Aug  4 23:07:46.001: INFO: Pod "alpine-nnp-true-243c14df-b43f-4a34-92cd-7b0a366e6b14": Phase="Pending", Reason="", readiness=false. Elapsed: 4.292040411s
Aug  4 23:07:48.098: INFO: Pod "alpine-nnp-true-243c14df-b43f-4a34-92cd-7b0a366e6b14": Phase="Pending", Reason="", readiness=false. Elapsed: 6.388681032s
Aug  4 23:07:50.202: INFO: Pod "alpine-nnp-true-243c14df-b43f-4a34-92cd-7b0a366e6b14": Phase="Pending", Reason="", readiness=false. Elapsed: 8.492555386s
Aug  4 23:07:52.299: INFO: Pod "alpine-nnp-true-243c14df-b43f-4a34-92cd-7b0a366e6b14": Phase="Pending", Reason="", readiness=false. Elapsed: 10.589784342s
Aug  4 23:07:54.397: INFO: Pod "alpine-nnp-true-243c14df-b43f-4a34-92cd-7b0a366e6b14": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.687636231s
Aug  4 23:07:54.397: INFO: Pod "alpine-nnp-true-243c14df-b43f-4a34-92cd-7b0a366e6b14" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Aug  4 23:07:54.508: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-7334" for this suite.


... skipping 35 lines ...
• [SLOW TEST:17.493 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should update annotations on modification [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":6,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug  4 23:07:56.545: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 14 lines ...
      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
SSSSSS
------------------------------
{"msg":"PASSED [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when true [LinuxOnly] [NodeConformance]","total":-1,"completed":1,"skipped":8,"failed":0}
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Aug  4 23:07:54.807: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 14 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Aug  4 23:07:56.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3465" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl apply should apply a new configuration to an existing RC","total":-1,"completed":2,"skipped":8,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug  4 23:07:57.013: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 95 lines ...
• [SLOW TEST:6.624 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should deny crd creation [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":-1,"completed":2,"skipped":4,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug  4 23:07:57.579: INFO: Driver csi-hostpath doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 43 lines ...
Aug  4 23:07:54.167: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0777 on tmpfs
Aug  4 23:07:54.749: INFO: Waiting up to 5m0s for pod "pod-0ee4f522-1a08-46ae-8539-3e6728fb6143" in namespace "emptydir-8921" to be "Succeeded or Failed"
Aug  4 23:07:54.846: INFO: Pod "pod-0ee4f522-1a08-46ae-8539-3e6728fb6143": Phase="Pending", Reason="", readiness=false. Elapsed: 96.096489ms
Aug  4 23:07:56.942: INFO: Pod "pod-0ee4f522-1a08-46ae-8539-3e6728fb6143": Phase="Pending", Reason="", readiness=false. Elapsed: 2.192465192s
Aug  4 23:07:59.040: INFO: Pod "pod-0ee4f522-1a08-46ae-8539-3e6728fb6143": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.290348901s
STEP: Saw pod success
Aug  4 23:07:59.040: INFO: Pod "pod-0ee4f522-1a08-46ae-8539-3e6728fb6143" satisfied condition "Succeeded or Failed"
Aug  4 23:07:59.141: INFO: Trying to get logs from node ip-172-20-61-222.eu-west-2.compute.internal pod pod-0ee4f522-1a08-46ae-8539-3e6728fb6143 container test-container: <nil>
STEP: delete the pod
Aug  4 23:07:59.353: INFO: Waiting for pod pod-0ee4f522-1a08-46ae-8539-3e6728fb6143 to disappear
Aug  4 23:07:59.449: INFO: Pod pod-0ee4f522-1a08-46ae-8539-3e6728fb6143 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:5.475 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":8,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug  4 23:07:59.651: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
[AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 69 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-link-bindmounted]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
S
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a container with runAsUser should run the container with uid 0 [LinuxOnly] [NodeConformance]","total":-1,"completed":1,"skipped":22,"failed":0}
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Aug  4 23:07:50.137: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0666 on tmpfs
Aug  4 23:07:50.734: INFO: Waiting up to 5m0s for pod "pod-6000dbe3-c8ea-4977-adeb-af191ba0f93f" in namespace "emptydir-1243" to be "Succeeded or Failed"
Aug  4 23:07:50.830: INFO: Pod "pod-6000dbe3-c8ea-4977-adeb-af191ba0f93f": Phase="Pending", Reason="", readiness=false. Elapsed: 95.734127ms
Aug  4 23:07:52.928: INFO: Pod "pod-6000dbe3-c8ea-4977-adeb-af191ba0f93f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.193259915s
Aug  4 23:07:55.026: INFO: Pod "pod-6000dbe3-c8ea-4977-adeb-af191ba0f93f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.291922566s
Aug  4 23:07:57.124: INFO: Pod "pod-6000dbe3-c8ea-4977-adeb-af191ba0f93f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.389440895s
Aug  4 23:07:59.221: INFO: Pod "pod-6000dbe3-c8ea-4977-adeb-af191ba0f93f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.486874176s
STEP: Saw pod success
Aug  4 23:07:59.221: INFO: Pod "pod-6000dbe3-c8ea-4977-adeb-af191ba0f93f" satisfied condition "Succeeded or Failed"
Aug  4 23:07:59.324: INFO: Trying to get logs from node ip-172-20-63-4.eu-west-2.compute.internal pod pod-6000dbe3-c8ea-4977-adeb-af191ba0f93f container test-container: <nil>
STEP: delete the pod
Aug  4 23:07:59.535: INFO: Waiting for pod pod-6000dbe3-c8ea-4977-adeb-af191ba0f93f to disappear
Aug  4 23:07:59.633: INFO: Pod pod-6000dbe3-c8ea-4977-adeb-af191ba0f93f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:9.700 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":22,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug  4 23:07:59.861: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 51 lines ...
Aug  4 23:08:00.885: INFO: AfterEach: Cleaning up test resources.
Aug  4 23:08:00.885: INFO: Deleting PersistentVolumeClaim "pvc-hsvfk"
Aug  4 23:08:00.982: INFO: Deleting PersistentVolume "hostpath-ztq4z"

•
------------------------------
{"msg":"PASSED [sig-storage] PV Protection Verify that PV bound to a PVC is not removed immediately","total":-1,"completed":3,"skipped":9,"failed":0}

S
------------------------------
[BeforeEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Aug  4 23:07:39.124: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
W0804 23:07:41.363988    4923 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Aug  4 23:07:41.364: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to exceed backoffLimit
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:349
STEP: Creating a job
STEP: Ensuring job exceed backofflimit
STEP: Checking that 2 pod created and status is failed
[AfterEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Aug  4 23:08:01.856: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-8052" for this suite.


• [SLOW TEST:23.032 seconds]
[sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should fail to exceed backoffLimit
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:349
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Aug  4 23:07:42.675: INFO: >>> kubeConfig: /root/.kube/config
... skipping 36 lines ...
Aug  4 23:07:55.517: INFO: Running '/tmp/kubectl57887569/kubectl --server=https://api.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6160 explain e2e-test-crd-publish-openapi-5975-crds.spec'
Aug  4 23:07:56.076: INFO: stderr: ""
Aug  4 23:07:56.076: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-5975-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec <Object>\n\nDESCRIPTION:\n     Specification of Foo\n\nFIELDS:\n   bars\t<[]Object>\n     List of Bars and their specs.\n\n"
Aug  4 23:07:56.076: INFO: Running '/tmp/kubectl57887569/kubectl --server=https://api.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6160 explain e2e-test-crd-publish-openapi-5975-crds.spec.bars'
Aug  4 23:07:56.621: INFO: stderr: ""
Aug  4 23:07:56.621: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-5975-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n     List of Bars and their specs.\n\nFIELDS:\n   age\t<string>\n     Age of Bar.\n\n   bazs\t<[]string>\n     List of Bazs.\n\n   name\t<string> -required-\n     Name of Bar.\n\n"
STEP: kubectl explain returns an error when called on a property that doesn't exist
Aug  4 23:07:56.621: INFO: Running '/tmp/kubectl57887569/kubectl --server=https://api.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6160 explain e2e-test-crd-publish-openapi-5975-crds.spec.bars2'
Aug  4 23:07:57.158: INFO: rc: 1
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Aug  4 23:08:02.133: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-6160" for this suite.
... skipping 28 lines ...
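The kubectl explain calls above walk the OpenAPI schema the apiserver publishes for the CRD; a field path that isn't in the schema (spec.bars2) makes the command exit non-zero, which the log records as "rc: 1". A minimal sketch of driving the same command from Go, assuming kubectl is on PATH (the kubeconfig path is illustrative):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// kubectl explain resolves field documentation from the OpenAPI
	// schema published for the CRD; an unknown field path would make
	// this exit with a non-zero code instead.
	out, err := exec.Command("kubectl",
		"--kubeconfig", "/root/.kube/config",
		"explain", "e2e-test-crd-publish-openapi-5975-crds.spec.bars",
	).CombinedOutput()
	fmt.Printf("err=%v\n%s", err, out)
}
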
Aug  4 23:07:47.547: INFO: PersistentVolumeClaim pvc-dr8tz found but phase is Pending instead of Bound.
Aug  4 23:07:49.644: INFO: PersistentVolumeClaim pvc-dr8tz found and phase=Bound (2.193865119s)
Aug  4 23:07:49.644: INFO: Waiting up to 3m0s for PersistentVolume local-mv8rp to have phase Bound
Aug  4 23:07:49.744: INFO: PersistentVolume local-mv8rp found and phase=Bound (100.018252ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-74tv
STEP: Creating a pod to test subpath
Aug  4 23:07:50.036: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-74tv" in namespace "provisioning-6370" to be "Succeeded or Failed"
Aug  4 23:07:50.132: INFO: Pod "pod-subpath-test-preprovisionedpv-74tv": Phase="Pending", Reason="", readiness=false. Elapsed: 96.274669ms
Aug  4 23:07:52.230: INFO: Pod "pod-subpath-test-preprovisionedpv-74tv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.193946858s
Aug  4 23:07:54.329: INFO: Pod "pod-subpath-test-preprovisionedpv-74tv": Phase="Pending", Reason="", readiness=false. Elapsed: 4.292985817s
Aug  4 23:07:56.427: INFO: Pod "pod-subpath-test-preprovisionedpv-74tv": Phase="Pending", Reason="", readiness=false. Elapsed: 6.390765645s
Aug  4 23:07:58.528: INFO: Pod "pod-subpath-test-preprovisionedpv-74tv": Phase="Pending", Reason="", readiness=false. Elapsed: 8.492151028s
Aug  4 23:08:00.626: INFO: Pod "pod-subpath-test-preprovisionedpv-74tv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.58968234s
STEP: Saw pod success
Aug  4 23:08:00.626: INFO: Pod "pod-subpath-test-preprovisionedpv-74tv" satisfied condition "Succeeded or Failed"
Aug  4 23:08:00.726: INFO: Trying to get logs from node ip-172-20-63-4.eu-west-2.compute.internal pod pod-subpath-test-preprovisionedpv-74tv container test-container-subpath-preprovisionedpv-74tv: <nil>
STEP: delete the pod
Aug  4 23:08:00.926: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-74tv to disappear
Aug  4 23:08:01.022: INFO: Pod pod-subpath-test-preprovisionedpv-74tv no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-74tv
Aug  4 23:08:01.022: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-74tv" in namespace "provisioning-6370"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:379
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":1,"skipped":4,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug  4 23:08:02.551: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:351

      Only supported for node OS distro [gci ubuntu custom] (not debian)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:263
------------------------------
{"msg":"PASSED [sig-apps] Job should fail to exceed backoffLimit","total":-1,"completed":1,"skipped":25,"failed":0}
[BeforeEach] [sig-storage] Multi-AZ Cluster Volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Aug  4 23:08:02.164: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename multi-az
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 39 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Aug  4 23:08:04.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "disruption-1066" for this suite.

•
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController should create a PodDisruptionBudget [Conformance]","total":-1,"completed":2,"skipped":30,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 22 lines ...
Aug  4 23:07:49.100: INFO: PersistentVolumeClaim pvc-tsndh found but phase is Pending instead of Bound.
Aug  4 23:07:51.197: INFO: PersistentVolumeClaim pvc-tsndh found and phase=Bound (2.193105693s)
Aug  4 23:07:51.197: INFO: Waiting up to 3m0s for PersistentVolume local-9rc5g to have phase Bound
Aug  4 23:07:51.294: INFO: PersistentVolume local-9rc5g found and phase=Bound (97.189079ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-z9rd
STEP: Creating a pod to test subpath
Aug  4 23:07:51.590: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-z9rd" in namespace "provisioning-8932" to be "Succeeded or Failed"
Aug  4 23:07:51.686: INFO: Pod "pod-subpath-test-preprovisionedpv-z9rd": Phase="Pending", Reason="", readiness=false. Elapsed: 96.252773ms
Aug  4 23:07:53.783: INFO: Pod "pod-subpath-test-preprovisionedpv-z9rd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.193056378s
Aug  4 23:07:55.880: INFO: Pod "pod-subpath-test-preprovisionedpv-z9rd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.290388303s
Aug  4 23:07:57.987: INFO: Pod "pod-subpath-test-preprovisionedpv-z9rd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.396743338s
Aug  4 23:08:00.084: INFO: Pod "pod-subpath-test-preprovisionedpv-z9rd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.494197375s
Aug  4 23:08:02.184: INFO: Pod "pod-subpath-test-preprovisionedpv-z9rd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.593764499s
STEP: Saw pod success
Aug  4 23:08:02.184: INFO: Pod "pod-subpath-test-preprovisionedpv-z9rd" satisfied condition "Succeeded or Failed"
Aug  4 23:08:02.280: INFO: Trying to get logs from node ip-172-20-63-4.eu-west-2.compute.internal pod pod-subpath-test-preprovisionedpv-z9rd container test-container-subpath-preprovisionedpv-z9rd: <nil>
STEP: delete the pod
Aug  4 23:08:02.491: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-z9rd to disappear
Aug  4 23:08:02.588: INFO: Pod pod-subpath-test-preprovisionedpv-z9rd no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-z9rd
Aug  4 23:08:02.588: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-z9rd" in namespace "provisioning-8932"
... skipping 26 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly directory specified in the volumeMount
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:364
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":-1,"completed":2,"skipped":20,"failed":0}
[BeforeEach] [sig-node] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Aug  4 23:08:02.447: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 14 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when scheduling a busybox command that always fails in a pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:79
    should have an terminated reason [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":20,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug  4 23:08:07.651: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
... skipping 49 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: tmpfs]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 29 lines ...
• [SLOW TEST:8.208 seconds]
[sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be updated [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Pods should be updated [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":17,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug  4 23:08:07.918: INFO: Only supported for providers [vsphere] (not aws)
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 86 lines ...
Aug  4 23:07:49.099: INFO: PersistentVolumeClaim pvc-djdmh found but phase is Pending instead of Bound.
Aug  4 23:07:51.197: INFO: PersistentVolumeClaim pvc-djdmh found and phase=Bound (2.195637855s)
Aug  4 23:07:51.197: INFO: Waiting up to 3m0s for PersistentVolume local-8v5hc to have phase Bound
Aug  4 23:07:51.295: INFO: PersistentVolume local-8v5hc found and phase=Bound (97.502312ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-w98r
STEP: Creating a pod to test exec-volume-test
Aug  4 23:07:51.587: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-w98r" in namespace "volume-5412" to be "Succeeded or Failed"
Aug  4 23:07:51.684: INFO: Pod "exec-volume-test-preprovisionedpv-w98r": Phase="Pending", Reason="", readiness=false. Elapsed: 97.568247ms
Aug  4 23:07:53.782: INFO: Pod "exec-volume-test-preprovisionedpv-w98r": Phase="Pending", Reason="", readiness=false. Elapsed: 2.195432901s
Aug  4 23:07:55.881: INFO: Pod "exec-volume-test-preprovisionedpv-w98r": Phase="Pending", Reason="", readiness=false. Elapsed: 4.293922268s
Aug  4 23:07:57.987: INFO: Pod "exec-volume-test-preprovisionedpv-w98r": Phase="Pending", Reason="", readiness=false. Elapsed: 6.400321378s
Aug  4 23:08:00.085: INFO: Pod "exec-volume-test-preprovisionedpv-w98r": Phase="Pending", Reason="", readiness=false. Elapsed: 8.498144382s
Aug  4 23:08:02.190: INFO: Pod "exec-volume-test-preprovisionedpv-w98r": Phase="Pending", Reason="", readiness=false. Elapsed: 10.603340001s
Aug  4 23:08:04.289: INFO: Pod "exec-volume-test-preprovisionedpv-w98r": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.702018316s
STEP: Saw pod success
Aug  4 23:08:04.289: INFO: Pod "exec-volume-test-preprovisionedpv-w98r" satisfied condition "Succeeded or Failed"
Aug  4 23:08:04.387: INFO: Trying to get logs from node ip-172-20-63-4.eu-west-2.compute.internal pod exec-volume-test-preprovisionedpv-w98r container exec-container-preprovisionedpv-w98r: <nil>
STEP: delete the pod
Aug  4 23:08:04.589: INFO: Waiting for pod exec-volume-test-preprovisionedpv-w98r to disappear
Aug  4 23:08:04.686: INFO: Pod exec-volume-test-preprovisionedpv-w98r no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-w98r
Aug  4 23:08:04.686: INFO: Deleting pod "exec-volume-test-preprovisionedpv-w98r" in namespace "volume-5412"
... skipping 24 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":1,"skipped":3,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 2 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Aug  4 23:08:01.682: INFO: Waiting up to 5m0s for pod "downwardapi-volume-29c21dc2-9f18-4769-b215-cbfbe7819795" in namespace "projected-6728" to be "Succeeded or Failed"
Aug  4 23:08:01.778: INFO: Pod "downwardapi-volume-29c21dc2-9f18-4769-b215-cbfbe7819795": Phase="Pending", Reason="", readiness=false. Elapsed: 95.99763ms
Aug  4 23:08:03.874: INFO: Pod "downwardapi-volume-29c21dc2-9f18-4769-b215-cbfbe7819795": Phase="Pending", Reason="", readiness=false. Elapsed: 2.192756787s
Aug  4 23:08:05.973: INFO: Pod "downwardapi-volume-29c21dc2-9f18-4769-b215-cbfbe7819795": Phase="Pending", Reason="", readiness=false. Elapsed: 4.290993833s
Aug  4 23:08:08.074: INFO: Pod "downwardapi-volume-29c21dc2-9f18-4769-b215-cbfbe7819795": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.392679865s
STEP: Saw pod success
Aug  4 23:08:08.074: INFO: Pod "downwardapi-volume-29c21dc2-9f18-4769-b215-cbfbe7819795" satisfied condition "Succeeded or Failed"
Aug  4 23:08:08.174: INFO: Trying to get logs from node ip-172-20-63-4.eu-west-2.compute.internal pod downwardapi-volume-29c21dc2-9f18-4769-b215-cbfbe7819795 container client-container: <nil>
STEP: delete the pod
Aug  4 23:08:08.422: INFO: Waiting for pod downwardapi-volume-29c21dc2-9f18-4769-b215-cbfbe7819795 to disappear
Aug  4 23:08:08.528: INFO: Pod downwardapi-volume-29c21dc2-9f18-4769-b215-cbfbe7819795 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:7.647 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":10,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug  4 23:08:08.754: INFO: Only supported for providers [vsphere] (not aws)
... skipping 122 lines ...
• [SLOW TEST:30.595 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run the lifecycle of a Deployment [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","total":-1,"completed":1,"skipped":3,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug  4 23:08:09.571: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 119 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23
  Granular Checks: Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30
    should function for intra-pod communication: udp [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":1,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug  4 23:08:11.062: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 61 lines ...
• [SLOW TEST:32.782 seconds]
[sig-network] EndpointSlice
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","total":-1,"completed":1,"skipped":8,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug  4 23:08:11.517: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 22 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run with an explicit non-root user ID [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:129
Aug  4 23:08:03.152: INFO: Waiting up to 5m0s for pod "explicit-nonroot-uid" in namespace "security-context-test-4827" to be "Succeeded or Failed"
Aug  4 23:08:03.249: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 96.339442ms
Aug  4 23:08:05.347: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 2.19516257s
Aug  4 23:08:07.447: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 4.294513813s
Aug  4 23:08:09.546: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 6.393288876s
Aug  4 23:08:11.644: INFO: Pod "explicit-nonroot-uid": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.491249821s
Aug  4 23:08:11.644: INFO: Pod "explicit-nonroot-uid" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Aug  4 23:08:11.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-4827" for this suite.


... skipping 2 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  When creating a container with runAsNonRoot
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:104
    should run with an explicit non-root user ID [LinuxOnly]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:129
------------------------------
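The explicit-nonroot-uid pod above succeeds because its security context pins a concrete non-root UID, which also satisfies the runAsNonRoot check. A minimal sketch of that spec (the UID 1234 and names are illustrative):

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func boolPtr(b bool) *bool    { return &b }
func int64Ptr(i int64) *int64 { return &i }

// nonRootPod runs as UID 1234; with RunAsNonRoot set, the kubelet
// refuses to start the container if the effective UID were 0.
func nonRootPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "explicit-nonroot-uid"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "main",
				Image:   "busybox",
				Command: []string{"id"},
				SecurityContext: &corev1.SecurityContext{
					RunAsUser:    int64Ptr(1234),
					RunAsNonRoot: boolPtr(true),
				},
			}},
		},
	}
}

func main() { _ = nonRootPod() }

------------------------------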
{"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should run with an explicit non-root user ID [LinuxOnly]","total":-1,"completed":2,"skipped":7,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug  4 23:08:11.962: INFO: Only supported for providers [azure] (not aws)
... skipping 34 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Aug  4 23:08:13.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "endpointslicemirroring-2098" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","total":-1,"completed":3,"skipped":21,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] HostPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 2 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:37
[It] should support subPath [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:93
STEP: Creating a pod to test hostPath subPath
Aug  4 23:08:08.595: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-214" to be "Succeeded or Failed"
Aug  4 23:08:08.692: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 97.146847ms
Aug  4 23:08:10.792: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.19751904s
Aug  4 23:08:12.891: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.296488035s
Aug  4 23:08:14.989: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.394037303s
STEP: Saw pod success
Aug  4 23:08:14.989: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed"
Aug  4 23:08:15.085: INFO: Trying to get logs from node ip-172-20-63-4.eu-west-2.compute.internal pod pod-host-path-test container test-container-2: <nil>
STEP: delete the pod
Aug  4 23:08:15.286: INFO: Waiting for pod pod-host-path-test to disappear
Aug  4 23:08:15.382: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:7.615 seconds]
[sig-storage] HostPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should support subPath [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:93
------------------------------
{"msg":"PASSED [sig-storage] HostPath should support subPath [NodeConformance]","total":-1,"completed":4,"skipped":24,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug  4 23:08:15.587: INFO: Only supported for providers [vsphere] (not aws)
[AfterEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 51 lines ...
• [SLOW TEST:23.184 seconds]
[sig-network] KubeProxy
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should set TCP CLOSE_WAIT timeout [Privileged]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/kube_proxy.go:53
------------------------------
{"msg":"PASSED [sig-network] KubeProxy should set TCP CLOSE_WAIT timeout [Privileged]","total":-1,"completed":2,"skipped":11,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Aug  4 23:08:18.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5435" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be immutable if `immutable` field is set [Conformance]","total":-1,"completed":3,"skipped":13,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug  4 23:08:18.490: INFO: Driver "local" does not provide raw block - skipping
... skipping 103 lines ...
• [SLOW TEST:12.132 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  test Deployment ReplicaSet orphaning and adoption regarding controllerRef
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:133
------------------------------
{"msg":"PASSED [sig-apps] Deployment test Deployment ReplicaSet orphaning and adoption regarding controllerRef","total":-1,"completed":4,"skipped":43,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug  4 23:08:19.877: INFO: Only supported for providers [openstack] (not aws)
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 85 lines ...
Aug  4 23:07:57.223: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63763715260, loc:(*time.Location)(0x9ddf5a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63763715260, loc:(*time.Location)(0x9ddf5a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63763715260, loc:(*time.Location)(0x9ddf5a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63763715260, loc:(*time.Location)(0x9ddf5a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-64f6b9dc99\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug  4 23:07:59.226: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63763715260, loc:(*time.Location)(0x9ddf5a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63763715260, loc:(*time.Location)(0x9ddf5a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63763715260, loc:(*time.Location)(0x9ddf5a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63763715260, loc:(*time.Location)(0x9ddf5a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-64f6b9dc99\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug  4 23:08:01.224: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63763715260, loc:(*time.Location)(0x9ddf5a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63763715260, loc:(*time.Location)(0x9ddf5a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63763715260, loc:(*time.Location)(0x9ddf5a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63763715260, loc:(*time.Location)(0x9ddf5a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-64f6b9dc99\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug  4 23:08:03.224: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63763715260, loc:(*time.Location)(0x9ddf5a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63763715260, loc:(*time.Location)(0x9ddf5a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63763715260, loc:(*time.Location)(0x9ddf5a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63763715260, loc:(*time.Location)(0x9ddf5a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-64f6b9dc99\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug  4 23:08:05.224: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63763715260, loc:(*time.Location)(0x9ddf5a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63763715260, loc:(*time.Location)(0x9ddf5a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63763715260, loc:(*time.Location)(0x9ddf5a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63763715260, loc:(*time.Location)(0x9ddf5a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-64f6b9dc99\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug  4 23:08:12.433: INFO: Waited 5.109236548s for the sample-apiserver to be ready to handle requests.
Aug  4 23:08:12.739: FAIL: attempting to get a newly created flunders resource
Unexpected error:
    <*errors.StatusError | 0xc002a26fa0>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
... skipping 298 lines ...
[sig-api-machinery] Aggregator
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Aug  4 23:08:12.739: attempting to get a newly created flunders resource
  Unexpected error:
      <*errors.StatusError | 0xc002a26fa0>: {
          ErrStatus: {
              TypeMeta: {Kind: "", APIVersion: ""},
              ListMeta: {
                  SelfLink: "",
                  ResourceVersion: "",
... skipping 22 lines ...
      }
      the server is currently unable to handle the request
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:435
------------------------------
{"msg":"FAILED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":-1,"completed":0,"skipped":3,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug  4 23:08:20.017: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 39 lines ...
Aug  4 23:08:03.185: INFO: PersistentVolumeClaim pvc-bkvzn found but phase is Pending instead of Bound.
Aug  4 23:08:05.284: INFO: PersistentVolumeClaim pvc-bkvzn found and phase=Bound (10.614291406s)
Aug  4 23:08:05.284: INFO: Waiting up to 3m0s for PersistentVolume local-rlklh to have phase Bound
Aug  4 23:08:05.382: INFO: PersistentVolume local-rlklh found and phase=Bound (98.282768ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-pxhn
STEP: Creating a pod to test subpath
Aug  4 23:08:05.679: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-pxhn" in namespace "provisioning-1141" to be "Succeeded or Failed"
Aug  4 23:08:05.777: INFO: Pod "pod-subpath-test-preprovisionedpv-pxhn": Phase="Pending", Reason="", readiness=false. Elapsed: 98.311626ms
Aug  4 23:08:07.893: INFO: Pod "pod-subpath-test-preprovisionedpv-pxhn": Phase="Pending", Reason="", readiness=false. Elapsed: 2.214528781s
Aug  4 23:08:09.994: INFO: Pod "pod-subpath-test-preprovisionedpv-pxhn": Phase="Pending", Reason="", readiness=false. Elapsed: 4.314816425s
Aug  4 23:08:12.102: INFO: Pod "pod-subpath-test-preprovisionedpv-pxhn": Phase="Pending", Reason="", readiness=false. Elapsed: 6.42283056s
Aug  4 23:08:14.207: INFO: Pod "pod-subpath-test-preprovisionedpv-pxhn": Phase="Pending", Reason="", readiness=false. Elapsed: 8.528700971s
Aug  4 23:08:16.306: INFO: Pod "pod-subpath-test-preprovisionedpv-pxhn": Phase="Pending", Reason="", readiness=false. Elapsed: 10.627565414s
Aug  4 23:08:18.411: INFO: Pod "pod-subpath-test-preprovisionedpv-pxhn": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.732291383s
STEP: Saw pod success
Aug  4 23:08:18.411: INFO: Pod "pod-subpath-test-preprovisionedpv-pxhn" satisfied condition "Succeeded or Failed"
Aug  4 23:08:18.509: INFO: Trying to get logs from node ip-172-20-46-233.eu-west-2.compute.internal pod pod-subpath-test-preprovisionedpv-pxhn container test-container-subpath-preprovisionedpv-pxhn: <nil>
STEP: delete the pod
Aug  4 23:08:18.735: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-pxhn to disappear
Aug  4 23:08:18.835: INFO: Pod pod-subpath-test-preprovisionedpv-pxhn no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-pxhn
Aug  4 23:08:18.835: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-pxhn" in namespace "provisioning-1141"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing single file [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":2,"skipped":7,"failed":0}

S
------------------------------
[BeforeEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Aug  4 23:08:11.126: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward api env vars
Aug  4 23:08:11.724: INFO: Waiting up to 5m0s for pod "downward-api-11ebf0f5-a557-45ad-a4a7-6c476c7b89de" in namespace "downward-api-8208" to be "Succeeded or Failed"
Aug  4 23:08:11.821: INFO: Pod "downward-api-11ebf0f5-a557-45ad-a4a7-6c476c7b89de": Phase="Pending", Reason="", readiness=false. Elapsed: 97.061428ms
Aug  4 23:08:13.926: INFO: Pod "downward-api-11ebf0f5-a557-45ad-a4a7-6c476c7b89de": Phase="Pending", Reason="", readiness=false. Elapsed: 2.201906484s
Aug  4 23:08:16.023: INFO: Pod "downward-api-11ebf0f5-a557-45ad-a4a7-6c476c7b89de": Phase="Pending", Reason="", readiness=false. Elapsed: 4.29910095s
Aug  4 23:08:18.120: INFO: Pod "downward-api-11ebf0f5-a557-45ad-a4a7-6c476c7b89de": Phase="Pending", Reason="", readiness=false. Elapsed: 6.395738743s
Aug  4 23:08:20.218: INFO: Pod "downward-api-11ebf0f5-a557-45ad-a4a7-6c476c7b89de": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.493884107s
STEP: Saw pod success
Aug  4 23:08:20.218: INFO: Pod "downward-api-11ebf0f5-a557-45ad-a4a7-6c476c7b89de" satisfied condition "Succeeded or Failed"
Aug  4 23:08:20.336: INFO: Trying to get logs from node ip-172-20-46-233.eu-west-2.compute.internal pod downward-api-11ebf0f5-a557-45ad-a4a7-6c476c7b89de container dapi-container: <nil>
STEP: delete the pod
Aug  4 23:08:20.548: INFO: Waiting for pod downward-api-11ebf0f5-a557-45ad-a4a7-6c476c7b89de to disappear
Aug  4 23:08:20.645: INFO: Pod downward-api-11ebf0f5-a557-45ad-a4a7-6c476c7b89de no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:9.714 seconds]
[sig-node] Downward API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
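The downward API test above injects the pod's own name, namespace, and IP as environment variables resolved by the kubelet at container start. A minimal sketch of the relevant spec (image and variable names illustrative):

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// downwardAPIPod returns a pod whose env vars are filled in from the
// pod's own metadata and status when the container starts.
func downwardAPIPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "env"},
				Env: []corev1.EnvVar{
					{Name: "POD_NAME", ValueFrom: &corev1.EnvVarSource{
						FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"}}},
					{Name: "POD_NAMESPACE", ValueFrom: &corev1.EnvVarSource{
						FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.namespace"}}},
					{Name: "POD_IP", ValueFrom: &corev1.EnvVarSource{
						FieldRef: &corev1.ObjectFieldSelector{FieldPath: "status.podIP"}}},
				},
			}},
		},
	}
}

func main() { _ = downwardAPIPod() }

------------------------------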
{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":14,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug  4 23:08:20.883: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 48 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Aug  4 23:08:23.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "disruption-2548" for this suite.

•
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController Listing PodDisruptionBudgets for all namespaces should list and delete a collection of PodDisruptionBudgets [Conformance]","total":-1,"completed":3,"skipped":28,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug  4 23:08:23.284: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 75 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Simple pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:376
    should support exec using resource/name
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:428
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should support exec using resource/name","total":-1,"completed":3,"skipped":27,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug  4 23:08:25.141: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 22 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name secret-test-1f884f6a-9692-4220-ba04-175105c66043
STEP: Creating a pod to test consume secrets
Aug  4 23:08:23.977: INFO: Waiting up to 5m0s for pod "pod-secrets-3d847dca-eb45-4d3c-919a-fedbda305af7" in namespace "secrets-1036" to be "Succeeded or Failed"
Aug  4 23:08:24.074: INFO: Pod "pod-secrets-3d847dca-eb45-4d3c-919a-fedbda305af7": Phase="Pending", Reason="", readiness=false. Elapsed: 96.891107ms
Aug  4 23:08:26.171: INFO: Pod "pod-secrets-3d847dca-eb45-4d3c-919a-fedbda305af7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.194787211s
STEP: Saw pod success
Aug  4 23:08:26.171: INFO: Pod "pod-secrets-3d847dca-eb45-4d3c-919a-fedbda305af7" satisfied condition "Succeeded or Failed"
Aug  4 23:08:26.268: INFO: Trying to get logs from node ip-172-20-61-222.eu-west-2.compute.internal pod pod-secrets-3d847dca-eb45-4d3c-919a-fedbda305af7 container secret-volume-test: <nil>
STEP: delete the pod
Aug  4 23:08:26.468: INFO: Waiting for pod pod-secrets-3d847dca-eb45-4d3c-919a-fedbda305af7 to disappear
Aug  4 23:08:26.565: INFO: Pod pod-secrets-3d847dca-eb45-4d3c-919a-fedbda305af7 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Aug  4 23:08:26.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1036" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":34,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug  4 23:08:26.784: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 134 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not mount / map unused volumes in a pod [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:351
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":1,"skipped":14,"failed":0}

S
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Aug  4 23:08:20.210: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support seccomp default which is unconfined [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:183
STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod
Aug  4 23:08:20.806: INFO: Waiting up to 5m0s for pod "security-context-4a14937c-cbd4-443e-9b06-b06d7c51ea6e" in namespace "security-context-3494" to be "Succeeded or Failed"
Aug  4 23:08:20.904: INFO: Pod "security-context-4a14937c-cbd4-443e-9b06-b06d7c51ea6e": Phase="Pending", Reason="", readiness=false. Elapsed: 98.319914ms
Aug  4 23:08:23.004: INFO: Pod "security-context-4a14937c-cbd4-443e-9b06-b06d7c51ea6e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.197614626s
Aug  4 23:08:25.103: INFO: Pod "security-context-4a14937c-cbd4-443e-9b06-b06d7c51ea6e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.296546377s
Aug  4 23:08:27.203: INFO: Pod "security-context-4a14937c-cbd4-443e-9b06-b06d7c51ea6e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.396613269s
Aug  4 23:08:29.302: INFO: Pod "security-context-4a14937c-cbd4-443e-9b06-b06d7c51ea6e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.495668099s
STEP: Saw pod success
Aug  4 23:08:29.302: INFO: Pod "security-context-4a14937c-cbd4-443e-9b06-b06d7c51ea6e" satisfied condition "Succeeded or Failed"
Aug  4 23:08:29.402: INFO: Trying to get logs from node ip-172-20-63-4.eu-west-2.compute.internal pod security-context-4a14937c-cbd4-443e-9b06-b06d7c51ea6e container test-container: <nil>
STEP: delete the pod
Aug  4 23:08:29.603: INFO: Waiting for pod security-context-4a14937c-cbd4-443e-9b06-b06d7c51ea6e to disappear
Aug  4 23:08:29.701: INFO: Pod security-context-4a14937c-cbd4-443e-9b06-b06d7c51ea6e no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:9.697 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should support seccomp default which is unconfined [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:183
------------------------------
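The seccomp test above uses the legacy seccomp.security.alpha.kubernetes.io/pod annotation shown in its STEP line; on v1.21 the typed securityContext field below is the equivalent way to request an unconfined profile. A minimal sketch (names illustrative):

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// unconfinedPod asks for no seccomp filtering at the pod level; this is
// the typed equivalent of the alpha annotation the test exercises.
func unconfinedPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "seccomp-unconfined-demo"},
		Spec: corev1.PodSpec{
			SecurityContext: &corev1.PodSecurityContext{
				SeccompProfile: &corev1.SeccompProfile{
					Type: corev1.SeccompProfileTypeUnconfined,
				},
			},
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name: "main", Image: "busybox", Command: []string{"true"},
			}},
		},
	}
}

func main() { _ = unconfinedPod() }

------------------------------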
{"msg":"PASSED [sig-node] Security Context should support seccomp default which is unconfined [LinuxOnly]","total":-1,"completed":3,"skipped":8,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug  4 23:08:29.930: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 30 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Aug  4 23:08:29.940: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-7566" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works  [Conformance]","total":-1,"completed":2,"skipped":15,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug  4 23:08:30.148: INFO: Only supported for providers [vsphere] (not aws)
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 152 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Aug  4 23:08:32.826: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-4530" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node using proxy subresource ","total":-1,"completed":4,"skipped":18,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug  4 23:08:33.060: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
... skipping 39 lines ...
Aug  4 23:08:18.468: INFO: PersistentVolumeClaim pvc-dfdd6 found but phase is Pending instead of Bound.
Aug  4 23:08:20.566: INFO: PersistentVolumeClaim pvc-dfdd6 found and phase=Bound (4.293909627s)
Aug  4 23:08:20.566: INFO: Waiting up to 3m0s for PersistentVolume local-9zzqq to have phase Bound
Aug  4 23:08:20.665: INFO: PersistentVolume local-9zzqq found and phase=Bound (98.693079ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-ll7k
STEP: Creating a pod to test exec-volume-test
Aug  4 23:08:20.957: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-ll7k" in namespace "volume-8156" to be "Succeeded or Failed"
Aug  4 23:08:21.055: INFO: Pod "exec-volume-test-preprovisionedpv-ll7k": Phase="Pending", Reason="", readiness=false. Elapsed: 97.540318ms
Aug  4 23:08:23.153: INFO: Pod "exec-volume-test-preprovisionedpv-ll7k": Phase="Pending", Reason="", readiness=false. Elapsed: 2.195191923s
Aug  4 23:08:25.254: INFO: Pod "exec-volume-test-preprovisionedpv-ll7k": Phase="Pending", Reason="", readiness=false. Elapsed: 4.29680512s
Aug  4 23:08:27.352: INFO: Pod "exec-volume-test-preprovisionedpv-ll7k": Phase="Pending", Reason="", readiness=false. Elapsed: 6.394283989s
Aug  4 23:08:29.449: INFO: Pod "exec-volume-test-preprovisionedpv-ll7k": Phase="Pending", Reason="", readiness=false. Elapsed: 8.491469647s
Aug  4 23:08:31.548: INFO: Pod "exec-volume-test-preprovisionedpv-ll7k": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.590849876s
STEP: Saw pod success
Aug  4 23:08:31.548: INFO: Pod "exec-volume-test-preprovisionedpv-ll7k" satisfied condition "Succeeded or Failed"
Aug  4 23:08:31.646: INFO: Trying to get logs from node ip-172-20-63-4.eu-west-2.compute.internal pod exec-volume-test-preprovisionedpv-ll7k container exec-container-preprovisionedpv-ll7k: <nil>
STEP: delete the pod
Aug  4 23:08:31.846: INFO: Waiting for pod exec-volume-test-preprovisionedpv-ll7k to disappear
Aug  4 23:08:31.944: INFO: Pod exec-volume-test-preprovisionedpv-ll7k no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-ll7k
Aug  4 23:08:31.944: INFO: Deleting pod "exec-volume-test-preprovisionedpv-ll7k" in namespace "volume-8156"
... skipping 26 lines ...
Aug  4 23:08:25.150: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward api env vars
Aug  4 23:08:25.737: INFO: Waiting up to 5m0s for pod "downward-api-835fe84a-a0c3-4e08-8279-69853328bd4b" in namespace "downward-api-5831" to be "Succeeded or Failed"
Aug  4 23:08:25.833: INFO: Pod "downward-api-835fe84a-a0c3-4e08-8279-69853328bd4b": Phase="Pending", Reason="", readiness=false. Elapsed: 96.246187ms
Aug  4 23:08:27.931: INFO: Pod "downward-api-835fe84a-a0c3-4e08-8279-69853328bd4b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.193481787s
Aug  4 23:08:30.028: INFO: Pod "downward-api-835fe84a-a0c3-4e08-8279-69853328bd4b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.290701695s
Aug  4 23:08:32.124: INFO: Pod "downward-api-835fe84a-a0c3-4e08-8279-69853328bd4b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.38723831s
Aug  4 23:08:34.221: INFO: Pod "downward-api-835fe84a-a0c3-4e08-8279-69853328bd4b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.48359758s
STEP: Saw pod success
Aug  4 23:08:34.221: INFO: Pod "downward-api-835fe84a-a0c3-4e08-8279-69853328bd4b" satisfied condition "Succeeded or Failed"
Aug  4 23:08:34.327: INFO: Trying to get logs from node ip-172-20-63-4.eu-west-2.compute.internal pod downward-api-835fe84a-a0c3-4e08-8279-69853328bd4b container dapi-container: <nil>
STEP: delete the pod
Aug  4 23:08:34.528: INFO: Waiting for pod downward-api-835fe84a-a0c3-4e08-8279-69853328bd4b to disappear
Aug  4 23:08:34.624: INFO: Pod downward-api-835fe84a-a0c3-4e08-8279-69853328bd4b no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:9.669 seconds]
[sig-node] Downward API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":29,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug  4 23:08:34.854: INFO: Only supported for providers [gce gke] (not aws)
... skipping 40 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Aug  4 23:08:35.585: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-8500" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return generic metadata details across all namespaces for nodes","total":-1,"completed":5,"skipped":39,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug  4 23:08:35.810: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 143 lines ...
• [SLOW TEST:32.175 seconds]
[sig-storage] PVC Protection
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Verify that PVC in active use by a pod is not removed immediately
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:126
------------------------------
{"msg":"PASSED [sig-storage] PVC Protection Verify that PVC in active use by a pod is not removed immediately","total":-1,"completed":3,"skipped":31,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug  4 23:08:36.782: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
[AfterEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 22 lines ...
W0804 23:07:40.263229    4881 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Aug  4 23:07:40.263: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should store data
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
Aug  4 23:07:40.478: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Aug  4 23:07:40.773: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-volume-8622" in namespace "volume-8622" to be "Succeeded or Failed"
Aug  4 23:07:40.870: INFO: Pod "hostpath-symlink-prep-volume-8622": Phase="Pending", Reason="", readiness=false. Elapsed: 97.313803ms
Aug  4 23:07:42.969: INFO: Pod "hostpath-symlink-prep-volume-8622": Phase="Pending", Reason="", readiness=false. Elapsed: 2.196421258s
Aug  4 23:07:45.076: INFO: Pod "hostpath-symlink-prep-volume-8622": Phase="Pending", Reason="", readiness=false. Elapsed: 4.30318827s
Aug  4 23:07:47.172: INFO: Pod "hostpath-symlink-prep-volume-8622": Phase="Pending", Reason="", readiness=false. Elapsed: 6.399754117s
Aug  4 23:07:49.269: INFO: Pod "hostpath-symlink-prep-volume-8622": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.496267749s
STEP: Saw pod success
Aug  4 23:07:49.269: INFO: Pod "hostpath-symlink-prep-volume-8622" satisfied condition "Succeeded or Failed"
Aug  4 23:07:49.269: INFO: Deleting pod "hostpath-symlink-prep-volume-8622" in namespace "volume-8622"
Aug  4 23:07:49.375: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-volume-8622" to be fully deleted
Aug  4 23:07:49.471: INFO: Creating resource for inline volume
STEP: starting hostpathsymlink-injector
STEP: Writing text file contents in the container.
Aug  4 23:07:53.772: INFO: Running '/tmp/kubectl57887569/kubectl --server=https://api.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=volume-8622 exec hostpathsymlink-injector --namespace=volume-8622 -- /bin/sh -c echo 'Hello from hostPathSymlink from namespace volume-8622' > /opt/0/index.html'
... skipping 44 lines ...
Aug  4 23:08:28.597: INFO: Pod hostpathsymlink-client still exists
Aug  4 23:08:30.501: INFO: Waiting for pod hostpathsymlink-client to disappear
Aug  4 23:08:30.597: INFO: Pod hostpathsymlink-client still exists
Aug  4 23:08:32.501: INFO: Waiting for pod hostpathsymlink-client to disappear
Aug  4 23:08:32.597: INFO: Pod hostpathsymlink-client no longer exists
STEP: cleaning the environment after hostpathsymlink
Aug  4 23:08:32.696: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-volume-8622" in namespace "volume-8622" to be "Succeeded or Failed"
Aug  4 23:08:32.793: INFO: Pod "hostpath-symlink-prep-volume-8622": Phase="Pending", Reason="", readiness=false. Elapsed: 96.795685ms
Aug  4 23:08:34.891: INFO: Pod "hostpath-symlink-prep-volume-8622": Phase="Pending", Reason="", readiness=false. Elapsed: 2.194242877s
Aug  4 23:08:36.987: INFO: Pod "hostpath-symlink-prep-volume-8622": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.290477684s
STEP: Saw pod success
Aug  4 23:08:36.987: INFO: Pod "hostpath-symlink-prep-volume-8622" satisfied condition "Succeeded or Failed"
Aug  4 23:08:36.987: INFO: Deleting pod "hostpath-symlink-prep-volume-8622" in namespace "volume-8622"
Aug  4 23:08:37.088: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-volume-8622" to be fully deleted
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Aug  4 23:08:37.184: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-8622" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] volumes should store data","total":-1,"completed":1,"skipped":7,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug  4 23:08:37.492: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 56 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Aug  4 23:08:38.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-6940" for this suite.

•
------------------------------
{"msg":"PASSED [sig-instrumentation] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":-1,"completed":2,"skipped":17,"failed":0}

SSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug  4 23:08:38.938: INFO: Only supported for providers [vsphere] (not aws)
... skipping 25 lines ...
Aug  4 23:08:13.598: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support existing single file [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
Aug  4 23:08:14.099: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Aug  4 23:08:14.308: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-7051" in namespace "provisioning-7051" to be "Succeeded or Failed"
Aug  4 23:08:14.404: INFO: Pod "hostpath-symlink-prep-provisioning-7051": Phase="Pending", Reason="", readiness=false. Elapsed: 96.686394ms
Aug  4 23:08:16.505: INFO: Pod "hostpath-symlink-prep-provisioning-7051": Phase="Pending", Reason="", readiness=false. Elapsed: 2.196938057s
Aug  4 23:08:18.603: INFO: Pod "hostpath-symlink-prep-provisioning-7051": Phase="Pending", Reason="", readiness=false. Elapsed: 4.295446734s
Aug  4 23:08:20.701: INFO: Pod "hostpath-symlink-prep-provisioning-7051": Phase="Pending", Reason="", readiness=false. Elapsed: 6.392877676s
Aug  4 23:08:22.798: INFO: Pod "hostpath-symlink-prep-provisioning-7051": Phase="Pending", Reason="", readiness=false. Elapsed: 8.489979324s
Aug  4 23:08:24.898: INFO: Pod "hostpath-symlink-prep-provisioning-7051": Phase="Pending", Reason="", readiness=false. Elapsed: 10.590521723s
Aug  4 23:08:26.996: INFO: Pod "hostpath-symlink-prep-provisioning-7051": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.688406216s
STEP: Saw pod success
Aug  4 23:08:26.996: INFO: Pod "hostpath-symlink-prep-provisioning-7051" satisfied condition "Succeeded or Failed"
Aug  4 23:08:26.996: INFO: Deleting pod "hostpath-symlink-prep-provisioning-7051" in namespace "provisioning-7051"
Aug  4 23:08:27.097: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-7051" to be fully deleted
Aug  4 23:08:27.194: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-tdtb
STEP: Creating a pod to test subpath
Aug  4 23:08:27.292: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-tdtb" in namespace "provisioning-7051" to be "Succeeded or Failed"
Aug  4 23:08:27.389: INFO: Pod "pod-subpath-test-inlinevolume-tdtb": Phase="Pending", Reason="", readiness=false. Elapsed: 96.472717ms
Aug  4 23:08:29.486: INFO: Pod "pod-subpath-test-inlinevolume-tdtb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.193454041s
Aug  4 23:08:31.584: INFO: Pod "pod-subpath-test-inlinevolume-tdtb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.292094122s
Aug  4 23:08:33.682: INFO: Pod "pod-subpath-test-inlinevolume-tdtb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.389178484s
Aug  4 23:08:35.784: INFO: Pod "pod-subpath-test-inlinevolume-tdtb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.491943693s
STEP: Saw pod success
Aug  4 23:08:35.784: INFO: Pod "pod-subpath-test-inlinevolume-tdtb" satisfied condition "Succeeded or Failed"
Aug  4 23:08:35.881: INFO: Trying to get logs from node ip-172-20-45-94.eu-west-2.compute.internal pod pod-subpath-test-inlinevolume-tdtb container test-container-subpath-inlinevolume-tdtb: <nil>
STEP: delete the pod
Aug  4 23:08:36.086: INFO: Waiting for pod pod-subpath-test-inlinevolume-tdtb to disappear
Aug  4 23:08:36.182: INFO: Pod pod-subpath-test-inlinevolume-tdtb no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-tdtb
Aug  4 23:08:36.182: INFO: Deleting pod "pod-subpath-test-inlinevolume-tdtb" in namespace "provisioning-7051"
STEP: Deleting pod
Aug  4 23:08:36.279: INFO: Deleting pod "pod-subpath-test-inlinevolume-tdtb" in namespace "provisioning-7051"
Aug  4 23:08:36.472: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-7051" in namespace "provisioning-7051" to be "Succeeded or Failed"
Aug  4 23:08:36.569: INFO: Pod "hostpath-symlink-prep-provisioning-7051": Phase="Pending", Reason="", readiness=false. Elapsed: 96.522332ms
Aug  4 23:08:38.666: INFO: Pod "hostpath-symlink-prep-provisioning-7051": Phase="Pending", Reason="", readiness=false. Elapsed: 2.193284732s
Aug  4 23:08:40.762: INFO: Pod "hostpath-symlink-prep-provisioning-7051": Phase="Pending", Reason="", readiness=false. Elapsed: 4.289805616s
Aug  4 23:08:42.859: INFO: Pod "hostpath-symlink-prep-provisioning-7051": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.387109862s
STEP: Saw pod success
Aug  4 23:08:42.860: INFO: Pod "hostpath-symlink-prep-provisioning-7051" satisfied condition "Succeeded or Failed"
Aug  4 23:08:42.860: INFO: Deleting pod "hostpath-symlink-prep-provisioning-7051" in namespace "provisioning-7051"
Aug  4 23:08:42.959: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-7051" to be fully deleted
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Aug  4 23:08:43.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-7051" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing single file [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":4,"skipped":22,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 2 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Aug  4 23:08:37.381: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f10adc9c-6e57-4940-80d7-e5333caf652b" in namespace "projected-3012" to be "Succeeded or Failed"
Aug  4 23:08:37.478: INFO: Pod "downwardapi-volume-f10adc9c-6e57-4940-80d7-e5333caf652b": Phase="Pending", Reason="", readiness=false. Elapsed: 97.439835ms
Aug  4 23:08:39.576: INFO: Pod "downwardapi-volume-f10adc9c-6e57-4940-80d7-e5333caf652b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.195145591s
Aug  4 23:08:41.675: INFO: Pod "downwardapi-volume-f10adc9c-6e57-4940-80d7-e5333caf652b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.294285422s
Aug  4 23:08:43.774: INFO: Pod "downwardapi-volume-f10adc9c-6e57-4940-80d7-e5333caf652b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.39393579s
STEP: Saw pod success
Aug  4 23:08:43.775: INFO: Pod "downwardapi-volume-f10adc9c-6e57-4940-80d7-e5333caf652b" satisfied condition "Succeeded or Failed"
Aug  4 23:08:43.879: INFO: Trying to get logs from node ip-172-20-63-4.eu-west-2.compute.internal pod downwardapi-volume-f10adc9c-6e57-4940-80d7-e5333caf652b container client-container: <nil>
STEP: delete the pod
Aug  4 23:08:44.091: INFO: Waiting for pod downwardapi-volume-f10adc9c-6e57-4940-80d7-e5333caf652b to disappear
Aug  4 23:08:44.189: INFO: Pod downwardapi-volume-f10adc9c-6e57-4940-80d7-e5333caf652b no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:7.597 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":32,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug  4 23:08:44.400: INFO: Only supported for providers [azure] (not aws)
... skipping 129 lines ...
Aug  4 23:08:02.622: INFO: PersistentVolumeClaim csi-hostpathwtrrb found but phase is Pending instead of Bound.
Aug  4 23:08:04.721: INFO: PersistentVolumeClaim csi-hostpathwtrrb found but phase is Pending instead of Bound.
Aug  4 23:08:06.817: INFO: PersistentVolumeClaim csi-hostpathwtrrb found but phase is Pending instead of Bound.
Aug  4 23:08:08.915: INFO: PersistentVolumeClaim csi-hostpathwtrrb found and phase=Bound (23.178315317s)
STEP: Creating pod pod-subpath-test-dynamicpv-bbm9
STEP: Creating a pod to test subpath
Aug  4 23:08:09.222: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-bbm9" in namespace "provisioning-6364" to be "Succeeded or Failed"
Aug  4 23:08:09.341: INFO: Pod "pod-subpath-test-dynamicpv-bbm9": Phase="Pending", Reason="", readiness=false. Elapsed: 118.252718ms
Aug  4 23:08:11.441: INFO: Pod "pod-subpath-test-dynamicpv-bbm9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.218895481s
Aug  4 23:08:13.581: INFO: Pod "pod-subpath-test-dynamicpv-bbm9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.358042142s
Aug  4 23:08:15.678: INFO: Pod "pod-subpath-test-dynamicpv-bbm9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.455610831s
Aug  4 23:08:17.776: INFO: Pod "pod-subpath-test-dynamicpv-bbm9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.553207414s
Aug  4 23:08:19.872: INFO: Pod "pod-subpath-test-dynamicpv-bbm9": Phase="Pending", Reason="", readiness=false. Elapsed: 10.649944077s
Aug  4 23:08:21.973: INFO: Pod "pod-subpath-test-dynamicpv-bbm9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.750463204s
STEP: Saw pod success
Aug  4 23:08:21.973: INFO: Pod "pod-subpath-test-dynamicpv-bbm9" satisfied condition "Succeeded or Failed"
Aug  4 23:08:22.072: INFO: Trying to get logs from node ip-172-20-46-233.eu-west-2.compute.internal pod pod-subpath-test-dynamicpv-bbm9 container test-container-subpath-dynamicpv-bbm9: <nil>
STEP: delete the pod
Aug  4 23:08:22.279: INFO: Waiting for pod pod-subpath-test-dynamicpv-bbm9 to disappear
Aug  4 23:08:22.375: INFO: Pod pod-subpath-test-dynamicpv-bbm9 no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-bbm9
Aug  4 23:08:22.375: INFO: Deleting pod "pod-subpath-test-dynamicpv-bbm9" in namespace "provisioning-6364"
... skipping 53 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:379
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":1,"skipped":0,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug  4 23:08:44.619: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 47 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: windows-gcepd]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for providers [gce gke] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1301
------------------------------
... skipping 40 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Aug  4 23:08:45.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1038" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","total":-1,"completed":2,"skipped":15,"failed":0}

S
------------------------------
[BeforeEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Aug  4 23:08:38.955: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward api env vars
Aug  4 23:08:39.544: INFO: Waiting up to 5m0s for pod "downward-api-9498e93d-0396-4132-90fb-38dfbfedb75f" in namespace "downward-api-7871" to be "Succeeded or Failed"
Aug  4 23:08:39.640: INFO: Pod "downward-api-9498e93d-0396-4132-90fb-38dfbfedb75f": Phase="Pending", Reason="", readiness=false. Elapsed: 95.875053ms
Aug  4 23:08:41.736: INFO: Pod "downward-api-9498e93d-0396-4132-90fb-38dfbfedb75f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.192300388s
Aug  4 23:08:43.834: INFO: Pod "downward-api-9498e93d-0396-4132-90fb-38dfbfedb75f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.290110813s
Aug  4 23:08:45.932: INFO: Pod "downward-api-9498e93d-0396-4132-90fb-38dfbfedb75f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.387769203s
STEP: Saw pod success
Aug  4 23:08:45.932: INFO: Pod "downward-api-9498e93d-0396-4132-90fb-38dfbfedb75f" satisfied condition "Succeeded or Failed"
Aug  4 23:08:46.028: INFO: Trying to get logs from node ip-172-20-63-4.eu-west-2.compute.internal pod downward-api-9498e93d-0396-4132-90fb-38dfbfedb75f container dapi-container: <nil>
STEP: delete the pod
Aug  4 23:08:46.229: INFO: Waiting for pod downward-api-9498e93d-0396-4132-90fb-38dfbfedb75f to disappear
Aug  4 23:08:46.325: INFO: Pod downward-api-9498e93d-0396-4132-90fb-38dfbfedb75f no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:7.565 seconds]
[sig-node] Downward API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":39,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug  4 23:08:46.529: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 316 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and write from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithformat] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":5,"skipped":59,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug  4 23:08:51.775: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 37 lines ...
• [SLOW TEST:8.208 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":42,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
... skipping 7 lines ...
Aug  4 23:08:09.266: INFO: In-tree plugin kubernetes.io/aws-ebs is not migrated, not validating any metrics
STEP: creating a test aws volume
Aug  4 23:08:09.944: INFO: Successfully created a new PD: "aws://eu-west-2a/vol-0007aa2c695e6393d".
Aug  4 23:08:09.944: INFO: Creating resource for inline volume
STEP: Creating pod exec-volume-test-inlinevolume-966r
STEP: Creating a pod to test exec-volume-test
Aug  4 23:08:10.046: INFO: Waiting up to 5m0s for pod "exec-volume-test-inlinevolume-966r" in namespace "volume-6411" to be "Succeeded or Failed"
Aug  4 23:08:10.142: INFO: Pod "exec-volume-test-inlinevolume-966r": Phase="Pending", Reason="", readiness=false. Elapsed: 96.461234ms
Aug  4 23:08:12.254: INFO: Pod "exec-volume-test-inlinevolume-966r": Phase="Pending", Reason="", readiness=false. Elapsed: 2.208633184s
Aug  4 23:08:14.352: INFO: Pod "exec-volume-test-inlinevolume-966r": Phase="Pending", Reason="", readiness=false. Elapsed: 4.306151761s
Aug  4 23:08:16.449: INFO: Pod "exec-volume-test-inlinevolume-966r": Phase="Pending", Reason="", readiness=false. Elapsed: 6.403082833s
Aug  4 23:08:18.549: INFO: Pod "exec-volume-test-inlinevolume-966r": Phase="Pending", Reason="", readiness=false. Elapsed: 8.502878933s
Aug  4 23:08:20.646: INFO: Pod "exec-volume-test-inlinevolume-966r": Phase="Pending", Reason="", readiness=false. Elapsed: 10.599977881s
... skipping 2 lines ...
Aug  4 23:08:26.937: INFO: Pod "exec-volume-test-inlinevolume-966r": Phase="Pending", Reason="", readiness=false. Elapsed: 16.891018792s
Aug  4 23:08:29.037: INFO: Pod "exec-volume-test-inlinevolume-966r": Phase="Pending", Reason="", readiness=false. Elapsed: 18.991164195s
Aug  4 23:08:31.135: INFO: Pod "exec-volume-test-inlinevolume-966r": Phase="Pending", Reason="", readiness=false. Elapsed: 21.089196049s
Aug  4 23:08:33.232: INFO: Pod "exec-volume-test-inlinevolume-966r": Phase="Pending", Reason="", readiness=false. Elapsed: 23.186046025s
Aug  4 23:08:35.329: INFO: Pod "exec-volume-test-inlinevolume-966r": Phase="Succeeded", Reason="", readiness=false. Elapsed: 25.283105552s
STEP: Saw pod success
Aug  4 23:08:35.329: INFO: Pod "exec-volume-test-inlinevolume-966r" satisfied condition "Succeeded or Failed"
Aug  4 23:08:35.425: INFO: Trying to get logs from node ip-172-20-45-94.eu-west-2.compute.internal pod exec-volume-test-inlinevolume-966r container exec-container-inlinevolume-966r: <nil>
STEP: delete the pod
Aug  4 23:08:35.653: INFO: Waiting for pod exec-volume-test-inlinevolume-966r to disappear
Aug  4 23:08:35.749: INFO: Pod exec-volume-test-inlinevolume-966r no longer exists
STEP: Deleting pod exec-volume-test-inlinevolume-966r
Aug  4 23:08:35.749: INFO: Deleting pod "exec-volume-test-inlinevolume-966r" in namespace "volume-6411"
Aug  4 23:08:36.059: INFO: Couldn't delete PD "aws://eu-west-2a/vol-0007aa2c695e6393d", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-0007aa2c695e6393d is currently attached to i-0a9546c770069db7c
	status code: 400, request id: 31e5ed64-e8e6-48d2-8f67-c95d495d09e3
Aug  4 23:08:41.615: INFO: Couldn't delete PD "aws://eu-west-2a/vol-0007aa2c695e6393d", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-0007aa2c695e6393d is currently attached to i-0a9546c770069db7c
	status code: 400, request id: f2979aa5-7ac3-4492-8751-019b0f55d467
Aug  4 23:08:47.170: INFO: Couldn't delete PD "aws://eu-west-2a/vol-0007aa2c695e6393d", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-0007aa2c695e6393d is currently attached to i-0a9546c770069db7c
	status code: 400, request id: 7b9f0f2a-03c8-4125-9b0b-55b691a9a608
Aug  4 23:08:52.723: INFO: Successfully deleted PD "aws://eu-west-2a/vol-0007aa2c695e6393d".
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Aug  4 23:08:52.723: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-6411" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":5,"skipped":14,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug  4 23:08:52.933: INFO: Only supported for providers [azure] (not aws)
... skipping 44 lines ...
Aug  4 23:08:48.561: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:69
STEP: Creating a pod to test pod.Spec.SecurityContext.SupplementalGroups
Aug  4 23:08:49.146: INFO: Waiting up to 5m0s for pod "security-context-385e9e53-f943-4dd2-9d10-f9d6c12890d1" in namespace "security-context-9531" to be "Succeeded or Failed"
Aug  4 23:08:49.243: INFO: Pod "security-context-385e9e53-f943-4dd2-9d10-f9d6c12890d1": Phase="Pending", Reason="", readiness=false. Elapsed: 96.569821ms
Aug  4 23:08:51.342: INFO: Pod "security-context-385e9e53-f943-4dd2-9d10-f9d6c12890d1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.195428294s
Aug  4 23:08:53.440: INFO: Pod "security-context-385e9e53-f943-4dd2-9d10-f9d6c12890d1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.293971327s
STEP: Saw pod success
Aug  4 23:08:53.440: INFO: Pod "security-context-385e9e53-f943-4dd2-9d10-f9d6c12890d1" satisfied condition "Succeeded or Failed"
Aug  4 23:08:53.541: INFO: Trying to get logs from node ip-172-20-63-4.eu-west-2.compute.internal pod security-context-385e9e53-f943-4dd2-9d10-f9d6c12890d1 container test-container: <nil>
STEP: delete the pod
Aug  4 23:08:53.777: INFO: Waiting for pod security-context-385e9e53-f943-4dd2-9d10-f9d6c12890d1 to disappear
Aug  4 23:08:53.878: INFO: Pod security-context-385e9e53-f943-4dd2-9d10-f9d6c12890d1 no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:5.512 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:69
------------------------------
{"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly]","total":-1,"completed":5,"skipped":35,"failed":0}

SS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 14 lines ...
STEP: Destroying namespace "services-8791" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750

•
------------------------------
{"msg":"PASSED [sig-network] Services should check NodePort out-of-range","total":-1,"completed":6,"skipped":37,"failed":0}

S
------------------------------
[BeforeEach] [sig-api-machinery] Discovery
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 10 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Aug  4 23:08:56.942: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "discovery-5972" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Discovery Custom resource should have storage version hash","total":-1,"completed":7,"skipped":38,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug  4 23:08:57.174: INFO: Only supported for providers [openstack] (not aws)
... skipping 132 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      (Always)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/fsgroupchangepolicy.go:208
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents","total":-1,"completed":1,"skipped":14,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug  4 23:08:57.460: INFO: Only supported for providers [openstack] (not aws)
... skipping 117 lines ...
• [SLOW TEST:8.505 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should release NodePorts on delete
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1561
------------------------------
{"msg":"PASSED [sig-network] Services should release NodePorts on delete","total":-1,"completed":2,"skipped":54,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
... skipping 161 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] volumes should store data","total":-1,"completed":1,"skipped":11,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 59 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41
    when starting a container that exits
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:42
      should run with the expected status [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":21,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug  4 23:09:07.330: INFO: Only supported for providers [gce gke] (not aws)
... skipping 71 lines ...
Aug  4 23:08:47.400: INFO: PersistentVolumeClaim pvc-rf5w8 found but phase is Pending instead of Bound.
Aug  4 23:08:49.501: INFO: PersistentVolumeClaim pvc-rf5w8 found and phase=Bound (6.396627441s)
Aug  4 23:08:49.501: INFO: Waiting up to 3m0s for PersistentVolume local-l6qgc to have phase Bound
Aug  4 23:08:49.600: INFO: PersistentVolume local-l6qgc found and phase=Bound (98.862137ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-btcd
STEP: Creating a pod to test subpath
Aug  4 23:08:49.905: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-btcd" in namespace "provisioning-6726" to be "Succeeded or Failed"
Aug  4 23:08:50.014: INFO: Pod "pod-subpath-test-preprovisionedpv-btcd": Phase="Pending", Reason="", readiness=false. Elapsed: 109.21963ms
Aug  4 23:08:52.112: INFO: Pod "pod-subpath-test-preprovisionedpv-btcd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.207850343s
Aug  4 23:08:54.212: INFO: Pod "pod-subpath-test-preprovisionedpv-btcd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.307070634s
Aug  4 23:08:56.311: INFO: Pod "pod-subpath-test-preprovisionedpv-btcd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.405991604s
Aug  4 23:08:58.411: INFO: Pod "pod-subpath-test-preprovisionedpv-btcd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.506495943s
Aug  4 23:09:00.510: INFO: Pod "pod-subpath-test-preprovisionedpv-btcd": Phase="Pending", Reason="", readiness=false. Elapsed: 10.605020849s
Aug  4 23:09:02.609: INFO: Pod "pod-subpath-test-preprovisionedpv-btcd": Phase="Pending", Reason="", readiness=false. Elapsed: 12.704634777s
Aug  4 23:09:04.709: INFO: Pod "pod-subpath-test-preprovisionedpv-btcd": Phase="Pending", Reason="", readiness=false. Elapsed: 14.804146332s
Aug  4 23:09:06.809: INFO: Pod "pod-subpath-test-preprovisionedpv-btcd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.903896332s
STEP: Saw pod success
Aug  4 23:09:06.809: INFO: Pod "pod-subpath-test-preprovisionedpv-btcd" satisfied condition "Succeeded or Failed"
Aug  4 23:09:06.908: INFO: Trying to get logs from node ip-172-20-46-233.eu-west-2.compute.internal pod pod-subpath-test-preprovisionedpv-btcd container test-container-volume-preprovisionedpv-btcd: <nil>
STEP: delete the pod
Aug  4 23:09:07.136: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-btcd to disappear
Aug  4 23:09:07.234: INFO: Pod pod-subpath-test-preprovisionedpv-btcd no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-btcd
Aug  4 23:09:07.234: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-btcd" in namespace "provisioning-6726"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support non-existent path
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
------------------------------
{"msg":"PASSED [sig-network] Netpol API should support creating NetworkPolicy API operations","total":-1,"completed":3,"skipped":56,"failed":0}

S
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","total":-1,"completed":5,"skipped":31,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug  4 23:09:08.703: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 103 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Aug  4 23:09:11.915: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1231" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":36,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 2 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:50
[It] files with FSGroup ownership should support (root,0644,tmpfs)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:67
STEP: Creating a pod to test emptydir 0644 on tmpfs
Aug  4 23:09:09.336: INFO: Waiting up to 5m0s for pod "pod-f0d85cbf-2864-4741-b816-17885ca0e2be" in namespace "emptydir-3801" to be "Succeeded or Failed"
Aug  4 23:09:09.432: INFO: Pod "pod-f0d85cbf-2864-4741-b816-17885ca0e2be": Phase="Pending", Reason="", readiness=false. Elapsed: 96.558004ms
Aug  4 23:09:11.532: INFO: Pod "pod-f0d85cbf-2864-4741-b816-17885ca0e2be": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.195845554s
STEP: Saw pod success
Aug  4 23:09:11.532: INFO: Pod "pod-f0d85cbf-2864-4741-b816-17885ca0e2be" satisfied condition "Succeeded or Failed"
Aug  4 23:09:11.630: INFO: Trying to get logs from node ip-172-20-45-94.eu-west-2.compute.internal pod pod-f0d85cbf-2864-4741-b816-17885ca0e2be container test-container: <nil>
STEP: delete the pod
Aug  4 23:09:11.847: INFO: Waiting for pod pod-f0d85cbf-2864-4741-b816-17885ca0e2be to disappear
Aug  4 23:09:11.943: INFO: Pod pod-f0d85cbf-2864-4741-b816-17885ca0e2be no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Aug  4 23:09:11.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3801" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] files with FSGroup ownership should support (root,0644,tmpfs)","total":-1,"completed":4,"skipped":66,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug  4 23:09:12.167: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 80 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and write from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":6,"skipped":20,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug  4 23:09:13.039: INFO: Only supported for providers [azure] (not aws)
[AfterEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 52 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95
    should have a working scale subresource [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":-1,"completed":6,"skipped":63,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug  4 23:09:14.440: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 85 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Aug  4 23:09:12.740: INFO: Waiting up to 5m0s for pod "downwardapi-volume-25487da8-31f1-4579-861a-6d415f4acf0b" in namespace "projected-4175" to be "Succeeded or Failed"
Aug  4 23:09:12.839: INFO: Pod "downwardapi-volume-25487da8-31f1-4579-861a-6d415f4acf0b": Phase="Pending", Reason="", readiness=false. Elapsed: 98.869385ms
Aug  4 23:09:14.937: INFO: Pod "downwardapi-volume-25487da8-31f1-4579-861a-6d415f4acf0b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.197499379s
Aug  4 23:09:17.037: INFO: Pod "downwardapi-volume-25487da8-31f1-4579-861a-6d415f4acf0b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.297025212s
STEP: Saw pod success
Aug  4 23:09:17.037: INFO: Pod "downwardapi-volume-25487da8-31f1-4579-861a-6d415f4acf0b" satisfied condition "Succeeded or Failed"
Aug  4 23:09:17.136: INFO: Trying to get logs from node ip-172-20-63-4.eu-west-2.compute.internal pod downwardapi-volume-25487da8-31f1-4579-861a-6d415f4acf0b container client-container: <nil>
STEP: delete the pod
Aug  4 23:09:17.352: INFO: Waiting for pod downwardapi-volume-25487da8-31f1-4579-861a-6d415f4acf0b to disappear
Aug  4 23:09:17.450: INFO: Pod downwardapi-volume-25487da8-31f1-4579-861a-6d415f4acf0b no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:5.513 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":39,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug  4 23:09:17.662: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 67 lines ...
• [SLOW TEST:21.066 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":54,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug  4 23:09:18.309: INFO: Only supported for providers [openstack] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 115 lines ...
Aug  4 23:08:53.740: INFO: >>> kubeConfig: /root/.kube/config
Aug  4 23:08:54.405: INFO: Exec stderr: ""
Aug  4 23:08:56.696: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir "/var/lib/kubelet/mount-propagation-1777"/host; mount -t tmpfs e2e-mount-propagation-host "/var/lib/kubelet/mount-propagation-1777"/host; echo host > "/var/lib/kubelet/mount-propagation-1777"/host/file] Namespace:mount-propagation-1777 PodName:hostexec-ip-172-20-61-222.eu-west-2.compute.internal-z77n2 ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Aug  4 23:08:56.696: INFO: >>> kubeConfig: /root/.kube/config
Aug  4 23:08:57.459: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-1777 PodName:default ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Aug  4 23:08:57.459: INFO: >>> kubeConfig: /root/.kube/config
Aug  4 23:08:58.129: INFO: pod default mount master: stdout: "", stderr: "cat: can't open '/mnt/test/master/file': No such file or directory" error: command terminated with exit code 1
Aug  4 23:08:58.227: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-1777 PodName:default ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Aug  4 23:08:58.227: INFO: >>> kubeConfig: /root/.kube/config
Aug  4 23:08:58.906: INFO: pod default mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1
Aug  4 23:08:59.002: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-1777 PodName:default ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Aug  4 23:08:59.002: INFO: >>> kubeConfig: /root/.kube/config
Aug  4 23:08:59.686: INFO: pod default mount private: stdout: "", stderr: "cat: can't open '/mnt/test/private/file': No such file or directory" error: command terminated with exit code 1
Aug  4 23:08:59.783: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-1777 PodName:default ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Aug  4 23:08:59.783: INFO: >>> kubeConfig: /root/.kube/config
Aug  4 23:09:00.442: INFO: pod default mount default: stdout: "default", stderr: "" error: <nil>
Aug  4 23:09:00.539: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-1777 PodName:default ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Aug  4 23:09:00.540: INFO: >>> kubeConfig: /root/.kube/config
Aug  4 23:09:01.222: INFO: pod default mount host: stdout: "", stderr: "cat: can't open '/mnt/test/host/file': No such file or directory" error: command terminated with exit code 1
Aug  4 23:09:01.328: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-1777 PodName:master ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Aug  4 23:09:01.328: INFO: >>> kubeConfig: /root/.kube/config
Aug  4 23:09:02.003: INFO: pod master mount master: stdout: "master", stderr: "" error: <nil>
Aug  4 23:09:02.099: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-1777 PodName:master ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Aug  4 23:09:02.100: INFO: >>> kubeConfig: /root/.kube/config
Aug  4 23:09:02.758: INFO: pod master mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1
Aug  4 23:09:02.855: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-1777 PodName:master ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Aug  4 23:09:02.855: INFO: >>> kubeConfig: /root/.kube/config
Aug  4 23:09:03.689: INFO: pod master mount private: stdout: "", stderr: "cat: can't open '/mnt/test/private/file': No such file or directory" error: command terminated with exit code 1
Aug  4 23:09:03.796: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-1777 PodName:master ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Aug  4 23:09:03.796: INFO: >>> kubeConfig: /root/.kube/config
Aug  4 23:09:04.469: INFO: pod master mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file or directory" error: command terminated with exit code 1
Aug  4 23:09:04.565: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-1777 PodName:master ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Aug  4 23:09:04.565: INFO: >>> kubeConfig: /root/.kube/config
Aug  4 23:09:05.251: INFO: pod master mount host: stdout: "host", stderr: "" error: <nil>
Aug  4 23:09:05.350: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-1777 PodName:slave ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Aug  4 23:09:05.350: INFO: >>> kubeConfig: /root/.kube/config
Aug  4 23:09:06.029: INFO: pod slave mount master: stdout: "master", stderr: "" error: <nil>
Aug  4 23:09:06.126: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-1777 PodName:slave ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Aug  4 23:09:06.126: INFO: >>> kubeConfig: /root/.kube/config
Aug  4 23:09:06.814: INFO: pod slave mount slave: stdout: "slave", stderr: "" error: <nil>
Aug  4 23:09:06.910: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-1777 PodName:slave ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Aug  4 23:09:06.910: INFO: >>> kubeConfig: /root/.kube/config
Aug  4 23:09:07.645: INFO: pod slave mount private: stdout: "", stderr: "cat: can't open '/mnt/test/private/file': No such file or directory" error: command terminated with exit code 1
Aug  4 23:09:07.743: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-1777 PodName:slave ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Aug  4 23:09:07.743: INFO: >>> kubeConfig: /root/.kube/config
Aug  4 23:09:08.424: INFO: pod slave mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file or directory" error: command terminated with exit code 1
Aug  4 23:09:08.539: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-1777 PodName:slave ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Aug  4 23:09:08.539: INFO: >>> kubeConfig: /root/.kube/config
Aug  4 23:09:09.232: INFO: pod slave mount host: stdout: "host", stderr: "" error: <nil>
Aug  4 23:09:09.328: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-1777 PodName:private ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Aug  4 23:09:09.328: INFO: >>> kubeConfig: /root/.kube/config
Aug  4 23:09:09.992: INFO: pod private mount master: stdout: "", stderr: "cat: can't open '/mnt/test/master/file': No such file or directory" error: command terminated with exit code 1
Aug  4 23:09:10.089: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-1777 PodName:private ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Aug  4 23:09:10.089: INFO: >>> kubeConfig: /root/.kube/config
Aug  4 23:09:10.735: INFO: pod private mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1
Aug  4 23:09:10.832: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-1777 PodName:private ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Aug  4 23:09:10.832: INFO: >>> kubeConfig: /root/.kube/config
Aug  4 23:09:11.484: INFO: pod private mount private: stdout: "private", stderr: "" error: <nil>
Aug  4 23:09:11.581: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-1777 PodName:private ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Aug  4 23:09:11.581: INFO: >>> kubeConfig: /root/.kube/config
Aug  4 23:09:12.231: INFO: pod private mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file or directory" error: command terminated with exit code 1
Aug  4 23:09:12.329: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-1777 PodName:private ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Aug  4 23:09:12.329: INFO: >>> kubeConfig: /root/.kube/config
Aug  4 23:09:12.987: INFO: pod private mount host: stdout: "", stderr: "cat: can't open '/mnt/test/host/file': No such file or directory" error: command terminated with exit code 1
Aug  4 23:09:12.987: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c test `cat "/var/lib/kubelet/mount-propagation-1777"/master/file` = master] Namespace:mount-propagation-1777 PodName:hostexec-ip-172-20-61-222.eu-west-2.compute.internal-z77n2 ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Aug  4 23:09:12.987: INFO: >>> kubeConfig: /root/.kube/config
Aug  4 23:09:13.643: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c test ! -e "/var/lib/kubelet/mount-propagation-1777"/slave/file] Namespace:mount-propagation-1777 PodName:hostexec-ip-172-20-61-222.eu-west-2.compute.internal-z77n2 ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Aug  4 23:09:13.643: INFO: >>> kubeConfig: /root/.kube/config
Aug  4 23:09:14.303: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/var/lib/kubelet/mount-propagation-1777"/host] Namespace:mount-propagation-1777 PodName:hostexec-ip-172-20-61-222.eu-west-2.compute.internal-z77n2 ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Aug  4 23:09:14.303: INFO: >>> kubeConfig: /root/.kube/config
... skipping 21 lines ...
• [SLOW TEST:52.267 seconds]
[sig-node] Mount propagation
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should propagate mounts to the host
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/mount_propagation.go:82
------------------------------
{"msg":"PASSED [sig-node] Mount propagation should propagate mounts to the host","total":-1,"completed":5,"skipped":46,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug  4 23:09:19.128: INFO: Only supported for providers [gce gke] (not aws)
... skipping 35 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159

      Driver "local" does not provide raw block - skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:113
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":2,"skipped":4,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Aug  4 23:08:33.219: INFO: >>> kubeConfig: /root/.kube/config
... skipping 6 lines ...
Aug  4 23:08:33.706: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-23758jhm5
STEP: creating a claim
Aug  4 23:08:33.804: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-jptq
STEP: Creating a pod to test subpath
Aug  4 23:08:34.103: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-jptq" in namespace "provisioning-2375" to be "Succeeded or Failed"
Aug  4 23:08:34.200: INFO: Pod "pod-subpath-test-dynamicpv-jptq": Phase="Pending", Reason="", readiness=false. Elapsed: 97.036665ms
Aug  4 23:08:36.297: INFO: Pod "pod-subpath-test-dynamicpv-jptq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.194587715s
Aug  4 23:08:38.395: INFO: Pod "pod-subpath-test-dynamicpv-jptq": Phase="Pending", Reason="", readiness=false. Elapsed: 4.291958324s
Aug  4 23:08:40.493: INFO: Pod "pod-subpath-test-dynamicpv-jptq": Phase="Pending", Reason="", readiness=false. Elapsed: 6.390396112s
Aug  4 23:08:42.592: INFO: Pod "pod-subpath-test-dynamicpv-jptq": Phase="Pending", Reason="", readiness=false. Elapsed: 8.488942397s
Aug  4 23:08:44.690: INFO: Pod "pod-subpath-test-dynamicpv-jptq": Phase="Pending", Reason="", readiness=false. Elapsed: 10.587103479s
... skipping 5 lines ...
Aug  4 23:08:57.282: INFO: Pod "pod-subpath-test-dynamicpv-jptq": Phase="Pending", Reason="", readiness=false. Elapsed: 23.179779648s
Aug  4 23:08:59.381: INFO: Pod "pod-subpath-test-dynamicpv-jptq": Phase="Pending", Reason="", readiness=false. Elapsed: 25.278291955s
Aug  4 23:09:01.479: INFO: Pod "pod-subpath-test-dynamicpv-jptq": Phase="Pending", Reason="", readiness=false. Elapsed: 27.375990469s
Aug  4 23:09:03.577: INFO: Pod "pod-subpath-test-dynamicpv-jptq": Phase="Pending", Reason="", readiness=false. Elapsed: 29.474748939s
Aug  4 23:09:05.690: INFO: Pod "pod-subpath-test-dynamicpv-jptq": Phase="Succeeded", Reason="", readiness=false. Elapsed: 31.587667474s
STEP: Saw pod success
Aug  4 23:09:05.690: INFO: Pod "pod-subpath-test-dynamicpv-jptq" satisfied condition "Succeeded or Failed"
Aug  4 23:09:05.787: INFO: Trying to get logs from node ip-172-20-63-4.eu-west-2.compute.internal pod pod-subpath-test-dynamicpv-jptq container test-container-subpath-dynamicpv-jptq: <nil>
STEP: delete the pod
Aug  4 23:09:05.992: INFO: Waiting for pod pod-subpath-test-dynamicpv-jptq to disappear
Aug  4 23:09:06.089: INFO: Pod pod-subpath-test-dynamicpv-jptq no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-jptq
Aug  4 23:09:06.089: INFO: Deleting pod "pod-subpath-test-dynamicpv-jptq" in namespace "provisioning-2375"
... skipping 20 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing single file [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":3,"skipped":4,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug  4 23:09:22.304: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 51 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-link-bindmounted]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 141 lines ...
Aug  4 23:09:17.475: INFO: PersistentVolumeClaim pvc-k7sfx found but phase is Pending instead of Bound.
Aug  4 23:09:19.573: INFO: PersistentVolumeClaim pvc-k7sfx found and phase=Bound (8.489907587s)
Aug  4 23:09:19.573: INFO: Waiting up to 3m0s for PersistentVolume local-g6c5s to have phase Bound
Aug  4 23:09:19.670: INFO: PersistentVolume local-g6c5s found and phase=Bound (97.030333ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-dm52
STEP: Creating a pod to test subpath
Aug  4 23:09:19.963: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-dm52" in namespace "provisioning-1296" to be "Succeeded or Failed"
Aug  4 23:09:20.076: INFO: Pod "pod-subpath-test-preprovisionedpv-dm52": Phase="Pending", Reason="", readiness=false. Elapsed: 113.257223ms
Aug  4 23:09:22.174: INFO: Pod "pod-subpath-test-preprovisionedpv-dm52": Phase="Pending", Reason="", readiness=false. Elapsed: 2.211195067s
Aug  4 23:09:24.272: INFO: Pod "pod-subpath-test-preprovisionedpv-dm52": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.309784702s
STEP: Saw pod success
Aug  4 23:09:24.272: INFO: Pod "pod-subpath-test-preprovisionedpv-dm52" satisfied condition "Succeeded or Failed"
Aug  4 23:09:24.370: INFO: Trying to get logs from node ip-172-20-45-94.eu-west-2.compute.internal pod pod-subpath-test-preprovisionedpv-dm52 container test-container-subpath-preprovisionedpv-dm52: <nil>
STEP: delete the pod
Aug  4 23:09:24.573: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-dm52 to disappear
Aug  4 23:09:24.671: INFO: Pod pod-subpath-test-preprovisionedpv-dm52 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-dm52
Aug  4 23:09:24.671: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-dm52" in namespace "provisioning-1296"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:379
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":4,"skipped":23,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug  4 23:09:26.058: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 171 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI FSGroupPolicy [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1436
    should modify fsGroup if fsGroupPolicy=default
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1460
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=default","total":-1,"completed":3,"skipped":9,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug  4 23:09:27.627: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 93 lines ...
Aug  4 23:08:23.020: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Aug  4 23:08:23.123: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [csi-hostpathpt6hw] to have phase Bound
Aug  4 23:08:23.221: INFO: PersistentVolumeClaim csi-hostpathpt6hw found but phase is Pending instead of Bound.
Aug  4 23:08:25.321: INFO: PersistentVolumeClaim csi-hostpathpt6hw found and phase=Bound (2.197769931s)
STEP: Creating pod pod-subpath-test-dynamicpv-mtsn
STEP: Creating a pod to test subpath
Aug  4 23:08:25.616: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-mtsn" in namespace "provisioning-7340" to be "Succeeded or Failed"
Aug  4 23:08:25.713: INFO: Pod "pod-subpath-test-dynamicpv-mtsn": Phase="Pending", Reason="", readiness=false. Elapsed: 96.76919ms
Aug  4 23:08:27.811: INFO: Pod "pod-subpath-test-dynamicpv-mtsn": Phase="Pending", Reason="", readiness=false. Elapsed: 2.195296204s
Aug  4 23:08:29.909: INFO: Pod "pod-subpath-test-dynamicpv-mtsn": Phase="Pending", Reason="", readiness=false. Elapsed: 4.293591699s
Aug  4 23:08:32.007: INFO: Pod "pod-subpath-test-dynamicpv-mtsn": Phase="Pending", Reason="", readiness=false. Elapsed: 6.391046561s
Aug  4 23:08:34.110: INFO: Pod "pod-subpath-test-dynamicpv-mtsn": Phase="Pending", Reason="", readiness=false. Elapsed: 8.493624709s
Aug  4 23:08:36.207: INFO: Pod "pod-subpath-test-dynamicpv-mtsn": Phase="Pending", Reason="", readiness=false. Elapsed: 10.591476582s
Aug  4 23:08:38.305: INFO: Pod "pod-subpath-test-dynamicpv-mtsn": Phase="Pending", Reason="", readiness=false. Elapsed: 12.688995429s
Aug  4 23:08:40.403: INFO: Pod "pod-subpath-test-dynamicpv-mtsn": Phase="Pending", Reason="", readiness=false. Elapsed: 14.787133984s
Aug  4 23:08:42.501: INFO: Pod "pod-subpath-test-dynamicpv-mtsn": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.884998885s
STEP: Saw pod success
Aug  4 23:08:42.501: INFO: Pod "pod-subpath-test-dynamicpv-mtsn" satisfied condition "Succeeded or Failed"
Aug  4 23:08:42.598: INFO: Trying to get logs from node ip-172-20-46-233.eu-west-2.compute.internal pod pod-subpath-test-dynamicpv-mtsn container test-container-subpath-dynamicpv-mtsn: <nil>
STEP: delete the pod
Aug  4 23:08:42.806: INFO: Waiting for pod pod-subpath-test-dynamicpv-mtsn to disappear
Aug  4 23:08:42.909: INFO: Pod pod-subpath-test-dynamicpv-mtsn no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-mtsn
Aug  4 23:08:42.909: INFO: Deleting pod "pod-subpath-test-dynamicpv-mtsn" in namespace "provisioning-7340"
STEP: Creating pod pod-subpath-test-dynamicpv-mtsn
STEP: Creating a pod to test subpath
Aug  4 23:08:43.108: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-mtsn" in namespace "provisioning-7340" to be "Succeeded or Failed"
Aug  4 23:08:43.205: INFO: Pod "pod-subpath-test-dynamicpv-mtsn": Phase="Pending", Reason="", readiness=false. Elapsed: 96.943344ms
Aug  4 23:08:45.302: INFO: Pod "pod-subpath-test-dynamicpv-mtsn": Phase="Pending", Reason="", readiness=false. Elapsed: 2.194679304s
Aug  4 23:08:47.400: INFO: Pod "pod-subpath-test-dynamicpv-mtsn": Phase="Pending", Reason="", readiness=false. Elapsed: 4.292753431s
Aug  4 23:08:49.500: INFO: Pod "pod-subpath-test-dynamicpv-mtsn": Phase="Pending", Reason="", readiness=false. Elapsed: 6.392369739s
Aug  4 23:08:51.598: INFO: Pod "pod-subpath-test-dynamicpv-mtsn": Phase="Pending", Reason="", readiness=false. Elapsed: 8.490671313s
Aug  4 23:08:53.715: INFO: Pod "pod-subpath-test-dynamicpv-mtsn": Phase="Pending", Reason="", readiness=false. Elapsed: 10.607056261s
Aug  4 23:08:55.813: INFO: Pod "pod-subpath-test-dynamicpv-mtsn": Phase="Pending", Reason="", readiness=false. Elapsed: 12.705462155s
Aug  4 23:08:57.912: INFO: Pod "pod-subpath-test-dynamicpv-mtsn": Phase="Pending", Reason="", readiness=false. Elapsed: 14.803958526s
Aug  4 23:09:00.009: INFO: Pod "pod-subpath-test-dynamicpv-mtsn": Phase="Pending", Reason="", readiness=false. Elapsed: 16.901074304s
Aug  4 23:09:02.108: INFO: Pod "pod-subpath-test-dynamicpv-mtsn": Phase="Pending", Reason="", readiness=false. Elapsed: 19.000341613s
Aug  4 23:09:04.206: INFO: Pod "pod-subpath-test-dynamicpv-mtsn": Phase="Pending", Reason="", readiness=false. Elapsed: 21.098321065s
Aug  4 23:09:06.309: INFO: Pod "pod-subpath-test-dynamicpv-mtsn": Phase="Succeeded", Reason="", readiness=false. Elapsed: 23.201034458s
STEP: Saw pod success
Aug  4 23:09:06.309: INFO: Pod "pod-subpath-test-dynamicpv-mtsn" satisfied condition "Succeeded or Failed"
Aug  4 23:09:06.406: INFO: Trying to get logs from node ip-172-20-46-233.eu-west-2.compute.internal pod pod-subpath-test-dynamicpv-mtsn container test-container-subpath-dynamicpv-mtsn: <nil>
STEP: delete the pod
Aug  4 23:09:06.612: INFO: Waiting for pod pod-subpath-test-dynamicpv-mtsn to disappear
Aug  4 23:09:06.710: INFO: Pod pod-subpath-test-dynamicpv-mtsn no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-mtsn
Aug  4 23:09:06.711: INFO: Deleting pod "pod-subpath-test-dynamicpv-mtsn" in namespace "provisioning-7340"
... skipping 53 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directories when readOnly specified in the volumeSource
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:394
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":4,"skipped":23,"failed":0}

SSSSSSSSS
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]","total":-1,"completed":5,"skipped":75,"failed":0}
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Aug  4 23:09:24.560: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241
[It] should check if cluster-info dump succeeds
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1078
STEP: running cluster-info dump
Aug  4 23:09:25.054: INFO: Running '/tmp/kubectl57887569/kubectl --server=https://api.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-376 cluster-info dump'
Aug  4 23:09:29.188: INFO: stderr: ""
Aug  4 23:09:29.189: INFO: stdout: "{\n    \"kind\": \"NodeList\",\n    \"apiVersion\": \"v1\",\n    \"metadata\": {\n        \"resourceVersion\": \"6106\"\n    },\n    \"items\": [\n        {\n            \"metadata\": {\n                \"name\": \"ip-172-20-45-94.eu-west-2.compute.internal\",\n                \"uid\": \"32db9293-751e-4d35-86aa-69cd645b7231\",\n                \"resourceVersion\": \"5991\",\n                \"creationTimestamp\": \"2021-08-04T23:05:00Z\",\n                \"labels\": {\n                    \"beta.kubernetes.io/arch\": \"amd64\",\n                    \"beta.kubernetes.io/instance-type\": \"t3.medium\",\n                    \"beta.kubernetes.io/os\": \"linux\",\n                    \"failure-domain.beta.kubernetes.io/region\": \"eu-west-2\",\n                    \"failure-domain.beta.kubernetes.io/zone\": \"eu-west-2a\",\n                    \"kops.k8s.io/instancegroup\": \"nodes-eu-west-2a\",\n                    \"kubernetes.io/arch\": \"amd64\",\n                    \"kubernetes.io/hostname\": \"ip-172-20-45-94.eu-west-2.compute.internal\",\n                    \"kubernetes.io/os\": \"linux\",\n                    \"kubernetes.io/role\": \"node\",\n                    \"node-role.kubernetes.io/node\": \"\",\n                    \"node.kubernetes.io/instance-type\": \"t3.medium\",\n                    \"topology.hostpath.csi/node\": \"ip-172-20-45-94.eu-west-2.compute.internal\",\n                    \"topology.kubernetes.io/region\": \"eu-west-2\",\n                    \"topology.kubernetes.io/zone\": \"eu-west-2a\"\n                },\n                \"annotations\": {\n                    \"csi.volume.kubernetes.io/nodeid\": \"{\\\"csi-hostpath-ephemeral-4978\\\":\\\"ip-172-20-45-94.eu-west-2.compute.internal\\\",\\\"csi-hostpath-volume-1312\\\":\\\"ip-172-20-45-94.eu-west-2.compute.internal\\\",\\\"csi-hostpath-volume-expand-3921\\\":\\\"ip-172-20-45-94.eu-west-2.compute.internal\\\"}\",\n                    \"node.alpha.kubernetes.io/ttl\": \"0\",\n                    \"volumes.kubernetes.io/controller-managed-attach-detach\": \"true\"\n                }\n            },\n            \"spec\": {\n                \"podCIDR\": \"100.96.3.0/24\",\n                \"podCIDRs\": [\n                    \"100.96.3.0/24\"\n                ],\n                \"providerID\": \"aws:///eu-west-2a/i-0a9546c770069db7c\"\n            },\n            \"status\": {\n                \"capacity\": {\n                    \"attachable-volumes-aws-ebs\": \"25\",\n                    \"cpu\": \"2\",\n                    \"ephemeral-storage\": \"49475200Ki\",\n                    \"hugepages-1Gi\": \"0\",\n                    \"hugepages-2Mi\": \"0\",\n                    \"memory\": \"3989324Ki\",\n                    \"pods\": \"110\"\n                },\n                \"allocatable\": {\n                    \"attachable-volumes-aws-ebs\": \"25\",\n                    \"cpu\": \"2\",\n                    \"ephemeral-storage\": \"45596344245\",\n                    \"hugepages-1Gi\": \"0\",\n                    \"hugepages-2Mi\": \"0\",\n                    \"memory\": \"3886924Ki\",\n                    \"pods\": \"110\"\n                },\n                \"conditions\": [\n                    {\n                        \"type\": \"NetworkUnavailable\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-08-04T23:05:01Z\",\n                        \"lastTransitionTime\": \"2021-08-04T23:05:01Z\",\n                   
     \"reason\": \"RouteCreated\",\n                        \"message\": \"RouteController created a route\"\n                    },\n                    {\n                        \"type\": \"MemoryPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-08-04T23:09:21Z\",\n                        \"lastTransitionTime\": \"2021-08-04T23:05:00Z\",\n                        \"reason\": \"KubeletHasSufficientMemory\",\n                        \"message\": \"kubelet has sufficient memory available\"\n                    },\n                    {\n                        \"type\": \"DiskPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-08-04T23:09:21Z\",\n                        \"lastTransitionTime\": \"2021-08-04T23:05:00Z\",\n                        \"reason\": \"KubeletHasNoDiskPressure\",\n                        \"message\": \"kubelet has no disk pressure\"\n                    },\n                    {\n                        \"type\": \"PIDPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-08-04T23:09:21Z\",\n                        \"lastTransitionTime\": \"2021-08-04T23:05:00Z\",\n                        \"reason\": \"KubeletHasSufficientPID\",\n                        \"message\": \"kubelet has sufficient PID available\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastHeartbeatTime\": \"2021-08-04T23:09:21Z\",\n                        \"lastTransitionTime\": \"2021-08-04T23:05:10Z\",\n                        \"reason\": \"KubeletReady\",\n                        \"message\": \"kubelet is posting ready status\"\n                    }\n                ],\n                \"addresses\": [\n                    {\n                        \"type\": \"InternalIP\",\n                        \"address\": \"172.20.45.94\"\n                    },\n                    {\n                        \"type\": \"ExternalIP\",\n                        \"address\": \"18.170.74.80\"\n                    },\n                    {\n                        \"type\": \"Hostname\",\n                        \"address\": \"ip-172-20-45-94.eu-west-2.compute.internal\"\n                    },\n                    {\n                        \"type\": \"InternalDNS\",\n                        \"address\": \"ip-172-20-45-94.eu-west-2.compute.internal\"\n                    },\n                    {\n                        \"type\": \"ExternalDNS\",\n                        \"address\": \"ec2-18-170-74-80.eu-west-2.compute.amazonaws.com\"\n                    }\n                ],\n                \"daemonEndpoints\": {\n                    \"kubeletEndpoint\": {\n                        \"Port\": 10250\n                    }\n                },\n                \"nodeInfo\": {\n                    \"machineID\": \"ec204f15595817b2afce5e4166523886\",\n                    \"systemUUID\": \"EC204F15-5958-17B2-AFCE-5E4166523886\",\n                    \"bootID\": \"7fd70c03-f903-469f-8df6-a5978be97a84\",\n                    \"kernelVersion\": \"4.9.0-16-amd64\",\n                    \"osImage\": \"Debian GNU/Linux 9 (stretch)\",\n                    \"containerRuntimeVersion\": \"docker://20.10.8\",\n                    \"kubeletVersion\": \"v1.21.3\",\n                    \"kubeProxyVersion\": \"v1.21.3\",\n     
               \"operatingSystem\": \"linux\",\n                    \"architecture\": \"amd64\"\n                },\n                \"images\": [\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1\",\n                            \"k8s.gcr.io/e2e-test-images/agnhost:2.32\"\n                        ],\n                        \"sizeBytes\": 125930239\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50\",\n                            \"k8s.gcr.io/e2e-test-images/httpd:2.4.38-1\"\n                        ],\n                        \"sizeBytes\": 123781643\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kube-proxy-amd64:v1.21.3\"\n                        ],\n                        \"sizeBytes\": 103317641\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679\",\n                            \"k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0\"\n                        ],\n                        \"sizeBytes\": 48281550\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-snapshotter@sha256:26e327b018c21a49523b759d7787e99553181ae9ef90b6bdc13abe362a43ced0\",\n                            \"k8s.gcr.io/sig-storage/csi-snapshotter:v3.0.2\"\n                        ],\n                        \"sizeBytes\": 47823451\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab\",\n                            \"k8s.gcr.io/sig-storage/csi-attacher:v2.2.0\"\n                        ],\n                        \"sizeBytes\": 46131354\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c\",\n                            \"k8s.gcr.io/sig-storage/csi-resizer:v0.5.0\"\n                        ],\n                        \"sizeBytes\": 46041582\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def\",\n                            \"k8s.gcr.io/cpa/cluster-proportional-autoscaler:1.8.4\"\n                        ],\n                        \"sizeBytes\": 40678121\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/hostpathplugin@sha256:d2b357bb02430fee9eaa43b16083981463d260419fe3acb2f560ede5c129f6f5\",\n                            \"k8s.gcr.io/sig-storage/hostpathplugin:v1.4.0\"\n                        ],\n                        \"sizeBytes\": 27762720\n                    },\n                    {\n                        \"names\": [\n                            
\"k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309\",\n                            \"k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0\"\n                        ],\n                        \"sizeBytes\": 16322467\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b\",\n                            \"k8s.gcr.io/e2e-test-images/nginx:1.14-1\"\n                        ],\n                        \"sizeBytes\": 16032814\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/livenessprobe@sha256:f8cec70adc74897ddde5da4f1da0209a497370eaf657566e2b36bc5f0f3ccbd7\",\n                            \"k8s.gcr.io/sig-storage/livenessprobe:v1.1.0\"\n                        ],\n                        \"sizeBytes\": 14967303\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592\",\n                            \"k8s.gcr.io/e2e-test-images/busybox:1.29-1\"\n                        ],\n                        \"sizeBytes\": 1154361\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f\",\n                            \"k8s.gcr.io/pause:3.2\"\n                        ],\n                        \"sizeBytes\": 682696\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810\",\n                            \"k8s.gcr.io/pause:3.4.1\"\n                        ],\n                        \"sizeBytes\": 682696\n                    }\n                ],\n                \"volumesInUse\": [\n                    \"kubernetes.io/csi/csi-hostpath-ephemeral-4978^dbf1bf11-f578-11eb-8c15-7e9fddfc08cd\"\n                ],\n                \"volumesAttached\": [\n                    {\n                        \"name\": \"kubernetes.io/csi/csi-hostpath-ephemeral-4978^dbf1bf11-f578-11eb-8c15-7e9fddfc08cd\",\n                        \"devicePath\": \"\"\n                    }\n                ]\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"ip-172-20-46-233.eu-west-2.compute.internal\",\n                \"uid\": \"1bccc413-dc9e-4fd8-8961-5944b9272f68\",\n                \"resourceVersion\": \"6000\",\n                \"creationTimestamp\": \"2021-08-04T23:05:00Z\",\n                \"labels\": {\n                    \"beta.kubernetes.io/arch\": \"amd64\",\n                    \"beta.kubernetes.io/instance-type\": \"t3.medium\",\n                    \"beta.kubernetes.io/os\": \"linux\",\n                    \"failure-domain.beta.kubernetes.io/region\": \"eu-west-2\",\n                    \"failure-domain.beta.kubernetes.io/zone\": \"eu-west-2a\",\n                    \"io.kubernetes.storage.mock/node\": \"some-mock-node\",\n                    \"kops.k8s.io/instancegroup\": \"nodes-eu-west-2a\",\n                    \"kubernetes.io/arch\": \"amd64\",\n                    \"kubernetes.io/hostname\": 
\"ip-172-20-46-233.eu-west-2.compute.internal\",\n                    \"kubernetes.io/os\": \"linux\",\n                    \"kubernetes.io/role\": \"node\",\n                    \"node-role.kubernetes.io/node\": \"\",\n                    \"node.kubernetes.io/instance-type\": \"t3.medium\",\n                    \"topology.hostpath.csi/node\": \"ip-172-20-46-233.eu-west-2.compute.internal\",\n                    \"topology.kubernetes.io/region\": \"eu-west-2\",\n                    \"topology.kubernetes.io/zone\": \"eu-west-2a\"\n                },\n                \"annotations\": {\n                    \"csi.volume.kubernetes.io/nodeid\": \"{\\\"csi-hostpath-ephemeral-258\\\":\\\"ip-172-20-46-233.eu-west-2.compute.internal\\\",\\\"csi-hostpath-provisioning-6364\\\":\\\"ip-172-20-46-233.eu-west-2.compute.internal\\\",\\\"csi-hostpath-provisioning-7340\\\":\\\"ip-172-20-46-233.eu-west-2.compute.internal\\\",\\\"csi-hostpath-volume-expand-1082\\\":\\\"ip-172-20-46-233.eu-west-2.compute.internal\\\",\\\"csi-hostpath-volume-expand-8516\\\":\\\"ip-172-20-46-233.eu-west-2.compute.internal\\\",\\\"csi-mock-csi-mock-volumes-5185\\\":\\\"csi-mock-csi-mock-volumes-5185\\\",\\\"csi-mock-csi-mock-volumes-8136\\\":\\\"csi-mock-csi-mock-volumes-8136\\\"}\",\n                    \"node.alpha.kubernetes.io/ttl\": \"0\",\n                    \"volumes.kubernetes.io/controller-managed-attach-detach\": \"true\"\n                }\n            },\n            \"spec\": {\n                \"podCIDR\": \"100.96.1.0/24\",\n                \"podCIDRs\": [\n                    \"100.96.1.0/24\"\n                ],\n                \"providerID\": \"aws:///eu-west-2a/i-0999b05a81ef71ddf\"\n            },\n            \"status\": {\n                \"capacity\": {\n                    \"attachable-volumes-aws-ebs\": \"25\",\n                    \"cpu\": \"2\",\n                    \"ephemeral-storage\": \"49475200Ki\",\n                    \"hugepages-1Gi\": \"0\",\n                    \"hugepages-2Mi\": \"0\",\n                    \"memory\": \"3989324Ki\",\n                    \"pods\": \"110\"\n                },\n                \"allocatable\": {\n                    \"attachable-volumes-aws-ebs\": \"25\",\n                    \"cpu\": \"2\",\n                    \"ephemeral-storage\": \"45596344245\",\n                    \"hugepages-1Gi\": \"0\",\n                    \"hugepages-2Mi\": \"0\",\n                    \"memory\": \"3886924Ki\",\n                    \"pods\": \"110\"\n                },\n                \"conditions\": [\n                    {\n                        \"type\": \"NetworkUnavailable\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-08-04T23:05:01Z\",\n                        \"lastTransitionTime\": \"2021-08-04T23:05:01Z\",\n                        \"reason\": \"RouteCreated\",\n                        \"message\": \"RouteController created a route\"\n                    },\n                    {\n                        \"type\": \"MemoryPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-08-04T23:09:21Z\",\n                        \"lastTransitionTime\": \"2021-08-04T23:05:00Z\",\n                        \"reason\": \"KubeletHasSufficientMemory\",\n                        \"message\": \"kubelet has sufficient memory available\"\n                    },\n                    {\n                        \"type\": \"DiskPressure\",\n                        \"status\": 
\"False\",\n                        \"lastHeartbeatTime\": \"2021-08-04T23:09:21Z\",\n                        \"lastTransitionTime\": \"2021-08-04T23:05:00Z\",\n                        \"reason\": \"KubeletHasNoDiskPressure\",\n                        \"message\": \"kubelet has no disk pressure\"\n                    },\n                    {\n                        \"type\": \"PIDPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-08-04T23:09:21Z\",\n                        \"lastTransitionTime\": \"2021-08-04T23:05:00Z\",\n                        \"reason\": \"KubeletHasSufficientPID\",\n                        \"message\": \"kubelet has sufficient PID available\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastHeartbeatTime\": \"2021-08-04T23:09:21Z\",\n                        \"lastTransitionTime\": \"2021-08-04T23:05:10Z\",\n                        \"reason\": \"KubeletReady\",\n                        \"message\": \"kubelet is posting ready status\"\n                    }\n                ],\n                \"addresses\": [\n                    {\n                        \"type\": \"InternalIP\",\n                        \"address\": \"172.20.46.233\"\n                    },\n                    {\n                        \"type\": \"ExternalIP\",\n                        \"address\": \"18.170.38.47\"\n                    },\n                    {\n                        \"type\": \"Hostname\",\n                        \"address\": \"ip-172-20-46-233.eu-west-2.compute.internal\"\n                    },\n                    {\n                        \"type\": \"InternalDNS\",\n                        \"address\": \"ip-172-20-46-233.eu-west-2.compute.internal\"\n                    },\n                    {\n                        \"type\": \"ExternalDNS\",\n                        \"address\": \"ec2-18-170-38-47.eu-west-2.compute.amazonaws.com\"\n                    }\n                ],\n                \"daemonEndpoints\": {\n                    \"kubeletEndpoint\": {\n                        \"Port\": 10250\n                    }\n                },\n                \"nodeInfo\": {\n                    \"machineID\": \"ec2e65db1ff88dbaf093ca1e2df1dd5b\",\n                    \"systemUUID\": \"EC2E65DB-1FF8-8DBA-F093-CA1E2DF1DD5B\",\n                    \"bootID\": \"3cefdaf8-6ed8-439b-a744-0025afbe2950\",\n                    \"kernelVersion\": \"4.9.0-16-amd64\",\n                    \"osImage\": \"Debian GNU/Linux 9 (stretch)\",\n                    \"containerRuntimeVersion\": \"docker://20.10.8\",\n                    \"kubeletVersion\": \"v1.21.3\",\n                    \"kubeProxyVersion\": \"v1.21.3\",\n                    \"operatingSystem\": \"linux\",\n                    \"architecture\": \"amd64\"\n                },\n                \"images\": [\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2\",\n                            \"k8s.gcr.io/etcd:3.4.13-0\"\n                        ],\n                        \"sizeBytes\": 253392289\n                    },\n                    {\n                        \"names\": [\n                            
\"k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1\",\n                            \"k8s.gcr.io/e2e-test-images/agnhost:2.32\"\n                        ],\n                        \"sizeBytes\": 125930239\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kube-proxy-amd64:v1.21.3\"\n                        ],\n                        \"sizeBytes\": 103317641\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276\",\n                            \"k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4\"\n                        ],\n                        \"sizeBytes\": 58172101\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2\",\n                            \"k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0\"\n                        ],\n                        \"sizeBytes\": 51645752\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679\",\n                            \"k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0\"\n                        ],\n                        \"sizeBytes\": 48281550\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-snapshotter@sha256:26e327b018c21a49523b759d7787e99553181ae9ef90b6bdc13abe362a43ced0\",\n                            \"k8s.gcr.io/sig-storage/csi-snapshotter:v3.0.2\"\n                        ],\n                        \"sizeBytes\": 47823451\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab\",\n                            \"k8s.gcr.io/sig-storage/csi-attacher:v2.2.0\"\n                        ],\n                        \"sizeBytes\": 46131354\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c\",\n                            \"k8s.gcr.io/sig-storage/csi-resizer:v0.5.0\"\n                        ],\n                        \"sizeBytes\": 46041582\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/hostpathplugin@sha256:d2b357bb02430fee9eaa43b16083981463d260419fe3acb2f560ede5c129f6f5\",\n                            \"k8s.gcr.io/sig-storage/hostpathplugin:v1.4.0\"\n                        ],\n                        \"sizeBytes\": 27762720\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f\",\n                            \"k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0\"\n                        ],\n      
                  \"sizeBytes\": 19662887\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793\",\n                            \"k8s.gcr.io/sig-storage/mock-driver:v4.1.0\"\n                        ],\n                        \"sizeBytes\": 17680993\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309\",\n                            \"k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0\"\n                        ],\n                        \"sizeBytes\": 16322467\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b\",\n                            \"k8s.gcr.io/e2e-test-images/nginx:1.14-1\"\n                        ],\n                        \"sizeBytes\": 16032814\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/livenessprobe@sha256:f8cec70adc74897ddde5da4f1da0209a497370eaf657566e2b36bc5f0f3ccbd7\",\n                            \"k8s.gcr.io/sig-storage/livenessprobe:v1.1.0\"\n                        ],\n                        \"sizeBytes\": 14967303\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac\",\n                            \"k8s.gcr.io/e2e-test-images/nonewprivs:1.3\"\n                        ],\n                        \"sizeBytes\": 7107254\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/busybox@sha256:d8d3bc2c183ed2f9f10e7258f84971202325ee6011ba137112e01e30f206de67\",\n                            \"k8s.gcr.io/busybox:latest\"\n                        ],\n                        \"sizeBytes\": 2433303\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592\",\n                            \"k8s.gcr.io/e2e-test-images/busybox:1.29-1\"\n                        ],\n                        \"sizeBytes\": 1154361\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f\",\n                            \"k8s.gcr.io/pause:3.2\"\n                        ],\n                        \"sizeBytes\": 682696\n                    }\n                ],\n                \"volumesInUse\": [\n                    \"kubernetes.io/csi/csi-mock-csi-mock-volumes-5185^4\"\n                ],\n                \"volumesAttached\": [\n                    {\n                        \"name\": \"kubernetes.io/csi/csi-mock-csi-mock-volumes-5185^4\",\n                        \"devicePath\": \"\"\n                    }\n                ]\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": 
\"ip-172-20-61-222.eu-west-2.compute.internal\",\n                \"uid\": \"dc016872-b0f4-4115-a214-6310c7337f3a\",\n                \"resourceVersion\": \"5982\",\n                \"creationTimestamp\": \"2021-08-04T23:05:00Z\",\n                \"labels\": {\n                    \"beta.kubernetes.io/arch\": \"amd64\",\n                    \"beta.kubernetes.io/instance-type\": \"t3.medium\",\n                    \"beta.kubernetes.io/os\": \"linux\",\n                    \"failure-domain.beta.kubernetes.io/region\": \"eu-west-2\",\n                    \"failure-domain.beta.kubernetes.io/zone\": \"eu-west-2a\",\n                    \"kops.k8s.io/instancegroup\": \"nodes-eu-west-2a\",\n                    \"kubernetes.io/arch\": \"amd64\",\n                    \"kubernetes.io/hostname\": \"ip-172-20-61-222.eu-west-2.compute.internal\",\n                    \"kubernetes.io/os\": \"linux\",\n                    \"kubernetes.io/role\": \"node\",\n                    \"node-role.kubernetes.io/node\": \"\",\n                    \"node.kubernetes.io/instance-type\": \"t3.medium\",\n                    \"topology.hostpath.csi/node\": \"ip-172-20-61-222.eu-west-2.compute.internal\",\n                    \"topology.kubernetes.io/region\": \"eu-west-2\",\n                    \"topology.kubernetes.io/zone\": \"eu-west-2a\"\n                },\n                \"annotations\": {\n                    \"csi.volume.kubernetes.io/nodeid\": \"{\\\"csi-hostpath-provisioning-9738\\\":\\\"ip-172-20-61-222.eu-west-2.compute.internal\\\"}\",\n                    \"node.alpha.kubernetes.io/ttl\": \"0\",\n                    \"volumes.kubernetes.io/controller-managed-attach-detach\": \"true\"\n                }\n            },\n            \"spec\": {\n                \"podCIDR\": \"100.96.2.0/24\",\n                \"podCIDRs\": [\n                    \"100.96.2.0/24\"\n                ],\n                \"providerID\": \"aws:///eu-west-2a/i-04f67f418ea3cf920\"\n            },\n            \"status\": {\n                \"capacity\": {\n                    \"attachable-volumes-aws-ebs\": \"25\",\n                    \"cpu\": \"2\",\n                    \"ephemeral-storage\": \"49475200Ki\",\n                    \"hugepages-1Gi\": \"0\",\n                    \"hugepages-2Mi\": \"0\",\n                    \"memory\": \"3989332Ki\",\n                    \"pods\": \"110\"\n                },\n                \"allocatable\": {\n                    \"attachable-volumes-aws-ebs\": \"25\",\n                    \"cpu\": \"2\",\n                    \"ephemeral-storage\": \"45596344245\",\n                    \"hugepages-1Gi\": \"0\",\n                    \"hugepages-2Mi\": \"0\",\n                    \"memory\": \"3886932Ki\",\n                    \"pods\": \"110\"\n                },\n                \"conditions\": [\n                    {\n                        \"type\": \"NetworkUnavailable\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-08-04T23:05:01Z\",\n                        \"lastTransitionTime\": \"2021-08-04T23:05:01Z\",\n                        \"reason\": \"RouteCreated\",\n                        \"message\": \"RouteController created a route\"\n                    },\n                    {\n                        \"type\": \"MemoryPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-08-04T23:09:20Z\",\n                        \"lastTransitionTime\": \"2021-08-04T23:05:00Z\",\n        
                \"reason\": \"KubeletHasSufficientMemory\",\n                        \"message\": \"kubelet has sufficient memory available\"\n                    },\n                    {\n                        \"type\": \"DiskPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-08-04T23:09:20Z\",\n                        \"lastTransitionTime\": \"2021-08-04T23:05:00Z\",\n                        \"reason\": \"KubeletHasNoDiskPressure\",\n                        \"message\": \"kubelet has no disk pressure\"\n                    },\n                    {\n                        \"type\": \"PIDPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-08-04T23:09:20Z\",\n                        \"lastTransitionTime\": \"2021-08-04T23:05:00Z\",\n                        \"reason\": \"KubeletHasSufficientPID\",\n                        \"message\": \"kubelet has sufficient PID available\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastHeartbeatTime\": \"2021-08-04T23:09:20Z\",\n                        \"lastTransitionTime\": \"2021-08-04T23:05:10Z\",\n                        \"reason\": \"KubeletReady\",\n                        \"message\": \"kubelet is posting ready status\"\n                    }\n                ],\n                \"addresses\": [\n                    {\n                        \"type\": \"InternalIP\",\n                        \"address\": \"172.20.61.222\"\n                    },\n                    {\n                        \"type\": \"ExternalIP\",\n                        \"address\": \"18.130.75.27\"\n                    },\n                    {\n                        \"type\": \"Hostname\",\n                        \"address\": \"ip-172-20-61-222.eu-west-2.compute.internal\"\n                    },\n                    {\n                        \"type\": \"InternalDNS\",\n                        \"address\": \"ip-172-20-61-222.eu-west-2.compute.internal\"\n                    },\n                    {\n                        \"type\": \"ExternalDNS\",\n                        \"address\": \"ec2-18-130-75-27.eu-west-2.compute.amazonaws.com\"\n                    }\n                ],\n                \"daemonEndpoints\": {\n                    \"kubeletEndpoint\": {\n                        \"Port\": 10250\n                    }\n                },\n                \"nodeInfo\": {\n                    \"machineID\": \"ec22dd54a3b9472ae79161f4282661ab\",\n                    \"systemUUID\": \"EC22DD54-A3B9-472A-E791-61F4282661AB\",\n                    \"bootID\": \"37566d6b-0a6f-47fb-8d4b-dd4f36451d1c\",\n                    \"kernelVersion\": \"4.9.0-16-amd64\",\n                    \"osImage\": \"Debian GNU/Linux 9 (stretch)\",\n                    \"containerRuntimeVersion\": \"docker://20.10.8\",\n                    \"kubeletVersion\": \"v1.21.3\",\n                    \"kubeProxyVersion\": \"v1.21.3\",\n                    \"operatingSystem\": \"linux\",\n                    \"architecture\": \"amd64\"\n                },\n                \"images\": [\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89\",\n                            
\"k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4\"\n                        ],\n                        \"sizeBytes\": 253371792\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1\",\n                            \"k8s.gcr.io/e2e-test-images/agnhost:2.32\"\n                        ],\n                        \"sizeBytes\": 125930239\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50\",\n                            \"k8s.gcr.io/e2e-test-images/httpd:2.4.38-1\"\n                        ],\n                        \"sizeBytes\": 123781643\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kube-proxy-amd64:v1.21.3\"\n                        ],\n                        \"sizeBytes\": 103317641\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/build-image/debian-iptables@sha256:d226f3fd5f293ff513f53573a40c069b89d57d42338a1045b493bf702ac6b1f6\",\n                            \"k8s.gcr.io/build-image/debian-iptables:buster-v1.6.5\"\n                        ],\n                        \"sizeBytes\": 60182158\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679\",\n                            \"k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0\"\n                        ],\n                        \"sizeBytes\": 48281550\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-snapshotter@sha256:26e327b018c21a49523b759d7787e99553181ae9ef90b6bdc13abe362a43ced0\",\n                            \"k8s.gcr.io/sig-storage/csi-snapshotter:v3.0.2\"\n                        ],\n                        \"sizeBytes\": 47823451\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/coredns/coredns@sha256:6e5a02c21641597998b4be7cb5eb1e7b02c0d8d23cce4dd09f4682d463798890\",\n                            \"k8s.gcr.io/coredns/coredns:v1.8.4\"\n                        ],\n                        \"sizeBytes\": 47554275\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab\",\n                            \"k8s.gcr.io/sig-storage/csi-attacher:v2.2.0\"\n                        ],\n                        \"sizeBytes\": 46131354\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c\",\n                            \"k8s.gcr.io/sig-storage/csi-resizer:v0.5.0\"\n                        ],\n                        \"sizeBytes\": 46041582\n                    },\n                    {\n                        \"names\": [\n                            
\"k8s.gcr.io/sig-storage/hostpathplugin@sha256:d2b357bb02430fee9eaa43b16083981463d260419fe3acb2f560ede5c129f6f5\",\n                            \"k8s.gcr.io/sig-storage/hostpathplugin:v1.4.0\"\n                        ],\n                        \"sizeBytes\": 27762720\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309\",\n                            \"k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0\"\n                        ],\n                        \"sizeBytes\": 16322467\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b\",\n                            \"k8s.gcr.io/e2e-test-images/nginx:1.14-1\"\n                        ],\n                        \"sizeBytes\": 16032814\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/livenessprobe@sha256:f8cec70adc74897ddde5da4f1da0209a497370eaf657566e2b36bc5f0f3ccbd7\",\n                            \"k8s.gcr.io/sig-storage/livenessprobe:v1.1.0\"\n                        ],\n                        \"sizeBytes\": 14967303\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592\",\n                            \"k8s.gcr.io/e2e-test-images/busybox:1.29-1\"\n                        ],\n                        \"sizeBytes\": 1154361\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f\",\n                            \"k8s.gcr.io/pause:3.2\"\n                        ],\n                        \"sizeBytes\": 682696\n                    }\n                ]\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"ip-172-20-63-249.eu-west-2.compute.internal\",\n                \"uid\": \"a5ac1df0-0f38-4825-8688-77531fa9af90\",\n                \"resourceVersion\": \"4225\",\n                \"creationTimestamp\": \"2021-08-04T23:03:17Z\",\n                \"labels\": {\n                    \"beta.kubernetes.io/arch\": \"amd64\",\n                    \"beta.kubernetes.io/instance-type\": \"c5.large\",\n                    \"beta.kubernetes.io/os\": \"linux\",\n                    \"failure-domain.beta.kubernetes.io/region\": \"eu-west-2\",\n                    \"failure-domain.beta.kubernetes.io/zone\": \"eu-west-2a\",\n                    \"kops.k8s.io/instancegroup\": \"master-eu-west-2a\",\n                    \"kops.k8s.io/kops-controller-pki\": \"\",\n                    \"kubernetes.io/arch\": \"amd64\",\n                    \"kubernetes.io/hostname\": \"ip-172-20-63-249.eu-west-2.compute.internal\",\n                    \"kubernetes.io/os\": \"linux\",\n                    \"kubernetes.io/role\": \"master\",\n                    \"node-role.kubernetes.io/control-plane\": \"\",\n                    \"node-role.kubernetes.io/master\": \"\",\n                    \"node.kubernetes.io/exclude-from-external-load-balancers\": \"\",\n                    
\"node.kubernetes.io/instance-type\": \"c5.large\",\n                    \"topology.kubernetes.io/region\": \"eu-west-2\",\n                    \"topology.kubernetes.io/zone\": \"eu-west-2a\"\n                },\n                \"annotations\": {\n                    \"node.alpha.kubernetes.io/ttl\": \"0\",\n                    \"volumes.kubernetes.io/controller-managed-attach-detach\": \"true\"\n                }\n            },\n            \"spec\": {\n                \"podCIDR\": \"100.96.0.0/24\",\n                \"podCIDRs\": [\n                    \"100.96.0.0/24\"\n                ],\n                \"providerID\": \"aws:///eu-west-2a/i-0d3e140994a889fb4\",\n                \"taints\": [\n                    {\n                        \"key\": \"node-role.kubernetes.io/master\",\n                        \"effect\": \"NoSchedule\"\n                    }\n                ]\n            },\n            \"status\": {\n                \"capacity\": {\n                    \"attachable-volumes-aws-ebs\": \"25\",\n                    \"cpu\": \"2\",\n                    \"ephemeral-storage\": \"49475200Ki\",\n                    \"hugepages-1Gi\": \"0\",\n                    \"hugepages-2Mi\": \"0\",\n                    \"memory\": \"3833680Ki\",\n                    \"pods\": \"110\"\n                },\n                \"allocatable\": {\n                    \"attachable-volumes-aws-ebs\": \"25\",\n                    \"cpu\": \"2\",\n                    \"ephemeral-storage\": \"45596344245\",\n                    \"hugepages-1Gi\": \"0\",\n                    \"hugepages-2Mi\": \"0\",\n                    \"memory\": \"3731280Ki\",\n                    \"pods\": \"110\"\n                },\n                \"conditions\": [\n                    {\n                        \"type\": \"NetworkUnavailable\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-08-04T23:03:40Z\",\n                        \"lastTransitionTime\": \"2021-08-04T23:03:40Z\",\n                        \"reason\": \"RouteCreated\",\n                        \"message\": \"RouteController created a route\"\n                    },\n                    {\n                        \"type\": \"MemoryPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-08-04T23:08:38Z\",\n                        \"lastTransitionTime\": \"2021-08-04T23:03:12Z\",\n                        \"reason\": \"KubeletHasSufficientMemory\",\n                        \"message\": \"kubelet has sufficient memory available\"\n                    },\n                    {\n                        \"type\": \"DiskPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-08-04T23:08:38Z\",\n                        \"lastTransitionTime\": \"2021-08-04T23:03:12Z\",\n                        \"reason\": \"KubeletHasNoDiskPressure\",\n                        \"message\": \"kubelet has no disk pressure\"\n                    },\n                    {\n                        \"type\": \"PIDPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-08-04T23:08:38Z\",\n                        \"lastTransitionTime\": \"2021-08-04T23:03:12Z\",\n                        \"reason\": \"KubeletHasSufficientPID\",\n                        \"message\": \"kubelet has sufficient PID available\"\n                    },\n                    {\n            
            \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastHeartbeatTime\": \"2021-08-04T23:08:38Z\",\n                        \"lastTransitionTime\": \"2021-08-04T23:03:34Z\",\n                        \"reason\": \"KubeletReady\",\n                        \"message\": \"kubelet is posting ready status\"\n                    }\n                ],\n                \"addresses\": [\n                    {\n                        \"type\": \"InternalIP\",\n                        \"address\": \"172.20.63.249\"\n                    },\n                    {\n                        \"type\": \"ExternalIP\",\n                        \"address\": \"35.177.51.115\"\n                    },\n                    {\n                        \"type\": \"Hostname\",\n                        \"address\": \"ip-172-20-63-249.eu-west-2.compute.internal\"\n                    },\n                    {\n                        \"type\": \"InternalDNS\",\n                        \"address\": \"ip-172-20-63-249.eu-west-2.compute.internal\"\n                    },\n                    {\n                        \"type\": \"ExternalDNS\",\n                        \"address\": \"ec2-35-177-51-115.eu-west-2.compute.amazonaws.com\"\n                    }\n                ],\n                \"daemonEndpoints\": {\n                    \"kubeletEndpoint\": {\n                        \"Port\": 10250\n                    }\n                },\n                \"nodeInfo\": {\n                    \"machineID\": \"ec26f7f8a1b6ff3f7647c086e89ccefd\",\n                    \"systemUUID\": \"EC26F7F8-A1B6-FF3F-7647-C086E89CCEFD\",\n                    \"bootID\": \"3721f2f0-fb16-43d5-8502-102aaeab6730\",\n                    \"kernelVersion\": \"4.9.0-16-amd64\",\n                    \"osImage\": \"Debian GNU/Linux 9 (stretch)\",\n                    \"containerRuntimeVersion\": \"docker://20.10.8\",\n                    \"kubeletVersion\": \"v1.21.3\",\n                    \"kubeProxyVersion\": \"v1.21.3\",\n                    \"operatingSystem\": \"linux\",\n                    \"architecture\": \"amd64\"\n                },\n                \"images\": [\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/etcdadm/etcd-manager@sha256:ebb73d3d4a99da609f9e01c556cd9f9aa7a0aecba8f5bc5588d7c45eb38e3a7e\",\n                            \"k8s.gcr.io/etcdadm/etcd-manager:3.0.20210430\"\n                        ],\n                        \"sizeBytes\": 492748624\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kube-apiserver-amd64:v1.21.3\"\n                        ],\n                        \"sizeBytes\": 125624733\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kube-controller-manager-amd64:v1.21.3\"\n                        ],\n                        \"sizeBytes\": 119833526\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kops/dns-controller:1.21.0\"\n                        ],\n                        \"sizeBytes\": 112242860\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kops/kops-controller:1.21.0\"\n                        ],\n                        \"sizeBytes\": 110445448\n                    },\n             
       {\n                        \"names\": [\n                            \"k8s.gcr.io/kube-proxy-amd64@sha256:af5c9bacb913b5751d2d94e11dfd4e183e97b1a4afce282be95ce177f4a0100b\",\n                            \"k8s.gcr.io/kube-proxy-amd64:v1.21.3\"\n                        ],\n                        \"sizeBytes\": 103317641\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kube-scheduler-amd64:v1.21.3\"\n                        ],\n                        \"sizeBytes\": 50639773\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kops/kube-apiserver-healthcheck:1.21.0\"\n                        ],\n                        \"sizeBytes\": 24015926\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f\",\n                            \"k8s.gcr.io/pause:3.2\"\n                        ],\n                        \"sizeBytes\": 682696\n                    }\n                ]\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"ip-172-20-63-4.eu-west-2.compute.internal\",\n                \"uid\": \"df5a8ae9-9ccf-4a2d-b0aa-fec4c12553bd\",\n                \"resourceVersion\": \"6055\",\n                \"creationTimestamp\": \"2021-08-04T23:05:00Z\",\n                \"labels\": {\n                    \"beta.kubernetes.io/arch\": \"amd64\",\n                    \"beta.kubernetes.io/instance-type\": \"t3.medium\",\n                    \"beta.kubernetes.io/os\": \"linux\",\n                    \"failure-domain.beta.kubernetes.io/region\": \"eu-west-2\",\n                    \"failure-domain.beta.kubernetes.io/zone\": \"eu-west-2a\",\n                    \"kops.k8s.io/instancegroup\": \"nodes-eu-west-2a\",\n                    \"kubernetes.io/arch\": \"amd64\",\n                    \"kubernetes.io/hostname\": \"ip-172-20-63-4.eu-west-2.compute.internal\",\n                    \"kubernetes.io/os\": \"linux\",\n                    \"kubernetes.io/role\": \"node\",\n                    \"node-role.kubernetes.io/node\": \"\",\n                    \"node.kubernetes.io/instance-type\": \"t3.medium\",\n                    \"topology.kubernetes.io/region\": \"eu-west-2\",\n                    \"topology.kubernetes.io/zone\": \"eu-west-2a\"\n                },\n                \"annotations\": {\n                    \"node.alpha.kubernetes.io/ttl\": \"0\",\n                    \"volumes.kubernetes.io/controller-managed-attach-detach\": \"true\"\n                }\n            },\n            \"spec\": {\n                \"podCIDR\": \"100.96.4.0/24\",\n                \"podCIDRs\": [\n                    \"100.96.4.0/24\"\n                ],\n                \"providerID\": \"aws:///eu-west-2a/i-0089001f9156536c3\"\n            },\n            \"status\": {\n                \"capacity\": {\n                    \"attachable-volumes-aws-ebs\": \"25\",\n                    \"cpu\": \"2\",\n                    \"ephemeral-storage\": \"49475200Ki\",\n                    \"hugepages-1Gi\": \"0\",\n                    \"hugepages-2Mi\": \"0\",\n                    \"memory\": \"3989324Ki\",\n                    \"pods\": \"110\"\n                },\n                \"allocatable\": {\n                    \"attachable-volumes-aws-ebs\": \"25\",\n                    \"cpu\": 
\"2\",\n                    \"ephemeral-storage\": \"45596344245\",\n                    \"hugepages-1Gi\": \"0\",\n                    \"hugepages-2Mi\": \"0\",\n                    \"memory\": \"3886924Ki\",\n                    \"pods\": \"110\"\n                },\n                \"conditions\": [\n                    {\n                        \"type\": \"NetworkUnavailable\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-08-04T23:05:01Z\",\n                        \"lastTransitionTime\": \"2021-08-04T23:05:01Z\",\n                        \"reason\": \"RouteCreated\",\n                        \"message\": \"RouteController created a route\"\n                    },\n                    {\n                        \"type\": \"MemoryPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-08-04T23:09:11Z\",\n                        \"lastTransitionTime\": \"2021-08-04T23:05:00Z\",\n                        \"reason\": \"KubeletHasSufficientMemory\",\n                        \"message\": \"kubelet has sufficient memory available\"\n                    },\n                    {\n                        \"type\": \"DiskPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-08-04T23:09:11Z\",\n                        \"lastTransitionTime\": \"2021-08-04T23:05:00Z\",\n                        \"reason\": \"KubeletHasNoDiskPressure\",\n                        \"message\": \"kubelet has no disk pressure\"\n                    },\n                    {\n                        \"type\": \"PIDPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-08-04T23:09:11Z\",\n                        \"lastTransitionTime\": \"2021-08-04T23:05:00Z\",\n                        \"reason\": \"KubeletHasSufficientPID\",\n                        \"message\": \"kubelet has sufficient PID available\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastHeartbeatTime\": \"2021-08-04T23:09:11Z\",\n                        \"lastTransitionTime\": \"2021-08-04T23:05:10Z\",\n                        \"reason\": \"KubeletReady\",\n                        \"message\": \"kubelet is posting ready status\"\n                    }\n                ],\n                \"addresses\": [\n                    {\n                        \"type\": \"InternalIP\",\n                        \"address\": \"172.20.63.4\"\n                    },\n                    {\n                        \"type\": \"ExternalIP\",\n                        \"address\": \"18.132.37.241\"\n                    },\n                    {\n                        \"type\": \"Hostname\",\n                        \"address\": \"ip-172-20-63-4.eu-west-2.compute.internal\"\n                    },\n                    {\n                        \"type\": \"InternalDNS\",\n                        \"address\": \"ip-172-20-63-4.eu-west-2.compute.internal\"\n                    },\n                    {\n                        \"type\": \"ExternalDNS\",\n                        \"address\": \"ec2-18-132-37-241.eu-west-2.compute.amazonaws.com\"\n                    }\n                ],\n                \"daemonEndpoints\": {\n                    \"kubeletEndpoint\": {\n                        \"Port\": 10250\n                 
   }\n                },\n                \"nodeInfo\": {\n                    \"machineID\": \"ec2422994cc510800f9289d0385b5a54\",\n                    \"systemUUID\": \"EC242299-4CC5-1080-0F92-89D0385B5A54\",\n                    \"bootID\": \"02019c88-1047-426b-81c0-60df9ae3d8ff\",\n                    \"kernelVersion\": \"4.9.0-16-amd64\",\n                    \"osImage\": \"Debian GNU/Linux 9 (stretch)\",\n                    \"containerRuntimeVersion\": \"docker://20.10.8\",\n                    \"kubeletVersion\": \"v1.21.3\",\n                    \"kubeProxyVersion\": \"v1.21.3\",\n                    \"operatingSystem\": \"linux\",\n                    \"architecture\": \"amd64\"\n                },\n                \"images\": [\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1\",\n                            \"k8s.gcr.io/e2e-test-images/agnhost:2.32\"\n                        ],\n                        \"sizeBytes\": 125930239\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50\",\n                            \"k8s.gcr.io/e2e-test-images/httpd:2.4.38-1\"\n                        ],\n                        \"sizeBytes\": 123781643\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kube-proxy-amd64:v1.21.3\"\n                        ],\n                        \"sizeBytes\": 103317641\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2\",\n                            \"k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0\"\n                        ],\n                        \"sizeBytes\": 51645752\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/coredns/coredns@sha256:6e5a02c21641597998b4be7cb5eb1e7b02c0d8d23cce4dd09f4682d463798890\",\n                            \"k8s.gcr.io/coredns/coredns:v1.8.4\"\n                        ],\n                        \"sizeBytes\": 47554275\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f\",\n                            \"k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0\"\n                        ],\n                        \"sizeBytes\": 19662887\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793\",\n                            \"k8s.gcr.io/sig-storage/mock-driver:v4.1.0\"\n                        ],\n                        \"sizeBytes\": 17680993\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b\",\n                            \"k8s.gcr.io/e2e-test-images/nginx:1.14-1\"\n              
          ],\n                        \"sizeBytes\": 16032814\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592\",\n                            \"k8s.gcr.io/e2e-test-images/busybox:1.29-1\"\n                        ],\n                        \"sizeBytes\": 1154361\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f\",\n                            \"k8s.gcr.io/pause:3.2\"\n                        ],\n                        \"sizeBytes\": 682696\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810\",\n                            \"k8s.gcr.io/pause:3.4.1\"\n                        ],\n                        \"sizeBytes\": 682696\n                    }\n                ]\n            }\n        }\n    ]\n}\n{\n    \"kind\": \"EventList\",\n    \"apiVersion\": \"v1\",\n    \"metadata\": {\n        \"resourceVersion\": \"2381\"\n    },\n    \"items\": [\n        {\n            \"metadata\": {\n                \"name\": \"coredns-5dc785954d-7bjbh.16983ce3d8916f33\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"0ef22412-3c62-44ed-be01-c73dbb70c572\",\n                \"resourceVersion\": \"76\",\n                \"creationTimestamp\": \"2021-08-04T23:03:34Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-5dc785954d-7bjbh\",\n                \"uid\": \"2b3fb07f-0d73-4e16-aff2-0d48c4639760\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"422\"\n            },\n            \"reason\": \"FailedScheduling\",\n            \"message\": \"0/1 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2021-08-04T23:03:34Z\",\n            \"lastTimestamp\": \"2021-08-04T23:04:51Z\",\n            \"count\": 5,\n            \"type\": \"Warning\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-5dc785954d-7bjbh.16983cf80eb4c84b\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"ce09bd57-25ce-4f9e-b729-043671bba550\",\n                \"resourceVersion\": \"78\",\n                \"creationTimestamp\": \"2021-08-04T23:05:01Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-5dc785954d-7bjbh\",\n                \"uid\": \"2b3fb07f-0d73-4e16-aff2-0d48c4639760\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"430\"\n            },\n            \"reason\": \"FailedScheduling\",\n            \"message\": \"0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 4 node(s) had taint 
{node.kubernetes.io/not-ready: }, that the pod didn't tolerate.\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2021-08-04T23:05:01Z\",\n            \"lastTimestamp\": \"2021-08-04T23:05:01Z\",\n            \"count\": 1,\n            \"type\": \"Warning\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-5dc785954d-7bjbh.16983cfa9e645d77\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"04d2ba77-c156-4403-8d8d-2ad60fbfeb99\",\n                \"resourceVersion\": \"122\",\n                \"creationTimestamp\": \"2021-08-04T23:05:12Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-5dc785954d-7bjbh\",\n                \"uid\": \"2b3fb07f-0d73-4e16-aff2-0d48c4639760\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"686\"\n            },\n            \"reason\": \"Scheduled\",\n            \"message\": \"Successfully assigned kube-system/coredns-5dc785954d-7bjbh to ip-172-20-63-4.eu-west-2.compute.internal\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2021-08-04T23:05:12Z\",\n            \"lastTimestamp\": \"2021-08-04T23:05:12Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-5dc785954d-7bjbh.16983cfadd8c24f2\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"be961353-5159-432a-818c-65c38c68b042\",\n                \"resourceVersion\": \"158\",\n                \"creationTimestamp\": \"2021-08-04T23:05:14Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-5dc785954d-7bjbh\",\n                \"uid\": \"2b3fb07f-0d73-4e16-aff2-0d48c4639760\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"730\",\n                \"fieldPath\": \"spec.containers{coredns}\"\n            },\n            \"reason\": \"Pulling\",\n            \"message\": \"Pulling image \\\"k8s.gcr.io/coredns/coredns:v1.8.4\\\"\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-63-4.eu-west-2.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-08-04T23:05:13Z\",\n            \"lastTimestamp\": \"2021-08-04T23:05:13Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-5dc785954d-7bjbh.16983cfb327c48a0\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"92df0a9a-92f2-4700-b99a-071ee23eb401\",\n                \"resourceVersion\": \"170\",\n                \"creationTimestamp\": \"2021-08-04T23:05:14Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": 
\"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-5dc785954d-7bjbh\",\n                \"uid\": \"2b3fb07f-0d73-4e16-aff2-0d48c4639760\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"730\",\n                \"fieldPath\": \"spec.containers{coredns}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Successfully pulled image \\\"k8s.gcr.io/coredns/coredns:v1.8.4\\\" in 1.425011189s\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-63-4.eu-west-2.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-08-04T23:05:14Z\",\n            \"lastTimestamp\": \"2021-08-04T23:05:14Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-5dc785954d-7bjbh.16983cfb35a6e695\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"ecce521f-2338-4c6a-85ef-89b75d7f0bc5\",\n                \"resourceVersion\": \"171\",\n                \"creationTimestamp\": \"2021-08-04T23:05:14Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-5dc785954d-7bjbh\",\n                \"uid\": \"2b3fb07f-0d73-4e16-aff2-0d48c4639760\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"730\",\n                \"fieldPath\": \"spec.containers{coredns}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container coredns\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-63-4.eu-west-2.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-08-04T23:05:14Z\",\n            \"lastTimestamp\": \"2021-08-04T23:05:14Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-5dc785954d-7bjbh.16983cfb3f603676\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"9a081b84-adb2-41a0-a91d-3c162e437d4e\",\n                \"resourceVersion\": \"174\",\n                \"creationTimestamp\": \"2021-08-04T23:05:15Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-5dc785954d-7bjbh\",\n                \"uid\": \"2b3fb07f-0d73-4e16-aff2-0d48c4639760\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"730\",\n                \"fieldPath\": \"spec.containers{coredns}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container coredns\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-63-4.eu-west-2.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-08-04T23:05:15Z\",\n            \"lastTimestamp\": \"2021-08-04T23:05:15Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            
\"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-5dc785954d-gl2zd.16983cfb12c40001\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"04032c6a-a6cd-49a0-9cfe-31f6c899acec\",\n                \"resourceVersion\": \"163\",\n                \"creationTimestamp\": \"2021-08-04T23:05:14Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-5dc785954d-gl2zd\",\n                \"uid\": \"7fab2781-f172-4db0-be70-583d379e7d86\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"744\"\n            },\n            \"reason\": \"Scheduled\",\n            \"message\": \"Successfully assigned kube-system/coredns-5dc785954d-gl2zd to ip-172-20-61-222.eu-west-2.compute.internal\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2021-08-04T23:05:14Z\",\n            \"lastTimestamp\": \"2021-08-04T23:05:14Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-5dc785954d-gl2zd.16983cfb8238a5cd\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"7f08be6b-6beb-47e9-bc11-a0e3732e7bda\",\n                \"resourceVersion\": \"180\",\n                \"creationTimestamp\": \"2021-08-04T23:05:16Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-5dc785954d-gl2zd\",\n                \"uid\": \"7fab2781-f172-4db0-be70-583d379e7d86\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"747\",\n                \"fieldPath\": \"spec.containers{coredns}\"\n            },\n            \"reason\": \"Pulling\",\n            \"message\": \"Pulling image \\\"k8s.gcr.io/coredns/coredns:v1.8.4\\\"\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-61-222.eu-west-2.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-08-04T23:05:16Z\",\n            \"lastTimestamp\": \"2021-08-04T23:05:16Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-5dc785954d-gl2zd.16983cfbd7c6c4c5\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"1537f080-42f0-4fdc-95fe-958df95bf394\",\n                \"resourceVersion\": \"181\",\n                \"creationTimestamp\": \"2021-08-04T23:05:17Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-5dc785954d-gl2zd\",\n                \"uid\": \"7fab2781-f172-4db0-be70-583d379e7d86\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"747\",\n                \"fieldPath\": \"spec.containers{coredns}\"\n            },\n            \"reason\": \"Pulled\",\n         
   \"message\": \"Successfully pulled image \\\"k8s.gcr.io/coredns/coredns:v1.8.4\\\" in 1.435362505s\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-61-222.eu-west-2.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-08-04T23:05:17Z\",\n            \"lastTimestamp\": \"2021-08-04T23:05:17Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-5dc785954d-gl2zd.16983cfbdaf2f30f\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"81ce20d7-0105-47d0-a3d4-4a59679514da\",\n                \"resourceVersion\": \"182\",\n                \"creationTimestamp\": \"2021-08-04T23:05:17Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-5dc785954d-gl2zd\",\n                \"uid\": \"7fab2781-f172-4db0-be70-583d379e7d86\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"747\",\n                \"fieldPath\": \"spec.containers{coredns}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container coredns\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-61-222.eu-west-2.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-08-04T23:05:17Z\",\n            \"lastTimestamp\": \"2021-08-04T23:05:17Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-5dc785954d-gl2zd.16983cfbe2d28158\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"89113677-6a5d-4225-acc3-265bc462d481\",\n                \"resourceVersion\": \"183\",\n                \"creationTimestamp\": \"2021-08-04T23:05:17Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-5dc785954d-gl2zd\",\n                \"uid\": \"7fab2781-f172-4db0-be70-583d379e7d86\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"747\",\n                \"fieldPath\": \"spec.containers{coredns}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container coredns\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-61-222.eu-west-2.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-08-04T23:05:17Z\",\n            \"lastTimestamp\": \"2021-08-04T23:05:17Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-5dc785954d.16983ce3d6fae377\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"28953d15-11dd-4292-807c-7d845729ae67\",\n                \"resourceVersion\": \"54\",\n                
\"creationTimestamp\": \"2021-08-04T23:03:34Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"ReplicaSet\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-5dc785954d\",\n                \"uid\": \"9ed368ba-f7cb-4a52-ba70-175185db5665\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"414\"\n            },\n            \"reason\": \"SuccessfulCreate\",\n            \"message\": \"Created pod: coredns-5dc785954d-7bjbh\",\n            \"source\": {\n                \"component\": \"replicaset-controller\"\n            },\n            \"firstTimestamp\": \"2021-08-04T23:03:34Z\",\n            \"lastTimestamp\": \"2021-08-04T23:03:34Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-5dc785954d.16983cfb11b111df\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"0bad9e6a-d384-46e6-b06e-cc8c51d3eec6\",\n                \"resourceVersion\": \"162\",\n                \"creationTimestamp\": \"2021-08-04T23:05:14Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"ReplicaSet\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-5dc785954d\",\n                \"uid\": \"9ed368ba-f7cb-4a52-ba70-175185db5665\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"743\"\n            },\n            \"reason\": \"SuccessfulCreate\",\n            \"message\": \"Created pod: coredns-5dc785954d-gl2zd\",\n            \"source\": {\n                \"component\": \"replicaset-controller\"\n            },\n            \"firstTimestamp\": \"2021-08-04T23:05:14Z\",\n            \"lastTimestamp\": \"2021-08-04T23:05:14Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-autoscaler-84d4cfd89c-2g6cx.16983ce3d7359256\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"27cd2505-96c4-4a46-aeab-2c404aa872e8\",\n                \"resourceVersion\": \"75\",\n                \"creationTimestamp\": \"2021-08-04T23:03:34Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-autoscaler-84d4cfd89c-2g6cx\",\n                \"uid\": \"b67ab2fa-6619-47f2-b077-a79e2689ff4f\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"421\"\n            },\n            \"reason\": \"FailedScheduling\",\n            \"message\": \"0/1 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2021-08-04T23:03:34Z\",\n            \"lastTimestamp\": \"2021-08-04T23:04:51Z\",\n            \"count\": 5,\n            \"type\": \"Warning\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n             
   \"name\": \"coredns-autoscaler-84d4cfd89c-2g6cx.16983cf80e4733a2\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"d2476130-2883-462b-a78d-a2a0730309c5\",\n                \"resourceVersion\": \"77\",\n                \"creationTimestamp\": \"2021-08-04T23:05:01Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-autoscaler-84d4cfd89c-2g6cx\",\n                \"uid\": \"b67ab2fa-6619-47f2-b077-a79e2689ff4f\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"426\"\n            },\n            \"reason\": \"FailedScheduling\",\n            \"message\": \"0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 4 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2021-08-04T23:05:01Z\",\n            \"lastTimestamp\": \"2021-08-04T23:05:01Z\",\n            \"count\": 1,\n            \"type\": \"Warning\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-autoscaler-84d4cfd89c-2g6cx.16983cfa62d0918c\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"6b47551a-5fd8-4dd4-8a62-9b987f3e6a73\",\n                \"resourceVersion\": \"102\",\n                \"creationTimestamp\": \"2021-08-04T23:05:11Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-autoscaler-84d4cfd89c-2g6cx\",\n                \"uid\": \"b67ab2fa-6619-47f2-b077-a79e2689ff4f\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"684\"\n            },\n            \"reason\": \"Scheduled\",\n            \"message\": \"Successfully assigned kube-system/coredns-autoscaler-84d4cfd89c-2g6cx to ip-172-20-45-94.eu-west-2.compute.internal\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2021-08-04T23:05:11Z\",\n            \"lastTimestamp\": \"2021-08-04T23:05:11Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-autoscaler-84d4cfd89c-2g6cx.16983cfa99a2d2a2\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"9a9764f9-d95f-43c4-82a8-4eb58d9baaf2\",\n                \"resourceVersion\": \"212\",\n                \"creationTimestamp\": \"2021-08-04T23:05:21Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-autoscaler-84d4cfd89c-2g6cx\",\n                \"uid\": \"b67ab2fa-6619-47f2-b077-a79e2689ff4f\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"724\",\n                \"fieldPath\": \"spec.containers{autoscaler}\"\n            },\n            \"reason\": \"Pulling\",\n            \"message\": \"Pulling image 
\\\"k8s.gcr.io/cpa/cluster-proportional-autoscaler:1.8.4\\\"\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-45-94.eu-west-2.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-08-04T23:05:12Z\",\n            \"lastTimestamp\": \"2021-08-04T23:05:12Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-autoscaler-84d4cfd89c-2g6cx.16983cfaf706bcd8\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"bbc1a805-8e50-40d9-818b-1ab5f0dc2d4c\",\n                \"resourceVersion\": \"213\",\n                \"creationTimestamp\": \"2021-08-04T23:05:22Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-autoscaler-84d4cfd89c-2g6cx\",\n                \"uid\": \"b67ab2fa-6619-47f2-b077-a79e2689ff4f\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"724\",\n                \"fieldPath\": \"spec.containers{autoscaler}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Successfully pulled image \\\"k8s.gcr.io/cpa/cluster-proportional-autoscaler:1.8.4\\\" in 1.566815531s\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-45-94.eu-west-2.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-08-04T23:05:13Z\",\n            \"lastTimestamp\": \"2021-08-04T23:05:13Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-autoscaler-84d4cfd89c-2g6cx.16983cfafa6002dc\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"d94db3d4-6fb0-4b11-9e54-43ab22d10f10\",\n                \"resourceVersion\": \"214\",\n                \"creationTimestamp\": \"2021-08-04T23:05:22Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-autoscaler-84d4cfd89c-2g6cx\",\n                \"uid\": \"b67ab2fa-6619-47f2-b077-a79e2689ff4f\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"724\",\n                \"fieldPath\": \"spec.containers{autoscaler}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container autoscaler\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-45-94.eu-west-2.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-08-04T23:05:13Z\",\n            \"lastTimestamp\": \"2021-08-04T23:05:13Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-autoscaler-84d4cfd89c-2g6cx.16983cfb01f2b782\",\n                \"namespace\": \"kube-system\",\n                \"uid\": 
\"8e23d20a-de22-4f2c-ab8a-41fe28555eb0\",\n                \"resourceVersion\": \"215\",\n                \"creationTimestamp\": \"2021-08-04T23:05:22Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-autoscaler-84d4cfd89c-2g6cx\",\n                \"uid\": \"b67ab2fa-6619-47f2-b077-a79e2689ff4f\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"724\",\n                \"fieldPath\": \"spec.containers{autoscaler}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container autoscaler\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-45-94.eu-west-2.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-08-04T23:05:14Z\",\n            \"lastTimestamp\": \"2021-08-04T23:05:14Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-autoscaler-84d4cfd89c.16983ce3d8ee677d\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"c7ee38ec-3eef-4d8c-a99a-57b5287ea299\",\n                \"resourceVersion\": \"59\",\n                \"creationTimestamp\": \"2021-08-04T23:03:34Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"ReplicaSet\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-autoscaler-84d4cfd89c\",\n                \"uid\": \"12a96db0-1c3d-4fb0-bcf0-c858872ac43a\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"411\"\n            },\n            \"reason\": \"SuccessfulCreate\",\n            \"message\": \"Created pod: coredns-autoscaler-84d4cfd89c-2g6cx\",\n            \"source\": {\n                \"component\": \"replicaset-controller\"\n            },\n            \"firstTimestamp\": \"2021-08-04T23:03:34Z\",\n            \"lastTimestamp\": \"2021-08-04T23:03:34Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-autoscaler.16983ce3cef0033a\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"bffd6b7d-216b-4646-a888-e748752770ed\",\n                \"resourceVersion\": \"50\",\n                \"creationTimestamp\": \"2021-08-04T23:03:34Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Deployment\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-autoscaler\",\n                \"uid\": \"252873ba-9a05-471c-b5cc-6199b852dda1\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"354\"\n            },\n            \"reason\": \"ScalingReplicaSet\",\n            \"message\": \"Scaled up replica set coredns-autoscaler-84d4cfd89c to 1\",\n            \"source\": {\n                \"component\": \"deployment-controller\"\n            },\n            \"firstTimestamp\": \"2021-08-04T23:03:34Z\",\n            \"lastTimestamp\": \"2021-08-04T23:03:34Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n    
        \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns.16983ce3d1c37dbb\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"f42eb0de-e1dd-438d-83be-1866560d59df\",\n                \"resourceVersion\": \"51\",\n                \"creationTimestamp\": \"2021-08-04T23:03:34Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Deployment\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns\",\n                \"uid\": \"821b3f47-5162-43e5-a9bd-5308a0584fa1\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"342\"\n            },\n            \"reason\": \"ScalingReplicaSet\",\n            \"message\": \"Scaled up replica set coredns-5dc785954d to 1\",\n            \"source\": {\n                \"component\": \"deployment-controller\"\n            },\n            \"firstTimestamp\": \"2021-08-04T23:03:34Z\",\n            \"lastTimestamp\": \"2021-08-04T23:03:34Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns.16983cfb113165c8\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"eacf3a60-b633-4e68-a684-adaea8e0f91b\",\n                \"resourceVersion\": \"161\",\n                \"creationTimestamp\": \"2021-08-04T23:05:14Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Deployment\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns\",\n                \"uid\": \"821b3f47-5162-43e5-a9bd-5308a0584fa1\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"742\"\n            },\n            \"reason\": \"ScalingReplicaSet\",\n            \"message\": \"Scaled up replica set coredns-5dc785954d to 2\",\n            \"source\": {\n                \"component\": \"deployment-controller\"\n            },\n            \"firstTimestamp\": \"2021-08-04T23:05:14Z\",\n            \"lastTimestamp\": \"2021-08-04T23:05:14Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"dns-controller-7f4474bbb-jv7hm.16983ce3d7f33509\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"e6c63311-3c0d-47f9-bfb7-7faee43e71b6\",\n                \"resourceVersion\": \"55\",\n                \"creationTimestamp\": \"2021-08-04T23:03:34Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"dns-controller-7f4474bbb-jv7hm\",\n                \"uid\": \"d966147d-f50f-408d-bdab-64b6df29854e\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"419\"\n            },\n            \"reason\": \"Scheduled\",\n            \"message\": \"Successfully assigned kube-system/dns-controller-7f4474bbb-jv7hm to ip-172-20-63-249.eu-west-2.compute.internal\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n        
    },\n            \"firstTimestamp\": \"2021-08-04T23:03:34Z\",\n            \"lastTimestamp\": \"2021-08-04T23:03:34Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"dns-controller-7f4474bbb-jv7hm.16983ce4006d386d\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"b705e9cc-0092-43e9-a573-f966d9f1b5a4\",\n                \"resourceVersion\": \"62\",\n                \"creationTimestamp\": \"2021-08-04T23:03:35Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"dns-controller-7f4474bbb-jv7hm\",\n                \"uid\": \"d966147d-f50f-408d-bdab-64b6df29854e\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"424\",\n                \"fieldPath\": \"spec.containers{dns-controller}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"k8s.gcr.io/kops/dns-controller:1.21.0\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-63-249.eu-west-2.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-08-04T23:03:35Z\",\n            \"lastTimestamp\": \"2021-08-04T23:03:35Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"dns-controller-7f4474bbb-jv7hm.16983ce403649010\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"2f717047-5218-43ee-9cf9-97068f62dde8\",\n                \"resourceVersion\": \"63\",\n                \"creationTimestamp\": \"2021-08-04T23:03:35Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"dns-controller-7f4474bbb-jv7hm\",\n                \"uid\": \"d966147d-f50f-408d-bdab-64b6df29854e\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"424\",\n                \"fieldPath\": \"spec.containers{dns-controller}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container dns-controller\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-63-249.eu-west-2.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-08-04T23:03:35Z\",\n            \"lastTimestamp\": \"2021-08-04T23:03:35Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"dns-controller-7f4474bbb-jv7hm.16983ce40ba0af2d\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"95487453-cbf9-4a24-ac36-923f75e35ef7\",\n                \"resourceVersion\": \"64\",\n                \"creationTimestamp\": \"2021-08-04T23:03:35Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                
\"namespace\": \"kube-system\",\n                \"name\": \"dns-controller-7f4474bbb-jv7hm\",\n                \"uid\": \"d966147d-f50f-408d-bdab-64b6df29854e\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"424\",\n                \"fieldPath\": \"spec.containers{dns-controller}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container dns-controller\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-63-249.eu-west-2.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-08-04T23:03:35Z\",\n            \"lastTimestamp\": \"2021-08-04T23:03:35Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"dns-controller-7f4474bbb.16983ce3d8fc647f\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"04fab3af-8b5e-456f-8cd3-4b4b21b0e331\",\n                \"resourceVersion\": \"61\",\n                \"creationTimestamp\": \"2021-08-04T23:03:34Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"ReplicaSet\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"dns-controller-7f4474bbb\",\n                \"uid\": \"1c596903-5a43-4187-82e7-8c0e58b2275b\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"417\"\n            },\n            \"reason\": \"SuccessfulCreate\",\n            \"message\": \"Created pod: dns-controller-7f4474bbb-jv7hm\",\n            \"source\": {\n                \"component\": \"replicaset-controller\"\n            },\n            \"firstTimestamp\": \"2021-08-04T23:03:34Z\",\n            \"lastTimestamp\": \"2021-08-04T23:03:34Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"dns-controller.16983ce3d412e371\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"063c0668-b19b-4202-8c80-5724b579fa7c\",\n                \"resourceVersion\": \"56\",\n                \"creationTimestamp\": \"2021-08-04T23:03:34Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Deployment\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"dns-controller\",\n                \"uid\": \"2b31b817-dff2-440b-ba48-4faf150acc7e\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"362\"\n            },\n            \"reason\": \"ScalingReplicaSet\",\n            \"message\": \"Scaled up replica set dns-controller-7f4474bbb to 1\",\n            \"source\": {\n                \"component\": \"deployment-controller\"\n            },\n            \"firstTimestamp\": \"2021-08-04T23:03:34Z\",\n            \"lastTimestamp\": \"2021-08-04T23:03:34Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"etcd-manager-events-ip-172-20-63-249.eu-west-2.compute.internal.16983cd7e4cc96cb\",\n   
             \"namespace\": \"kube-system\",\n                \"uid\": \"5f7905c1-6754-4b13-84c7-7c9fe1ebba00\",\n                \"resourceVersion\": \"22\",\n                \"creationTimestamp\": \"2021-08-04T23:03:22Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"etcd-manager-events-ip-172-20-63-249.eu-west-2.compute.internal\",\n                \"uid\": \"24c3d6b1be4753071b3cacc8d805cfad\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{etcd-manager}\"\n            },\n            \"reason\": \"Pulling\",\n            \"message\": \"Pulling image \\\"k8s.gcr.io/etcdadm/etcd-manager:3.0.20210430\\\"\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-63-249.eu-west-2.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-08-04T23:02:43Z\",\n            \"lastTimestamp\": \"2021-08-04T23:02:43Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"etcd-manager-events-ip-172-20-63-249.eu-west-2.compute.internal.16983cda486d7642\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"d305e05e-8ba3-4ca6-8ed1-dd1b7a0fe5ac\",\n                \"resourceVersion\": \"41\",\n                \"creationTimestamp\": \"2021-08-04T23:03:26Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"etcd-manager-events-ip-172-20-63-249.eu-west-2.compute.internal\",\n                \"uid\": \"24c3d6b1be4753071b3cacc8d805cfad\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{etcd-manager}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Successfully pulled image \\\"k8s.gcr.io/etcdadm/etcd-manager:3.0.20210430\\\" in 10.261391807s\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-63-249.eu-west-2.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-08-04T23:02:53Z\",\n            \"lastTimestamp\": \"2021-08-04T23:02:53Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"etcd-manager-events-ip-172-20-63-249.eu-west-2.compute.internal.16983cda4b76b696\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"27528027-39f9-4f98-b937-41325c4ac479\",\n                \"resourceVersion\": \"42\",\n                \"creationTimestamp\": \"2021-08-04T23:03:26Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"etcd-manager-events-ip-172-20-63-249.eu-west-2.compute.internal\",\n                \"uid\": \"24c3d6b1be4753071b3cacc8d805cfad\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{etcd-manager}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": 
\"Created container etcd-manager\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-63-249.eu-west-2.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-08-04T23:02:53Z\",\n            \"lastTimestamp\": \"2021-08-04T23:02:53Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"etcd-manager-events-ip-172-20-63-249.eu-west-2.compute.internal.16983cda5126b8b0\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"22965f89-4034-48ab-8bfb-a3ddcbdd5e88\",\n                \"resourceVersion\": \"43\",\n                \"creationTimestamp\": \"2021-08-04T23:03:26Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"etcd-manager-events-ip-172-20-63-249.eu-west-2.compute.internal\",\n                \"uid\": \"24c3d6b1be4753071b3cacc8d805cfad\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{etcd-manager}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container etcd-manager\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-63-249.eu-west-2.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-08-04T23:02:53Z\",\n            \"lastTimestamp\": \"2021-08-04T23:02:53Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"etcd-manager-main-ip-172-20-63-249.eu-west-2.compute.internal.16983cd7d3c40aa8\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"81d6a095-e9aa-4d8c-9b54-c38708597b11\",\n                \"resourceVersion\": \"20\",\n                \"creationTimestamp\": \"2021-08-04T23:03:21Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"etcd-manager-main-ip-172-20-63-249.eu-west-2.compute.internal\",\n                \"uid\": \"95303ad9ca09f0b0624a98bb1c3d5670\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{etcd-manager}\"\n            },\n            \"reason\": \"Pulling\",\n            \"message\": \"Pulling image \\\"k8s.gcr.io/etcdadm/etcd-manager:3.0.20210430\\\"\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-63-249.eu-west-2.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-08-04T23:02:42Z\",\n            \"lastTimestamp\": \"2021-08-04T23:02:42Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"etcd-manager-main-ip-172-20-63-249.eu-west-2.compute.internal.16983cda14ec0f25\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"64b0e79f-8005-4a6c-aac3-92e0541782d3\",\n  
              \"resourceVersion\": \"35\",\n                \"creationTimestamp\": \"2021-08-04T23:03:24Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"etcd-manager-main-ip-172-20-63-249.eu-west-2.compute.internal\",\n                \"uid\": \"95303ad9ca09f0b0624a98bb1c3d5670\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{etcd-manager}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Successfully pulled image \\\"k8s.gcr.io/etcdadm/etcd-manager:3.0.20210430\\\" in 9.683060456s\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-63-249.eu-west-2.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-08-04T23:02:52Z\",\n            \"lastTimestamp\": \"2021-08-04T23:02:52Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"etcd-manager-main-ip-172-20-63-249.eu-west-2.compute.internal.16983cda17fd9942\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"ac00d7ff-339e-4702-8899-a3b65ff2d7ec\",\n                \"resourceVersion\": \"36\",\n                \"creationTimestamp\": \"2021-08-04T23:03:25Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"etcd-manager-main-ip-172-20-63-249.eu-west-2.compute.internal\",\n                \"uid\": \"95303ad9ca09f0b0624a98bb1c3d5670\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{etcd-manager}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container etcd-manager\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-63-249.eu-west-2.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-08-04T23:02:52Z\",\n            \"lastTimestamp\": \"2021-08-04T23:02:52Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"etcd-manager-main-ip-172-20-63-249.eu-west-2.compute.internal.16983cda1d738622\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"aa971c38-3f06-4201-873a-07976d8ce5a3\",\n                \"resourceVersion\": \"37\",\n                \"creationTimestamp\": \"2021-08-04T23:03:25Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"etcd-manager-main-ip-172-20-63-249.eu-west-2.compute.internal\",\n                \"uid\": \"95303ad9ca09f0b0624a98bb1c3d5670\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{etcd-manager}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container etcd-manager\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": 
\"ip-172-20-63-249.eu-west-2.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-08-04T23:02:52Z\",\n            \"lastTimestamp\": \"2021-08-04T23:02:52Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kops-controller-9n9bw.16983ce3ce357dd3\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"43426d60-9fac-4471-863e-4bb8ee01dcce\",\n                \"resourceVersion\": \"49\",\n                \"creationTimestamp\": \"2021-08-04T23:03:34Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kops-controller-9n9bw\",\n                \"uid\": \"790991d3-5c26-4686-95c6-f38133e4a322\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"405\"\n            },\n            \"reason\": \"Scheduled\",\n            \"message\": \"Successfully assigned kube-system/kops-controller-9n9bw to ip-172-20-63-249.eu-west-2.compute.internal\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2021-08-04T23:03:34Z\",\n            \"lastTimestamp\": \"2021-08-04T23:03:34Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kops-controller-9n9bw.16983ce3db17e04a\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"f7e38885-e3b4-4c1b-9009-de19bfa94c78\",\n                \"resourceVersion\": \"58\",\n                \"creationTimestamp\": \"2021-08-04T23:03:34Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kops-controller-9n9bw\",\n                \"uid\": \"790991d3-5c26-4686-95c6-f38133e4a322\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"409\"\n            },\n            \"reason\": \"FailedMount\",\n            \"message\": \"MountVolume.SetUp failed for volume \\\"kube-api-access-rv26j\\\" : configmap \\\"kube-root-ca.crt\\\" not found\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-63-249.eu-west-2.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-08-04T23:03:34Z\",\n            \"lastTimestamp\": \"2021-08-04T23:03:34Z\",\n            \"count\": 1,\n            \"type\": \"Warning\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kops-controller-9n9bw.16983ce41b4a76f4\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"a3f5f657-5619-476f-b3ea-7b1410d80024\",\n                \"resourceVersion\": \"65\",\n                \"creationTimestamp\": \"2021-08-04T23:03:35Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kops-controller-9n9bw\",\n             
   \"uid\": \"790991d3-5c26-4686-95c6-f38133e4a322\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"409\",\n                \"fieldPath\": \"spec.containers{kops-controller}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"k8s.gcr.io/kops/kops-controller:1.21.0\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-63-249.eu-west-2.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-08-04T23:03:35Z\",\n            \"lastTimestamp\": \"2021-08-04T23:03:35Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kops-controller-9n9bw.16983ce41e96019d\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"751c40d8-83b7-419c-bb6e-84414c60c4e9\",\n                \"resourceVersion\": \"66\",\n                \"creationTimestamp\": \"2021-08-04T23:03:35Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kops-controller-9n9bw\",\n                \"uid\": \"790991d3-5c26-4686-95c6-f38133e4a322\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"409\",\n                \"fieldPath\": \"spec.containers{kops-controller}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container kops-controller\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-63-249.eu-west-2.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-08-04T23:03:35Z\",\n            \"lastTimestamp\": \"2021-08-04T23:03:35Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kops-controller-9n9bw.16983ce4268c2b3d\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"759cf7bc-c97b-4e96-8126-cebcc2814ce9\",\n                \"resourceVersion\": \"67\",\n                \"creationTimestamp\": \"2021-08-04T23:03:35Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kops-controller-9n9bw\",\n                \"uid\": \"790991d3-5c26-4686-95c6-f38133e4a322\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"409\",\n                \"fieldPath\": \"spec.containers{kops-controller}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container kops-controller\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-63-249.eu-west-2.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-08-04T23:03:35Z\",\n            \"lastTimestamp\": \"2021-08-04T23:03:35Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n       
 {\n            \"metadata\": {\n                \"name\": \"kops-controller-leader.16983ce44bc808fa\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"4018ae55-38eb-4914-bf15-111b3b074c9c\",\n                \"resourceVersion\": \"70\",\n                \"creationTimestamp\": \"2021-08-04T23:03:36Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"ConfigMap\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kops-controller-leader\",\n                \"uid\": \"baf551b0-ba12-4e41-9fca-6bc58e5ca630\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"455\"\n            },\n            \"reason\": \"LeaderElection\",\n            \"message\": \"ip-172-20-63-249_f165e852-c23f-47ba-874b-39777b0799eb became leader\",\n            \"source\": {\n                \"component\": \"ip-172-20-63-249_f165e852-c23f-47ba-874b-39777b0799eb\"\n            },\n            \"firstTimestamp\": \"2021-08-04T23:03:36Z\",\n            \"lastTimestamp\": \"2021-08-04T23:03:36Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kops-controller.16983ce3cceb6b28\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"0c0301b7-70f2-421f-b263-721404c8e0d4\",\n                \"resourceVersion\": \"48\",\n                \"creationTimestamp\": \"2021-08-04T23:03:34Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"DaemonSet\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kops-controller\",\n                \"uid\": \"a6af55ba-600d-47e8-a35c-7f58de188170\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"400\"\n            },\n            \"reason\": \"SuccessfulCreate\",\n            \"message\": \"Created pod: kops-controller-9n9bw\",\n            \"source\": {\n                \"component\": \"daemonset-controller\"\n            },\n            \"firstTimestamp\": \"2021-08-04T23:03:34Z\",\n            \"lastTimestamp\": \"2021-08-04T23:03:34Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-apiserver-ip-172-20-63-249.eu-west-2.compute.internal.16983cd811664e53\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"05327159-9a94-4864-841b-b2790cb814ea\",\n                \"resourceVersion\": \"44\",\n                \"creationTimestamp\": \"2021-08-04T23:03:22Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-apiserver-ip-172-20-63-249.eu-west-2.compute.internal\",\n                \"uid\": \"1bae6f611dea53c74fa24608f1b8d1fb\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-apiserver}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"k8s.gcr.io/kube-apiserver-amd64:v1.21.3\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                
\"host\": \"ip-172-20-63-249.eu-west-2.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-08-04T23:02:43Z\",\n            \"lastTimestamp\": \"2021-08-04T23:03:06Z\",\n            \"count\": 2,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-apiserver-ip-172-20-63-249.eu-west-2.compute.internal.16983cd83b534179\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"6eb61bc9-969a-4bfa-8869-487baf81558c\",\n                \"resourceVersion\": \"45\",\n                \"creationTimestamp\": \"2021-08-04T23:03:23Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-apiserver-ip-172-20-63-249.eu-west-2.compute.internal\",\n                \"uid\": \"1bae6f611dea53c74fa24608f1b8d1fb\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-apiserver}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container kube-apiserver\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-63-249.eu-west-2.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-08-04T23:02:44Z\",\n            \"lastTimestamp\": \"2021-08-04T23:03:06Z\",\n            \"count\": 2,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-apiserver-ip-172-20-63-249.eu-west-2.compute.internal.16983cd84b049dd5\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"fc754e35-757d-43bc-a665-cafd53e6ae47\",\n                \"resourceVersion\": \"46\",\n                \"creationTimestamp\": \"2021-08-04T23:03:23Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-apiserver-ip-172-20-63-249.eu-west-2.compute.internal\",\n                \"uid\": \"1bae6f611dea53c74fa24608f1b8d1fb\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-apiserver}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container kube-apiserver\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-63-249.eu-west-2.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-08-04T23:02:44Z\",\n            \"lastTimestamp\": \"2021-08-04T23:03:06Z\",\n            \"count\": 2,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-apiserver-ip-172-20-63-249.eu-west-2.compute.internal.16983cd84b498da5\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"e03dbd05-a309-4db7-9945-8c2d54fcf9c9\",\n                \"resourceVersion\": \"31\",\n                \"creationTimestamp\": \"2021-08-04T23:03:24Z\"\n            },\n            \"involvedObject\": {\n            
    \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-apiserver-ip-172-20-63-249.eu-west-2.compute.internal\",\n                \"uid\": \"1bae6f611dea53c74fa24608f1b8d1fb\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{healthcheck}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"k8s.gcr.io/kops/kube-apiserver-healthcheck:1.21.0\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-63-249.eu-west-2.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-08-04T23:02:44Z\",\n            \"lastTimestamp\": \"2021-08-04T23:02:44Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-apiserver-ip-172-20-63-249.eu-west-2.compute.internal.16983cd857daaed4\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"57fdd327-3ef2-442f-b2f3-47dd4d5c2821\",\n                \"resourceVersion\": \"32\",\n                \"creationTimestamp\": \"2021-08-04T23:03:24Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-apiserver-ip-172-20-63-249.eu-west-2.compute.internal\",\n                \"uid\": \"1bae6f611dea53c74fa24608f1b8d1fb\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{healthcheck}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container healthcheck\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-63-249.eu-west-2.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-08-04T23:02:45Z\",\n            \"lastTimestamp\": \"2021-08-04T23:02:45Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-apiserver-ip-172-20-63-249.eu-west-2.compute.internal.16983cd873675398\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"f37c4968-5085-43f4-aabf-b348f2e3480c\",\n                \"resourceVersion\": \"34\",\n                \"creationTimestamp\": \"2021-08-04T23:03:24Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-apiserver-ip-172-20-63-249.eu-west-2.compute.internal\",\n                \"uid\": \"1bae6f611dea53c74fa24608f1b8d1fb\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{healthcheck}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container healthcheck\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-63-249.eu-west-2.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-08-04T23:02:45Z\",\n            \"lastTimestamp\": \"2021-08-04T23:02:45Z\",\n            \"count\": 1,\n         
   \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-controller-manager-ip-172-20-63-249.eu-west-2.compute.internal.16983cd80a7ef490\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"c4e3314e-ee7d-4a86-b6ae-1ab9074358e4\",\n                \"resourceVersion\": \"23\",\n                \"creationTimestamp\": \"2021-08-04T23:03:22Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-controller-manager-ip-172-20-63-249.eu-west-2.compute.internal\",\n                \"uid\": \"05c48c2ac9fe4ee3ea588b218e90188b\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-controller-manager}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"k8s.gcr.io/kube-controller-manager-amd64:v1.21.3\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-63-249.eu-west-2.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-08-04T23:02:43Z\",\n            \"lastTimestamp\": \"2021-08-04T23:02:43Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-controller-manager-ip-172-20-63-249.eu-west-2.compute.internal.16983cd83b1aa132\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"d792dacb-775d-4dd9-8555-09aeaf1b0fe9\",\n                \"resourceVersion\": \"26\",\n                \"creationTimestamp\": \"2021-08-04T23:03:23Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-controller-manager-ip-172-20-63-249.eu-west-2.compute.internal\",\n                \"uid\": \"05c48c2ac9fe4ee3ea588b218e90188b\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-controller-manager}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container kube-controller-manager\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-63-249.eu-west-2.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-08-04T23:02:44Z\",\n            \"lastTimestamp\": \"2021-08-04T23:02:44Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-controller-manager-ip-172-20-63-249.eu-west-2.compute.internal.16983cd8481260be\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"c73b7cc8-84ec-4bd7-97cd-5d056d755eb9\",\n                \"resourceVersion\": \"29\",\n                \"creationTimestamp\": \"2021-08-04T23:03:23Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                
\"name\": \"kube-controller-manager-ip-172-20-63-249.eu-west-2.compute.internal\",\n                \"uid\": \"05c48c2ac9fe4ee3ea588b218e90188b\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-controller-manager}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container kube-controller-manager\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-63-249.eu-west-2.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-08-04T23:02:44Z\",\n            \"lastTimestamp\": \"2021-08-04T23:02:44Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-controller-manager.16983ce0526cd53a\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"fa8cca95-6cd9-472c-bfbf-b25e8faed92c\",\n                \"resourceVersion\": \"6\",\n                \"creationTimestamp\": \"2021-08-04T23:03:19Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Lease\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-controller-manager\",\n                \"uid\": \"bca35568-a75f-4afd-a835-cf10044b53f9\",\n                \"apiVersion\": \"coordination.k8s.io/v1\",\n                \"resourceVersion\": \"214\"\n            },\n            \"reason\": \"LeaderElection\",\n            \"message\": \"ip-172-20-63-249_598946db-cb77-4097-883a-ae18c6b20bbf became leader\",\n            \"source\": {\n                \"component\": \"kube-controller-manager\"\n            },\n            \"firstTimestamp\": \"2021-08-04T23:03:19Z\",\n            \"lastTimestamp\": \"2021-08-04T23:03:19Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-dns.16983ce3ce72175d\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"2c4f5573-087d-47f8-8eeb-d7a9dae01a88\",\n                \"resourceVersion\": \"60\",\n                \"creationTimestamp\": \"2021-08-04T23:03:34Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"PodDisruptionBudget\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-dns\",\n                \"uid\": \"eadaa3f1-39ff-452f-be8a-a97e2618bc81\",\n                \"apiVersion\": \"policy/v1\",\n                \"resourceVersion\": \"346\"\n            },\n            \"reason\": \"NoPods\",\n            \"message\": \"No matching pods found\",\n            \"source\": {\n                \"component\": \"controllermanager\"\n            },\n            \"firstTimestamp\": \"2021-08-04T23:03:34Z\",\n            \"lastTimestamp\": \"2021-08-04T23:03:34Z\",\n            \"count\": 2,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-ip-172-20-45-94.eu-west-2.compute.internal.16983cea0095d2cf\",\n                \"namespace\": \"kube-system\",\n                
\"uid\": \"1155d87c-896a-4eb8-8ea6-ba8bc9cd3fe7\",\n                \"resourceVersion\": \"198\",\n                \"creationTimestamp\": \"2021-08-04T23:05:19Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-ip-172-20-45-94.eu-west-2.compute.internal\",\n                \"uid\": \"d34738fcc77cfaa58eb30a19e398fcf0\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"k8s.gcr.io/kube-proxy-amd64:v1.21.3\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-45-94.eu-west-2.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-08-04T23:04:01Z\",\n            \"lastTimestamp\": \"2021-08-04T23:04:01Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-ip-172-20-45-94.eu-west-2.compute.internal.16983cea0413c005\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"d515f6bc-f877-4619-8a06-5a08bcf7ba10\",\n                \"resourceVersion\": \"199\",\n                \"creationTimestamp\": \"2021-08-04T23:05:19Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-ip-172-20-45-94.eu-west-2.compute.internal\",\n                \"uid\": \"d34738fcc77cfaa58eb30a19e398fcf0\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container kube-proxy\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-45-94.eu-west-2.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-08-04T23:04:01Z\",\n            \"lastTimestamp\": \"2021-08-04T23:04:01Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-ip-172-20-45-94.eu-west-2.compute.internal.16983cea0b1cc320\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"758c9374-de0d-433c-87fc-034bace7154a\",\n                \"resourceVersion\": \"200\",\n                \"creationTimestamp\": \"2021-08-04T23:05:19Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-ip-172-20-45-94.eu-west-2.compute.internal\",\n                \"uid\": \"d34738fcc77cfaa58eb30a19e398fcf0\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container kube-proxy\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": 
\"ip-172-20-45-94.eu-west-2.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-08-04T23:04:01Z\",\n            \"lastTimestamp\": \"2021-08-04T23:04:01Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-ip-172-20-46-233.eu-west-2.compute.internal.16983cea01d85fbd\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"2314824d-0a20-403d-860e-14f4e4284f10\",\n                \"resourceVersion\": \"148\",\n                \"creationTimestamp\": \"2021-08-04T23:05:13Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-ip-172-20-46-233.eu-west-2.compute.internal\",\n                \"uid\": \"7c87edd980d402a0abd94fa06d0ee773\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"k8s.gcr.io/kube-proxy-amd64:v1.21.3\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-46-233.eu-west-2.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-08-04T23:04:01Z\",\n            \"lastTimestamp\": \"2021-08-04T23:04:01Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-ip-172-20-46-233.eu-west-2.compute.internal.16983cea05d45af8\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"705b576c-0781-4a9a-bc08-575f38aa25fb\",\n                \"resourceVersion\": \"151\",\n                \"creationTimestamp\": \"2021-08-04T23:05:13Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-ip-172-20-46-233.eu-west-2.compute.internal\",\n                \"uid\": \"7c87edd980d402a0abd94fa06d0ee773\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container kube-proxy\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-46-233.eu-west-2.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-08-04T23:04:01Z\",\n            \"lastTimestamp\": \"2021-08-04T23:04:01Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-ip-172-20-46-233.eu-west-2.compute.internal.16983cea0cfd3d4c\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"d6ec66d5-a946-4b83-b0f2-5a1384762fba\",\n                \"resourceVersion\": \"154\",\n                \"creationTimestamp\": \"2021-08-04T23:05:13Z\"\n            },\n            \"involvedObject\": 
{\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-ip-172-20-46-233.eu-west-2.compute.internal\",\n                \"uid\": \"7c87edd980d402a0abd94fa06d0ee773\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container kube-proxy\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-46-233.eu-west-2.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-08-04T23:04:01Z\",\n            \"lastTimestamp\": \"2021-08-04T23:04:01Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-ip-172-20-61-222.eu-west-2.compute.internal.16983cea084b84f8\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"f7b9f0e1-5665-4127-a460-6bad2e29bdf2\",\n                \"resourceVersion\": \"123\",\n                \"creationTimestamp\": \"2021-08-04T23:05:12Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-ip-172-20-61-222.eu-west-2.compute.internal\",\n                \"uid\": \"269817a607141fd48c3ece70794b26aa\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"k8s.gcr.io/kube-proxy-amd64:v1.21.3\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-61-222.eu-west-2.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-08-04T23:04:01Z\",\n            \"lastTimestamp\": \"2021-08-04T23:04:01Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-ip-172-20-61-222.eu-west-2.compute.internal.16983cea0c35815d\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"e1686400-bc9d-44d3-a650-e82d102eb637\",\n                \"resourceVersion\": \"135\",\n                \"creationTimestamp\": \"2021-08-04T23:05:12Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-ip-172-20-61-222.eu-west-2.compute.internal\",\n                \"uid\": \"269817a607141fd48c3ece70794b26aa\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container kube-proxy\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-61-222.eu-west-2.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-08-04T23:04:01Z\",\n            \"lastTimestamp\": \"2021-08-04T23:04:01Z\",\n            \"count\": 1,\n            \"type\": 
\"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-ip-172-20-61-222.eu-west-2.compute.internal.16983cea14e824fc\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"858f1022-b417-4a9d-980d-d731b8aee34b\",\n                \"resourceVersion\": \"138\",\n                \"creationTimestamp\": \"2021-08-04T23:05:12Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-ip-172-20-61-222.eu-west-2.compute.internal\",\n                \"uid\": \"269817a607141fd48c3ece70794b26aa\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container kube-proxy\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-61-222.eu-west-2.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-08-04T23:04:01Z\",\n            \"lastTimestamp\": \"2021-08-04T23:04:01Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-ip-172-20-63-249.eu-west-2.compute.internal.16983cd7d6ca93fd\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"ff0814cb-5f41-4806-bd9a-92e9e4c7d2a8\",\n                \"resourceVersion\": \"21\",\n                \"creationTimestamp\": \"2021-08-04T23:03:22Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-ip-172-20-63-249.eu-west-2.compute.internal\",\n                \"uid\": \"c694321fe7e7d4a6a225c5ca1b1782fe\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Pulling\",\n            \"message\": \"Pulling image \\\"k8s.gcr.io/kube-proxy-amd64:v1.21.3\\\"\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-63-249.eu-west-2.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-08-04T23:02:43Z\",\n            \"lastTimestamp\": \"2021-08-04T23:02:43Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-ip-172-20-63-249.eu-west-2.compute.internal.16983cda357ad876\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"6a40f5b6-74f4-45d6-a4fc-8decbca7aacd\",\n                \"resourceVersion\": \"38\",\n                \"creationTimestamp\": \"2021-08-04T23:03:25Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-ip-172-20-63-249.eu-west-2.compute.internal\",\n                \"uid\": \"c694321fe7e7d4a6a225c5ca1b1782fe\",\n                
\"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Successfully pulled image \\\"k8s.gcr.io/kube-proxy-amd64:v1.21.3\\\" in 10.178520541s\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-63-249.eu-west-2.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-08-04T23:02:53Z\",\n            \"lastTimestamp\": \"2021-08-04T23:02:53Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-ip-172-20-63-249.eu-west-2.compute.internal.16983cda38679cea\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"acc95198-ca79-433a-883c-b3c8b59625a0\",\n                \"resourceVersion\": \"39\",\n                \"creationTimestamp\": \"2021-08-04T23:03:25Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-ip-172-20-63-249.eu-west-2.compute.internal\",\n                \"uid\": \"c694321fe7e7d4a6a225c5ca1b1782fe\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container kube-proxy\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-63-249.eu-west-2.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-08-04T23:02:53Z\",\n            \"lastTimestamp\": \"2021-08-04T23:02:53Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-ip-172-20-63-249.eu-west-2.compute.internal.16983cda3ded2f90\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"a5fc9361-c28b-4b9a-9f32-77af3d15fa2e\",\n                \"resourceVersion\": \"40\",\n                \"creationTimestamp\": \"2021-08-04T23:03:25Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-ip-172-20-63-249.eu-west-2.compute.internal\",\n                \"uid\": \"c694321fe7e7d4a6a225c5ca1b1782fe\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container kube-proxy\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-63-249.eu-west-2.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-08-04T23:02:53Z\",\n            \"lastTimestamp\": \"2021-08-04T23:02:53Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": 
\"kube-proxy-ip-172-20-63-4.eu-west-2.compute.internal.16983cea128f692b\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"254303ba-cf2d-43cb-9651-d585632cd9eb\",\n                \"resourceVersion\": \"101\",\n                \"creationTimestamp\": \"2021-08-04T23:05:11Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-ip-172-20-63-4.eu-west-2.compute.internal\",\n                \"uid\": \"a445c4dcd73240c0b1e1cd43ddb2411a\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"k8s.gcr.io/kube-proxy-amd64:v1.21.3\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-63-4.eu-west-2.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-08-04T23:04:01Z\",\n            \"lastTimestamp\": \"2021-08-04T23:04:01Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-ip-172-20-63-4.eu-west-2.compute.internal.16983cea16a0829d\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"18882174-a037-4a9b-9977-9238ba071527\",\n                \"resourceVersion\": \"113\",\n                \"creationTimestamp\": \"2021-08-04T23:05:11Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-ip-172-20-63-4.eu-west-2.compute.internal\",\n                \"uid\": \"a445c4dcd73240c0b1e1cd43ddb2411a\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container kube-proxy\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-63-4.eu-west-2.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-08-04T23:04:01Z\",\n            \"lastTimestamp\": \"2021-08-04T23:04:01Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-ip-172-20-63-4.eu-west-2.compute.internal.16983cea20a2ff4e\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"1bd624de-668a-472b-ae89-3e7b6c4f5172\",\n                \"resourceVersion\": \"115\",\n                \"creationTimestamp\": \"2021-08-04T23:05:11Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-ip-172-20-63-4.eu-west-2.compute.internal\",\n                \"uid\": \"a445c4dcd73240c0b1e1cd43ddb2411a\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container 
kube-proxy\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-63-4.eu-west-2.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-08-04T23:04:01Z\",\n            \"lastTimestamp\": \"2021-08-04T23:04:01Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-scheduler-ip-172-20-63-249.eu-west-2.compute.internal.16983cd815ab47a0\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"d8ea9d1d-6775-48c0-9cf2-522e7f70999e\",\n                \"resourceVersion\": \"25\",\n                \"creationTimestamp\": \"2021-08-04T23:03:22Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-scheduler-ip-172-20-63-249.eu-west-2.compute.internal\",\n                \"uid\": \"76009e83dc69bb11fe76a12059a2f8ec\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-scheduler}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"k8s.gcr.io/kube-scheduler-amd64:v1.21.3\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-63-249.eu-west-2.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-08-04T23:02:44Z\",\n            \"lastTimestamp\": \"2021-08-04T23:02:44Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-scheduler-ip-172-20-63-249.eu-west-2.compute.internal.16983cd83d1b9670\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"9359753c-1bf3-4bd1-8447-b72e99d8a6e9\",\n                \"resourceVersion\": \"28\",\n                \"creationTimestamp\": \"2021-08-04T23:03:23Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-scheduler-ip-172-20-63-249.eu-west-2.compute.internal\",\n                \"uid\": \"76009e83dc69bb11fe76a12059a2f8ec\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-scheduler}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container kube-scheduler\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-63-249.eu-west-2.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-08-04T23:02:44Z\",\n            \"lastTimestamp\": \"2021-08-04T23:02:44Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-scheduler-ip-172-20-63-249.eu-west-2.compute.internal.16983cd85b92fc98\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"d0350fcf-8cb7-49a4-91b3-fe2ac5c691d8\",\n                
\"resourceVersion\": \"33\",\n                \"creationTimestamp\": \"2021-08-04T23:03:24Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-scheduler-ip-172-20-63-249.eu-west-2.compute.internal\",\n                \"uid\": \"76009e83dc69bb11fe76a12059a2f8ec\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-scheduler}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container kube-scheduler\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-63-249.eu-west-2.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-08-04T23:02:45Z\",\n            \"lastTimestamp\": \"2021-08-04T23:02:45Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-scheduler.16983ce0c26f29e7\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"753e2386-00ac-4214-9d33-fed2c0ed370f\",\n                \"resourceVersion\": \"17\",\n                \"creationTimestamp\": \"2021-08-04T23:03:21Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Lease\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-scheduler\",\n                \"uid\": \"b001f8dd-03d0-430f-83c4-b3e0cee5f66d\",\n                \"apiVersion\": \"coordination.k8s.io/v1\",\n                \"resourceVersion\": \"259\"\n            },\n            \"reason\": \"LeaderElection\",\n            \"message\": \"ip-172-20-63-249_785aa5cf-2c66-4f99-9c0c-a6e96ed7c486 became leader\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2021-08-04T23:03:21Z\",\n            \"lastTimestamp\": \"2021-08-04T23:03:21Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        }\n    ]\n}\n{\n    \"kind\": \"ReplicationControllerList\",\n    \"apiVersion\": \"v1\",\n    \"metadata\": {\n        \"resourceVersion\": \"6112\"\n    },\n    \"items\": []\n}\n{\n    \"kind\": \"ServiceList\",\n    \"apiVersion\": \"v1\",\n    \"metadata\": {\n        \"resourceVersion\": \"6114\"\n    },\n    \"items\": [\n        {\n            \"metadata\": {\n                \"name\": \"kube-dns\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"5bb5f8e1-1133-452c-b216-2d0fd1a840d7\",\n                \"resourceVersion\": \"344\",\n                \"creationTimestamp\": \"2021-08-04T23:03:24Z\",\n                \"labels\": {\n                    \"addon.kops.k8s.io/name\": \"coredns.addons.k8s.io\",\n                    \"addon.kops.k8s.io/version\": \"1.8.3-kops.3\",\n                    \"app.kubernetes.io/managed-by\": \"kops\",\n                    \"k8s-addon\": \"coredns.addons.k8s.io\",\n                    \"k8s-app\": \"kube-dns\",\n                    \"kubernetes.io/cluster-service\": \"true\",\n                    \"kubernetes.io/name\": \"CoreDNS\"\n                },\n                \"annotations\": {\n                    
\"kubectl.kubernetes.io/last-applied-configuration\": \"{\\\"apiVersion\\\":\\\"v1\\\",\\\"kind\\\":\\\"Service\\\",\\\"metadata\\\":{\\\"annotations\\\":{\\\"prometheus.io/port\\\":\\\"9153\\\",\\\"prometheus.io/scrape\\\":\\\"true\\\"},\\\"creationTimestamp\\\":null,\\\"labels\\\":{\\\"addon.kops.k8s.io/name\\\":\\\"coredns.addons.k8s.io\\\",\\\"addon.kops.k8s.io/version\\\":\\\"1.8.3-kops.3\\\",\\\"app.kubernetes.io/managed-by\\\":\\\"kops\\\",\\\"k8s-addon\\\":\\\"coredns.addons.k8s.io\\\",\\\"k8s-app\\\":\\\"kube-dns\\\",\\\"kubernetes.io/cluster-service\\\":\\\"true\\\",\\\"kubernetes.io/name\\\":\\\"CoreDNS\\\"},\\\"name\\\":\\\"kube-dns\\\",\\\"namespace\\\":\\\"kube-system\\\",\\\"resourceVersion\\\":\\\"0\\\"},\\\"spec\\\":{\\\"clusterIP\\\":\\\"100.64.0.10\\\",\\\"ports\\\":[{\\\"name\\\":\\\"dns\\\",\\\"port\\\":53,\\\"protocol\\\":\\\"UDP\\\"},{\\\"name\\\":\\\"dns-tcp\\\",\\\"port\\\":53,\\\"protocol\\\":\\\"TCP\\\"},{\\\"name\\\":\\\"metrics\\\",\\\"port\\\":9153,\\\"protocol\\\":\\\"TCP\\\"}],\\\"selector\\\":{\\\"k8s-app\\\":\\\"kube-dns\\\"}}}\\n\",\n                    \"prometheus.io/port\": \"9153\",\n                    \"prometheus.io/scrape\": \"true\"\n                }\n            },\n            \"spec\": {\n                \"ports\": [\n                    {\n                        \"name\": \"dns\",\n                        \"protocol\": \"UDP\",\n                        \"port\": 53,\n                        \"targetPort\": 53\n                    },\n                    {\n                        \"name\": \"dns-tcp\",\n                        \"protocol\": \"TCP\",\n                        \"port\": 53,\n                        \"targetPort\": 53\n                    },\n                    {\n                        \"name\": \"metrics\",\n                        \"protocol\": \"TCP\",\n                        \"port\": 9153,\n                        \"targetPort\": 9153\n                    }\n                ],\n                \"selector\": {\n                    \"k8s-app\": \"kube-dns\"\n                },\n                \"clusterIP\": \"100.64.0.10\",\n                \"clusterIPs\": [\n                    \"100.64.0.10\"\n                ],\n                \"type\": \"ClusterIP\",\n                \"sessionAffinity\": \"None\",\n                \"ipFamilies\": [\n                    \"IPv4\"\n                ],\n                \"ipFamilyPolicy\": \"SingleStack\"\n            },\n            \"status\": {\n                \"loadBalancer\": {}\n            }\n        }\n    ]\n}\n{\n    \"kind\": \"DaemonSetList\",\n    \"apiVersion\": \"apps/v1\",\n    \"metadata\": {\n        \"resourceVersion\": \"6114\"\n    },\n    \"items\": [\n        {\n            \"metadata\": {\n                \"name\": \"kops-controller\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"a6af55ba-600d-47e8-a35c-7f58de188170\",\n                \"resourceVersion\": \"451\",\n                \"generation\": 1,\n                \"creationTimestamp\": \"2021-08-04T23:03:23Z\",\n                \"labels\": {\n                    \"addon.kops.k8s.io/name\": \"kops-controller.addons.k8s.io\",\n                    \"addon.kops.k8s.io/version\": \"1.21.0\",\n                    \"app.kubernetes.io/managed-by\": \"kops\",\n                    \"k8s-addon\": \"kops-controller.addons.k8s.io\",\n                    \"k8s-app\": \"kops-controller\",\n                    \"version\": \"v1.21.0\"\n                },\n                \"annotations\": {\n   
                 \"deprecated.daemonset.template.generation\": \"1\",\n                    \"kubectl.kubernetes.io/last-applied-configuration\": \"{\\\"apiVersion\\\":\\\"apps/v1\\\",\\\"kind\\\":\\\"DaemonSet\\\",\\\"metadata\\\":{\\\"annotations\\\":{},\\\"creationTimestamp\\\":null,\\\"labels\\\":{\\\"addon.kops.k8s.io/name\\\":\\\"kops-controller.addons.k8s.io\\\",\\\"addon.kops.k8s.io/version\\\":\\\"1.21.0\\\",\\\"app.kubernetes.io/managed-by\\\":\\\"kops\\\",\\\"k8s-addon\\\":\\\"kops-controller.addons.k8s.io\\\",\\\"k8s-app\\\":\\\"kops-controller\\\",\\\"version\\\":\\\"v1.21.0\\\"},\\\"name\\\":\\\"kops-controller\\\",\\\"namespace\\\":\\\"kube-system\\\"},\\\"spec\\\":{\\\"selector\\\":{\\\"matchLabels\\\":{\\\"k8s-app\\\":\\\"kops-controller\\\"}},\\\"template\\\":{\\\"metadata\\\":{\\\"annotations\\\":{\\\"dns.alpha.kubernetes.io/internal\\\":\\\"kops-controller.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io\\\"},\\\"labels\\\":{\\\"k8s-addon\\\":\\\"kops-controller.addons.k8s.io\\\",\\\"k8s-app\\\":\\\"kops-controller\\\",\\\"version\\\":\\\"v1.21.0\\\"}},\\\"spec\\\":{\\\"containers\\\":[{\\\"command\\\":[\\\"/kops-controller\\\",\\\"--v=2\\\",\\\"--conf=/etc/kubernetes/kops-controller/config/config.yaml\\\"],\\\"env\\\":[{\\\"name\\\":\\\"KUBERNETES_SERVICE_HOST\\\",\\\"value\\\":\\\"127.0.0.1\\\"}],\\\"image\\\":\\\"k8s.gcr.io/kops/kops-controller:1.21.0\\\",\\\"name\\\":\\\"kops-controller\\\",\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"50m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"securityContext\\\":{\\\"runAsNonRoot\\\":true},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/kops-controller/config/\\\",\\\"name\\\":\\\"kops-controller-config\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/kops-controller/pki/\\\",\\\"name\\\":\\\"kops-controller-pki\\\"}]}],\\\"dnsPolicy\\\":\\\"Default\\\",\\\"hostNetwork\\\":true,\\\"nodeSelector\\\":{\\\"kops.k8s.io/kops-controller-pki\\\":\\\"\\\",\\\"node-role.kubernetes.io/master\\\":\\\"\\\"},\\\"priorityClassName\\\":\\\"system-node-critical\\\",\\\"serviceAccount\\\":\\\"kops-controller\\\",\\\"tolerations\\\":[{\\\"key\\\":\\\"node-role.kubernetes.io/master\\\",\\\"operator\\\":\\\"Exists\\\"}],\\\"volumes\\\":[{\\\"configMap\\\":{\\\"name\\\":\\\"kops-controller\\\"},\\\"name\\\":\\\"kops-controller-config\\\"},{\\\"hostPath\\\":{\\\"path\\\":\\\"/etc/kubernetes/kops-controller/\\\",\\\"type\\\":\\\"Directory\\\"},\\\"name\\\":\\\"kops-controller-pki\\\"}]}},\\\"updateStrategy\\\":{\\\"type\\\":\\\"OnDelete\\\"}}}\\n\"\n                }\n            },\n            \"spec\": {\n                \"selector\": {\n                    \"matchLabels\": {\n                        \"k8s-app\": \"kops-controller\"\n                    }\n                },\n                \"template\": {\n                    \"metadata\": {\n                        \"creationTimestamp\": null,\n                        \"labels\": {\n                            \"k8s-addon\": \"kops-controller.addons.k8s.io\",\n                            \"k8s-app\": \"kops-controller\",\n                            \"version\": \"v1.21.0\"\n                        },\n                        \"annotations\": {\n                            \"dns.alpha.kubernetes.io/internal\": \"kops-controller.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io\"\n                        }\n                    },\n                    \"spec\": {\n                        \"volumes\": [\n                            {\n                                \"name\": 
\"kops-controller-config\",\n                                \"configMap\": {\n                                    \"name\": \"kops-controller\",\n                                    \"defaultMode\": 420\n                                }\n                            },\n                            {\n                                \"name\": \"kops-controller-pki\",\n                                \"hostPath\": {\n                                    \"path\": \"/etc/kubernetes/kops-controller/\",\n                                    \"type\": \"Directory\"\n                                }\n                            }\n                        ],\n                        \"containers\": [\n                            {\n                                \"name\": \"kops-controller\",\n                                \"image\": \"k8s.gcr.io/kops/kops-controller:1.21.0\",\n                                \"command\": [\n                                    \"/kops-controller\",\n                                    \"--v=2\",\n                                    \"--conf=/etc/kubernetes/kops-controller/config/config.yaml\"\n                                ],\n                                \"env\": [\n                                    {\n                                        \"name\": \"KUBERNETES_SERVICE_HOST\",\n                                        \"value\": \"127.0.0.1\"\n                                    }\n                                ],\n                                \"resources\": {\n                                    \"requests\": {\n                                        \"cpu\": \"50m\",\n                                        \"memory\": \"50Mi\"\n                                    }\n                                },\n                                \"volumeMounts\": [\n                                    {\n                                        \"name\": \"kops-controller-config\",\n                                        \"mountPath\": \"/etc/kubernetes/kops-controller/config/\"\n                                    },\n                                    {\n                                        \"name\": \"kops-controller-pki\",\n                                        \"mountPath\": \"/etc/kubernetes/kops-controller/pki/\"\n                                    }\n                                ],\n                                \"terminationMessagePath\": \"/dev/termination-log\",\n                                \"terminationMessagePolicy\": \"File\",\n                                \"imagePullPolicy\": \"IfNotPresent\",\n                                \"securityContext\": {\n                                    \"runAsNonRoot\": true\n                                }\n                            }\n                        ],\n                        \"restartPolicy\": \"Always\",\n                        \"terminationGracePeriodSeconds\": 30,\n                        \"dnsPolicy\": \"Default\",\n                        \"nodeSelector\": {\n                            \"kops.k8s.io/kops-controller-pki\": \"\",\n                            \"node-role.kubernetes.io/master\": \"\"\n                        },\n                        \"serviceAccountName\": \"kops-controller\",\n                        \"serviceAccount\": \"kops-controller\",\n                        \"hostNetwork\": true,\n                        \"securityContext\": {},\n                        \"schedulerName\": \"default-scheduler\",\n                        \"tolerations\": [\n   
                         {\n                                \"key\": \"node-role.kubernetes.io/master\",\n                                \"operator\": \"Exists\"\n                            }\n                        ],\n                        \"priorityClassName\": \"system-node-critical\"\n                    }\n                },\n                \"updateStrategy\": {\n                    \"type\": \"OnDelete\"\n                },\n                \"revisionHistoryLimit\": 10\n            },\n            \"status\": {\n                \"currentNumberScheduled\": 1,\n                \"numberMisscheduled\": 0,\n                \"desiredNumberScheduled\": 1,\n                \"numberReady\": 1,\n                \"observedGeneration\": 1,\n                \"updatedNumberScheduled\": 1,\n                \"numberAvailable\": 1\n            }\n        }\n    ]\n}\n{\n    \"kind\": \"DeploymentList\",\n    \"apiVersion\": \"apps/v1\",\n    \"metadata\": {\n        \"resourceVersion\": \"6121\"\n    },\n    \"items\": [\n        {\n            \"metadata\": {\n                \"name\": \"coredns\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"821b3f47-5162-43e5-a9bd-5308a0584fa1\",\n                \"resourceVersion\": \"782\",\n                \"generation\": 2,\n                \"creationTimestamp\": \"2021-08-04T23:03:24Z\",\n                \"labels\": {\n                    \"addon.kops.k8s.io/name\": \"coredns.addons.k8s.io\",\n                    \"addon.kops.k8s.io/version\": \"1.8.3-kops.3\",\n                    \"app.kubernetes.io/managed-by\": \"kops\",\n                    \"k8s-addon\": \"coredns.addons.k8s.io\",\n                    \"k8s-app\": \"kube-dns\",\n                    \"kubernetes.io/cluster-service\": \"true\",\n                    \"kubernetes.io/name\": \"CoreDNS\"\n                },\n                \"annotations\": {\n                    \"deployment.kubernetes.io/revision\": \"1\",\n                    \"kubectl.kubernetes.io/last-applied-configuration\": 
\"{\\\"apiVersion\\\":\\\"apps/v1\\\",\\\"kind\\\":\\\"Deployment\\\",\\\"metadata\\\":{\\\"annotations\\\":{},\\\"creationTimestamp\\\":null,\\\"labels\\\":{\\\"addon.kops.k8s.io/name\\\":\\\"coredns.addons.k8s.io\\\",\\\"addon.kops.k8s.io/version\\\":\\\"1.8.3-kops.3\\\",\\\"app.kubernetes.io/managed-by\\\":\\\"kops\\\",\\\"k8s-addon\\\":\\\"coredns.addons.k8s.io\\\",\\\"k8s-app\\\":\\\"kube-dns\\\",\\\"kubernetes.io/cluster-service\\\":\\\"true\\\",\\\"kubernetes.io/name\\\":\\\"CoreDNS\\\"},\\\"name\\\":\\\"coredns\\\",\\\"namespace\\\":\\\"kube-system\\\"},\\\"spec\\\":{\\\"selector\\\":{\\\"matchLabels\\\":{\\\"k8s-app\\\":\\\"kube-dns\\\"}},\\\"strategy\\\":{\\\"rollingUpdate\\\":{\\\"maxSurge\\\":\\\"10%\\\",\\\"maxUnavailable\\\":1},\\\"type\\\":\\\"RollingUpdate\\\"},\\\"template\\\":{\\\"metadata\\\":{\\\"labels\\\":{\\\"k8s-app\\\":\\\"kube-dns\\\"}},\\\"spec\\\":{\\\"affinity\\\":{\\\"podAntiAffinity\\\":{\\\"preferredDuringSchedulingIgnoredDuringExecution\\\":[{\\\"podAffinityTerm\\\":{\\\"labelSelector\\\":{\\\"matchExpressions\\\":[{\\\"key\\\":\\\"k8s-app\\\",\\\"operator\\\":\\\"In\\\",\\\"values\\\":[\\\"kube-dns\\\"]}]},\\\"topologyKey\\\":\\\"kubernetes.io/hostname\\\"},\\\"weight\\\":100}]}},\\\"containers\\\":[{\\\"args\\\":[\\\"-conf\\\",\\\"/etc/coredns/Corefile\\\"],\\\"image\\\":\\\"k8s.gcr.io/coredns/coredns:v1.8.4\\\",\\\"imagePullPolicy\\\":\\\"IfNotPresent\\\",\\\"livenessProbe\\\":{\\\"failureThreshold\\\":5,\\\"httpGet\\\":{\\\"path\\\":\\\"/health\\\",\\\"port\\\":8080,\\\"scheme\\\":\\\"HTTP\\\"},\\\"initialDelaySeconds\\\":60,\\\"successThreshold\\\":1,\\\"timeoutSeconds\\\":5},\\\"name\\\":\\\"coredns\\\",\\\"ports\\\":[{\\\"containerPort\\\":53,\\\"name\\\":\\\"dns\\\",\\\"protocol\\\":\\\"UDP\\\"},{\\\"containerPort\\\":53,\\\"name\\\":\\\"dns-tcp\\\",\\\"protocol\\\":\\\"TCP\\\"},{\\\"containerPort\\\":9153,\\\"name\\\":\\\"metrics\\\",\\\"protocol\\\":\\\"TCP\\\"}],\\\"readinessProbe\\\":{\\\"httpGet\\\":{\\\"path\\\":\\\"/ready\\\",\\\"port\\\":8181,\\\"scheme\\\":\\\"HTTP\\\"}},\\\"resources\\\":{\\\"limits\\\":{\\\"memory\\\":\\\"170Mi\\\"},\\\"requests\\\":{\\\"cpu\\\":\\\"100m\\\",\\\"memory\\\":\\\"70Mi\\\"}},\\\"securityContext\\\":{\\\"allowPrivilegeEscalation\\\":false,\\\"capabilities\\\":{\\\"add\\\":[\\\"NET_BIND_SERVICE\\\"],\\\"drop\\\":[\\\"all\\\"]},\\\"readOnlyRootFilesystem\\\":true},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/coredns\\\",\\\"name\\\":\\\"config-volume\\\",\\\"readOnly\\\":true}]}],\\\"dnsPolicy\\\":\\\"Default\\\",\\\"nodeSelector\\\":{\\\"kubernetes.io/os\\\":\\\"linux\\\"},\\\"priorityClassName\\\":\\\"system-cluster-critical\\\",\\\"serviceAccountName\\\":\\\"coredns\\\",\\\"tolerations\\\":[{\\\"key\\\":\\\"CriticalAddonsOnly\\\",\\\"operator\\\":\\\"Exists\\\"}],\\\"volumes\\\":[{\\\"configMap\\\":{\\\"items\\\":[{\\\"key\\\":\\\"Corefile\\\",\\\"path\\\":\\\"Corefile\\\"}],\\\"name\\\":\\\"coredns\\\"},\\\"name\\\":\\\"config-volume\\\"}]}}}}\\n\"\n                }\n            },\n            \"spec\": {\n                \"replicas\": 2,\n                \"selector\": {\n                    \"matchLabels\": {\n                        \"k8s-app\": \"kube-dns\"\n                    }\n                },\n                \"template\": {\n                    \"metadata\": {\n                        \"creationTimestamp\": null,\n                        \"labels\": {\n                            \"k8s-app\": \"kube-dns\"\n                        }\n                    },\n                    \"spec\": {\n  
                      \"volumes\": [\n                            {\n                                \"name\": \"config-volume\",\n                                \"configMap\": {\n                                    \"name\": \"coredns\",\n                                    \"items\": [\n                                        {\n                                            \"key\": \"Corefile\",\n                                            \"path\": \"Corefile\"\n                                        }\n                                    ],\n                                    \"defaultMode\": 420\n                                }\n                            }\n                        ],\n                        \"containers\": [\n                            {\n                                \"name\": \"coredns\",\n                                \"image\": \"k8s.gcr.io/coredns/coredns:v1.8.4\",\n                                \"args\": [\n                                    \"-conf\",\n                                    \"/etc/coredns/Corefile\"\n                                ],\n                                \"ports\": [\n                                    {\n                                        \"name\": \"dns\",\n                                        \"containerPort\": 53,\n                                        \"protocol\": \"UDP\"\n                                    },\n                                    {\n                                        \"name\": \"dns-tcp\",\n                                        \"containerPort\": 53,\n                                        \"protocol\": \"TCP\"\n                                    },\n                                    {\n                                        \"name\": \"metrics\",\n                                        \"containerPort\": 9153,\n                                        \"protocol\": \"TCP\"\n                                    }\n                                ],\n                                \"resources\": {\n                                    \"limits\": {\n                                        \"memory\": \"170Mi\"\n                                    },\n                                    \"requests\": {\n                                        \"cpu\": \"100m\",\n                                        \"memory\": \"70Mi\"\n                                    }\n                                },\n                                \"volumeMounts\": [\n                                    {\n                                        \"name\": \"config-volume\",\n                                        \"readOnly\": true,\n                                        \"mountPath\": \"/etc/coredns\"\n                                    }\n                                ],\n                                \"livenessProbe\": {\n                                    \"httpGet\": {\n                                        \"path\": \"/health\",\n                                        \"port\": 8080,\n                                        \"scheme\": \"HTTP\"\n                                    },\n                                    \"initialDelaySeconds\": 60,\n                                    \"timeoutSeconds\": 5,\n                                    \"periodSeconds\": 10,\n                                    \"successThreshold\": 1,\n                                    \"failureThreshold\": 5\n                                },\n                                \"readinessProbe\": {\n  
                                  \"httpGet\": {\n                                        \"path\": \"/ready\",\n                                        \"port\": 8181,\n                                        \"scheme\": \"HTTP\"\n                                    },\n                                    \"timeoutSeconds\": 1,\n                                    \"periodSeconds\": 10,\n                                    \"successThreshold\": 1,\n                                    \"failureThreshold\": 3\n                                },\n                                \"terminationMessagePath\": \"/dev/termination-log\",\n                                \"terminationMessagePolicy\": \"File\",\n                                \"imagePullPolicy\": \"IfNotPresent\",\n                                \"securityContext\": {\n                                    \"capabilities\": {\n                                        \"add\": [\n                                            \"NET_BIND_SERVICE\"\n                                        ],\n                                        \"drop\": [\n                                            \"all\"\n                                        ]\n                                    },\n                                    \"readOnlyRootFilesystem\": true,\n                                    \"allowPrivilegeEscalation\": false\n                                }\n                            }\n                        ],\n                        \"restartPolicy\": \"Always\",\n                        \"terminationGracePeriodSeconds\": 30,\n                        \"dnsPolicy\": \"Default\",\n                        \"nodeSelector\": {\n                            \"kubernetes.io/os\": \"linux\"\n                        },\n                        \"serviceAccountName\": \"coredns\",\n                        \"serviceAccount\": \"coredns\",\n                        \"securityContext\": {},\n                        \"affinity\": {\n                            \"podAntiAffinity\": {\n                                \"preferredDuringSchedulingIgnoredDuringExecution\": [\n                                    {\n                                        \"weight\": 100,\n                                        \"podAffinityTerm\": {\n                                            \"labelSelector\": {\n                                                \"matchExpressions\": [\n                                                    {\n                                                        \"key\": \"k8s-app\",\n                                                        \"operator\": \"In\",\n                                                        \"values\": [\n                                                            \"kube-dns\"\n                                                        ]\n                                                    }\n                                                ]\n                                            },\n                                            \"topologyKey\": \"kubernetes.io/hostname\"\n                                        }\n                                    }\n                                ]\n                            }\n                        },\n                        \"schedulerName\": \"default-scheduler\",\n                        \"tolerations\": [\n                            {\n                                \"key\": \"CriticalAddonsOnly\",\n                                \"operator\": \"Exists\"\n          
                  }\n                        ],\n                        \"priorityClassName\": \"system-cluster-critical\"\n                    }\n                },\n                \"strategy\": {\n                    \"type\": \"RollingUpdate\",\n                    \"rollingUpdate\": {\n                        \"maxUnavailable\": 1,\n                        \"maxSurge\": \"10%\"\n                    }\n                },\n                \"revisionHistoryLimit\": 10,\n                \"progressDeadlineSeconds\": 600\n            },\n            \"status\": {\n                \"observedGeneration\": 2,\n                \"replicas\": 2,\n                \"updatedReplicas\": 2,\n                \"readyReplicas\": 2,\n                \"availableReplicas\": 2,\n                \"conditions\": [\n                    {\n                        \"type\": \"Available\",\n                        \"status\": \"True\",\n                        \"lastUpdateTime\": \"2021-08-04T23:05:15Z\",\n                        \"lastTransitionTime\": \"2021-08-04T23:05:15Z\",\n                        \"reason\": \"MinimumReplicasAvailable\",\n                        \"message\": \"Deployment has minimum availability.\"\n                    },\n                    {\n                        \"type\": \"Progressing\",\n                        \"status\": \"True\",\n                        \"lastUpdateTime\": \"2021-08-04T23:05:18Z\",\n                        \"lastTransitionTime\": \"2021-08-04T23:03:34Z\",\n                        \"reason\": \"NewReplicaSetAvailable\",\n                        \"message\": \"ReplicaSet \\\"coredns-5dc785954d\\\" has successfully progressed.\"\n                    }\n                ]\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-autoscaler\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"252873ba-9a05-471c-b5cc-6199b852dda1\",\n                \"resourceVersion\": \"756\",\n                \"generation\": 1,\n                \"creationTimestamp\": \"2021-08-04T23:03:24Z\",\n                \"labels\": {\n                    \"addon.kops.k8s.io/name\": \"coredns.addons.k8s.io\",\n                    \"addon.kops.k8s.io/version\": \"1.8.3-kops.3\",\n                    \"app.kubernetes.io/managed-by\": \"kops\",\n                    \"k8s-addon\": \"coredns.addons.k8s.io\",\n                    \"k8s-app\": \"coredns-autoscaler\",\n                    \"kubernetes.io/cluster-service\": \"true\"\n                },\n                \"annotations\": {\n                    \"deployment.kubernetes.io/revision\": \"1\",\n                    \"kubectl.kubernetes.io/last-applied-configuration\": 
\"{\\\"apiVersion\\\":\\\"apps/v1\\\",\\\"kind\\\":\\\"Deployment\\\",\\\"metadata\\\":{\\\"annotations\\\":{},\\\"creationTimestamp\\\":null,\\\"labels\\\":{\\\"addon.kops.k8s.io/name\\\":\\\"coredns.addons.k8s.io\\\",\\\"addon.kops.k8s.io/version\\\":\\\"1.8.3-kops.3\\\",\\\"app.kubernetes.io/managed-by\\\":\\\"kops\\\",\\\"k8s-addon\\\":\\\"coredns.addons.k8s.io\\\",\\\"k8s-app\\\":\\\"coredns-autoscaler\\\",\\\"kubernetes.io/cluster-service\\\":\\\"true\\\"},\\\"name\\\":\\\"coredns-autoscaler\\\",\\\"namespace\\\":\\\"kube-system\\\"},\\\"spec\\\":{\\\"selector\\\":{\\\"matchLabels\\\":{\\\"k8s-app\\\":\\\"coredns-autoscaler\\\"}},\\\"template\\\":{\\\"metadata\\\":{\\\"annotations\\\":{\\\"scheduler.alpha.kubernetes.io/critical-pod\\\":\\\"\\\"},\\\"labels\\\":{\\\"k8s-app\\\":\\\"coredns-autoscaler\\\"}},\\\"spec\\\":{\\\"containers\\\":[{\\\"command\\\":[\\\"/cluster-proportional-autoscaler\\\",\\\"--namespace=kube-system\\\",\\\"--configmap=coredns-autoscaler\\\",\\\"--target=Deployment/coredns\\\",\\\"--default-params={\\\\\\\"linear\\\\\\\":{\\\\\\\"coresPerReplica\\\\\\\":256,\\\\\\\"nodesPerReplica\\\\\\\":16,\\\\\\\"preventSinglePointFailure\\\\\\\":true}}\\\",\\\"--logtostderr=true\\\",\\\"--v=2\\\"],\\\"image\\\":\\\"k8s.gcr.io/cpa/cluster-proportional-autoscaler:1.8.4\\\",\\\"name\\\":\\\"autoscaler\\\",\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"10Mi\\\"}}}],\\\"nodeSelector\\\":{\\\"kubernetes.io/os\\\":\\\"linux\\\"},\\\"priorityClassName\\\":\\\"system-cluster-critical\\\",\\\"serviceAccountName\\\":\\\"coredns-autoscaler\\\",\\\"tolerations\\\":[{\\\"key\\\":\\\"CriticalAddonsOnly\\\",\\\"operator\\\":\\\"Exists\\\"}]}}}}\\n\"\n                }\n            },\n            \"spec\": {\n                \"replicas\": 1,\n                \"selector\": {\n                    \"matchLabels\": {\n                        \"k8s-app\": \"coredns-autoscaler\"\n                    }\n                },\n                \"template\": {\n                    \"metadata\": {\n                        \"creationTimestamp\": null,\n                        \"labels\": {\n                            \"k8s-app\": \"coredns-autoscaler\"\n                        },\n                        \"annotations\": {\n                            \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n                        }\n                    },\n                    \"spec\": {\n                        \"containers\": [\n                            {\n                                \"name\": \"autoscaler\",\n                                \"image\": \"k8s.gcr.io/cpa/cluster-proportional-autoscaler:1.8.4\",\n                                \"command\": [\n                                    \"/cluster-proportional-autoscaler\",\n                                    \"--namespace=kube-system\",\n                                    \"--configmap=coredns-autoscaler\",\n                                    \"--target=Deployment/coredns\",\n                                    \"--default-params={\\\"linear\\\":{\\\"coresPerReplica\\\":256,\\\"nodesPerReplica\\\":16,\\\"preventSinglePointFailure\\\":true}}\",\n                                    \"--logtostderr=true\",\n                                    \"--v=2\"\n                                ],\n                                \"resources\": {\n                                    \"requests\": {\n                                        \"cpu\": \"20m\",\n                                        \"memory\": 
\"10Mi\"\n                                    }\n                                },\n                                \"terminationMessagePath\": \"/dev/termination-log\",\n                                \"terminationMessagePolicy\": \"File\",\n                                \"imagePullPolicy\": \"IfNotPresent\"\n                            }\n                        ],\n                        \"restartPolicy\": \"Always\",\n                        \"terminationGracePeriodSeconds\": 30,\n                        \"dnsPolicy\": \"ClusterFirst\",\n                        \"nodeSelector\": {\n                            \"kubernetes.io/os\": \"linux\"\n                        },\n                        \"serviceAccountName\": \"coredns-autoscaler\",\n                        \"serviceAccount\": \"coredns-autoscaler\",\n                        \"securityContext\": {},\n                        \"schedulerName\": \"default-scheduler\",\n                        \"tolerations\": [\n                            {\n                                \"key\": \"CriticalAddonsOnly\",\n                                \"operator\": \"Exists\"\n                            }\n                        ],\n                        \"priorityClassName\": \"system-cluster-critical\"\n                    }\n                },\n                \"strategy\": {\n                    \"type\": \"RollingUpdate\",\n                    \"rollingUpdate\": {\n                        \"maxUnavailable\": \"25%\",\n                        \"maxSurge\": \"25%\"\n                    }\n                },\n                \"revisionHistoryLimit\": 10,\n                \"progressDeadlineSeconds\": 600\n            },\n            \"status\": {\n                \"observedGeneration\": 1,\n                \"replicas\": 1,\n                \"updatedReplicas\": 1,\n                \"readyReplicas\": 1,\n                \"availableReplicas\": 1,\n                \"conditions\": [\n                    {\n                        \"type\": \"Available\",\n                        \"status\": \"True\",\n                        \"lastUpdateTime\": \"2021-08-04T23:05:14Z\",\n                        \"lastTransitionTime\": \"2021-08-04T23:05:14Z\",\n                        \"reason\": \"MinimumReplicasAvailable\",\n                        \"message\": \"Deployment has minimum availability.\"\n                    },\n                    {\n                        \"type\": \"Progressing\",\n                        \"status\": \"True\",\n                        \"lastUpdateTime\": \"2021-08-04T23:05:14Z\",\n                        \"lastTransitionTime\": \"2021-08-04T23:03:34Z\",\n                        \"reason\": \"NewReplicaSetAvailable\",\n                        \"message\": \"ReplicaSet \\\"coredns-autoscaler-84d4cfd89c\\\" has successfully progressed.\"\n                    }\n                ]\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"dns-controller\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"2b31b817-dff2-440b-ba48-4faf150acc7e\",\n                \"resourceVersion\": \"454\",\n                \"generation\": 1,\n                \"creationTimestamp\": \"2021-08-04T23:03:25Z\",\n                \"labels\": {\n                    \"addon.kops.k8s.io/name\": \"dns-controller.addons.k8s.io\",\n                    \"addon.kops.k8s.io/version\": \"1.21.0\",\n                    \"app.kubernetes.io/managed-by\": \"kops\",\n                    
\"k8s-addon\": \"dns-controller.addons.k8s.io\",\n                    \"k8s-app\": \"dns-controller\",\n                    \"version\": \"v1.21.0\"\n                },\n                \"annotations\": {\n                    \"deployment.kubernetes.io/revision\": \"1\",\n                    \"kubectl.kubernetes.io/last-applied-configuration\": \"{\\\"apiVersion\\\":\\\"apps/v1\\\",\\\"kind\\\":\\\"Deployment\\\",\\\"metadata\\\":{\\\"annotations\\\":{},\\\"creationTimestamp\\\":null,\\\"labels\\\":{\\\"addon.kops.k8s.io/name\\\":\\\"dns-controller.addons.k8s.io\\\",\\\"addon.kops.k8s.io/version\\\":\\\"1.21.0\\\",\\\"app.kubernetes.io/managed-by\\\":\\\"kops\\\",\\\"k8s-addon\\\":\\\"dns-controller.addons.k8s.io\\\",\\\"k8s-app\\\":\\\"dns-controller\\\",\\\"version\\\":\\\"v1.21.0\\\"},\\\"name\\\":\\\"dns-controller\\\",\\\"namespace\\\":\\\"kube-system\\\"},\\\"spec\\\":{\\\"replicas\\\":1,\\\"selector\\\":{\\\"matchLabels\\\":{\\\"k8s-app\\\":\\\"dns-controller\\\"}},\\\"strategy\\\":{\\\"type\\\":\\\"Recreate\\\"},\\\"template\\\":{\\\"metadata\\\":{\\\"annotations\\\":{\\\"scheduler.alpha.kubernetes.io/critical-pod\\\":\\\"\\\"},\\\"labels\\\":{\\\"k8s-addon\\\":\\\"dns-controller.addons.k8s.io\\\",\\\"k8s-app\\\":\\\"dns-controller\\\",\\\"version\\\":\\\"v1.21.0\\\"}},\\\"spec\\\":{\\\"containers\\\":[{\\\"command\\\":[\\\"/dns-controller\\\",\\\"--watch-ingress=false\\\",\\\"--dns=aws-route53\\\",\\\"--zone=*/ZEMLNXIIWQ0RV\\\",\\\"--zone=*/*\\\",\\\"-v=2\\\"],\\\"env\\\":[{\\\"name\\\":\\\"KUBERNETES_SERVICE_HOST\\\",\\\"value\\\":\\\"127.0.0.1\\\"}],\\\"image\\\":\\\"k8s.gcr.io/kops/dns-controller:1.21.0\\\",\\\"name\\\":\\\"dns-controller\\\",\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"50m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"securityContext\\\":{\\\"runAsNonRoot\\\":true}}],\\\"dnsPolicy\\\":\\\"Default\\\",\\\"hostNetwork\\\":true,\\\"nodeSelector\\\":{\\\"node-role.kubernetes.io/master\\\":\\\"\\\"},\\\"priorityClassName\\\":\\\"system-cluster-critical\\\",\\\"serviceAccount\\\":\\\"dns-controller\\\",\\\"tolerations\\\":[{\\\"operator\\\":\\\"Exists\\\"}]}}}}\\n\"\n                }\n            },\n            \"spec\": {\n                \"replicas\": 1,\n                \"selector\": {\n                    \"matchLabels\": {\n                        \"k8s-app\": \"dns-controller\"\n                    }\n                },\n                \"template\": {\n                    \"metadata\": {\n                        \"creationTimestamp\": null,\n                        \"labels\": {\n                            \"k8s-addon\": \"dns-controller.addons.k8s.io\",\n                            \"k8s-app\": \"dns-controller\",\n                            \"version\": \"v1.21.0\"\n                        },\n                        \"annotations\": {\n                            \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n                        }\n                    },\n                    \"spec\": {\n                        \"containers\": [\n                            {\n                                \"name\": \"dns-controller\",\n                                \"image\": \"k8s.gcr.io/kops/dns-controller:1.21.0\",\n                                \"command\": [\n                                    \"/dns-controller\",\n                                    \"--watch-ingress=false\",\n                                    \"--dns=aws-route53\",\n                                    \"--zone=*/ZEMLNXIIWQ0RV\",\n                                    
\"--zone=*/*\",\n                                    \"-v=2\"\n                                ],\n                                \"env\": [\n                                    {\n                                        \"name\": \"KUBERNETES_SERVICE_HOST\",\n                                        \"value\": \"127.0.0.1\"\n                                    }\n                                ],\n                                \"resources\": {\n                                    \"requests\": {\n                                        \"cpu\": \"50m\",\n                                        \"memory\": \"50Mi\"\n                                    }\n                                },\n                                \"terminationMessagePath\": \"/dev/termination-log\",\n                                \"terminationMessagePolicy\": \"File\",\n                                \"imagePullPolicy\": \"IfNotPresent\",\n                                \"securityContext\": {\n                                    \"runAsNonRoot\": true\n                                }\n                            }\n                        ],\n                        \"restartPolicy\": \"Always\",\n                        \"terminationGracePeriodSeconds\": 30,\n                        \"dnsPolicy\": \"Default\",\n                        \"nodeSelector\": {\n                            \"node-role.kubernetes.io/master\": \"\"\n                        },\n                        \"serviceAccountName\": \"dns-controller\",\n                        \"serviceAccount\": \"dns-controller\",\n                        \"hostNetwork\": true,\n                        \"securityContext\": {},\n                        \"schedulerName\": \"default-scheduler\",\n                        \"tolerations\": [\n                            {\n                                \"operator\": \"Exists\"\n                            }\n                        ],\n                        \"priorityClassName\": \"system-cluster-critical\"\n                    }\n                },\n                \"strategy\": {\n                    \"type\": \"Recreate\"\n                },\n                \"revisionHistoryLimit\": 10,\n                \"progressDeadlineSeconds\": 600\n            },\n            \"status\": {\n                \"observedGeneration\": 1,\n                \"replicas\": 1,\n                \"updatedReplicas\": 1,\n                \"readyReplicas\": 1,\n                \"availableReplicas\": 1,\n                \"conditions\": [\n                    {\n                        \"type\": \"Available\",\n                        \"status\": \"True\",\n                        \"lastUpdateTime\": \"2021-08-04T23:03:36Z\",\n                        \"lastTransitionTime\": \"2021-08-04T23:03:36Z\",\n                        \"reason\": \"MinimumReplicasAvailable\",\n                        \"message\": \"Deployment has minimum availability.\"\n                    },\n                    {\n                        \"type\": \"Progressing\",\n                        \"status\": \"True\",\n                        \"lastUpdateTime\": \"2021-08-04T23:03:36Z\",\n                        \"lastTransitionTime\": \"2021-08-04T23:03:34Z\",\n                        \"reason\": \"NewReplicaSetAvailable\",\n                        \"message\": \"ReplicaSet \\\"dns-controller-7f4474bbb\\\" has successfully progressed.\"\n                    }\n                ]\n            }\n        }\n    ]\n}\n{\n    \"kind\": \"ReplicaSetList\",\n    
"apiVersion": "apps/v1",
    "metadata": {
        "resourceVersion": "6124"
    },
    "items": [
        {
            "metadata": {
                "name": "coredns-5dc785954d",
                "namespace": "kube-system",
                "uid": "9ed368ba-f7cb-4a52-ba70-175185db5665",
                "resourceVersion": "781",
                "generation": 2,
                "creationTimestamp": "2021-08-04T23:03:34Z",
                "labels": {
                    "k8s-app": "kube-dns",
                    "pod-template-hash": "5dc785954d"
                },
                "annotations": {
                    "deployment.kubernetes.io/desired-replicas": "2",
                    "deployment.kubernetes.io/max-replicas": "3",
                    "deployment.kubernetes.io/revision": "1"
                },
                "ownerReferences": [
                    {
                        "apiVersion": "apps/v1",
                        "kind": "Deployment",
                        "name": "coredns",
                        "uid": "821b3f47-5162-43e5-a9bd-5308a0584fa1",
                        "controller": true,
                        "blockOwnerDeletion": true
                    }
                ]
            },
            "spec": {
                "replicas": 2,
                "selector": {
                    "matchLabels": {
                        "k8s-app": "kube-dns",
                        "pod-template-hash": "5dc785954d"
                    }
                },
                "template": {
                    "metadata": {
                        "creationTimestamp": null,
                        "labels": {
                            "k8s-app": "kube-dns",
                            "pod-template-hash": "5dc785954d"
                        }
                    },
                    "spec": {
                        "volumes": [
                            {
                                "name": "config-volume",
                                "configMap": {
                                    "name": "coredns",
                                    "items": [
                                        {
                                            "key": "Corefile",
                                            "path": "Corefile"
                                        }
                                    ],
                                    "defaultMode": 420
                                }
                            }
                        ],
                        "containers": [
                            {
                                "name": "coredns",
                                "image": "k8s.gcr.io/coredns/coredns:v1.8.4",
                                "args": [
                                    "-conf",
                                    "/etc/coredns/Corefile"
                                ],
                                "ports": [
                                    {
                                        "name": "dns",
                                        "containerPort": 53,
                                        "protocol": "UDP"
                                    },
                                    {
                                        "name": "dns-tcp",
                                        "containerPort": 53,
                                        "protocol": "TCP"
                                    },
                                    {
                                        "name": "metrics",
                                        "containerPort": 9153,
                                        "protocol": "TCP"
                                    }
                                ],
                                "resources": {
                                    "limits": {
                                        "memory": "170Mi"
                                    },
                                    "requests": {
                                        "cpu": "100m",
                                        "memory": "70Mi"
                                    }
                                },
                                "volumeMounts": [
                                    {
                                        "name": "config-volume",
                                        "readOnly": true,
                                        "mountPath": "/etc/coredns"
                                    }
                                ],
                                "livenessProbe": {
                                    "httpGet": {
                                        "path": "/health",
                                        "port": 8080,
                                        "scheme": "HTTP"
                                    },
                                    "initialDelaySeconds": 60,
                                    "timeoutSeconds": 5,
                                    "periodSeconds": 10,
                                    "successThreshold": 1,
                                    "failureThreshold": 5
                                },
                                "readinessProbe": {
                                    "httpGet": {
                                        "path": "/ready",
                                        "port": 8181,
                                        "scheme": "HTTP"
                                    },
                                    "timeoutSeconds": 1,
                                    "periodSeconds": 10,
                                    "successThreshold": 1,
                                    "failureThreshold": 3
                                },
                                "terminationMessagePath": "/dev/termination-log",
                                "terminationMessagePolicy": "File",
                                "imagePullPolicy": "IfNotPresent",
                                "securityContext": {
                                    "capabilities": {
                                        "add": [
                                            "NET_BIND_SERVICE"
                                        ],
                                        "drop": [
                                            "all"
                                        ]
                                    },
                                    "readOnlyRootFilesystem": true,
                                    "allowPrivilegeEscalation": false
                                }
                            }
                        ],
                        "restartPolicy": "Always",
                        "terminationGracePeriodSeconds": 30,
                        "dnsPolicy": "Default",
                        "nodeSelector": {
                            "kubernetes.io/os": "linux"
                        },
                        "serviceAccountName": "coredns",
                        "serviceAccount": "coredns",
                        "securityContext": {},
                        "affinity": {
                            "podAntiAffinity": {
                                "preferredDuringSchedulingIgnoredDuringExecution": [
                                    {
                                        "weight": 100,
                                        "podAffinityTerm": {
                                            "labelSelector": {
                                                "matchExpressions": [
                                                    {
                                                        "key": "k8s-app",
                                                        "operator": "In",
                                                        "values": [
                                                            "kube-dns"
                                                        ]
                                                    }
                                                ]
                                            },
                                            "topologyKey": "kubernetes.io/hostname"
                                        }
                                    }
                                ]
                            }
                        },
                        "schedulerName": "default-scheduler",
                        "tolerations": [
                            {
                                "key": "CriticalAddonsOnly",
                                "operator": "Exists"
                            }
                        ],
                        "priorityClassName": "system-cluster-critical"
                    }
                }
            },
            "status": {
                "replicas": 2,
                "fullyLabeledReplicas": 2,
                "readyReplicas": 2,
                "availableReplicas": 2,
                "observedGeneration": 2
            }
        },
        {
            "metadata": {
                "name": "coredns-autoscaler-84d4cfd89c",
                "namespace": "kube-system",
                "uid": "12a96db0-1c3d-4fb0-bcf0-c858872ac43a",
                "resourceVersion": "755",
                "generation": 1,
                "creationTimestamp": "2021-08-04T23:03:34Z",
                "labels": {
                    "k8s-app": "coredns-autoscaler",
                    "pod-template-hash": "84d4cfd89c"
                },
                "annotations": {
                    "deployment.kubernetes.io/desired-replicas": "1",
                    "deployment.kubernetes.io/max-replicas": "2",
                    "deployment.kubernetes.io/revision": "1"
                },
                "ownerReferences": [
                    {
                        "apiVersion": "apps/v1",
                        "kind": "Deployment",
                        "name": "coredns-autoscaler",
                        "uid": "252873ba-9a05-471c-b5cc-6199b852dda1",
                        "controller": true,
                        "blockOwnerDeletion": true
                    }
                ]
            },
            "spec": {
                "replicas": 1,
                "selector": {
                    "matchLabels": {
                        "k8s-app": "coredns-autoscaler",
                        "pod-template-hash": "84d4cfd89c"
                    }
                },
                "template": {
                    "metadata": {
                        "creationTimestamp": null,
                        "labels": {
                            "k8s-app": "coredns-autoscaler",
                            "pod-template-hash": "84d4cfd89c"
                        },
                        "annotations": {
                            "scheduler.alpha.kubernetes.io/critical-pod": ""
                        }
                    },
                    "spec": {
                        "containers": [
                            {
                                "name": "autoscaler",
                                "image": "k8s.gcr.io/cpa/cluster-proportional-autoscaler:1.8.4",
                                "command": [
                                    "/cluster-proportional-autoscaler",
                                    "--namespace=kube-system",
                                    "--configmap=coredns-autoscaler",
                                    "--target=Deployment/coredns",
                                    "--default-params={\"linear\":{\"coresPerReplica\":256,\"nodesPerReplica\":16,\"preventSinglePointFailure\":true}}",
                                    "--logtostderr=true",
                                    "--v=2"
                                ],
                                "resources": {
                                    "requests": {
                                        "cpu": "20m",
                                        "memory": "10Mi"
                                    }
                                },
                                "terminationMessagePath": "/dev/termination-log",
                                "terminationMessagePolicy": "File",
                                "imagePullPolicy": "IfNotPresent"
                            }
                        ],
                        "restartPolicy": "Always",
                        "terminationGracePeriodSeconds": 30,
                        "dnsPolicy": "ClusterFirst",
                        "nodeSelector": {
                            "kubernetes.io/os": "linux"
                        },
                        "serviceAccountName": "coredns-autoscaler",
                        "serviceAccount": "coredns-autoscaler",
                        "securityContext": {},
                        "schedulerName": "default-scheduler",
                        "tolerations": [
                            {
                                "key": "CriticalAddonsOnly",
                                "operator": "Exists"
                            }
                        ],
                        "priorityClassName": "system-cluster-critical"
                    }
                }
            },
            "status": {
                "replicas": 1,
                "fullyLabeledReplicas": 1,
                "readyReplicas": 1,
                "availableReplicas": 1,
                "observedGeneration": 1
            }
        },
        {
            "metadata": {
                "name": "dns-controller-7f4474bbb",
                "namespace": "kube-system",
                "uid": "1c596903-5a43-4187-82e7-8c0e58b2275b",
                "resourceVersion": "453",
                "generation": 1,
                "creationTimestamp": "2021-08-04T23:03:34Z",
                "labels": {
                    "k8s-addon": "dns-controller.addons.k8s.io",
                    "k8s-app": "dns-controller",
                    "pod-template-hash": "7f4474bbb",
                    "version": "v1.21.0"
                },
                "annotations": {
                    "deployment.kubernetes.io/desired-replicas": "1",
                    "deployment.kubernetes.io/max-replicas": "1",
                    "deployment.kubernetes.io/revision": "1"
                },
                "ownerReferences": [
                    {
                        "apiVersion": "apps/v1",
                        "kind": "Deployment",
                        "name": "dns-controller",
                        "uid": "2b31b817-dff2-440b-ba48-4faf150acc7e",
                        "controller": true,
                        "blockOwnerDeletion": true
                    }
                ]
            },
            "spec": {
                "replicas": 1,
                "selector": {
                    "matchLabels": {
                        "k8s-app": "dns-controller",
                        "pod-template-hash": "7f4474bbb"
                    }
                },
                "template": {
                    "metadata": {
                        "creationTimestamp": null,
                        "labels": {
                            "k8s-addon": "dns-controller.addons.k8s.io",
                            "k8s-app": "dns-controller",
                            "pod-template-hash": "7f4474bbb",
                            "version": "v1.21.0"
                        },
                        "annotations": {
                            "scheduler.alpha.kubernetes.io/critical-pod": ""
                        }
                    },
                    "spec": {
                        "containers": [
                            {
                                "name": "dns-controller",
                                "image": "k8s.gcr.io/kops/dns-controller:1.21.0",
                                "command": [
                                    "/dns-controller",
                                    "--watch-ingress=false",
                                    "--dns=aws-route53",
                                    "--zone=*/ZEMLNXIIWQ0RV",
                                    "--zone=*/*",
                                    "-v=2"
                                ],
                                "env": [
                                    {
                                        "name": "KUBERNETES_SERVICE_HOST",
                                        "value": "127.0.0.1"
                                    }
                                ],
                                "resources": {
                                    "requests": {
                                        "cpu": "50m",
                                        "memory": "50Mi"
                                    }
                                },
                                "terminationMessagePath": "/dev/termination-log",
                                "terminationMessagePolicy": "File",
                                "imagePullPolicy": "IfNotPresent",
                                "securityContext": {
                                    "runAsNonRoot": true
                                }
                            }
                        ],
                        "restartPolicy": "Always",
                        "terminationGracePeriodSeconds": 30,
                        "dnsPolicy": "Default",
                        "nodeSelector": {
                            "node-role.kubernetes.io/master": ""
                        },
                        "serviceAccountName": "dns-controller",
                        "serviceAccount": "dns-controller",
                        "hostNetwork": true,
                        "securityContext": {},
                        "schedulerName": "default-scheduler",
                        "tolerations": [
                            {
                                "operator": "Exists"
                            }
                        ],
                        "priorityClassName": "system-cluster-critical"
                    }
                }
            },
            "status": {
                "replicas": 1,
                "fullyLabeledReplicas": 1,
                "readyReplicas": 1,
                "availableReplicas": 1,
                "observedGeneration": 1
            }
        }
    ]
}
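For context on the "replicas": 2 seen on the coredns ReplicaSet above: the coredns-autoscaler command line configures cluster-proportional-autoscaler's linear mode with coresPerReplica=256, nodesPerReplica=16 and preventSinglePointFailure=true. Per that project's documented formula, the target is the larger of ceil(cores/coresPerReplica) and ceil(nodes/nodesPerReplica), raised to at least 2 on a multi-node cluster when preventSinglePointFailure is set. A minimal sketch of that calculation (the node and core counts below are illustrative assumptions for a cluster of this size, not values taken from this dump):

import math

def linear_replicas(cores, nodes, cores_per_replica=256, nodes_per_replica=16,
                    prevent_single_point_failure=True):
    # Larger of the two proportional terms, each rounded up.
    replicas = max(math.ceil(cores / cores_per_replica),
                   math.ceil(nodes / nodes_per_replica))
    # preventSinglePointFailure: never a lone DNS replica on a multi-node cluster.
    if prevent_single_point_failure and nodes > 1:
        replicas = max(replicas, 2)
    return replicas

print(linear_replicas(cores=10, nodes=5))  # -> 2, matching the ReplicaSet status above

At this scale the node term dominates: ceil(5/16) = 1, bumped to 2 by preventSinglePointFailure, while the cores term would not reach 2 until 257 cores.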
{
    "kind": "PodList",
    "apiVersion": "v1",
    "metadata": {
        "resourceVersion": "6127"
    },
    "items": [
        {
            "metadata": {
                "name": "coredns-5dc785954d-7bjbh",
                "generateName": "coredns-5dc785954d-",
                "namespace": "kube-system",
                "uid": "2b3fb07f-0d73-4e16-aff2-0d48c4639760",
                "resourceVersion": "762",
                "creationTimestamp": "2021-08-04T23:03:34Z",
                "labels": {
                    "k8s-app": "kube-dns",
                    "pod-template-hash": "5dc785954d"
                },
                "ownerReferences": [
                    {
                        "apiVersion": "apps/v1",
                        "kind": "ReplicaSet",
                        "name": "coredns-5dc785954d",
                        "uid": "9ed368ba-f7cb-4a52-ba70-175185db5665",
                        "controller": true,
                        "blockOwnerDeletion": true
                    }
                ]
            },
            "spec": {
                "volumes": [
                    {
                        "name": "config-volume",
                        "configMap": {
                            "name": "coredns",
                            "items": [
                                {
                                    "key": "Corefile",
                                    "path": "Corefile"
                                }
                            ],
                            "defaultMode": 420
                        }
                    },
                    {
                        "name": "kube-api-access-72b65",
                        "projected": {
                            "sources": [
                                {
                                    "serviceAccountToken": {
                                        "expirationSeconds": 3607,
                                        "path": "token"
                                    }
                                },
                                {
                                    "configMap": {
                                        "name": "kube-root-ca.crt",
                                        "items": [
                                            {
                                                "key": "ca.crt",
                                                "path": "ca.crt"
                                            }
                                        ]
                                    }
                                },
                                {
                                    "downwardAPI": {
                                        "items": [
                                            {
                                                "path": "namespace",
                                                "fieldRef": {
                                                    "apiVersion": "v1",
                                                    "fieldPath": "metadata.namespace"
                                                }
                                            }
                                        ]
                                    }
                                }
                            ],
                            "defaultMode": 420
                        }
                    }
                ],
                "containers": [
                    {
                        "name": "coredns",
                        "image": "k8s.gcr.io/coredns/coredns:v1.8.4",
                        "args": [
                            "-conf",
                            "/etc/coredns/Corefile"
                        ],
                        "ports": [
                            {
                                "name": "dns",
                                "containerPort": 53,
                                "protocol": "UDP"
                            },
                            {
                                "name": "dns-tcp",
                                "containerPort": 53,
                                "protocol": "TCP"
                            },
                            {
                                "name": "metrics",
                                "containerPort": 9153,
                                "protocol": "TCP"
                            }
                        ],
                        "resources": {
                            "limits": {
                                "memory": "170Mi"
                            },
                            "requests": {
                                "cpu": "100m",
                                "memory": "70Mi"
                            }
                        },
                        "volumeMounts": [
                            {
                                "name": "config-volume",
                                "readOnly": true,
                                "mountPath": "/etc/coredns"
                            },
                            {
                                "name": "kube-api-access-72b65",
                                "readOnly": true,
                                "mountPath": "/var/run/secrets/kubernetes.io/serviceaccount"
                            }
                        ],
                        "livenessProbe": {
                            "httpGet": {
                                "path": "/health",
                                "port": 8080,
                                "scheme": "HTTP"
                            },
                            "initialDelaySeconds": 60,
                            "timeoutSeconds": 5,
                            "periodSeconds": 10,
                            "successThreshold": 1,
                            "failureThreshold": 5
                        },
                        "readinessProbe": {
                            "httpGet": {
                                "path": "/ready",
                                "port": 8181,
                                "scheme": "HTTP"
                            },
                            "timeoutSeconds": 1,
                            "periodSeconds": 10,
                            "successThreshold": 1,
                            "failureThreshold": 3
                        },
                        "terminationMessagePath": "/dev/termination-log",
                        "terminationMessagePolicy": "File",
                        "imagePullPolicy": "IfNotPresent",
                        "securityContext": {
                            "capabilities": {
                                "add": [
                                    "NET_BIND_SERVICE"
                                ],
                                "drop": [
                                    "all"
                                ]
                            },
                            "readOnlyRootFilesystem": true,
                            "allowPrivilegeEscalation": false
                        }
                    }
                ],
                "restartPolicy": "Always",
                "terminationGracePeriodSeconds": 30,
                "dnsPolicy": "Default",
                "nodeSelector": {
                    "kubernetes.io/os": "linux"
                },
                "serviceAccountName": "coredns",
                "serviceAccount": "coredns",
                "nodeName": "ip-172-20-63-4.eu-west-2.compute.internal",
                "securityContext": {},
                "affinity": {
                    "podAntiAffinity": {
                        "preferredDuringSchedulingIgnoredDuringExecution": [
                            {
                                "weight": 100,
                                "podAffinityTerm": {
                                    "labelSelector": {
                                        "matchExpressions": [
                                            {
                                                "key": "k8s-app",
                                                "operator": "In",
                                                "values": [
                                                    "kube-dns"
                                                ]
                                            }
                                        ]
                                    },
                                    "topologyKey": "kubernetes.io/hostname"
                                }
                            }
                        ]
                    }
                },
                "schedulerName": "default-scheduler",
                "tolerations": [
                    {
                        "key": "CriticalAddonsOnly",
                        "operator": "Exists"
                    },
                    {
                        "key": "node.kubernetes.io/not-ready",
                        "operator": "Exists",
                        "effect": "NoExecute",
                        "tolerationSeconds": 300
                    },
                    {
                        "key": "node.kubernetes.io/unreachable",
                        "operator": "Exists",
                        "effect": "NoExecute",
                        "tolerationSeconds": 300
                    }
                ],
                "priorityClassName": "system-cluster-critical",
                "priority": 2000000000,
                "enableServiceLinks": true,
                "preemptionPolicy": "PreemptLowerPriority"
            },
            "status": {
                "phase": "Running",
                "conditions": [
                    {
                        "type": "Initialized",
                        "status": "True",
                        "lastProbeTime": null,
                        "lastTransitionTime": "2021-08-04T23:05:12Z"
                    },
                    {
                        "type": "Ready",
                        "status": "True",
                        "lastProbeTime": null,
                        "lastTransitionTime": "2021-08-04T23:05:15Z"
                    },
                    {
                        "type": "ContainersReady",
                        "status": "True",
                        "lastProbeTime": null,
                        "lastTransitionTime": "2021-08-04T23:05:15Z"
                    },
                    {
                        "type": "PodScheduled",
                        "status": "True",
                        "lastProbeTime": null,
                        "lastTransitionTime": "2021-08-04T23:05:12Z"
                    }
                ],
                "hostIP": "172.20.63.4",
                "podIP": "100.96.4.2",
                "podIPs": [
                    {
                        "ip": "100.96.4.2"
                    }
                ],
                "startTime": "2021-08-04T23:05:12Z",
                "containerStatuses": [
                    {
                        "name": "coredns",
                        "state": {
                            "running": {
                                "startedAt": "2021-08-04T23:05:15Z"
                            }
                        },
                        "lastState": {},
                        "ready": true,
                        "restartCount": 0,
                        "image": "k8s.gcr.io/coredns/coredns:v1.8.4",
                        "imageID": "docker-pullable://k8s.gcr.io/coredns/coredns@sha256:6e5a02c21641597998b4be7cb5eb1e7b02c0d8d23cce4dd09f4682d463798890",
                        "containerID": "docker://034d07cdd6f438b5cec5143b57421befe6dd4c7ab67b381d46012a9f74f1303c",
                        "started": true
                    }
                ],
                "qosClass": "Burstable"
            }
        },
        {
            "metadata": {
                "name": "coredns-5dc785954d-gl2zd",
                "generateName": "coredns-5dc785954d-",
                "namespace": "kube-system",
                "uid": "7fab2781-f172-4db0-be70-583d379e7d86",
                "resourceVersion": "777",
                "creationTimestamp": "2021-08-04T23:05:14Z",
                "labels": {
                    "k8s-app": "kube-dns",
                    "pod-template-hash": "5dc785954d"
                },
                "ownerReferences": [
                    {
                        "apiVersion": "apps/v1",
                        "kind": "ReplicaSet",
                        "name": "coredns-5dc785954d",
                        "uid": "9ed368ba-f7cb-4a52-ba70-175185db5665",
                        "controller": true,
                        "blockOwnerDeletion": true
                    }
                ]
            },
            "spec": {
                "volumes": [
                    {
                        "name": "config-volume",
                        "configMap": {
                            "name": "coredns",
                            "items": [
                                {
                                    "key": "Corefile",
                                    "path": "Corefile"
                                }
                            ],
                            "defaultMode": 420
                        }
                    },
                    {
                        "name": "kube-api-access-w6rdj",
                        "projected": {
                            "sources": [
                                {
                                    "serviceAccountToken": {
                                        "expirationSeconds": 3607,
                                        "path": "token"
                                    }
                                },
                                {
                                    "configMap": {
                                        "name": "kube-root-ca.crt",
                                        "items": [
                                            {
                                                "key": "ca.crt",
                                                "path": "ca.crt"
                                            }
                                        ]
                                    }
                                },
                                {
                                    "downwardAPI": {
                                        "items": [
                                            {
                                                "path": "namespace",
                                                "fieldRef": {
                                                    "apiVersion": "v1",
                                                    "fieldPath": "metadata.namespace"
                                                }
                                            }
                                        ]
                                    }
                                }
                            ],
                            "defaultMode": 420
                        }
                    }
                ],
                "containers": [
                    {
                        "name": "coredns",
                        "image": "k8s.gcr.io/coredns/coredns:v1.8.4",
                        "args": [
                            "-conf",
                            "/etc/coredns/Corefile"
                        ],
                        "ports": [
                            {
                                "name": "dns",
                                "containerPort": 53,
                                "protocol": "UDP"
                            },
                            {
                                "name": "dns-tcp",
                                "containerPort": 53,
                                "protocol": "TCP"
                            },
                            {
                                "name": "metrics",
                                "containerPort": 9153,
                                "protocol": "TCP"
                            }
                        ],
                        "resources": {
                            "limits": {
                                "memory": "170Mi"
                            },
                            "requests": {
                                "cpu": "100m",
                                "memory": "70Mi"
                            }
                        },
                        "volumeMounts": [
                            {
                                "name": "config-volume",
                                "readOnly": true,
                                "mountPath": "/etc/coredns"
                            },
                            {
                                "name": "kube-api-access-w6rdj",
                                "readOnly": true,
                                "mountPath": "/var/run/secrets/kubernetes.io/serviceaccount"
                            }
                        ],
                        "livenessProbe": {
                            "httpGet": {
                                "path": "/health",
                                "port": 8080,
                                "scheme": "HTTP"
                            },
                            "initialDelaySeconds": 60,
                            "timeoutSeconds": 5,
                            "periodSeconds": 10,
                            "successThreshold": 1,
                            "failureThreshold": 5
                        },
                        "readinessProbe": {
                            "httpGet": {
                                "path": "/ready",
                                "port": 8181,
                                "scheme": "HTTP"
                            },
                            "timeoutSeconds": 1,
                            "periodSeconds": 10,
                            "successThreshold": 1,
                            "failureThreshold": 3
                        },
                        "terminationMessagePath": "/dev/termination-log",
                        "terminationMessagePolicy": "File",
                        "imagePullPolicy": "IfNotPresent",
                        "securityContext": {
                            "capabilities": {
                                "add": [
                                    "NET_BIND_SERVICE"
                                ],
                                "drop": [
                                    "all"
                                ]
                            },
                            "readOnlyRootFilesystem": true,
                            "allowPrivilegeEscalation": false
                        }
                    }
                ],
                "restartPolicy": "Always",
                "terminationGracePeriodSeconds": 30,
                "dnsPolicy": "Default",
                "nodeSelector": {
                    "kubernetes.io/os": "linux"
                },
                "serviceAccountName": "coredns",
                "serviceAccount": "coredns",
                "nodeName": "ip-172-20-61-222.eu-west-2.compute.internal",
                "securityContext": {},
                "affinity": {
                    "podAntiAffinity": {
                        "preferredDuringSchedulingIgnoredDuringExecution": [
                            {
                                "weight": 100,
                                "podAffinityTerm": {
                                    "labelSelector": {
                                        "matchExpressions": [
                                            {
                                                "key": "k8s-app",
                                                "operator": "In",
                                                "values": [
                                                    "kube-dns"
                                                ]
                                            }
                                        ]
                                    },
                                    "topologyKey": "kubernetes.io/hostname"
                                }
                            }
                        ]
                    }
                },
                "schedulerName": "default-scheduler",
                "tolerations": [
                    {
                        "key": "CriticalAddonsOnly",
                        "operator": "Exists"
                    },
                    {
                        "key": "node.kubernetes.io/not-ready",
                        "operator": "Exists",
                        "effect": "NoExecute",
                        "tolerationSeconds": 300
                    },
                    {
                        "key": "node.kubernetes.io/unreachable",
                        "operator": "Exists",
                        "effect": "NoExecute",
                        "tolerationSeconds": 300
                    }
                ],
                "priorityClassName": "system-cluster-critical",
                "priority": 2000000000,
                "enableServiceLinks": true,
                "preemptionPolicy": "PreemptLowerPriority"
            },
            "status": {
                "phase": "Running",
                "conditions": [
                    {
                        "type": "Initialized",
                        "status": "True",
                        "lastProbeTime": null,
                        "lastTransitionTime": "2021-08-04T23:05:14Z"
                    },
                    {
                        "type": "Ready",
                        "status": "True",
                        "lastProbeTime": null,
                        "lastTransitionTime": "2021-08-04T23:05:18Z"
                    },
                    {
                        "type": "ContainersReady",
                        "status": "True",
                        "lastProbeTime": null,
                        "lastTransitionTime": "2021-08-04T23:05:18Z"
                    },
                    {
                        "type": "PodScheduled",
                        "status": "True",
                        "lastProbeTime": null,
                        "lastTransitionTime": "2021-08-04T23:05:14Z"
                    }
                ],
                "hostIP": "172.20.61.222",
                "podIP": "100.96.2.2",
                "podIPs": [
                    {
                        "ip": "100.96.2.2"
                    }
                ],
                "startTime": "2021-08-04T23:05:14Z",
                "containerStatuses": [
                    {
                        "name": "coredns",
                        "state": {
                            "running": {
                                "startedAt": "2021-08-04T23:05:17Z"
                            }
                        },
                        "lastState": {},
                        "ready": true,
                        "restartCount": 0,
                        "image": "k8s.gcr.io/coredns/coredns:v1.8.4",
                        "imageID": "docker-pullable://k8s.gcr.io/coredns/coredns@sha256:6e5a02c21641597998b4be7cb5eb1e7b02c0d8d23cce4dd09f4682d463798890",
                        "containerID": "docker://011a76b1a4e5f501fbeec8fc653dde4cd44cc2694236bf3b9465a64ccb27e9d9",
                        "started": true
                    }
                ],
                "qosClass": "Burstable"
            }
        },
        {
            "metadata": {
                "name": "coredns-autoscaler-84d4cfd89c-2g6cx",
                "generateName": "coredns-autoscaler-84d4cfd89c-",
                "namespace": "kube-system",
                "uid": "b67ab2fa-6619-47f2-b077-a79e2689ff4f",
                "resourceVersion": "754",
                "creationTimestamp": "2021-08-04T23:03:34Z",
                "labels": {
                    "k8s-app": "coredns-autoscaler",
                    "pod-template-hash": "84d4cfd89c"
                },
                "annotations": {
                    "scheduler.alpha.kubernetes.io/critical-pod": ""
                },
                "ownerReferences": [
                    {
                        "apiVersion": "apps/v1",
                        "kind": "ReplicaSet",
                        "name": "coredns-autoscaler-84d4cfd89c",
                        "uid": "12a96db0-1c3d-4fb0-bcf0-c858872ac43a",
                        "controller": true,
                        "blockOwnerDeletion": true
                    }
                ]
            },
            "spec": {
                "volumes": [
                    {
                        "name": "kube-api-access-84fdd",
                        "projected": {
                            "sources": [
                                {
                                    "serviceAccountToken": {
                                        "expirationSeconds": 3607,
                                        "path": "token"
                                    }
                                },
                                {
                                    "configMap": {
                                        "name": "kube-root-ca.crt",
                                        "items": [
                                            {
                                                "key": "ca.crt",
                                                "path": "ca.crt"
                                            }
                                        ]
                                    }
                                },
                                {
                                    "downwardAPI": {
                                        "items": [
                                            {
                                                "path": "namespace",
                                                "fieldRef": {
                                                    "apiVersion": "v1",
                                                    "fieldPath": "metadata.namespace"
                                                }
                                            }
                                        ]
                                    }
                                }
                            ],
                            "defaultMode": 420
                        }
                    }
                ],
                "containers": [
                    {
                        "name": "autoscaler",
                        "image": "k8s.gcr.io/cpa/cluster-proportional-autoscaler:1.8.4",
                        "command": [
                            "/cluster-proportional-autoscaler",
                            "--namespace=kube-system",
                            "--configmap=coredns-autoscaler",
                            "--target=Deployment/coredns",
                            "--default-params={\"linear\":{\"coresPerReplica\":256,\"nodesPerReplica\":16,\"preventSinglePointFailure\":true}}",
                            "--logtostderr=true",
                            "--v=2"
                        ],
                        "resources": {
                            "requests": {
                                "cpu": "20m",
                                "memory": "10Mi"
                            }
                        },
                        "volumeMounts": [
                            {
                                "name": "kube-api-access-84fdd",
                                "readOnly": true,
                                "mountPath": "/var/run/secrets/kubernetes.io/serviceaccount"
                            }
                        ],
                        "terminationMessagePath": "/dev/termination-log",
                        "terminationMessagePolicy": "File",
                        "imagePullPolicy": "IfNotPresent"
                    }
                ],
                "restartPolicy": "Always",
                "terminationGracePeriodSeconds": 30,
                "dnsPolicy": "ClusterFirst",
                "nodeSelector": {
                    "kubernetes.io/os": "linux"
                },
                "serviceAccountName": "coredns-autoscaler",
                "serviceAccount": "coredns-autoscaler",
                "nodeName": "ip-172-20-45-94.eu-west-2.compute.internal",
                "securityContext": {},
                "schedulerName": "default-scheduler",
                "tolerations": [
                    {
                        "key": "CriticalAddonsOnly",
                        "operator": "Exists"
                    },
                    {
                        "key": "node.kubernetes.io/not-ready",
                        "operator": "Exists",
                        "effect": "NoExecute",
                        "tolerationSeconds": 300
                    },
                    {
                        "key": "node.kubernetes.io/unreachable",
                        "operator": "Exists",
                        "effect": "NoExecute",
                        "tolerationSeconds": 300
                    }
                ],
                "priorityClassName": "system-cluster-critical",
                "priority": 2000000000,
                "enableServiceLinks": true,
                "preemptionPolicy": "PreemptLowerPriority"
            },
            "status": {
                "phase": "Running",
                "conditions": [
                    {
                        "type": "Initialized",
                        "status": "True",
                        "lastProbeTime": null,
                        "lastTransitionTime": "2021-08-04T23:05:11Z"
                    },
                    {
                        "type": "Ready",
                        "status": "True",
                        "lastProbeTime": null,
                        "lastTransitionTime": "2021-08-04T23:05:14Z"
                    },
                    {
                        "type": "ContainersReady",
                        "status": "True",
                        "lastProbeTime": null,
                        "lastTransitionTime": "2021-08-04T23:05:14Z"
                    },
                    {
                        "type": "PodScheduled",
                        "status": "True",
                        "lastProbeTime": null,
                        "lastTransitionTime": "2021-08-04T23:05:11Z"
                    }
                ],
                "hostIP": "172.20.45.94",
                "podIP": "100.96.3.2",
                "podIPs": [
                    {
                        "ip": "100.96.3.2"
                    }
                ],
                "startTime": "2021-08-04T23:05:11Z",
                "containerStatuses": [
                    {
                        "name": "autoscaler",
                        "state": {
                            "running": {
                                "startedAt": "2021-08-04T23:05:14Z"
                            }
                        },
                        "lastState": {},
                        "ready": true,
                        "restartCount": 0,
                        "image": "k8s.gcr.io/cpa/cluster-proportional-autoscaler:1.8.4",
                        "imageID": "docker-pullable://k8s.gcr.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def",
                        "containerID": "docker://d49e8c834fb498df3c2392f3f2a043067d30922c9a9ae97335347bf7ab5008dc",
                        "started": true
                    }
                ],
                "qosClass": "Burstable"
            }
        },
        {
            "metadata": {
                "name": "dns-controller-7f4474bbb-jv7hm",
                "generateName": "dns-controller-7f4474bbb-",
                "namespace": "kube-system",
                "uid": "d966147d-f50f-408d-bdab-64b6df29854e",
                "resourceVersion": "452",
                "creationTimestamp": "2021-08-04T23:03:34Z",
                "labels": {
                    "k8s-addon": "dns-controller.addons.k8s.io",
                    "k8s-app": "dns-controller",
                    "pod-template-hash": "7f4474bbb",
                    "version": "v1.21.0"
                },
                "annotations": {
                    "scheduler.alpha.kubernetes.io/critical-pod": ""
                },
                "ownerReferences": [
                    {
                        "apiVersion": "apps/v1",
                        "kind": "ReplicaSet",
                        "name": "dns-controller-7f4474bbb",
                        "uid": "1c596903-5a43-4187-82e7-8c0e58b2275b",
                        "controller": true,
                        "blockOwnerDeletion": true
                    }
                ]
            },
            "spec": {
                "volumes": [
                    {
                        "name": "kube-api-access-xzfmv",
                        "projected": {
                            "sources": [
                                {
                                    "serviceAccountToken": {
                                        "expirationSeconds": 3607,
                                        "path": "token"
                                    }
                                },
                                {
                                    "configMap": {
                                        "name": "kube-root-ca.crt",
                                        "items": [
                                            {
                                                "key": "ca.crt",
                                                "path": "ca.crt"
                                            }
                                        ]
                                    }
                                },
                                {
                                    "downwardAPI": {
                                        "items": [
                                            {
                                                "path": "namespace",
                                                "fieldRef": {
                                                    "apiVersion": "v1",
                                                    "fieldPath": "metadata.namespace"
                                                }
                                            }
                                        ]
                                    }
                                }
                            ],
                            "defaultMode": 420
                        }
                    }
                ],
                "containers": [
                    {
                        "name": "dns-controller",
                        "image": "k8s.gcr.io/kops/dns-controller:1.21.0",
                        "command": [
                            "/dns-controller",
                            "--watch-ingress=false",
                            "--dns=aws-route53",
                            "--zone=*/ZEMLNXIIWQ0RV",
                            "--zone=*/*",
                            "-v=2"
                        ],
                        "env": [
                            {
                                "name": "KUBERNETES_SERVICE_HOST",
                                "value": "127.0.0.1"
                            }
                        ],
                        "resources": {
                            "requests": {
                                "cpu": "50m",
                                "memory": "50Mi"
                            }
                        },
                        "volumeMounts": [
                            {
                                "name": "kube-api-access-xzfmv",
                                "readOnly": true,
                                "mountPath": "/var/run/secrets/kubernetes.io/serviceaccount"
                            }
                        ],
                        "terminationMessagePath": "/dev/termination-log",
                        "terminationMessagePolicy": "File",
                        "imagePullPolicy": "IfNotPresent",
                        "securityContext": {
                            "runAsNonRoot": true
                        }
                    }
                ],
                "restartPolicy": "Always",
                "terminationGracePeriodSeconds": 30,
                "dnsPolicy": "Default",
                "nodeSelector": {
                    "node-role.kubernetes.io/master": ""
                },
                "serviceAccountName": "dns-controller",
                "serviceAccount": "dns-controller",
                "nodeName": "ip-172-20-63-249.eu-west-2.compute.internal",
                "hostNetwork": true,
                "securityContext": {},
                "schedulerName": "default-scheduler",
                "tolerations": [
                    {
                        "operator": "Exists"
                    }
                ],
                "priorityClassName": "system-cluster-critical",
                "priority": 2000000000,
                "enableServiceLinks": true,
                "preemptionPolicy": "PreemptLowerPriority"
            },
            "status": {
                "phase": "Running",
                "conditions": [
                    {
                        "type": "Initialized",
                        "status": "True",
                        "lastProbeTime": null,
                        "lastTransitionTime": "2021-08-04T23:03:34Z"
                    },
                    {
                        "type": "Ready",
                        "status": "True",
                        "lastProbeTime": null,
                        "lastTransitionTime": "2021-08-04T23:03:36Z"
                    },
                    {
                        "type": "ContainersReady",
                        "status": "True",
                        "lastProbeTime": null,
                        "lastTransitionTime": "2021-08-04T23:03:36Z"
                    },
                    {
                        "type": "PodScheduled",
                        "status": "True",
                        "lastProbeTime": null,
                        "lastTransitionTime": "2021-08-04T23:03:34Z"
                    }
                ],
                "hostIP": "172.20.63.249",
                "podIP": "172.20.63.249",
                "podIPs": [
                    {
                        "ip": "172.20.63.249"
                    }
                ],
                "startTime": "2021-08-04T23:03:34Z",
                "containerStatuses": [
                    {
                        "name": "dns-controller",
                        "state": {
                            "running": {
                                "startedAt": "2021-08-04T23:03:35Z"
                            }
                        },
                        "lastState": {},
                        "ready": true,
                        "restartCount": 0,
                        "image": "k8s.gcr.io/kops/dns-controller:1.21.0",
                        "imageID": "docker://sha256:7f3145a8365d805d8e57215a147a2f8add3ac2c67bccb4dae7e37d163459076e",
                        "containerID": "docker://001aa2bb26fae72990e77de790ae808cef7895bafbfcfd1944c5692421d8448b",
                        "started": true
                    }
                ],
                "qosClass": "Burstable"
            }
        },
        {
            "metadata": {
                "name": "etcd-manager-events-ip-172-20-63-249.eu-west-2.compute.internal",
                "namespace": "kube-system",
                "uid": "b3da5b65-1e2f-41e1-94f8-7eb6b29500be",
                "resourceVersion": "572",
                "creationTimestamp": "2021-08-04T23:04:22Z",
                "labels": {
                    "k8s-app": "etcd-manager-events"
                },
                "annotations": {
                    "kubernetes.io/config.hash": "24c3d6b1be4753071b3cacc8d805cfad",
                    "kubernetes.io/config.mirror": "24c3d6b1be4753071b3cacc8d805cfad",
                    "kubernetes.io/config.seen": "2021-08-04T23:02:32.675932096Z",
                    "kubernetes.io/config.source": "file",
                    "scheduler.alpha.kubernetes.io/critical-pod": ""
                },
                "ownerReferences": [
                    {
                        "apiVersion": "v1",
                        "kind": "Node",
                        "name": "ip-172-20-63-249.eu-west-2.compute.internal",
                        "uid": "a5ac1df0-0f38-4825-8688-77531fa9af90",
                        "controller": true
                    }
                ]
            },
            "spec": {
                "volumes": [
                    {
                        "name": "rootfs",
                        "hostPath": {
                            "path": "/",
                            "type": "Directory"
                        }
                    },
                    {
                        "name": "run",
                        "hostPath": {
                            "path": "/run",
                            "type": "DirectoryOrCreate"
                        }
                    },
                    {
                        "name": "pki",
                        "hostPath": {
                            "path": "/etc/kubernetes/pki/etcd-manager-events",
                            "type": "DirectoryOrCreate"
                        }
                    },
                    {
                        "name": "varlogetcd",
                        "hostPath": {
                            "path": "/var/log/etcd-events.log",
                            "type": "FileOrCreate"
                        }
                    }
                ],
                "containers": [
                    {
                        "name": "etcd-manager",
                        "image": "k8s.gcr.io/etcdadm/etcd-manager:3.0.20210430",
                        "command": [
                            "/bin/sh",
                            "-c",
                            "mkfifo /tmp/pipe; (tee -a /var/log/etcd.log \u003c /tmp/pipe \u0026 ) ; exec /etcd-manager --backup-store=s3://k8s-kops-prow/e2e-489af555f9-bbb74.test-cncf-aws.k8s.io/backups/etcd/events --client-urls=https://__name__:4002 --cluster-name=etcd-events --containerized=true --dns-suffix=.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io --etcd-insecure=false --grpc-port=3997 --insecure=false --peer-urls=https://__name__:2381 --quarantine-client-urls=https://__name__:3995 --v=6 --volume-name-tag=k8s.io/etcd/events --volume-provider=aws --volume-tag=k8s.io/etcd/events --volume-tag=k8s.io/role/master=1 --volume-tag=kubernetes.io/cluster/e2e-489af555f9-bbb74.test-cncf-aws.k8s.io=owned \u003e /tmp/pipe 2\u003e\u00261"
                        ],
                        "resources": {
                            "requests": {
                                "cpu": "100m",
                                "memory": "100Mi"
                            }
                        },
                        "volumeMounts": [
                            {
                                "name": "rootfs",
                                "mountPath": "/rootfs"
                            },
                            {
                                "name": "run",
                                "mountPath": "/run"
                            },
                            {
                                "name": "pki",
                                "mountPath": "/etc/kubernetes/pki/etcd-manager"
                            },
                            {
                                "name": "varlogetcd",
                                "mountPath": "/var/log/etcd.log"
                            }
                        ],
                        "terminationMessagePath": "/dev/termination-log",
                        "terminationMessagePolicy": "File",
                        "imagePullPolicy": "IfNotPresent",
                        "securityContext": {
                            "privileged": true
                        }
                    }
                ],
\"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"ClusterFirst\",\n                \"nodeName\": \"ip-172-20-63-249.eu-west-2.compute.internal\",\n                \"hostNetwork\": true,\n                \"hostPID\": true,\n                \"securityContext\": {},\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"key\": \"CriticalAddonsOnly\",\n                        \"operator\": \"Exists\"\n                    },\n                    {\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    }\n                ],\n                \"priorityClassName\": \"system-cluster-critical\",\n                \"priority\": 2000000000,\n                \"enableServiceLinks\": true,\n                \"preemptionPolicy\": \"PreemptLowerPriority\"\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-08-04T23:02:33Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-08-04T23:02:53Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-08-04T23:02:53Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-08-04T23:02:33Z\"\n                    }\n                ],\n                \"hostIP\": \"172.20.63.249\",\n                \"podIP\": \"172.20.63.249\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"172.20.63.249\"\n                    }\n                ],\n                \"startTime\": \"2021-08-04T23:02:33Z\",\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"etcd-manager\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2021-08-04T23:02:53Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"k8s.gcr.io/etcdadm/etcd-manager:3.0.20210430\",\n                        \"imageID\": \"docker-pullable://k8s.gcr.io/etcdadm/etcd-manager@sha256:ebb73d3d4a99da609f9e01c556cd9f9aa7a0aecba8f5bc5588d7c45eb38e3a7e\",\n                        \"containerID\": \"docker://871280dfe1b78b293ab712a1e14c662b23ee825820faa94aaa08e97f0981fd58\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Burstable\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": 
\"etcd-manager-main-ip-172-20-63-249.eu-west-2.compute.internal\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"328ead06-0964-4ef1-a40b-7f222d7e41a2\",\n                \"resourceVersion\": \"521\",\n                \"creationTimestamp\": \"2021-08-04T23:03:59Z\",\n                \"labels\": {\n                    \"k8s-app\": \"etcd-manager-main\"\n                },\n                \"annotations\": {\n                    \"kubernetes.io/config.hash\": \"95303ad9ca09f0b0624a98bb1c3d5670\",\n                    \"kubernetes.io/config.mirror\": \"95303ad9ca09f0b0624a98bb1c3d5670\",\n                    \"kubernetes.io/config.seen\": \"2021-08-04T23:02:32.675957902Z\",\n                    \"kubernetes.io/config.source\": \"file\",\n                    \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"v1\",\n                        \"kind\": \"Node\",\n                        \"name\": \"ip-172-20-63-249.eu-west-2.compute.internal\",\n                        \"uid\": \"a5ac1df0-0f38-4825-8688-77531fa9af90\",\n                        \"controller\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"rootfs\",\n                        \"hostPath\": {\n                            \"path\": \"/\",\n                            \"type\": \"Directory\"\n                        }\n                    },\n                    {\n                        \"name\": \"run\",\n                        \"hostPath\": {\n                            \"path\": \"/run\",\n                            \"type\": \"DirectoryOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": \"pki\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/kubernetes/pki/etcd-manager-main\",\n                            \"type\": \"DirectoryOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": \"varlogetcd\",\n                        \"hostPath\": {\n                            \"path\": \"/var/log/etcd.log\",\n                            \"type\": \"FileOrCreate\"\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"etcd-manager\",\n                        \"image\": \"k8s.gcr.io/etcdadm/etcd-manager:3.0.20210430\",\n                        \"command\": [\n                            \"/bin/sh\",\n                            \"-c\",\n                            \"mkfifo /tmp/pipe; (tee -a /var/log/etcd.log \\u003c /tmp/pipe \\u0026 ) ; exec /etcd-manager --backup-store=s3://k8s-kops-prow/e2e-489af555f9-bbb74.test-cncf-aws.k8s.io/backups/etcd/main --client-urls=https://__name__:4001 --cluster-name=etcd --containerized=true --dns-suffix=.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io --etcd-insecure=false --grpc-port=3996 --insecure=false --peer-urls=https://__name__:2380 --quarantine-client-urls=https://__name__:3994 --v=6 --volume-name-tag=k8s.io/etcd/main --volume-provider=aws --volume-tag=k8s.io/etcd/main --volume-tag=k8s.io/role/master=1 --volume-tag=kubernetes.io/cluster/e2e-489af555f9-bbb74.test-cncf-aws.k8s.io=owned \\u003e /tmp/pipe 2\\u003e\\u00261\"\n             
           ],\n                        \"resources\": {\n                            \"requests\": {\n                                \"cpu\": \"200m\",\n                                \"memory\": \"100Mi\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"rootfs\",\n                                \"mountPath\": \"/rootfs\"\n                            },\n                            {\n                                \"name\": \"run\",\n                                \"mountPath\": \"/run\"\n                            },\n                            {\n                                \"name\": \"pki\",\n                                \"mountPath\": \"/etc/kubernetes/pki/etcd-manager\"\n                            },\n                            {\n                                \"name\": \"varlogetcd\",\n                                \"mountPath\": \"/var/log/etcd.log\"\n                            }\n                        ],\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\",\n                        \"securityContext\": {\n                            \"privileged\": true\n                        }\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"ClusterFirst\",\n                \"nodeName\": \"ip-172-20-63-249.eu-west-2.compute.internal\",\n                \"hostNetwork\": true,\n                \"hostPID\": true,\n                \"securityContext\": {},\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"key\": \"CriticalAddonsOnly\",\n                        \"operator\": \"Exists\"\n                    },\n                    {\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    }\n                ],\n                \"priorityClassName\": \"system-cluster-critical\",\n                \"priority\": 2000000000,\n                \"enableServiceLinks\": true,\n                \"preemptionPolicy\": \"PreemptLowerPriority\"\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-08-04T23:02:33Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-08-04T23:02:52Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-08-04T23:02:52Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n    
                    \"lastTransitionTime\": \"2021-08-04T23:02:33Z\"\n                    }\n                ],\n                \"hostIP\": \"172.20.63.249\",\n                \"podIP\": \"172.20.63.249\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"172.20.63.249\"\n                    }\n                ],\n                \"startTime\": \"2021-08-04T23:02:33Z\",\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"etcd-manager\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2021-08-04T23:02:52Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"k8s.gcr.io/etcdadm/etcd-manager:3.0.20210430\",\n                        \"imageID\": \"docker-pullable://k8s.gcr.io/etcdadm/etcd-manager@sha256:ebb73d3d4a99da609f9e01c556cd9f9aa7a0aecba8f5bc5588d7c45eb38e3a7e\",\n                        \"containerID\": \"docker://bcdc1adabcf0b1f03a687fc0b2cbe6a294d269eb1b00d254accbac521b5895c8\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Burstable\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kops-controller-9n9bw\",\n                \"generateName\": \"kops-controller-\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"790991d3-5c26-4686-95c6-f38133e4a322\",\n                \"resourceVersion\": \"450\",\n                \"creationTimestamp\": \"2021-08-04T23:03:34Z\",\n                \"labels\": {\n                    \"controller-revision-hash\": \"6499bdfb6f\",\n                    \"k8s-addon\": \"kops-controller.addons.k8s.io\",\n                    \"k8s-app\": \"kops-controller\",\n                    \"pod-template-generation\": \"1\",\n                    \"version\": \"v1.21.0\"\n                },\n                \"annotations\": {\n                    \"dns.alpha.kubernetes.io/internal\": \"kops-controller.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"apps/v1\",\n                        \"kind\": \"DaemonSet\",\n                        \"name\": \"kops-controller\",\n                        \"uid\": \"a6af55ba-600d-47e8-a35c-7f58de188170\",\n                        \"controller\": true,\n                        \"blockOwnerDeletion\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"kops-controller-config\",\n                        \"configMap\": {\n                            \"name\": \"kops-controller\",\n                            \"defaultMode\": 420\n                        }\n                    },\n                    {\n                        \"name\": \"kops-controller-pki\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/kubernetes/kops-controller/\",\n                            \"type\": \"Directory\"\n                        }\n                    },\n                    {\n                        \"name\": \"kube-api-access-rv26j\",\n                        \"projected\": 
{\n                            \"sources\": [\n                                {\n                                    \"serviceAccountToken\": {\n                                        \"expirationSeconds\": 3607,\n                                        \"path\": \"token\"\n                                    }\n                                },\n                                {\n                                    \"configMap\": {\n                                        \"name\": \"kube-root-ca.crt\",\n                                        \"items\": [\n                                            {\n                                                \"key\": \"ca.crt\",\n                                                \"path\": \"ca.crt\"\n                                            }\n                                        ]\n                                    }\n                                },\n                                {\n                                    \"downwardAPI\": {\n                                        \"items\": [\n                                            {\n                                                \"path\": \"namespace\",\n                                                \"fieldRef\": {\n                                                    \"apiVersion\": \"v1\",\n                                                    \"fieldPath\": \"metadata.namespace\"\n                                                }\n                                            }\n                                        ]\n                                    }\n                                }\n                            ],\n                            \"defaultMode\": 420\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"kops-controller\",\n                        \"image\": \"k8s.gcr.io/kops/kops-controller:1.21.0\",\n                        \"command\": [\n                            \"/kops-controller\",\n                            \"--v=2\",\n                            \"--conf=/etc/kubernetes/kops-controller/config/config.yaml\"\n                        ],\n                        \"env\": [\n                            {\n                                \"name\": \"KUBERNETES_SERVICE_HOST\",\n                                \"value\": \"127.0.0.1\"\n                            }\n                        ],\n                        \"resources\": {\n                            \"requests\": {\n                                \"cpu\": \"50m\",\n                                \"memory\": \"50Mi\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"kops-controller-config\",\n                                \"mountPath\": \"/etc/kubernetes/kops-controller/config/\"\n                            },\n                            {\n                                \"name\": \"kops-controller-pki\",\n                                \"mountPath\": \"/etc/kubernetes/kops-controller/pki/\"\n                            },\n                            {\n                                \"name\": \"kube-api-access-rv26j\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n                            }\n                        ],\n                
        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\",\n                        \"securityContext\": {\n                            \"runAsNonRoot\": true\n                        }\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"Default\",\n                \"nodeSelector\": {\n                    \"kops.k8s.io/kops-controller-pki\": \"\",\n                    \"node-role.kubernetes.io/master\": \"\"\n                },\n                \"serviceAccountName\": \"kops-controller\",\n                \"serviceAccount\": \"kops-controller\",\n                \"nodeName\": \"ip-172-20-63-249.eu-west-2.compute.internal\",\n                \"hostNetwork\": true,\n                \"securityContext\": {},\n                \"affinity\": {\n                    \"nodeAffinity\": {\n                        \"requiredDuringSchedulingIgnoredDuringExecution\": {\n                            \"nodeSelectorTerms\": [\n                                {\n                                    \"matchFields\": [\n                                        {\n                                            \"key\": \"metadata.name\",\n                                            \"operator\": \"In\",\n                                            \"values\": [\n                                                \"ip-172-20-63-249.eu-west-2.compute.internal\"\n                                            ]\n                                        }\n                                    ]\n                                }\n                            ]\n                        }\n                    }\n                },\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"key\": \"node-role.kubernetes.io/master\",\n                        \"operator\": \"Exists\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/not-ready\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/unreachable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/disk-pressure\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/memory-pressure\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/pid-pressure\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/unschedulable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": 
\"node.kubernetes.io/network-unavailable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    }\n                ],\n                \"priorityClassName\": \"system-node-critical\",\n                \"priority\": 2000001000,\n                \"enableServiceLinks\": true,\n                \"preemptionPolicy\": \"PreemptLowerPriority\"\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-08-04T23:03:34Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-08-04T23:03:36Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-08-04T23:03:36Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-08-04T23:03:34Z\"\n                    }\n                ],\n                \"hostIP\": \"172.20.63.249\",\n                \"podIP\": \"172.20.63.249\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"172.20.63.249\"\n                    }\n                ],\n                \"startTime\": \"2021-08-04T23:03:34Z\",\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"kops-controller\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2021-08-04T23:03:35Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"k8s.gcr.io/kops/kops-controller:1.21.0\",\n                        \"imageID\": \"docker://sha256:48f14c58987eeac3f8029fd97b769c4ef8e5f14c6658c39f18bb979354e838c0\",\n                        \"containerID\": \"docker://e38780ad1a72a696d6c2b7fab51002033200a201489e74bd972edd5278c666b7\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Burstable\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-apiserver-ip-172-20-63-249.eu-west-2.compute.internal\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"84f36682-70b4-4aa4-a498-5184991663ef\",\n                \"resourceVersion\": \"573\",\n                \"creationTimestamp\": \"2021-08-04T23:04:15Z\",\n                \"labels\": {\n                    \"k8s-app\": \"kube-apiserver\"\n                },\n                \"annotations\": {\n                    \"dns.alpha.kubernetes.io/external\": \"api.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io\",\n                    \"dns.alpha.kubernetes.io/internal\": 
\"api.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io\",\n                    \"kubernetes.io/config.hash\": \"1bae6f611dea53c74fa24608f1b8d1fb\",\n                    \"kubernetes.io/config.mirror\": \"1bae6f611dea53c74fa24608f1b8d1fb\",\n                    \"kubernetes.io/config.seen\": \"2021-08-04T23:02:32.675959743Z\",\n                    \"kubernetes.io/config.source\": \"file\",\n                    \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"v1\",\n                        \"kind\": \"Node\",\n                        \"name\": \"ip-172-20-63-249.eu-west-2.compute.internal\",\n                        \"uid\": \"a5ac1df0-0f38-4825-8688-77531fa9af90\",\n                        \"controller\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"logfile\",\n                        \"hostPath\": {\n                            \"path\": \"/var/log/kube-apiserver.log\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"etcssl\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/ssl\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"etcpkitls\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/pki/tls\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"etcpkica-trust\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/pki/ca-trust\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"usrsharessl\",\n                        \"hostPath\": {\n                            \"path\": \"/usr/share/ssl\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"usrssl\",\n                        \"hostPath\": {\n                            \"path\": \"/usr/ssl\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"usrlibssl\",\n                        \"hostPath\": {\n                            \"path\": \"/usr/lib/ssl\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"usrlocalopenssl\",\n                        \"hostPath\": {\n                            \"path\": \"/usr/local/openssl\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"varssl\",\n                        \"hostPath\": {\n                            \"path\": \"/var/ssl\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"etcopenssl\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/openssl\",\n    
                        \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"pki\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/kubernetes/pki/kube-apiserver\",\n                            \"type\": \"DirectoryOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": \"cloudconfig\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/kubernetes/cloud.config\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"srvkube\",\n                        \"hostPath\": {\n                            \"path\": \"/srv/kubernetes\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"srvsshproxy\",\n                        \"hostPath\": {\n                            \"path\": \"/srv/sshproxy\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"healthcheck-secrets\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/kubernetes/kube-apiserver-healthcheck/secrets\",\n                            \"type\": \"Directory\"\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"kube-apiserver\",\n                        \"image\": \"k8s.gcr.io/kube-apiserver-amd64:v1.21.3\",\n                        \"command\": [\n                            \"/usr/local/bin/kube-apiserver\"\n                        ],\n                        \"args\": [\n                            \"--allow-privileged=true\",\n                            \"--anonymous-auth=false\",\n                            \"--api-audiences=kubernetes.svc.default\",\n                            \"--apiserver-count=1\",\n                            \"--authorization-mode=Node,RBAC\",\n                            \"--bind-address=0.0.0.0\",\n                            \"--client-ca-file=/srv/kubernetes/ca.crt\",\n                            \"--cloud-config=/etc/kubernetes/cloud.config\",\n                            \"--cloud-provider=aws\",\n                            \"--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,NodeRestriction,ResourceQuota\",\n                            \"--etcd-cafile=/etc/kubernetes/pki/kube-apiserver/etcd-ca.crt\",\n                            \"--etcd-certfile=/etc/kubernetes/pki/kube-apiserver/etcd-client.crt\",\n                            \"--etcd-keyfile=/etc/kubernetes/pki/kube-apiserver/etcd-client.key\",\n                            \"--etcd-servers-overrides=/events#https://127.0.0.1:4002\",\n                            \"--etcd-servers=https://127.0.0.1:4001\",\n                            \"--insecure-port=0\",\n                            \"--kubelet-client-certificate=/srv/kubernetes/kubelet-api.crt\",\n                            \"--kubelet-client-key=/srv/kubernetes/kubelet-api.key\",\n                            \"--kubelet-preferred-address-types=InternalIP,Hostname,ExternalIP\",\n                       
     \"--proxy-client-cert-file=/srv/kubernetes/apiserver-aggregator.crt\",\n                            \"--proxy-client-key-file=/srv/kubernetes/apiserver-aggregator.key\",\n                            \"--requestheader-allowed-names=aggregator\",\n                            \"--requestheader-client-ca-file=/srv/kubernetes/apiserver-aggregator-ca.crt\",\n                            \"--requestheader-extra-headers-prefix=X-Remote-Extra-\",\n                            \"--requestheader-group-headers=X-Remote-Group\",\n                            \"--requestheader-username-headers=X-Remote-User\",\n                            \"--secure-port=443\",\n                            \"--service-account-issuer=https://api.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io\",\n                            \"--service-account-jwks-uri=https://api.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io/openid/v1/jwks\",\n                            \"--service-account-key-file=/srv/kubernetes/service-account.key\",\n                            \"--service-account-signing-key-file=/srv/kubernetes/service-account.key\",\n                            \"--service-cluster-ip-range=100.64.0.0/13\",\n                            \"--storage-backend=etcd3\",\n                            \"--tls-cert-file=/srv/kubernetes/server.crt\",\n                            \"--tls-private-key-file=/srv/kubernetes/server.key\",\n                            \"--v=2\",\n                            \"--logtostderr=false\",\n                            \"--alsologtostderr\",\n                            \"--log-file=/var/log/kube-apiserver.log\"\n                        ],\n                        \"ports\": [\n                            {\n                                \"name\": \"https\",\n                                \"hostPort\": 443,\n                                \"containerPort\": 443,\n                                \"protocol\": \"TCP\"\n                            }\n                        ],\n                        \"resources\": {\n                            \"requests\": {\n                                \"cpu\": \"150m\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"logfile\",\n                                \"mountPath\": \"/var/log/kube-apiserver.log\"\n                            },\n                            {\n                                \"name\": \"etcssl\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/etc/ssl\"\n                            },\n                            {\n                                \"name\": \"etcpkitls\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/etc/pki/tls\"\n                            },\n                            {\n                                \"name\": \"etcpkica-trust\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/etc/pki/ca-trust\"\n                            },\n                            {\n                                \"name\": \"usrsharessl\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/usr/share/ssl\"\n                            },\n                            {\n                                \"name\": \"usrssl\",\n                                \"readOnly\": true,\n    
                            \"mountPath\": \"/usr/ssl\"\n                            },\n                            {\n                                \"name\": \"usrlibssl\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/usr/lib/ssl\"\n                            },\n                            {\n                                \"name\": \"usrlocalopenssl\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/usr/local/openssl\"\n                            },\n                            {\n                                \"name\": \"varssl\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/ssl\"\n                            },\n                            {\n                                \"name\": \"etcopenssl\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/etc/openssl\"\n                            },\n                            {\n                                \"name\": \"pki\",\n                                \"mountPath\": \"/etc/kubernetes/pki/kube-apiserver\"\n                            },\n                            {\n                                \"name\": \"cloudconfig\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/etc/kubernetes/cloud.config\"\n                            },\n                            {\n                                \"name\": \"srvkube\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/srv/kubernetes\"\n                            },\n                            {\n                                \"name\": \"srvsshproxy\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/srv/sshproxy\"\n                            }\n                        ],\n                        \"livenessProbe\": {\n                            \"httpGet\": {\n                                \"path\": \"/healthz\",\n                                \"port\": 3990,\n                                \"host\": \"127.0.0.1\",\n                                \"scheme\": \"HTTP\"\n                            },\n                            \"initialDelaySeconds\": 45,\n                            \"timeoutSeconds\": 15,\n                            \"periodSeconds\": 10,\n                            \"successThreshold\": 1,\n                            \"failureThreshold\": 3\n                        },\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\"\n                    },\n                    {\n                        \"name\": \"healthcheck\",\n                        \"image\": \"k8s.gcr.io/kops/kube-apiserver-healthcheck:1.21.0\",\n                        \"command\": [\n                            \"/kube-apiserver-healthcheck\"\n                        ],\n                        \"args\": [\n                            \"--ca-cert=/secrets/ca.crt\",\n                            \"--client-cert=/secrets/client.crt\",\n                            \"--client-key=/secrets/client.key\"\n                        ],\n                        \"resources\": {},\n                        \"volumeMounts\": [\n                        
    {\n                                \"name\": \"healthcheck-secrets\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/secrets\"\n                            }\n                        ],\n                        \"livenessProbe\": {\n                            \"httpGet\": {\n                                \"path\": \"/.kube-apiserver-healthcheck/healthz\",\n                                \"port\": 3990,\n                                \"host\": \"127.0.0.1\",\n                                \"scheme\": \"HTTP\"\n                            },\n                            \"initialDelaySeconds\": 5,\n                            \"timeoutSeconds\": 5,\n                            \"periodSeconds\": 10,\n                            \"successThreshold\": 1,\n                            \"failureThreshold\": 3\n                        },\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\"\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"ClusterFirst\",\n                \"nodeName\": \"ip-172-20-63-249.eu-west-2.compute.internal\",\n                \"hostNetwork\": true,\n                \"securityContext\": {},\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"key\": \"CriticalAddonsOnly\",\n                        \"operator\": \"Exists\"\n                    },\n                    {\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    }\n                ],\n                \"priorityClassName\": \"system-cluster-critical\",\n                \"priority\": 2000000000,\n                \"enableServiceLinks\": true,\n                \"preemptionPolicy\": \"PreemptLowerPriority\"\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-08-04T23:02:33Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-08-04T23:03:07Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-08-04T23:03:07Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-08-04T23:02:33Z\"\n                    }\n                ],\n                \"hostIP\": \"172.20.63.249\",\n                \"podIP\": \"172.20.63.249\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"172.20.63.249\"\n                   
 }\n                ],\n                \"startTime\": \"2021-08-04T23:02:33Z\",\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"healthcheck\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2021-08-04T23:02:45Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"k8s.gcr.io/kops/kube-apiserver-healthcheck:1.21.0\",\n                        \"imageID\": \"docker://sha256:6e40345b768016dc24b743fb2ed010c1e6aea8ff0524c9c1fbe54ccf999aaeea\",\n                        \"containerID\": \"docker://38b6d91cf98e5bb115d6151b559026c6e2a124e8fa1f7af0936ceb2e00230c88\",\n                        \"started\": true\n                    },\n                    {\n                        \"name\": \"kube-apiserver\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2021-08-04T23:03:06Z\"\n                            }\n                        },\n                        \"lastState\": {\n                            \"terminated\": {\n                                \"exitCode\": 1,\n                                \"reason\": \"Error\",\n                                \"startedAt\": \"2021-08-04T23:02:44Z\",\n                                \"finishedAt\": \"2021-08-04T23:03:05Z\",\n                                \"containerID\": \"docker://f68ba2ceda8103eb36f9013a5fd129ecec2d2c7385d42ba00104ea6c08d368f9\"\n                            }\n                        },\n                        \"ready\": true,\n                        \"restartCount\": 1,\n                        \"image\": \"k8s.gcr.io/kube-apiserver-amd64:v1.21.3\",\n                        \"imageID\": \"docker://sha256:3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80\",\n                        \"containerID\": \"docker://57ef8820db73be177c8d4aa78a6bb13bbc24a872811836fc2bf5d71f8bea0bfa\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Burstable\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-controller-manager-ip-172-20-63-249.eu-west-2.compute.internal\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"321fd171-c2e9-48c9-b8c6-5785e031a90b\",\n                \"resourceVersion\": \"546\",\n                \"creationTimestamp\": \"2021-08-04T23:04:09Z\",\n                \"labels\": {\n                    \"k8s-app\": \"kube-controller-manager\"\n                },\n                \"annotations\": {\n                    \"kubernetes.io/config.hash\": \"05c48c2ac9fe4ee3ea588b218e90188b\",\n                    \"kubernetes.io/config.mirror\": \"05c48c2ac9fe4ee3ea588b218e90188b\",\n                    \"kubernetes.io/config.seen\": \"2021-08-04T23:02:32.675961336Z\",\n                    \"kubernetes.io/config.source\": \"file\",\n                    \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"v1\",\n                        \"kind\": \"Node\",\n                        \"name\": \"ip-172-20-63-249.eu-west-2.compute.internal\",\n                        
\"uid\": \"a5ac1df0-0f38-4825-8688-77531fa9af90\",\n                        \"controller\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"logfile\",\n                        \"hostPath\": {\n                            \"path\": \"/var/log/kube-controller-manager.log\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"etcssl\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/ssl\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"etcpkitls\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/pki/tls\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"etcpkica-trust\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/pki/ca-trust\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"usrsharessl\",\n                        \"hostPath\": {\n                            \"path\": \"/usr/share/ssl\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"usrssl\",\n                        \"hostPath\": {\n                            \"path\": \"/usr/ssl\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"usrlibssl\",\n                        \"hostPath\": {\n                            \"path\": \"/usr/lib/ssl\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"usrlocalopenssl\",\n                        \"hostPath\": {\n                            \"path\": \"/usr/local/openssl\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"varssl\",\n                        \"hostPath\": {\n                            \"path\": \"/var/ssl\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"etcopenssl\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/openssl\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"cloudconfig\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/kubernetes/cloud.config\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"srvkube\",\n                        \"hostPath\": {\n                            \"path\": \"/srv/kubernetes\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"varlibkcm\",\n                        
\"hostPath\": {\n                            \"path\": \"/var/lib/kube-controller-manager\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"volplugins\",\n                        \"hostPath\": {\n                            \"path\": \"/usr/libexec/kubernetes/kubelet-plugins/volume/exec/\",\n                            \"type\": \"\"\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"kube-controller-manager\",\n                        \"image\": \"k8s.gcr.io/kube-controller-manager-amd64:v1.21.3\",\n                        \"command\": [\n                            \"/usr/local/bin/kube-controller-manager\"\n                        ],\n                        \"args\": [\n                            \"--allocate-node-cidrs=true\",\n                            \"--attach-detach-reconcile-sync-period=1m0s\",\n                            \"--cloud-config=/etc/kubernetes/cloud.config\",\n                            \"--cloud-provider=aws\",\n                            \"--cluster-cidr=100.96.0.0/11\",\n                            \"--cluster-name=e2e-489af555f9-bbb74.test-cncf-aws.k8s.io\",\n                            \"--cluster-signing-cert-file=/srv/kubernetes/ca.crt\",\n                            \"--cluster-signing-key-file=/srv/kubernetes/ca.key\",\n                            \"--configure-cloud-routes=true\",\n                            \"--flex-volume-plugin-dir=/usr/libexec/kubernetes/kubelet-plugins/volume/exec/\",\n                            \"--kubeconfig=/var/lib/kube-controller-manager/kubeconfig\",\n                            \"--leader-elect=true\",\n                            \"--root-ca-file=/srv/kubernetes/ca.crt\",\n                            \"--service-account-private-key-file=/srv/kubernetes/service-account.key\",\n                            \"--use-service-account-credentials=true\",\n                            \"--v=2\",\n                            \"--logtostderr=false\",\n                            \"--alsologtostderr\",\n                            \"--log-file=/var/log/kube-controller-manager.log\"\n                        ],\n                        \"resources\": {\n                            \"requests\": {\n                                \"cpu\": \"100m\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"logfile\",\n                                \"mountPath\": \"/var/log/kube-controller-manager.log\"\n                            },\n                            {\n                                \"name\": \"etcssl\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/etc/ssl\"\n                            },\n                            {\n                                \"name\": \"etcpkitls\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/etc/pki/tls\"\n                            },\n                            {\n                                \"name\": \"etcpkica-trust\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/etc/pki/ca-trust\"\n                            },\n                            {\n                              
  \"name\": \"usrsharessl\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/usr/share/ssl\"\n                            },\n                            {\n                                \"name\": \"usrssl\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/usr/ssl\"\n                            },\n                            {\n                                \"name\": \"usrlibssl\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/usr/lib/ssl\"\n                            },\n                            {\n                                \"name\": \"usrlocalopenssl\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/usr/local/openssl\"\n                            },\n                            {\n                                \"name\": \"varssl\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/ssl\"\n                            },\n                            {\n                                \"name\": \"etcopenssl\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/etc/openssl\"\n                            },\n                            {\n                                \"name\": \"cloudconfig\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/etc/kubernetes/cloud.config\"\n                            },\n                            {\n                                \"name\": \"srvkube\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/srv/kubernetes\"\n                            },\n                            {\n                                \"name\": \"varlibkcm\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/lib/kube-controller-manager\"\n                            },\n                            {\n                                \"name\": \"volplugins\",\n                                \"mountPath\": \"/usr/libexec/kubernetes/kubelet-plugins/volume/exec/\"\n                            }\n                        ],\n                        \"livenessProbe\": {\n                            \"httpGet\": {\n                                \"path\": \"/healthz\",\n                                \"port\": 10252,\n                                \"host\": \"127.0.0.1\",\n                                \"scheme\": \"HTTP\"\n                            },\n                            \"initialDelaySeconds\": 15,\n                            \"timeoutSeconds\": 15,\n                            \"periodSeconds\": 10,\n                            \"successThreshold\": 1,\n                            \"failureThreshold\": 3\n                        },\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\"\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"ClusterFirst\",\n                \"nodeName\": \"ip-172-20-63-249.eu-west-2.compute.internal\",\n                \"hostNetwork\": true,\n      
          \"securityContext\": {},\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"key\": \"CriticalAddonsOnly\",\n                        \"operator\": \"Exists\"\n                    },\n                    {\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    }\n                ],\n                \"priorityClassName\": \"system-cluster-critical\",\n                \"priority\": 2000000000,\n                \"enableServiceLinks\": true,\n                \"preemptionPolicy\": \"PreemptLowerPriority\"\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-08-04T23:02:34Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-08-04T23:02:46Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-08-04T23:02:46Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-08-04T23:02:34Z\"\n                    }\n                ],\n                \"hostIP\": \"172.20.63.249\",\n                \"podIP\": \"172.20.63.249\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"172.20.63.249\"\n                    }\n                ],\n                \"startTime\": \"2021-08-04T23:02:34Z\",\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"kube-controller-manager\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2021-08-04T23:02:44Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"k8s.gcr.io/kube-controller-manager-amd64:v1.21.3\",\n                        \"imageID\": \"docker://sha256:bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9\",\n                        \"containerID\": \"docker://0e985a94d478e88e3703407333d4d2e0d32449395ca05627426147a7bbfc2562\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Burstable\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-ip-172-20-45-94.eu-west-2.compute.internal\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"d64efdb5-3ff1-456a-be84-4b4fb2d7d75d\",\n                \"resourceVersion\": \"787\",\n                \"creationTimestamp\": \"2021-08-04T23:05:11Z\",\n                \"labels\": {\n     
               \"k8s-app\": \"kube-proxy\",\n                    \"tier\": \"node\"\n                },\n                \"annotations\": {\n                    \"kubernetes.io/config.hash\": \"d34738fcc77cfaa58eb30a19e398fcf0\",\n                    \"kubernetes.io/config.mirror\": \"d34738fcc77cfaa58eb30a19e398fcf0\",\n                    \"kubernetes.io/config.seen\": \"2021-08-04T23:03:59.355906874Z\",\n                    \"kubernetes.io/config.source\": \"file\",\n                    \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"v1\",\n                        \"kind\": \"Node\",\n                        \"name\": \"ip-172-20-45-94.eu-west-2.compute.internal\",\n                        \"uid\": \"32db9293-751e-4d35-86aa-69cd645b7231\",\n                        \"controller\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"logfile\",\n                        \"hostPath\": {\n                            \"path\": \"/var/log/kube-proxy.log\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"kubeconfig\",\n                        \"hostPath\": {\n                            \"path\": \"/var/lib/kube-proxy/kubeconfig\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"modules\",\n                        \"hostPath\": {\n                            \"path\": \"/lib/modules\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"ssl-certs-hosts\",\n                        \"hostPath\": {\n                            \"path\": \"/usr/share/ca-certificates\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"iptableslock\",\n                        \"hostPath\": {\n                            \"path\": \"/run/xtables.lock\",\n                            \"type\": \"FileOrCreate\"\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"kube-proxy\",\n                        \"image\": \"k8s.gcr.io/kube-proxy-amd64:v1.21.3\",\n                        \"command\": [\n                            \"/usr/local/bin/kube-proxy\"\n                        ],\n                        \"args\": [\n                            \"--cluster-cidr=100.96.0.0/11\",\n                            \"--conntrack-max-per-core=131072\",\n                            \"--hostname-override=ip-172-20-45-94.eu-west-2.compute.internal\",\n                            \"--kubeconfig=/var/lib/kube-proxy/kubeconfig\",\n                            \"--master=https://api.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io\",\n                            \"--oom-score-adj=-998\",\n                            \"--v=2\",\n                            \"--logtostderr=false\",\n                            \"--alsologtostderr\",\n                            \"--log-file=/var/log/kube-proxy.log\"\n                        ],\n                        
\"resources\": {\n                            \"requests\": {\n                                \"cpu\": \"100m\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"logfile\",\n                                \"mountPath\": \"/var/log/kube-proxy.log\"\n                            },\n                            {\n                                \"name\": \"kubeconfig\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/lib/kube-proxy/kubeconfig\"\n                            },\n                            {\n                                \"name\": \"modules\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/lib/modules\"\n                            },\n                            {\n                                \"name\": \"ssl-certs-hosts\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/etc/ssl/certs\"\n                            },\n                            {\n                                \"name\": \"iptableslock\",\n                                \"mountPath\": \"/run/xtables.lock\"\n                            }\n                        ],\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\",\n                        \"securityContext\": {\n                            \"privileged\": true\n                        }\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"ClusterFirst\",\n                \"nodeName\": \"ip-172-20-45-94.eu-west-2.compute.internal\",\n                \"hostNetwork\": true,\n                \"securityContext\": {},\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"key\": \"CriticalAddonsOnly\",\n                        \"operator\": \"Exists\"\n                    },\n                    {\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    }\n                ],\n                \"priorityClassName\": \"system-node-critical\",\n                \"priority\": 2000001000,\n                \"enableServiceLinks\": true,\n                \"preemptionPolicy\": \"PreemptLowerPriority\"\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-08-04T23:03:59Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-08-04T23:04:01Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                     
   \"lastTransitionTime\": \"2021-08-04T23:04:01Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-08-04T23:03:59Z\"\n                    }\n                ],\n                \"hostIP\": \"172.20.45.94\",\n                \"podIP\": \"172.20.45.94\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"172.20.45.94\"\n                    }\n                ],\n                \"startTime\": \"2021-08-04T23:03:59Z\",\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"kube-proxy\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2021-08-04T23:04:01Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"k8s.gcr.io/kube-proxy-amd64:v1.21.3\",\n                        \"imageID\": \"docker://sha256:adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92\",\n                        \"containerID\": \"docker://48ca66a06855eefdd7aadd794e9dba06c60b2eab130d7671c054d88fe91d5c7c\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Burstable\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-ip-172-20-46-233.eu-west-2.compute.internal\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"48816729-8d08-4b4d-996f-91c4fe94eb40\",\n                \"resourceVersion\": \"786\",\n                \"creationTimestamp\": \"2021-08-04T23:05:12Z\",\n                \"labels\": {\n                    \"k8s-app\": \"kube-proxy\",\n                    \"tier\": \"node\"\n                },\n                \"annotations\": {\n                    \"kubernetes.io/config.hash\": \"7c87edd980d402a0abd94fa06d0ee773\",\n                    \"kubernetes.io/config.mirror\": \"7c87edd980d402a0abd94fa06d0ee773\",\n                    \"kubernetes.io/config.seen\": \"2021-08-04T23:03:59.216974391Z\",\n                    \"kubernetes.io/config.source\": \"file\",\n                    \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"v1\",\n                        \"kind\": \"Node\",\n                        \"name\": \"ip-172-20-46-233.eu-west-2.compute.internal\",\n                        \"uid\": \"1bccc413-dc9e-4fd8-8961-5944b9272f68\",\n                        \"controller\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"logfile\",\n                        \"hostPath\": {\n                            \"path\": \"/var/log/kube-proxy.log\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"kubeconfig\",\n                        \"hostPath\": {\n                            \"path\": \"/var/lib/kube-proxy/kubeconfig\",\n                            \"type\": 
\"\"\n                        }\n                    },\n                    {\n                        \"name\": \"modules\",\n                        \"hostPath\": {\n                            \"path\": \"/lib/modules\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"ssl-certs-hosts\",\n                        \"hostPath\": {\n                            \"path\": \"/usr/share/ca-certificates\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"iptableslock\",\n                        \"hostPath\": {\n                            \"path\": \"/run/xtables.lock\",\n                            \"type\": \"FileOrCreate\"\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"kube-proxy\",\n                        \"image\": \"k8s.gcr.io/kube-proxy-amd64:v1.21.3\",\n                        \"command\": [\n                            \"/usr/local/bin/kube-proxy\"\n                        ],\n                        \"args\": [\n                            \"--cluster-cidr=100.96.0.0/11\",\n                            \"--conntrack-max-per-core=131072\",\n                            \"--hostname-override=ip-172-20-46-233.eu-west-2.compute.internal\",\n                            \"--kubeconfig=/var/lib/kube-proxy/kubeconfig\",\n                            \"--master=https://api.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io\",\n                            \"--oom-score-adj=-998\",\n                            \"--v=2\",\n                            \"--logtostderr=false\",\n                            \"--alsologtostderr\",\n                            \"--log-file=/var/log/kube-proxy.log\"\n                        ],\n                        \"resources\": {\n                            \"requests\": {\n                                \"cpu\": \"100m\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"logfile\",\n                                \"mountPath\": \"/var/log/kube-proxy.log\"\n                            },\n                            {\n                                \"name\": \"kubeconfig\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/lib/kube-proxy/kubeconfig\"\n                            },\n                            {\n                                \"name\": \"modules\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/lib/modules\"\n                            },\n                            {\n                                \"name\": \"ssl-certs-hosts\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/etc/ssl/certs\"\n                            },\n                            {\n                                \"name\": \"iptableslock\",\n                                \"mountPath\": \"/run/xtables.lock\"\n                            }\n                        ],\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        
\"imagePullPolicy\": \"IfNotPresent\",\n                        \"securityContext\": {\n                            \"privileged\": true\n                        }\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"ClusterFirst\",\n                \"nodeName\": \"ip-172-20-46-233.eu-west-2.compute.internal\",\n                \"hostNetwork\": true,\n                \"securityContext\": {},\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"key\": \"CriticalAddonsOnly\",\n                        \"operator\": \"Exists\"\n                    },\n                    {\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    }\n                ],\n                \"priorityClassName\": \"system-node-critical\",\n                \"priority\": 2000001000,\n                \"enableServiceLinks\": true,\n                \"preemptionPolicy\": \"PreemptLowerPriority\"\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-08-04T23:03:59Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-08-04T23:04:01Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-08-04T23:04:01Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-08-04T23:03:59Z\"\n                    }\n                ],\n                \"hostIP\": \"172.20.46.233\",\n                \"podIP\": \"172.20.46.233\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"172.20.46.233\"\n                    }\n                ],\n                \"startTime\": \"2021-08-04T23:03:59Z\",\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"kube-proxy\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2021-08-04T23:04:01Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"k8s.gcr.io/kube-proxy-amd64:v1.21.3\",\n                        \"imageID\": \"docker://sha256:adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92\",\n                        \"containerID\": \"docker://995956bb6e1db45437ef7531e20585021fdcfe0560ec7470932141a7d0ae5b54\",\n                        \"started\": true\n                    }\n                ],\n     
           \"qosClass\": \"Burstable\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-ip-172-20-61-222.eu-west-2.compute.internal\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"96c9868d-94b1-4155-99ba-87ed568bc141\",\n                \"resourceVersion\": \"711\",\n                \"creationTimestamp\": \"2021-08-04T23:05:08Z\",\n                \"labels\": {\n                    \"k8s-app\": \"kube-proxy\",\n                    \"tier\": \"node\"\n                },\n                \"annotations\": {\n                    \"kubernetes.io/config.hash\": \"269817a607141fd48c3ece70794b26aa\",\n                    \"kubernetes.io/config.mirror\": \"269817a607141fd48c3ece70794b26aa\",\n                    \"kubernetes.io/config.seen\": \"2021-08-04T23:03:59.359633902Z\",\n                    \"kubernetes.io/config.source\": \"file\",\n                    \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"v1\",\n                        \"kind\": \"Node\",\n                        \"name\": \"ip-172-20-61-222.eu-west-2.compute.internal\",\n                        \"uid\": \"dc016872-b0f4-4115-a214-6310c7337f3a\",\n                        \"controller\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"logfile\",\n                        \"hostPath\": {\n                            \"path\": \"/var/log/kube-proxy.log\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"kubeconfig\",\n                        \"hostPath\": {\n                            \"path\": \"/var/lib/kube-proxy/kubeconfig\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"modules\",\n                        \"hostPath\": {\n                            \"path\": \"/lib/modules\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"ssl-certs-hosts\",\n                        \"hostPath\": {\n                            \"path\": \"/usr/share/ca-certificates\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"iptableslock\",\n                        \"hostPath\": {\n                            \"path\": \"/run/xtables.lock\",\n                            \"type\": \"FileOrCreate\"\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"kube-proxy\",\n                        \"image\": \"k8s.gcr.io/kube-proxy-amd64:v1.21.3\",\n                        \"command\": [\n                            \"/usr/local/bin/kube-proxy\"\n                        ],\n                        \"args\": [\n                            \"--cluster-cidr=100.96.0.0/11\",\n                            \"--conntrack-max-per-core=131072\",\n                            \"--hostname-override=ip-172-20-61-222.eu-west-2.compute.internal\",\n                            
\"--kubeconfig=/var/lib/kube-proxy/kubeconfig\",\n                            \"--master=https://api.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io\",\n                            \"--oom-score-adj=-998\",\n                            \"--v=2\",\n                            \"--logtostderr=false\",\n                            \"--alsologtostderr\",\n                            \"--log-file=/var/log/kube-proxy.log\"\n                        ],\n                        \"resources\": {\n                            \"requests\": {\n                                \"cpu\": \"100m\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"logfile\",\n                                \"mountPath\": \"/var/log/kube-proxy.log\"\n                            },\n                            {\n                                \"name\": \"kubeconfig\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/lib/kube-proxy/kubeconfig\"\n                            },\n                            {\n                                \"name\": \"modules\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/lib/modules\"\n                            },\n                            {\n                                \"name\": \"ssl-certs-hosts\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/etc/ssl/certs\"\n                            },\n                            {\n                                \"name\": \"iptableslock\",\n                                \"mountPath\": \"/run/xtables.lock\"\n                            }\n                        ],\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\",\n                        \"securityContext\": {\n                            \"privileged\": true\n                        }\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"ClusterFirst\",\n                \"nodeName\": \"ip-172-20-61-222.eu-west-2.compute.internal\",\n                \"hostNetwork\": true,\n                \"securityContext\": {},\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"key\": \"CriticalAddonsOnly\",\n                        \"operator\": \"Exists\"\n                    },\n                    {\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    }\n                ],\n                \"priorityClassName\": \"system-node-critical\",\n                \"priority\": 2000001000,\n                \"enableServiceLinks\": true,\n                \"preemptionPolicy\": \"PreemptLowerPriority\"\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-08-04T23:03:59Z\"\n       
             },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-08-04T23:04:01Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-08-04T23:04:01Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-08-04T23:03:59Z\"\n                    }\n                ],\n                \"hostIP\": \"172.20.61.222\",\n                \"podIP\": \"172.20.61.222\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"172.20.61.222\"\n                    }\n                ],\n                \"startTime\": \"2021-08-04T23:03:59Z\",\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"kube-proxy\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2021-08-04T23:04:01Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"k8s.gcr.io/kube-proxy-amd64:v1.21.3\",\n                        \"imageID\": \"docker://sha256:adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92\",\n                        \"containerID\": \"docker://5ad3f1007f1eb45ca622ed68b6a973e3797f0f3e9f2bec7c93c3d17d96f856ca\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Burstable\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-ip-172-20-63-249.eu-west-2.compute.internal\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"3acf5d3b-68bf-480b-a738-6f730eee0a19\",\n                \"resourceVersion\": \"574\",\n                \"creationTimestamp\": \"2021-08-04T23:04:16Z\",\n                \"labels\": {\n                    \"k8s-app\": \"kube-proxy\",\n                    \"tier\": \"node\"\n                },\n                \"annotations\": {\n                    \"kubernetes.io/config.hash\": \"c694321fe7e7d4a6a225c5ca1b1782fe\",\n                    \"kubernetes.io/config.mirror\": \"c694321fe7e7d4a6a225c5ca1b1782fe\",\n                    \"kubernetes.io/config.seen\": \"2021-08-04T23:02:32.675962598Z\",\n                    \"kubernetes.io/config.source\": \"file\",\n                    \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"v1\",\n                        \"kind\": \"Node\",\n                        \"name\": \"ip-172-20-63-249.eu-west-2.compute.internal\",\n                        \"uid\": \"a5ac1df0-0f38-4825-8688-77531fa9af90\",\n                        \"controller\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n    
                    \"name\": \"logfile\",\n                        \"hostPath\": {\n                            \"path\": \"/var/log/kube-proxy.log\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"kubeconfig\",\n                        \"hostPath\": {\n                            \"path\": \"/var/lib/kube-proxy/kubeconfig\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"modules\",\n                        \"hostPath\": {\n                            \"path\": \"/lib/modules\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"ssl-certs-hosts\",\n                        \"hostPath\": {\n                            \"path\": \"/usr/share/ca-certificates\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"iptableslock\",\n                        \"hostPath\": {\n                            \"path\": \"/run/xtables.lock\",\n                            \"type\": \"FileOrCreate\"\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"kube-proxy\",\n                        \"image\": \"k8s.gcr.io/kube-proxy-amd64:v1.21.3\",\n                        \"command\": [\n                            \"/usr/local/bin/kube-proxy\"\n                        ],\n                        \"args\": [\n                            \"--cluster-cidr=100.96.0.0/11\",\n                            \"--conntrack-max-per-core=131072\",\n                            \"--hostname-override=ip-172-20-63-249.eu-west-2.compute.internal\",\n                            \"--kubeconfig=/var/lib/kube-proxy/kubeconfig\",\n                            \"--master=https://127.0.0.1\",\n                            \"--oom-score-adj=-998\",\n                            \"--v=2\",\n                            \"--logtostderr=false\",\n                            \"--alsologtostderr\",\n                            \"--log-file=/var/log/kube-proxy.log\"\n                        ],\n                        \"resources\": {\n                            \"requests\": {\n                                \"cpu\": \"100m\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"logfile\",\n                                \"mountPath\": \"/var/log/kube-proxy.log\"\n                            },\n                            {\n                                \"name\": \"kubeconfig\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/lib/kube-proxy/kubeconfig\"\n                            },\n                            {\n                                \"name\": \"modules\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/lib/modules\"\n                            },\n                            {\n                                \"name\": \"ssl-certs-hosts\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/etc/ssl/certs\"\n       
                     },\n                            {\n                                \"name\": \"iptableslock\",\n                                \"mountPath\": \"/run/xtables.lock\"\n                            }\n                        ],\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\",\n                        \"securityContext\": {\n                            \"privileged\": true\n                        }\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"ClusterFirst\",\n                \"nodeName\": \"ip-172-20-63-249.eu-west-2.compute.internal\",\n                \"hostNetwork\": true,\n                \"securityContext\": {},\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"key\": \"CriticalAddonsOnly\",\n                        \"operator\": \"Exists\"\n                    },\n                    {\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    }\n                ],\n                \"priorityClassName\": \"system-node-critical\",\n                \"priority\": 2000001000,\n                \"enableServiceLinks\": true,\n                \"preemptionPolicy\": \"PreemptLowerPriority\"\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-08-04T23:02:34Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-08-04T23:02:53Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-08-04T23:02:53Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-08-04T23:02:34Z\"\n                    }\n                ],\n                \"hostIP\": \"172.20.63.249\",\n                \"podIP\": \"172.20.63.249\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"172.20.63.249\"\n                    }\n                ],\n                \"startTime\": \"2021-08-04T23:02:34Z\",\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"kube-proxy\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2021-08-04T23:02:53Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 
0,\n                        \"image\": \"k8s.gcr.io/kube-proxy-amd64:v1.21.3\",\n                        \"imageID\": \"docker-pullable://k8s.gcr.io/kube-proxy-amd64@sha256:af5c9bacb913b5751d2d94e11dfd4e183e97b1a4afce282be95ce177f4a0100b\",\n                        \"containerID\": \"docker://64a59860fbce5a03377686e536f7a38c41cfd20994800ce19d8e63180f46c5b7\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Burstable\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-ip-172-20-63-4.eu-west-2.compute.internal\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"9b6dc632-582c-478f-bbbd-3d8fd03ef91a\",\n                \"resourceVersion\": \"815\",\n                \"creationTimestamp\": \"2021-08-04T23:05:22Z\",\n                \"labels\": {\n                    \"k8s-app\": \"kube-proxy\",\n                    \"tier\": \"node\"\n                },\n                \"annotations\": {\n                    \"kubernetes.io/config.hash\": \"a445c4dcd73240c0b1e1cd43ddb2411a\",\n                    \"kubernetes.io/config.mirror\": \"a445c4dcd73240c0b1e1cd43ddb2411a\",\n                    \"kubernetes.io/config.seen\": \"2021-08-04T23:03:59.445785491Z\",\n                    \"kubernetes.io/config.source\": \"file\",\n                    \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"v1\",\n                        \"kind\": \"Node\",\n                        \"name\": \"ip-172-20-63-4.eu-west-2.compute.internal\",\n                        \"uid\": \"df5a8ae9-9ccf-4a2d-b0aa-fec4c12553bd\",\n                        \"controller\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"logfile\",\n                        \"hostPath\": {\n                            \"path\": \"/var/log/kube-proxy.log\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"kubeconfig\",\n                        \"hostPath\": {\n                            \"path\": \"/var/lib/kube-proxy/kubeconfig\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"modules\",\n                        \"hostPath\": {\n                            \"path\": \"/lib/modules\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"ssl-certs-hosts\",\n                        \"hostPath\": {\n                            \"path\": \"/usr/share/ca-certificates\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"iptableslock\",\n                        \"hostPath\": {\n                            \"path\": \"/run/xtables.lock\",\n                            \"type\": \"FileOrCreate\"\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"kube-proxy\",\n                        \"image\": 
\"k8s.gcr.io/kube-proxy-amd64:v1.21.3\",\n                        \"command\": [\n                            \"/usr/local/bin/kube-proxy\"\n                        ],\n                        \"args\": [\n                            \"--cluster-cidr=100.96.0.0/11\",\n                            \"--conntrack-max-per-core=131072\",\n                            \"--hostname-override=ip-172-20-63-4.eu-west-2.compute.internal\",\n                            \"--kubeconfig=/var/lib/kube-proxy/kubeconfig\",\n                            \"--master=https://api.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io\",\n                            \"--oom-score-adj=-998\",\n                            \"--v=2\",\n                            \"--logtostderr=false\",\n                            \"--alsologtostderr\",\n                            \"--log-file=/var/log/kube-proxy.log\"\n                        ],\n                        \"resources\": {\n                            \"requests\": {\n                                \"cpu\": \"100m\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"logfile\",\n                                \"mountPath\": \"/var/log/kube-proxy.log\"\n                            },\n                            {\n                                \"name\": \"kubeconfig\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/lib/kube-proxy/kubeconfig\"\n                            },\n                            {\n                                \"name\": \"modules\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/lib/modules\"\n                            },\n                            {\n                                \"name\": \"ssl-certs-hosts\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/etc/ssl/certs\"\n                            },\n                            {\n                                \"name\": \"iptableslock\",\n                                \"mountPath\": \"/run/xtables.lock\"\n                            }\n                        ],\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\",\n                        \"securityContext\": {\n                            \"privileged\": true\n                        }\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"ClusterFirst\",\n                \"nodeName\": \"ip-172-20-63-4.eu-west-2.compute.internal\",\n                \"hostNetwork\": true,\n                \"securityContext\": {},\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"key\": \"CriticalAddonsOnly\",\n                        \"operator\": \"Exists\"\n                    },\n                    {\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    }\n                ],\n                \"priorityClassName\": \"system-node-critical\",\n                \"priority\": 2000001000,\n                
\"enableServiceLinks\": true,\n                \"preemptionPolicy\": \"PreemptLowerPriority\"\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-08-04T23:03:59Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-08-04T23:04:01Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-08-04T23:04:01Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-08-04T23:03:59Z\"\n                    }\n                ],\n                \"hostIP\": \"172.20.63.4\",\n                \"podIP\": \"172.20.63.4\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"172.20.63.4\"\n                    }\n                ],\n                \"startTime\": \"2021-08-04T23:03:59Z\",\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"kube-proxy\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2021-08-04T23:04:01Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"k8s.gcr.io/kube-proxy-amd64:v1.21.3\",\n                        \"imageID\": \"docker://sha256:adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92\",\n                        \"containerID\": \"docker://db511cd1243c18a336129ac2f20aae0211fd39c6e99083eeeca4e68e4df3de6d\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Burstable\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-scheduler-ip-172-20-63-249.eu-west-2.compute.internal\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"efd5e081-387c-4057-9dcf-cb78e6978fb4\",\n                \"resourceVersion\": \"523\",\n                \"creationTimestamp\": \"2021-08-04T23:04:01Z\",\n                \"labels\": {\n                    \"k8s-app\": \"kube-scheduler\"\n                },\n                \"annotations\": {\n                    \"kubernetes.io/config.hash\": \"76009e83dc69bb11fe76a12059a2f8ec\",\n                    \"kubernetes.io/config.mirror\": \"76009e83dc69bb11fe76a12059a2f8ec\",\n                    \"kubernetes.io/config.seen\": \"2021-08-04T23:02:32.675963986Z\",\n                    \"kubernetes.io/config.source\": \"file\",\n                    \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n                },\n                \"ownerReferences\": [\n                    {\n              
          \"apiVersion\": \"v1\",\n                        \"kind\": \"Node\",\n                        \"name\": \"ip-172-20-63-249.eu-west-2.compute.internal\",\n                        \"uid\": \"a5ac1df0-0f38-4825-8688-77531fa9af90\",\n                        \"controller\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"varlibkubescheduler\",\n                        \"hostPath\": {\n                            \"path\": \"/var/lib/kube-scheduler\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"logfile\",\n                        \"hostPath\": {\n                            \"path\": \"/var/log/kube-scheduler.log\",\n                            \"type\": \"\"\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"kube-scheduler\",\n                        \"image\": \"k8s.gcr.io/kube-scheduler-amd64:v1.21.3\",\n                        \"command\": [\n                            \"/usr/local/bin/kube-scheduler\"\n                        ],\n                        \"args\": [\n                            \"--config=/var/lib/kube-scheduler/config.yaml\",\n                            \"--leader-elect=true\",\n                            \"--v=2\",\n                            \"--logtostderr=false\",\n                            \"--alsologtostderr\",\n                            \"--log-file=/var/log/kube-scheduler.log\"\n                        ],\n                        \"resources\": {\n                            \"requests\": {\n                                \"cpu\": \"100m\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"varlibkubescheduler\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/lib/kube-scheduler\"\n                            },\n                            {\n                                \"name\": \"logfile\",\n                                \"mountPath\": \"/var/log/kube-scheduler.log\"\n                            }\n                        ],\n                        \"livenessProbe\": {\n                            \"httpGet\": {\n                                \"path\": \"/healthz\",\n                                \"port\": 10251,\n                                \"host\": \"127.0.0.1\",\n                                \"scheme\": \"HTTP\"\n                            },\n                            \"initialDelaySeconds\": 15,\n                            \"timeoutSeconds\": 15,\n                            \"periodSeconds\": 10,\n                            \"successThreshold\": 1,\n                            \"failureThreshold\": 3\n                        },\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\"\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"ClusterFirst\",\n                \"nodeName\": 
\"ip-172-20-63-249.eu-west-2.compute.internal\",\n                \"hostNetwork\": true,\n                \"securityContext\": {},\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"key\": \"CriticalAddonsOnly\",\n                        \"operator\": \"Exists\"\n                    },\n                    {\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    }\n                ],\n                \"priorityClassName\": \"system-cluster-critical\",\n                \"priority\": 2000000000,\n                \"enableServiceLinks\": true,\n                \"preemptionPolicy\": \"PreemptLowerPriority\"\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-08-04T23:02:34Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-08-04T23:02:46Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-08-04T23:02:46Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-08-04T23:02:34Z\"\n                    }\n                ],\n                \"hostIP\": \"172.20.63.249\",\n                \"podIP\": \"172.20.63.249\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"172.20.63.249\"\n                    }\n                ],\n                \"startTime\": \"2021-08-04T23:02:34Z\",\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"kube-scheduler\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2021-08-04T23:02:45Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"k8s.gcr.io/kube-scheduler-amd64:v1.21.3\",\n                        \"imageID\": \"docker://sha256:6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a\",\n                        \"containerID\": \"docker://e248ea25dcd58798f3a6424e1c20a99dbed3b381078a0b87991a9cf845099451\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Burstable\"\n            }\n        }\n    ]\n}\n==== START logs for container coredns of pod kube-system/coredns-5dc785954d-7bjbh ====\n.:53\n[INFO] plugin/reload: Running configuration MD5 = 9e3e34ac93d9bb69126337d32f1195e3\nCoreDNS-1.8.4\nlinux/amd64, go1.16.4, 053c4d5\n==== END logs for container coredns of pod kube-system/coredns-5dc785954d-7bjbh 
====\n==== START logs for container coredns of pod kube-system/coredns-5dc785954d-gl2zd ====\n.:53\n[INFO] plugin/reload: Running configuration MD5 = 9e3e34ac93d9bb69126337d32f1195e3\nCoreDNS-1.8.4\nlinux/amd64, go1.16.4, 053c4d5\n==== END logs for container coredns of pod kube-system/coredns-5dc785954d-gl2zd ====\n==== START logs for container autoscaler of pod kube-system/coredns-autoscaler-84d4cfd89c-2g6cx ====\nI0804 23:05:14.052789       1 autoscaler.go:49] Scaling Namespace: kube-system, Target: deployment/coredns\nI0804 23:05:14.306958       1 autoscaler_server.go:157] ConfigMap not found: configmaps \"coredns-autoscaler\" not found, will create one with default params\nI0804 23:05:14.309022       1 k8sclient.go:147] Created ConfigMap coredns-autoscaler in namespace kube-system\nI0804 23:05:14.309037       1 plugin.go:50] Set control mode to linear\nI0804 23:05:14.309043       1 linear_controller.go:60] ConfigMap version change (old:  new: 741) - rebuilding params\nI0804 23:05:14.309048       1 linear_controller.go:61] Params from apiserver: \n{\"coresPerReplica\":256,\"nodesPerReplica\":16,\"preventSinglePointFailure\":true}\nI0804 23:05:14.309109       1 linear_controller.go:80] Defaulting min replicas count to 1 for linear controller\nI0804 23:05:14.310913       1 k8sclient.go:272] Cluster status: SchedulableNodes[5], SchedulableCores[10]\nI0804 23:05:14.310927       1 k8sclient.go:273] Replicas are not as expected : updating replicas from 1 to 2\n==== END logs for container autoscaler of pod kube-system/coredns-autoscaler-84d4cfd89c-2g6cx ====\n==== START logs for container dns-controller of pod kube-system/dns-controller-7f4474bbb-jv7hm ====\nI0804 23:03:35.440510       1 main.go:199] initializing the watch controllers, namespace: \"\"\nI0804 23:03:35.440557       1 main.go:223] Ingress controller disabled\nI0804 23:03:35.440583       1 dnscontroller.go:108] starting DNS controller\nI0804 23:03:35.440638       1 pod.go:60] starting pod controller\nI0804 23:03:35.441194       1 node.go:60] starting node controller\nI0804 23:03:35.445381       1 service.go:60] starting service controller\nI0804 23:03:35.446565       1 dnscontroller.go:170] scope not yet ready: node\ndns-controller version 0.1\nI0804 23:03:35.462639       1 dnscontroller.go:625] Update desired state: node/ip-172-20-63-249.eu-west-2.compute.internal: [{A node/ip-172-20-63-249.eu-west-2.compute.internal/internal 172.20.63.249 true} {A node/ip-172-20-63-249.eu-west-2.compute.internal/external 35.177.51.115 true} {A node/role=master/internal 172.20.63.249 true} {A node/role=master/external 35.177.51.115 true} {A node/role=master/ ip-172-20-63-249.eu-west-2.compute.internal true} {A node/role=master/ ip-172-20-63-249.eu-west-2.compute.internal true} {A node/role=master/ ec2-35-177-51-115.eu-west-2.compute.amazonaws.com true}]\nI0804 23:03:35.467481       1 dnscontroller.go:625] Update desired state: pod/kube-system/kops-controller-9n9bw: [{A kops-controller.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io. 
==== START logs for container dns-controller of pod kube-system/dns-controller-7f4474bbb-jv7hm ====
I0804 23:03:35.440510       1 main.go:199] initializing the watch controllers, namespace: ""
I0804 23:03:35.440557       1 main.go:223] Ingress controller disabled
I0804 23:03:35.440583       1 dnscontroller.go:108] starting DNS controller
I0804 23:03:35.440638       1 pod.go:60] starting pod controller
I0804 23:03:35.441194       1 node.go:60] starting node controller
I0804 23:03:35.445381       1 service.go:60] starting service controller
I0804 23:03:35.446565       1 dnscontroller.go:170] scope not yet ready: node
dns-controller version 0.1
I0804 23:03:35.462639       1 dnscontroller.go:625] Update desired state: node/ip-172-20-63-249.eu-west-2.compute.internal: [{A node/ip-172-20-63-249.eu-west-2.compute.internal/internal 172.20.63.249 true} {A node/ip-172-20-63-249.eu-west-2.compute.internal/external 35.177.51.115 true} {A node/role=master/internal 172.20.63.249 true} {A node/role=master/external 35.177.51.115 true} {A node/role=master/ ip-172-20-63-249.eu-west-2.compute.internal true} {A node/role=master/ ip-172-20-63-249.eu-west-2.compute.internal true} {A node/role=master/ ec2-35-177-51-115.eu-west-2.compute.amazonaws.com true}]
I0804 23:03:35.467481       1 dnscontroller.go:625] Update desired state: pod/kube-system/kops-controller-9n9bw: [{A kops-controller.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io. 172.20.63.249 false}]
I0804 23:03:40.447196       1 dnscache.go:74] querying all DNS zones (no cached results)
I0804 23:03:40.885974       1 dnscontroller.go:274] Using default TTL of 1m0s
I0804 23:03:40.886000       1 dnscontroller.go:482] Querying all dnsprovider records for zone "test-cncf-aws.k8s.io."
I0804 23:03:43.055186       1 dnscontroller.go:585] Adding DNS changes to batch {A kops-controller.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io.} [172.20.63.249]
I0804 23:03:43.055222       1 dnscontroller.go:323] Applying DNS changeset for zone test-cncf-aws.k8s.io.::ZEMLNXIIWQ0RV
I0804 23:04:15.494728       1 dnscontroller.go:625] Update desired state: pod/kube-system/kube-apiserver-ip-172-20-63-249.eu-west-2.compute.internal: [{_alias api.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io. node/ip-172-20-63-249.eu-west-2.compute.internal/external false}]
I0804 23:04:18.290762       1 dnscontroller.go:274] Using default TTL of 1m0s
I0804 23:04:18.290790       1 dnscontroller.go:482] Querying all dnsprovider records for zone "test-cncf-aws.k8s.io."
I0804 23:04:20.594551       1 dnscontroller.go:585] Adding DNS changes to batch {A api.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io.} [35.177.51.115]
I0804 23:04:20.594584       1 dnscontroller.go:323] Applying DNS changeset for zone test-cncf-aws.k8s.io.::ZEMLNXIIWQ0RV
I0804 23:04:23.518396       1 dnscontroller.go:625] Update desired state: pod/kube-system/kube-apiserver-ip-172-20-63-249.eu-west-2.compute.internal: [{_alias api.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io. node/ip-172-20-63-249.eu-west-2.compute.internal/external false} {A api.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io. 172.20.63.249 false}]
I0804 23:04:25.837833       1 dnscontroller.go:274] Using default TTL of 1m0s
I0804 23:04:25.837896       1 dnscontroller.go:482] Querying all dnsprovider records for zone "test-cncf-aws.k8s.io."
I0804 23:04:27.839386       1 dnscontroller.go:585] Adding DNS changes to batch {A api.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io.} [172.20.63.249]
I0804 23:04:27.839423       1 dnscontroller.go:323] Applying DNS changeset for zone test-cncf-aws.k8s.io.::ZEMLNXIIWQ0RV
I0804 23:05:00.134595       1 dnscontroller.go:625] Update desired state: node/ip-172-20-46-233.eu-west-2.compute.internal: [{A node/ip-172-20-46-233.eu-west-2.compute.internal/internal 172.20.46.233 true} {A node/ip-172-20-46-233.eu-west-2.compute.internal/external 18.170.38.47 true} {A node/role=node/internal 172.20.46.233 true} {A node/role=node/external 18.170.38.47 true} {A node/role=node/ ip-172-20-46-233.eu-west-2.compute.internal true} {A node/role=node/ ip-172-20-46-233.eu-west-2.compute.internal true} {A node/role=node/ ec2-18-170-38-47.eu-west-2.compute.amazonaws.com true}]
I0804 23:05:00.176460       1 dnscontroller.go:625] Update desired state: node/ip-172-20-61-222.eu-west-2.compute.internal: [{A node/ip-172-20-61-222.eu-west-2.compute.internal/internal 172.20.61.222 true} {A node/ip-172-20-61-222.eu-west-2.compute.internal/external 18.130.75.27 true} {A node/role=node/internal 172.20.61.222 true} {A node/role=node/external 18.130.75.27 true} {A node/role=node/ ip-172-20-61-222.eu-west-2.compute.internal true} {A node/role=node/ ip-172-20-61-222.eu-west-2.compute.internal true} {A node/role=node/ ec2-18-130-75-27.eu-west-2.compute.amazonaws.com true}]
I0804 23:05:00.187065       1 dnscontroller.go:625] Update desired state: node/ip-172-20-45-94.eu-west-2.compute.internal: [{A node/ip-172-20-45-94.eu-west-2.compute.internal/internal 172.20.45.94 true} {A node/ip-172-20-45-94.eu-west-2.compute.internal/external 18.170.74.80 true} {A node/role=node/internal 172.20.45.94 true} {A node/role=node/external 18.170.74.80 true} {A node/role=node/ ip-172-20-45-94.eu-west-2.compute.internal true} {A node/role=node/ ip-172-20-45-94.eu-west-2.compute.internal true} {A node/role=node/ ec2-18-170-74-80.eu-west-2.compute.amazonaws.com true}]
I0804 23:05:00.282757       1 dnscontroller.go:625] Update desired state: node/ip-172-20-63-4.eu-west-2.compute.internal: [{A node/ip-172-20-63-4.eu-west-2.compute.internal/internal 172.20.63.4 true} {A node/ip-172-20-63-4.eu-west-2.compute.internal/external 18.132.37.241 true} {A node/role=node/internal 172.20.63.4 true} {A node/role=node/external 18.132.37.241 true} {A node/role=node/ ip-172-20-63-4.eu-west-2.compute.internal true} {A node/role=node/ ip-172-20-63-4.eu-west-2.compute.internal true} {A node/role=node/ ec2-18-132-37-241.eu-west-2.compute.amazonaws.com true}]
==== END logs for container dns-controller of pod kube-system/dns-controller-7f4474bbb-jv7hm ====
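The dns-controller entries above all follow one reconcile pattern: each watched object (node, pod) contributes a set of desired records, the controller periodically queries the zone, and the differences are batched and applied as a single changeset. A rough Go sketch of the diff step, with illustrative types (not kops' actual API):

package main

import "fmt"

// record is an illustrative stand-in for a desired DNS record.
type record struct {
	Type  string // "A" or "_alias"
	FQDN  string
	Value string
}

// reconcile diffs desired records against the zone's current contents and
// returns the batch of changes to apply ("Adding DNS changes to batch").
func reconcile(desired, actual map[string]record) []record {
	var batch []record
	for fqdn, want := range desired {
		if got, ok := actual[fqdn]; !ok || got != want {
			batch = append(batch, want)
		}
	}
	return batch
}

func main() {
	desired := map[string]record{
		"api.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io.": {
			Type: "A", FQDN: "api.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io.", Value: "35.177.51.115",
		},
	}
	// An empty zone forces one change, applied as a single changeset per zone.
	fmt.Printf("Applying DNS changeset: %v\n", reconcile(desired, nil))
}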
==== START logs for container etcd-manager of pod kube-system/etcd-manager-events-ip-172-20-63-249.eu-west-2.compute.internal ====
etcd-manager
I0804 23:02:53.654231    4328 volumes.go:86] AWS API Request: ec2metadata/GetToken
I0804 23:02:53.655062    4328 volumes.go:86] AWS API Request: ec2metadata/GetDynamicData
I0804 23:02:53.655670    4328 volumes.go:86] AWS API Request: ec2metadata/GetMetadata
I0804 23:02:53.656168    4328 volumes.go:86] AWS API Request: ec2metadata/GetMetadata
I0804 23:02:53.656586    4328 volumes.go:86] AWS API Request: ec2metadata/GetMetadata
I0804 23:02:53.657051    4328 main.go:305] Mounting available etcd volumes matching tags [k8s.io/etcd/events k8s.io/role/master=1 kubernetes.io/cluster/e2e-489af555f9-bbb74.test-cncf-aws.k8s.io=owned]; nameTag=k8s.io/etcd/events
I0804 23:02:53.658305    4328 volumes.go:86] AWS API Request: ec2/DescribeVolumes
I0804 23:02:53.792856    4328 mounter.go:304] Trying to mount master volume: "vol-0801a5b191af51ee4"
I0804 23:02:53.792884    4328 volumes.go:331] Trying to attach volume "vol-0801a5b191af51ee4" at "/dev/xvdu"
I0804 23:02:53.793022    4328 volumes.go:86] AWS API Request: ec2/AttachVolume
W0804 23:02:54.102146    4328 volumes.go:343] Invalid value '/dev/xvdu' for unixDevice. Attachment point /dev/xvdu is already in use
I0804 23:02:54.102171    4328 volumes.go:331] Trying to attach volume "vol-0801a5b191af51ee4" at "/dev/xvdv"
I0804 23:02:54.102341    4328 volumes.go:86] AWS API Request: ec2/AttachVolume
I0804 23:02:54.501924    4328 volumes.go:349] AttachVolume request returned {
  AttachTime: 2021-08-04 23:02:54.491 +0000 UTC,
  Device: "/dev/xvdv",
  InstanceId: "i-0d3e140994a889fb4",
  State: "attaching",
  VolumeId: "vol-0801a5b191af51ee4"
}
I0804 23:02:54.502114    4328 volumes.go:86] AWS API Request: ec2/DescribeVolumes
I0804 23:02:54.618852    4328 mounter.go:318] Currently attached volumes: [0xc00007f580]
I0804 23:02:54.618878    4328 mounter.go:72] Master volume "vol-0801a5b191af51ee4" is attached at "/dev/xvdv"
I0804 23:02:54.618894    4328 mounter.go:86] Doing safe-format-and-mount of /dev/xvdv to /mnt/master-vol-0801a5b191af51ee4
I0804 23:02:54.618908    4328 volumes.go:234] volume vol-0801a5b191af51ee4 not mounted at /rootfs/dev/xvdv
I0804 23:02:54.618940    4328 volumes.go:263] nvme path not found "/rootfs/dev/disk/by-id/nvme-Amazon_Elastic_Block_Store_vol0801a5b191af51ee4"
I0804 23:02:54.618962    4328 volumes.go:251] volume vol-0801a5b191af51ee4 not mounted at nvme-Amazon_Elastic_Block_Store_vol0801a5b191af51ee4
I0804 23:02:54.618977    4328 mounter.go:121] Waiting for volume "vol-0801a5b191af51ee4" to be mounted
I0804 23:02:55.619062    4328 volumes.go:234] volume vol-0801a5b191af51ee4 not mounted at /rootfs/dev/xvdv
I0804 23:02:55.619105    4328 volumes.go:248] found nvme volume "nvme-Amazon_Elastic_Block_Store_vol0801a5b191af51ee4" at "/dev/nvme2n1"
I0804 23:02:55.619114    4328 mounter.go:125] Found volume "vol-0801a5b191af51ee4" mounted at device "/dev/nvme2n1"
I0804 23:02:55.619710    4328 mounter.go:171] Creating mount directory "/rootfs/mnt/master-vol-0801a5b191af51ee4"
I0804 23:02:55.619769    4328 mounter.go:176] Mounting device "/dev/nvme2n1" on "/mnt/master-vol-0801a5b191af51ee4"
I0804 23:02:55.619777    4328 mount_linux.go:446] Attempting to determine if disk "/dev/nvme2n1" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/nvme2n1])
I0804 23:02:55.619795    4328 nsenter.go:132] Running nsenter command: nsenter [--mount=/rootfs/proc/1/ns/mnt -- blkid -p -s TYPE -s PTTYPE -o export /dev/nvme2n1]
I0804 23:02:55.635258    4328 mount_linux.go:449] Output: ""
I0804 23:02:55.635279    4328 mount_linux.go:408] Disk "/dev/nvme2n1" appears to be unformatted, attempting to format as type: "ext4" with options: [-F -m0 /dev/nvme2n1]
I0804 23:02:55.635295    4328 nsenter.go:132] Running nsenter command: nsenter [--mount=/rootfs/proc/1/ns/mnt -- mkfs.ext4 -F -m0 /dev/nvme2n1]
I0804 23:02:55.981917    4328 mount_linux.go:418] Disk successfully formatted (mkfs): ext4 - /dev/nvme2n1 /mnt/master-vol-0801a5b191af51ee4
I0804 23:02:55.981943    4328 mount_linux.go:436] Attempting to mount disk /dev/nvme2n1 in ext4 format at /mnt/master-vol-0801a5b191af51ee4
I0804 23:02:55.981958    4328 nsenter.go:80] nsenter mount /dev/nvme2n1 /mnt/master-vol-0801a5b191af51ee4 ext4 [defaults]
I0804 23:02:55.981981    4328 nsenter.go:132] Running nsenter command: nsenter [--mount=/rootfs/proc/1/ns/mnt -- /usr/bin/systemd-run --description=Kubernetes transient mount for /mnt/master-vol-0801a5b191af51ee4 --scope -- /bin/mount -t ext4 -o defaults /dev/nvme2n1 /mnt/master-vol-0801a5b191af51ee4]
I0804 23:02:56.007082    4328 nsenter.go:84] Output of mounting /dev/nvme2n1 to /mnt/master-vol-0801a5b191af51ee4: Running scope as unit: run-r30e3a78b3de14747b85c6e1ad454279b.scope
I0804 23:02:56.007112    4328 mount_linux.go:446] Attempting to determine if disk "/dev/nvme2n1" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/nvme2n1])
I0804 23:02:56.007134    4328 nsenter.go:132] Running nsenter command: nsenter [--mount=/rootfs/proc/1/ns/mnt -- blkid -p -s TYPE -s PTTYPE -o export /dev/nvme2n1]
I0804 23:02:56.018347    4328 mount_linux.go:449] Output: "DEVNAME=/dev/nvme2n1\nTYPE=ext4\n"
I0804 23:02:56.018369    4328 resizefs_linux.go:53] ResizeFS.Resize - Expanding mounted volume /dev/nvme2n1
I0804 23:02:56.018381    4328 nsenter.go:132] Running nsenter command: nsenter [--mount=/rootfs/proc/1/ns/mnt -- resize2fs /dev/nvme2n1]
I0804 23:02:56.020012    4328 resizefs_linux.go:68] Device /dev/nvme2n1 resized successfully
I0804 23:02:56.028569    4328 mount_linux.go:206] Detected OS with systemd
I0804 23:02:56.029440    4328 mounter.go:262] device "/dev/nvme2n1" did not evaluate as a symlink: lstat /dev/nvme2n1: no such file or directory
I0804 23:02:56.029468    4328 mounter.go:262] device "/dev/nvme2n1" did not evaluate as a symlink: lstat /dev/nvme2n1: no such file or directory
I0804 23:02:56.029475    4328 mounter.go:242] matched device "/dev/nvme2n1" and "/dev/nvme2n1" via '\x00'
I0804 23:02:56.029484    4328 mounter.go:94] mounted master volume "vol-0801a5b191af51ee4" on /mnt/master-vol-0801a5b191af51ee4
I0804 23:02:56.029495    4328 main.go:320] discovered IP address: 172.20.63.249
I0804 23:02:56.029500    4328 main.go:325] Setting data dir to /rootfs/mnt/master-vol-0801a5b191af51ee4
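The mount sequence above is the usual safe-format-and-mount dance: attach (falling back from /dev/xvdu to /dev/xvdv when the device letter is taken), probe with blkid, mkfs.ext4 only if the device is unformatted, mount, then resize2fs. A compressed Go sketch assuming direct command execution (the real code shells out through nsenter into the host mount namespace):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func run(name string, args ...string) (string, error) {
	out, err := exec.Command(name, args...).CombinedOutput()
	return strings.TrimSpace(string(out)), err
}

// safeFormatAndMount formats the device only when blkid reports no
// filesystem, then mounts it and grows the filesystem to the volume size.
func safeFormatAndMount(device, mountpoint string) error {
	// blkid prints nothing for an unformatted disk (the mount helper treats
	// its non-zero exit as "unformatted"), so the error is deliberately ignored.
	fsType, _ := run("blkid", "-p", "-s", "TYPE", "-o", "export", device)
	if fsType == "" {
		if _, err := run("mkfs.ext4", "-F", "-m0", device); err != nil {
			return fmt.Errorf("mkfs %s: %w", device, err)
		}
	}
	if _, err := run("mount", "-t", "ext4", "-o", "defaults", device, mountpoint); err != nil {
		return fmt.Errorf("mount %s: %w", device, err)
	}
	// Harmless on a fresh filesystem; expands it after any volume resize.
	_, err := run("resize2fs", device)
	return err
}

func main() {
	fmt.Println(safeFormatAndMount("/dev/nvme2n1", "/mnt/master-vol-0801a5b191af51ee4"))
}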
I0804 23:02:56.341222    4328 certs.go:183] generating certificate for "etcd-manager-server-etcd-events-a"
I0804 23:02:56.469519    4328 certs.go:183] generating certificate for "etcd-manager-client-etcd-events-a"
I0804 23:02:56.471581    4328 server.go:87] starting GRPC server using TLS, ServerName="etcd-manager-server-etcd-events-a"
I0804 23:02:56.472080    4328 main.go:474] peerClientIPs: [172.20.63.249]
I0804 23:02:56.658098    4328 certs.go:183] generating certificate for "etcd-manager-etcd-events-a"
I0804 23:02:56.659938    4328 server.go:105] GRPC server listening on "172.20.63.249:3997"
I0804 23:02:56.660266    4328 volumes.go:86] AWS API Request: ec2/DescribeVolumes
I0804 23:02:56.761271    4328 volumes.go:86] AWS API Request: ec2/DescribeInstances
I0804 23:02:56.802197    4328 peers.go:115] found new candidate peer from discovery: etcd-events-a [{172.20.63.249 0} {172.20.63.249 0}]
I0804 23:02:56.802250    4328 hosts.go:84] hosts update: primary=map[], fallbacks=map[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:[172.20.63.249 172.20.63.249]], final=map[172.20.63.249:[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]]
I0804 23:02:56.802436    4328 peers.go:295] connecting to peer "etcd-events-a" with TLS policy, servername="etcd-manager-server-etcd-events-a"
I0804 23:02:58.660801    4328 controller.go:189] starting controller iteration
I0804 23:02:58.661185    4328 leadership.go:37] Got LeaderNotification view:<leader:<id:"etcd-events-a" endpoints:"172.20.63.249:3997" > leadership_token:"3vB3uqeYR07YTrYh3fWwsg" healthy:<id:"etcd-events-a" endpoints:"172.20.63.249:3997" > > 
I0804 23:02:58.661371    4328 commands.go:41] refreshing commands
I0804 23:02:58.661462    4328 s3context.go:334] product_uuid is "ec26f7f8-a1b6-ff3f-7647-c086e89ccefd", assuming running on EC2
I0804 23:02:58.662747    4328 s3context.go:166] got region from metadata: "eu-west-2"
I0804 23:02:58.688981    4328 s3context.go:213] found bucket in region "us-west-1"
I0804 23:02:59.332640    4328 vfs.go:120] listed commands in s3://k8s-kops-prow/e2e-489af555f9-bbb74.test-cncf-aws.k8s.io/backups/etcd/events/control: 0 commands
I0804 23:02:59.332668    4328 s3fs.go:290] Reading file "s3://k8s-kops-prow/e2e-489af555f9-bbb74.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-spec"
I0804 23:03:09.484734    4328 controller.go:189] starting controller iteration
I0804 23:03:09.484774    4328 controller.go:266] Broadcasting leadership assertion with token "3vB3uqeYR07YTrYh3fWwsg"
I0804 23:03:09.485024    4328 leadership.go:37] Got LeaderNotification view:<leader:<id:"etcd-events-a" endpoints:"172.20.63.249:3997" > leadership_token:"3vB3uqeYR07YTrYh3fWwsg" healthy:<id:"etcd-events-a" endpoints:"172.20.63.249:3997" > > 
I0804 23:03:09.485144    4328 controller.go:295] I am leader with token "3vB3uqeYR07YTrYh3fWwsg"
I0804 23:03:09.485404    4328 controller.go:302] etcd cluster state: etcdClusterState
  members:
  peers:
    etcdClusterPeerInfo{peer=peer{id:"etcd-events-a" endpoints:"172.20.63.249:3997" }, info=cluster_name:"etcd-events" node_configuration:<name:"etcd-events-a" peer_urls:"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381" client_urls:"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002" quarantined_client_urls:"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:3995" > }
I0804 23:03:09.485463    4328 controller.go:303] etcd cluster members: map[]
I0804 23:03:09.485473    4328 controller.go:641] sending member map to all peers: 
I0804 23:03:09.485660    4328 commands.go:38] not refreshing commands - TTL not hit
I0804 23:03:09.485672    4328 s3fs.go:290] Reading file "s3://k8s-kops-prow/e2e-489af555f9-bbb74.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created"
I0804 23:03:10.052604    4328 controller.go:359] detected that there is no existing cluster
I0804 23:03:10.052627    4328 commands.go:41] refreshing commands
I0804 23:03:10.270869    4328 vfs.go:120] listed commands in s3://k8s-kops-prow/e2e-489af555f9-bbb74.test-cncf-aws.k8s.io/backups/etcd/events/control: 0 commands
I0804 23:03:10.270893    4328 s3fs.go:290] Reading file "s3://k8s-kops-prow/e2e-489af555f9-bbb74.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-spec"
I0804 23:03:10.416800    4328 controller.go:641] sending member map to all peers: members:<name:"etcd-events-a" dns:"etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io" addresses:"172.20.63.249" > 
I0804 23:03:10.417085    4328 etcdserver.go:248] updating hosts: map[172.20.63.249:[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]]
I0804 23:03:10.417113    4328 hosts.go:84] hosts update: primary=map[172.20.63.249:[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:[172.20.63.249 172.20.63.249]], final=map[172.20.63.249:[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]]
I0804 23:03:10.417255    4328 hosts.go:181] skipping update of unchanged /etc/hosts
I0804 23:03:10.417406    4328 newcluster.go:136] starting new etcd cluster with [etcdClusterPeerInfo{peer=peer{id:"etcd-events-a" endpoints:"172.20.63.249:3997" }, info=cluster_name:"etcd-events" node_configuration:<name:"etcd-events-a" peer_urls:"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381" client_urls:"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002" quarantined_client_urls:"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:3995" > }]
I0804 23:03:10.417807    4328 newcluster.go:153] JoinClusterResponse: 
I0804 23:03:10.418329    4328 etcdserver.go:556] starting etcd with state new_cluster:true cluster:<cluster_token:"nSTwYTz1pI5mSBHO3AhxXA" nodes:<name:"etcd-events-a" peer_urls:"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381" client_urls:"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002" quarantined_client_urls:"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:3995" tls_enabled:true > > etcd_version:"3.4.13" quarantined:true 
I0804 23:03:10.418371    4328 etcdserver.go:565] starting etcd with datadir /rootfs/mnt/master-vol-0801a5b191af51ee4/data/nSTwYTz1pI5mSBHO3AhxXA
I0804 23:03:10.419045    4328 pki.go:59] adding peerClientIPs [172.20.63.249]
I0804 23:03:10.419077    4328 pki.go:67] generating peer keypair for etcd: {CommonName:etcd-events-a Organization:[] AltNames:{DNSNames:[etcd-events-a etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io] IPs:[172.20.63.249 127.0.0.1]} Usages:[2 1]}
I0804 23:03:10.556416    4328 certs.go:183] generating certificate for "etcd-events-a"
I0804 23:03:10.558386    4328 pki.go:110] building client-serving certificate: {CommonName:etcd-events-a Organization:[] AltNames:{DNSNames:[etcd-events-a etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io] IPs:[127.0.0.1]} Usages:[1 2]}
I0804 23:03:10.673588    4328 certs.go:183] generating certificate for "etcd-events-a"
I0804 23:03:10.916430    4328 certs.go:183] generating certificate for "etcd-events-a"
I0804 23:03:10.919642    4328 etcdprocess.go:203] executing command /opt/etcd-v3.4.13-linux-amd64/etcd [/opt/etcd-v3.4.13-linux-amd64/etcd]
I0804 23:03:10.920231    4328 newcluster.go:171] JoinClusterResponse: 
I0804 23:03:10.920372    4328 s3fs.go:199] Writing file "s3://k8s-kops-prow/e2e-489af555f9-bbb74.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-spec"
I0804 23:03:10.920470    4328 s3context.go:241] Checking default bucket encryption for "k8s-kops-prow"
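The bootstrap decision above hinges on one marker object: the leader lists backups and commands under the S3 control prefix and reads etcd-cluster-created; only when that marker is absent does it declare "no existing cluster" and create one. A small sketch of that check, with a local file standing in for the S3 object and an illustrative function name:

package main

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
)

// clusterCreated reports whether the etcd-cluster-created marker exists; the
// real controller reads it from s3://.../backups/etcd/events/control/.
func clusterCreated(path string) (bool, error) {
	if _, err := os.Stat(path); err != nil {
		if errors.Is(err, fs.ErrNotExist) {
			return false, nil
		}
		return false, err
	}
	return true, nil
}

func main() {
	created, err := clusterCreated("control/etcd-cluster-created")
	if err != nil {
		panic(err)
	}
	if !created {
		// "detected that there is no existing cluster": create it quarantined,
		// then persist the etcd-cluster-spec and etcd-cluster-created markers.
		fmt.Println("starting new etcd cluster (quarantined)")
	}
}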
2021-08-04 23:03:10.926096 I | pkg/flags: recognized and used environment variable ETCD_ADVERTISE_CLIENT_URLS=https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:3995
2021-08-04 23:03:10.926145 I | pkg/flags: recognized and used environment variable ETCD_CERT_FILE=/rootfs/mnt/master-vol-0801a5b191af51ee4/pki/nSTwYTz1pI5mSBHO3AhxXA/clients/server.crt
2021-08-04 23:03:10.926153 I | pkg/flags: recognized and used environment variable ETCD_CLIENT_CERT_AUTH=true
2021-08-04 23:03:10.926164 I | pkg/flags: recognized and used environment variable ETCD_DATA_DIR=/rootfs/mnt/master-vol-0801a5b191af51ee4/data/nSTwYTz1pI5mSBHO3AhxXA
2021-08-04 23:03:10.926177 I | pkg/flags: recognized and used environment variable ETCD_ENABLE_V2=false
2021-08-04 23:03:10.926203 I | pkg/flags: recognized and used environment variable ETCD_INITIAL_ADVERTISE_PEER_URLS=https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381
2021-08-04 23:03:10.926208 I | pkg/flags: recognized and used environment variable ETCD_INITIAL_CLUSTER=etcd-events-a=https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381
2021-08-04 23:03:10.926212 I | pkg/flags: recognized and used environment variable ETCD_INITIAL_CLUSTER_STATE=new
2021-08-04 23:03:10.926218 I | pkg/flags: recognized and used environment variable ETCD_INITIAL_CLUSTER_TOKEN=nSTwYTz1pI5mSBHO3AhxXA
2021-08-04 23:03:10.926223 I | pkg/flags: recognized and used environment variable ETCD_KEY_FILE=/rootfs/mnt/master-vol-0801a5b191af51ee4/pki/nSTwYTz1pI5mSBHO3AhxXA/clients/server.key
2021-08-04 23:03:10.926230 I | pkg/flags: recognized and used environment variable ETCD_LISTEN_CLIENT_URLS=https://0.0.0.0:3995
2021-08-04 23:03:10.926237 I | pkg/flags: recognized and used environment variable ETCD_LISTEN_PEER_URLS=https://0.0.0.0:2381
2021-08-04 23:03:10.926243 I | pkg/flags: recognized and used environment variable ETCD_LOG_OUTPUTS=stdout
2021-08-04 23:03:10.926255 I | pkg/flags: recognized and used environment variable ETCD_LOGGER=zap
2021-08-04 23:03:10.926267 I | pkg/flags: recognized and used environment variable ETCD_NAME=etcd-events-a
2021-08-04 23:03:10.926274 I | pkg/flags: recognized and used environment variable ETCD_PEER_CERT_FILE=/rootfs/mnt/master-vol-0801a5b191af51ee4/pki/nSTwYTz1pI5mSBHO3AhxXA/peers/me.crt
2021-08-04 23:03:10.926279 I | pkg/flags: recognized and used environment variable ETCD_PEER_CLIENT_CERT_AUTH=true
2021-08-04 23:03:10.926285 I | pkg/flags: recognized and used environment variable ETCD_PEER_KEY_FILE=/rootfs/mnt/master-vol-0801a5b191af51ee4/pki/nSTwYTz1pI5mSBHO3AhxXA/peers/me.key
2021-08-04 23:03:10.926289 I | pkg/flags: recognized and used environment variable ETCD_PEER_TRUSTED_CA_FILE=/rootfs/mnt/master-vol-0801a5b191af51ee4/pki/nSTwYTz1pI5mSBHO3AhxXA/peers/ca.crt
2021-08-04 23:03:10.926305 I | pkg/flags: recognized and used environment variable ETCD_TRUSTED_CA_FILE=/rootfs/mnt/master-vol-0801a5b191af51ee4/pki/nSTwYTz1pI5mSBHO3AhxXA/clients/ca.crt
2021-08-04 23:03:10.926315 W | pkg/flags: unrecognized environment variable ETCD_LISTEN_METRICS_URLS=
{"level":"info","ts":"2021-08-04T23:03:10.926Z","caller":"embed/etcd.go:117","msg":"configuring peer listeners","listen-peer-urls":["https://0.0.0.0:2381"]}
{"level":"info","ts":"2021-08-04T23:03:10.926Z","caller":"embed/etcd.go:468","msg":"starting with peer TLS","tls-info":"cert = /rootfs/mnt/master-vol-0801a5b191af51ee4/pki/nSTwYTz1pI5mSBHO3AhxXA/peers/me.crt, key = /rootfs/mnt/master-vol-0801a5b191af51ee4/pki/nSTwYTz1pI5mSBHO3AhxXA/peers/me.key, trusted-ca = /rootfs/mnt/master-vol-0801a5b191af51ee4/pki/nSTwYTz1pI5mSBHO3AhxXA/peers/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"info","ts":"2021-08-04T23:03:10.926Z","caller":"embed/etcd.go:127","msg":"configuring client listeners","listen-client-urls":["https://0.0.0.0:3995"]}
{"level":"info","ts":"2021-08-04T23:03:10.926Z","caller":"embed/etcd.go:302","msg":"starting an etcd server","etcd-version":"3.4.13","git-sha":"ae9734ed2","go-version":"go1.12.17","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":false,"name":"etcd-events-a","data-dir":"/rootfs/mnt/master-vol-0801a5b191af51ee4/data/nSTwYTz1pI5mSBHO3AhxXA","wal-dir":"","wal-dir-dedicated":"","member-dir":"/rootfs/mnt/master-vol-0801a5b191af51ee4/data/nSTwYTz1pI5mSBHO3AhxXA/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":100000,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381"],"listen-peer-urls":["https://0.0.0.0:2381"],"advertise-client-urls":["https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:3995"],"listen-client-urls":["https://0.0.0.0:3995"],"listen-metrics-urls":[],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"etcd-events-a=https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381","initial-cluster-state":"new","initial-cluster-token":"nSTwYTz1pI5mSBHO3AhxXA","quota-size-bytes":2147483648,"pre-vote":false,"initial-corrupt-check":false,"corrupt-check-time-interval":"0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":""}
{"level":"info","ts":"2021-08-04T23:03:10.930Z","caller":"etcdserver/backend.go:80","msg":"opened backend db","path":"/rootfs/mnt/master-vol-0801a5b191af51ee4/data/nSTwYTz1pI5mSBHO3AhxXA/member/snap/db","took":"2.90913ms"}
{"level":"info","ts":"2021-08-04T23:03:10.931Z","caller":"netutil/netutil.go:112","msg":"resolved URL Host","url":"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381","host":"etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381","resolved-addr":"172.20.63.249:2381"}
{"level":"info","ts":"2021-08-04T23:03:10.931Z","caller":"netutil/netutil.go:112","msg":"resolved URL Host","url":"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381","host":"etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381","resolved-addr":"172.20.63.249:2381"}
{"level":"info","ts":"2021-08-04T23:03:10.935Z","caller":"etcdserver/raft.go:486","msg":"starting local member","local-member-id":"281e897f2714877e","cluster-id":"96bf50b6f84e10ba"}
{"level":"info","ts":"2021-08-04T23:03:10.936Z","caller":"raft/raft.go:1530","msg":"281e897f2714877e switched to configuration voters=()"}
{"level":"info","ts":"2021-08-04T23:03:10.936Z","caller":"raft/raft.go:700","msg":"281e897f2714877e became follower at term 0"}
{"level":"info","ts":"2021-08-04T23:03:10.936Z","caller":"raft/raft.go:383","msg":"newRaft 281e897f2714877e [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]"}
{"level":"info","ts":"2021-08-04T23:03:10.936Z","caller":"raft/raft.go:700","msg":"281e897f2714877e became follower at term 1"}
{"level":"info","ts":"2021-08-04T23:03:10.936Z","caller":"raft/raft.go:1530","msg":"281e897f2714877e switched to configuration voters=(2890899190027945854)"}
{"level":"warn","ts":"2021-08-04T23:03:10.939Z","caller":"auth/store.go:1366","msg":"simple token is not cryptographically signed"}
{"level":"info","ts":"2021-08-04T23:03:10.944Z","caller":"etcdserver/quota.go:98","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
{"level":"info","ts":"2021-08-04T23:03:10.946Z","caller":"etcdserver/server.go:803","msg":"starting etcd server","local-member-id":"281e897f2714877e","local-server-version":"3.4.13","cluster-version":"to_be_decided"}
{"level":"info","ts":"2021-08-04T23:03:10.948Z","caller":"embed/etcd.go:711","msg":"starting with client TLS","tls-info":"cert = /rootfs/mnt/master-vol-0801a5b191af51ee4/pki/nSTwYTz1pI5mSBHO3AhxXA/clients/server.crt, key = /rootfs/mnt/master-vol-0801a5b191af51ee4/pki/nSTwYTz1pI5mSBHO3AhxXA/clients/server.key, trusted-ca = /rootfs/mnt/master-vol-0801a5b191af51ee4/pki/nSTwYTz1pI5mSBHO3AhxXA/clients/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"info","ts":"2021-08-04T23:03:10.948Z","caller":"embed/etcd.go:244","msg":"now serving peer/client/metrics","local-member-id":"281e897f2714877e","initial-advertise-peer-urls":["https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381"],"listen-peer-urls":["https://0.0.0.0:2381"],"advertise-client-urls":["https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:3995"],"listen-client-urls":["https://0.0.0.0:3995"],"listen-metrics-urls":[]}
{"level":"info","ts":"2021-08-04T23:03:10.948Z","caller":"etcdserver/server.go:669","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"281e897f2714877e","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
{"level":"info","ts":"2021-08-04T23:03:10.948Z","caller":"embed/etcd.go:579","msg":"serving peer traffic","address":"[::]:2381"}
{"level":"info","ts":"2021-08-04T23:03:10.949Z","caller":"raft/raft.go:1530","msg":"281e897f2714877e switched to configuration voters=(2890899190027945854)"}
{"level":"info","ts":"2021-08-04T23:03:10.949Z","caller":"membership/cluster.go:392","msg":"added member","cluster-id":"96bf50b6f84e10ba","local-member-id":"281e897f2714877e","added-peer-id":"281e897f2714877e","added-peer-peer-urls":["https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381"]}
I0804 23:03:11.219509    4328 s3fs.go:199] Writing file "s3://k8s-kops-prow/e2e-489af555f9-bbb74.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created"
I0804 23:03:11.407890    4328 controller.go:189] starting controller iteration
I0804 23:03:11.407920    4328 controller.go:266] Broadcasting leadership assertion with token "3vB3uqeYR07YTrYh3fWwsg"
I0804 23:03:11.408276    4328 leadership.go:37] Got LeaderNotification view:<leader:<id:"etcd-events-a" endpoints:"172.20.63.249:3997" > leadership_token:"3vB3uqeYR07YTrYh3fWwsg" healthy:<id:"etcd-events-a" endpoints:"172.20.63.249:3997" > > 
I0804 23:03:11.408392    4328 controller.go:295] I am leader with token "3vB3uqeYR07YTrYh3fWwsg"
I0804 23:03:11.409121    4328 controller.go:705] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:3995]
{"level":"info","ts":"2021-08-04T23:03:11.636Z","caller":"raft/raft.go:923","msg":"281e897f2714877e is starting a new election at term 1"}
{"level":"info","ts":"2021-08-04T23:03:11.636Z","caller":"raft/raft.go:713","msg":"281e897f2714877e became candidate at term 2"}
{"level":"info","ts":"2021-08-04T23:03:11.636Z","caller":"raft/raft.go:824","msg":"281e897f2714877e received MsgVoteResp from 281e897f2714877e at term 2"}
{"level":"info","ts":"2021-08-04T23:03:11.637Z","caller":"raft/raft.go:765","msg":"281e897f2714877e became leader at term 2"}
{"level":"info","ts":"2021-08-04T23:03:11.637Z","caller":"raft/node.go:325","msg":"raft.node: 281e897f2714877e elected leader 281e897f2714877e at term 2"}
{"level":"info","ts":"2021-08-04T23:03:11.637Z","caller":"etcdserver/server.go:2037","msg":"published local member to cluster through raft","local-member-id":"281e897f2714877e","local-member-attributes":"{Name:etcd-events-a ClientURLs:[https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:3995]}","request-path":"/0/members/281e897f2714877e/attributes","cluster-id":"96bf50b6f84e10ba","publish-timeout":"7s"}
{"level":"info","ts":"2021-08-04T23:03:11.637Z","caller":"etcdserver/server.go:2528","msg":"setting up initial cluster version","cluster-version":"3.4"}
{"level":"info","ts":"2021-08-04T23:03:11.638Z","caller":"embed/serve.go:191","msg":"serving client traffic securely","address":"[::]:3995"}
{"level":"info","ts":"2021-08-04T23:03:11.645Z","caller":"membership/cluster.go:558","msg":"set initial cluster version","cluster-id":"96bf50b6f84e10ba","local-member-id":"281e897f2714877e","cluster-version":"3.4"}
{"level":"info","ts":"2021-08-04T23:03:11.646Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.4"}
{"level":"info","ts":"2021-08-04T23:03:11.646Z","caller":"etcdserver/server.go:2560","msg":"cluster version is updated","cluster-version":"3.4"}
I0804 23:03:11.653317    4328 controller.go:302] etcd cluster state: etcdClusterState
  members:
    {"name":"etcd-events-a","peerURLs":["https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381"],"endpoints":["https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:3995"],"ID":"2890899190027945854"}
  peers:
    etcdClusterPeerInfo{peer=peer{id:"etcd-events-a" endpoints:"172.20.63.249:3997" }, info=cluster_name:"etcd-events" node_configuration:<name:"etcd-events-a" peer_urls:"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381" client_urls:"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002" quarantined_client_urls:"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:3995" > etcd_state:<cluster:<cluster_token:"nSTwYTz1pI5mSBHO3AhxXA" nodes:<name:"etcd-events-a" peer_urls:"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381" client_urls:"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002" quarantined_client_urls:"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:3995" tls_enabled:true > > etcd_version:"3.4.13" quarantined:true > }
I0804 23:03:11.653485    4328 controller.go:303] etcd cluster members: map[2890899190027945854:{"name":"etcd-events-a","peerURLs":["https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381"],"endpoints":["https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:3995"],"ID":"2890899190027945854"}]
I0804 23:03:11.653511    4328 controller.go:641] sending member map to all peers: members:<name:"etcd-events-a" dns:"etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io" addresses:"172.20.63.249" > 
I0804 23:03:11.653685    4328 etcdserver.go:248] updating hosts: map[172.20.63.249:[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]]
I0804 23:03:11.653698    4328 hosts.go:84] hosts update: primary=map[172.20.63.249:[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:[172.20.63.249 172.20.63.249]], final=map[172.20.63.249:[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]]
I0804 23:03:11.653751    4328 hosts.go:181] skipping update of unchanged /etc/hosts
I0804 23:03:11.653841    4328 commands.go:38] not refreshing commands - TTL not hit
I0804 23:03:11.653853    4328 s3fs.go:290] Reading file "s3://k8s-kops-prow/e2e-489af555f9-bbb74.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created"
I0804 23:03:11.802777    4328 controller.go:395] spec member_count:1 etcd_version:"3.4.13" 
I0804 23:03:11.803478    4328 backup.go:134] performing snapshot save to /tmp/181293852/snapshot.db.gz
{"level":"info","ts":"2021-08-04T23:03:11.810Z","caller":"clientv3/maintenance.go:200","msg":"opened snapshot stream; downloading"}
{"level":"info","ts":"2021-08-04T23:03:11.810Z","caller":"v3rpc/maintenance.go:139","msg":"sending database snapshot to client","total-bytes":20480,"size":"20 kB"}
{"level":"info","ts":"2021-08-04T23:03:11.810Z","caller":"v3rpc/maintenance.go:177","msg":"sending database sha256 checksum to client","total-bytes":20480,"checksum-size":32}
{"level":"info","ts":"2021-08-04T23:03:11.811Z","caller":"v3rpc/maintenance.go:191","msg":"successfully sent database snapshot to client","total-bytes":20480,"size":"20 kB","took":"now"}
{"level":"info","ts":"2021-08-04T23:03:11.811Z","caller":"clientv3/maintenance.go:208","msg":"completed snapshot read; closing"}
I0804 23:03:11.812008    4328 s3fs.go:199] Writing file "s3://k8s-kops-prow/e2e-489af555f9-bbb74.test-cncf-aws.k8s.io/backups/etcd/events/2021-08-04T23:03:11Z-000001/etcd.backup.gz"
I0804 23:03:11.990706    4328 s3fs.go:199] Writing file "s3://k8s-kops-prow/e2e-489af555f9-bbb74.test-cncf-aws.k8s.io/backups/etcd/events/2021-08-04T23:03:11Z-000001/_etcd_backup.meta"
I0804 23:03:12.146726    4328 backup.go:159] backup complete: name:"2021-08-04T23:03:11Z-000001" 
I0804 23:03:12.147248    4328 controller.go:937] backup response: name:"2021-08-04T23:03:11Z-000001" 
I0804 23:03:12.147265    4328 controller.go:576] took backup: name:"2021-08-04T23:03:11Z-000001" 
I0804 23:03:12.299413    4328 vfs.go:118] listed backups in s3://k8s-kops-prow/e2e-489af555f9-bbb74.test-cncf-aws.k8s.io/backups/etcd/events: [2021-08-04T23:03:11Z-000001]
I0804 23:03:12.299444    4328 cleanup.go:166] retaining backup "2021-08-04T23:03:11Z-000001"
I0804 23:03:12.299466    4328 restore.go:98] Setting quarantined state to false
I0804 23:03:12.299807    4328 etcdserver.go:393] Reconfigure request: header:<leadership_token:"3vB3uqeYR07YTrYh3fWwsg" cluster_name:"etcd-events" > 
I0804 23:03:12.299924    4328 etcdserver.go:436] Stopping etcd for reconfigure request: header:<leadership_token:"3vB3uqeYR07YTrYh3fWwsg" cluster_name:"etcd-events" > 
I0804 23:03:12.299940    4328 etcdserver.go:640] killing etcd with datadir /rootfs/mnt/master-vol-0801a5b191af51ee4/data/nSTwYTz1pI5mSBHO3AhxXA
I0804 23:03:12.301249    4328 etcdprocess.go:131] Waiting for etcd to exit
I0804 23:03:12.401477    4328 etcdprocess.go:131] Waiting for etcd to exit
I0804 23:03:12.401511    4328 etcdprocess.go:136] Exited etcd: signal: killed
I0804 23:03:12.401684    4328 etcdserver.go:443] updated cluster state: cluster:<cluster_token:"nSTwYTz1pI5mSBHO3AhxXA" nodes:<name:"etcd-events-a" peer_urls:"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381" client_urls:"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002" quarantined_client_urls:"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:3995" tls_enabled:true > > etcd_version:"3.4.13" 
I0804 23:03:12.401822    4328 etcdserver.go:448] Starting etcd version "3.4.13"
I0804 23:03:12.401842    4328 etcdserver.go:556] starting etcd with state cluster:<cluster_token:"nSTwYTz1pI5mSBHO3AhxXA" nodes:<name:"etcd-events-a" peer_urls:"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381" client_urls:"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002" quarantined_client_urls:"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:3995" tls_enabled:true > > etcd_version:"3.4.13" 
I0804 23:03:12.401878    4328 etcdserver.go:565] starting etcd with datadir /rootfs/mnt/master-vol-0801a5b191af51ee4/data/nSTwYTz1pI5mSBHO3AhxXA
I0804 23:03:12.401958    4328 pki.go:59] adding peerClientIPs [172.20.63.249]
I0804 23:03:12.401978    4328 pki.go:67] generating peer keypair for etcd: {CommonName:etcd-events-a Organization:[] AltNames:{DNSNames:[etcd-events-a etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io] IPs:[172.20.63.249 127.0.0.1]} Usages:[2 1]}
I0804 23:03:12.402215    4328 certs.go:122] existing certificate not valid after 2023-08-04T23:03:10Z; will regenerate
I0804 23:03:12.402227    4328 certs.go:183] generating certificate for "etcd-events-a"
I0804 23:03:12.404139    4328 pki.go:110] building client-serving certificate: {CommonName:etcd-events-a Organization:[] AltNames:{DNSNames:[etcd-events-a etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io] IPs:[127.0.0.1]} Usages:[1 2]}
I0804 23:03:12.404310    4328 certs.go:122] existing certificate not valid after 2023-08-04T23:03:10Z; will regenerate
I0804 23:03:12.404321    4328 certs.go:183] generating certificate for "etcd-events-a"
I0804 23:03:12.596042    4328 certs.go:183] generating certificate for "etcd-events-a"
I0804 23:03:12.597839    4328 etcdprocess.go:203] executing command /opt/etcd-v3.4.13-linux-amd64/etcd [/opt/etcd-v3.4.13-linux-amd64/etcd]
I0804 23:03:12.598307    4328 restore.go:116] ReconfigureResponse: 
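Note the ordering above: the new cluster was started quarantined on port 3995, a verified snapshot was uploaded to S3 first, and only then was quarantine lifted, which means killing etcd and restarting it advertising the real client URL on port 4002 with ETCD_INITIAL_CLUSTER_STATE=existing. A toy Go sketch of that flip, with hypothetical process-control helpers:

package main

import "fmt"

type etcdState struct {
	Quarantined bool
	ClientPort  int
}

func stopEtcd()              { fmt.Println("killing etcd; waiting for exit") }
func startEtcd(s *etcdState) { fmt.Printf("etcd serving clients on :%d\n", s.ClientPort) }

// liftQuarantine mirrors the reconfigure above: stop the quarantined server
// and restart it advertising the real client URL, cluster state "existing".
func liftQuarantine(s *etcdState) {
	stopEtcd()
	s.Quarantined = false
	s.ClientPort = 4002
	startEtcd(s)
}

func main() {
	s := &etcdState{Quarantined: true, ClientPort: 3995}
	// The backup lands in S3 while the server is still quarantined, so no
	// client writes can race the first snapshot.
	liftQuarantine(s)
}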
view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.63.249:3997\" > leadership_token:\"3vB3uqeYR07YTrYh3fWwsg\" healthy:<id:\"etcd-events-a\" endpoints:\"172.20.63.249:3997\" > > \nI0804 23:03:12.599840    4328 controller.go:295] I am leader with token \"3vB3uqeYR07YTrYh3fWwsg\"\nI0804 23:03:12.600255    4328 controller.go:705] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002]\n2021-08-04 23:03:12.604437 I | pkg/flags: recognized and used environment variable ETCD_ADVERTISE_CLIENT_URLS=https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002\n2021-08-04 23:03:12.604550 I | pkg/flags: recognized and used environment variable ETCD_CERT_FILE=/rootfs/mnt/master-vol-0801a5b191af51ee4/pki/nSTwYTz1pI5mSBHO3AhxXA/clients/server.crt\n2021-08-04 23:03:12.604561 I | pkg/flags: recognized and used environment variable ETCD_CLIENT_CERT_AUTH=true\n2021-08-04 23:03:12.604570 I | pkg/flags: recognized and used environment variable ETCD_DATA_DIR=/rootfs/mnt/master-vol-0801a5b191af51ee4/data/nSTwYTz1pI5mSBHO3AhxXA\n2021-08-04 23:03:12.604579 I | pkg/flags: recognized and used environment variable ETCD_ENABLE_V2=false\n2021-08-04 23:03:12.604662 I | pkg/flags: recognized and used environment variable ETCD_INITIAL_ADVERTISE_PEER_URLS=https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\n2021-08-04 23:03:12.604670 I | pkg/flags: recognized and used environment variable ETCD_INITIAL_CLUSTER=etcd-events-a=https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\n2021-08-04 23:03:12.604674 I | pkg/flags: recognized and used environment variable ETCD_INITIAL_CLUSTER_STATE=existing\n2021-08-04 23:03:12.604680 I | pkg/flags: recognized and used environment variable ETCD_INITIAL_CLUSTER_TOKEN=nSTwYTz1pI5mSBHO3AhxXA\n2021-08-04 23:03:12.604768 I | pkg/flags: recognized and used environment variable ETCD_KEY_FILE=/rootfs/mnt/master-vol-0801a5b191af51ee4/pki/nSTwYTz1pI5mSBHO3AhxXA/clients/server.key\n2021-08-04 23:03:12.604782 I | pkg/flags: recognized and used environment variable ETCD_LISTEN_CLIENT_URLS=https://0.0.0.0:4002\n2021-08-04 23:03:12.604839 I | pkg/flags: recognized and used environment variable ETCD_LISTEN_PEER_URLS=https://0.0.0.0:2381\n2021-08-04 23:03:12.604848 I | pkg/flags: recognized and used environment variable ETCD_LOG_OUTPUTS=stdout\n2021-08-04 23:03:12.604886 I | pkg/flags: recognized and used environment variable ETCD_LOGGER=zap\n2021-08-04 23:03:12.604916 I | pkg/flags: recognized and used environment variable ETCD_NAME=etcd-events-a\n2021-08-04 23:03:12.604966 I | pkg/flags: recognized and used environment variable ETCD_PEER_CERT_FILE=/rootfs/mnt/master-vol-0801a5b191af51ee4/pki/nSTwYTz1pI5mSBHO3AhxXA/peers/me.crt\n2021-08-04 23:03:12.604975 I | pkg/flags: recognized and used environment variable ETCD_PEER_CLIENT_CERT_AUTH=true\n2021-08-04 23:03:12.604981 I | pkg/flags: recognized and used environment variable ETCD_PEER_KEY_FILE=/rootfs/mnt/master-vol-0801a5b191af51ee4/pki/nSTwYTz1pI5mSBHO3AhxXA/peers/me.key\n2021-08-04 23:03:12.604986 I | pkg/flags: recognized and used environment variable ETCD_PEER_TRUSTED_CA_FILE=/rootfs/mnt/master-vol-0801a5b191af51ee4/pki/nSTwYTz1pI5mSBHO3AhxXA/peers/ca.crt\n2021-08-04 23:03:12.605057 I | pkg/flags: recognized and used environment variable ETCD_TRUSTED_CA_FILE=/rootfs/mnt/master-vol-0801a5b191af51ee4/pki/nSTwYTz1pI5mSBHO3AhxXA/clients/ca.crt\n2021-08-04 23:03:12.605120 W | pkg/flags: unrecognized environment variable 
ETCD_LISTEN_METRICS_URLS=\n{\"level\":\"info\",\"ts\":\"2021-08-04T23:03:12.605Z\",\"caller\":\"etcdmain/etcd.go:134\",\"msg\":\"server has been already initialized\",\"data-dir\":\"/rootfs/mnt/master-vol-0801a5b191af51ee4/data/nSTwYTz1pI5mSBHO3AhxXA\",\"dir-type\":\"member\"}\n{\"level\":\"info\",\"ts\":\"2021-08-04T23:03:12.605Z\",\"caller\":\"embed/etcd.go:117\",\"msg\":\"configuring peer listeners\",\"listen-peer-urls\":[\"https://0.0.0.0:2381\"]}\n{\"level\":\"info\",\"ts\":\"2021-08-04T23:03:12.605Z\",\"caller\":\"embed/etcd.go:468\",\"msg\":\"starting with peer TLS\",\"tls-info\":\"cert = /rootfs/mnt/master-vol-0801a5b191af51ee4/pki/nSTwYTz1pI5mSBHO3AhxXA/peers/me.crt, key = /rootfs/mnt/master-vol-0801a5b191af51ee4/pki/nSTwYTz1pI5mSBHO3AhxXA/peers/me.key, trusted-ca = /rootfs/mnt/master-vol-0801a5b191af51ee4/pki/nSTwYTz1pI5mSBHO3AhxXA/peers/ca.crt, client-cert-auth = true, crl-file = \",\"cipher-suites\":[]}\n{\"level\":\"info\",\"ts\":\"2021-08-04T23:03:12.605Z\",\"caller\":\"embed/etcd.go:127\",\"msg\":\"configuring client listeners\",\"listen-client-urls\":[\"https://0.0.0.0:4002\"]}\n{\"level\":\"info\",\"ts\":\"2021-08-04T23:03:12.606Z\",\"caller\":\"embed/etcd.go:302\",\"msg\":\"starting an etcd server\",\"etcd-version\":\"3.4.13\",\"git-sha\":\"ae9734ed2\",\"go-version\":\"go1.12.17\",\"go-os\":\"linux\",\"go-arch\":\"amd64\",\"max-cpu-set\":2,\"max-cpu-available\":2,\"member-initialized\":true,\"name\":\"etcd-events-a\",\"data-dir\":\"/rootfs/mnt/master-vol-0801a5b191af51ee4/data/nSTwYTz1pI5mSBHO3AhxXA\",\"wal-dir\":\"\",\"wal-dir-dedicated\":\"\",\"member-dir\":\"/rootfs/mnt/master-vol-0801a5b191af51ee4/data/nSTwYTz1pI5mSBHO3AhxXA/member\",\"force-new-cluster\":false,\"heartbeat-interval\":\"100ms\",\"election-timeout\":\"1s\",\"initial-election-tick-advance\":true,\"snapshot-count\":100000,\"snapshot-catchup-entries\":5000,\"initial-advertise-peer-urls\":[\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\"],\"listen-peer-urls\":[\"https://0.0.0.0:2381\"],\"advertise-client-urls\":[\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002\"],\"listen-client-urls\":[\"https://0.0.0.0:4002\"],\"listen-metrics-urls\":[],\"cors\":[\"*\"],\"host-whitelist\":[\"*\"],\"initial-cluster\":\"\",\"initial-cluster-state\":\"existing\",\"initial-cluster-token\":\"\",\"quota-size-bytes\":2147483648,\"pre-vote\":false,\"initial-corrupt-check\":false,\"corrupt-check-time-interval\":\"0s\",\"auto-compaction-mode\":\"periodic\",\"auto-compaction-retention\":\"0s\",\"auto-compaction-interval\":\"0s\",\"discovery-url\":\"\",\"discovery-proxy\":\"\"}\n{\"level\":\"info\",\"ts\":\"2021-08-04T23:03:12.606Z\",\"caller\":\"etcdserver/backend.go:80\",\"msg\":\"opened backend db\",\"path\":\"/rootfs/mnt/master-vol-0801a5b191af51ee4/data/nSTwYTz1pI5mSBHO3AhxXA/member/snap/db\",\"took\":\"103.973µs\"}\n{\"level\":\"info\",\"ts\":\"2021-08-04T23:03:12.607Z\",\"caller\":\"etcdserver/raft.go:536\",\"msg\":\"restarting local member\",\"cluster-id\":\"96bf50b6f84e10ba\",\"local-member-id\":\"281e897f2714877e\",\"commit-index\":4}\n{\"level\":\"info\",\"ts\":\"2021-08-04T23:03:12.607Z\",\"caller\":\"raft/raft.go:1530\",\"msg\":\"281e897f2714877e switched to configuration voters=()\"}\n{\"level\":\"info\",\"ts\":\"2021-08-04T23:03:12.607Z\",\"caller\":\"raft/raft.go:700\",\"msg\":\"281e897f2714877e became follower at term 2\"}\n{\"level\":\"info\",\"ts\":\"2021-08-04T23:03:12.607Z\",\"caller\":\"raft/raft.go:383\",\"msg\":\"newRaft 281e897f2714877e [peers: 
{"level":"info","ts":"2021-08-04T23:03:12.607Z","caller":"raft/raft.go:383","msg":"newRaft 281e897f2714877e [peers: [], term: 2, commit: 4, applied: 0, lastindex: 4, lastterm: 2]"}
{"level":"warn","ts":"2021-08-04T23:03:12.608Z","caller":"auth/store.go:1366","msg":"simple token is not cryptographically signed"}
{"level":"info","ts":"2021-08-04T23:03:12.610Z","caller":"etcdserver/quota.go:98","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
{"level":"info","ts":"2021-08-04T23:03:12.611Z","caller":"etcdserver/server.go:803","msg":"starting etcd server","local-member-id":"281e897f2714877e","local-server-version":"3.4.13","cluster-version":"to_be_decided"}
{"level":"info","ts":"2021-08-04T23:03:12.611Z","caller":"etcdserver/server.go:691","msg":"starting initial election tick advance","election-ticks":10}
{"level":"info","ts":"2021-08-04T23:03:12.611Z","caller":"raft/raft.go:1530","msg":"281e897f2714877e switched to configuration voters=(2890899190027945854)"}
{"level":"info","ts":"2021-08-04T23:03:12.613Z","caller":"membership/cluster.go:392","msg":"added member","cluster-id":"96bf50b6f84e10ba","local-member-id":"281e897f2714877e","added-peer-id":"281e897f2714877e","added-peer-peer-urls":["https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381"]}
{"level":"info","ts":"2021-08-04T23:03:12.613Z","caller":"membership/cluster.go:558","msg":"set initial cluster version","cluster-id":"96bf50b6f84e10ba","local-member-id":"281e897f2714877e","cluster-version":"3.4"}
{"level":"info","ts":"2021-08-04T23:03:12.613Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.4"}
{"level":"info","ts":"2021-08-04T23:03:12.614Z","caller":"embed/etcd.go:711","msg":"starting with client TLS","tls-info":"cert = /rootfs/mnt/master-vol-0801a5b191af51ee4/pki/nSTwYTz1pI5mSBHO3AhxXA/clients/server.crt, key = /rootfs/mnt/master-vol-0801a5b191af51ee4/pki/nSTwYTz1pI5mSBHO3AhxXA/clients/server.key, trusted-ca = /rootfs/mnt/master-vol-0801a5b191af51ee4/pki/nSTwYTz1pI5mSBHO3AhxXA/clients/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"info","ts":"2021-08-04T23:03:12.614Z","caller":"embed/etcd.go:579","msg":"serving peer traffic","address":"[::]:2381"}
{"level":"info","ts":"2021-08-04T23:03:12.614Z","caller":"embed/etcd.go:244","msg":"now serving peer/client/metrics","local-member-id":"281e897f2714877e","initial-advertise-peer-urls":["https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381"],"listen-peer-urls":["https://0.0.0.0:2381"],"advertise-client-urls":["https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002"],"listen-client-urls":["https://0.0.0.0:4002"],"listen-metrics-urls":[]}
{"level":"info","ts":"2021-08-04T23:03:14.207Z","caller":"raft/raft.go:923","msg":"281e897f2714877e is starting a new election at term 2"}
{"level":"info","ts":"2021-08-04T23:03:14.208Z","caller":"raft/raft.go:713","msg":"281e897f2714877e became candidate at term 3"}
{"level":"info","ts":"2021-08-04T23:03:14.208Z","caller":"raft/raft.go:824","msg":"281e897f2714877e received MsgVoteResp from 281e897f2714877e at term 3"}
{"level":"info","ts":"2021-08-04T23:03:14.208Z","caller":"raft/raft.go:765","msg":"281e897f2714877e became leader at term 3"}
{"level":"info","ts":"2021-08-04T23:03:14.208Z","caller":"raft/node.go:325","msg":"raft.node: 281e897f2714877e elected leader 281e897f2714877e at term 3"}
{"level":"info","ts":"2021-08-04T23:03:14.208Z","caller":"etcdserver/server.go:2037","msg":"published local member to cluster through raft","local-member-id":"281e897f2714877e","local-member-attributes":"{Name:etcd-events-a ClientURLs:[https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002]}","request-path":"/0/members/281e897f2714877e/attributes","cluster-id":"96bf50b6f84e10ba","publish-timeout":"7s"}
{"level":"info","ts":"2021-08-04T23:03:14.209Z","caller":"embed/serve.go:191","msg":"serving client traffic securely","address":"[::]:4002"}
I0804 23:03:14.245490    4328 controller.go:302] etcd cluster state: etcdClusterState
  members:
    {"name":"etcd-events-a","peerURLs":["https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381"],"endpoints":["https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002"],"ID":"2890899190027945854"}
  peers:
    etcdClusterPeerInfo{peer=peer{id:"etcd-events-a" endpoints:"172.20.63.249:3997" }, info=cluster_name:"etcd-events" node_configuration:<name:"etcd-events-a" peer_urls:"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381" client_urls:"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002" quarantined_client_urls:"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:3995" > etcd_state:<cluster:<cluster_token:"nSTwYTz1pI5mSBHO3AhxXA" nodes:<name:"etcd-events-a" peer_urls:"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381" client_urls:"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002" quarantined_client_urls:"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:3995" tls_enabled:true > > etcd_version:"3.4.13" > }
I0804 23:03:14.245601    4328 controller.go:303] etcd cluster members: map[2890899190027945854:{"name":"etcd-events-a","peerURLs":["https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381"],"endpoints":["https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002"],"ID":"2890899190027945854"}]
I0804 23:03:14.246217    4328 controller.go:641] sending member map to all peers: members:<name:"etcd-events-a" dns:"etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io" addresses:"172.20.63.249" > 
I0804 23:03:14.246453    4328 etcdserver.go:248] updating hosts: map[172.20.63.249:[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]]
I0804 23:03:14.246469    4328 hosts.go:84] hosts update: primary=map[172.20.63.249:[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:[172.20.63.249 172.20.63.249]], final=map[172.20.63.249:[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]]
I0804 23:03:14.246519    4328 hosts.go:181] skipping update of unchanged /etc/hosts
I0804 23:03:14.246608    4328 commands.go:38] not refreshing commands - TTL not hit
\"s3://k8s-kops-prow/e2e-489af555f9-bbb74.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created\"\nI0804 23:03:14.393095    4328 controller.go:395] spec member_count:1 etcd_version:\"3.4.13\" \nI0804 23:03:14.393176    4328 controller.go:557] controller loop complete\nI0804 23:03:24.394332    4328 controller.go:189] starting controller iteration\nI0804 23:03:24.394382    4328 controller.go:266] Broadcasting leadership assertion with token \"3vB3uqeYR07YTrYh3fWwsg\"\nI0804 23:03:24.394646    4328 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.63.249:3997\" > leadership_token:\"3vB3uqeYR07YTrYh3fWwsg\" healthy:<id:\"etcd-events-a\" endpoints:\"172.20.63.249:3997\" > > \nI0804 23:03:24.394768    4328 controller.go:295] I am leader with token \"3vB3uqeYR07YTrYh3fWwsg\"\nI0804 23:03:24.395350    4328 controller.go:705] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002]\nI0804 23:03:24.414200    4328 controller.go:302] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002\"],\"ID\":\"2890899190027945854\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"172.20.63.249:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:3995\" > etcd_state:<cluster:<cluster_token:\"nSTwYTz1pI5mSBHO3AhxXA\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0804 23:03:24.414703    4328 controller.go:303] etcd cluster members: map[2890899190027945854:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002\"],\"ID\":\"2890899190027945854\"}]\nI0804 23:03:24.414912    4328 controller.go:641] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io\" addresses:\"172.20.63.249\" > \nI0804 23:03:24.415303    4328 etcdserver.go:248] updating hosts: map[172.20.63.249:[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]]\nI0804 23:03:24.415521    4328 hosts.go:84] hosts update: primary=map[172.20.63.249:[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:[172.20.63.249 172.20.63.249]], final=map[172.20.63.249:[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]]\nI0804 23:03:24.415706    4328 hosts.go:181] skipping update of unchanged /etc/hosts\nI0804 23:03:24.415912    4328 commands.go:38] not refreshing commands - TTL not hit\nI0804 23:03:24.416023    4328 
s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-489af555f9-bbb74.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created\"\nI0804 23:03:24.987719    4328 controller.go:395] spec member_count:1 etcd_version:\"3.4.13\" \nI0804 23:03:24.987856    4328 controller.go:557] controller loop complete\nI0804 23:03:34.989045    4328 controller.go:189] starting controller iteration\nI0804 23:03:34.989076    4328 controller.go:266] Broadcasting leadership assertion with token \"3vB3uqeYR07YTrYh3fWwsg\"\nI0804 23:03:34.989329    4328 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.63.249:3997\" > leadership_token:\"3vB3uqeYR07YTrYh3fWwsg\" healthy:<id:\"etcd-events-a\" endpoints:\"172.20.63.249:3997\" > > \nI0804 23:03:34.989474    4328 controller.go:295] I am leader with token \"3vB3uqeYR07YTrYh3fWwsg\"\nI0804 23:03:34.989849    4328 controller.go:705] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002]\nI0804 23:03:35.007672    4328 controller.go:302] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002\"],\"ID\":\"2890899190027945854\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"172.20.63.249:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:3995\" > etcd_state:<cluster:<cluster_token:\"nSTwYTz1pI5mSBHO3AhxXA\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0804 23:03:35.007803    4328 controller.go:303] etcd cluster members: map[2890899190027945854:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002\"],\"ID\":\"2890899190027945854\"}]\nI0804 23:03:35.007829    4328 controller.go:641] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io\" addresses:\"172.20.63.249\" > \nI0804 23:03:35.008109    4328 etcdserver.go:248] updating hosts: map[172.20.63.249:[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]]\nI0804 23:03:35.008142    4328 hosts.go:84] hosts update: primary=map[172.20.63.249:[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:[172.20.63.249 172.20.63.249]], final=map[172.20.63.249:[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]]\nI0804 23:03:35.008216    4328 hosts.go:181] skipping update of unchanged /etc/hosts\nI0804 23:03:35.008319    4328 commands.go:38] not refreshing commands - TTL not 
hit\nI0804 23:03:35.008334    4328 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-489af555f9-bbb74.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created\"\nI0804 23:03:35.580782    4328 controller.go:395] spec member_count:1 etcd_version:\"3.4.13\" \nI0804 23:03:35.580873    4328 controller.go:557] controller loop complete\nI0804 23:03:45.582455    4328 controller.go:189] starting controller iteration\nI0804 23:03:45.582498    4328 controller.go:266] Broadcasting leadership assertion with token \"3vB3uqeYR07YTrYh3fWwsg\"\nI0804 23:03:45.582751    4328 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.63.249:3997\" > leadership_token:\"3vB3uqeYR07YTrYh3fWwsg\" healthy:<id:\"etcd-events-a\" endpoints:\"172.20.63.249:3997\" > > \nI0804 23:03:45.582930    4328 controller.go:295] I am leader with token \"3vB3uqeYR07YTrYh3fWwsg\"\nI0804 23:03:45.583542    4328 controller.go:705] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002]\nI0804 23:03:45.594659    4328 controller.go:302] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002\"],\"ID\":\"2890899190027945854\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"172.20.63.249:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:3995\" > etcd_state:<cluster:<cluster_token:\"nSTwYTz1pI5mSBHO3AhxXA\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0804 23:03:45.594742    4328 controller.go:303] etcd cluster members: map[2890899190027945854:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002\"],\"ID\":\"2890899190027945854\"}]\nI0804 23:03:45.594762    4328 controller.go:641] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io\" addresses:\"172.20.63.249\" > \nI0804 23:03:45.594944    4328 etcdserver.go:248] updating hosts: map[172.20.63.249:[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]]\nI0804 23:03:45.594960    4328 hosts.go:84] hosts update: primary=map[172.20.63.249:[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:[172.20.63.249 172.20.63.249]], final=map[172.20.63.249:[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]]\nI0804 23:03:45.595015    4328 hosts.go:181] skipping update of unchanged /etc/hosts\nI0804 23:03:45.595096    4328 commands.go:38] not 
refreshing commands - TTL not hit\nI0804 23:03:45.595110    4328 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-489af555f9-bbb74.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created\"\nI0804 23:03:46.163039    4328 controller.go:395] spec member_count:1 etcd_version:\"3.4.13\" \nI0804 23:03:46.163123    4328 controller.go:557] controller loop complete\nI0804 23:03:56.165222    4328 controller.go:189] starting controller iteration\nI0804 23:03:56.165260    4328 controller.go:266] Broadcasting leadership assertion with token \"3vB3uqeYR07YTrYh3fWwsg\"\nI0804 23:03:56.165537    4328 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.63.249:3997\" > leadership_token:\"3vB3uqeYR07YTrYh3fWwsg\" healthy:<id:\"etcd-events-a\" endpoints:\"172.20.63.249:3997\" > > \nI0804 23:03:56.165718    4328 controller.go:295] I am leader with token \"3vB3uqeYR07YTrYh3fWwsg\"\nI0804 23:03:56.166299    4328 controller.go:705] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002]\nI0804 23:03:56.179613    4328 controller.go:302] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002\"],\"ID\":\"2890899190027945854\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"172.20.63.249:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:3995\" > etcd_state:<cluster:<cluster_token:\"nSTwYTz1pI5mSBHO3AhxXA\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0804 23:03:56.179799    4328 controller.go:303] etcd cluster members: map[2890899190027945854:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002\"],\"ID\":\"2890899190027945854\"}]\nI0804 23:03:56.179825    4328 controller.go:641] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io\" addresses:\"172.20.63.249\" > \nI0804 23:03:56.179984    4328 etcdserver.go:248] updating hosts: map[172.20.63.249:[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]]\nI0804 23:03:56.179998    4328 hosts.go:84] hosts update: primary=map[172.20.63.249:[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:[172.20.63.249 172.20.63.249]], final=map[172.20.63.249:[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]]\nI0804 23:03:56.180054    4328 hosts.go:181] skipping update of unchanged /etc/hosts\nI0804 23:03:56.180133   
 4328 commands.go:38] not refreshing commands - TTL not hit\nI0804 23:03:56.180147    4328 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-489af555f9-bbb74.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created\"\nI0804 23:03:56.744089    4328 controller.go:395] spec member_count:1 etcd_version:\"3.4.13\" \nI0804 23:03:56.744170    4328 controller.go:557] controller loop complete\nI0804 23:03:56.809315    4328 volumes.go:86] AWS API Request: ec2/DescribeVolumes\nI0804 23:03:56.926118    4328 volumes.go:86] AWS API Request: ec2/DescribeInstances\nI0804 23:03:56.990632    4328 hosts.go:84] hosts update: primary=map[172.20.63.249:[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:[172.20.63.249 172.20.63.249]], final=map[172.20.63.249:[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]]\nI0804 23:03:56.990711    4328 hosts.go:181] skipping update of unchanged /etc/hosts\nI0804 23:04:06.746017    4328 controller.go:189] starting controller iteration\nI0804 23:04:06.746054    4328 controller.go:266] Broadcasting leadership assertion with token \"3vB3uqeYR07YTrYh3fWwsg\"\nI0804 23:04:06.746294    4328 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.63.249:3997\" > leadership_token:\"3vB3uqeYR07YTrYh3fWwsg\" healthy:<id:\"etcd-events-a\" endpoints:\"172.20.63.249:3997\" > > \nI0804 23:04:06.746418    4328 controller.go:295] I am leader with token \"3vB3uqeYR07YTrYh3fWwsg\"\nI0804 23:04:06.746718    4328 controller.go:705] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002]\nI0804 23:04:06.757781    4328 controller.go:302] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002\"],\"ID\":\"2890899190027945854\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"172.20.63.249:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:3995\" > etcd_state:<cluster:<cluster_token:\"nSTwYTz1pI5mSBHO3AhxXA\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0804 23:04:06.757868    4328 controller.go:303] etcd cluster members: map[2890899190027945854:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002\"],\"ID\":\"2890899190027945854\"}]\nI0804 23:04:06.757899    4328 controller.go:641] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io\" 
addresses:\"172.20.63.249\" > \nI0804 23:04:06.758257    4328 etcdserver.go:248] updating hosts: map[172.20.63.249:[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]]\nI0804 23:04:06.758274    4328 hosts.go:84] hosts update: primary=map[172.20.63.249:[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:[172.20.63.249 172.20.63.249]], final=map[172.20.63.249:[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]]\nI0804 23:04:06.758322    4328 hosts.go:181] skipping update of unchanged /etc/hosts\nI0804 23:04:06.758403    4328 commands.go:38] not refreshing commands - TTL not hit\nI0804 23:04:06.758414    4328 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-489af555f9-bbb74.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created\"\nI0804 23:04:07.332014    4328 controller.go:395] spec member_count:1 etcd_version:\"3.4.13\" \nI0804 23:04:07.332130    4328 controller.go:557] controller loop complete\nI0804 23:04:17.333317    4328 controller.go:189] starting controller iteration\nI0804 23:04:17.333498    4328 controller.go:266] Broadcasting leadership assertion with token \"3vB3uqeYR07YTrYh3fWwsg\"\nI0804 23:04:17.333806    4328 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.63.249:3997\" > leadership_token:\"3vB3uqeYR07YTrYh3fWwsg\" healthy:<id:\"etcd-events-a\" endpoints:\"172.20.63.249:3997\" > > \nI0804 23:04:17.334047    4328 controller.go:295] I am leader with token \"3vB3uqeYR07YTrYh3fWwsg\"\nI0804 23:04:17.334728    4328 controller.go:705] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002]\nI0804 23:04:17.351510    4328 controller.go:302] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002\"],\"ID\":\"2890899190027945854\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"172.20.63.249:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:3995\" > etcd_state:<cluster:<cluster_token:\"nSTwYTz1pI5mSBHO3AhxXA\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0804 23:04:17.351764    4328 controller.go:303] etcd cluster members: map[2890899190027945854:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002\"],\"ID\":\"2890899190027945854\"}]\nI0804 23:04:17.351852    4328 controller.go:641] sending member map to all peers: members:<name:\"etcd-events-a\" 
dns:\"etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io\" addresses:\"172.20.63.249\" > \nI0804 23:04:17.352262    4328 etcdserver.go:248] updating hosts: map[172.20.63.249:[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]]\nI0804 23:04:17.352368    4328 hosts.go:84] hosts update: primary=map[172.20.63.249:[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:[172.20.63.249 172.20.63.249]], final=map[172.20.63.249:[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]]\nI0804 23:04:17.352512    4328 hosts.go:181] skipping update of unchanged /etc/hosts\nI0804 23:04:17.352667    4328 commands.go:38] not refreshing commands - TTL not hit\nI0804 23:04:17.352767    4328 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-489af555f9-bbb74.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created\"\nI0804 23:04:17.924408    4328 controller.go:395] spec member_count:1 etcd_version:\"3.4.13\" \nI0804 23:04:17.924485    4328 controller.go:557] controller loop complete\nI0804 23:04:27.926511    4328 controller.go:189] starting controller iteration\nI0804 23:04:27.926549    4328 controller.go:266] Broadcasting leadership assertion with token \"3vB3uqeYR07YTrYh3fWwsg\"\nI0804 23:04:27.926755    4328 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.63.249:3997\" > leadership_token:\"3vB3uqeYR07YTrYh3fWwsg\" healthy:<id:\"etcd-events-a\" endpoints:\"172.20.63.249:3997\" > > \nI0804 23:04:27.926868    4328 controller.go:295] I am leader with token \"3vB3uqeYR07YTrYh3fWwsg\"\nI0804 23:04:27.927397    4328 controller.go:705] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002]\nI0804 23:04:27.940240    4328 controller.go:302] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002\"],\"ID\":\"2890899190027945854\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"172.20.63.249:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:3995\" > etcd_state:<cluster:<cluster_token:\"nSTwYTz1pI5mSBHO3AhxXA\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0804 23:04:27.940329    4328 controller.go:303] etcd cluster members: map[2890899190027945854:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002\"],\"ID\":\"2890899190027945854\"}]\nI0804 23:04:27.940623    4328 controller.go:641] sending member map to all peers: 
members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io\" addresses:\"172.20.63.249\" > \nI0804 23:04:27.940829    4328 etcdserver.go:248] updating hosts: map[172.20.63.249:[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]]\nI0804 23:04:27.940845    4328 hosts.go:84] hosts update: primary=map[172.20.63.249:[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:[172.20.63.249 172.20.63.249]], final=map[172.20.63.249:[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]]\nI0804 23:04:27.940982    4328 hosts.go:181] skipping update of unchanged /etc/hosts\nI0804 23:04:27.941079    4328 commands.go:38] not refreshing commands - TTL not hit\nI0804 23:04:27.941160    4328 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-489af555f9-bbb74.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created\"\nI0804 23:04:28.505511    4328 controller.go:395] spec member_count:1 etcd_version:\"3.4.13\" \nI0804 23:04:28.505645    4328 controller.go:557] controller loop complete\nI0804 23:04:38.507877    4328 controller.go:189] starting controller iteration\nI0804 23:04:38.507912    4328 controller.go:266] Broadcasting leadership assertion with token \"3vB3uqeYR07YTrYh3fWwsg\"\nI0804 23:04:38.508174    4328 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.63.249:3997\" > leadership_token:\"3vB3uqeYR07YTrYh3fWwsg\" healthy:<id:\"etcd-events-a\" endpoints:\"172.20.63.249:3997\" > > \nI0804 23:04:38.508308    4328 controller.go:295] I am leader with token \"3vB3uqeYR07YTrYh3fWwsg\"\nI0804 23:04:38.508661    4328 controller.go:705] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002]\nI0804 23:04:38.519415    4328 controller.go:302] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002\"],\"ID\":\"2890899190027945854\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"172.20.63.249:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:3995\" > etcd_state:<cluster:<cluster_token:\"nSTwYTz1pI5mSBHO3AhxXA\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0804 23:04:38.519520    4328 controller.go:303] etcd cluster members: map[2890899190027945854:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002\"],\"ID\":\"2890899190027945854\"}]\nI0804 23:04:38.519658    4328 controller.go:641] 
sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io\" addresses:\"172.20.63.249\" > \nI0804 23:04:38.519848    4328 etcdserver.go:248] updating hosts: map[172.20.63.249:[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]]\nI0804 23:04:38.519913    4328 hosts.go:84] hosts update: primary=map[172.20.63.249:[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:[172.20.63.249 172.20.63.249]], final=map[172.20.63.249:[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]]\nI0804 23:04:38.519981    4328 hosts.go:181] skipping update of unchanged /etc/hosts\nI0804 23:04:38.520082    4328 commands.go:38] not refreshing commands - TTL not hit\nI0804 23:04:38.520111    4328 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-489af555f9-bbb74.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created\"\nI0804 23:04:39.094153    4328 controller.go:395] spec member_count:1 etcd_version:\"3.4.13\" \nI0804 23:04:39.094298    4328 controller.go:557] controller loop complete\nI0804 23:04:49.096411    4328 controller.go:189] starting controller iteration\nI0804 23:04:49.096450    4328 controller.go:266] Broadcasting leadership assertion with token \"3vB3uqeYR07YTrYh3fWwsg\"\nI0804 23:04:49.096742    4328 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.63.249:3997\" > leadership_token:\"3vB3uqeYR07YTrYh3fWwsg\" healthy:<id:\"etcd-events-a\" endpoints:\"172.20.63.249:3997\" > > \nI0804 23:04:49.096889    4328 controller.go:295] I am leader with token \"3vB3uqeYR07YTrYh3fWwsg\"\nI0804 23:04:49.097411    4328 controller.go:705] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002]\nI0804 23:04:49.112456    4328 controller.go:302] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002\"],\"ID\":\"2890899190027945854\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"172.20.63.249:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:3995\" > etcd_state:<cluster:<cluster_token:\"nSTwYTz1pI5mSBHO3AhxXA\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0804 23:04:49.112775    4328 controller.go:303] etcd cluster members: map[2890899190027945854:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002\"],\"ID\":\"2890899190027945854\"}]\nI0804 23:04:49.112881  
  4328 controller.go:641] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io\" addresses:\"172.20.63.249\" > \nI0804 23:04:49.113132    4328 etcdserver.go:248] updating hosts: map[172.20.63.249:[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]]\nI0804 23:04:49.113148    4328 hosts.go:84] hosts update: primary=map[172.20.63.249:[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:[172.20.63.249 172.20.63.249]], final=map[172.20.63.249:[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]]\nI0804 23:04:49.113231    4328 hosts.go:181] skipping update of unchanged /etc/hosts\nI0804 23:04:49.113398    4328 commands.go:38] not refreshing commands - TTL not hit\nI0804 23:04:49.113480    4328 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-489af555f9-bbb74.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created\"\nI0804 23:04:49.683245    4328 controller.go:395] spec member_count:1 etcd_version:\"3.4.13\" \nI0804 23:04:49.683318    4328 controller.go:557] controller loop complete\nI0804 23:04:56.991400    4328 volumes.go:86] AWS API Request: ec2/DescribeVolumes\nI0804 23:04:57.105291    4328 volumes.go:86] AWS API Request: ec2/DescribeInstances\nI0804 23:04:57.144872    4328 hosts.go:84] hosts update: primary=map[172.20.63.249:[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:[172.20.63.249 172.20.63.249]], final=map[172.20.63.249:[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]]\nI0804 23:04:57.144974    4328 hosts.go:181] skipping update of unchanged /etc/hosts\nI0804 23:04:59.685480    4328 controller.go:189] starting controller iteration\nI0804 23:04:59.685565    4328 controller.go:266] Broadcasting leadership assertion with token \"3vB3uqeYR07YTrYh3fWwsg\"\nI0804 23:04:59.685875    4328 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.63.249:3997\" > leadership_token:\"3vB3uqeYR07YTrYh3fWwsg\" healthy:<id:\"etcd-events-a\" endpoints:\"172.20.63.249:3997\" > > \nI0804 23:04:59.686075    4328 controller.go:295] I am leader with token \"3vB3uqeYR07YTrYh3fWwsg\"\nI0804 23:04:59.686769    4328 controller.go:705] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002]\nI0804 23:04:59.701557    4328 controller.go:302] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002\"],\"ID\":\"2890899190027945854\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"172.20.63.249:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:3995\" > etcd_state:<cluster:<cluster_token:\"nSTwYTz1pI5mSBHO3AhxXA\" nodes:<name:\"etcd-events-a\" 
peer_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0804 23:04:59.701677    4328 controller.go:303] etcd cluster members: map[2890899190027945854:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002\"],\"ID\":\"2890899190027945854\"}]\nI0804 23:04:59.701754    4328 controller.go:641] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io\" addresses:\"172.20.63.249\" > \nI0804 23:04:59.702051    4328 etcdserver.go:248] updating hosts: map[172.20.63.249:[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]]\nI0804 23:04:59.702068    4328 hosts.go:84] hosts update: primary=map[172.20.63.249:[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:[172.20.63.249 172.20.63.249]], final=map[172.20.63.249:[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]]\nI0804 23:04:59.702196    4328 hosts.go:181] skipping update of unchanged /etc/hosts\nI0804 23:04:59.702322    4328 commands.go:38] not refreshing commands - TTL not hit\nI0804 23:04:59.702365    4328 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-489af555f9-bbb74.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created\"\nI0804 23:05:00.279998    4328 controller.go:395] spec member_count:1 etcd_version:\"3.4.13\" \nI0804 23:05:00.280112    4328 controller.go:557] controller loop complete\nI0804 23:05:10.281950    4328 controller.go:189] starting controller iteration\nI0804 23:05:10.281988    4328 controller.go:266] Broadcasting leadership assertion with token \"3vB3uqeYR07YTrYh3fWwsg\"\nI0804 23:05:10.282296    4328 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.63.249:3997\" > leadership_token:\"3vB3uqeYR07YTrYh3fWwsg\" healthy:<id:\"etcd-events-a\" endpoints:\"172.20.63.249:3997\" > > \nI0804 23:05:10.282459    4328 controller.go:295] I am leader with token \"3vB3uqeYR07YTrYh3fWwsg\"\nI0804 23:05:10.283007    4328 controller.go:705] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002]\nI0804 23:05:10.305942    4328 controller.go:302] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002\"],\"ID\":\"2890899190027945854\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"172.20.63.249:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:3995\" > etcd_state:<cluster:<cluster_token:\"nSTwYTz1pI5mSBHO3AhxXA\" 
nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0804 23:05:10.306051    4328 controller.go:303] etcd cluster members: map[2890899190027945854:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002\"],\"ID\":\"2890899190027945854\"}]\nI0804 23:05:10.306069    4328 controller.go:641] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io\" addresses:\"172.20.63.249\" > \nI0804 23:05:10.306293    4328 etcdserver.go:248] updating hosts: map[172.20.63.249:[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]]\nI0804 23:05:10.306309    4328 hosts.go:84] hosts update: primary=map[172.20.63.249:[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:[172.20.63.249 172.20.63.249]], final=map[172.20.63.249:[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]]\nI0804 23:05:10.306362    4328 hosts.go:181] skipping update of unchanged /etc/hosts\nI0804 23:05:10.306496    4328 commands.go:38] not refreshing commands - TTL not hit\nI0804 23:05:10.306509    4328 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-489af555f9-bbb74.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created\"\nI0804 23:05:10.873166    4328 controller.go:395] spec member_count:1 etcd_version:\"3.4.13\" \nI0804 23:05:10.873292    4328 controller.go:557] controller loop complete\nI0804 23:05:20.874718    4328 controller.go:189] starting controller iteration\nI0804 23:05:20.874892    4328 controller.go:266] Broadcasting leadership assertion with token \"3vB3uqeYR07YTrYh3fWwsg\"\nI0804 23:05:20.875175    4328 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.63.249:3997\" > leadership_token:\"3vB3uqeYR07YTrYh3fWwsg\" healthy:<id:\"etcd-events-a\" endpoints:\"172.20.63.249:3997\" > > \nI0804 23:05:20.875306    4328 controller.go:295] I am leader with token \"3vB3uqeYR07YTrYh3fWwsg\"\nI0804 23:05:20.875812    4328 controller.go:705] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002]\nI0804 23:05:20.886732    4328 controller.go:302] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002\"],\"ID\":\"2890899190027945854\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"172.20.63.249:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:3995\" > 
etcd_state:<cluster:<cluster_token:\"nSTwYTz1pI5mSBHO3AhxXA\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0804 23:05:20.886926    4328 controller.go:303] etcd cluster members: map[2890899190027945854:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002\"],\"ID\":\"2890899190027945854\"}]\nI0804 23:05:20.886950    4328 controller.go:641] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io\" addresses:\"172.20.63.249\" > \nI0804 23:05:20.887233    4328 etcdserver.go:248] updating hosts: map[172.20.63.249:[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]]\nI0804 23:05:20.887249    4328 hosts.go:84] hosts update: primary=map[172.20.63.249:[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:[172.20.63.249 172.20.63.249]], final=map[172.20.63.249:[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]]\nI0804 23:05:20.887409    4328 hosts.go:181] skipping update of unchanged /etc/hosts\nI0804 23:05:20.887537    4328 commands.go:38] not refreshing commands - TTL not hit\nI0804 23:05:20.887613    4328 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-489af555f9-bbb74.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created\"\nI0804 23:05:21.454197    4328 controller.go:395] spec member_count:1 etcd_version:\"3.4.13\" \nI0804 23:05:21.454274    4328 controller.go:557] controller loop complete\nI0804 23:05:31.456470    4328 controller.go:189] starting controller iteration\nI0804 23:05:31.456508    4328 controller.go:266] Broadcasting leadership assertion with token \"3vB3uqeYR07YTrYh3fWwsg\"\nI0804 23:05:31.456792    4328 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.63.249:3997\" > leadership_token:\"3vB3uqeYR07YTrYh3fWwsg\" healthy:<id:\"etcd-events-a\" endpoints:\"172.20.63.249:3997\" > > \nI0804 23:05:31.456949    4328 controller.go:295] I am leader with token \"3vB3uqeYR07YTrYh3fWwsg\"\nI0804 23:05:31.457900    4328 controller.go:705] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002]\nI0804 23:05:31.477670    4328 controller.go:302] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002\"],\"ID\":\"2890899190027945854\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"172.20.63.249:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002\" 
quarantined_client_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:3995\" > etcd_state:<cluster:<cluster_token:\"nSTwYTz1pI5mSBHO3AhxXA\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0804 23:05:31.477750    4328 controller.go:303] etcd cluster members: map[2890899190027945854:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002\"],\"ID\":\"2890899190027945854\"}]\nI0804 23:05:31.477769    4328 controller.go:641] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io\" addresses:\"172.20.63.249\" > \nI0804 23:05:31.477956    4328 etcdserver.go:248] updating hosts: map[172.20.63.249:[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]]\nI0804 23:05:31.477969    4328 hosts.go:84] hosts update: primary=map[172.20.63.249:[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:[172.20.63.249 172.20.63.249]], final=map[172.20.63.249:[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]]\nI0804 23:05:31.478022    4328 hosts.go:181] skipping update of unchanged /etc/hosts\nI0804 23:05:31.478099    4328 commands.go:38] not refreshing commands - TTL not hit\nI0804 23:05:31.478111    4328 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-489af555f9-bbb74.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created\"\nI0804 23:05:32.051418    4328 controller.go:395] spec member_count:1 etcd_version:\"3.4.13\" \nI0804 23:05:32.051493    4328 controller.go:557] controller loop complete\nI0804 23:05:42.053382    4328 controller.go:189] starting controller iteration\nI0804 23:05:42.053419    4328 controller.go:266] Broadcasting leadership assertion with token \"3vB3uqeYR07YTrYh3fWwsg\"\nI0804 23:05:42.053715    4328 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.63.249:3997\" > leadership_token:\"3vB3uqeYR07YTrYh3fWwsg\" healthy:<id:\"etcd-events-a\" endpoints:\"172.20.63.249:3997\" > > \nI0804 23:05:42.053849    4328 controller.go:295] I am leader with token \"3vB3uqeYR07YTrYh3fWwsg\"\nI0804 23:05:42.054254    4328 controller.go:705] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002]\nI0804 23:05:42.065292    4328 controller.go:302] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002\"],\"ID\":\"2890899190027945854\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"172.20.63.249:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\" 
client_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:3995\" > etcd_state:<cluster:<cluster_token:\"nSTwYTz1pI5mSBHO3AhxXA\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0804 23:05:42.065411    4328 controller.go:303] etcd cluster members: map[2890899190027945854:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002\"],\"ID\":\"2890899190027945854\"}]\nI0804 23:05:42.065433    4328 controller.go:641] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io\" addresses:\"172.20.63.249\" > \nI0804 23:05:42.065617    4328 etcdserver.go:248] updating hosts: map[172.20.63.249:[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]]\nI0804 23:05:42.065632    4328 hosts.go:84] hosts update: primary=map[172.20.63.249:[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:[172.20.63.249 172.20.63.249]], final=map[172.20.63.249:[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]]\nI0804 23:05:42.065675    4328 hosts.go:181] skipping update of unchanged /etc/hosts\nI0804 23:05:42.065742    4328 commands.go:38] not refreshing commands - TTL not hit\nI0804 23:05:42.065757    4328 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-489af555f9-bbb74.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created\"\nI0804 23:05:42.638321    4328 controller.go:395] spec member_count:1 etcd_version:\"3.4.13\" \nI0804 23:05:42.638392    4328 controller.go:557] controller loop complete\nI0804 23:05:52.640147    4328 controller.go:189] starting controller iteration\nI0804 23:05:52.640189    4328 controller.go:266] Broadcasting leadership assertion with token \"3vB3uqeYR07YTrYh3fWwsg\"\nI0804 23:05:52.640601    4328 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.63.249:3997\" > leadership_token:\"3vB3uqeYR07YTrYh3fWwsg\" healthy:<id:\"etcd-events-a\" endpoints:\"172.20.63.249:3997\" > > \nI0804 23:05:52.640822    4328 controller.go:295] I am leader with token \"3vB3uqeYR07YTrYh3fWwsg\"\nI0804 23:05:52.641778    4328 controller.go:705] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002]\nI0804 23:05:52.667224    4328 controller.go:302] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002\"],\"ID\":\"2890899190027945854\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"172.20.63.249:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" 
peer_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:3995\" > etcd_state:<cluster:<cluster_token:\"nSTwYTz1pI5mSBHO3AhxXA\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0804 23:05:52.667501    4328 controller.go:303] etcd cluster members: map[2890899190027945854:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002\"],\"ID\":\"2890899190027945854\"}]\nI0804 23:05:52.667528    4328 controller.go:641] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io\" addresses:\"172.20.63.249\" > \nI0804 23:05:52.667782    4328 etcdserver.go:248] updating hosts: map[172.20.63.249:[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]]\nI0804 23:05:52.667817    4328 hosts.go:84] hosts update: primary=map[172.20.63.249:[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:[172.20.63.249 172.20.63.249]], final=map[172.20.63.249:[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]]\nI0804 23:05:52.667877    4328 hosts.go:181] skipping update of unchanged /etc/hosts\nI0804 23:05:52.668024    4328 commands.go:38] not refreshing commands - TTL not hit\nI0804 23:05:52.668077    4328 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-489af555f9-bbb74.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created\"\nI0804 23:05:53.237834    4328 controller.go:395] spec member_count:1 etcd_version:\"3.4.13\" \nI0804 23:05:53.237910    4328 controller.go:557] controller loop complete\nI0804 23:05:57.145353    4328 volumes.go:86] AWS API Request: ec2/DescribeVolumes\nI0804 23:05:57.267554    4328 volumes.go:86] AWS API Request: ec2/DescribeInstances\nI0804 23:05:57.340938    4328 hosts.go:84] hosts update: primary=map[172.20.63.249:[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:[172.20.63.249 172.20.63.249]], final=map[172.20.63.249:[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]]\nI0804 23:05:57.341019    4328 hosts.go:181] skipping update of unchanged /etc/hosts\nI0804 23:06:03.240152    4328 controller.go:189] starting controller iteration\nI0804 23:06:03.240190    4328 controller.go:266] Broadcasting leadership assertion with token \"3vB3uqeYR07YTrYh3fWwsg\"\nI0804 23:06:03.240406    4328 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.63.249:3997\" > leadership_token:\"3vB3uqeYR07YTrYh3fWwsg\" healthy:<id:\"etcd-events-a\" endpoints:\"172.20.63.249:3997\" > > \nI0804 23:06:03.240520    4328 controller.go:295] I am leader with token \"3vB3uqeYR07YTrYh3fWwsg\"\nI0804 23:06:03.240934    4328 controller.go:705] base client OK for 
etcd for client urls [https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002]\nI0804 23:06:03.254310    4328 controller.go:302] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002\"],\"ID\":\"2890899190027945854\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"172.20.63.249:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:3995\" > etcd_state:<cluster:<cluster_token:\"nSTwYTz1pI5mSBHO3AhxXA\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0804 23:06:03.254502    4328 controller.go:303] etcd cluster members: map[2890899190027945854:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002\"],\"ID\":\"2890899190027945854\"}]\nI0804 23:06:03.254550    4328 controller.go:641] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io\" addresses:\"172.20.63.249\" > \nI0804 23:06:03.254841    4328 etcdserver.go:248] updating hosts: map[172.20.63.249:[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]]\nI0804 23:06:03.254904    4328 hosts.go:84] hosts update: primary=map[172.20.63.249:[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:[172.20.63.249 172.20.63.249]], final=map[172.20.63.249:[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]]\nI0804 23:06:03.255102    4328 hosts.go:181] skipping update of unchanged /etc/hosts\nI0804 23:06:03.255194    4328 commands.go:38] not refreshing commands - TTL not hit\nI0804 23:06:03.255207    4328 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-489af555f9-bbb74.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created\"\nI0804 23:06:03.823790    4328 controller.go:395] spec member_count:1 etcd_version:\"3.4.13\" \nI0804 23:06:03.823868    4328 controller.go:557] controller loop complete\nI0804 23:06:13.825037    4328 controller.go:189] starting controller iteration\nI0804 23:06:13.825073    4328 controller.go:266] Broadcasting leadership assertion with token \"3vB3uqeYR07YTrYh3fWwsg\"\nI0804 23:06:13.825368    4328 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.63.249:3997\" > leadership_token:\"3vB3uqeYR07YTrYh3fWwsg\" healthy:<id:\"etcd-events-a\" endpoints:\"172.20.63.249:3997\" > > \nI0804 23:06:13.825494    4328 controller.go:295] I am leader with token \"3vB3uqeYR07YTrYh3fWwsg\"\nI0804 23:06:13.825821    4328 
controller.go:705] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002]\nI0804 23:06:13.836702    4328 controller.go:302] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002\"],\"ID\":\"2890899190027945854\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"172.20.63.249:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:3995\" > etcd_state:<cluster:<cluster_token:\"nSTwYTz1pI5mSBHO3AhxXA\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0804 23:06:13.836785    4328 controller.go:303] etcd cluster members: map[2890899190027945854:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002\"],\"ID\":\"2890899190027945854\"}]\nI0804 23:06:13.836802    4328 controller.go:641] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io\" addresses:\"172.20.63.249\" > \nI0804 23:06:13.836969    4328 etcdserver.go:248] updating hosts: map[172.20.63.249:[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]]\nI0804 23:06:13.836987    4328 hosts.go:84] hosts update: primary=map[172.20.63.249:[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:[172.20.63.249 172.20.63.249]], final=map[172.20.63.249:[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]]\nI0804 23:06:13.837032    4328 hosts.go:181] skipping update of unchanged /etc/hosts\nI0804 23:06:13.837097    4328 commands.go:38] not refreshing commands - TTL not hit\nI0804 23:06:13.837111    4328 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-489af555f9-bbb74.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created\"\nI0804 23:06:14.399811    4328 controller.go:395] spec member_count:1 etcd_version:\"3.4.13\" \nI0804 23:06:14.399958    4328 controller.go:557] controller loop complete\nI0804 23:06:24.401140    4328 controller.go:189] starting controller iteration\nI0804 23:06:24.401176    4328 controller.go:266] Broadcasting leadership assertion with token \"3vB3uqeYR07YTrYh3fWwsg\"\nI0804 23:06:24.401528    4328 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.63.249:3997\" > leadership_token:\"3vB3uqeYR07YTrYh3fWwsg\" healthy:<id:\"etcd-events-a\" endpoints:\"172.20.63.249:3997\" > > \nI0804 23:06:24.401636    4328 controller.go:295] I am leader with token 
\"3vB3uqeYR07YTrYh3fWwsg\"\nI0804 23:06:24.402294    4328 controller.go:705] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002]\nI0804 23:06:24.416410    4328 controller.go:302] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002\"],\"ID\":\"2890899190027945854\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"172.20.63.249:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:3995\" > etcd_state:<cluster:<cluster_token:\"nSTwYTz1pI5mSBHO3AhxXA\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0804 23:06:24.416489    4328 controller.go:303] etcd cluster members: map[2890899190027945854:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002\"],\"ID\":\"2890899190027945854\"}]\nI0804 23:06:24.416651    4328 controller.go:641] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io\" addresses:\"172.20.63.249\" > \nI0804 23:06:24.416949    4328 etcdserver.go:248] updating hosts: map[172.20.63.249:[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]]\nI0804 23:06:24.416964    4328 hosts.go:84] hosts update: primary=map[172.20.63.249:[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:[172.20.63.249 172.20.63.249]], final=map[172.20.63.249:[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]]\nI0804 23:06:24.417035    4328 hosts.go:181] skipping update of unchanged /etc/hosts\nI0804 23:06:24.417165    4328 commands.go:38] not refreshing commands - TTL not hit\nI0804 23:06:24.417180    4328 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-489af555f9-bbb74.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created\"\nI0804 23:06:24.979159    4328 controller.go:395] spec member_count:1 etcd_version:\"3.4.13\" \nI0804 23:06:24.979254    4328 controller.go:557] controller loop complete\nI0804 23:06:34.981143    4328 controller.go:189] starting controller iteration\nI0804 23:06:34.981183    4328 controller.go:266] Broadcasting leadership assertion with token \"3vB3uqeYR07YTrYh3fWwsg\"\nI0804 23:06:34.981542    4328 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.63.249:3997\" > leadership_token:\"3vB3uqeYR07YTrYh3fWwsg\" healthy:<id:\"etcd-events-a\" endpoints:\"172.20.63.249:3997\" > > \nI0804 23:06:34.981724    4328 
controller.go:295] I am leader with token \"3vB3uqeYR07YTrYh3fWwsg\"\nI0804 23:06:34.982216    4328 controller.go:705] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002]\nI0804 23:06:34.993038    4328 controller.go:302] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002\"],\"ID\":\"2890899190027945854\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"172.20.63.249:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:3995\" > etcd_state:<cluster:<cluster_token:\"nSTwYTz1pI5mSBHO3AhxXA\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0804 23:06:34.993204    4328 controller.go:303] etcd cluster members: map[2890899190027945854:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002\"],\"ID\":\"2890899190027945854\"}]\nI0804 23:06:34.993274    4328 controller.go:641] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io\" addresses:\"172.20.63.249\" > \nI0804 23:06:34.993468    4328 etcdserver.go:248] updating hosts: map[172.20.63.249:[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]]\nI0804 23:06:34.993482    4328 hosts.go:84] hosts update: primary=map[172.20.63.249:[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:[172.20.63.249 172.20.63.249]], final=map[172.20.63.249:[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]]\nI0804 23:06:34.993690    4328 hosts.go:181] skipping update of unchanged /etc/hosts\nI0804 23:06:34.993801    4328 commands.go:38] not refreshing commands - TTL not hit\nI0804 23:06:34.993816    4328 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-489af555f9-bbb74.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created\"\nI0804 23:06:35.552792    4328 controller.go:395] spec member_count:1 etcd_version:\"3.4.13\" \nI0804 23:06:35.552867    4328 controller.go:557] controller loop complete\nI0804 23:06:45.554413    4328 controller.go:189] starting controller iteration\nI0804 23:06:45.554453    4328 controller.go:266] Broadcasting leadership assertion with token \"3vB3uqeYR07YTrYh3fWwsg\"\nI0804 23:06:45.554726    4328 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.63.249:3997\" > leadership_token:\"3vB3uqeYR07YTrYh3fWwsg\" healthy:<id:\"etcd-events-a\" endpoints:\"172.20.63.249:3997\" > > 
\nI0804 23:06:45.554866    4328 controller.go:295] I am leader with token \"3vB3uqeYR07YTrYh3fWwsg\"\nI0804 23:06:45.555286    4328 controller.go:705] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002]\nI0804 23:06:45.566524    4328 controller.go:302] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002\"],\"ID\":\"2890899190027945854\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"172.20.63.249:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:3995\" > etcd_state:<cluster:<cluster_token:\"nSTwYTz1pI5mSBHO3AhxXA\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0804 23:06:45.566621    4328 controller.go:303] etcd cluster members: map[2890899190027945854:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002\"],\"ID\":\"2890899190027945854\"}]\nI0804 23:06:45.566640    4328 controller.go:641] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io\" addresses:\"172.20.63.249\" > \nI0804 23:06:45.566831    4328 etcdserver.go:248] updating hosts: map[172.20.63.249:[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]]\nI0804 23:06:45.566847    4328 hosts.go:84] hosts update: primary=map[172.20.63.249:[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:[172.20.63.249 172.20.63.249]], final=map[172.20.63.249:[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]]\nI0804 23:06:45.566898    4328 hosts.go:181] skipping update of unchanged /etc/hosts\nI0804 23:06:45.566961    4328 commands.go:38] not refreshing commands - TTL not hit\nI0804 23:06:45.566975    4328 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-489af555f9-bbb74.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created\"\nI0804 23:06:46.122926    4328 controller.go:395] spec member_count:1 etcd_version:\"3.4.13\" \nI0804 23:06:46.123018    4328 controller.go:557] controller loop complete\nI0804 23:06:56.124472    4328 controller.go:189] starting controller iteration\nI0804 23:06:56.124509    4328 controller.go:266] Broadcasting leadership assertion with token \"3vB3uqeYR07YTrYh3fWwsg\"\nI0804 23:06:56.124882    4328 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.63.249:3997\" > leadership_token:\"3vB3uqeYR07YTrYh3fWwsg\" healthy:<id:\"etcd-events-a\" 
endpoints:\"172.20.63.249:3997\" > > \nI0804 23:06:56.125052    4328 controller.go:295] I am leader with token \"3vB3uqeYR07YTrYh3fWwsg\"\nI0804 23:06:56.125555    4328 controller.go:705] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002]\nI0804 23:06:56.136526    4328 controller.go:302] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002\"],\"ID\":\"2890899190027945854\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"172.20.63.249:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:3995\" > etcd_state:<cluster:<cluster_token:\"nSTwYTz1pI5mSBHO3AhxXA\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0804 23:06:56.136718    4328 controller.go:303] etcd cluster members: map[2890899190027945854:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002\"],\"ID\":\"2890899190027945854\"}]\nI0804 23:06:56.136748    4328 controller.go:641] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io\" addresses:\"172.20.63.249\" > \nI0804 23:06:56.136943    4328 etcdserver.go:248] updating hosts: map[172.20.63.249:[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]]\nI0804 23:06:56.136967    4328 hosts.go:84] hosts update: primary=map[172.20.63.249:[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:[172.20.63.249 172.20.63.249]], final=map[172.20.63.249:[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]]\nI0804 23:06:56.137032    4328 hosts.go:181] skipping update of unchanged /etc/hosts\nI0804 23:06:56.137132    4328 commands.go:38] not refreshing commands - TTL not hit\nI0804 23:06:56.137147    4328 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-489af555f9-bbb74.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created\"\nI0804 23:06:56.707280    4328 controller.go:395] spec member_count:1 etcd_version:\"3.4.13\" \nI0804 23:06:56.707358    4328 controller.go:557] controller loop complete\nI0804 23:06:57.341242    4328 volumes.go:86] AWS API Request: ec2/DescribeVolumes\nI0804 23:06:57.466656    4328 volumes.go:86] AWS API Request: ec2/DescribeInstances\nI0804 23:06:57.528843    4328 hosts.go:84] hosts update: primary=map[172.20.63.249:[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]], 
fallbacks=map[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:[172.20.63.249 172.20.63.249]], final=map[172.20.63.249:[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]]\nI0804 23:06:57.529008    4328 hosts.go:181] skipping update of unchanged /etc/hosts\nI0804 23:07:06.708879    4328 controller.go:189] starting controller iteration\nI0804 23:07:06.708921    4328 controller.go:266] Broadcasting leadership assertion with token \"3vB3uqeYR07YTrYh3fWwsg\"\nI0804 23:07:06.709305    4328 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.63.249:3997\" > leadership_token:\"3vB3uqeYR07YTrYh3fWwsg\" healthy:<id:\"etcd-events-a\" endpoints:\"172.20.63.249:3997\" > > \nI0804 23:07:06.709514    4328 controller.go:295] I am leader with token \"3vB3uqeYR07YTrYh3fWwsg\"\nI0804 23:07:06.710603    4328 controller.go:705] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002]\nI0804 23:07:06.725821    4328 controller.go:302] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002\"],\"ID\":\"2890899190027945854\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"172.20.63.249:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:3995\" > etcd_state:<cluster:<cluster_token:\"nSTwYTz1pI5mSBHO3AhxXA\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0804 23:07:06.725899    4328 controller.go:303] etcd cluster members: map[2890899190027945854:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002\"],\"ID\":\"2890899190027945854\"}]\nI0804 23:07:06.725919    4328 controller.go:641] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io\" addresses:\"172.20.63.249\" > \nI0804 23:07:06.726061    4328 etcdserver.go:248] updating hosts: map[172.20.63.249:[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]]\nI0804 23:07:06.726074    4328 hosts.go:84] hosts update: primary=map[172.20.63.249:[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:[172.20.63.249 172.20.63.249]], final=map[172.20.63.249:[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]]\nI0804 23:07:06.726121    4328 hosts.go:181] skipping update of unchanged /etc/hosts\nI0804 23:07:06.726179    4328 commands.go:38] not refreshing commands - TTL not hit\nI0804 23:07:06.726192    
4328 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-489af555f9-bbb74.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created\"\nI0804 23:07:07.290160    4328 controller.go:395] spec member_count:1 etcd_version:\"3.4.13\" \nI0804 23:07:07.290248    4328 controller.go:557] controller loop complete\nI0804 23:07:17.291606    4328 controller.go:189] starting controller iteration\nI0804 23:07:17.291712    4328 controller.go:266] Broadcasting leadership assertion with token \"3vB3uqeYR07YTrYh3fWwsg\"\nI0804 23:07:17.291974    4328 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.63.249:3997\" > leadership_token:\"3vB3uqeYR07YTrYh3fWwsg\" healthy:<id:\"etcd-events-a\" endpoints:\"172.20.63.249:3997\" > > \nI0804 23:07:17.292155    4328 controller.go:295] I am leader with token \"3vB3uqeYR07YTrYh3fWwsg\"\nI0804 23:07:17.292569    4328 controller.go:705] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002]\nI0804 23:07:17.310271    4328 controller.go:302] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002\"],\"ID\":\"2890899190027945854\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"172.20.63.249:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:3995\" > etcd_state:<cluster:<cluster_token:\"nSTwYTz1pI5mSBHO3AhxXA\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0804 23:07:17.310369    4328 controller.go:303] etcd cluster members: map[2890899190027945854:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002\"],\"ID\":\"2890899190027945854\"}]\nI0804 23:07:17.310447    4328 controller.go:641] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io\" addresses:\"172.20.63.249\" > \nI0804 23:07:17.310668    4328 etcdserver.go:248] updating hosts: map[172.20.63.249:[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]]\nI0804 23:07:17.310684    4328 hosts.go:84] hosts update: primary=map[172.20.63.249:[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:[172.20.63.249 172.20.63.249]], final=map[172.20.63.249:[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]]\nI0804 23:07:17.310799    4328 hosts.go:181] skipping update of unchanged /etc/hosts\nI0804 23:07:17.310905    4328 commands.go:38] not refreshing commands - TTL not 
hit\nI0804 23:07:17.310939    4328 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-489af555f9-bbb74.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created\"\nI0804 23:07:17.877054    4328 controller.go:395] spec member_count:1 etcd_version:\"3.4.13\" \nI0804 23:07:17.877129    4328 controller.go:557] controller loop complete\nI0804 23:07:27.878789    4328 controller.go:189] starting controller iteration\nI0804 23:07:27.878835    4328 controller.go:266] Broadcasting leadership assertion with token \"3vB3uqeYR07YTrYh3fWwsg\"\nI0804 23:07:27.879093    4328 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.63.249:3997\" > leadership_token:\"3vB3uqeYR07YTrYh3fWwsg\" healthy:<id:\"etcd-events-a\" endpoints:\"172.20.63.249:3997\" > > \nI0804 23:07:27.879222    4328 controller.go:295] I am leader with token \"3vB3uqeYR07YTrYh3fWwsg\"\nI0804 23:07:27.880118    4328 controller.go:705] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002]\nI0804 23:07:27.895516    4328 controller.go:302] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002\"],\"ID\":\"2890899190027945854\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"172.20.63.249:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:3995\" > etcd_state:<cluster:<cluster_token:\"nSTwYTz1pI5mSBHO3AhxXA\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0804 23:07:27.895612    4328 controller.go:303] etcd cluster members: map[2890899190027945854:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002\"],\"ID\":\"2890899190027945854\"}]\nI0804 23:07:27.895636    4328 controller.go:641] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io\" addresses:\"172.20.63.249\" > \nI0804 23:07:27.895814    4328 etcdserver.go:248] updating hosts: map[172.20.63.249:[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]]\nI0804 23:07:27.895832    4328 hosts.go:84] hosts update: primary=map[172.20.63.249:[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:[172.20.63.249 172.20.63.249]], final=map[172.20.63.249:[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]]\nI0804 23:07:27.895888    4328 hosts.go:181] skipping update of unchanged /etc/hosts\nI0804 23:07:27.895957    4328 commands.go:38] not 
refreshing commands - TTL not hit\nI0804 23:07:27.895972    4328 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-489af555f9-bbb74.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created\"\nI0804 23:07:28.462978    4328 controller.go:395] spec member_count:1 etcd_version:\"3.4.13\" \nI0804 23:07:28.463060    4328 controller.go:557] controller loop complete\nI0804 23:07:38.465236    4328 controller.go:189] starting controller iteration\nI0804 23:07:38.465275    4328 controller.go:266] Broadcasting leadership assertion with token \"3vB3uqeYR07YTrYh3fWwsg\"\nI0804 23:07:38.465504    4328 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.63.249:3997\" > leadership_token:\"3vB3uqeYR07YTrYh3fWwsg\" healthy:<id:\"etcd-events-a\" endpoints:\"172.20.63.249:3997\" > > \nI0804 23:07:38.465625    4328 controller.go:295] I am leader with token \"3vB3uqeYR07YTrYh3fWwsg\"\nI0804 23:07:38.466013    4328 controller.go:705] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002]\nI0804 23:07:38.476946    4328 controller.go:302] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002\"],\"ID\":\"2890899190027945854\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"172.20.63.249:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:3995\" > etcd_state:<cluster:<cluster_token:\"nSTwYTz1pI5mSBHO3AhxXA\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0804 23:07:38.477029    4328 controller.go:303] etcd cluster members: map[2890899190027945854:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002\"],\"ID\":\"2890899190027945854\"}]\nI0804 23:07:38.477047    4328 controller.go:641] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io\" addresses:\"172.20.63.249\" > \nI0804 23:07:38.477362    4328 etcdserver.go:248] updating hosts: map[172.20.63.249:[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]]\nI0804 23:07:38.477379    4328 hosts.go:84] hosts update: primary=map[172.20.63.249:[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:[172.20.63.249 172.20.63.249]], final=map[172.20.63.249:[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]]\nI0804 23:07:38.477504    4328 hosts.go:181] skipping update of unchanged /etc/hosts\nI0804 23:07:38.477618   
 4328 commands.go:38] not refreshing commands - TTL not hit\nI0804 23:07:38.477767    4328 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-489af555f9-bbb74.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created\"\nI0804 23:07:39.055973    4328 controller.go:395] spec member_count:1 etcd_version:\"3.4.13\" \nI0804 23:07:39.056052    4328 controller.go:557] controller loop complete\nI0804 23:07:49.057207    4328 controller.go:189] starting controller iteration\nI0804 23:07:49.057248    4328 controller.go:266] Broadcasting leadership assertion with token \"3vB3uqeYR07YTrYh3fWwsg\"\nI0804 23:07:49.057448    4328 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.63.249:3997\" > leadership_token:\"3vB3uqeYR07YTrYh3fWwsg\" healthy:<id:\"etcd-events-a\" endpoints:\"172.20.63.249:3997\" > > \nI0804 23:07:49.057558    4328 controller.go:295] I am leader with token \"3vB3uqeYR07YTrYh3fWwsg\"\nI0804 23:07:49.058405    4328 controller.go:705] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002]\nI0804 23:07:49.077951    4328 controller.go:302] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002\"],\"ID\":\"2890899190027945854\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"172.20.63.249:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:3995\" > etcd_state:<cluster:<cluster_token:\"nSTwYTz1pI5mSBHO3AhxXA\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0804 23:07:49.078050    4328 controller.go:303] etcd cluster members: map[2890899190027945854:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002\"],\"ID\":\"2890899190027945854\"}]\nI0804 23:07:49.078071    4328 controller.go:641] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io\" addresses:\"172.20.63.249\" > \nI0804 23:07:49.078295    4328 etcdserver.go:248] updating hosts: map[172.20.63.249:[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]]\nI0804 23:07:49.078311    4328 hosts.go:84] hosts update: primary=map[172.20.63.249:[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:[172.20.63.249 172.20.63.249]], final=map[172.20.63.249:[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]]\nI0804 23:07:49.078365    4328 hosts.go:181] skipping update of unchanged 
/etc/hosts\nI0804 23:07:49.078445    4328 commands.go:38] not refreshing commands - TTL not hit\nI0804 23:07:49.078457    4328 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-489af555f9-bbb74.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created\"\nI0804 23:07:49.650742    4328 controller.go:395] spec member_count:1 etcd_version:\"3.4.13\" \nI0804 23:07:49.650923    4328 controller.go:557] controller loop complete\nI0804 23:07:57.529705    4328 volumes.go:86] AWS API Request: ec2/DescribeVolumes\nI0804 23:07:57.757810    4328 volumes.go:86] AWS API Request: ec2/DescribeInstances\nI0804 23:07:57.838706    4328 hosts.go:84] hosts update: primary=map[172.20.63.249:[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:[172.20.63.249 172.20.63.249]], final=map[172.20.63.249:[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]]\nI0804 23:07:57.838978    4328 hosts.go:181] skipping update of unchanged /etc/hosts\nI0804 23:07:59.652767    4328 controller.go:189] starting controller iteration\nI0804 23:07:59.652810    4328 controller.go:266] Broadcasting leadership assertion with token \"3vB3uqeYR07YTrYh3fWwsg\"\nI0804 23:07:59.653031    4328 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.63.249:3997\" > leadership_token:\"3vB3uqeYR07YTrYh3fWwsg\" healthy:<id:\"etcd-events-a\" endpoints:\"172.20.63.249:3997\" > > \nI0804 23:07:59.653149    4328 controller.go:295] I am leader with token \"3vB3uqeYR07YTrYh3fWwsg\"\nI0804 23:07:59.653569    4328 controller.go:705] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002]\nI0804 23:07:59.684587    4328 controller.go:302] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002\"],\"ID\":\"2890899190027945854\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"172.20.63.249:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:3995\" > etcd_state:<cluster:<cluster_token:\"nSTwYTz1pI5mSBHO3AhxXA\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0804 23:07:59.684878    4328 controller.go:303] etcd cluster members: map[2890899190027945854:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002\"],\"ID\":\"2890899190027945854\"}]\nI0804 23:07:59.685003    4328 controller.go:641] sending member map to all peers: members:<name:\"etcd-events-a\" 
dns:\"etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io\" addresses:\"172.20.63.249\" > \nI0804 23:07:59.685470    4328 etcdserver.go:248] updating hosts: map[172.20.63.249:[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]]\nI0804 23:07:59.685508    4328 hosts.go:84] hosts update: primary=map[172.20.63.249:[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:[172.20.63.249 172.20.63.249]], final=map[172.20.63.249:[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]]\nI0804 23:07:59.685672    4328 hosts.go:181] skipping update of unchanged /etc/hosts\nI0804 23:07:59.685866    4328 commands.go:38] not refreshing commands - TTL not hit\nI0804 23:07:59.685899    4328 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-489af555f9-bbb74.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created\"\nI0804 23:08:00.250631    4328 controller.go:395] spec member_count:1 etcd_version:\"3.4.13\" \nI0804 23:08:00.250740    4328 controller.go:557] controller loop complete\nI0804 23:08:10.253681    4328 controller.go:189] starting controller iteration\nI0804 23:08:10.253717    4328 controller.go:266] Broadcasting leadership assertion with token \"3vB3uqeYR07YTrYh3fWwsg\"\nI0804 23:08:10.253956    4328 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.63.249:3997\" > leadership_token:\"3vB3uqeYR07YTrYh3fWwsg\" healthy:<id:\"etcd-events-a\" endpoints:\"172.20.63.249:3997\" > > \nI0804 23:08:10.254104    4328 controller.go:295] I am leader with token \"3vB3uqeYR07YTrYh3fWwsg\"\nI0804 23:08:10.254482    4328 controller.go:705] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002]\nI0804 23:08:10.284975    4328 controller.go:302] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002\"],\"ID\":\"2890899190027945854\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"172.20.63.249:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:3995\" > etcd_state:<cluster:<cluster_token:\"nSTwYTz1pI5mSBHO3AhxXA\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0804 23:08:10.285089    4328 controller.go:303] etcd cluster members: map[2890899190027945854:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002\"],\"ID\":\"2890899190027945854\"}]\nI0804 23:08:10.285110    4328 controller.go:641] sending member map to all peers: 
members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io\" addresses:\"172.20.63.249\" > \nI0804 23:08:10.285341    4328 etcdserver.go:248] updating hosts: map[172.20.63.249:[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]]\nI0804 23:08:10.285356    4328 hosts.go:84] hosts update: primary=map[172.20.63.249:[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:[172.20.63.249 172.20.63.249]], final=map[172.20.63.249:[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]]\nI0804 23:08:10.285415    4328 hosts.go:181] skipping update of unchanged /etc/hosts\nI0804 23:08:10.285495    4328 commands.go:38] not refreshing commands - TTL not hit\nI0804 23:08:10.285506    4328 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-489af555f9-bbb74.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created\"\nI0804 23:08:10.863426    4328 controller.go:395] spec member_count:1 etcd_version:\"3.4.13\" \nI0804 23:08:10.863501    4328 controller.go:557] controller loop complete\nI0804 23:08:20.865336    4328 controller.go:189] starting controller iteration\nI0804 23:08:20.865372    4328 controller.go:266] Broadcasting leadership assertion with token \"3vB3uqeYR07YTrYh3fWwsg\"\nI0804 23:08:20.865726    4328 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.63.249:3997\" > leadership_token:\"3vB3uqeYR07YTrYh3fWwsg\" healthy:<id:\"etcd-events-a\" endpoints:\"172.20.63.249:3997\" > > \nI0804 23:08:20.865932    4328 controller.go:295] I am leader with token \"3vB3uqeYR07YTrYh3fWwsg\"\nI0804 23:08:20.866557    4328 controller.go:705] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002]\nI0804 23:08:20.883450    4328 controller.go:302] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002\"],\"ID\":\"2890899190027945854\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"172.20.63.249:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:3995\" > etcd_state:<cluster:<cluster_token:\"nSTwYTz1pI5mSBHO3AhxXA\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0804 23:08:20.883551    4328 controller.go:303] etcd cluster members: map[2890899190027945854:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002\"],\"ID\":\"2890899190027945854\"}]\nI0804 23:08:20.883570    4328 controller.go:641] 
sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io\" addresses:\"172.20.63.249\" > \nI0804 23:08:20.883777    4328 etcdserver.go:248] updating hosts: map[172.20.63.249:[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]]\nI0804 23:08:20.883790    4328 hosts.go:84] hosts update: primary=map[172.20.63.249:[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:[172.20.63.249 172.20.63.249]], final=map[172.20.63.249:[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]]\nI0804 23:08:20.883842    4328 hosts.go:181] skipping update of unchanged /etc/hosts\nI0804 23:08:20.883918    4328 commands.go:38] not refreshing commands - TTL not hit\nI0804 23:08:20.883930    4328 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-489af555f9-bbb74.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created\"\nI0804 23:08:21.453050    4328 controller.go:395] spec member_count:1 etcd_version:\"3.4.13\" \nI0804 23:08:21.453134    4328 controller.go:557] controller loop complete\nI0804 23:08:31.454324    4328 controller.go:189] starting controller iteration\nI0804 23:08:31.454427    4328 controller.go:266] Broadcasting leadership assertion with token \"3vB3uqeYR07YTrYh3fWwsg\"\nI0804 23:08:31.454757    4328 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.63.249:3997\" > leadership_token:\"3vB3uqeYR07YTrYh3fWwsg\" healthy:<id:\"etcd-events-a\" endpoints:\"172.20.63.249:3997\" > > \nI0804 23:08:31.455037    4328 controller.go:295] I am leader with token \"3vB3uqeYR07YTrYh3fWwsg\"\nI0804 23:08:31.455524    4328 controller.go:705] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002]\nI0804 23:08:31.467794    4328 controller.go:302] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002\"],\"ID\":\"2890899190027945854\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"172.20.63.249:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:3995\" > etcd_state:<cluster:<cluster_token:\"nSTwYTz1pI5mSBHO3AhxXA\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0804 23:08:31.467902    4328 controller.go:303] etcd cluster members: map[2890899190027945854:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002\"],\"ID\":\"2890899190027945854\"}]\nI0804 23:08:31.467922  
  4328 controller.go:641] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io\" addresses:\"172.20.63.249\" > \nI0804 23:08:31.468132    4328 etcdserver.go:248] updating hosts: map[172.20.63.249:[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]]\nI0804 23:08:31.468148    4328 hosts.go:84] hosts update: primary=map[172.20.63.249:[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:[172.20.63.249 172.20.63.249]], final=map[172.20.63.249:[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]]\nI0804 23:08:31.468202    4328 hosts.go:181] skipping update of unchanged /etc/hosts\nI0804 23:08:31.468277    4328 commands.go:38] not refreshing commands - TTL not hit\nI0804 23:08:31.468288    4328 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-489af555f9-bbb74.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created\"\nI0804 23:08:32.039546    4328 controller.go:395] spec member_count:1 etcd_version:\"3.4.13\" \nI0804 23:08:32.039682    4328 controller.go:557] controller loop complete\nI0804 23:08:42.041717    4328 controller.go:189] starting controller iteration\nI0804 23:08:42.041753    4328 controller.go:266] Broadcasting leadership assertion with token \"3vB3uqeYR07YTrYh3fWwsg\"\nI0804 23:08:42.042205    4328 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.63.249:3997\" > leadership_token:\"3vB3uqeYR07YTrYh3fWwsg\" healthy:<id:\"etcd-events-a\" endpoints:\"172.20.63.249:3997\" > > \nI0804 23:08:42.042544    4328 controller.go:295] I am leader with token \"3vB3uqeYR07YTrYh3fWwsg\"\nI0804 23:08:42.043318    4328 controller.go:705] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002]\nI0804 23:08:42.064026    4328 controller.go:302] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002\"],\"ID\":\"2890899190027945854\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"172.20.63.249:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:3995\" > etcd_state:<cluster:<cluster_token:\"nSTwYTz1pI5mSBHO3AhxXA\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0804 23:08:42.064111    4328 controller.go:303] etcd cluster members: 
map[2890899190027945854:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002\"],\"ID\":\"2890899190027945854\"}]\nI0804 23:08:42.064337    4328 controller.go:641] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io\" addresses:\"172.20.63.249\" > \nI0804 23:08:42.064629    4328 etcdserver.go:248] updating hosts: map[172.20.63.249:[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]]\nI0804 23:08:42.064646    4328 hosts.go:84] hosts update: primary=map[172.20.63.249:[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:[172.20.63.249 172.20.63.249]], final=map[172.20.63.249:[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]]\nI0804 23:08:42.064712    4328 hosts.go:181] skipping update of unchanged /etc/hosts\nI0804 23:08:42.064852    4328 commands.go:38] not refreshing commands - TTL not hit\nI0804 23:08:42.064867    4328 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-489af555f9-bbb74.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created\"\nI0804 23:08:42.631285    4328 controller.go:395] spec member_count:1 etcd_version:\"3.4.13\" \nI0804 23:08:42.631374    4328 controller.go:557] controller loop complete\nI0804 23:08:52.634041    4328 controller.go:189] starting controller iteration\nI0804 23:08:52.634081    4328 controller.go:266] Broadcasting leadership assertion with token \"3vB3uqeYR07YTrYh3fWwsg\"\nI0804 23:08:52.634272    4328 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.63.249:3997\" > leadership_token:\"3vB3uqeYR07YTrYh3fWwsg\" healthy:<id:\"etcd-events-a\" endpoints:\"172.20.63.249:3997\" > > \nI0804 23:08:52.634377    4328 controller.go:295] I am leader with token \"3vB3uqeYR07YTrYh3fWwsg\"\nI0804 23:08:52.634676    4328 controller.go:705] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002]\nI0804 23:08:52.645846    4328 controller.go:302] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002\"],\"ID\":\"2890899190027945854\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"172.20.63.249:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:3995\" > etcd_state:<cluster:<cluster_token:\"nSTwYTz1pI5mSBHO3AhxXA\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0804 23:08:52.645944    4328 
controller.go:303] etcd cluster members: map[2890899190027945854:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002\"],\"ID\":\"2890899190027945854\"}]\nI0804 23:08:52.645962    4328 controller.go:641] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io\" addresses:\"172.20.63.249\" > \nI0804 23:08:52.646340    4328 etcdserver.go:248] updating hosts: map[172.20.63.249:[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]]\nI0804 23:08:52.646421    4328 hosts.go:84] hosts update: primary=map[172.20.63.249:[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:[172.20.63.249 172.20.63.249]], final=map[172.20.63.249:[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]]\nI0804 23:08:52.646516    4328 hosts.go:181] skipping update of unchanged /etc/hosts\nI0804 23:08:52.646634    4328 commands.go:38] not refreshing commands - TTL not hit\nI0804 23:08:52.646648    4328 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-489af555f9-bbb74.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created\"\nI0804 23:08:53.212460    4328 controller.go:395] spec member_count:1 etcd_version:\"3.4.13\" \nI0804 23:08:53.212536    4328 controller.go:557] controller loop complete\nI0804 23:08:57.840052    4328 volumes.go:86] AWS API Request: ec2/DescribeVolumes\nI0804 23:08:57.953405    4328 volumes.go:86] AWS API Request: ec2/DescribeInstances\nI0804 23:08:57.992445    4328 hosts.go:84] hosts update: primary=map[172.20.63.249:[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:[172.20.63.249 172.20.63.249]], final=map[172.20.63.249:[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]]\nI0804 23:08:57.992553    4328 hosts.go:181] skipping update of unchanged /etc/hosts\nI0804 23:09:03.214238    4328 controller.go:189] starting controller iteration\nI0804 23:09:03.214278    4328 controller.go:266] Broadcasting leadership assertion with token \"3vB3uqeYR07YTrYh3fWwsg\"\nI0804 23:09:03.214507    4328 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.63.249:3997\" > leadership_token:\"3vB3uqeYR07YTrYh3fWwsg\" healthy:<id:\"etcd-events-a\" endpoints:\"172.20.63.249:3997\" > > \nI0804 23:09:03.214648    4328 controller.go:295] I am leader with token \"3vB3uqeYR07YTrYh3fWwsg\"\nI0804 23:09:03.215105    4328 controller.go:705] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002]\nI0804 23:09:03.236846    4328 controller.go:302] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002\"],\"ID\":\"2890899190027945854\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"172.20.63.249:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\" 
client_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:3995\" > etcd_state:<cluster:<cluster_token:\"nSTwYTz1pI5mSBHO3AhxXA\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0804 23:09:03.236947    4328 controller.go:303] etcd cluster members: map[2890899190027945854:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002\"],\"ID\":\"2890899190027945854\"}]\nI0804 23:09:03.236962    4328 controller.go:641] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io\" addresses:\"172.20.63.249\" > \nI0804 23:09:03.237131    4328 etcdserver.go:248] updating hosts: map[172.20.63.249:[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]]\nI0804 23:09:03.237141    4328 hosts.go:84] hosts update: primary=map[172.20.63.249:[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:[172.20.63.249 172.20.63.249]], final=map[172.20.63.249:[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]]\nI0804 23:09:03.237184    4328 hosts.go:181] skipping update of unchanged /etc/hosts\nI0804 23:09:03.237241    4328 commands.go:38] not refreshing commands - TTL not hit\nI0804 23:09:03.237250    4328 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-489af555f9-bbb74.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created\"\nI0804 23:09:03.804913    4328 controller.go:395] spec member_count:1 etcd_version:\"3.4.13\" \nI0804 23:09:03.805039    4328 controller.go:557] controller loop complete\nI0804 23:09:13.807515    4328 controller.go:189] starting controller iteration\nI0804 23:09:13.807607    4328 controller.go:266] Broadcasting leadership assertion with token \"3vB3uqeYR07YTrYh3fWwsg\"\nI0804 23:09:13.807959    4328 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.63.249:3997\" > leadership_token:\"3vB3uqeYR07YTrYh3fWwsg\" healthy:<id:\"etcd-events-a\" endpoints:\"172.20.63.249:3997\" > > \nI0804 23:09:13.808101    4328 controller.go:295] I am leader with token \"3vB3uqeYR07YTrYh3fWwsg\"\nI0804 23:09:13.808612    4328 controller.go:705] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002]\nI0804 23:09:13.832531    4328 controller.go:302] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002\"],\"ID\":\"2890899190027945854\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"172.20.63.249:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" 
peer_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:3995\" > etcd_state:<cluster:<cluster_token:\"nSTwYTz1pI5mSBHO3AhxXA\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0804 23:09:13.832895    4328 controller.go:303] etcd cluster members: map[2890899190027945854:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002\"],\"ID\":\"2890899190027945854\"}]\nI0804 23:09:13.832990    4328 controller.go:641] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io\" addresses:\"172.20.63.249\" > \nI0804 23:09:13.833408    4328 etcdserver.go:248] updating hosts: map[172.20.63.249:[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]]\nI0804 23:09:13.833426    4328 hosts.go:84] hosts update: primary=map[172.20.63.249:[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:[172.20.63.249 172.20.63.249]], final=map[172.20.63.249:[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]]\nI0804 23:09:13.833561    4328 hosts.go:181] skipping update of unchanged /etc/hosts\nI0804 23:09:13.834950    4328 commands.go:38] not refreshing commands - TTL not hit\nI0804 23:09:13.834966    4328 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-489af555f9-bbb74.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created\"\nI0804 23:09:14.395606    4328 controller.go:395] spec member_count:1 etcd_version:\"3.4.13\" \nI0804 23:09:14.395745    4328 controller.go:557] controller loop complete\nI0804 23:09:24.397374    4328 controller.go:189] starting controller iteration\nI0804 23:09:24.397410    4328 controller.go:266] Broadcasting leadership assertion with token \"3vB3uqeYR07YTrYh3fWwsg\"\nI0804 23:09:24.397647    4328 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.63.249:3997\" > leadership_token:\"3vB3uqeYR07YTrYh3fWwsg\" healthy:<id:\"etcd-events-a\" endpoints:\"172.20.63.249:3997\" > > \nI0804 23:09:24.397778    4328 controller.go:295] I am leader with token \"3vB3uqeYR07YTrYh3fWwsg\"\nI0804 23:09:24.398149    4328 controller.go:705] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002]\nI0804 23:09:24.417292    4328 controller.go:302] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002\"],\"ID\":\"2890899190027945854\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"172.20.63.249:3997\" }, info=cluster_name:\"etcd-events\" 
node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:3995\" > etcd_state:<cluster:<cluster_token:\"nSTwYTz1pI5mSBHO3AhxXA\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0804 23:09:24.417394    4328 controller.go:303] etcd cluster members: map[2890899190027945854:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4002\"],\"ID\":\"2890899190027945854\"}]\nI0804 23:09:24.417413    4328 controller.go:641] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io\" addresses:\"172.20.63.249\" > \nI0804 23:09:24.418085    4328 etcdserver.go:248] updating hosts: map[172.20.63.249:[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]]\nI0804 23:09:24.418112    4328 hosts.go:84] hosts update: primary=map[172.20.63.249:[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:[172.20.63.249 172.20.63.249]], final=map[172.20.63.249:[etcd-events-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]]\nI0804 23:09:24.418163    4328 hosts.go:181] skipping update of unchanged /etc/hosts\nI0804 23:09:24.418253    4328 commands.go:38] not refreshing commands - TTL not hit\nI0804 23:09:24.418367    4328 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-489af555f9-bbb74.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created\"\nI0804 23:09:24.990210    4328 controller.go:395] spec member_count:1 etcd_version:\"3.4.13\" \nI0804 23:09:24.990286    4328 controller.go:557] controller loop complete\n==== END logs for container etcd-manager of pod kube-system/etcd-manager-events-ip-172-20-63-249.eu-west-2.compute.internal ====\n==== START logs for container etcd-manager of pod kube-system/etcd-manager-main-ip-172-20-63-249.eu-west-2.compute.internal ====\netcd-manager\nI0804 23:02:52.783462    4200 volumes.go:86] AWS API Request: ec2metadata/GetToken\nI0804 23:02:52.784043    4200 volumes.go:86] AWS API Request: ec2metadata/GetDynamicData\nI0804 23:02:52.784657    4200 volumes.go:86] AWS API Request: ec2metadata/GetMetadata\nI0804 23:02:52.785149    4200 volumes.go:86] AWS API Request: ec2metadata/GetMetadata\nI0804 23:02:52.785639    4200 volumes.go:86] AWS API Request: ec2metadata/GetMetadata\nI0804 23:02:52.786183    4200 main.go:305] Mounting available etcd volumes matching tags [k8s.io/etcd/main k8s.io/role/master=1 kubernetes.io/cluster/e2e-489af555f9-bbb74.test-cncf-aws.k8s.io=owned]; nameTag=k8s.io/etcd/main\nI0804 23:02:52.787686    4200 volumes.go:86] AWS API Request: ec2/DescribeVolumes\nI0804 23:02:52.904892    4200 mounter.go:304] Trying to mount master volume: \"vol-0d180d7f5abe9b153\"\nI0804 23:02:52.904917    4200 volumes.go:331] Trying to attach volume 
\"vol-0d180d7f5abe9b153\" at \"/dev/xvdu\"\nI0804 23:02:52.905147    4200 volumes.go:86] AWS API Request: ec2/AttachVolume\nI0804 23:02:53.312130    4200 volumes.go:349] AttachVolume request returned {\n  AttachTime: 2021-08-04 23:02:53.302 +0000 UTC,\n  Device: \"/dev/xvdu\",\n  InstanceId: \"i-0d3e140994a889fb4\",\n  State: \"attaching\",\n  VolumeId: \"vol-0d180d7f5abe9b153\"\n}\nI0804 23:02:53.312298    4200 volumes.go:86] AWS API Request: ec2/DescribeVolumes\nI0804 23:02:53.421181    4200 mounter.go:318] Currently attached volumes: [0xc000526a80]\nI0804 23:02:53.421207    4200 mounter.go:72] Master volume \"vol-0d180d7f5abe9b153\" is attached at \"/dev/xvdu\"\nI0804 23:02:53.421886    4200 mounter.go:86] Doing safe-format-and-mount of /dev/xvdu to /mnt/master-vol-0d180d7f5abe9b153\nI0804 23:02:53.421905    4200 volumes.go:234] volume vol-0d180d7f5abe9b153 not mounted at /rootfs/dev/xvdu\nI0804 23:02:53.422004    4200 volumes.go:263] nvme path not found \"/rootfs/dev/disk/by-id/nvme-Amazon_Elastic_Block_Store_vol0d180d7f5abe9b153\"\nI0804 23:02:53.422026    4200 volumes.go:251] volume vol-0d180d7f5abe9b153 not mounted at nvme-Amazon_Elastic_Block_Store_vol0d180d7f5abe9b153\nI0804 23:02:53.422031    4200 mounter.go:121] Waiting for volume \"vol-0d180d7f5abe9b153\" to be mounted\nI0804 23:02:54.422133    4200 volumes.go:234] volume vol-0d180d7f5abe9b153 not mounted at /rootfs/dev/xvdu\nI0804 23:02:54.422187    4200 volumes.go:248] found nvme volume \"nvme-Amazon_Elastic_Block_Store_vol0d180d7f5abe9b153\" at \"/dev/nvme1n1\"\nI0804 23:02:54.422227    4200 mounter.go:125] Found volume \"vol-0d180d7f5abe9b153\" mounted at device \"/dev/nvme1n1\"\nI0804 23:02:54.423214    4200 mounter.go:171] Creating mount directory \"/rootfs/mnt/master-vol-0d180d7f5abe9b153\"\nI0804 23:02:54.423282    4200 mounter.go:176] Mounting device \"/dev/nvme1n1\" on \"/mnt/master-vol-0d180d7f5abe9b153\"\nI0804 23:02:54.423296    4200 mount_linux.go:446] Attempting to determine if disk \"/dev/nvme1n1\" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/nvme1n1])\nI0804 23:02:54.423320    4200 nsenter.go:132] Running nsenter command: nsenter [--mount=/rootfs/proc/1/ns/mnt -- blkid -p -s TYPE -s PTTYPE -o export /dev/nvme1n1]\nI0804 23:02:54.442911    4200 mount_linux.go:449] Output: \"\"\nI0804 23:02:54.442936    4200 mount_linux.go:408] Disk \"/dev/nvme1n1\" appears to be unformatted, attempting to format as type: \"ext4\" with options: [-F -m0 /dev/nvme1n1]\nI0804 23:02:54.442972    4200 nsenter.go:132] Running nsenter command: nsenter [--mount=/rootfs/proc/1/ns/mnt -- mkfs.ext4 -F -m0 /dev/nvme1n1]\nI0804 23:02:55.028278    4200 mount_linux.go:418] Disk successfully formatted (mkfs): ext4 - /dev/nvme1n1 /mnt/master-vol-0d180d7f5abe9b153\nI0804 23:02:55.028300    4200 mount_linux.go:436] Attempting to mount disk /dev/nvme1n1 in ext4 format at /mnt/master-vol-0d180d7f5abe9b153\nI0804 23:02:55.028314    4200 nsenter.go:80] nsenter mount /dev/nvme1n1 /mnt/master-vol-0d180d7f5abe9b153 ext4 [defaults]\nI0804 23:02:55.028345    4200 nsenter.go:132] Running nsenter command: nsenter [--mount=/rootfs/proc/1/ns/mnt -- /usr/bin/systemd-run --description=Kubernetes transient mount for /mnt/master-vol-0d180d7f5abe9b153 --scope -- /bin/mount -t ext4 -o defaults /dev/nvme1n1 /mnt/master-vol-0d180d7f5abe9b153]\nI0804 23:02:55.069211    4200 nsenter.go:84] Output of mounting /dev/nvme1n1 to /mnt/master-vol-0d180d7f5abe9b153: Running scope as unit: run-r620146b1485945828f405fa0aecb24ea.scope\nI0804 
23:02:55.069239    4200 mount_linux.go:446] Attempting to determine if disk \"/dev/nvme1n1\" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/nvme1n1])\nI0804 23:02:55.069260    4200 nsenter.go:132] Running nsenter command: nsenter [--mount=/rootfs/proc/1/ns/mnt -- blkid -p -s TYPE -s PTTYPE -o export /dev/nvme1n1]\nI0804 23:02:55.086578    4200 mount_linux.go:449] Output: \"DEVNAME=/dev/nvme1n1\\nTYPE=ext4\\n\"\nI0804 23:02:55.086605    4200 resizefs_linux.go:53] ResizeFS.Resize - Expanding mounted volume /dev/nvme1n1\nI0804 23:02:55.086616    4200 nsenter.go:132] Running nsenter command: nsenter [--mount=/rootfs/proc/1/ns/mnt -- resize2fs /dev/nvme1n1]\nI0804 23:02:55.101177    4200 resizefs_linux.go:68] Device /dev/nvme1n1 resized successfully\nI0804 23:02:55.110189    4200 mount_linux.go:206] Detected OS with systemd\nI0804 23:02:55.111071    4200 mounter.go:262] device \"/dev/nvme1n1\" did not evaluate as a symlink: lstat /dev/nvme1n1: no such file or directory\nI0804 23:02:55.111098    4200 mounter.go:262] device \"/dev/nvme1n1\" did not evaluate as a symlink: lstat /dev/nvme1n1: no such file or directory\nI0804 23:02:55.111104    4200 mounter.go:242] matched device \"/dev/nvme1n1\" and \"/dev/nvme1n1\" via '\\x00'\nI0804 23:02:55.111113    4200 mounter.go:94] mounted master volume \"vol-0d180d7f5abe9b153\" on /mnt/master-vol-0d180d7f5abe9b153\nI0804 23:02:55.111122    4200 main.go:320] discovered IP address: 172.20.63.249\nI0804 23:02:55.111126    4200 main.go:325] Setting data dir to /rootfs/mnt/master-vol-0d180d7f5abe9b153\nI0804 23:02:55.393486    4200 certs.go:183] generating certificate for \"etcd-manager-server-etcd-a\"\nI0804 23:02:55.604015    4200 certs.go:183] generating certificate for \"etcd-manager-client-etcd-a\"\nI0804 23:02:55.607224    4200 server.go:87] starting GRPC server using TLS, ServerName=\"etcd-manager-server-etcd-a\"\nI0804 23:02:55.607721    4200 main.go:474] peerClientIPs: [172.20.63.249]\nI0804 23:02:55.661063    4200 certs.go:183] generating certificate for \"etcd-manager-etcd-a\"\nI0804 23:02:55.663642    4200 server.go:105] GRPC server listening on \"172.20.63.249:3996\"\nI0804 23:02:55.664050    4200 volumes.go:86] AWS API Request: ec2/DescribeVolumes\nI0804 23:02:55.800343    4200 volumes.go:86] AWS API Request: ec2/DescribeInstances\nI0804 23:02:55.840960    4200 peers.go:115] found new candidate peer from discovery: etcd-a [{172.20.63.249 0} {172.20.63.249 0}]\nI0804 23:02:55.841040    4200 hosts.go:84] hosts update: primary=map[], fallbacks=map[etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:[172.20.63.249 172.20.63.249]], final=map[172.20.63.249:[etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]]\nI0804 23:02:55.841103    4200 peers.go:295] connecting to peer \"etcd-a\" with TLS policy, servername=\"etcd-manager-server-etcd-a\"\nI0804 23:02:57.664816    4200 controller.go:189] starting controller iteration\nI0804 23:02:57.665210    4200 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-a\" endpoints:\"172.20.63.249:3996\" > leadership_token:\"YBoi07m4jsmpRzCrHBu26A\" healthy:<id:\"etcd-a\" endpoints:\"172.20.63.249:3996\" > > \nI0804 23:02:57.665371    4200 commands.go:41] refreshing commands\nI0804 23:02:57.665457    4200 s3context.go:334] product_uuid is \"ec26f7f8-a1b6-ff3f-7647-c086e89ccefd\", assuming running on EC2\nI0804 23:02:57.666719    4200 s3context.go:166] got region from metadata: \"eu-west-2\"\nI0804 
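The repeated "hosts update ... skipping update of unchanged /etc/hosts" lines reflect an idempotent writer: render the desired ip-to-hostname mapping deterministically and rewrite the file only when the bytes differ. A minimal sketch, assuming a file wholly owned by the updater (the real etcd-manager code preserves the unmanaged lines of /etc/hosts rather than replacing the whole file):

    package main

    import (
    	"fmt"
    	"os"
    	"sort"
    	"strings"
    )

    // renderHosts produces a deterministic ip -> hostnames block, so an
    // unchanged mapping renders to byte-identical output.
    func renderHosts(entries map[string][]string) string {
    	ips := make([]string, 0, len(entries))
    	for ip := range entries {
    		ips = append(ips, ip)
    	}
    	sort.Strings(ips)
    	var b strings.Builder
    	for _, ip := range ips {
    		fmt.Fprintf(&b, "%s %s\n", ip, strings.Join(entries[ip], " "))
    	}
    	return b.String()
    }

    // updateHosts rewrites the file only when the rendered content differs,
    // which is the "skipping update of unchanged /etc/hosts" behaviour above.
    func updateHosts(path string, entries map[string][]string) error {
    	next := renderHosts(entries)
    	current, _ := os.ReadFile(path)
    	if string(current) == next {
    		return nil
    	}
    	return os.WriteFile(path, []byte(next), 0644)
    }

    func main() {
    	entries := map[string][]string{
    		"172.20.63.249": {"etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io"},
    	}
    	if err := updateHosts("/tmp/hosts-demo", entries); err != nil {
    		fmt.Println("update failed:", err)
    	}
    }
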
23:02:57.694263    4200 s3context.go:213] found bucket in region \"us-west-1\"\nI0804 23:02:58.338958    4200 vfs.go:120] listed commands in s3://k8s-kops-prow/e2e-489af555f9-bbb74.test-cncf-aws.k8s.io/backups/etcd/main/control: 0 commands\nI0804 23:02:58.338984    4200 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-489af555f9-bbb74.test-cncf-aws.k8s.io/backups/etcd/main/control/etcd-cluster-spec\"\nI0804 23:03:08.491303    4200 controller.go:189] starting controller iteration\nI0804 23:03:08.491343    4200 controller.go:266] Broadcasting leadership assertion with token \"YBoi07m4jsmpRzCrHBu26A\"\nI0804 23:03:08.491577    4200 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-a\" endpoints:\"172.20.63.249:3996\" > leadership_token:\"YBoi07m4jsmpRzCrHBu26A\" healthy:<id:\"etcd-a\" endpoints:\"172.20.63.249:3996\" > > \nI0804 23:03:08.491699    4200 controller.go:295] I am leader with token \"YBoi07m4jsmpRzCrHBu26A\"\nI0804 23:03:08.491946    4200 controller.go:302] etcd cluster state: etcdClusterState\n  members:\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-a\" endpoints:\"172.20.63.249:3996\" }, info=cluster_name:\"etcd\" node_configuration:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2380\" client_urls:\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:3994\" > }\nI0804 23:03:08.492014    4200 controller.go:303] etcd cluster members: map[]\nI0804 23:03:08.492023    4200 controller.go:641] sending member map to all peers: \nI0804 23:03:08.492214    4200 commands.go:38] not refreshing commands - TTL not hit\nI0804 23:03:08.492230    4200 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-489af555f9-bbb74.test-cncf-aws.k8s.io/backups/etcd/main/control/etcd-cluster-created\"\nI0804 23:03:09.055243    4200 controller.go:359] detected that there is no existing cluster\nI0804 23:03:09.055264    4200 commands.go:41] refreshing commands\nI0804 23:03:09.274413    4200 vfs.go:120] listed commands in s3://k8s-kops-prow/e2e-489af555f9-bbb74.test-cncf-aws.k8s.io/backups/etcd/main/control: 0 commands\nI0804 23:03:09.274443    4200 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-489af555f9-bbb74.test-cncf-aws.k8s.io/backups/etcd/main/control/etcd-cluster-spec\"\nI0804 23:03:09.421325    4200 controller.go:641] sending member map to all peers: members:<name:\"etcd-a\" dns:\"etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io\" addresses:\"172.20.63.249\" > \nI0804 23:03:09.421594    4200 etcdserver.go:248] updating hosts: map[172.20.63.249:[etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]]\nI0804 23:03:09.421614    4200 hosts.go:84] hosts update: primary=map[172.20.63.249:[etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]], fallbacks=map[etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:[172.20.63.249 172.20.63.249]], final=map[172.20.63.249:[etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]]\nI0804 23:03:09.421760    4200 hosts.go:181] skipping update of unchanged /etc/hosts\nI0804 23:03:09.421862    4200 newcluster.go:136] starting new etcd cluster with [etcdClusterPeerInfo{peer=peer{id:\"etcd-a\" endpoints:\"172.20.63.249:3996\" }, info=cluster_name:\"etcd\" node_configuration:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2380\" client_urls:\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4001\" 
quarantined_client_urls:\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:3994\" > }]\nI0804 23:03:09.422170    4200 newcluster.go:153] JoinClusterResponse: \nI0804 23:03:09.422836    4200 etcdserver.go:556] starting etcd with state new_cluster:true cluster:<cluster_token:\"TA6kpzqEegCfGTze0ClELA\" nodes:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2380\" client_urls:\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:3994\" tls_enabled:true > > etcd_version:\"3.4.13\" quarantined:true \nI0804 23:03:09.422874    4200 etcdserver.go:565] starting etcd with datadir /rootfs/mnt/master-vol-0d180d7f5abe9b153/data/TA6kpzqEegCfGTze0ClELA\nI0804 23:03:09.423285    4200 pki.go:59] adding peerClientIPs [172.20.63.249]\nI0804 23:03:09.423306    4200 pki.go:67] generating peer keypair for etcd: {CommonName:etcd-a Organization:[] AltNames:{DNSNames:[etcd-a etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io] IPs:[172.20.63.249 127.0.0.1]} Usages:[2 1]}\nI0804 23:03:09.542180    4200 certs.go:183] generating certificate for \"etcd-a\"\nI0804 23:03:09.544076    4200 pki.go:110] building client-serving certificate: {CommonName:etcd-a Organization:[] AltNames:{DNSNames:[etcd-a etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io] IPs:[127.0.0.1]} Usages:[1 2]}\nI0804 23:03:09.611682    4200 certs.go:183] generating certificate for \"etcd-a\"\nI0804 23:03:09.750450    4200 certs.go:183] generating certificate for \"etcd-a\"\nI0804 23:03:09.752319    4200 etcdprocess.go:203] executing command /opt/etcd-v3.4.13-linux-amd64/etcd [/opt/etcd-v3.4.13-linux-amd64/etcd]\nI0804 23:03:09.752931    4200 newcluster.go:171] JoinClusterResponse: \nI0804 23:03:09.753131    4200 s3fs.go:199] Writing file \"s3://k8s-kops-prow/e2e-489af555f9-bbb74.test-cncf-aws.k8s.io/backups/etcd/main/control/etcd-cluster-spec\"\nI0804 23:03:09.753219    4200 s3context.go:241] Checking default bucket encryption for \"k8s-kops-prow\"\n2021-08-04 23:03:09.758993 I | pkg/flags: recognized and used environment variable ETCD_ADVERTISE_CLIENT_URLS=https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:3994\n2021-08-04 23:03:09.759023 I | pkg/flags: recognized and used environment variable ETCD_CERT_FILE=/rootfs/mnt/master-vol-0d180d7f5abe9b153/pki/TA6kpzqEegCfGTze0ClELA/clients/server.crt\n2021-08-04 23:03:09.759030 I | pkg/flags: recognized and used environment variable ETCD_CLIENT_CERT_AUTH=true\n2021-08-04 23:03:09.759039 I | pkg/flags: recognized and used environment variable ETCD_DATA_DIR=/rootfs/mnt/master-vol-0d180d7f5abe9b153/data/TA6kpzqEegCfGTze0ClELA\n2021-08-04 23:03:09.759052 I | pkg/flags: recognized and used environment variable ETCD_ENABLE_V2=false\n2021-08-04 23:03:09.759087 I | pkg/flags: recognized and used environment variable ETCD_INITIAL_ADVERTISE_PEER_URLS=https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2380\n2021-08-04 23:03:09.759093 I | pkg/flags: recognized and used environment variable ETCD_INITIAL_CLUSTER=etcd-a=https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2380\n2021-08-04 23:03:09.759097 I | pkg/flags: recognized and used environment variable ETCD_INITIAL_CLUSTER_STATE=new\n2021-08-04 23:03:09.759106 I | pkg/flags: recognized and used environment variable ETCD_INITIAL_CLUSTER_TOKEN=TA6kpzqEegCfGTze0ClELA\n2021-08-04 
23:03:09.759111 I | pkg/flags: recognized and used environment variable ETCD_KEY_FILE=/rootfs/mnt/master-vol-0d180d7f5abe9b153/pki/TA6kpzqEegCfGTze0ClELA/clients/server.key\n2021-08-04 23:03:09.759118 I | pkg/flags: recognized and used environment variable ETCD_LISTEN_CLIENT_URLS=https://0.0.0.0:3994\n2021-08-04 23:03:09.759125 I | pkg/flags: recognized and used environment variable ETCD_LISTEN_PEER_URLS=https://0.0.0.0:2380\n2021-08-04 23:03:09.759131 I | pkg/flags: recognized and used environment variable ETCD_LOG_OUTPUTS=stdout\n2021-08-04 23:03:09.759139 I | pkg/flags: recognized and used environment variable ETCD_LOGGER=zap\n2021-08-04 23:03:09.759179 I | pkg/flags: recognized and used environment variable ETCD_NAME=etcd-a\n2021-08-04 23:03:09.759187 I | pkg/flags: recognized and used environment variable ETCD_PEER_CERT_FILE=/rootfs/mnt/master-vol-0d180d7f5abe9b153/pki/TA6kpzqEegCfGTze0ClELA/peers/me.crt\n2021-08-04 23:03:09.759191 I | pkg/flags: recognized and used environment variable ETCD_PEER_CLIENT_CERT_AUTH=true\n2021-08-04 23:03:09.759198 I | pkg/flags: recognized and used environment variable ETCD_PEER_KEY_FILE=/rootfs/mnt/master-vol-0d180d7f5abe9b153/pki/TA6kpzqEegCfGTze0ClELA/peers/me.key\n2021-08-04 23:03:09.759203 I | pkg/flags: recognized and used environment variable ETCD_PEER_TRUSTED_CA_FILE=/rootfs/mnt/master-vol-0d180d7f5abe9b153/pki/TA6kpzqEegCfGTze0ClELA/peers/ca.crt\n2021-08-04 23:03:09.759216 I | pkg/flags: recognized and used environment variable ETCD_TRUSTED_CA_FILE=/rootfs/mnt/master-vol-0d180d7f5abe9b153/pki/TA6kpzqEegCfGTze0ClELA/clients/ca.crt\n2021-08-04 23:03:09.759227 W | pkg/flags: unrecognized environment variable ETCD_LISTEN_METRICS_URLS=\n{\"level\":\"info\",\"ts\":\"2021-08-04T23:03:09.760Z\",\"caller\":\"embed/etcd.go:117\",\"msg\":\"configuring peer listeners\",\"listen-peer-urls\":[\"https://0.0.0.0:2380\"]}\n{\"level\":\"info\",\"ts\":\"2021-08-04T23:03:09.760Z\",\"caller\":\"embed/etcd.go:468\",\"msg\":\"starting with peer TLS\",\"tls-info\":\"cert = /rootfs/mnt/master-vol-0d180d7f5abe9b153/pki/TA6kpzqEegCfGTze0ClELA/peers/me.crt, key = /rootfs/mnt/master-vol-0d180d7f5abe9b153/pki/TA6kpzqEegCfGTze0ClELA/peers/me.key, trusted-ca = /rootfs/mnt/master-vol-0d180d7f5abe9b153/pki/TA6kpzqEegCfGTze0ClELA/peers/ca.crt, client-cert-auth = true, crl-file = \",\"cipher-suites\":[]}\n{\"level\":\"info\",\"ts\":\"2021-08-04T23:03:09.760Z\",\"caller\":\"embed/etcd.go:127\",\"msg\":\"configuring client listeners\",\"listen-client-urls\":[\"https://0.0.0.0:3994\"]}\n{\"level\":\"info\",\"ts\":\"2021-08-04T23:03:09.760Z\",\"caller\":\"embed/etcd.go:302\",\"msg\":\"starting an etcd 
server\",\"etcd-version\":\"3.4.13\",\"git-sha\":\"ae9734ed2\",\"go-version\":\"go1.12.17\",\"go-os\":\"linux\",\"go-arch\":\"amd64\",\"max-cpu-set\":2,\"max-cpu-available\":2,\"member-initialized\":false,\"name\":\"etcd-a\",\"data-dir\":\"/rootfs/mnt/master-vol-0d180d7f5abe9b153/data/TA6kpzqEegCfGTze0ClELA\",\"wal-dir\":\"\",\"wal-dir-dedicated\":\"\",\"member-dir\":\"/rootfs/mnt/master-vol-0d180d7f5abe9b153/data/TA6kpzqEegCfGTze0ClELA/member\",\"force-new-cluster\":false,\"heartbeat-interval\":\"100ms\",\"election-timeout\":\"1s\",\"initial-election-tick-advance\":true,\"snapshot-count\":100000,\"snapshot-catchup-entries\":5000,\"initial-advertise-peer-urls\":[\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2380\"],\"listen-peer-urls\":[\"https://0.0.0.0:2380\"],\"advertise-client-urls\":[\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:3994\"],\"listen-client-urls\":[\"https://0.0.0.0:3994\"],\"listen-metrics-urls\":[],\"cors\":[\"*\"],\"host-whitelist\":[\"*\"],\"initial-cluster\":\"etcd-a=https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2380\",\"initial-cluster-state\":\"new\",\"initial-cluster-token\":\"TA6kpzqEegCfGTze0ClELA\",\"quota-size-bytes\":2147483648,\"pre-vote\":false,\"initial-corrupt-check\":false,\"corrupt-check-time-interval\":\"0s\",\"auto-compaction-mode\":\"periodic\",\"auto-compaction-retention\":\"0s\",\"auto-compaction-interval\":\"0s\",\"discovery-url\":\"\",\"discovery-proxy\":\"\"}\n{\"level\":\"info\",\"ts\":\"2021-08-04T23:03:09.764Z\",\"caller\":\"etcdserver/backend.go:80\",\"msg\":\"opened backend db\",\"path\":\"/rootfs/mnt/master-vol-0d180d7f5abe9b153/data/TA6kpzqEegCfGTze0ClELA/member/snap/db\",\"took\":\"2.950408ms\"}\n{\"level\":\"info\",\"ts\":\"2021-08-04T23:03:09.764Z\",\"caller\":\"netutil/netutil.go:112\",\"msg\":\"resolved URL Host\",\"url\":\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2380\",\"host\":\"etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2380\",\"resolved-addr\":\"172.20.63.249:2380\"}\n{\"level\":\"info\",\"ts\":\"2021-08-04T23:03:09.764Z\",\"caller\":\"netutil/netutil.go:112\",\"msg\":\"resolved URL Host\",\"url\":\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2380\",\"host\":\"etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2380\",\"resolved-addr\":\"172.20.63.249:2380\"}\n{\"level\":\"info\",\"ts\":\"2021-08-04T23:03:09.769Z\",\"caller\":\"etcdserver/raft.go:486\",\"msg\":\"starting local member\",\"local-member-id\":\"88badf56b7e30f53\",\"cluster-id\":\"fcc2a080984a9652\"}\n{\"level\":\"info\",\"ts\":\"2021-08-04T23:03:09.769Z\",\"caller\":\"raft/raft.go:1530\",\"msg\":\"88badf56b7e30f53 switched to configuration voters=()\"}\n{\"level\":\"info\",\"ts\":\"2021-08-04T23:03:09.769Z\",\"caller\":\"raft/raft.go:700\",\"msg\":\"88badf56b7e30f53 became follower at term 0\"}\n{\"level\":\"info\",\"ts\":\"2021-08-04T23:03:09.769Z\",\"caller\":\"raft/raft.go:383\",\"msg\":\"newRaft 88badf56b7e30f53 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]\"}\n{\"level\":\"info\",\"ts\":\"2021-08-04T23:03:09.769Z\",\"caller\":\"raft/raft.go:700\",\"msg\":\"88badf56b7e30f53 became follower at term 1\"}\n{\"level\":\"info\",\"ts\":\"2021-08-04T23:03:09.769Z\",\"caller\":\"raft/raft.go:1530\",\"msg\":\"88badf56b7e30f53 switched to configuration voters=(9852432698371673939)\"}\n{\"level\":\"warn\",\"ts\":\"2021-08-04T23:03:09.772Z\",\"caller\":\"auth/store.go:1366\",\"msg\":\"simple token is not 
cryptographically signed\"}\n{\"level\":\"info\",\"ts\":\"2021-08-04T23:03:09.776Z\",\"caller\":\"etcdserver/quota.go:98\",\"msg\":\"enabled backend quota with default value\",\"quota-name\":\"v3-applier\",\"quota-size-bytes\":2147483648,\"quota-size\":\"2.1 GB\"}\n{\"level\":\"info\",\"ts\":\"2021-08-04T23:03:09.778Z\",\"caller\":\"etcdserver/server.go:803\",\"msg\":\"starting etcd server\",\"local-member-id\":\"88badf56b7e30f53\",\"local-server-version\":\"3.4.13\",\"cluster-version\":\"to_be_decided\"}\n{\"level\":\"info\",\"ts\":\"2021-08-04T23:03:09.778Z\",\"caller\":\"raft/raft.go:1530\",\"msg\":\"88badf56b7e30f53 switched to configuration voters=(9852432698371673939)\"}\n{\"level\":\"info\",\"ts\":\"2021-08-04T23:03:09.779Z\",\"caller\":\"membership/cluster.go:392\",\"msg\":\"added member\",\"cluster-id\":\"fcc2a080984a9652\",\"local-member-id\":\"88badf56b7e30f53\",\"added-peer-id\":\"88badf56b7e30f53\",\"added-peer-peer-urls\":[\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2380\"]}\n{\"level\":\"info\",\"ts\":\"2021-08-04T23:03:09.779Z\",\"caller\":\"etcdserver/server.go:669\",\"msg\":\"started as single-node; fast-forwarding election ticks\",\"local-member-id\":\"88badf56b7e30f53\",\"forward-ticks\":9,\"forward-duration\":\"900ms\",\"election-ticks\":10,\"election-timeout\":\"1s\"}\n{\"level\":\"info\",\"ts\":\"2021-08-04T23:03:09.780Z\",\"caller\":\"embed/etcd.go:711\",\"msg\":\"starting with client TLS\",\"tls-info\":\"cert = /rootfs/mnt/master-vol-0d180d7f5abe9b153/pki/TA6kpzqEegCfGTze0ClELA/clients/server.crt, key = /rootfs/mnt/master-vol-0d180d7f5abe9b153/pki/TA6kpzqEegCfGTze0ClELA/clients/server.key, trusted-ca = /rootfs/mnt/master-vol-0d180d7f5abe9b153/pki/TA6kpzqEegCfGTze0ClELA/clients/ca.crt, client-cert-auth = true, crl-file = \",\"cipher-suites\":[]}\n{\"level\":\"info\",\"ts\":\"2021-08-04T23:03:09.780Z\",\"caller\":\"embed/etcd.go:244\",\"msg\":\"now serving peer/client/metrics\",\"local-member-id\":\"88badf56b7e30f53\",\"initial-advertise-peer-urls\":[\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2380\"],\"listen-peer-urls\":[\"https://0.0.0.0:2380\"],\"advertise-client-urls\":[\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:3994\"],\"listen-client-urls\":[\"https://0.0.0.0:3994\"],\"listen-metrics-urls\":[]}\n{\"level\":\"info\",\"ts\":\"2021-08-04T23:03:09.780Z\",\"caller\":\"embed/etcd.go:579\",\"msg\":\"serving peer traffic\",\"address\":\"[::]:2380\"}\n{\"level\":\"info\",\"ts\":\"2021-08-04T23:03:09.969Z\",\"caller\":\"raft/raft.go:923\",\"msg\":\"88badf56b7e30f53 is starting a new election at term 1\"}\n{\"level\":\"info\",\"ts\":\"2021-08-04T23:03:09.969Z\",\"caller\":\"raft/raft.go:713\",\"msg\":\"88badf56b7e30f53 became candidate at term 2\"}\n{\"level\":\"info\",\"ts\":\"2021-08-04T23:03:09.969Z\",\"caller\":\"raft/raft.go:824\",\"msg\":\"88badf56b7e30f53 received MsgVoteResp from 88badf56b7e30f53 at term 2\"}\n{\"level\":\"info\",\"ts\":\"2021-08-04T23:03:09.969Z\",\"caller\":\"raft/raft.go:765\",\"msg\":\"88badf56b7e30f53 became leader at term 2\"}\n{\"level\":\"info\",\"ts\":\"2021-08-04T23:03:09.969Z\",\"caller\":\"raft/node.go:325\",\"msg\":\"raft.node: 88badf56b7e30f53 elected leader 88badf56b7e30f53 at term 2\"}\n{\"level\":\"info\",\"ts\":\"2021-08-04T23:03:09.970Z\",\"caller\":\"etcdserver/server.go:2528\",\"msg\":\"setting up initial cluster 
version\",\"cluster-version\":\"3.4\"}\n{\"level\":\"info\",\"ts\":\"2021-08-04T23:03:09.970Z\",\"caller\":\"membership/cluster.go:558\",\"msg\":\"set initial cluster version\",\"cluster-id\":\"fcc2a080984a9652\",\"local-member-id\":\"88badf56b7e30f53\",\"cluster-version\":\"3.4\"}\n{\"level\":\"info\",\"ts\":\"2021-08-04T23:03:09.970Z\",\"caller\":\"etcdserver/server.go:2037\",\"msg\":\"published local member to cluster through raft\",\"local-member-id\":\"88badf56b7e30f53\",\"local-member-attributes\":\"{Name:etcd-a ClientURLs:[https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:3994]}\",\"request-path\":\"/0/members/88badf56b7e30f53/attributes\",\"cluster-id\":\"fcc2a080984a9652\",\"publish-timeout\":\"7s\"}\n{\"level\":\"info\",\"ts\":\"2021-08-04T23:03:09.971Z\",\"caller\":\"api/capability.go:76\",\"msg\":\"enabled capabilities for version\",\"cluster-version\":\"3.4\"}\n{\"level\":\"info\",\"ts\":\"2021-08-04T23:03:09.971Z\",\"caller\":\"etcdserver/server.go:2560\",\"msg\":\"cluster version is updated\",\"cluster-version\":\"3.4\"}\n{\"level\":\"info\",\"ts\":\"2021-08-04T23:03:09.972Z\",\"caller\":\"embed/serve.go:191\",\"msg\":\"serving client traffic securely\",\"address\":\"[::]:3994\"}\nI0804 23:03:10.066340    4200 s3fs.go:199] Writing file \"s3://k8s-kops-prow/e2e-489af555f9-bbb74.test-cncf-aws.k8s.io/backups/etcd/main/control/etcd-cluster-created\"\nI0804 23:03:10.243458    4200 controller.go:189] starting controller iteration\nI0804 23:03:10.243530    4200 controller.go:266] Broadcasting leadership assertion with token \"YBoi07m4jsmpRzCrHBu26A\"\nI0804 23:03:10.243787    4200 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-a\" endpoints:\"172.20.63.249:3996\" > leadership_token:\"YBoi07m4jsmpRzCrHBu26A\" healthy:<id:\"etcd-a\" endpoints:\"172.20.63.249:3996\" > > \nI0804 23:03:10.244000    4200 controller.go:295] I am leader with token \"YBoi07m4jsmpRzCrHBu26A\"\nI0804 23:03:10.244557    4200 controller.go:705] base client OK for etcd for client urls [https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:3994]\nI0804 23:03:10.258106    4200 controller.go:302] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-a\",\"peerURLs\":[\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2380\"],\"endpoints\":[\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:3994\"],\"ID\":\"9852432698371673939\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-a\" endpoints:\"172.20.63.249:3996\" }, info=cluster_name:\"etcd\" node_configuration:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2380\" client_urls:\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:3994\" > etcd_state:<cluster:<cluster_token:\"TA6kpzqEegCfGTze0ClELA\" nodes:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2380\" client_urls:\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:3994\" tls_enabled:true > > etcd_version:\"3.4.13\" quarantined:true > }\nI0804 23:03:10.258235    4200 controller.go:303] etcd cluster members: 
map[9852432698371673939:{\"name\":\"etcd-a\",\"peerURLs\":[\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2380\"],\"endpoints\":[\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:3994\"],\"ID\":\"9852432698371673939\"}]\nI0804 23:03:10.258268    4200 controller.go:641] sending member map to all peers: members:<name:\"etcd-a\" dns:\"etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io\" addresses:\"172.20.63.249\" > \nI0804 23:03:10.258480    4200 etcdserver.go:248] updating hosts: map[172.20.63.249:[etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]]\nI0804 23:03:10.258515    4200 hosts.go:84] hosts update: primary=map[172.20.63.249:[etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]], fallbacks=map[etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:[172.20.63.249 172.20.63.249]], final=map[172.20.63.249:[etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]]\nI0804 23:03:10.258600    4200 hosts.go:181] skipping update of unchanged /etc/hosts\nI0804 23:03:10.258705    4200 commands.go:38] not refreshing commands - TTL not hit\nI0804 23:03:10.258740    4200 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-489af555f9-bbb74.test-cncf-aws.k8s.io/backups/etcd/main/control/etcd-cluster-created\"\nI0804 23:03:10.408353    4200 controller.go:395] spec member_count:1 etcd_version:\"3.4.13\" \nI0804 23:03:10.409033    4200 backup.go:134] performing snapshot save to /tmp/383885978/snapshot.db.gz\n{\"level\":\"info\",\"ts\":\"2021-08-04T23:03:10.414Z\",\"caller\":\"clientv3/maintenance.go:200\",\"msg\":\"opened snapshot stream; downloading\"}\n{\"level\":\"info\",\"ts\":\"2021-08-04T23:03:10.414Z\",\"caller\":\"v3rpc/maintenance.go:139\",\"msg\":\"sending database snapshot to client\",\"total-bytes\":20480,\"size\":\"20 kB\"}\n{\"level\":\"info\",\"ts\":\"2021-08-04T23:03:10.414Z\",\"caller\":\"v3rpc/maintenance.go:177\",\"msg\":\"sending database sha256 checksum to client\",\"total-bytes\":20480,\"checksum-size\":32}\n{\"level\":\"info\",\"ts\":\"2021-08-04T23:03:10.415Z\",\"caller\":\"v3rpc/maintenance.go:191\",\"msg\":\"successfully sent database snapshot to client\",\"total-bytes\":20480,\"size\":\"20 kB\",\"took\":\"now\"}\n{\"level\":\"info\",\"ts\":\"2021-08-04T23:03:10.416Z\",\"caller\":\"clientv3/maintenance.go:208\",\"msg\":\"completed snapshot read; closing\"}\nI0804 23:03:10.416197    4200 s3fs.go:199] Writing file \"s3://k8s-kops-prow/e2e-489af555f9-bbb74.test-cncf-aws.k8s.io/backups/etcd/main/2021-08-04T23:03:10Z-000001/etcd.backup.gz\"\nI0804 23:03:10.591645    4200 s3fs.go:199] Writing file \"s3://k8s-kops-prow/e2e-489af555f9-bbb74.test-cncf-aws.k8s.io/backups/etcd/main/2021-08-04T23:03:10Z-000001/_etcd_backup.meta\"\nI0804 23:03:10.750839    4200 backup.go:159] backup complete: name:\"2021-08-04T23:03:10Z-000001\" \nI0804 23:03:10.751680    4200 controller.go:937] backup response: name:\"2021-08-04T23:03:10Z-000001\" \nI0804 23:03:10.751830    4200 controller.go:576] took backup: name:\"2021-08-04T23:03:10Z-000001\" \nI0804 23:03:10.907419    4200 vfs.go:118] listed backups in s3://k8s-kops-prow/e2e-489af555f9-bbb74.test-cncf-aws.k8s.io/backups/etcd/main: [2021-08-04T23:03:10Z-000001]\nI0804 23:03:10.907462    4200 cleanup.go:166] retaining backup \"2021-08-04T23:03:10Z-000001\"\nI0804 23:03:10.907486    4200 restore.go:98] Setting quarantined state to false\nI0804 23:03:10.907742    4200 etcdserver.go:393] Reconfigure request: header:<leadership_token:\"YBoi07m4jsmpRzCrHBu26A\" cluster_name:\"etcd\" > \nI0804 
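The backup block above (snapshot save, gzip, then the paired "etcd.backup.gz" and "_etcd_backup.meta" uploads) follows a simple two-object layout per backup name. A rough sketch against an assumed store interface; memStore is a toy stand-in for the S3-backed VFS (s3fs.go) in the log, and the real code streams the snapshot rather than buffering it in memory:

    package main

    import (
    	"bytes"
    	"compress/gzip"
    	"fmt"
    	"io"
    	"strings"
    )

    // store stands in for the S3-backed VFS the log writes to.
    type store interface {
    	WriteFile(path string, r io.Reader) error
    }

    // memStore is an in-memory store so the sketch runs anywhere.
    type memStore map[string][]byte

    func (m memStore) WriteFile(path string, r io.Reader) error {
    	b, err := io.ReadAll(r)
    	if err != nil {
    		return err
    	}
    	m[path] = b
    	return nil
    }

    // saveBackup gzips a snapshot into <name>/etcd.backup.gz and writes the
    // companion _etcd_backup.meta, mirroring the two s3fs writes above.
    func saveBackup(snap io.Reader, s store, name string, meta []byte) error {
    	var buf bytes.Buffer
    	gz := gzip.NewWriter(&buf)
    	if _, err := io.Copy(gz, snap); err != nil {
    		return err
    	}
    	if err := gz.Close(); err != nil {
    		return err
    	}
    	if err := s.WriteFile(name+"/etcd.backup.gz", &buf); err != nil {
    		return err
    	}
    	return s.WriteFile(name+"/_etcd_backup.meta", bytes.NewReader(meta))
    }

    func main() {
    	s := memStore{}
    	err := saveBackup(strings.NewReader("snapshot bytes"), s, "2021-08-04T23:03:10Z-000001", []byte("{}"))
    	fmt.Println(err, len(s))
    }
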
23:03:10.907777    4200 etcdserver.go:436] Stopping etcd for reconfigure request: header:<leadership_token:\"YBoi07m4jsmpRzCrHBu26A\" cluster_name:\"etcd\" > \nI0804 23:03:10.907786    4200 etcdserver.go:640] killing etcd with datadir /rootfs/mnt/master-vol-0d180d7f5abe9b153/data/TA6kpzqEegCfGTze0ClELA\nI0804 23:03:10.907968    4200 etcdprocess.go:131] Waiting for etcd to exit\nI0804 23:03:11.008207    4200 etcdprocess.go:131] Waiting for etcd to exit\nI0804 23:03:11.008231    4200 etcdprocess.go:136] Exited etcd: signal: killed\nI0804 23:03:11.008440    4200 etcdserver.go:443] updated cluster state: cluster:<cluster_token:\"TA6kpzqEegCfGTze0ClELA\" nodes:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2380\" client_urls:\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:3994\" tls_enabled:true > > etcd_version:\"3.4.13\" \nI0804 23:03:11.008563    4200 etcdserver.go:448] Starting etcd version \"3.4.13\"\nI0804 23:03:11.008576    4200 etcdserver.go:556] starting etcd with state cluster:<cluster_token:\"TA6kpzqEegCfGTze0ClELA\" nodes:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2380\" client_urls:\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:3994\" tls_enabled:true > > etcd_version:\"3.4.13\" \nI0804 23:03:11.008609    4200 etcdserver.go:565] starting etcd with datadir /rootfs/mnt/master-vol-0d180d7f5abe9b153/data/TA6kpzqEegCfGTze0ClELA\nI0804 23:03:11.008703    4200 pki.go:59] adding peerClientIPs [172.20.63.249]\nI0804 23:03:11.008724    4200 pki.go:67] generating peer keypair for etcd: {CommonName:etcd-a Organization:[] AltNames:{DNSNames:[etcd-a etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io] IPs:[172.20.63.249 127.0.0.1]} Usages:[2 1]}\nI0804 23:03:11.008962    4200 certs.go:122] existing certificate not valid after 2023-08-04T23:03:09Z; will regenerate\nI0804 23:03:11.008972    4200 certs.go:183] generating certificate for \"etcd-a\"\nI0804 23:03:11.010857    4200 pki.go:110] building client-serving certificate: {CommonName:etcd-a Organization:[] AltNames:{DNSNames:[etcd-a etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io] IPs:[127.0.0.1]} Usages:[1 2]}\nI0804 23:03:11.011028    4200 certs.go:122] existing certificate not valid after 2023-08-04T23:03:09Z; will regenerate\nI0804 23:03:11.011039    4200 certs.go:183] generating certificate for \"etcd-a\"\nI0804 23:03:11.160295    4200 certs.go:183] generating certificate for \"etcd-a\"\nI0804 23:03:11.162176    4200 etcdprocess.go:203] executing command /opt/etcd-v3.4.13-linux-amd64/etcd [/opt/etcd-v3.4.13-linux-amd64/etcd]\nI0804 23:03:11.162648    4200 restore.go:116] ReconfigureResponse: \nI0804 23:03:11.163792    4200 controller.go:189] starting controller iteration\nI0804 23:03:11.163816    4200 controller.go:266] Broadcasting leadership assertion with token \"YBoi07m4jsmpRzCrHBu26A\"\nI0804 23:03:11.164142    4200 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-a\" endpoints:\"172.20.63.249:3996\" > leadership_token:\"YBoi07m4jsmpRzCrHBu26A\" healthy:<id:\"etcd-a\" endpoints:\"172.20.63.249:3996\" > > \nI0804 23:03:11.164257    4200 controller.go:295] I am leader with token \"YBoi07m4jsmpRzCrHBu26A\"\nI0804 
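This reconfigure stops the quarantined etcd and relaunches it against the same data dir, and the environment dump that follows shows the handoff: the first launch advertised clients on the quarantined port 3994 with ETCD_INITIAL_CLUSTER_STATE=new, while the restart advertises the real client port 4001 with state "existing". An illustrative helper (invented names and structure; the ports and state values are taken from this log):

    package main

    import "fmt"

    // etcdEnv shows, in miniature, how the two launches differ. A quarantined
    // node advertises clients on the quarantined port while a restore is still
    // possible; the post-restore restart advertises the real client port and
    // joins with initial-cluster-state=existing since the member dir exists.
    func etcdEnv(name, dns string, quarantined, memberInitialized bool) map[string]string {
    	clientPort := 4001
    	if quarantined {
    		clientPort = 3994
    	}
    	state := "new"
    	if memberInitialized {
    		state = "existing"
    	}
    	return map[string]string{
    		"ETCD_NAME":                  name,
    		"ETCD_ADVERTISE_CLIENT_URLS": fmt.Sprintf("https://%s:%d", dns, clientPort),
    		"ETCD_LISTEN_CLIENT_URLS":    fmt.Sprintf("https://0.0.0.0:%d", clientPort),
    		"ETCD_INITIAL_CLUSTER_STATE": state,
    	}
    }

    func main() {
    	fmt.Println(etcdEnv("etcd-a", "etcd-a.internal.example.k8s.io", false, true))
    }
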
23:03:11.164647    4200 controller.go:705] base client OK for etcd for client urls [https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4001]\n2021-08-04 23:03:11.168946 I | pkg/flags: recognized and used environment variable ETCD_ADVERTISE_CLIENT_URLS=https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4001\n2021-08-04 23:03:11.169049 I | pkg/flags: recognized and used environment variable ETCD_CERT_FILE=/rootfs/mnt/master-vol-0d180d7f5abe9b153/pki/TA6kpzqEegCfGTze0ClELA/clients/server.crt\n2021-08-04 23:03:11.169061 I | pkg/flags: recognized and used environment variable ETCD_CLIENT_CERT_AUTH=true\n2021-08-04 23:03:11.169143 I | pkg/flags: recognized and used environment variable ETCD_DATA_DIR=/rootfs/mnt/master-vol-0d180d7f5abe9b153/data/TA6kpzqEegCfGTze0ClELA\n2021-08-04 23:03:11.169157 I | pkg/flags: recognized and used environment variable ETCD_ENABLE_V2=false\n2021-08-04 23:03:11.169223 I | pkg/flags: recognized and used environment variable ETCD_INITIAL_ADVERTISE_PEER_URLS=https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2380\n2021-08-04 23:03:11.169233 I | pkg/flags: recognized and used environment variable ETCD_INITIAL_CLUSTER=etcd-a=https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2380\n2021-08-04 23:03:11.169283 I | pkg/flags: recognized and used environment variable ETCD_INITIAL_CLUSTER_STATE=existing\n2021-08-04 23:03:11.169294 I | pkg/flags: recognized and used environment variable ETCD_INITIAL_CLUSTER_TOKEN=TA6kpzqEegCfGTze0ClELA\n2021-08-04 23:03:11.169300 I | pkg/flags: recognized and used environment variable ETCD_KEY_FILE=/rootfs/mnt/master-vol-0d180d7f5abe9b153/pki/TA6kpzqEegCfGTze0ClELA/clients/server.key\n2021-08-04 23:03:11.169371 I | pkg/flags: recognized and used environment variable ETCD_LISTEN_CLIENT_URLS=https://0.0.0.0:4001\n2021-08-04 23:03:11.169383 I | pkg/flags: recognized and used environment variable ETCD_LISTEN_PEER_URLS=https://0.0.0.0:2380\n2021-08-04 23:03:11.169390 I | pkg/flags: recognized and used environment variable ETCD_LOG_OUTPUTS=stdout\n2021-08-04 23:03:11.169468 I | pkg/flags: recognized and used environment variable ETCD_LOGGER=zap\n2021-08-04 23:03:11.169524 I | pkg/flags: recognized and used environment variable ETCD_NAME=etcd-a\n2021-08-04 23:03:11.169544 I | pkg/flags: recognized and used environment variable ETCD_PEER_CERT_FILE=/rootfs/mnt/master-vol-0d180d7f5abe9b153/pki/TA6kpzqEegCfGTze0ClELA/peers/me.crt\n2021-08-04 23:03:11.169618 I | pkg/flags: recognized and used environment variable ETCD_PEER_CLIENT_CERT_AUTH=true\n2021-08-04 23:03:11.169628 I | pkg/flags: recognized and used environment variable ETCD_PEER_KEY_FILE=/rootfs/mnt/master-vol-0d180d7f5abe9b153/pki/TA6kpzqEegCfGTze0ClELA/peers/me.key\n2021-08-04 23:03:11.169637 I | pkg/flags: recognized and used environment variable ETCD_PEER_TRUSTED_CA_FILE=/rootfs/mnt/master-vol-0d180d7f5abe9b153/pki/TA6kpzqEegCfGTze0ClELA/peers/ca.crt\n2021-08-04 23:03:11.169651 I | pkg/flags: recognized and used environment variable ETCD_TRUSTED_CA_FILE=/rootfs/mnt/master-vol-0d180d7f5abe9b153/pki/TA6kpzqEegCfGTze0ClELA/clients/ca.crt\n2021-08-04 23:03:11.169659 W | pkg/flags: unrecognized environment variable ETCD_LISTEN_METRICS_URLS=\n{\"level\":\"info\",\"ts\":\"2021-08-04T23:03:11.169Z\",\"caller\":\"etcdmain/etcd.go:134\",\"msg\":\"server has been already 
initialized\",\"data-dir\":\"/rootfs/mnt/master-vol-0d180d7f5abe9b153/data/TA6kpzqEegCfGTze0ClELA\",\"dir-type\":\"member\"}\n{\"level\":\"info\",\"ts\":\"2021-08-04T23:03:11.169Z\",\"caller\":\"embed/etcd.go:117\",\"msg\":\"configuring peer listeners\",\"listen-peer-urls\":[\"https://0.0.0.0:2380\"]}\n{\"level\":\"info\",\"ts\":\"2021-08-04T23:03:11.170Z\",\"caller\":\"embed/etcd.go:468\",\"msg\":\"starting with peer TLS\",\"tls-info\":\"cert = /rootfs/mnt/master-vol-0d180d7f5abe9b153/pki/TA6kpzqEegCfGTze0ClELA/peers/me.crt, key = /rootfs/mnt/master-vol-0d180d7f5abe9b153/pki/TA6kpzqEegCfGTze0ClELA/peers/me.key, trusted-ca = /rootfs/mnt/master-vol-0d180d7f5abe9b153/pki/TA6kpzqEegCfGTze0ClELA/peers/ca.crt, client-cert-auth = true, crl-file = \",\"cipher-suites\":[]}\n{\"level\":\"info\",\"ts\":\"2021-08-04T23:03:11.170Z\",\"caller\":\"embed/etcd.go:127\",\"msg\":\"configuring client listeners\",\"listen-client-urls\":[\"https://0.0.0.0:4001\"]}\n{\"level\":\"info\",\"ts\":\"2021-08-04T23:03:11.170Z\",\"caller\":\"embed/etcd.go:302\",\"msg\":\"starting an etcd server\",\"etcd-version\":\"3.4.13\",\"git-sha\":\"ae9734ed2\",\"go-version\":\"go1.12.17\",\"go-os\":\"linux\",\"go-arch\":\"amd64\",\"max-cpu-set\":2,\"max-cpu-available\":2,\"member-initialized\":true,\"name\":\"etcd-a\",\"data-dir\":\"/rootfs/mnt/master-vol-0d180d7f5abe9b153/data/TA6kpzqEegCfGTze0ClELA\",\"wal-dir\":\"\",\"wal-dir-dedicated\":\"\",\"member-dir\":\"/rootfs/mnt/master-vol-0d180d7f5abe9b153/data/TA6kpzqEegCfGTze0ClELA/member\",\"force-new-cluster\":false,\"heartbeat-interval\":\"100ms\",\"election-timeout\":\"1s\",\"initial-election-tick-advance\":true,\"snapshot-count\":100000,\"snapshot-catchup-entries\":5000,\"initial-advertise-peer-urls\":[\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2380\"],\"listen-peer-urls\":[\"https://0.0.0.0:2380\"],\"advertise-client-urls\":[\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4001\"],\"listen-client-urls\":[\"https://0.0.0.0:4001\"],\"listen-metrics-urls\":[],\"cors\":[\"*\"],\"host-whitelist\":[\"*\"],\"initial-cluster\":\"\",\"initial-cluster-state\":\"existing\",\"initial-cluster-token\":\"\",\"quota-size-bytes\":2147483648,\"pre-vote\":false,\"initial-corrupt-check\":false,\"corrupt-check-time-interval\":\"0s\",\"auto-compaction-mode\":\"periodic\",\"auto-compaction-retention\":\"0s\",\"auto-compaction-interval\":\"0s\",\"discovery-url\":\"\",\"discovery-proxy\":\"\"}\n{\"level\":\"info\",\"ts\":\"2021-08-04T23:03:11.170Z\",\"caller\":\"etcdserver/backend.go:80\",\"msg\":\"opened backend db\",\"path\":\"/rootfs/mnt/master-vol-0d180d7f5abe9b153/data/TA6kpzqEegCfGTze0ClELA/member/snap/db\",\"took\":\"92.018µs\"}\n{\"level\":\"info\",\"ts\":\"2021-08-04T23:03:11.171Z\",\"caller\":\"etcdserver/raft.go:536\",\"msg\":\"restarting local member\",\"cluster-id\":\"fcc2a080984a9652\",\"local-member-id\":\"88badf56b7e30f53\",\"commit-index\":4}\n{\"level\":\"info\",\"ts\":\"2021-08-04T23:03:11.172Z\",\"caller\":\"raft/raft.go:1530\",\"msg\":\"88badf56b7e30f53 switched to configuration voters=()\"}\n{\"level\":\"info\",\"ts\":\"2021-08-04T23:03:11.172Z\",\"caller\":\"raft/raft.go:700\",\"msg\":\"88badf56b7e30f53 became follower at term 2\"}\n{\"level\":\"info\",\"ts\":\"2021-08-04T23:03:11.172Z\",\"caller\":\"raft/raft.go:383\",\"msg\":\"newRaft 88badf56b7e30f53 [peers: [], term: 2, commit: 4, applied: 0, lastindex: 4, lastterm: 2]\"}\n{\"level\":\"warn\",\"ts\":\"2021-08-04T23:03:11.173Z\",\"caller\":\"auth/store.go:1366\",\"msg\":\"simple 
token is not cryptographically signed\"}\n{\"level\":\"info\",\"ts\":\"2021-08-04T23:03:11.176Z\",\"caller\":\"etcdserver/quota.go:98\",\"msg\":\"enabled backend quota with default value\",\"quota-name\":\"v3-applier\",\"quota-size-bytes\":2147483648,\"quota-size\":\"2.1 GB\"}\n{\"level\":\"info\",\"ts\":\"2021-08-04T23:03:11.177Z\",\"caller\":\"etcdserver/server.go:803\",\"msg\":\"starting etcd server\",\"local-member-id\":\"88badf56b7e30f53\",\"local-server-version\":\"3.4.13\",\"cluster-version\":\"to_be_decided\"}\n{\"level\":\"info\",\"ts\":\"2021-08-04T23:03:11.177Z\",\"caller\":\"etcdserver/server.go:691\",\"msg\":\"starting initial election tick advance\",\"election-ticks\":10}\n{\"level\":\"info\",\"ts\":\"2021-08-04T23:03:11.177Z\",\"caller\":\"raft/raft.go:1530\",\"msg\":\"88badf56b7e30f53 switched to configuration voters=(9852432698371673939)\"}\n{\"level\":\"info\",\"ts\":\"2021-08-04T23:03:11.177Z\",\"caller\":\"membership/cluster.go:392\",\"msg\":\"added member\",\"cluster-id\":\"fcc2a080984a9652\",\"local-member-id\":\"88badf56b7e30f53\",\"added-peer-id\":\"88badf56b7e30f53\",\"added-peer-peer-urls\":[\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2380\"]}\n{\"level\":\"info\",\"ts\":\"2021-08-04T23:03:11.178Z\",\"caller\":\"membership/cluster.go:558\",\"msg\":\"set initial cluster version\",\"cluster-id\":\"fcc2a080984a9652\",\"local-member-id\":\"88badf56b7e30f53\",\"cluster-version\":\"3.4\"}\n{\"level\":\"info\",\"ts\":\"2021-08-04T23:03:11.178Z\",\"caller\":\"api/capability.go:76\",\"msg\":\"enabled capabilities for version\",\"cluster-version\":\"3.4\"}\n{\"level\":\"info\",\"ts\":\"2021-08-04T23:03:11.180Z\",\"caller\":\"embed/etcd.go:711\",\"msg\":\"starting with client TLS\",\"tls-info\":\"cert = /rootfs/mnt/master-vol-0d180d7f5abe9b153/pki/TA6kpzqEegCfGTze0ClELA/clients/server.crt, key = /rootfs/mnt/master-vol-0d180d7f5abe9b153/pki/TA6kpzqEegCfGTze0ClELA/clients/server.key, trusted-ca = /rootfs/mnt/master-vol-0d180d7f5abe9b153/pki/TA6kpzqEegCfGTze0ClELA/clients/ca.crt, client-cert-auth = true, crl-file = \",\"cipher-suites\":[]}\n{\"level\":\"info\",\"ts\":\"2021-08-04T23:03:11.180Z\",\"caller\":\"embed/etcd.go:244\",\"msg\":\"now serving peer/client/metrics\",\"local-member-id\":\"88badf56b7e30f53\",\"initial-advertise-peer-urls\":[\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2380\"],\"listen-peer-urls\":[\"https://0.0.0.0:2380\"],\"advertise-client-urls\":[\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4001\"],\"listen-client-urls\":[\"https://0.0.0.0:4001\"],\"listen-metrics-urls\":[]}\n{\"level\":\"info\",\"ts\":\"2021-08-04T23:03:11.180Z\",\"caller\":\"embed/etcd.go:579\",\"msg\":\"serving peer traffic\",\"address\":\"[::]:2380\"}\n{\"level\":\"info\",\"ts\":\"2021-08-04T23:03:12.772Z\",\"caller\":\"raft/raft.go:923\",\"msg\":\"88badf56b7e30f53 is starting a new election at term 2\"}\n{\"level\":\"info\",\"ts\":\"2021-08-04T23:03:12.772Z\",\"caller\":\"raft/raft.go:713\",\"msg\":\"88badf56b7e30f53 became candidate at term 3\"}\n{\"level\":\"info\",\"ts\":\"2021-08-04T23:03:12.772Z\",\"caller\":\"raft/raft.go:824\",\"msg\":\"88badf56b7e30f53 received MsgVoteResp from 88badf56b7e30f53 at term 3\"}\n{\"level\":\"info\",\"ts\":\"2021-08-04T23:03:12.772Z\",\"caller\":\"raft/raft.go:765\",\"msg\":\"88badf56b7e30f53 became leader at term 3\"}\n{\"level\":\"info\",\"ts\":\"2021-08-04T23:03:12.772Z\",\"caller\":\"raft/node.go:325\",\"msg\":\"raft.node: 88badf56b7e30f53 elected leader 88badf56b7e30f53 at 
term 3\"}\n{\"level\":\"info\",\"ts\":\"2021-08-04T23:03:12.773Z\",\"caller\":\"etcdserver/server.go:2037\",\"msg\":\"published local member to cluster through raft\",\"local-member-id\":\"88badf56b7e30f53\",\"local-member-attributes\":\"{Name:etcd-a ClientURLs:[https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4001]}\",\"request-path\":\"/0/members/88badf56b7e30f53/attributes\",\"cluster-id\":\"fcc2a080984a9652\",\"publish-timeout\":\"7s\"}\n{\"level\":\"info\",\"ts\":\"2021-08-04T23:03:12.774Z\",\"caller\":\"embed/serve.go:191\",\"msg\":\"serving client traffic securely\",\"address\":\"[::]:4001\"}\nI0804 23:03:12.816476    4200 controller.go:302] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-a\",\"peerURLs\":[\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2380\"],\"endpoints\":[\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4001\"],\"ID\":\"9852432698371673939\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-a\" endpoints:\"172.20.63.249:3996\" }, info=cluster_name:\"etcd\" node_configuration:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2380\" client_urls:\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:3994\" > etcd_state:<cluster:<cluster_token:\"TA6kpzqEegCfGTze0ClELA\" nodes:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2380\" client_urls:\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:3994\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0804 23:03:12.816591    4200 controller.go:303] etcd cluster members: map[9852432698371673939:{\"name\":\"etcd-a\",\"peerURLs\":[\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2380\"],\"endpoints\":[\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4001\"],\"ID\":\"9852432698371673939\"}]\nI0804 23:03:12.816609    4200 controller.go:641] sending member map to all peers: members:<name:\"etcd-a\" dns:\"etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io\" addresses:\"172.20.63.249\" > \nI0804 23:03:12.816819    4200 etcdserver.go:248] updating hosts: map[172.20.63.249:[etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]]\nI0804 23:03:12.816835    4200 hosts.go:84] hosts update: primary=map[172.20.63.249:[etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]], fallbacks=map[etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:[172.20.63.249 172.20.63.249]], final=map[172.20.63.249:[etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]]\nI0804 23:03:12.816884    4200 hosts.go:181] skipping update of unchanged /etc/hosts\nI0804 23:03:12.816955    4200 commands.go:38] not refreshing commands - TTL not hit\nI0804 23:03:12.816965    4200 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-489af555f9-bbb74.test-cncf-aws.k8s.io/backups/etcd/main/control/etcd-cluster-created\"\nI0804 23:03:12.964736    4200 controller.go:395] spec member_count:1 etcd_version:\"3.4.13\" \nI0804 23:03:12.964806    4200 controller.go:557] controller loop complete\nI0804 23:03:22.966273    4200 controller.go:189] starting controller iteration\nI0804 23:03:22.966307    4200 controller.go:266] Broadcasting leadership assertion with token \"YBoi07m4jsmpRzCrHBu26A\"\nI0804 23:03:22.966676    
4200 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-a\" endpoints:\"172.20.63.249:3996\" > leadership_token:\"YBoi07m4jsmpRzCrHBu26A\" healthy:<id:\"etcd-a\" endpoints:\"172.20.63.249:3996\" > > \nI0804 23:03:22.966897    4200 controller.go:295] I am leader with token \"YBoi07m4jsmpRzCrHBu26A\"\nI0804 23:03:22.967443    4200 controller.go:705] base client OK for etcd for client urls [https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4001]\nI0804 23:03:22.985608    4200 controller.go:302] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-a\",\"peerURLs\":[\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2380\"],\"endpoints\":[\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4001\"],\"ID\":\"9852432698371673939\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-a\" endpoints:\"172.20.63.249:3996\" }, info=cluster_name:\"etcd\" node_configuration:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2380\" client_urls:\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:3994\" > etcd_state:<cluster:<cluster_token:\"TA6kpzqEegCfGTze0ClELA\" nodes:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2380\" client_urls:\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:3994\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0804 23:03:22.985747    4200 controller.go:303] etcd cluster members: map[9852432698371673939:{\"name\":\"etcd-a\",\"peerURLs\":[\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2380\"],\"endpoints\":[\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4001\"],\"ID\":\"9852432698371673939\"}]\nI0804 23:03:22.985783    4200 controller.go:641] sending member map to all peers: members:<name:\"etcd-a\" dns:\"etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io\" addresses:\"172.20.63.249\" > \nI0804 23:03:22.986040    4200 etcdserver.go:248] updating hosts: map[172.20.63.249:[etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]]\nI0804 23:03:22.986080    4200 hosts.go:84] hosts update: primary=map[172.20.63.249:[etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]], fallbacks=map[etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:[172.20.63.249 172.20.63.249]], final=map[172.20.63.249:[etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]]\nI0804 23:03:22.986170    4200 hosts.go:181] skipping update of unchanged /etc/hosts\nI0804 23:03:22.986303    4200 commands.go:38] not refreshing commands - TTL not hit\nI0804 23:03:22.986333    4200 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-489af555f9-bbb74.test-cncf-aws.k8s.io/backups/etcd/main/control/etcd-cluster-created\"\nI0804 23:03:23.562923    4200 controller.go:395] spec member_count:1 etcd_version:\"3.4.13\" \nI0804 23:03:23.563016    4200 controller.go:557] controller loop complete\nI0804 23:03:33.565127    4200 controller.go:189] starting controller iteration\nI0804 23:03:33.565170    4200 controller.go:266] Broadcasting leadership assertion with token \"YBoi07m4jsmpRzCrHBu26A\"\nI0804 23:03:33.565439    4200 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-a\" endpoints:\"172.20.63.249:3996\" > 
leadership_token:\"YBoi07m4jsmpRzCrHBu26A\" healthy:<id:\"etcd-a\" endpoints:\"172.20.63.249:3996\" > > \nI0804 23:03:33.565558    4200 controller.go:295] I am leader with token \"YBoi07m4jsmpRzCrHBu26A\"\nI0804 23:03:33.566126    4200 controller.go:705] base client OK for etcd for client urls [https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4001]\nI0804 23:03:33.576860    4200 controller.go:302] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-a\",\"peerURLs\":[\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2380\"],\"endpoints\":[\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4001\"],\"ID\":\"9852432698371673939\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-a\" endpoints:\"172.20.63.249:3996\" }, info=cluster_name:\"etcd\" node_configuration:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2380\" client_urls:\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:3994\" > etcd_state:<cluster:<cluster_token:\"TA6kpzqEegCfGTze0ClELA\" nodes:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2380\" client_urls:\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:3994\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0804 23:03:33.576945    4200 controller.go:303] etcd cluster members: map[9852432698371673939:{\"name\":\"etcd-a\",\"peerURLs\":[\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2380\"],\"endpoints\":[\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4001\"],\"ID\":\"9852432698371673939\"}]\nI0804 23:03:33.576962    4200 controller.go:641] sending member map to all peers: members:<name:\"etcd-a\" dns:\"etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io\" addresses:\"172.20.63.249\" > \nI0804 23:03:33.577149    4200 etcdserver.go:248] updating hosts: map[172.20.63.249:[etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]]\nI0804 23:03:33.577167    4200 hosts.go:84] hosts update: primary=map[172.20.63.249:[etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]], fallbacks=map[etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:[172.20.63.249 172.20.63.249]], final=map[172.20.63.249:[etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]]\nI0804 23:03:33.577216    4200 hosts.go:181] skipping update of unchanged /etc/hosts\nI0804 23:03:33.577294    4200 commands.go:38] not refreshing commands - TTL not hit\nI0804 23:03:33.577309    4200 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-489af555f9-bbb74.test-cncf-aws.k8s.io/backups/etcd/main/control/etcd-cluster-created\"\nI0804 23:03:34.152088    4200 controller.go:395] spec member_count:1 etcd_version:\"3.4.13\" \nI0804 23:03:34.152206    4200 controller.go:557] controller loop complete\nI0804 23:03:44.153707    4200 controller.go:189] starting controller iteration\nI0804 23:03:44.153770    4200 controller.go:266] Broadcasting leadership assertion with token \"YBoi07m4jsmpRzCrHBu26A\"\nI0804 23:03:44.154126    4200 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-a\" endpoints:\"172.20.63.249:3996\" > leadership_token:\"YBoi07m4jsmpRzCrHBu26A\" healthy:<id:\"etcd-a\" endpoints:\"172.20.63.249:3996\" > > \nI0804 23:03:44.154291    4200 
controller.go:295] I am leader with token \"YBoi07m4jsmpRzCrHBu26A\"\nI0804 23:03:44.155001    4200 controller.go:705] base client OK for etcd for client urls [https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4001]\nI0804 23:03:44.167440    4200 controller.go:302] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-a\",\"peerURLs\":[\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2380\"],\"endpoints\":[\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4001\"],\"ID\":\"9852432698371673939\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-a\" endpoints:\"172.20.63.249:3996\" }, info=cluster_name:\"etcd\" node_configuration:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2380\" client_urls:\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:3994\" > etcd_state:<cluster:<cluster_token:\"TA6kpzqEegCfGTze0ClELA\" nodes:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2380\" client_urls:\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:3994\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0804 23:03:44.167535    4200 controller.go:303] etcd cluster members: map[9852432698371673939:{\"name\":\"etcd-a\",\"peerURLs\":[\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2380\"],\"endpoints\":[\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4001\"],\"ID\":\"9852432698371673939\"}]\nI0804 23:03:44.167553    4200 controller.go:641] sending member map to all peers: members:<name:\"etcd-a\" dns:\"etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io\" addresses:\"172.20.63.249\" > \nI0804 23:03:44.167755    4200 etcdserver.go:248] updating hosts: map[172.20.63.249:[etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]]\nI0804 23:03:44.167769    4200 hosts.go:84] hosts update: primary=map[172.20.63.249:[etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]], fallbacks=map[etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:[172.20.63.249 172.20.63.249]], final=map[172.20.63.249:[etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]]\nI0804 23:03:44.167819    4200 hosts.go:181] skipping update of unchanged /etc/hosts\nI0804 23:03:44.167896    4200 commands.go:38] not refreshing commands - TTL not hit\nI0804 23:03:44.167909    4200 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-489af555f9-bbb74.test-cncf-aws.k8s.io/backups/etcd/main/control/etcd-cluster-created\"\nI0804 23:03:44.739871    4200 controller.go:395] spec member_count:1 etcd_version:\"3.4.13\" \nI0804 23:03:44.740007    4200 controller.go:557] controller loop complete\nI0804 23:03:54.741812    4200 controller.go:189] starting controller iteration\nI0804 23:03:54.741986    4200 controller.go:266] Broadcasting leadership assertion with token \"YBoi07m4jsmpRzCrHBu26A\"\nI0804 23:03:54.742328    4200 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-a\" endpoints:\"172.20.63.249:3996\" > leadership_token:\"YBoi07m4jsmpRzCrHBu26A\" healthy:<id:\"etcd-a\" endpoints:\"172.20.63.249:3996\" > > \nI0804 23:03:54.742474    4200 controller.go:295] I am leader with token \"YBoi07m4jsmpRzCrHBu26A\"\nI0804 23:03:54.743026    4200 controller.go:705] base client OK for etcd for 
client urls [https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4001]\nI0804 23:03:54.755975    4200 controller.go:302] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-a\",\"peerURLs\":[\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2380\"],\"endpoints\":[\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4001\"],\"ID\":\"9852432698371673939\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-a\" endpoints:\"172.20.63.249:3996\" }, info=cluster_name:\"etcd\" node_configuration:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2380\" client_urls:\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:3994\" > etcd_state:<cluster:<cluster_token:\"TA6kpzqEegCfGTze0ClELA\" nodes:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2380\" client_urls:\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:3994\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0804 23:03:54.756151    4200 controller.go:303] etcd cluster members: map[9852432698371673939:{\"name\":\"etcd-a\",\"peerURLs\":[\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2380\"],\"endpoints\":[\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4001\"],\"ID\":\"9852432698371673939\"}]\nI0804 23:03:54.756169    4200 controller.go:641] sending member map to all peers: members:<name:\"etcd-a\" dns:\"etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io\" addresses:\"172.20.63.249\" > \nI0804 23:03:54.756317    4200 etcdserver.go:248] updating hosts: map[172.20.63.249:[etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]]\nI0804 23:03:54.756329    4200 hosts.go:84] hosts update: primary=map[172.20.63.249:[etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]], fallbacks=map[etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:[172.20.63.249 172.20.63.249]], final=map[172.20.63.249:[etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]]\nI0804 23:03:54.756381    4200 hosts.go:181] skipping update of unchanged /etc/hosts\nI0804 23:03:54.756454    4200 commands.go:38] not refreshing commands - TTL not hit\nI0804 23:03:54.756466    4200 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-489af555f9-bbb74.test-cncf-aws.k8s.io/backups/etcd/main/control/etcd-cluster-created\"\nI0804 23:03:55.325697    4200 controller.go:395] spec member_count:1 etcd_version:\"3.4.13\" \nI0804 23:03:55.325838    4200 controller.go:557] controller loop complete\nI0804 23:03:55.844260    4200 volumes.go:86] AWS API Request: ec2/DescribeVolumes\nI0804 23:03:56.066976    4200 volumes.go:86] AWS API Request: ec2/DescribeInstances\nI0804 23:03:56.103149    4200 hosts.go:84] hosts update: primary=map[172.20.63.249:[etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]], fallbacks=map[etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:[172.20.63.249 172.20.63.249]], final=map[172.20.63.249:[etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]]\nI0804 23:03:56.103230    4200 hosts.go:181] skipping update of unchanged /etc/hosts\nI0804 23:04:05.327484    4200 controller.go:189] starting controller iteration\nI0804 23:04:05.327526    4200 controller.go:266] Broadcasting leadership assertion with token 
\"YBoi07m4jsmpRzCrHBu26A\"\nI0804 23:04:05.327796    4200 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-a\" endpoints:\"172.20.63.249:3996\" > leadership_token:\"YBoi07m4jsmpRzCrHBu26A\" healthy:<id:\"etcd-a\" endpoints:\"172.20.63.249:3996\" > > \nI0804 23:04:05.327935    4200 controller.go:295] I am leader with token \"YBoi07m4jsmpRzCrHBu26A\"\nI0804 23:04:05.328294    4200 controller.go:705] base client OK for etcd for client urls [https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4001]\nI0804 23:04:05.339008    4200 controller.go:302] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-a\",\"peerURLs\":[\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2380\"],\"endpoints\":[\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4001\"],\"ID\":\"9852432698371673939\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-a\" endpoints:\"172.20.63.249:3996\" }, info=cluster_name:\"etcd\" node_configuration:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2380\" client_urls:\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:3994\" > etcd_state:<cluster:<cluster_token:\"TA6kpzqEegCfGTze0ClELA\" nodes:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2380\" client_urls:\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:3994\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0804 23:04:05.339088    4200 controller.go:303] etcd cluster members: map[9852432698371673939:{\"name\":\"etcd-a\",\"peerURLs\":[\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2380\"],\"endpoints\":[\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4001\"],\"ID\":\"9852432698371673939\"}]\nI0804 23:04:05.339258    4200 controller.go:641] sending member map to all peers: members:<name:\"etcd-a\" dns:\"etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io\" addresses:\"172.20.63.249\" > \nI0804 23:04:05.339444    4200 etcdserver.go:248] updating hosts: map[172.20.63.249:[etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]]\nI0804 23:04:05.339462    4200 hosts.go:84] hosts update: primary=map[172.20.63.249:[etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]], fallbacks=map[etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:[172.20.63.249 172.20.63.249]], final=map[172.20.63.249:[etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]]\nI0804 23:04:05.339516    4200 hosts.go:181] skipping update of unchanged /etc/hosts\nI0804 23:04:05.339593    4200 commands.go:38] not refreshing commands - TTL not hit\nI0804 23:04:05.339606    4200 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-489af555f9-bbb74.test-cncf-aws.k8s.io/backups/etcd/main/control/etcd-cluster-created\"\nI0804 23:04:05.913567    4200 controller.go:395] spec member_count:1 etcd_version:\"3.4.13\" \nI0804 23:04:05.913697    4200 controller.go:557] controller loop complete\nI0804 23:04:15.915538    4200 controller.go:189] starting controller iteration\nI0804 23:04:15.915575    4200 controller.go:266] Broadcasting leadership assertion with token \"YBoi07m4jsmpRzCrHBu26A\"\nI0804 23:04:15.915884    4200 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-a\" 
endpoints:\"172.20.63.249:3996\" > leadership_token:\"YBoi07m4jsmpRzCrHBu26A\" healthy:<id:\"etcd-a\" endpoints:\"172.20.63.249:3996\" > > \nI0804 23:04:15.915992    4200 controller.go:295] I am leader with token \"YBoi07m4jsmpRzCrHBu26A\"\nI0804 23:04:15.916566    4200 controller.go:705] base client OK for etcd for client urls [https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4001]\nI0804 23:04:15.937355    4200 controller.go:302] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-a\",\"peerURLs\":[\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2380\"],\"endpoints\":[\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4001\"],\"ID\":\"9852432698371673939\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-a\" endpoints:\"172.20.63.249:3996\" }, info=cluster_name:\"etcd\" node_configuration:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2380\" client_urls:\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:3994\" > etcd_state:<cluster:<cluster_token:\"TA6kpzqEegCfGTze0ClELA\" nodes:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2380\" client_urls:\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:3994\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0804 23:04:15.937437    4200 controller.go:303] etcd cluster members: map[9852432698371673939:{\"name\":\"etcd-a\",\"peerURLs\":[\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2380\"],\"endpoints\":[\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4001\"],\"ID\":\"9852432698371673939\"}]\nI0804 23:04:15.937597    4200 controller.go:641] sending member map to all peers: members:<name:\"etcd-a\" dns:\"etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io\" addresses:\"172.20.63.249\" > \nI0804 23:04:15.937780    4200 etcdserver.go:248] updating hosts: map[172.20.63.249:[etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]]\nI0804 23:04:15.937796    4200 hosts.go:84] hosts update: primary=map[172.20.63.249:[etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]], fallbacks=map[etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:[172.20.63.249 172.20.63.249]], final=map[172.20.63.249:[etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]]\nI0804 23:04:15.937844    4200 hosts.go:181] skipping update of unchanged /etc/hosts\nI0804 23:04:15.937919    4200 commands.go:38] not refreshing commands - TTL not hit\nI0804 23:04:15.937930    4200 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-489af555f9-bbb74.test-cncf-aws.k8s.io/backups/etcd/main/control/etcd-cluster-created\"\nI0804 23:04:16.507773    4200 controller.go:395] spec member_count:1 etcd_version:\"3.4.13\" \nI0804 23:04:16.507842    4200 controller.go:557] controller loop complete\nI0804 23:04:26.509713    4200 controller.go:189] starting controller iteration\nI0804 23:04:26.509751    4200 controller.go:266] Broadcasting leadership assertion with token \"YBoi07m4jsmpRzCrHBu26A\"\nI0804 23:04:26.510008    4200 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-a\" endpoints:\"172.20.63.249:3996\" > leadership_token:\"YBoi07m4jsmpRzCrHBu26A\" healthy:<id:\"etcd-a\" endpoints:\"172.20.63.249:3996\" > > \nI0804 
23:04:26.510160    4200 controller.go:295] I am leader with token \"YBoi07m4jsmpRzCrHBu26A\"\nI0804 23:04:26.510471    4200 controller.go:705] base client OK for etcd for client urls [https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4001]\nI0804 23:04:26.522440    4200 controller.go:302] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-a\",\"peerURLs\":[\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2380\"],\"endpoints\":[\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4001\"],\"ID\":\"9852432698371673939\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-a\" endpoints:\"172.20.63.249:3996\" }, info=cluster_name:\"etcd\" node_configuration:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2380\" client_urls:\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:3994\" > etcd_state:<cluster:<cluster_token:\"TA6kpzqEegCfGTze0ClELA\" nodes:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2380\" client_urls:\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:3994\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0804 23:04:26.522523    4200 controller.go:303] etcd cluster members: map[9852432698371673939:{\"name\":\"etcd-a\",\"peerURLs\":[\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2380\"],\"endpoints\":[\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4001\"],\"ID\":\"9852432698371673939\"}]\nI0804 23:04:26.522540    4200 controller.go:641] sending member map to all peers: members:<name:\"etcd-a\" dns:\"etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io\" addresses:\"172.20.63.249\" > \nI0804 23:04:26.522741    4200 etcdserver.go:248] updating hosts: map[172.20.63.249:[etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]]\nI0804 23:04:26.522759    4200 hosts.go:84] hosts update: primary=map[172.20.63.249:[etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]], fallbacks=map[etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:[172.20.63.249 172.20.63.249]], final=map[172.20.63.249:[etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]]\nI0804 23:04:26.522818    4200 hosts.go:181] skipping update of unchanged /etc/hosts\nI0804 23:04:26.522900    4200 commands.go:38] not refreshing commands - TTL not hit\nI0804 23:04:26.522917    4200 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-489af555f9-bbb74.test-cncf-aws.k8s.io/backups/etcd/main/control/etcd-cluster-created\"\nI0804 23:04:27.088125    4200 controller.go:395] spec member_count:1 etcd_version:\"3.4.13\" \nI0804 23:04:27.088310    4200 controller.go:557] controller loop complete\nI0804 23:04:37.090467    4200 controller.go:189] starting controller iteration\nI0804 23:04:37.090506    4200 controller.go:266] Broadcasting leadership assertion with token \"YBoi07m4jsmpRzCrHBu26A\"\nI0804 23:04:37.090774    4200 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-a\" endpoints:\"172.20.63.249:3996\" > leadership_token:\"YBoi07m4jsmpRzCrHBu26A\" healthy:<id:\"etcd-a\" endpoints:\"172.20.63.249:3996\" > > \nI0804 23:04:37.090900    4200 controller.go:295] I am leader with token \"YBoi07m4jsmpRzCrHBu26A\"\nI0804 23:04:37.091364    4200 controller.go:705] base 
client OK for etcd for client urls [https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4001]\nI0804 23:04:37.103246    4200 controller.go:302] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-a\",\"peerURLs\":[\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2380\"],\"endpoints\":[\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4001\"],\"ID\":\"9852432698371673939\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-a\" endpoints:\"172.20.63.249:3996\" }, info=cluster_name:\"etcd\" node_configuration:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2380\" client_urls:\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:3994\" > etcd_state:<cluster:<cluster_token:\"TA6kpzqEegCfGTze0ClELA\" nodes:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2380\" client_urls:\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:3994\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0804 23:04:37.103387    4200 controller.go:303] etcd cluster members: map[9852432698371673939:{\"name\":\"etcd-a\",\"peerURLs\":[\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2380\"],\"endpoints\":[\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4001\"],\"ID\":\"9852432698371673939\"}]\nI0804 23:04:37.103421    4200 controller.go:641] sending member map to all peers: members:<name:\"etcd-a\" dns:\"etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io\" addresses:\"172.20.63.249\" > \nI0804 23:04:37.103665    4200 etcdserver.go:248] updating hosts: map[172.20.63.249:[etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]]\nI0804 23:04:37.103714    4200 hosts.go:84] hosts update: primary=map[172.20.63.249:[etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]], fallbacks=map[etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:[172.20.63.249 172.20.63.249]], final=map[172.20.63.249:[etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]]\nI0804 23:04:37.103796    4200 hosts.go:181] skipping update of unchanged /etc/hosts\nI0804 23:04:37.103922    4200 commands.go:38] not refreshing commands - TTL not hit\nI0804 23:04:37.103963    4200 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-489af555f9-bbb74.test-cncf-aws.k8s.io/backups/etcd/main/control/etcd-cluster-created\"\nI0804 23:04:37.672971    4200 controller.go:395] spec member_count:1 etcd_version:\"3.4.13\" \nI0804 23:04:37.673043    4200 controller.go:557] controller loop complete\nI0804 23:04:47.674649    4200 controller.go:189] starting controller iteration\nI0804 23:04:47.674690    4200 controller.go:266] Broadcasting leadership assertion with token \"YBoi07m4jsmpRzCrHBu26A\"\nI0804 23:04:47.674915    4200 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-a\" endpoints:\"172.20.63.249:3996\" > leadership_token:\"YBoi07m4jsmpRzCrHBu26A\" healthy:<id:\"etcd-a\" endpoints:\"172.20.63.249:3996\" > > \nI0804 23:04:47.675035    4200 controller.go:295] I am leader with token \"YBoi07m4jsmpRzCrHBu26A\"\nI0804 23:04:47.675456    4200 controller.go:705] base client OK for etcd for client urls [https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4001]\nI0804 23:04:47.687096    4200 
controller.go:302] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-a\",\"peerURLs\":[\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2380\"],\"endpoints\":[\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4001\"],\"ID\":\"9852432698371673939\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-a\" endpoints:\"172.20.63.249:3996\" }, info=cluster_name:\"etcd\" node_configuration:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2380\" client_urls:\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:3994\" > etcd_state:<cluster:<cluster_token:\"TA6kpzqEegCfGTze0ClELA\" nodes:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2380\" client_urls:\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:3994\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0804 23:04:47.687340    4200 controller.go:303] etcd cluster members: map[9852432698371673939:{\"name\":\"etcd-a\",\"peerURLs\":[\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2380\"],\"endpoints\":[\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4001\"],\"ID\":\"9852432698371673939\"}]\nI0804 23:04:47.687364    4200 controller.go:641] sending member map to all peers: members:<name:\"etcd-a\" dns:\"etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io\" addresses:\"172.20.63.249\" > \nI0804 23:04:47.687535    4200 etcdserver.go:248] updating hosts: map[172.20.63.249:[etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]]\nI0804 23:04:47.687608    4200 hosts.go:84] hosts update: primary=map[172.20.63.249:[etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]], fallbacks=map[etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:[172.20.63.249 172.20.63.249]], final=map[172.20.63.249:[etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]]\nI0804 23:04:47.687669    4200 hosts.go:181] skipping update of unchanged /etc/hosts\nI0804 23:04:47.687757    4200 commands.go:38] not refreshing commands - TTL not hit\nI0804 23:04:47.687783    4200 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-489af555f9-bbb74.test-cncf-aws.k8s.io/backups/etcd/main/control/etcd-cluster-created\"\nI0804 23:04:48.253971    4200 controller.go:395] spec member_count:1 etcd_version:\"3.4.13\" \nI0804 23:04:48.254045    4200 controller.go:557] controller loop complete\nI0804 23:04:56.103477    4200 volumes.go:86] AWS API Request: ec2/DescribeVolumes\nI0804 23:04:56.219676    4200 volumes.go:86] AWS API Request: ec2/DescribeInstances\nI0804 23:04:56.299128    4200 hosts.go:84] hosts update: primary=map[172.20.63.249:[etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]], fallbacks=map[etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:[172.20.63.249 172.20.63.249]], final=map[172.20.63.249:[etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]]\nI0804 23:04:56.299208    4200 hosts.go:181] skipping update of unchanged /etc/hosts\nI0804 23:04:58.255736    4200 controller.go:189] starting controller iteration\nI0804 23:04:58.255772    4200 controller.go:266] Broadcasting leadership assertion with token \"YBoi07m4jsmpRzCrHBu26A\"\nI0804 23:04:58.255995    4200 leadership.go:37] Got LeaderNotification 
view:<leader:<id:\"etcd-a\" endpoints:\"172.20.63.249:3996\" > leadership_token:\"YBoi07m4jsmpRzCrHBu26A\" healthy:<id:\"etcd-a\" endpoints:\"172.20.63.249:3996\" > > \nI0804 23:04:58.256109    4200 controller.go:295] I am leader with token \"YBoi07m4jsmpRzCrHBu26A\"\nI0804 23:04:58.256963    4200 controller.go:705] base client OK for etcd for client urls [https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4001]\nI0804 23:04:58.272006    4200 controller.go:302] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-a\",\"peerURLs\":[\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2380\"],\"endpoints\":[\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4001\"],\"ID\":\"9852432698371673939\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-a\" endpoints:\"172.20.63.249:3996\" }, info=cluster_name:\"etcd\" node_configuration:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2380\" client_urls:\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:3994\" > etcd_state:<cluster:<cluster_token:\"TA6kpzqEegCfGTze0ClELA\" nodes:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2380\" client_urls:\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:3994\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0804 23:04:58.272099    4200 controller.go:303] etcd cluster members: map[9852432698371673939:{\"name\":\"etcd-a\",\"peerURLs\":[\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2380\"],\"endpoints\":[\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4001\"],\"ID\":\"9852432698371673939\"}]\nI0804 23:04:58.272120    4200 controller.go:641] sending member map to all peers: members:<name:\"etcd-a\" dns:\"etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io\" addresses:\"172.20.63.249\" > \nI0804 23:04:58.272292    4200 etcdserver.go:248] updating hosts: map[172.20.63.249:[etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]]\nI0804 23:04:58.272307    4200 hosts.go:84] hosts update: primary=map[172.20.63.249:[etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]], fallbacks=map[etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:[172.20.63.249 172.20.63.249]], final=map[172.20.63.249:[etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]]\nI0804 23:04:58.272351    4200 hosts.go:181] skipping update of unchanged /etc/hosts\nI0804 23:04:58.272411    4200 commands.go:38] not refreshing commands - TTL not hit\nI0804 23:04:58.272425    4200 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-489af555f9-bbb74.test-cncf-aws.k8s.io/backups/etcd/main/control/etcd-cluster-created\"\nI0804 23:04:58.839749    4200 controller.go:395] spec member_count:1 etcd_version:\"3.4.13\" \nI0804 23:04:58.839825    4200 controller.go:557] controller loop complete\nI0804 23:05:08.841988    4200 controller.go:189] starting controller iteration\nI0804 23:05:08.842026    4200 controller.go:266] Broadcasting leadership assertion with token \"YBoi07m4jsmpRzCrHBu26A\"\nI0804 23:05:08.842330    4200 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-a\" endpoints:\"172.20.63.249:3996\" > leadership_token:\"YBoi07m4jsmpRzCrHBu26A\" healthy:<id:\"etcd-a\" 
endpoints:\"172.20.63.249:3996\" > > \nI0804 23:05:08.842492    4200 controller.go:295] I am leader with token \"YBoi07m4jsmpRzCrHBu26A\"\nI0804 23:05:08.843042    4200 controller.go:705] base client OK for etcd for client urls [https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4001]\nI0804 23:05:08.854146    4200 controller.go:302] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-a\",\"peerURLs\":[\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2380\"],\"endpoints\":[\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4001\"],\"ID\":\"9852432698371673939\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-a\" endpoints:\"172.20.63.249:3996\" }, info=cluster_name:\"etcd\" node_configuration:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2380\" client_urls:\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:3994\" > etcd_state:<cluster:<cluster_token:\"TA6kpzqEegCfGTze0ClELA\" nodes:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2380\" client_urls:\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:3994\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0804 23:05:08.854232    4200 controller.go:303] etcd cluster members: map[9852432698371673939:{\"name\":\"etcd-a\",\"peerURLs\":[\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2380\"],\"endpoints\":[\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4001\"],\"ID\":\"9852432698371673939\"}]\nI0804 23:05:08.854377    4200 controller.go:641] sending member map to all peers: members:<name:\"etcd-a\" dns:\"etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io\" addresses:\"172.20.63.249\" > \nI0804 23:05:08.854602    4200 etcdserver.go:248] updating hosts: map[172.20.63.249:[etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]]\nI0804 23:05:08.854616    4200 hosts.go:84] hosts update: primary=map[172.20.63.249:[etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]], fallbacks=map[etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:[172.20.63.249 172.20.63.249]], final=map[172.20.63.249:[etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]]\nI0804 23:05:08.854670    4200 hosts.go:181] skipping update of unchanged /etc/hosts\nI0804 23:05:08.854747    4200 commands.go:38] not refreshing commands - TTL not hit\nI0804 23:05:08.854759    4200 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-489af555f9-bbb74.test-cncf-aws.k8s.io/backups/etcd/main/control/etcd-cluster-created\"\nI0804 23:05:09.442564    4200 controller.go:395] spec member_count:1 etcd_version:\"3.4.13\" \nI0804 23:05:09.442720    4200 controller.go:557] controller loop complete\nI0804 23:05:19.444031    4200 controller.go:189] starting controller iteration\nI0804 23:05:19.444147    4200 controller.go:266] Broadcasting leadership assertion with token \"YBoi07m4jsmpRzCrHBu26A\"\nI0804 23:05:19.444436    4200 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-a\" endpoints:\"172.20.63.249:3996\" > leadership_token:\"YBoi07m4jsmpRzCrHBu26A\" healthy:<id:\"etcd-a\" endpoints:\"172.20.63.249:3996\" > > \nI0804 23:05:19.444551    4200 controller.go:295] I am leader with token \"YBoi07m4jsmpRzCrHBu26A\"\nI0804 
23:05:19.445455    4200 controller.go:705] base client OK for etcd for client urls [https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4001]\nI0804 23:05:19.459715    4200 controller.go:302] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-a\",\"peerURLs\":[\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2380\"],\"endpoints\":[\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4001\"],\"ID\":\"9852432698371673939\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-a\" endpoints:\"172.20.63.249:3996\" }, info=cluster_name:\"etcd\" node_configuration:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2380\" client_urls:\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:3994\" > etcd_state:<cluster:<cluster_token:\"TA6kpzqEegCfGTze0ClELA\" nodes:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2380\" client_urls:\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:3994\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0804 23:05:19.459792    4200 controller.go:303] etcd cluster members: map[9852432698371673939:{\"name\":\"etcd-a\",\"peerURLs\":[\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2380\"],\"endpoints\":[\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4001\"],\"ID\":\"9852432698371673939\"}]\nI0804 23:05:19.459838    4200 controller.go:641] sending member map to all peers: members:<name:\"etcd-a\" dns:\"etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io\" addresses:\"172.20.63.249\" > \nI0804 23:05:19.460100    4200 etcdserver.go:248] updating hosts: map[172.20.63.249:[etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]]\nI0804 23:05:19.460115    4200 hosts.go:84] hosts update: primary=map[172.20.63.249:[etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]], fallbacks=map[etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:[172.20.63.249 172.20.63.249]], final=map[172.20.63.249:[etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]]\nI0804 23:05:19.460191    4200 hosts.go:181] skipping update of unchanged /etc/hosts\nI0804 23:05:19.460300    4200 commands.go:38] not refreshing commands - TTL not hit\nI0804 23:05:19.460314    4200 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-489af555f9-bbb74.test-cncf-aws.k8s.io/backups/etcd/main/control/etcd-cluster-created\"\nI0804 23:05:20.028925    4200 controller.go:395] spec member_count:1 etcd_version:\"3.4.13\" \nI0804 23:05:20.029058    4200 controller.go:557] controller loop complete\nI0804 23:05:30.030540    4200 controller.go:189] starting controller iteration\nI0804 23:05:30.030579    4200 controller.go:266] Broadcasting leadership assertion with token \"YBoi07m4jsmpRzCrHBu26A\"\nI0804 23:05:30.030850    4200 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-a\" endpoints:\"172.20.63.249:3996\" > leadership_token:\"YBoi07m4jsmpRzCrHBu26A\" healthy:<id:\"etcd-a\" endpoints:\"172.20.63.249:3996\" > > \nI0804 23:05:30.030994    4200 controller.go:295] I am leader with token \"YBoi07m4jsmpRzCrHBu26A\"\nI0804 23:05:30.031446    4200 controller.go:705] base client OK for etcd for client urls 
[https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4001]\nI0804 23:05:30.042412    4200 controller.go:302] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-a\",\"peerURLs\":[\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2380\"],\"endpoints\":[\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4001\"],\"ID\":\"9852432698371673939\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-a\" endpoints:\"172.20.63.249:3996\" }, info=cluster_name:\"etcd\" node_configuration:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2380\" client_urls:\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:3994\" > etcd_state:<cluster:<cluster_token:\"TA6kpzqEegCfGTze0ClELA\" nodes:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2380\" client_urls:\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:3994\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0804 23:05:30.042498    4200 controller.go:303] etcd cluster members: map[9852432698371673939:{\"name\":\"etcd-a\",\"peerURLs\":[\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2380\"],\"endpoints\":[\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4001\"],\"ID\":\"9852432698371673939\"}]\nI0804 23:05:30.042514    4200 controller.go:641] sending member map to all peers: members:<name:\"etcd-a\" dns:\"etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io\" addresses:\"172.20.63.249\" > \nI0804 23:05:30.042698    4200 etcdserver.go:248] updating hosts: map[172.20.63.249:[etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]]\nI0804 23:05:30.042712    4200 hosts.go:84] hosts update: primary=map[172.20.63.249:[etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]], fallbacks=map[etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:[172.20.63.249 172.20.63.249]], final=map[172.20.63.249:[etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]]\nI0804 23:05:30.042764    4200 hosts.go:181] skipping update of unchanged /etc/hosts\nI0804 23:05:30.042836    4200 commands.go:38] not refreshing commands - TTL not hit\nI0804 23:05:30.042847    4200 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-489af555f9-bbb74.test-cncf-aws.k8s.io/backups/etcd/main/control/etcd-cluster-created\"\nI0804 23:05:30.616245    4200 controller.go:395] spec member_count:1 etcd_version:\"3.4.13\" \nI0804 23:05:30.616316    4200 controller.go:557] controller loop complete\nI0804 23:05:40.617559    4200 controller.go:189] starting controller iteration\nI0804 23:05:40.617599    4200 controller.go:266] Broadcasting leadership assertion with token \"YBoi07m4jsmpRzCrHBu26A\"\nI0804 23:05:40.617907    4200 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-a\" endpoints:\"172.20.63.249:3996\" > leadership_token:\"YBoi07m4jsmpRzCrHBu26A\" healthy:<id:\"etcd-a\" endpoints:\"172.20.63.249:3996\" > > \nI0804 23:05:40.618051    4200 controller.go:295] I am leader with token \"YBoi07m4jsmpRzCrHBu26A\"\nI0804 23:05:40.618922    4200 controller.go:705] base client OK for etcd for client urls [https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4001]\nI0804 23:05:40.632609    4200 controller.go:302] etcd cluster state: 
etcdClusterState\n  members:\n    {\"name\":\"etcd-a\",\"peerURLs\":[\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2380\"],\"endpoints\":[\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4001\"],\"ID\":\"9852432698371673939\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-a\" endpoints:\"172.20.63.249:3996\" }, info=cluster_name:\"etcd\" node_configuration:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2380\" client_urls:\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:3994\" > etcd_state:<cluster:<cluster_token:\"TA6kpzqEegCfGTze0ClELA\" nodes:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2380\" client_urls:\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:3994\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0804 23:05:40.632738    4200 controller.go:303] etcd cluster members: map[9852432698371673939:{\"name\":\"etcd-a\",\"peerURLs\":[\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2380\"],\"endpoints\":[\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4001\"],\"ID\":\"9852432698371673939\"}]\nI0804 23:05:40.632903    4200 controller.go:641] sending member map to all peers: members:<name:\"etcd-a\" dns:\"etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io\" addresses:\"172.20.63.249\" > \nI0804 23:05:40.633109    4200 etcdserver.go:248] updating hosts: map[172.20.63.249:[etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]]\nI0804 23:05:40.633125    4200 hosts.go:84] hosts update: primary=map[172.20.63.249:[etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]], fallbacks=map[etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:[172.20.63.249 172.20.63.249]], final=map[172.20.63.249:[etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]]\nI0804 23:05:40.633175    4200 hosts.go:181] skipping update of unchanged /etc/hosts\nI0804 23:05:40.633254    4200 commands.go:38] not refreshing commands - TTL not hit\nI0804 23:05:40.633265    4200 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-489af555f9-bbb74.test-cncf-aws.k8s.io/backups/etcd/main/control/etcd-cluster-created\"\nI0804 23:05:41.218605    4200 controller.go:395] spec member_count:1 etcd_version:\"3.4.13\" \nI0804 23:05:41.218677    4200 controller.go:557] controller loop complete\nI0804 23:05:51.219889    4200 controller.go:189] starting controller iteration\nI0804 23:05:51.219928    4200 controller.go:266] Broadcasting leadership assertion with token \"YBoi07m4jsmpRzCrHBu26A\"\nI0804 23:05:51.220198    4200 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-a\" endpoints:\"172.20.63.249:3996\" > leadership_token:\"YBoi07m4jsmpRzCrHBu26A\" healthy:<id:\"etcd-a\" endpoints:\"172.20.63.249:3996\" > > \nI0804 23:05:51.220345    4200 controller.go:295] I am leader with token \"YBoi07m4jsmpRzCrHBu26A\"\nI0804 23:05:51.220783    4200 controller.go:705] base client OK for etcd for client urls [https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4001]\nI0804 23:05:51.231742    4200 controller.go:302] etcd cluster state: etcdClusterState\n  members:\n    
{\"name\":\"etcd-a\",\"peerURLs\":[\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2380\"],\"endpoints\":[\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4001\"],\"ID\":\"9852432698371673939\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-a\" endpoints:\"172.20.63.249:3996\" }, info=cluster_name:\"etcd\" node_configuration:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2380\" client_urls:\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:3994\" > etcd_state:<cluster:<cluster_token:\"TA6kpzqEegCfGTze0ClELA\" nodes:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2380\" client_urls:\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:3994\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0804 23:05:51.231832    4200 controller.go:303] etcd cluster members: map[9852432698371673939:{\"name\":\"etcd-a\",\"peerURLs\":[\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2380\"],\"endpoints\":[\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4001\"],\"ID\":\"9852432698371673939\"}]\nI0804 23:05:51.231849    4200 controller.go:641] sending member map to all peers: members:<name:\"etcd-a\" dns:\"etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io\" addresses:\"172.20.63.249\" > \nI0804 23:05:51.231993    4200 etcdserver.go:248] updating hosts: map[172.20.63.249:[etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]]\nI0804 23:05:51.232008    4200 hosts.go:84] hosts update: primary=map[172.20.63.249:[etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]], fallbacks=map[etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:[172.20.63.249 172.20.63.249]], final=map[172.20.63.249:[etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]]\nI0804 23:05:51.232060    4200 hosts.go:181] skipping update of unchanged /etc/hosts\nI0804 23:05:51.232136    4200 commands.go:38] not refreshing commands - TTL not hit\nI0804 23:05:51.232148    4200 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-489af555f9-bbb74.test-cncf-aws.k8s.io/backups/etcd/main/control/etcd-cluster-created\"\nI0804 23:05:51.798511    4200 controller.go:395] spec member_count:1 etcd_version:\"3.4.13\" \nI0804 23:05:51.798587    4200 controller.go:557] controller loop complete\nI0804 23:05:56.300367    4200 volumes.go:86] AWS API Request: ec2/DescribeVolumes\nI0804 23:05:56.418351    4200 volumes.go:86] AWS API Request: ec2/DescribeInstances\nI0804 23:05:56.485470    4200 hosts.go:84] hosts update: primary=map[172.20.63.249:[etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]], fallbacks=map[etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:[172.20.63.249 172.20.63.249]], final=map[172.20.63.249:[etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]]\nI0804 23:05:56.485554    4200 hosts.go:181] skipping update of unchanged /etc/hosts\nI0804 23:06:01.799714    4200 controller.go:189] starting controller iteration\nI0804 23:06:01.799761    4200 controller.go:266] Broadcasting leadership assertion with token \"YBoi07m4jsmpRzCrHBu26A\"\nI0804 23:06:01.800029    4200 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-a\" endpoints:\"172.20.63.249:3996\" > 
leadership_token:\"YBoi07m4jsmpRzCrHBu26A\" healthy:<id:\"etcd-a\" endpoints:\"172.20.63.249:3996\" > > \nI0804 23:06:01.800170    4200 controller.go:295] I am leader with token \"YBoi07m4jsmpRzCrHBu26A\"\nI0804 23:06:01.800644    4200 controller.go:705] base client OK for etcd for client urls [https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4001]\nI0804 23:06:01.811707    4200 controller.go:302] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-a\",\"peerURLs\":[\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2380\"],\"endpoints\":[\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4001\"],\"ID\":\"9852432698371673939\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-a\" endpoints:\"172.20.63.249:3996\" }, info=cluster_name:\"etcd\" node_configuration:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2380\" client_urls:\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:3994\" > etcd_state:<cluster:<cluster_token:\"TA6kpzqEegCfGTze0ClELA\" nodes:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2380\" client_urls:\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:3994\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0804 23:06:01.811817    4200 controller.go:303] etcd cluster members: map[9852432698371673939:{\"name\":\"etcd-a\",\"peerURLs\":[\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2380\"],\"endpoints\":[\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4001\"],\"ID\":\"9852432698371673939\"}]\nI0804 23:06:01.811839    4200 controller.go:641] sending member map to all peers: members:<name:\"etcd-a\" dns:\"etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io\" addresses:\"172.20.63.249\" > \nI0804 23:06:01.812029    4200 etcdserver.go:248] updating hosts: map[172.20.63.249:[etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]]\nI0804 23:06:01.812045    4200 hosts.go:84] hosts update: primary=map[172.20.63.249:[etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]], fallbacks=map[etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:[172.20.63.249 172.20.63.249]], final=map[172.20.63.249:[etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]]\nI0804 23:06:01.812101    4200 hosts.go:181] skipping update of unchanged /etc/hosts\nI0804 23:06:01.812178    4200 commands.go:38] not refreshing commands - TTL not hit\nI0804 23:06:01.812192    4200 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-489af555f9-bbb74.test-cncf-aws.k8s.io/backups/etcd/main/control/etcd-cluster-created\"\nI0804 23:06:02.374883    4200 controller.go:395] spec member_count:1 etcd_version:\"3.4.13\" \nI0804 23:06:02.374959    4200 controller.go:557] controller loop complete\nI0804 23:06:12.377143    4200 controller.go:189] starting controller iteration\nI0804 23:06:12.377307    4200 controller.go:266] Broadcasting leadership assertion with token \"YBoi07m4jsmpRzCrHBu26A\"\nI0804 23:06:12.377580    4200 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-a\" endpoints:\"172.20.63.249:3996\" > leadership_token:\"YBoi07m4jsmpRzCrHBu26A\" healthy:<id:\"etcd-a\" endpoints:\"172.20.63.249:3996\" > > \nI0804 23:06:12.377704    4200 
controller.go:295] I am leader with token \"YBoi07m4jsmpRzCrHBu26A\"\nI0804 23:06:12.378187    4200 controller.go:705] base client OK for etcd for client urls [https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4001]\nI0804 23:06:12.392303    4200 controller.go:302] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-a\",\"peerURLs\":[\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2380\"],\"endpoints\":[\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4001\"],\"ID\":\"9852432698371673939\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-a\" endpoints:\"172.20.63.249:3996\" }, info=cluster_name:\"etcd\" node_configuration:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2380\" client_urls:\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:3994\" > etcd_state:<cluster:<cluster_token:\"TA6kpzqEegCfGTze0ClELA\" nodes:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2380\" client_urls:\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:3994\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0804 23:06:12.392538    4200 controller.go:303] etcd cluster members: map[9852432698371673939:{\"name\":\"etcd-a\",\"peerURLs\":[\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2380\"],\"endpoints\":[\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4001\"],\"ID\":\"9852432698371673939\"}]\nI0804 23:06:12.392575    4200 controller.go:641] sending member map to all peers: members:<name:\"etcd-a\" dns:\"etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io\" addresses:\"172.20.63.249\" > \nI0804 23:06:12.392812    4200 etcdserver.go:248] updating hosts: map[172.20.63.249:[etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]]\nI0804 23:06:12.392830    4200 hosts.go:84] hosts update: primary=map[172.20.63.249:[etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]], fallbacks=map[etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:[172.20.63.249 172.20.63.249]], final=map[172.20.63.249:[etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]]\nI0804 23:06:12.392881    4200 hosts.go:181] skipping update of unchanged /etc/hosts\nI0804 23:06:12.392973    4200 commands.go:38] not refreshing commands - TTL not hit\nI0804 23:06:12.392985    4200 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-489af555f9-bbb74.test-cncf-aws.k8s.io/backups/etcd/main/control/etcd-cluster-created\"\nI0804 23:06:12.959339    4200 controller.go:395] spec member_count:1 etcd_version:\"3.4.13\" \nI0804 23:06:12.959414    4200 controller.go:557] controller loop complete\nI0804 23:06:22.961101    4200 controller.go:189] starting controller iteration\nI0804 23:06:22.961142    4200 controller.go:266] Broadcasting leadership assertion with token \"YBoi07m4jsmpRzCrHBu26A\"\nI0804 23:06:22.961402    4200 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-a\" endpoints:\"172.20.63.249:3996\" > leadership_token:\"YBoi07m4jsmpRzCrHBu26A\" healthy:<id:\"etcd-a\" endpoints:\"172.20.63.249:3996\" > > \nI0804 23:06:22.961544    4200 controller.go:295] I am leader with token \"YBoi07m4jsmpRzCrHBu26A\"\nI0804 23:06:22.961886    4200 controller.go:705] base client OK for etcd for 
client urls [https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4001]\nI0804 23:06:22.973131    4200 controller.go:302] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-a\",\"peerURLs\":[\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2380\"],\"endpoints\":[\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4001\"],\"ID\":\"9852432698371673939\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-a\" endpoints:\"172.20.63.249:3996\" }, info=cluster_name:\"etcd\" node_configuration:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2380\" client_urls:\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:3994\" > etcd_state:<cluster:<cluster_token:\"TA6kpzqEegCfGTze0ClELA\" nodes:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2380\" client_urls:\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:3994\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0804 23:06:22.973219    4200 controller.go:303] etcd cluster members: map[9852432698371673939:{\"name\":\"etcd-a\",\"peerURLs\":[\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2380\"],\"endpoints\":[\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4001\"],\"ID\":\"9852432698371673939\"}]\nI0804 23:06:22.973234    4200 controller.go:641] sending member map to all peers: members:<name:\"etcd-a\" dns:\"etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io\" addresses:\"172.20.63.249\" > \nI0804 23:06:22.973433    4200 etcdserver.go:248] updating hosts: map[172.20.63.249:[etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]]\nI0804 23:06:22.973451    4200 hosts.go:84] hosts update: primary=map[172.20.63.249:[etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]], fallbacks=map[etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:[172.20.63.249 172.20.63.249]], final=map[172.20.63.249:[etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]]\nI0804 23:06:22.973506    4200 hosts.go:181] skipping update of unchanged /etc/hosts\nI0804 23:06:22.973588    4200 commands.go:38] not refreshing commands - TTL not hit\nI0804 23:06:22.973604    4200 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-489af555f9-bbb74.test-cncf-aws.k8s.io/backups/etcd/main/control/etcd-cluster-created\"\nI0804 23:06:23.539560    4200 controller.go:395] spec member_count:1 etcd_version:\"3.4.13\" \nI0804 23:06:23.539633    4200 controller.go:557] controller loop complete\nI0804 23:06:33.540800    4200 controller.go:189] starting controller iteration\nI0804 23:06:33.540841    4200 controller.go:266] Broadcasting leadership assertion with token \"YBoi07m4jsmpRzCrHBu26A\"\nI0804 23:06:33.541191    4200 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-a\" endpoints:\"172.20.63.249:3996\" > leadership_token:\"YBoi07m4jsmpRzCrHBu26A\" healthy:<id:\"etcd-a\" endpoints:\"172.20.63.249:3996\" > > \nI0804 23:06:33.541371    4200 controller.go:295] I am leader with token \"YBoi07m4jsmpRzCrHBu26A\"\nI0804 23:06:33.542277    4200 controller.go:705] base client OK for etcd for client urls [https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4001]\nI0804 23:06:33.554888    4200 controller.go:302] etcd cluster 
state: etcdClusterState\n  members:\n    {\"name\":\"etcd-a\",\"peerURLs\":[\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2380\"],\"endpoints\":[\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4001\"],\"ID\":\"9852432698371673939\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-a\" endpoints:\"172.20.63.249:3996\" }, info=cluster_name:\"etcd\" node_configuration:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2380\" client_urls:\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:3994\" > etcd_state:<cluster:<cluster_token:\"TA6kpzqEegCfGTze0ClELA\" nodes:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2380\" client_urls:\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:3994\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0804 23:06:33.555000    4200 controller.go:303] etcd cluster members: map[9852432698371673939:{\"name\":\"etcd-a\",\"peerURLs\":[\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:2380\"],\"endpoints\":[\"https://etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:4001\"],\"ID\":\"9852432698371673939\"}]\nI0804 23:06:33.555017    4200 controller.go:641] sending member map to all peers: members:<name:\"etcd-a\" dns:\"etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io\" addresses:\"172.20.63.249\" > \nI0804 23:06:33.555301    4200 etcdserver.go:248] updating hosts: map[172.20.63.249:[etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]]\nI0804 23:06:33.555353    4200 hosts.go:84] hosts update: primary=map[172.20.63.249:[etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]], fallbacks=map[etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io:[172.20.63.249 172.20.63.249]], final=map[172.20.63.249:[etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]]\nI0804 23:06:33.555438    4200 hosts.go:181] skipping update of unchanged /etc/hosts\nI0804 23:06:33.555571    4200 commands.go:
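
The "I0804 23:03:22.966897    4200 controller.go:295] ..." lines above use the standard klog/glog header (severity + MMDD date, time, pid, source file:line, message). If you need to diff or group the repeated iterations, the quickest route is to parse each line into fields first. A minimal sketch in Go follows; it is a hypothetical helper, not part of kops, etcd-manager, or this job's tooling:

// klogparse.go - illustrative only: split a klog-style line into its fields
// so repeated controller iterations can be grouped by source line.
package main

import (
	"fmt"
	"regexp"
)

// severity, MMDD date, time, pid, file.go:line, message
var klogLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+) +(\d+) ([^\]]+)\] (.*)$`)

func main() {
	line := `I0804 23:03:22.966897    4200 controller.go:295] I am leader with token "YBoi07m4jsmpRzCrHBu26A"`
	if m := klogLine.FindStringSubmatch(line); m != nil {
		fmt.Printf("sev=%s date=%s time=%s pid=%s src=%s msg=%s\n",
			m[1], m[2], m[3], m[4], m[5], m[6])
	}
	// Output: sev=I date=0804 time=23:03:22.966897 pid=4200 src=controller.go:295 msg=I am leader with token "YBoi07m4jsmpRzCrHBu26A"
}

Grouping by the src field (e.g. controller.go:302) is the fastest way to confirm that only the timestamps change from one iteration to the next.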
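
The hosts.go:84 lines also show why every iteration ends in "skipping update of unchanged /etc/hosts": the primary map (IP -> hostnames that etcd-manager itself manages) takes precedence, the fallback hostname -> IP entries (172.20.63.249 listed twice) contribute nothing new, and the final map comes out identical each time. A rough Go sketch of that merge follows; the function names are invented, and this illustrates the behavior the log suggests, not etcd-manager's actual hosts.go:

// hostsmerge.go - illustrative only: primary entries win, fallback entries
// only add IPs that primary does not already cover, duplicates collapse.
package main

import "fmt"

func mergeHosts(primary, fallbacks map[string][]string) map[string][]string {
	final := map[string][]string{}
	for ip, names := range primary {
		final[ip] = append([]string{}, names...) // copy; primary always wins
	}
	for name, ips := range fallbacks {
		for _, ip := range ips {
			if _, managed := primary[ip]; managed {
				continue // primary already covers this IP
			}
			final[ip] = appendUnique(final[ip], name) // collapses repeated fallback IPs
		}
	}
	return final
}

func appendUnique(list []string, s string) []string {
	for _, v := range list {
		if v == s {
			return list
		}
	}
	return append(list, s)
}

func main() {
	primary := map[string][]string{
		"172.20.63.249": {"etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io"},
	}
	fallbacks := map[string][]string{
		"etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io": {"172.20.63.249", "172.20.63.249"},
	}
	fmt.Println(mergeHosts(primary, fallbacks))
	// map[172.20.63.249:[etcd-a.internal.e2e-489af555f9-bbb74.test-cncf-aws.k8s.io]]
}

With the values from the log, mergeHosts returns exactly the final= map printed at hosts.go:84, so writing /etc/hosts would be a no-op, which is what the repeated hosts.go:181 message records.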