Result: FAILURE
Tests: 0 failed / 0 succeeded
Started: 2021-09-10 05:32
Elapsed: 31m38s
Revision: master

No Test Failures!


Error lines from build-log.txt

... skipping 124 lines ...
I0910 05:32:55.598984    4128 http.go:37] curl https://storage.googleapis.com/kops-ci/markers/release-1.22/latest-ci-updown-green.txt
I0910 05:32:55.624210    4128 http.go:37] curl https://storage.googleapis.com/k8s-staging-kops/kops/releases/1.22.0-beta.2+v1.22.0-beta.1-38-g85f98ed240/linux/amd64/kops
I0910 05:32:57.109128    4128 up.go:43] Cleaning up any leaked resources from previous cluster
I0910 05:32:57.109177    4128 dumplogs.go:38] /logs/artifacts/5369616d-11f8-11ec-8593-5a8f5123e079/kops toolbox dump --name e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /etc/aws-ssh/aws-ssh-private --ssh-user ubuntu
I0910 05:32:57.127106    4151 featureflag.go:166] FeatureFlag "SpecOverrideFlag"=true
I0910 05:32:57.127241    4151 featureflag.go:166] FeatureFlag "AlphaAllowGCE"=true
Error: Cluster.kops.k8s.io "e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io" not found
W0910 05:32:57.620583    4128 down.go:34] Dumping cluster logs at the start of Down() failed: exit status 1
I0910 05:32:57.620628    4128 down.go:48] /logs/artifacts/5369616d-11f8-11ec-8593-5a8f5123e079/kops delete cluster --name e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io --yes
I0910 05:32:57.637926    4162 featureflag.go:166] FeatureFlag "SpecOverrideFlag"=true
I0910 05:32:57.638790    4162 featureflag.go:166] FeatureFlag "AlphaAllowGCE"=true
Error: error reading cluster configuration: Cluster.kops.k8s.io "e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io" not found
I0910 05:32:58.326471    4128 http.go:37] curl http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip
2021/09/10 05:32:58 failed to get external ip from metadata service: http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip returned 404
I0910 05:32:58.335141    4128 http.go:37] curl https://ip.jsb.workers.dev
I0910 05:32:58.421478    4128 up.go:144] /logs/artifacts/5369616d-11f8-11ec-8593-5a8f5123e079/kops create cluster --name e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io --cloud aws --kubernetes-version https://storage.googleapis.com/kubernetes-release/release/v1.21.4 --ssh-public-key /etc/aws-ssh/aws-ssh-public --override cluster.spec.nodePortAccess=0.0.0.0/0 --yes --image=099720109477/ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-20210825 --channel=alpha --networking=kubenet --container-runtime=containerd --admin-access 34.70.122.141/32 --master-count 1 --master-volume-size 48 --node-count 4 --node-volume-size 48 --zones us-west-2a --master-size c5.large
I0910 05:32:58.440707    4171 featureflag.go:166] FeatureFlag "SpecOverrideFlag"=true
I0910 05:32:58.440835    4171 featureflag.go:166] FeatureFlag "AlphaAllowGCE"=true
I0910 05:32:58.469775    4171 create_cluster.go:827] Using SSH public key: /etc/aws-ssh/aws-ssh-public
I0910 05:32:59.086180    4171 new_cluster.go:1055]  Cloud Provider ID = aws
... skipping 41 lines ...

I0910 05:33:23.739040    4128 up.go:181] /logs/artifacts/5369616d-11f8-11ec-8593-5a8f5123e079/kops validate cluster --name e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io --count 10 --wait 15m0s
I0910 05:33:23.759582    4188 featureflag.go:166] FeatureFlag "SpecOverrideFlag"=true
I0910 05:33:23.759747    4188 featureflag.go:166] FeatureFlag "AlphaAllowGCE"=true
Validating cluster e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io

W0910 05:33:24.910025    4188 validate_cluster.go:184] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
W0910 05:33:34.947953    4188 validate_cluster.go:184] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.medium	4	4	us-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0910 05:33:44.983417    4188 validate_cluster.go:232] (will retry): cluster not yet healthy
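The message above describes the expected bootstrap sequence: dns-controller replaces the kops placeholder record (203.0.113.123) with the real control-plane IP once the master is up. A minimal diagnostic sketch for this situation, assuming shell access with kubectl configured against this cluster and reusing the cluster name from the log (both assumptions, not commands taken from this run):

  # Does the API record still resolve to the kops placeholder (or not at all)?
  dig +short api.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io

  # dns-controller is the deployment responsible for updating that record
  kubectl -n kube-system logs deployment/dns-controller --tail=50

If the record never leaves the placeholder address, the dns-controller and protokube logs on the master are the usual places to look, as the validation message suggests.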
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.medium	4	4	us-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0910 05:33:55.015445    4188 validate_cluster.go:232] (will retry): cluster not yet healthy
W0910 05:34:05.057754    4188 validate_cluster.go:184] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.medium	4	4	us-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0910 05:34:15.091657    4188 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.medium	4	4	us-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0910 05:34:25.130158    4188 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.medium	4	4	us-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0910 05:34:35.205978    4188 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.medium	4	4	us-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0910 05:34:45.387618    4188 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.medium	4	4	us-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0910 05:34:55.418834    4188 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.medium	4	4	us-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0910 05:35:05.452970    4188 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.medium	4	4	us-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0910 05:35:15.514056    4188 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.medium	4	4	us-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0910 05:35:25.551103    4188 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.medium	4	4	us-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0910 05:35:35.581642    4188 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.medium	4	4	us-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0910 05:35:45.628495    4188 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.medium	4	4	us-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0910 05:35:55.658882    4188 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.medium	4	4	us-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0910 05:36:05.713164    4188 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.medium	4	4	us-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0910 05:36:15.787051    4188 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.medium	4	4	us-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0910 05:36:25.839609    4188 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.medium	4	4	us-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0910 05:36:35.875841    4188 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.medium	4	4	us-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0910 05:36:46.277812    4188 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.medium	4	4	us-west-2a

... skipping 11 lines ...
Node	ip-172-20-37-76.us-west-2.compute.internal				node "ip-172-20-37-76.us-west-2.compute.internal" of role "node" is not ready
Pod	kube-system/coredns-5dc785954d-26bxp					system-cluster-critical pod "coredns-5dc785954d-26bxp" is pending
Pod	kube-system/coredns-autoscaler-84d4cfd89c-d4f4h				system-cluster-critical pod "coredns-autoscaler-84d4cfd89c-d4f4h" is pending
Pod	kube-system/kube-proxy-ip-172-20-34-221.us-west-2.compute.internal	system-node-critical pod "kube-proxy-ip-172-20-34-221.us-west-2.compute.internal" is pending
Pod	kube-system/kube-proxy-ip-172-20-56-165.us-west-2.compute.internal	system-node-critical pod "kube-proxy-ip-172-20-56-165.us-west-2.compute.internal" is pending

Validation Failed
W0910 05:36:58.436656    4188 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.medium	4	4	us-west-2a

... skipping 8 lines ...
VALIDATION ERRORS
KIND	NAME						MESSAGE
Node	ip-172-20-37-129.us-west-2.compute.internal	master "ip-172-20-37-129.us-west-2.compute.internal" is missing kube-controller-manager pod
Pod	kube-system/coredns-5dc785954d-26bxp		system-cluster-critical pod "coredns-5dc785954d-26bxp" is pending
Pod	kube-system/coredns-autoscaler-84d4cfd89c-d4f4h	system-cluster-critical pod "coredns-autoscaler-84d4cfd89c-d4f4h" is pending

Validation Failed
W0910 05:37:09.926273    4188 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.medium	4	4	us-west-2a

... skipping 6 lines ...
ip-172-20-56-165.us-west-2.compute.internal	node	True

VALIDATION ERRORS
KIND	NAME						MESSAGE
Node	ip-172-20-37-129.us-west-2.compute.internal	master "ip-172-20-37-129.us-west-2.compute.internal" is missing kube-controller-manager pod

Validation Failed
W0910 05:37:21.401051    4188 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.medium	4	4	us-west-2a

... skipping 6 lines ...
ip-172-20-56-165.us-west-2.compute.internal	node	True

VALIDATION ERRORS
KIND	NAME						MESSAGE
Node	ip-172-20-37-129.us-west-2.compute.internal	master "ip-172-20-37-129.us-west-2.compute.internal" is missing kube-controller-manager pod

Validation Failed
W0910 05:37:32.818690    4188 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	c5.large	1	1	us-west-2a
nodes-us-west-2a	Node	t3.medium	4	4	us-west-2a

... skipping 299 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: hostPath]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver hostPath doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 325 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: gluster]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for node OS distro [gci ubuntu custom] (not debian)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:263
------------------------------
... skipping 327 lines ...
[AfterEach] [sig-api-machinery] client-go should negotiate
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 10 05:39:54.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready

•
------------------------------
{"msg":"PASSED [sig-api-machinery] client-go should negotiate watch and report errors with accept \"application/vnd.kubernetes.protobuf\"","total":-1,"completed":1,"skipped":15,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 10 05:39:54.817: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 92 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 10 05:39:55.707: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7855" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes control plane services is included in cluster-info  [Conformance]","total":-1,"completed":1,"skipped":3,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 10 05:39:55.923: INFO: Only supported for providers [vsphere] (not aws)
... skipping 36 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 10 05:39:57.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-8911" for this suite.

•
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should guarantee kube-root-ca.crt exist in any namespace [Conformance]","total":-1,"completed":1,"skipped":16,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 10 05:39:58.156: INFO: Only supported for providers [gce gke] (not aws)
... skipping 23 lines ...
W0910 05:39:54.669083    4908 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Sep 10 05:39:54.669: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser
Sep 10 05:39:54.874: INFO: Waiting up to 5m0s for pod "security-context-8c355c39-fd22-4113-8240-f56c80dec4ad" in namespace "security-context-914" to be "Succeeded or Failed"
Sep 10 05:39:54.937: INFO: Pod "security-context-8c355c39-fd22-4113-8240-f56c80dec4ad": Phase="Pending", Reason="", readiness=false. Elapsed: 63.144297ms
Sep 10 05:39:57.001: INFO: Pod "security-context-8c355c39-fd22-4113-8240-f56c80dec4ad": Phase="Pending", Reason="", readiness=false. Elapsed: 2.127207981s
Sep 10 05:39:59.065: INFO: Pod "security-context-8c355c39-fd22-4113-8240-f56c80dec4ad": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.191221624s
STEP: Saw pod success
Sep 10 05:39:59.065: INFO: Pod "security-context-8c355c39-fd22-4113-8240-f56c80dec4ad" satisfied condition "Succeeded or Failed"
Sep 10 05:39:59.129: INFO: Trying to get logs from node ip-172-20-38-104.us-west-2.compute.internal pod security-context-8c355c39-fd22-4113-8240-f56c80dec4ad container test-container: <nil>
STEP: delete the pod
Sep 10 05:39:59.306: INFO: Waiting for pod security-context-8c355c39-fd22-4113-8240-f56c80dec4ad to disappear
Sep 10 05:39:59.370: INFO: Pod security-context-8c355c39-fd22-4113-8240-f56c80dec4ad no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:5.184 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","total":-1,"completed":1,"skipped":1,"failed":0}
[BeforeEach] [sig-api-machinery] ServerSideApply
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 10 05:39:59.588: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename apply
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 7 lines ...
STEP: Destroying namespace "apply-4512" for this suite.
[AfterEach] [sig-api-machinery] ServerSideApply
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/apply.go:56

•
------------------------------
{"msg":"PASSED [sig-api-machinery] ServerSideApply should remove a field if it is owned but removed in the apply request","total":-1,"completed":2,"skipped":1,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 3 lines ...
Sep 10 05:39:56.015: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-test-volume-map-7154fd40-acc5-4756-b0d0-7ff63c5d0aa4
STEP: Creating a pod to test consume configMaps
Sep 10 05:39:56.285: INFO: Waiting up to 5m0s for pod "pod-configmaps-94080672-4714-41b4-99f4-c2940a236c45" in namespace "configmap-5315" to be "Succeeded or Failed"
Sep 10 05:39:56.348: INFO: Pod "pod-configmaps-94080672-4714-41b4-99f4-c2940a236c45": Phase="Pending", Reason="", readiness=false. Elapsed: 63.12145ms
Sep 10 05:39:58.416: INFO: Pod "pod-configmaps-94080672-4714-41b4-99f4-c2940a236c45": Phase="Pending", Reason="", readiness=false. Elapsed: 2.131069189s
Sep 10 05:40:00.485: INFO: Pod "pod-configmaps-94080672-4714-41b4-99f4-c2940a236c45": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.200214327s
STEP: Saw pod success
Sep 10 05:40:00.485: INFO: Pod "pod-configmaps-94080672-4714-41b4-99f4-c2940a236c45" satisfied condition "Succeeded or Failed"
Sep 10 05:40:00.548: INFO: Trying to get logs from node ip-172-20-34-221.us-west-2.compute.internal pod pod-configmaps-94080672-4714-41b4-99f4-c2940a236c45 container agnhost-container: <nil>
STEP: delete the pod
Sep 10 05:40:01.387: INFO: Waiting for pod pod-configmaps-94080672-4714-41b4-99f4-c2940a236c45 to disappear
Sep 10 05:40:01.451: INFO: Pod pod-configmaps-94080672-4714-41b4-99f4-c2940a236c45 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:7.112 seconds]
[sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":9,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 2 lines ...
W0910 05:39:54.683018    4851 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Sep 10 05:39:54.683: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0666 on node default medium
Sep 10 05:39:54.872: INFO: Waiting up to 5m0s for pod "pod-4792a81b-3d0e-4a17-a0e2-d98ab330bf7d" in namespace "emptydir-7902" to be "Succeeded or Failed"
Sep 10 05:39:54.932: INFO: Pod "pod-4792a81b-3d0e-4a17-a0e2-d98ab330bf7d": Phase="Pending", Reason="", readiness=false. Elapsed: 59.521405ms
Sep 10 05:39:56.994: INFO: Pod "pod-4792a81b-3d0e-4a17-a0e2-d98ab330bf7d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.121573621s
Sep 10 05:39:59.055: INFO: Pod "pod-4792a81b-3d0e-4a17-a0e2-d98ab330bf7d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.182887489s
Sep 10 05:40:01.119: INFO: Pod "pod-4792a81b-3d0e-4a17-a0e2-d98ab330bf7d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.246843379s
STEP: Saw pod success
Sep 10 05:40:01.119: INFO: Pod "pod-4792a81b-3d0e-4a17-a0e2-d98ab330bf7d" satisfied condition "Succeeded or Failed"
Sep 10 05:40:01.182: INFO: Trying to get logs from node ip-172-20-34-221.us-west-2.compute.internal pod pod-4792a81b-3d0e-4a17-a0e2-d98ab330bf7d container test-container: <nil>
STEP: delete the pod
Sep 10 05:40:02.208: INFO: Waiting for pod pod-4792a81b-3d0e-4a17-a0e2-d98ab330bf7d to disappear
Sep 10 05:40:02.268: INFO: Pod pod-4792a81b-3d0e-4a17-a0e2-d98ab330bf7d no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:8.017 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":3,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [sig-instrumentation] MetricsGrabber
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 11 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 10 05:40:02.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "metrics-grabber-3312" for this suite.

•
------------------------------
{"msg":"PASSED [sig-instrumentation] MetricsGrabber should grab all metrics from a Scheduler.","total":-1,"completed":2,"skipped":9,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 10 05:40:03.133: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
... skipping 161 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Sep 10 05:39:54.883: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a1c140fb-1f1b-426b-807a-dec82d890b39" in namespace "downward-api-6132" to be "Succeeded or Failed"
Sep 10 05:39:54.945: INFO: Pod "downwardapi-volume-a1c140fb-1f1b-426b-807a-dec82d890b39": Phase="Pending", Reason="", readiness=false. Elapsed: 61.716457ms
Sep 10 05:39:57.011: INFO: Pod "downwardapi-volume-a1c140fb-1f1b-426b-807a-dec82d890b39": Phase="Pending", Reason="", readiness=false. Elapsed: 2.127938452s
Sep 10 05:39:59.074: INFO: Pod "downwardapi-volume-a1c140fb-1f1b-426b-807a-dec82d890b39": Phase="Pending", Reason="", readiness=false. Elapsed: 4.190347374s
Sep 10 05:40:01.137: INFO: Pod "downwardapi-volume-a1c140fb-1f1b-426b-807a-dec82d890b39": Phase="Pending", Reason="", readiness=false. Elapsed: 6.253295658s
Sep 10 05:40:03.204: INFO: Pod "downwardapi-volume-a1c140fb-1f1b-426b-807a-dec82d890b39": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.320103227s
STEP: Saw pod success
Sep 10 05:40:03.204: INFO: Pod "downwardapi-volume-a1c140fb-1f1b-426b-807a-dec82d890b39" satisfied condition "Succeeded or Failed"
Sep 10 05:40:03.270: INFO: Trying to get logs from node ip-172-20-56-165.us-west-2.compute.internal pod downwardapi-volume-a1c140fb-1f1b-426b-807a-dec82d890b39 container client-container: <nil>
STEP: delete the pod
Sep 10 05:40:03.417: INFO: Waiting for pod downwardapi-volume-a1c140fb-1f1b-426b-807a-dec82d890b39 to disappear
Sep 10 05:40:03.479: INFO: Pod downwardapi-volume-a1c140fb-1f1b-426b-807a-dec82d890b39 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:9.248 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":0,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 10 05:40:03.686: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
... skipping 78 lines ...
• [SLOW TEST:9.475 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":-1,"completed":1,"skipped":1,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 10 05:40:03.869: INFO: Only supported for providers [azure] (not aws)
... skipping 95 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating projection with secret that has name projected-secret-test-map-e799a6d2-37a9-44dc-af31-6b84df04bd78
STEP: Creating a pod to test consume secrets
Sep 10 05:39:58.636: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-11127895-1825-4983-ae21-df42cdf47e38" in namespace "projected-3857" to be "Succeeded or Failed"
Sep 10 05:39:58.701: INFO: Pod "pod-projected-secrets-11127895-1825-4983-ae21-df42cdf47e38": Phase="Pending", Reason="", readiness=false. Elapsed: 65.458554ms
Sep 10 05:40:00.767: INFO: Pod "pod-projected-secrets-11127895-1825-4983-ae21-df42cdf47e38": Phase="Pending", Reason="", readiness=false. Elapsed: 2.131626044s
Sep 10 05:40:02.834: INFO: Pod "pod-projected-secrets-11127895-1825-4983-ae21-df42cdf47e38": Phase="Pending", Reason="", readiness=false. Elapsed: 4.198341325s
Sep 10 05:40:04.916: INFO: Pod "pod-projected-secrets-11127895-1825-4983-ae21-df42cdf47e38": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.280119032s
STEP: Saw pod success
Sep 10 05:40:04.916: INFO: Pod "pod-projected-secrets-11127895-1825-4983-ae21-df42cdf47e38" satisfied condition "Succeeded or Failed"
Sep 10 05:40:04.985: INFO: Trying to get logs from node ip-172-20-38-104.us-west-2.compute.internal pod pod-projected-secrets-11127895-1825-4983-ae21-df42cdf47e38 container projected-secret-volume-test: <nil>
STEP: delete the pod
Sep 10 05:40:05.126: INFO: Waiting for pod pod-projected-secrets-11127895-1825-4983-ae21-df42cdf47e38 to disappear
Sep 10 05:40:05.192: INFO: Pod pod-projected-secrets-11127895-1825-4983-ae21-df42cdf47e38 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:7.162 seconds]
[sig-storage] Projected secret
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":18,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 10 05:40:05.339: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 100 lines ...
Sep 10 05:39:58.079: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63766849197, loc:(*time.Location)(0x9de2b80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63766849197, loc:(*time.Location)(0x9de2b80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63766849197, loc:(*time.Location)(0x9de2b80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63766849197, loc:(*time.Location)(0x9de2b80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)}
Sep 10 05:40:00.140: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63766849197, loc:(*time.Location)(0x9de2b80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63766849197, loc:(*time.Location)(0x9de2b80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63766849197, loc:(*time.Location)(0x9de2b80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63766849197, loc:(*time.Location)(0x9de2b80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)}
Sep 10 05:40:02.140: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63766849197, loc:(*time.Location)(0x9de2b80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63766849197, loc:(*time.Location)(0x9de2b80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63766849197, loc:(*time.Location)(0x9de2b80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63766849197, loc:(*time.Location)(0x9de2b80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Sep 10 05:40:05.207: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should unconditionally reject operations on fail closed webhook [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API
STEP: create a namespace for the webhook
STEP: create a configmap should be unconditionally rejected by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 10 05:40:05.656: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6916" for this suite.
... skipping 2 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102


• [SLOW TEST:11.181 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should unconditionally reject operations on fail closed webhook [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":-1,"completed":2,"skipped":26,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 10 05:40:06.111: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 45 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name projected-configmap-test-volume-795064f6-cdc0-4555-ae82-b7f2878172cd
STEP: Creating a pod to test consume configMaps
Sep 10 05:40:03.791: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-2a258895-ae9e-40f0-b04e-e21a9bb2653e" in namespace "projected-327" to be "Succeeded or Failed"
Sep 10 05:40:03.851: INFO: Pod "pod-projected-configmaps-2a258895-ae9e-40f0-b04e-e21a9bb2653e": Phase="Pending", Reason="", readiness=false. Elapsed: 59.938302ms
Sep 10 05:40:05.914: INFO: Pod "pod-projected-configmaps-2a258895-ae9e-40f0-b04e-e21a9bb2653e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.123047807s
STEP: Saw pod success
Sep 10 05:40:05.914: INFO: Pod "pod-projected-configmaps-2a258895-ae9e-40f0-b04e-e21a9bb2653e" satisfied condition "Succeeded or Failed"
Sep 10 05:40:05.975: INFO: Trying to get logs from node ip-172-20-56-165.us-west-2.compute.internal pod pod-projected-configmaps-2a258895-ae9e-40f0-b04e-e21a9bb2653e container projected-configmap-volume-test: <nil>
STEP: delete the pod
Sep 10 05:40:06.101: INFO: Waiting for pod pod-projected-configmaps-2a258895-ae9e-40f0-b04e-e21a9bb2653e to disappear
Sep 10 05:40:06.160: INFO: Pod pod-projected-configmaps-2a258895-ae9e-40f0-b04e-e21a9bb2653e no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 38 lines ...
• [SLOW TEST:12.730 seconds]
[sig-api-machinery] Watchers
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":-1,"completed":1,"skipped":10,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 10 05:40:07.281: INFO: Only supported for providers [gce gke] (not aws)
... skipping 25 lines ...
W0910 05:39:54.660438    4921 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Sep 10 05:39:54.660: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test override command
Sep 10 05:39:54.873: INFO: Waiting up to 5m0s for pod "client-containers-051105ec-29d1-42f2-a9aa-ab91359671a4" in namespace "containers-7931" to be "Succeeded or Failed"
Sep 10 05:39:54.936: INFO: Pod "client-containers-051105ec-29d1-42f2-a9aa-ab91359671a4": Phase="Pending", Reason="", readiness=false. Elapsed: 63.218341ms
Sep 10 05:39:57.003: INFO: Pod "client-containers-051105ec-29d1-42f2-a9aa-ab91359671a4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.129622148s
Sep 10 05:39:59.067: INFO: Pod "client-containers-051105ec-29d1-42f2-a9aa-ab91359671a4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.194112383s
Sep 10 05:40:01.130: INFO: Pod "client-containers-051105ec-29d1-42f2-a9aa-ab91359671a4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.25735961s
Sep 10 05:40:03.198: INFO: Pod "client-containers-051105ec-29d1-42f2-a9aa-ab91359671a4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.324657311s
Sep 10 05:40:05.261: INFO: Pod "client-containers-051105ec-29d1-42f2-a9aa-ab91359671a4": Phase="Pending", Reason="", readiness=false. Elapsed: 10.388158172s
Sep 10 05:40:07.326: INFO: Pod "client-containers-051105ec-29d1-42f2-a9aa-ab91359671a4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.452451778s
STEP: Saw pod success
Sep 10 05:40:07.326: INFO: Pod "client-containers-051105ec-29d1-42f2-a9aa-ab91359671a4" satisfied condition "Succeeded or Failed"
Sep 10 05:40:07.391: INFO: Trying to get logs from node ip-172-20-37-76.us-west-2.compute.internal pod client-containers-051105ec-29d1-42f2-a9aa-ab91359671a4 container agnhost-container: <nil>
STEP: delete the pod
Sep 10 05:40:07.534: INFO: Waiting for pod client-containers-051105ec-29d1-42f2-a9aa-ab91359671a4 to disappear
Sep 10 05:40:07.603: INFO: Pod client-containers-051105ec-29d1-42f2-a9aa-ab91359671a4 no longer exists
[AfterEach] [sig-node] Docker Containers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:13.416 seconds]
[sig-node] Docker Containers
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":0,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 10 05:40:08.323: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194

      Driver local doesn't support InlineVolume -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":38,"failed":0}
[BeforeEach] [sig-node] Docker Containers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 10 05:40:06.292: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test override arguments
Sep 10 05:40:06.657: INFO: Waiting up to 5m0s for pod "client-containers-8b4ac5e1-0ead-4480-b2a1-10c3ce0c6b48" in namespace "containers-2151" to be "Succeeded or Failed"
Sep 10 05:40:06.720: INFO: Pod "client-containers-8b4ac5e1-0ead-4480-b2a1-10c3ce0c6b48": Phase="Pending", Reason="", readiness=false. Elapsed: 63.511108ms
Sep 10 05:40:08.786: INFO: Pod "client-containers-8b4ac5e1-0ead-4480-b2a1-10c3ce0c6b48": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.129368282s
STEP: Saw pod success
Sep 10 05:40:08.786: INFO: Pod "client-containers-8b4ac5e1-0ead-4480-b2a1-10c3ce0c6b48" satisfied condition "Succeeded or Failed"
Sep 10 05:40:08.858: INFO: Trying to get logs from node ip-172-20-56-165.us-west-2.compute.internal pod client-containers-8b4ac5e1-0ead-4480-b2a1-10c3ce0c6b48 container agnhost-container: <nil>
STEP: delete the pod
Sep 10 05:40:09.028: INFO: Waiting for pod client-containers-8b4ac5e1-0ead-4480-b2a1-10c3ce0c6b48 to disappear
Sep 10 05:40:09.095: INFO: Pod client-containers-8b4ac5e1-0ead-4480-b2a1-10c3ce0c6b48 no longer exists
[AfterEach] [sig-node] Docker Containers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 10 05:40:09.095: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-2151" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":38,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [sig-node] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 10 05:40:04.509: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test substitution in container's command
Sep 10 05:40:04.901: INFO: Waiting up to 5m0s for pod "var-expansion-51be246c-ce33-4207-b6d0-98fca6bcde11" in namespace "var-expansion-974" to be "Succeeded or Failed"
Sep 10 05:40:04.968: INFO: Pod "var-expansion-51be246c-ce33-4207-b6d0-98fca6bcde11": Phase="Pending", Reason="", readiness=false. Elapsed: 67.817273ms
Sep 10 05:40:07.032: INFO: Pod "var-expansion-51be246c-ce33-4207-b6d0-98fca6bcde11": Phase="Pending", Reason="", readiness=false. Elapsed: 2.131565501s
Sep 10 05:40:09.102: INFO: Pod "var-expansion-51be246c-ce33-4207-b6d0-98fca6bcde11": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.200942981s
STEP: Saw pod success
Sep 10 05:40:09.102: INFO: Pod "var-expansion-51be246c-ce33-4207-b6d0-98fca6bcde11" satisfied condition "Succeeded or Failed"
Sep 10 05:40:09.184: INFO: Trying to get logs from node ip-172-20-37-76.us-west-2.compute.internal pod var-expansion-51be246c-ce33-4207-b6d0-98fca6bcde11 container dapi-container: <nil>
STEP: delete the pod
Sep 10 05:40:09.324: INFO: Waiting for pod var-expansion-51be246c-ce33-4207-b6d0-98fca6bcde11 to disappear
Sep 10 05:40:09.391: INFO: Pod var-expansion-51be246c-ce33-4207-b6d0-98fca6bcde11 no longer exists
[AfterEach] [sig-node] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:5.023 seconds]
[sig-node] Variable Expansion
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":32,"failed":0}

SS
------------------------------
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 23 lines ...
• [SLOW TEST:13.760 seconds]
[sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":5,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 10 05:40:09.708: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 52 lines ...
• [SLOW TEST:6.455 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should serve multiport endpoints from pods  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods  [Conformance]","total":-1,"completed":2,"skipped":4,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 10 05:40:10.206: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 92 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 10 05:40:11.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "cronjob-4158" for this suite.

•
------------------------------
{"msg":"PASSED [sig-apps] CronJob should support CronJob API operations [Conformance]","total":-1,"completed":3,"skipped":34,"failed":0}

S
------------------------------
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 30 lines ...
• [SLOW TEST:16.953 seconds]
[sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be updated [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Pods should be updated [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":7,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 10 05:40:11.517: INFO: Only supported for providers [gce gke] (not aws)
... skipping 67 lines ...
Sep 10 05:40:07.321: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser
Sep 10 05:40:07.729: INFO: Waiting up to 5m0s for pod "security-context-42ab4914-7653-46a2-b688-54f85ec1b1ad" in namespace "security-context-6895" to be "Succeeded or Failed"
Sep 10 05:40:07.795: INFO: Pod "security-context-42ab4914-7653-46a2-b688-54f85ec1b1ad": Phase="Pending", Reason="", readiness=false. Elapsed: 66.577906ms
Sep 10 05:40:09.861: INFO: Pod "security-context-42ab4914-7653-46a2-b688-54f85ec1b1ad": Phase="Pending", Reason="", readiness=false. Elapsed: 2.13198086s
Sep 10 05:40:11.927: INFO: Pod "security-context-42ab4914-7653-46a2-b688-54f85ec1b1ad": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.198472473s
STEP: Saw pod success
Sep 10 05:40:11.927: INFO: Pod "security-context-42ab4914-7653-46a2-b688-54f85ec1b1ad" satisfied condition "Succeeded or Failed"
Sep 10 05:40:11.993: INFO: Trying to get logs from node ip-172-20-56-165.us-west-2.compute.internal pod security-context-42ab4914-7653-46a2-b688-54f85ec1b1ad container test-container: <nil>
STEP: delete the pod
Sep 10 05:40:12.128: INFO: Waiting for pod security-context-42ab4914-7653-46a2-b688-54f85ec1b1ad to disappear
Sep 10 05:40:12.193: INFO: Pod security-context-42ab4914-7653-46a2-b688-54f85ec1b1ad no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:5.024 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Security Context should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","total":-1,"completed":2,"skipped":21,"failed":0}

S
------------------------------
[BeforeEach] [sig-node] Lease
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 10 05:40:12.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "lease-test-9896" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Lease lease API should be available [Conformance]","total":-1,"completed":2,"skipped":19,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 10 05:40:12.964: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 244 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Guestbook application
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:336
    should create and stop a working application  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","total":-1,"completed":1,"skipped":4,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 10 05:40:13.042: INFO: Driver csi-hostpath doesn't support InlineVolume -- skipping
... skipping 110 lines ...
STEP: Destroying namespace "services-9945" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750

•
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: tmpfs] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":1,"skipped":20,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 10 05:40:13.355: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 9 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196

      Driver local doesn't support InlineVolume -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"PASSED [sig-network] Services should prevent NodePort collisions","total":-1,"completed":3,"skipped":22,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 10 05:40:13.356: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 223 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 103 lines ...
• [SLOW TEST:13.842 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":-1,"completed":3,"skipped":4,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 10 05:40:14.441: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 38 lines ...
• [SLOW TEST:5.618 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should update annotations on modification [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":45,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 17 lines ...
Sep 10 05:40:04.146: INFO: PersistentVolumeClaim pvc-jgzmt found but phase is Pending instead of Bound.
Sep 10 05:40:06.212: INFO: PersistentVolumeClaim pvc-jgzmt found and phase=Bound (2.135339568s)
Sep 10 05:40:06.212: INFO: Waiting up to 3m0s for PersistentVolume local-xz2qb to have phase Bound
Sep 10 05:40:06.276: INFO: PersistentVolume local-xz2qb found and phase=Bound (64.279849ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-rhvl
STEP: Creating a pod to test subpath
Sep 10 05:40:06.470: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-rhvl" in namespace "provisioning-5118" to be "Succeeded or Failed"
Sep 10 05:40:06.534: INFO: Pod "pod-subpath-test-preprovisionedpv-rhvl": Phase="Pending", Reason="", readiness=false. Elapsed: 63.751158ms
Sep 10 05:40:08.600: INFO: Pod "pod-subpath-test-preprovisionedpv-rhvl": Phase="Pending", Reason="", readiness=false. Elapsed: 2.130058368s
Sep 10 05:40:10.672: INFO: Pod "pod-subpath-test-preprovisionedpv-rhvl": Phase="Pending", Reason="", readiness=false. Elapsed: 4.202094848s
Sep 10 05:40:12.777: INFO: Pod "pod-subpath-test-preprovisionedpv-rhvl": Phase="Pending", Reason="", readiness=false. Elapsed: 6.30725214s
Sep 10 05:40:14.865: INFO: Pod "pod-subpath-test-preprovisionedpv-rhvl": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.394656685s
STEP: Saw pod success
Sep 10 05:40:14.865: INFO: Pod "pod-subpath-test-preprovisionedpv-rhvl" satisfied condition "Succeeded or Failed"
Sep 10 05:40:14.933: INFO: Trying to get logs from node ip-172-20-38-104.us-west-2.compute.internal pod pod-subpath-test-preprovisionedpv-rhvl container test-container-subpath-preprovisionedpv-rhvl: <nil>
STEP: delete the pod
Sep 10 05:40:15.134: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-rhvl to disappear
Sep 10 05:40:15.209: INFO: Pod pod-subpath-test-preprovisionedpv-rhvl no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-rhvl
Sep 10 05:40:15.209: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-rhvl" in namespace "provisioning-5118"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:384
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":1,"skipped":15,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 2 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Sep 10 05:40:11.716: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a054bb28-ca91-4446-a81c-7bf79588e57b" in namespace "downward-api-5116" to be "Succeeded or Failed"
Sep 10 05:40:11.780: INFO: Pod "downwardapi-volume-a054bb28-ca91-4446-a81c-7bf79588e57b": Phase="Pending", Reason="", readiness=false. Elapsed: 63.353179ms
Sep 10 05:40:13.847: INFO: Pod "downwardapi-volume-a054bb28-ca91-4446-a81c-7bf79588e57b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.130798102s
Sep 10 05:40:15.914: INFO: Pod "downwardapi-volume-a054bb28-ca91-4446-a81c-7bf79588e57b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.197577285s
STEP: Saw pod success
Sep 10 05:40:15.914: INFO: Pod "downwardapi-volume-a054bb28-ca91-4446-a81c-7bf79588e57b" satisfied condition "Succeeded or Failed"
Sep 10 05:40:15.977: INFO: Trying to get logs from node ip-172-20-56-165.us-west-2.compute.internal pod downwardapi-volume-a054bb28-ca91-4446-a81c-7bf79588e57b container client-container: <nil>
STEP: delete the pod
Sep 10 05:40:16.111: INFO: Waiting for pod downwardapi-volume-a054bb28-ca91-4446-a81c-7bf79588e57b to disappear
Sep 10 05:40:16.176: INFO: Pod downwardapi-volume-a054bb28-ca91-4446-a81c-7bf79588e57b no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 10 05:40:16.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5116" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":35,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 10 05:40:16.334: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 28 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-link-bindmounted]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 16 lines ...
Sep 10 05:40:15.948: INFO: Creating a PV followed by a PVC
Sep 10 05:40:16.075: INFO: Waiting for PV local-pv85tgb to bind to PVC pvc-w8ljh
Sep 10 05:40:16.075: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-w8ljh] to have phase Bound
Sep 10 05:40:16.138: INFO: PersistentVolumeClaim pvc-w8ljh found and phase=Bound (62.457524ms)
Sep 10 05:40:16.138: INFO: Waiting up to 3m0s for PersistentVolume local-pv85tgb to have phase Bound
Sep 10 05:40:16.200: INFO: PersistentVolume local-pv85tgb found and phase=Bound (61.831922ms)
[It] should fail scheduling due to different NodeSelector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:379
STEP: local-volume-type: dir
STEP: Initializing test volumes
Sep 10 05:40:16.334: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-8e205d79-9f3a-4b4a-9118-a2ccdd228c8f] Namespace:persistent-local-volumes-test-9250 PodName:hostexec-ip-172-20-56-165.us-west-2.compute.internal-2l2rm ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Sep 10 05:40:16.334: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating local PVCs and PVs
... skipping 22 lines ...

• [SLOW TEST:8.045 seconds]
[sig-storage] PersistentVolumes-local 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Pod with node different from PV's NodeAffinity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:347
    should fail scheduling due to different NodeSelector
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:379
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 10 05:40:13.423: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir volume type on tmpfs
Sep 10 05:40:13.779: INFO: Waiting up to 5m0s for pod "pod-f3c333ff-0234-4d2a-aff0-29debb90d6ff" in namespace "emptydir-3604" to be "Succeeded or Failed"
Sep 10 05:40:13.845: INFO: Pod "pod-f3c333ff-0234-4d2a-aff0-29debb90d6ff": Phase="Pending", Reason="", readiness=false. Elapsed: 66.125532ms
Sep 10 05:40:15.904: INFO: Pod "pod-f3c333ff-0234-4d2a-aff0-29debb90d6ff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.125438412s
Sep 10 05:40:17.965: INFO: Pod "pod-f3c333ff-0234-4d2a-aff0-29debb90d6ff": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.186478686s
STEP: Saw pod success
Sep 10 05:40:17.965: INFO: Pod "pod-f3c333ff-0234-4d2a-aff0-29debb90d6ff" satisfied condition "Succeeded or Failed"
Sep 10 05:40:18.027: INFO: Trying to get logs from node ip-172-20-56-165.us-west-2.compute.internal pod pod-f3c333ff-0234-4d2a-aff0-29debb90d6ff container test-container: <nil>
STEP: delete the pod
Sep 10 05:40:18.180: INFO: Waiting for pod pod-f3c333ff-0234-4d2a-aff0-29debb90d6ff to disappear
Sep 10 05:40:18.239: INFO: Pod pod-f3c333ff-0234-4d2a-aff0-29debb90d6ff no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 10 05:40:18.239: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3604" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":28,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 10 05:40:18.390: INFO: Only supported for providers [azure] (not aws)
... skipping 148 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume one after the other
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":1,"skipped":4,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 18 lines ...
• [SLOW TEST:28.257 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  removes definition from spec when one version gets changed to not be served [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":-1,"completed":1,"skipped":4,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 10 05:40:22.753: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
... skipping 70 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating secret secrets-5011/secret-test-0a7c82ed-3c1f-4cd5-801e-26fde36f2975
STEP: Creating a pod to test consume secrets
Sep 10 05:40:16.738: INFO: Waiting up to 5m0s for pod "pod-configmaps-f74b135e-379e-4d20-82e7-98c9cacf110d" in namespace "secrets-5011" to be "Succeeded or Failed"
Sep 10 05:40:16.808: INFO: Pod "pod-configmaps-f74b135e-379e-4d20-82e7-98c9cacf110d": Phase="Pending", Reason="", readiness=false. Elapsed: 69.170897ms
Sep 10 05:40:18.873: INFO: Pod "pod-configmaps-f74b135e-379e-4d20-82e7-98c9cacf110d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.134837067s
Sep 10 05:40:20.938: INFO: Pod "pod-configmaps-f74b135e-379e-4d20-82e7-98c9cacf110d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.199511589s
Sep 10 05:40:23.003: INFO: Pod "pod-configmaps-f74b135e-379e-4d20-82e7-98c9cacf110d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.264525157s
STEP: Saw pod success
Sep 10 05:40:23.003: INFO: Pod "pod-configmaps-f74b135e-379e-4d20-82e7-98c9cacf110d" satisfied condition "Succeeded or Failed"
Sep 10 05:40:23.067: INFO: Trying to get logs from node ip-172-20-56-165.us-west-2.compute.internal pod pod-configmaps-f74b135e-379e-4d20-82e7-98c9cacf110d container env-test: <nil>
STEP: delete the pod
Sep 10 05:40:23.201: INFO: Waiting for pod pod-configmaps-f74b135e-379e-4d20-82e7-98c9cacf110d to disappear
Sep 10 05:40:23.265: INFO: Pod pod-configmaps-f74b135e-379e-4d20-82e7-98c9cacf110d no longer exists
[AfterEach] [sig-node] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:7.172 seconds]
[sig-node] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":16,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 10 05:40:23.415: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 134 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume at the same time
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-link-bindmounted] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":1,"skipped":2,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 10 05:40:24.958: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 171 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and read from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithoutformat] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":3,"skipped":6,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 10 05:40:25.299: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 68 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-test-volume-09250cfb-a8e0-4f01-8a0a-5dc1e20adb32
STEP: Creating a pod to test consume configMaps
Sep 10 05:40:18.871: INFO: Waiting up to 5m0s for pod "pod-configmaps-58a64edf-f2b4-481f-8a3b-062fa3b5d090" in namespace "configmap-6703" to be "Succeeded or Failed"
Sep 10 05:40:18.931: INFO: Pod "pod-configmaps-58a64edf-f2b4-481f-8a3b-062fa3b5d090": Phase="Pending", Reason="", readiness=false. Elapsed: 59.443593ms
Sep 10 05:40:20.990: INFO: Pod "pod-configmaps-58a64edf-f2b4-481f-8a3b-062fa3b5d090": Phase="Pending", Reason="", readiness=false. Elapsed: 2.119202396s
Sep 10 05:40:23.050: INFO: Pod "pod-configmaps-58a64edf-f2b4-481f-8a3b-062fa3b5d090": Phase="Pending", Reason="", readiness=false. Elapsed: 4.178821599s
Sep 10 05:40:25.110: INFO: Pod "pod-configmaps-58a64edf-f2b4-481f-8a3b-062fa3b5d090": Phase="Pending", Reason="", readiness=false. Elapsed: 6.239093922s
Sep 10 05:40:27.171: INFO: Pod "pod-configmaps-58a64edf-f2b4-481f-8a3b-062fa3b5d090": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.299857569s
STEP: Saw pod success
Sep 10 05:40:27.171: INFO: Pod "pod-configmaps-58a64edf-f2b4-481f-8a3b-062fa3b5d090" satisfied condition "Succeeded or Failed"
Sep 10 05:40:27.231: INFO: Trying to get logs from node ip-172-20-56-165.us-west-2.compute.internal pod pod-configmaps-58a64edf-f2b4-481f-8a3b-062fa3b5d090 container configmap-volume-test: <nil>
STEP: delete the pod
Sep 10 05:40:27.374: INFO: Waiting for pod pod-configmaps-58a64edf-f2b4-481f-8a3b-062fa3b5d090 to disappear
Sep 10 05:40:27.433: INFO: Pod pod-configmaps-58a64edf-f2b4-481f-8a3b-062fa3b5d090 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 133 lines ...
• [SLOW TEST:36.449 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":5,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 10 05:40:30.938: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 51 lines ...
[It] should support file as subpath [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
Sep 10 05:39:55.040: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Sep 10 05:39:55.164: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-g7t9
STEP: Creating a pod to test atomic-volume-subpath
Sep 10 05:39:55.229: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-g7t9" in namespace "provisioning-6677" to be "Succeeded or Failed"
Sep 10 05:39:55.291: INFO: Pod "pod-subpath-test-inlinevolume-g7t9": Phase="Pending", Reason="", readiness=false. Elapsed: 62.64751ms
Sep 10 05:39:57.362: INFO: Pod "pod-subpath-test-inlinevolume-g7t9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.13310799s
Sep 10 05:39:59.425: INFO: Pod "pod-subpath-test-inlinevolume-g7t9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.196213179s
Sep 10 05:40:01.488: INFO: Pod "pod-subpath-test-inlinevolume-g7t9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.259399886s
Sep 10 05:40:03.555: INFO: Pod "pod-subpath-test-inlinevolume-g7t9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.326641875s
Sep 10 05:40:05.625: INFO: Pod "pod-subpath-test-inlinevolume-g7t9": Phase="Pending", Reason="", readiness=false. Elapsed: 10.39606153s
... skipping 7 lines ...
Sep 10 05:40:22.141: INFO: Pod "pod-subpath-test-inlinevolume-g7t9": Phase="Running", Reason="", readiness=true. Elapsed: 26.912652725s
Sep 10 05:40:24.204: INFO: Pod "pod-subpath-test-inlinevolume-g7t9": Phase="Running", Reason="", readiness=true. Elapsed: 28.975685444s
Sep 10 05:40:26.268: INFO: Pod "pod-subpath-test-inlinevolume-g7t9": Phase="Running", Reason="", readiness=true. Elapsed: 31.03973611s
Sep 10 05:40:28.336: INFO: Pod "pod-subpath-test-inlinevolume-g7t9": Phase="Running", Reason="", readiness=true. Elapsed: 33.107606s
Sep 10 05:40:30.399: INFO: Pod "pod-subpath-test-inlinevolume-g7t9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 35.17063485s
STEP: Saw pod success
Sep 10 05:40:30.399: INFO: Pod "pod-subpath-test-inlinevolume-g7t9" satisfied condition "Succeeded or Failed"
Sep 10 05:40:30.463: INFO: Trying to get logs from node ip-172-20-37-76.us-west-2.compute.internal pod pod-subpath-test-inlinevolume-g7t9 container test-container-subpath-inlinevolume-g7t9: <nil>
STEP: delete the pod
Sep 10 05:40:30.599: INFO: Waiting for pod pod-subpath-test-inlinevolume-g7t9 to disappear
Sep 10 05:40:30.661: INFO: Pod pod-subpath-test-inlinevolume-g7t9 no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-g7t9
Sep 10 05:40:30.661: INFO: Deleting pod "pod-subpath-test-inlinevolume-g7t9" in namespace "provisioning-6677"
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support file as subpath [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":1,"skipped":2,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 10 05:40:30.999: INFO: Only supported for providers [vsphere] (not aws)
... skipping 84 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Simple pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:376
    should support exec
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:388
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should support exec","total":-1,"completed":4,"skipped":8,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 10 05:40:34.395: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 44 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:37
[It] should support subPath [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:93
STEP: Creating a pod to test hostPath subPath
Sep 10 05:40:23.862: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-6710" to be "Succeeded or Failed"
Sep 10 05:40:23.927: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 65.025829ms
Sep 10 05:40:25.993: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.130683056s
Sep 10 05:40:28.058: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.195704043s
Sep 10 05:40:30.125: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.262339407s
Sep 10 05:40:32.189: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.326825357s
Sep 10 05:40:34.255: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.392706772s
STEP: Saw pod success
Sep 10 05:40:34.255: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed"
Sep 10 05:40:34.321: INFO: Trying to get logs from node ip-172-20-34-221.us-west-2.compute.internal pod pod-host-path-test container test-container-2: <nil>
STEP: delete the pod
Sep 10 05:40:34.462: INFO: Waiting for pod pod-host-path-test to disappear
Sep 10 05:40:34.526: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:11.184 seconds]
[sig-storage] HostPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should support subPath [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:93
------------------------------
{"msg":"PASSED [sig-storage] HostPath should support subPath [NodeConformance]","total":-1,"completed":3,"skipped":24,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 10 05:40:34.702: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 124 lines ...
• [SLOW TEST:9.886 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a validating webhook should work [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":-1,"completed":2,"skipped":14,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 10 05:40:34.963: INFO: Only supported for providers [azure] (not aws)
... skipping 70 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (block volmode)] volumeMode
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not mount / map unused volumes in a pod [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:351
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":1,"skipped":5,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 19 lines ...
Sep 10 05:40:03.372: INFO: PersistentVolumeClaim pvc-kxd8v found but phase is Pending instead of Bound.
Sep 10 05:40:05.434: INFO: PersistentVolumeClaim pvc-kxd8v found and phase=Bound (4.189340759s)
Sep 10 05:40:05.434: INFO: Waiting up to 3m0s for PersistentVolume local-hhlrb to have phase Bound
Sep 10 05:40:05.499: INFO: PersistentVolume local-hhlrb found and phase=Bound (64.030957ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-wrjx
STEP: Creating a pod to test atomic-volume-subpath
Sep 10 05:40:05.690: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-wrjx" in namespace "provisioning-3536" to be "Succeeded or Failed"
Sep 10 05:40:05.755: INFO: Pod "pod-subpath-test-preprovisionedpv-wrjx": Phase="Pending", Reason="", readiness=false. Elapsed: 65.420966ms
Sep 10 05:40:07.819: INFO: Pod "pod-subpath-test-preprovisionedpv-wrjx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.128790403s
Sep 10 05:40:09.887: INFO: Pod "pod-subpath-test-preprovisionedpv-wrjx": Phase="Pending", Reason="", readiness=false. Elapsed: 4.197371437s
Sep 10 05:40:11.950: INFO: Pod "pod-subpath-test-preprovisionedpv-wrjx": Phase="Pending", Reason="", readiness=false. Elapsed: 6.260263064s
Sep 10 05:40:14.014: INFO: Pod "pod-subpath-test-preprovisionedpv-wrjx": Phase="Running", Reason="", readiness=true. Elapsed: 8.324156502s
Sep 10 05:40:16.079: INFO: Pod "pod-subpath-test-preprovisionedpv-wrjx": Phase="Running", Reason="", readiness=true. Elapsed: 10.389169554s
... skipping 4 lines ...
Sep 10 05:40:26.413: INFO: Pod "pod-subpath-test-preprovisionedpv-wrjx": Phase="Running", Reason="", readiness=true. Elapsed: 20.723216882s
Sep 10 05:40:28.478: INFO: Pod "pod-subpath-test-preprovisionedpv-wrjx": Phase="Running", Reason="", readiness=true. Elapsed: 22.788387433s
Sep 10 05:40:30.544: INFO: Pod "pod-subpath-test-preprovisionedpv-wrjx": Phase="Running", Reason="", readiness=true. Elapsed: 24.854173901s
Sep 10 05:40:32.608: INFO: Pod "pod-subpath-test-preprovisionedpv-wrjx": Phase="Running", Reason="", readiness=true. Elapsed: 26.918358975s
Sep 10 05:40:34.673: INFO: Pod "pod-subpath-test-preprovisionedpv-wrjx": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.983531521s
STEP: Saw pod success
Sep 10 05:40:34.674: INFO: Pod "pod-subpath-test-preprovisionedpv-wrjx" satisfied condition "Succeeded or Failed"
Sep 10 05:40:34.743: INFO: Trying to get logs from node ip-172-20-34-221.us-west-2.compute.internal pod pod-subpath-test-preprovisionedpv-wrjx container test-container-subpath-preprovisionedpv-wrjx: <nil>
STEP: delete the pod
Sep 10 05:40:34.883: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-wrjx to disappear
Sep 10 05:40:34.947: INFO: Pod pod-subpath-test-preprovisionedpv-wrjx no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-wrjx
Sep 10 05:40:34.947: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-wrjx" in namespace "provisioning-3536"
... skipping 22 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support file as subpath [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":1,"skipped":14,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 10 05:40:36.484: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
... skipping 64 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41
    on terminated container
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:134
      should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":30,"failed":0}

S
------------------------------
[BeforeEach] [sig-network] EndpointSliceMirroring
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 11 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 10 05:40:37.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "endpointslicemirroring-4272" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","total":-1,"completed":3,"skipped":31,"failed":0}
[BeforeEach] [sig-apps] DisruptionController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 10 05:40:37.536: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename disruption
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 22 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 10 05:40:38.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "disruption-5773" for this suite.

•
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController Listing PodDisruptionBudgets for all namespaces should list and delete a collection of PodDisruptionBudgets [Conformance]","total":-1,"completed":4,"skipped":31,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 27 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 10 05:40:39.925: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4568" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":35,"failed":0}
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 10 05:40:27.566: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 31 lines ...
Sep 10 05:40:38.236: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Sep 10 05:40:38.236: INFO: Running '/tmp/kubectl2591606010/kubectl --server=https://api.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-154 describe pod agnhost-primary-wthg4'
Sep 10 05:40:38.648: INFO: stderr: ""
Sep 10 05:40:38.648: INFO: stdout: "Name:         agnhost-primary-wthg4\nNamespace:    kubectl-154\nPriority:     0\nNode:         ip-172-20-34-221.us-west-2.compute.internal/172.20.34.221\nStart Time:   Fri, 10 Sep 2021 05:40:28 +0000\nLabels:       app=agnhost\n              role=primary\nAnnotations:  <none>\nStatus:       Running\nIP:           100.96.3.19\nIPs:\n  IP:           100.96.3.19\nControlled By:  ReplicationController/agnhost-primary\nContainers:\n  agnhost-primary:\n    Container ID:   containerd://814ce755abc9502f585b621880ee5f0b57f8eace42178f9094f6118897590f63\n    Image:          k8s.gcr.io/e2e-test-images/agnhost:2.32\n    Image ID:       k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Fri, 10 Sep 2021 05:40:31 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    <none>\n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-klsds (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  kube-api-access-klsds:\n    Type:                    Projected (a volume that contains injected data from multiple sources)\n    TokenExpirationSeconds:  3607\n    ConfigMapName:           kube-root-ca.crt\n    ConfigMapOptional:       <nil>\n    DownwardAPI:             true\nQoS Class:                   BestEffort\nNode-Selectors:              <none>\nTolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n  Type    Reason     Age   From               Message\n  ----    ------     ----  ----               -------\n  Normal  Scheduled  10s   default-scheduler  Successfully assigned kubectl-154/agnhost-primary-wthg4 to ip-172-20-34-221.us-west-2.compute.internal\n  Normal  Pulled     7s    kubelet            Container image \"k8s.gcr.io/e2e-test-images/agnhost:2.32\" already present on machine\n  Normal  Created    7s    kubelet            Created container agnhost-primary\n  Normal  Started    7s    kubelet            Started container agnhost-primary\n"
Sep 10 05:40:38.648: INFO: Running '/tmp/kubectl2591606010/kubectl --server=https://api.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-154 describe rc agnhost-primary'
Sep 10 05:40:39.148: INFO: stderr: ""
Sep 10 05:40:39.148: INFO: stdout: "Name:         agnhost-primary\nNamespace:    kubectl-154\nSelector:     app=agnhost,role=primary\nLabels:       app=agnhost\n              role=primary\nAnnotations:  <none>\nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=agnhost\n           role=primary\n  Containers:\n   agnhost-primary:\n    Image:        k8s.gcr.io/e2e-test-images/agnhost:2.32\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  <none>\n    Mounts:       <none>\n  Volumes:        <none>\nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  11s   replication-controller  Created pod: agnhost-primary-wthg4\n"
Sep 10 05:40:39.148: INFO: Running '/tmp/kubectl2591606010/kubectl --server=https://api.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-154 describe service agnhost-primary'
Sep 10 05:40:39.631: INFO: stderr: ""
Sep 10 05:40:39.631: INFO: stdout: "Name:              agnhost-primary\nNamespace:         kubectl-154\nLabels:            app=agnhost\n                   role=primary\nAnnotations:       <none>\nSelector:          app=agnhost,role=primary\nType:              ClusterIP\nIP Family Policy:  SingleStack\nIP Families:       IPv4\nIP:                100.65.125.178\nIPs:               100.65.125.178\nPort:              <unset>  6379/TCP\nTargetPort:        agnhost-server/TCP\nEndpoints:         100.96.3.19:6379\nSession Affinity:  None\nEvents:            <none>\n"
Sep 10 05:40:39.699: INFO: Running '/tmp/kubectl2591606010/kubectl --server=https://api.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-154 describe node ip-172-20-34-221.us-west-2.compute.internal'
Sep 10 05:40:40.514: INFO: stderr: ""
Sep 10 05:40:40.514: INFO: stdout: "Name:               ip-172-20-34-221.us-west-2.compute.internal\nRoles:              node\nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/instance-type=t3.medium\n                    beta.kubernetes.io/os=linux\n                    failure-domain.beta.kubernetes.io/region=us-west-2\n                    failure-domain.beta.kubernetes.io/zone=us-west-2a\n                    kops.k8s.io/instancegroup=nodes-us-west-2a\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=ip-172-20-34-221.us-west-2.compute.internal\n                    kubernetes.io/os=linux\n                    kubernetes.io/role=node\n                    node-role.kubernetes.io/node=\n                    node.kubernetes.io/instance-type=t3.medium\n                    topology.kubernetes.io/region=us-west-2\n                    topology.kubernetes.io/zone=us-west-2a\nAnnotations:        csi.volume.kubernetes.io/nodeid: {\"csi-mock-csi-mock-volumes-3136\":\"csi-mock-csi-mock-volumes-3136\"}\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Fri, 10 Sep 2021 05:36:56 +0000\nTaints:             <none>\nUnschedulable:      false\nLease:\n  HolderIdentity:  ip-172-20-34-221.us-west-2.compute.internal\n  AcquireTime:     <unset>\n  RenewTime:       Fri, 10 Sep 2021 05:40:31 +0000\nConditions:\n  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----                 ------  -----------------                 ------------------                ------                       -------\n  NetworkUnavailable   False   Fri, 10 Sep 2021 05:37:00 +0000   Fri, 10 Sep 2021 05:37:00 +0000   RouteCreated                 RouteController created a route\n  MemoryPressure       False   Fri, 10 Sep 2021 05:40:37 +0000   Fri, 10 Sep 2021 05:36:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure         False   Fri, 10 Sep 2021 05:40:37 +0000   Fri, 10 Sep 2021 05:36:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure          False   Fri, 10 Sep 2021 05:40:37 +0000   Fri, 10 Sep 2021 05:36:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready                True    Fri, 10 Sep 2021 05:40:37 +0000   Fri, 10 Sep 2021 05:37:06 +0000   KubeletReady                 kubelet is posting ready status. 
AppArmor enabled\nAddresses:\n  InternalIP:   172.20.34.221\n  ExternalIP:   34.220.79.145\n  Hostname:     ip-172-20-34-221.us-west-2.compute.internal\n  InternalDNS:  ip-172-20-34-221.us-west-2.compute.internal\n  ExternalDNS:  ec2-34-220-79-145.us-west-2.compute.amazonaws.com\nCapacity:\n  attachable-volumes-aws-ebs:  25\n  cpu:                         2\n  ephemeral-storage:           48725632Ki\n  hugepages-1Gi:               0\n  hugepages-2Mi:               0\n  memory:                      3964584Ki\n  pods:                        110\nAllocatable:\n  attachable-volumes-aws-ebs:  25\n  cpu:                         2\n  ephemeral-storage:           44905542377\n  hugepages-1Gi:               0\n  hugepages-2Mi:               0\n  memory:                      3862184Ki\n  pods:                        110\nSystem Info:\n  Machine ID:                 ec2b68a792368da1210e402d517c9272\n  System UUID:                ec2b68a7-9236-8da1-210e-402d517c9272\n  Boot ID:                    133fe969-966b-4eb5-ad47-06abdfd5735a\n  Kernel Version:             5.11.0-1016-aws\n  OS Image:                   Ubuntu 20.04.3 LTS\n  Operating System:           linux\n  Architecture:               amd64\n  Container Runtime Version:  containerd://1.4.9\n  Kubelet Version:            v1.21.4\n  Kube-Proxy Version:         v1.21.4\nPodCIDR:                      100.96.3.0/24\nPodCIDRs:                     100.96.3.0/24\nProviderID:                   aws:///us-west-2a/i-02fcd4692ca637b53\nNon-terminated Pods:          (14 in total)\n  Namespace                   Name                                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age\n  ---------                   ----                                                          ------------  ----------  ---------------  -------------  ---\n  container-probe-7088        busybox-bf7544f4-a84c-4669-9197-dde685bfcb85                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6s\n  csi-mock-volumes-3136-3578  csi-mockplugin-0                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s\n  csi-mock-volumes-3136-3578  csi-mockplugin-attacher-0                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s\n  csi-mock-volumes-3136-3578  csi-mockplugin-resizer-0                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s\n  csi-mock-volumes-3136       pvc-volume-tester-zsc2s                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         7s\n  init-container-5861         pod-init-bf92c211-aeb2-4dc8-a6c0-accb33afbe15                 100m (5%)     100m (5%)   0 (0%)           0 (0%)         4s\n  kube-system                 coredns-5dc785954d-xvz58                                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     3m28s\n  kube-system                 kube-proxy-ip-172-20-34-221.us-west-2.compute.internal        100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m43s\n  kubectl-154                 agnhost-primary-wthg4                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s\n  projected-6569              annotationupdate278ce567-ca01-4781-bc4f-c8e4113d1ddb          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5s\n  provisioning-2208           hostexec-ip-172-20-34-221.us-west-2.compute.internal-n824w    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5s\n 
 proxy-3185                  proxy-service-9lpmd-9lswx                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s\n  pvc-protection-9673         pvc-tester-25568                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         18s\n  volume-8883                 aws-injector                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         5s\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource                    Requests    Limits\n  --------                    --------    ------\n  cpu                         300m (15%)  100m (5%)\n  memory                      70Mi (1%)   170Mi (4%)\n  ephemeral-storage           0 (0%)      0 (0%)\n  hugepages-1Gi               0 (0%)      0 (0%)\n  hugepages-2Mi               0 (0%)      0 (0%)\n  attachable-volumes-aws-ebs  0           0\nEvents:\n  Type     Reason                   Age    From        Message\n  ----     ------                   ----   ----        -------\n  Normal   Starting                 3m44s  kubelet     Starting kubelet.\n  Warning  InvalidDiskCapacity      3m44s  kubelet     invalid capacity 0 on image filesystem\n  Normal   NodeHasSufficientMemory  3m44s  kubelet     Node ip-172-20-34-221.us-west-2.compute.internal status is now: NodeHasSufficientMemory\n  Normal   NodeHasNoDiskPressure    3m44s  kubelet     Node ip-172-20-34-221.us-west-2.compute.internal status is now: NodeHasNoDiskPressure\n  Normal   NodeHasSufficientPID     3m44s  kubelet     Node ip-172-20-34-221.us-west-2.compute.internal status is now: NodeHasSufficientPID\n  Normal   NodeAllocatableEnforced  3m43s  kubelet     Updated Node Allocatable limit across pods\n  Normal   Starting                 3m41s  kube-proxy  Starting kube-proxy.\n  Normal   NodeReady                3m34s  kubelet     Node ip-172-20-34-221.us-west-2.compute.internal status is now: NodeReady\n"
... skipping 11 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl describe
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1084
    should check if kubectl describe prints relevant information for rc and pods  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods  [Conformance]","total":-1,"completed":4,"skipped":35,"failed":0}

SSS
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource  [Conformance]","total":-1,"completed":3,"skipped":34,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 10 05:40:21.567: INFO: >>> kubeConfig: /root/.kube/config
... skipping 14 lines ...
Sep 10 05:40:32.909: INFO: PersistentVolumeClaim pvc-ctkc5 found but phase is Pending instead of Bound.
Sep 10 05:40:34.973: INFO: PersistentVolumeClaim pvc-ctkc5 found and phase=Bound (4.188728962s)
Sep 10 05:40:34.973: INFO: Waiting up to 3m0s for PersistentVolume local-lrxkn to have phase Bound
Sep 10 05:40:35.034: INFO: PersistentVolume local-lrxkn found and phase=Bound (61.699192ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-kgd2
STEP: Creating a pod to test exec-volume-test
Sep 10 05:40:35.222: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-kgd2" in namespace "volume-5289" to be "Succeeded or Failed"
Sep 10 05:40:35.284: INFO: Pod "exec-volume-test-preprovisionedpv-kgd2": Phase="Pending", Reason="", readiness=false. Elapsed: 61.928061ms
Sep 10 05:40:37.347: INFO: Pod "exec-volume-test-preprovisionedpv-kgd2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.125146088s
Sep 10 05:40:39.409: INFO: Pod "exec-volume-test-preprovisionedpv-kgd2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.18716705s
STEP: Saw pod success
Sep 10 05:40:39.409: INFO: Pod "exec-volume-test-preprovisionedpv-kgd2" satisfied condition "Succeeded or Failed"
Sep 10 05:40:39.471: INFO: Trying to get logs from node ip-172-20-56-165.us-west-2.compute.internal pod exec-volume-test-preprovisionedpv-kgd2 container exec-container-preprovisionedpv-kgd2: <nil>
STEP: delete the pod
Sep 10 05:40:39.628: INFO: Waiting for pod exec-volume-test-preprovisionedpv-kgd2 to disappear
Sep 10 05:40:39.699: INFO: Pod exec-volume-test-preprovisionedpv-kgd2 no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-kgd2
Sep 10 05:40:39.699: INFO: Deleting pod "exec-volume-test-preprovisionedpv-kgd2" in namespace "volume-5289"
... skipping 41 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/fsgroupchangepolicy.go:208

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":4,"skipped":34,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 10 05:40:41.168: INFO: Driver "csi-hostpath" does not support FsGroup - skipping
... skipping 14 lines ...
      Driver "csi-hostpath" does not support FsGroup - skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/fsgroupchangepolicy.go:79
------------------------------
S
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc  [Conformance]","total":-1,"completed":2,"skipped":22,"failed":0}
[BeforeEach] [sig-instrumentation] Events API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 10 05:40:40.090: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 18 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 10 05:40:41.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-942" for this suite.

•
------------------------------
{"msg":"PASSED [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":-1,"completed":3,"skipped":22,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 10 05:40:41.571: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 172 lines ...
Sep 10 05:40:20.106: INFO: PersistentVolumeClaim pvc-2xxcz found and phase=Bound (14.52394961s)
Sep 10 05:40:20.106: INFO: Waiting up to 3m0s for PersistentVolume nfs-ght2k to have phase Bound
Sep 10 05:40:20.168: INFO: PersistentVolume nfs-ght2k found and phase=Bound (61.768685ms)
STEP: Checking pod has write access to PersistentVolume
Sep 10 05:40:20.288: INFO: Creating nfs test pod
Sep 10 05:40:20.350: INFO: Pod should terminate with exitcode 0 (success)
Sep 10 05:40:20.350: INFO: Waiting up to 5m0s for pod "pvc-tester-sr97h" in namespace "pv-2367" to be "Succeeded or Failed"
Sep 10 05:40:20.410: INFO: Pod "pvc-tester-sr97h": Phase="Pending", Reason="", readiness=false. Elapsed: 60.038718ms
Sep 10 05:40:22.471: INFO: Pod "pvc-tester-sr97h": Phase="Pending", Reason="", readiness=false. Elapsed: 2.120661393s
Sep 10 05:40:24.532: INFO: Pod "pvc-tester-sr97h": Phase="Pending", Reason="", readiness=false. Elapsed: 4.18223106s
Sep 10 05:40:26.594: INFO: Pod "pvc-tester-sr97h": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.244406632s
STEP: Saw pod success
Sep 10 05:40:26.594: INFO: Pod "pvc-tester-sr97h" satisfied condition "Succeeded or Failed"
Sep 10 05:40:26.594: INFO: Pod pvc-tester-sr97h succeeded 
Sep 10 05:40:26.594: INFO: Deleting pod "pvc-tester-sr97h" in namespace "pv-2367"
Sep 10 05:40:26.659: INFO: Wait up to 5m0s for pod "pvc-tester-sr97h" to be fully deleted
STEP: Deleting the PVC to invoke the reclaim policy.
Sep 10 05:40:26.723: INFO: Deleting PVC pvc-2xxcz to trigger reclamation of PV 
Sep 10 05:40:26.723: INFO: Deleting PersistentVolumeClaim "pvc-2xxcz"
... skipping 23 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:122
    with Single PV - PVC pairs
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:155
      create a PVC and a pre-bound PV: test write access
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:187
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes NFS with Single PV - PVC pairs create a PVC and a pre-bound PV: test write access","total":-1,"completed":1,"skipped":1,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 10 05:40:43.440: INFO: Driver hostPath doesn't support ext4 -- skipping
... skipping 14 lines ...
      Driver hostPath doesn't support ext4 -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:121
------------------------------
SSSS
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  Pod with node different from PV's NodeAffinity should fail scheduling due to different NodeSelector","total":-1,"completed":3,"skipped":13,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 10 05:40:18.313: INFO: >>> kubeConfig: /root/.kube/config
... skipping 16 lines ...
Sep 10 05:40:33.852: INFO: PersistentVolumeClaim pvc-c727t found but phase is Pending instead of Bound.
Sep 10 05:40:35.915: INFO: PersistentVolumeClaim pvc-c727t found and phase=Bound (8.318509292s)
Sep 10 05:40:35.916: INFO: Waiting up to 3m0s for PersistentVolume local-lzj76 to have phase Bound
Sep 10 05:40:35.982: INFO: PersistentVolume local-lzj76 found and phase=Bound (66.064291ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-wqg8
STEP: Creating a pod to test subpath
Sep 10 05:40:36.178: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-wqg8" in namespace "provisioning-7840" to be "Succeeded or Failed"
Sep 10 05:40:36.241: INFO: Pod "pod-subpath-test-preprovisionedpv-wqg8": Phase="Pending", Reason="", readiness=false. Elapsed: 62.506301ms
Sep 10 05:40:38.304: INFO: Pod "pod-subpath-test-preprovisionedpv-wqg8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.125207599s
Sep 10 05:40:40.373: INFO: Pod "pod-subpath-test-preprovisionedpv-wqg8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.19399193s
STEP: Saw pod success
Sep 10 05:40:40.373: INFO: Pod "pod-subpath-test-preprovisionedpv-wqg8" satisfied condition "Succeeded or Failed"
Sep 10 05:40:40.436: INFO: Trying to get logs from node ip-172-20-38-104.us-west-2.compute.internal pod pod-subpath-test-preprovisionedpv-wqg8 container test-container-subpath-preprovisionedpv-wqg8: <nil>
STEP: delete the pod
Sep 10 05:40:40.582: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-wqg8 to disappear
Sep 10 05:40:40.646: INFO: Pod pod-subpath-test-preprovisionedpv-wqg8 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-wqg8
Sep 10 05:40:40.646: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-wqg8" in namespace "provisioning-7840"
STEP: Creating pod pod-subpath-test-preprovisionedpv-wqg8
STEP: Creating a pod to test subpath
Sep 10 05:40:40.771: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-wqg8" in namespace "provisioning-7840" to be "Succeeded or Failed"
Sep 10 05:40:40.835: INFO: Pod "pod-subpath-test-preprovisionedpv-wqg8": Phase="Pending", Reason="", readiness=false. Elapsed: 63.415671ms
Sep 10 05:40:42.898: INFO: Pod "pod-subpath-test-preprovisionedpv-wqg8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.126106959s
STEP: Saw pod success
Sep 10 05:40:42.898: INFO: Pod "pod-subpath-test-preprovisionedpv-wqg8" satisfied condition "Succeeded or Failed"
Sep 10 05:40:42.959: INFO: Trying to get logs from node ip-172-20-38-104.us-west-2.compute.internal pod pod-subpath-test-preprovisionedpv-wqg8 container test-container-subpath-preprovisionedpv-wqg8: <nil>
STEP: delete the pod
Sep 10 05:40:43.089: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-wqg8 to disappear
Sep 10 05:40:43.151: INFO: Pod pod-subpath-test-preprovisionedpv-wqg8 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-wqg8
Sep 10 05:40:43.151: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-wqg8" in namespace "provisioning-7840"
... skipping 22 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directories when readOnly specified in the volumeSource
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:399
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":4,"skipped":13,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 10 05:40:44.601: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 48 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: block]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 55 lines ...
• [SLOW TEST:50.469 seconds]
[sig-apps] CronJob
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should remove from active list jobs that have been deleted
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:244
------------------------------
{"msg":"PASSED [sig-apps] CronJob should remove from active list jobs that have been deleted","total":-1,"completed":1,"skipped":5,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 10 05:40:44.964: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 113 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name secret-test-ae69a797-e8b7-4822-b76f-ab6fb7cf6046
STEP: Creating a pod to test consume secrets
Sep 10 05:40:41.873: INFO: Waiting up to 5m0s for pod "pod-secrets-1c011824-e021-4f25-9841-ba8c43be1efa" in namespace "secrets-7435" to be "Succeeded or Failed"
Sep 10 05:40:41.937: INFO: Pod "pod-secrets-1c011824-e021-4f25-9841-ba8c43be1efa": Phase="Pending", Reason="", readiness=false. Elapsed: 64.244239ms
Sep 10 05:40:44.003: INFO: Pod "pod-secrets-1c011824-e021-4f25-9841-ba8c43be1efa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.130405685s
Sep 10 05:40:46.067: INFO: Pod "pod-secrets-1c011824-e021-4f25-9841-ba8c43be1efa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.194192041s
STEP: Saw pod success
Sep 10 05:40:46.067: INFO: Pod "pod-secrets-1c011824-e021-4f25-9841-ba8c43be1efa" satisfied condition "Succeeded or Failed"
Sep 10 05:40:46.130: INFO: Trying to get logs from node ip-172-20-37-76.us-west-2.compute.internal pod pod-secrets-1c011824-e021-4f25-9841-ba8c43be1efa container secret-volume-test: <nil>
STEP: delete the pod
Sep 10 05:40:46.267: INFO: Waiting for pod pod-secrets-1c011824-e021-4f25-9841-ba8c43be1efa to disappear
Sep 10 05:40:46.337: INFO: Pod pod-secrets-1c011824-e021-4f25-9841-ba8c43be1efa no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 5 lines ...
• [SLOW TEST:5.349 seconds]
[sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":36,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 10 05:40:46.556: INFO: Only supported for providers [azure] (not aws)
... skipping 65 lines ...
• [SLOW TEST:13.564 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should update annotations on modification [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":31,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 10 05:40:48.644: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 88 lines ...
• [SLOW TEST:54.680 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be restarted with an exec liveness probe with timeout [MinimumKubeletVersion:1.20] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:217
------------------------------
{"msg":"PASSED [sig-node] Probing container should be restarted with an exec liveness probe with timeout [MinimumKubeletVersion:1.20] [NodeConformance]","total":-1,"completed":1,"skipped":2,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 10 05:40:49.144: INFO: Driver local doesn't support ext3 -- skipping
... skipping 73 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 10 05:40:49.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3060" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","total":-1,"completed":4,"skipped":41,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 10 05:40:49.757: INFO: Only supported for providers [vsphere] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 98 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl client-side validation
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:982
    should create/apply a valid CR for CRD with validation schema
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1001
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl client-side validation should create/apply a valid CR for CRD with validation schema","total":-1,"completed":5,"skipped":35,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 10 05:40:52.451: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 96 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 10 05:40:52.923: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "request-timeout-2103" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Server request timeout the request should be served with a default timeout if the specified timeout in the request URL exceeds maximum allowed","total":-1,"completed":6,"skipped":45,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 10 05:40:53.098: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 40 lines ...
Sep 10 05:40:48.577: INFO: PersistentVolumeClaim pvc-nxppn found but phase is Pending instead of Bound.
Sep 10 05:40:50.638: INFO: PersistentVolumeClaim pvc-nxppn found and phase=Bound (6.247993832s)
Sep 10 05:40:50.638: INFO: Waiting up to 3m0s for PersistentVolume local-mklt6 to have phase Bound
Sep 10 05:40:50.696: INFO: PersistentVolume local-mklt6 found and phase=Bound (58.681014ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-bmvd
STEP: Creating a pod to test exec-volume-test
Sep 10 05:40:50.875: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-bmvd" in namespace "volume-2192" to be "Succeeded or Failed"
Sep 10 05:40:50.934: INFO: Pod "exec-volume-test-preprovisionedpv-bmvd": Phase="Pending", Reason="", readiness=false. Elapsed: 58.895833ms
Sep 10 05:40:52.995: INFO: Pod "exec-volume-test-preprovisionedpv-bmvd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.119645628s
STEP: Saw pod success
Sep 10 05:40:52.995: INFO: Pod "exec-volume-test-preprovisionedpv-bmvd" satisfied condition "Succeeded or Failed"
Sep 10 05:40:53.054: INFO: Trying to get logs from node ip-172-20-56-165.us-west-2.compute.internal pod exec-volume-test-preprovisionedpv-bmvd container exec-container-preprovisionedpv-bmvd: <nil>
STEP: delete the pod
Sep 10 05:40:53.187: INFO: Waiting for pod exec-volume-test-preprovisionedpv-bmvd to disappear
Sep 10 05:40:53.250: INFO: Pod exec-volume-test-preprovisionedpv-bmvd no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-bmvd
Sep 10 05:40:53.250: INFO: Deleting pod "exec-volume-test-preprovisionedpv-bmvd" in namespace "volume-2192"
... skipping 17 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":5,"skipped":40,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 10 05:40:54.143: INFO: Only supported for providers [openstack] (not aws)
... skipping 37 lines ...
      Only supported for providers [gce gke] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1301
------------------------------
SSSSSS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":3,"skipped":55,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 10 05:40:41.815: INFO: >>> kubeConfig: /root/.kube/config
... skipping 14 lines ...
Sep 10 05:40:49.199: INFO: PersistentVolumeClaim pvc-69bsp found but phase is Pending instead of Bound.
Sep 10 05:40:51.264: INFO: PersistentVolumeClaim pvc-69bsp found and phase=Bound (6.264655548s)
Sep 10 05:40:51.264: INFO: Waiting up to 3m0s for PersistentVolume local-9zw6j to have phase Bound
Sep 10 05:40:51.329: INFO: PersistentVolume local-9zw6j found and phase=Bound (65.306306ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-7thn
STEP: Creating a pod to test subpath
Sep 10 05:40:51.537: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-7thn" in namespace "provisioning-2325" to be "Succeeded or Failed"
Sep 10 05:40:51.605: INFO: Pod "pod-subpath-test-preprovisionedpv-7thn": Phase="Pending", Reason="", readiness=false. Elapsed: 68.44544ms
Sep 10 05:40:53.700: INFO: Pod "pod-subpath-test-preprovisionedpv-7thn": Phase="Pending", Reason="", readiness=false. Elapsed: 2.163657325s
Sep 10 05:40:55.782: INFO: Pod "pod-subpath-test-preprovisionedpv-7thn": Phase="Pending", Reason="", readiness=false. Elapsed: 4.24559899s
Sep 10 05:40:57.849: INFO: Pod "pod-subpath-test-preprovisionedpv-7thn": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.312390714s
STEP: Saw pod success
Sep 10 05:40:57.849: INFO: Pod "pod-subpath-test-preprovisionedpv-7thn" satisfied condition "Succeeded or Failed"
Sep 10 05:40:57.915: INFO: Trying to get logs from node ip-172-20-37-76.us-west-2.compute.internal pod pod-subpath-test-preprovisionedpv-7thn container test-container-volume-preprovisionedpv-7thn: <nil>
STEP: delete the pod
Sep 10 05:40:58.056: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-7thn to disappear
Sep 10 05:40:58.124: INFO: Pod pod-subpath-test-preprovisionedpv-7thn no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-7thn
Sep 10 05:40:58.124: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-7thn" in namespace "provisioning-2325"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directory
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory","total":-1,"completed":4,"skipped":55,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 10 05:40:59.070: INFO: Only supported for providers [openstack] (not aws)
[AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 138 lines ...
• [SLOW TEST:37.363 seconds]
[sig-storage] PVC Protection
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Verify "immediate" deletion of a PVC that is not in active use by a pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:114
------------------------------
{"msg":"PASSED [sig-storage] PVC Protection Verify \"immediate\" deletion of a PVC that is not in active use by a pod","total":-1,"completed":2,"skipped":10,"failed":0}

S
------------------------------
[BeforeEach] version v1
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 364 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  version v1
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:74
    should proxy through a service and a pod  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod  [Conformance]","total":-1,"completed":2,"skipped":8,"failed":0}

S
------------------------------
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 24 lines ...
• [SLOW TEST:11.464 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":-1,"completed":5,"skipped":53,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-node] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 10 05:40:59.170: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name secret-test-70220c3d-0559-4cca-963e-5ef44e7b283e
STEP: Creating a pod to test consume secrets
Sep 10 05:40:59.639: INFO: Waiting up to 5m0s for pod "pod-secrets-a757a23f-20b4-4f72-884e-ed985cc3dc54" in namespace "secrets-927" to be "Succeeded or Failed"
Sep 10 05:40:59.704: INFO: Pod "pod-secrets-a757a23f-20b4-4f72-884e-ed985cc3dc54": Phase="Pending", Reason="", readiness=false. Elapsed: 65.386263ms
Sep 10 05:41:01.770: INFO: Pod "pod-secrets-a757a23f-20b4-4f72-884e-ed985cc3dc54": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.131190858s
STEP: Saw pod success
Sep 10 05:41:01.770: INFO: Pod "pod-secrets-a757a23f-20b4-4f72-884e-ed985cc3dc54" satisfied condition "Succeeded or Failed"
Sep 10 05:41:01.836: INFO: Trying to get logs from node ip-172-20-37-76.us-west-2.compute.internal pod pod-secrets-a757a23f-20b4-4f72-884e-ed985cc3dc54 container secret-env-test: <nil>
STEP: delete the pod
Sep 10 05:41:01.974: INFO: Waiting for pod pod-secrets-a757a23f-20b4-4f72-884e-ed985cc3dc54 to disappear
Sep 10 05:41:02.040: INFO: Pod pod-secrets-a757a23f-20b4-4f72-884e-ed985cc3dc54 no longer exists
[AfterEach] [sig-node] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 10 05:41:02.040: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-927" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":67,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 10 05:41:02.211: INFO: Only supported for providers [azure] (not aws)
... skipping 72 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should be able to unmount after the subpath directory is deleted [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:449
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":5,"skipped":29,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 10 05:41:02.364: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 151 lines ...
Sep 10 05:40:48.107: INFO: PersistentVolumeClaim pvc-v7ngx found but phase is Pending instead of Bound.
Sep 10 05:40:50.172: INFO: PersistentVolumeClaim pvc-v7ngx found and phase=Bound (2.128720287s)
Sep 10 05:40:50.172: INFO: Waiting up to 3m0s for PersistentVolume local-77l2l to have phase Bound
Sep 10 05:40:50.238: INFO: PersistentVolume local-77l2l found and phase=Bound (65.377052ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-hvx5
STEP: Creating a pod to test subpath
Sep 10 05:40:50.435: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-hvx5" in namespace "provisioning-2208" to be "Succeeded or Failed"
Sep 10 05:40:50.499: INFO: Pod "pod-subpath-test-preprovisionedpv-hvx5": Phase="Pending", Reason="", readiness=false. Elapsed: 64.029982ms
Sep 10 05:40:52.564: INFO: Pod "pod-subpath-test-preprovisionedpv-hvx5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.129091416s
Sep 10 05:40:54.629: INFO: Pod "pod-subpath-test-preprovisionedpv-hvx5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.194166254s
Sep 10 05:40:56.695: INFO: Pod "pod-subpath-test-preprovisionedpv-hvx5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.26049822s
Sep 10 05:40:58.761: INFO: Pod "pod-subpath-test-preprovisionedpv-hvx5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.326162758s
Sep 10 05:41:00.827: INFO: Pod "pod-subpath-test-preprovisionedpv-hvx5": Phase="Pending", Reason="", readiness=false. Elapsed: 10.392462238s
Sep 10 05:41:02.892: INFO: Pod "pod-subpath-test-preprovisionedpv-hvx5": Phase="Pending", Reason="", readiness=false. Elapsed: 12.45739843s
Sep 10 05:41:04.958: INFO: Pod "pod-subpath-test-preprovisionedpv-hvx5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.523010682s
STEP: Saw pod success
Sep 10 05:41:04.958: INFO: Pod "pod-subpath-test-preprovisionedpv-hvx5" satisfied condition "Succeeded or Failed"
Sep 10 05:41:05.022: INFO: Trying to get logs from node ip-172-20-34-221.us-west-2.compute.internal pod pod-subpath-test-preprovisionedpv-hvx5 container test-container-subpath-preprovisionedpv-hvx5: <nil>
STEP: delete the pod
Sep 10 05:41:05.163: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-hvx5 to disappear
Sep 10 05:41:05.228: INFO: Pod pod-subpath-test-preprovisionedpv-hvx5 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-hvx5
Sep 10 05:41:05.228: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-hvx5" in namespace "provisioning-2208"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing single file [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":4,"skipped":51,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 10 05:41:06.205: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 29 lines ...
Sep 10 05:40:31.291: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-8552tfkc4
STEP: creating a claim
Sep 10 05:40:31.360: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-jwzj
STEP: Creating a pod to test subpath
Sep 10 05:40:31.557: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-jwzj" in namespace "provisioning-8552" to be "Succeeded or Failed"
Sep 10 05:40:31.622: INFO: Pod "pod-subpath-test-dynamicpv-jwzj": Phase="Pending", Reason="", readiness=false. Elapsed: 64.414106ms
Sep 10 05:40:33.687: INFO: Pod "pod-subpath-test-dynamicpv-jwzj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.129790808s
Sep 10 05:40:35.753: INFO: Pod "pod-subpath-test-dynamicpv-jwzj": Phase="Pending", Reason="", readiness=false. Elapsed: 4.195644103s
Sep 10 05:40:37.818: INFO: Pod "pod-subpath-test-dynamicpv-jwzj": Phase="Pending", Reason="", readiness=false. Elapsed: 6.260728751s
Sep 10 05:40:39.886: INFO: Pod "pod-subpath-test-dynamicpv-jwzj": Phase="Pending", Reason="", readiness=false. Elapsed: 8.328520665s
Sep 10 05:40:41.951: INFO: Pod "pod-subpath-test-dynamicpv-jwzj": Phase="Pending", Reason="", readiness=false. Elapsed: 10.393564996s
Sep 10 05:40:44.015: INFO: Pod "pod-subpath-test-dynamicpv-jwzj": Phase="Pending", Reason="", readiness=false. Elapsed: 12.458204067s
Sep 10 05:40:46.081: INFO: Pod "pod-subpath-test-dynamicpv-jwzj": Phase="Pending", Reason="", readiness=false. Elapsed: 14.523761948s
Sep 10 05:40:48.146: INFO: Pod "pod-subpath-test-dynamicpv-jwzj": Phase="Pending", Reason="", readiness=false. Elapsed: 16.589113481s
Sep 10 05:40:50.218: INFO: Pod "pod-subpath-test-dynamicpv-jwzj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.660637723s
STEP: Saw pod success
Sep 10 05:40:50.218: INFO: Pod "pod-subpath-test-dynamicpv-jwzj" satisfied condition "Succeeded or Failed"
Sep 10 05:40:50.283: INFO: Trying to get logs from node ip-172-20-37-76.us-west-2.compute.internal pod pod-subpath-test-dynamicpv-jwzj container test-container-subpath-dynamicpv-jwzj: <nil>
STEP: delete the pod
Sep 10 05:40:50.424: INFO: Waiting for pod pod-subpath-test-dynamicpv-jwzj to disappear
Sep 10 05:40:50.489: INFO: Pod pod-subpath-test-dynamicpv-jwzj no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-jwzj
Sep 10 05:40:50.489: INFO: Deleting pod "pod-subpath-test-dynamicpv-jwzj" in namespace "provisioning-8552"
... skipping 20 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:384
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":2,"skipped":9,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-node] NodeLease
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 228 lines ...
Sep 10 05:40:16.655: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-1186
Sep 10 05:40:16.725: INFO: creating *v1.StatefulSet: csi-mock-volumes-1186-6526/csi-mockplugin-attacher
Sep 10 05:40:16.794: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-1186"
Sep 10 05:40:16.869: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-1186 to register on node ip-172-20-37-76.us-west-2.compute.internal
STEP: Creating pod
STEP: checking for CSIInlineVolumes feature
Sep 10 05:40:30.958: INFO: Error getting logs for pod inline-volume-c8x6q: the server rejected our request for an unknown reason (get pods inline-volume-c8x6q)
Sep 10 05:40:31.022: INFO: Deleting pod "inline-volume-c8x6q" in namespace "csi-mock-volumes-1186"
Sep 10 05:40:31.088: INFO: Wait up to 5m0s for pod "inline-volume-c8x6q" to be fully deleted
STEP: Deleting the previously created pod
Sep 10 05:40:37.215: INFO: Deleting pod "pvc-volume-tester-2tdgg" in namespace "csi-mock-volumes-1186"
Sep 10 05:40:37.280: INFO: Wait up to 5m0s for pod "pvc-volume-tester-2tdgg" to be fully deleted
STEP: Checking CSI driver logs
Sep 10 05:40:43.477: INFO: Found volume attribute csi.storage.k8s.io/pod.namespace: csi-mock-volumes-1186
Sep 10 05:40:43.477: INFO: Found volume attribute csi.storage.k8s.io/pod.uid: 1bf8a7ef-a239-4255-93f4-626cb153ad30
Sep 10 05:40:43.477: INFO: Found volume attribute csi.storage.k8s.io/serviceAccount.name: default
Sep 10 05:40:43.477: INFO: Found volume attribute csi.storage.k8s.io/ephemeral: true
Sep 10 05:40:43.477: INFO: Found volume attribute csi.storage.k8s.io/pod.name: pvc-volume-tester-2tdgg
Sep 10 05:40:43.477: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"csi-cb51d8fa21249eecc0bfe78370fb90b634dc0f918e2525f7cb24c7412dea8a26","target_path":"/var/lib/kubelet/pods/1bf8a7ef-a239-4255-93f4-626cb153ad30/volumes/kubernetes.io~csi/my-volume/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:}
STEP: Deleting pod pvc-volume-tester-2tdgg
Sep 10 05:40:43.477: INFO: Deleting pod "pvc-volume-tester-2tdgg" in namespace "csi-mock-volumes-1186"
STEP: Cleaning up resources
STEP: deleting the test namespace: csi-mock-volumes-1186
STEP: Waiting for namespaces [csi-mock-volumes-1186] to vanish
STEP: uninstalling csi mock driver
... skipping 40 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI workload information using mock driver
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:443
    contain ephemeral=true when using inline volume
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:493
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver contain ephemeral=true when using inline volume","total":-1,"completed":2,"skipped":53,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 10 05:41:08.100: INFO: Driver csi-hostpath doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 22 lines ...
Sep 10 05:40:46.583: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support existing directories when readOnly specified in the volumeSource
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:399
Sep 10 05:40:46.894: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Sep 10 05:40:47.021: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-8751" in namespace "provisioning-8751" to be "Succeeded or Failed"
Sep 10 05:40:47.083: INFO: Pod "hostpath-symlink-prep-provisioning-8751": Phase="Pending", Reason="", readiness=false. Elapsed: 61.864136ms
Sep 10 05:40:49.145: INFO: Pod "hostpath-symlink-prep-provisioning-8751": Phase="Pending", Reason="", readiness=false. Elapsed: 2.123785421s
Sep 10 05:40:51.209: INFO: Pod "hostpath-symlink-prep-provisioning-8751": Phase="Pending", Reason="", readiness=false. Elapsed: 4.187046612s
Sep 10 05:40:53.271: INFO: Pod "hostpath-symlink-prep-provisioning-8751": Phase="Pending", Reason="", readiness=false. Elapsed: 6.250012147s
Sep 10 05:40:55.337: INFO: Pod "hostpath-symlink-prep-provisioning-8751": Phase="Pending", Reason="", readiness=false. Elapsed: 8.315455047s
Sep 10 05:40:57.399: INFO: Pod "hostpath-symlink-prep-provisioning-8751": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.377824832s
STEP: Saw pod success
Sep 10 05:40:57.399: INFO: Pod "hostpath-symlink-prep-provisioning-8751" satisfied condition "Succeeded or Failed"
Sep 10 05:40:57.399: INFO: Deleting pod "hostpath-symlink-prep-provisioning-8751" in namespace "provisioning-8751"
Sep 10 05:40:57.466: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-8751" to be fully deleted
Sep 10 05:40:57.530: INFO: Creating resource for inline volume
Sep 10 05:40:57.530: INFO: Driver hostPathSymlink on volume type InlineVolume doesn't support readOnly source
STEP: Deleting pod
Sep 10 05:40:57.530: INFO: Deleting pod "pod-subpath-test-inlinevolume-n8lv" in namespace "provisioning-8751"
Sep 10 05:40:57.667: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-8751" in namespace "provisioning-8751" to be "Succeeded or Failed"
Sep 10 05:40:57.731: INFO: Pod "hostpath-symlink-prep-provisioning-8751": Phase="Pending", Reason="", readiness=false. Elapsed: 63.288002ms
Sep 10 05:40:59.795: INFO: Pod "hostpath-symlink-prep-provisioning-8751": Phase="Pending", Reason="", readiness=false. Elapsed: 2.127993174s
Sep 10 05:41:01.858: INFO: Pod "hostpath-symlink-prep-provisioning-8751": Phase="Pending", Reason="", readiness=false. Elapsed: 4.190881216s
Sep 10 05:41:03.921: INFO: Pod "hostpath-symlink-prep-provisioning-8751": Phase="Pending", Reason="", readiness=false. Elapsed: 6.253370275s
Sep 10 05:41:05.983: INFO: Pod "hostpath-symlink-prep-provisioning-8751": Phase="Pending", Reason="", readiness=false. Elapsed: 8.315742173s
Sep 10 05:41:08.047: INFO: Pod "hostpath-symlink-prep-provisioning-8751": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.379603966s
STEP: Saw pod success
Sep 10 05:41:08.047: INFO: Pod "hostpath-symlink-prep-provisioning-8751" satisfied condition "Succeeded or Failed"
Sep 10 05:41:08.047: INFO: Deleting pod "hostpath-symlink-prep-provisioning-8751" in namespace "provisioning-8751"
Sep 10 05:41:08.114: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-8751" to be fully deleted
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 10 05:41:08.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-8751" for this suite.
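The repeated `Waiting up to 5m0s for pod ... to be "Succeeded or Failed"` lines with ~2s increments are the framework polling the pod phase until it is terminal. A minimal client-go sketch of the same idea, assuming an already-built clientset; the helper name is ours, not the e2e framework's:

```go
// Poll the pod phase every 2s until it reaches Succeeded or Failed, or the
// timeout expires. This mirrors the wait pattern seen throughout this log but
// is our own helper, not the framework's implementation.
package e2esketch

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

func waitForPodSucceededOrFailed(client kubernetes.Interface, ns, name string, timeout time.Duration) (corev1.PodPhase, error) {
	var phase corev1.PodPhase
	err := wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pod, err := client.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err // stop on API errors; a real framework may tolerate transient ones
		}
		phase = pod.Status.Phase
		// Terminal phases end the poll; Pending/Running keep it going.
		return phase == corev1.PodSucceeded || phase == corev1.PodFailed, nil
	})
	if err != nil {
		return phase, fmt.Errorf("pod %s/%s did not reach a terminal phase: %w", ns, name, err)
	}
	return phase, nil
}
```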
... skipping 85 lines ...
Sep 10 05:41:03.096: INFO: PersistentVolumeClaim pvc-j8mm7 found but phase is Pending instead of Bound.
Sep 10 05:41:05.164: INFO: PersistentVolumeClaim pvc-j8mm7 found and phase=Bound (6.266725707s)
Sep 10 05:41:05.164: INFO: Waiting up to 3m0s for PersistentVolume local-qkknq to have phase Bound
Sep 10 05:41:05.229: INFO: PersistentVolume local-qkknq found and phase=Bound (65.474912ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-ztl7
STEP: Creating a pod to test exec-volume-test
Sep 10 05:41:05.431: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-ztl7" in namespace "volume-8231" to be "Succeeded or Failed"
Sep 10 05:41:05.497: INFO: Pod "exec-volume-test-preprovisionedpv-ztl7": Phase="Pending", Reason="", readiness=false. Elapsed: 65.787878ms
Sep 10 05:41:07.565: INFO: Pod "exec-volume-test-preprovisionedpv-ztl7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.133963117s
STEP: Saw pod success
Sep 10 05:41:07.565: INFO: Pod "exec-volume-test-preprovisionedpv-ztl7" satisfied condition "Succeeded or Failed"
Sep 10 05:41:07.630: INFO: Trying to get logs from node ip-172-20-56-165.us-west-2.compute.internal pod exec-volume-test-preprovisionedpv-ztl7 container exec-container-preprovisionedpv-ztl7: <nil>
STEP: delete the pod
Sep 10 05:41:07.768: INFO: Waiting for pod exec-volume-test-preprovisionedpv-ztl7 to disappear
Sep 10 05:41:07.834: INFO: Pod exec-volume-test-preprovisionedpv-ztl7 no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-ztl7
Sep 10 05:41:07.834: INFO: Deleting pod "exec-volume-test-preprovisionedpv-ztl7" in namespace "volume-8231"
... skipping 83 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 10 05:41:12.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-privileged-pod-5817" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] PrivilegedPod [NodeConformance] should enable privileged commands [LinuxOnly]","total":-1,"completed":3,"skipped":56,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-storage] CSI mock volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 103 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI Volume expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:561
    should expand volume without restarting pod if nodeExpansion=off
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:590
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI Volume expansion should expand volume without restarting pod if nodeExpansion=off","total":-1,"completed":2,"skipped":11,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 10 05:41:12.857: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
[AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 88 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name secret-test-map-8d0200ac-45f2-4518-902f-29b090108a35
STEP: Creating a pod to test consume secrets
Sep 10 05:41:08.811: INFO: Waiting up to 5m0s for pod "pod-secrets-227b4d63-80f5-4e65-a2b8-91241a14c633" in namespace "secrets-786" to be "Succeeded or Failed"
Sep 10 05:41:08.873: INFO: Pod "pod-secrets-227b4d63-80f5-4e65-a2b8-91241a14c633": Phase="Pending", Reason="", readiness=false. Elapsed: 62.105155ms
Sep 10 05:41:10.938: INFO: Pod "pod-secrets-227b4d63-80f5-4e65-a2b8-91241a14c633": Phase="Running", Reason="", readiness=true. Elapsed: 2.126758639s
Sep 10 05:41:13.000: INFO: Pod "pod-secrets-227b4d63-80f5-4e65-a2b8-91241a14c633": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.188870991s
STEP: Saw pod success
Sep 10 05:41:13.000: INFO: Pod "pod-secrets-227b4d63-80f5-4e65-a2b8-91241a14c633" satisfied condition "Succeeded or Failed"
Sep 10 05:41:13.062: INFO: Trying to get logs from node ip-172-20-37-76.us-west-2.compute.internal pod pod-secrets-227b4d63-80f5-4e65-a2b8-91241a14c633 container secret-volume-test: <nil>
STEP: delete the pod
Sep 10 05:41:13.193: INFO: Waiting for pod pod-secrets-227b4d63-80f5-4e65-a2b8-91241a14c633 to disappear
Sep 10 05:41:13.255: INFO: Pod pod-secrets-227b4d63-80f5-4e65-a2b8-91241a14c633 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:5.012 seconds]
[sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":47,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 10 lines ...
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-7201
STEP: Creating statefulset with conflicting port in namespace statefulset-7201
STEP: Waiting until pod test-pod will start running in namespace statefulset-7201
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-7201
Sep 10 05:40:58.930: INFO: Observed stateful pod in namespace: statefulset-7201, name: ss-0, uid: 520dbb0b-c3c1-459c-a77e-2b606d553d5e, status phase: Pending. Waiting for statefulset controller to delete.
Sep 10 05:40:58.989: INFO: Observed stateful pod in namespace: statefulset-7201, name: ss-0, uid: 520dbb0b-c3c1-459c-a77e-2b606d553d5e, status phase: Failed. Waiting for statefulset controller to delete.
Sep 10 05:40:58.995: INFO: Observed stateful pod in namespace: statefulset-7201, name: ss-0, uid: 520dbb0b-c3c1-459c-a77e-2b606d553d5e, status phase: Failed. Waiting for statefulset controller to delete.
Sep 10 05:40:58.998: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-7201
STEP: Removing pod with conflicting port in namespace statefulset-7201
STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-7201 and will be in running state
[AfterEach] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:116
Sep 10 05:41:03.241: INFO: Deleting all statefulset in ns statefulset-7201
... skipping 11 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95
    Should recreate evicted statefulset [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":-1,"completed":6,"skipped":50,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 10 05:41:13.926: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 34 lines ...
Sep 10 05:41:14.374: INFO: pv is nil


S [SKIPPING] in Spec Setup (BeforeEach) [0.417 seconds]
[sig-storage] PersistentVolumes GCEPD
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should test that deleting a PVC before the pod does not cause pod deletion to fail on PD detach [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:127

  Only supported for providers [gce gke] (not aws)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:85
------------------------------
... skipping 61 lines ...
[It] should support readOnly directory specified in the volumeMount
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:369
Sep 10 05:41:08.221: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics
Sep 10 05:41:08.221: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-lrv9
STEP: Creating a pod to test subpath
Sep 10 05:41:08.290: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-lrv9" in namespace "provisioning-3593" to be "Succeeded or Failed"
Sep 10 05:41:08.354: INFO: Pod "pod-subpath-test-inlinevolume-lrv9": Phase="Pending", Reason="", readiness=false. Elapsed: 64.444852ms
Sep 10 05:41:10.420: INFO: Pod "pod-subpath-test-inlinevolume-lrv9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.130135542s
Sep 10 05:41:12.485: INFO: Pod "pod-subpath-test-inlinevolume-lrv9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.19544267s
Sep 10 05:41:14.551: INFO: Pod "pod-subpath-test-inlinevolume-lrv9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.260983503s
STEP: Saw pod success
Sep 10 05:41:14.551: INFO: Pod "pod-subpath-test-inlinevolume-lrv9" satisfied condition "Succeeded or Failed"
Sep 10 05:41:14.616: INFO: Trying to get logs from node ip-172-20-38-104.us-west-2.compute.internal pod pod-subpath-test-inlinevolume-lrv9 container test-container-subpath-inlinevolume-lrv9: <nil>
STEP: delete the pod
Sep 10 05:41:14.755: INFO: Waiting for pod pod-subpath-test-inlinevolume-lrv9 to disappear
Sep 10 05:41:14.820: INFO: Pod pod-subpath-test-inlinevolume-lrv9 no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-lrv9
Sep 10 05:41:14.820: INFO: Deleting pod "pod-subpath-test-inlinevolume-lrv9" in namespace "provisioning-3593"
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly directory specified in the volumeMount
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:369
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":6,"skipped":79,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 10 05:41:15.102: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 14 lines ...
      Driver emptydir doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (ext4)] volumes should allow exec of files on the volume","total":-1,"completed":7,"skipped":66,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 10 05:41:09.727: INFO: >>> kubeConfig: /root/.kube/config
... skipping 2 lines ...
[It] should support existing directory
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
Sep 10 05:41:10.056: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Sep 10 05:41:10.124: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-75gx
STEP: Creating a pod to test subpath
Sep 10 05:41:10.193: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-75gx" in namespace "provisioning-178" to be "Succeeded or Failed"
Sep 10 05:41:10.259: INFO: Pod "pod-subpath-test-inlinevolume-75gx": Phase="Pending", Reason="", readiness=false. Elapsed: 65.887232ms
Sep 10 05:41:12.325: INFO: Pod "pod-subpath-test-inlinevolume-75gx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.132556846s
Sep 10 05:41:14.392: INFO: Pod "pod-subpath-test-inlinevolume-75gx": Phase="Pending", Reason="", readiness=false. Elapsed: 4.198730426s
Sep 10 05:41:16.459: INFO: Pod "pod-subpath-test-inlinevolume-75gx": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.266038937s
STEP: Saw pod success
Sep 10 05:41:16.459: INFO: Pod "pod-subpath-test-inlinevolume-75gx" satisfied condition "Succeeded or Failed"
Sep 10 05:41:16.525: INFO: Trying to get logs from node ip-172-20-37-76.us-west-2.compute.internal pod pod-subpath-test-inlinevolume-75gx container test-container-volume-inlinevolume-75gx: <nil>
STEP: delete the pod
Sep 10 05:41:16.663: INFO: Waiting for pod pod-subpath-test-inlinevolume-75gx to disappear
Sep 10 05:41:16.729: INFO: Pod pod-subpath-test-inlinevolume-75gx no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-75gx
Sep 10 05:41:16.729: INFO: Deleting pod "pod-subpath-test-inlinevolume-75gx" in namespace "provisioning-178"
... skipping 90 lines ...
• [SLOW TEST:5.695 seconds]
[sig-auth] ServiceAccounts
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","total":-1,"completed":3,"skipped":33,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-node] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 10 05:40:36.036: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating the pod
Sep 10 05:40:36.372: INFO: PodSpec: initContainers in spec.initContainers
Sep 10 05:41:19.160: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-bf92c211-aeb2-4dc8-a6c0-accb33afbe15", GenerateName:"", Namespace:"init-container-5861", SelfLink:"", UID:"8a94db9a-f95e-4921-a5f6-ffe66f9b6c5a", ResourceVersion:"5040", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63766849236, loc:(*time.Location)(0x9de2b80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"372544437"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002e105a0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002e105b8)}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002e105d0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002e105e8)}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-api-access-kg6hr", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(0xc002d60480), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-kg6hr", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", Command:[]string{"/bin/true"}, 
Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-kg6hr", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.4.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-kg6hr", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002d11a80), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"ip-172-20-34-221.us-west-2.compute.internal", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc003891650), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002d11b00)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002d11b20)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc002d11b28), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc002d11b2c), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc002b72e90), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", 
LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63766849236, loc:(*time.Location)(0x9de2b80)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63766849236, loc:(*time.Location)(0x9de2b80)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63766849236, loc:(*time.Location)(0x9de2b80)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63766849236, loc:(*time.Location)(0x9de2b80)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.20.34.221", PodIP:"100.96.3.22", PodIPs:[]v1.PodIP{v1.PodIP{IP:"100.96.3.22"}}, StartTime:(*v1.Time)(0xc002e10618), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc003891730)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0038917a0)}, Ready:false, RestartCount:3, Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", ImageID:"k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592", ContainerID:"containerd://6b8b3ec4d48cd1276e1bdeb198b74132378ca7ceaea9fd894e6988191d86bb63", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002d60500), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002d604e0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.4.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc002d11baf)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}
[AfterEach] [sig-node] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 10 05:41:19.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-5861" for this suite.


• [SLOW TEST:43.258 seconds]
[sig-node] InitContainer [NodeConformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":-1,"completed":2,"skipped":6,"failed":0}

S
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 10 05:41:13.416: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support seccomp default which is unconfined [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:183
STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod
Sep 10 05:41:13.795: INFO: Waiting up to 5m0s for pod "security-context-5c5730f5-b0a7-4a2d-9f06-1790995af359" in namespace "security-context-4538" to be "Succeeded or Failed"
Sep 10 05:41:13.856: INFO: Pod "security-context-5c5730f5-b0a7-4a2d-9f06-1790995af359": Phase="Pending", Reason="", readiness=false. Elapsed: 61.49061ms
Sep 10 05:41:15.921: INFO: Pod "security-context-5c5730f5-b0a7-4a2d-9f06-1790995af359": Phase="Pending", Reason="", readiness=false. Elapsed: 2.125859479s
Sep 10 05:41:17.983: INFO: Pod "security-context-5c5730f5-b0a7-4a2d-9f06-1790995af359": Phase="Pending", Reason="", readiness=false. Elapsed: 4.188061258s
Sep 10 05:41:20.045: INFO: Pod "security-context-5c5730f5-b0a7-4a2d-9f06-1790995af359": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.250137116s
STEP: Saw pod success
Sep 10 05:41:20.045: INFO: Pod "security-context-5c5730f5-b0a7-4a2d-9f06-1790995af359" satisfied condition "Succeeded or Failed"
Sep 10 05:41:20.112: INFO: Trying to get logs from node ip-172-20-37-76.us-west-2.compute.internal pod security-context-5c5730f5-b0a7-4a2d-9f06-1790995af359 container test-container: <nil>
STEP: delete the pod
Sep 10 05:41:20.252: INFO: Waiting for pod security-context-5c5730f5-b0a7-4a2d-9f06-1790995af359 to disappear
Sep 10 05:41:20.314: INFO: Pod security-context-5c5730f5-b0a7-4a2d-9f06-1790995af359 no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:7.024 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should support seccomp default which is unconfined [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:183
------------------------------
{"msg":"PASSED [sig-node] Security Context should support seccomp default which is unconfined [LinuxOnly]","total":-1,"completed":7,"skipped":51,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 10 05:41:20.458: INFO: Only supported for providers [gce gke] (not aws)
... skipping 35 lines ...
      Only supported for providers [gce gke] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1301
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support existing directory","total":-1,"completed":8,"skipped":66,"failed":0}
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 10 05:41:17.006: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Sep 10 05:41:17.407: INFO: Waiting up to 5m0s for pod "downwardapi-volume-539343ce-9383-40ea-9650-3b8605f45ce7" in namespace "projected-3670" to be "Succeeded or Failed"
Sep 10 05:41:17.474: INFO: Pod "downwardapi-volume-539343ce-9383-40ea-9650-3b8605f45ce7": Phase="Pending", Reason="", readiness=false. Elapsed: 66.596081ms
Sep 10 05:41:19.543: INFO: Pod "downwardapi-volume-539343ce-9383-40ea-9650-3b8605f45ce7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.13601454s
Sep 10 05:41:21.610: INFO: Pod "downwardapi-volume-539343ce-9383-40ea-9650-3b8605f45ce7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.20216942s
STEP: Saw pod success
Sep 10 05:41:21.610: INFO: Pod "downwardapi-volume-539343ce-9383-40ea-9650-3b8605f45ce7" satisfied condition "Succeeded or Failed"
Sep 10 05:41:21.675: INFO: Trying to get logs from node ip-172-20-38-104.us-west-2.compute.internal pod downwardapi-volume-539343ce-9383-40ea-9650-3b8605f45ce7 container client-container: <nil>
STEP: delete the pod
Sep 10 05:41:21.817: INFO: Waiting for pod downwardapi-volume-539343ce-9383-40ea-9650-3b8605f45ce7 to disappear
Sep 10 05:41:21.883: INFO: Pod downwardapi-volume-539343ce-9383-40ea-9650-3b8605f45ce7 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:5.011 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":66,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 10 05:41:22.027: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 89 lines ...
• [SLOW TEST:38.360 seconds]
[sig-storage] Mounted volume expand
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Should verify mounted devices can be resized
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/mounted_volume_resize.go:122
------------------------------
{"msg":"PASSED [sig-storage] Mounted volume expand Should verify mounted devices can be resized","total":-1,"completed":2,"skipped":22,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 10 05:41:23.442: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 81 lines ...
Sep 10 05:41:08.585: INFO: Pod aws-client still exists
Sep 10 05:41:10.526: INFO: Waiting for pod aws-client to disappear
Sep 10 05:41:10.585: INFO: Pod aws-client still exists
Sep 10 05:41:12.526: INFO: Waiting for pod aws-client to disappear
Sep 10 05:41:12.585: INFO: Pod aws-client no longer exists
STEP: cleaning the environment after aws
Sep 10 05:41:12.818: INFO: Couldn't delete PD "aws://us-west-2a/vol-0d653c782c3259c2a", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-0d653c782c3259c2a is currently attached to i-0a0358b25cc8b1f01
	status code: 400, request id: 448a8869-4455-4c23-b8b3-baa434210707
Sep 10 05:41:18.235: INFO: Couldn't delete PD "aws://us-west-2a/vol-0d653c782c3259c2a", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-0d653c782c3259c2a is currently attached to i-0a0358b25cc8b1f01
	status code: 400, request id: 2cf78f40-4e89-48f6-b9de-a83a8b952256
Sep 10 05:41:23.648: INFO: Successfully deleted PD "aws://us-west-2a/vol-0d653c782c3259c2a".
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 10 05:41:23.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-7540" for this suite.
... skipping 47 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95
    should have a working scale subresource [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":-1,"completed":6,"skipped":56,"failed":0}

SS
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":-1,"completed":3,"skipped":11,"failed":0}
[BeforeEach] [sig-cli] Kubectl Port forwarding
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 10 05:41:10.784: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename port-forwarding
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 33 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:452
    that expects a client request
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:453
      should support a client that connects, sends DATA, and disconnects
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:457
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 that expects a client request should support a client that connects, sends DATA, and disconnects","total":-1,"completed":4,"skipped":11,"failed":0}

SS
------------------------------
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 19 lines ...
• [SLOW TEST:52.227 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":11,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 10 05:41:26.686: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 81 lines ...
      Driver hostPathSymlink doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
S
------------------------------
{"msg":"PASSED [sig-node] NodeLease when the NodeLease feature is enabled should have OwnerReferences set","total":-1,"completed":5,"skipped":58,"failed":0}
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 10 05:41:06.872: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 74 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume at the same time
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithformat] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":6,"skipped":58,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 10 05:41:27.133: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 31 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 10 05:41:27.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "disruption-9491" for this suite.

•
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController should observe PodDisruptionBudget status updated [Conformance]","total":-1,"completed":7,"skipped":58,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 10 05:41:27.242: INFO: Only supported for providers [gce gke] (not aws)
... skipping 94 lines ...
Sep 10 05:41:18.466: INFO: PersistentVolumeClaim pvc-6xhxn found but phase is Pending instead of Bound.
Sep 10 05:41:20.531: INFO: PersistentVolumeClaim pvc-6xhxn found and phase=Bound (10.396715768s)
Sep 10 05:41:20.531: INFO: Waiting up to 3m0s for PersistentVolume local-jxl6g to have phase Bound
Sep 10 05:41:20.595: INFO: PersistentVolume local-jxl6g found and phase=Bound (64.272466ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-9w47
STEP: Creating a pod to test subpath
Sep 10 05:41:20.791: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-9w47" in namespace "provisioning-5712" to be "Succeeded or Failed"
Sep 10 05:41:20.855: INFO: Pod "pod-subpath-test-preprovisionedpv-9w47": Phase="Pending", Reason="", readiness=false. Elapsed: 64.492824ms
Sep 10 05:41:22.921: INFO: Pod "pod-subpath-test-preprovisionedpv-9w47": Phase="Pending", Reason="", readiness=false. Elapsed: 2.130219747s
Sep 10 05:41:24.990: INFO: Pod "pod-subpath-test-preprovisionedpv-9w47": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.199031672s
STEP: Saw pod success
Sep 10 05:41:24.990: INFO: Pod "pod-subpath-test-preprovisionedpv-9w47" satisfied condition "Succeeded or Failed"
Sep 10 05:41:25.064: INFO: Trying to get logs from node ip-172-20-56-165.us-west-2.compute.internal pod pod-subpath-test-preprovisionedpv-9w47 container test-container-subpath-preprovisionedpv-9w47: <nil>
STEP: delete the pod
Sep 10 05:41:25.212: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-9w47 to disappear
Sep 10 05:41:25.277: INFO: Pod pod-subpath-test-preprovisionedpv-9w47 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-9w47
Sep 10 05:41:25.277: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-9w47" in namespace "provisioning-5712"
... skipping 24 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:384
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc  [Conformance]","total":-1,"completed":4,"skipped":36,"failed":0}

SS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":3,"skipped":12,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 10 05:41:27.286: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 48 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:50
[It] nonexistent volume subPath should have the correct mode and owner using FSGroup
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:63
STEP: Creating a pod to test emptydir subpath on tmpfs
Sep 10 05:41:20.864: INFO: Waiting up to 5m0s for pod "pod-c2a5b514-c8de-42f6-812c-9ac94de37b5f" in namespace "emptydir-3647" to be "Succeeded or Failed"
Sep 10 05:41:20.926: INFO: Pod "pod-c2a5b514-c8de-42f6-812c-9ac94de37b5f": Phase="Pending", Reason="", readiness=false. Elapsed: 61.745148ms
Sep 10 05:41:22.990: INFO: Pod "pod-c2a5b514-c8de-42f6-812c-9ac94de37b5f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.125875912s
Sep 10 05:41:25.062: INFO: Pod "pod-c2a5b514-c8de-42f6-812c-9ac94de37b5f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.198437341s
Sep 10 05:41:27.129: INFO: Pod "pod-c2a5b514-c8de-42f6-812c-9ac94de37b5f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.265371984s
STEP: Saw pod success
Sep 10 05:41:27.130: INFO: Pod "pod-c2a5b514-c8de-42f6-812c-9ac94de37b5f" satisfied condition "Succeeded or Failed"
Sep 10 05:41:27.194: INFO: Trying to get logs from node ip-172-20-37-76.us-west-2.compute.internal pod pod-c2a5b514-c8de-42f6-812c-9ac94de37b5f container test-container: <nil>
STEP: delete the pod
Sep 10 05:41:27.330: INFO: Waiting for pod pod-c2a5b514-c8de-42f6-812c-9ac94de37b5f to disappear
Sep 10 05:41:27.393: INFO: Pod pod-c2a5b514-c8de-42f6-812c-9ac94de37b5f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 6 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:48
    nonexistent volume subPath should have the correct mode and owner using FSGroup
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:63
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] nonexistent volume subPath should have the correct mode and owner using FSGroup","total":-1,"completed":8,"skipped":56,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 10 05:41:27.528: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 35 lines ...
• [SLOW TEST:7.611 seconds]
[sig-node] InitContainer [NodeConformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should invoke init containers on a RestartNever pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":-1,"completed":3,"skipped":25,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
... skipping 106 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":2,"skipped":7,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 10 05:41:31.545: INFO: Only supported for providers [gce gke] (not aws)
... skipping 23 lines ...
Sep 10 05:41:27.250: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test substitution in container's args
Sep 10 05:41:27.626: INFO: Waiting up to 5m0s for pod "var-expansion-7ef20889-3385-4822-8abd-cbb58ab9d527" in namespace "var-expansion-9549" to be "Succeeded or Failed"
Sep 10 05:41:27.688: INFO: Pod "var-expansion-7ef20889-3385-4822-8abd-cbb58ab9d527": Phase="Pending", Reason="", readiness=false. Elapsed: 62.341051ms
Sep 10 05:41:29.757: INFO: Pod "var-expansion-7ef20889-3385-4822-8abd-cbb58ab9d527": Phase="Pending", Reason="", readiness=false. Elapsed: 2.130915901s
Sep 10 05:41:31.820: INFO: Pod "var-expansion-7ef20889-3385-4822-8abd-cbb58ab9d527": Phase="Pending", Reason="", readiness=false. Elapsed: 4.19379598s
Sep 10 05:41:33.884: INFO: Pod "var-expansion-7ef20889-3385-4822-8abd-cbb58ab9d527": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.257710691s
STEP: Saw pod success
Sep 10 05:41:33.884: INFO: Pod "var-expansion-7ef20889-3385-4822-8abd-cbb58ab9d527" satisfied condition "Succeeded or Failed"
Sep 10 05:41:33.946: INFO: Trying to get logs from node ip-172-20-37-76.us-west-2.compute.internal pod var-expansion-7ef20889-3385-4822-8abd-cbb58ab9d527 container dapi-container: <nil>
STEP: delete the pod
Sep 10 05:41:34.078: INFO: Waiting for pod var-expansion-7ef20889-3385-4822-8abd-cbb58ab9d527 to disappear
Sep 10 05:41:34.141: INFO: Pod var-expansion-7ef20889-3385-4822-8abd-cbb58ab9d527 no longer exists
[AfterEach] [sig-node] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:7.017 seconds]
[sig-node] Variable Expansion
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":60,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 10 05:41:34.288: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 157 lines ...
• [SLOW TEST:99.972 seconds]
[sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":5,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 10 05:41:27.551: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/configmap_volume.go:75
STEP: Creating configMap with name configmap-test-volume-48e16e05-118d-4cf5-9456-136c225abe9a
STEP: Creating a pod to test consume configMaps
Sep 10 05:41:27.989: INFO: Waiting up to 5m0s for pod "pod-configmaps-14253e84-9da9-4fd4-b4a2-18dca7000d38" in namespace "configmap-6775" to be "Succeeded or Failed"
Sep 10 05:41:28.050: INFO: Pod "pod-configmaps-14253e84-9da9-4fd4-b4a2-18dca7000d38": Phase="Pending", Reason="", readiness=false. Elapsed: 61.540464ms
Sep 10 05:41:30.114: INFO: Pod "pod-configmaps-14253e84-9da9-4fd4-b4a2-18dca7000d38": Phase="Pending", Reason="", readiness=false. Elapsed: 2.124873511s
Sep 10 05:41:32.180: INFO: Pod "pod-configmaps-14253e84-9da9-4fd4-b4a2-18dca7000d38": Phase="Pending", Reason="", readiness=false. Elapsed: 4.190890412s
Sep 10 05:41:34.242: INFO: Pod "pod-configmaps-14253e84-9da9-4fd4-b4a2-18dca7000d38": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.253262337s
STEP: Saw pod success
Sep 10 05:41:34.242: INFO: Pod "pod-configmaps-14253e84-9da9-4fd4-b4a2-18dca7000d38" satisfied condition "Succeeded or Failed"
Sep 10 05:41:34.304: INFO: Trying to get logs from node ip-172-20-37-76.us-west-2.compute.internal pod pod-configmaps-14253e84-9da9-4fd4-b4a2-18dca7000d38 container agnhost-container: <nil>
STEP: delete the pod
Sep 10 05:41:34.433: INFO: Waiting for pod pod-configmaps-14253e84-9da9-4fd4-b4a2-18dca7000d38 to disappear
Sep 10 05:41:34.495: INFO: Pod pod-configmaps-14253e84-9da9-4fd4-b4a2-18dca7000d38 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:7.070 seconds]
[sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/configmap_volume.go:75
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":9,"skipped":60,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 10 05:41:34.650: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 112 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Sep 10 05:41:31.506: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-edebcfa7-5cba-4785-a1fd-43244b7fc288" in namespace "security-context-test-538" to be "Succeeded or Failed"
Sep 10 05:41:31.572: INFO: Pod "alpine-nnp-false-edebcfa7-5cba-4785-a1fd-43244b7fc288": Phase="Pending", Reason="", readiness=false. Elapsed: 66.161597ms
Sep 10 05:41:33.639: INFO: Pod "alpine-nnp-false-edebcfa7-5cba-4785-a1fd-43244b7fc288": Phase="Pending", Reason="", readiness=false. Elapsed: 2.133229495s
Sep 10 05:41:35.706: INFO: Pod "alpine-nnp-false-edebcfa7-5cba-4785-a1fd-43244b7fc288": Phase="Pending", Reason="", readiness=false. Elapsed: 4.200061132s
Sep 10 05:41:37.772: INFO: Pod "alpine-nnp-false-edebcfa7-5cba-4785-a1fd-43244b7fc288": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.266790268s
Sep 10 05:41:37.773: INFO: Pod "alpine-nnp-false-edebcfa7-5cba-4785-a1fd-43244b7fc288" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 10 05:41:37.842: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-538" for this suite.


... skipping 2 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when creating containers with AllowPrivilegeEscalation
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:296
    should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":28,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 10 05:41:37.991: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 121 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should create read/write inline ephemeral volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:161
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should create read/write inline ephemeral volume","total":-1,"completed":5,"skipped":48,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 10 05:41:39.242: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 132 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI Volume expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:561
    should expand volume by restarting pod if attach=on, nodeExpansion=on
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:590
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI Volume expansion should expand volume by restarting pod if attach=on, nodeExpansion=on","total":-1,"completed":4,"skipped":24,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 10 05:41:39.409: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 23 lines ...
Sep 10 05:41:34.503: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support seccomp unconfined on the pod [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:169
STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod
Sep 10 05:41:34.906: INFO: Waiting up to 5m0s for pod "security-context-2b7caf66-5ce0-40b5-8adc-a6aae5c9c2cf" in namespace "security-context-5241" to be "Succeeded or Failed"
Sep 10 05:41:34.973: INFO: Pod "security-context-2b7caf66-5ce0-40b5-8adc-a6aae5c9c2cf": Phase="Pending", Reason="", readiness=false. Elapsed: 66.397143ms
Sep 10 05:41:37.041: INFO: Pod "security-context-2b7caf66-5ce0-40b5-8adc-a6aae5c9c2cf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.134317312s
Sep 10 05:41:39.109: INFO: Pod "security-context-2b7caf66-5ce0-40b5-8adc-a6aae5c9c2cf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.20266541s
STEP: Saw pod success
Sep 10 05:41:39.109: INFO: Pod "security-context-2b7caf66-5ce0-40b5-8adc-a6aae5c9c2cf" satisfied condition "Succeeded or Failed"
Sep 10 05:41:39.176: INFO: Trying to get logs from node ip-172-20-37-76.us-west-2.compute.internal pod security-context-2b7caf66-5ce0-40b5-8adc-a6aae5c9c2cf container test-container: <nil>
STEP: delete the pod
Sep 10 05:41:39.317: INFO: Waiting for pod security-context-2b7caf66-5ce0-40b5-8adc-a6aae5c9c2cf to disappear
Sep 10 05:41:39.383: INFO: Pod security-context-2b7caf66-5ce0-40b5-8adc-a6aae5c9c2cf no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 14 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:282
Sep 10 05:41:31.956: INFO: Waiting up to 5m0s for pod "busybox-privileged-true-1058b3f8-08f0-41ca-bdca-49614fa26bac" in namespace "security-context-test-7756" to be "Succeeded or Failed"
Sep 10 05:41:32.017: INFO: Pod "busybox-privileged-true-1058b3f8-08f0-41ca-bdca-49614fa26bac": Phase="Pending", Reason="", readiness=false. Elapsed: 60.339157ms
Sep 10 05:41:34.078: INFO: Pod "busybox-privileged-true-1058b3f8-08f0-41ca-bdca-49614fa26bac": Phase="Pending", Reason="", readiness=false. Elapsed: 2.121957558s
Sep 10 05:41:36.139: INFO: Pod "busybox-privileged-true-1058b3f8-08f0-41ca-bdca-49614fa26bac": Phase="Pending", Reason="", readiness=false. Elapsed: 4.182316524s
Sep 10 05:41:38.208: INFO: Pod "busybox-privileged-true-1058b3f8-08f0-41ca-bdca-49614fa26bac": Phase="Pending", Reason="", readiness=false. Elapsed: 6.251573766s
Sep 10 05:41:40.269: INFO: Pod "busybox-privileged-true-1058b3f8-08f0-41ca-bdca-49614fa26bac": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.312396197s
Sep 10 05:41:40.269: INFO: Pod "busybox-privileged-true-1058b3f8-08f0-41ca-bdca-49614fa26bac" satisfied condition "Succeeded or Failed"
Sep 10 05:41:40.331: INFO: Got logs for pod "busybox-privileged-true-1058b3f8-08f0-41ca-bdca-49614fa26bac": ""
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 10 05:41:40.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-7756" for this suite.

... skipping 3 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  When creating a pod with privileged
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:232
    should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:282
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a pod with privileged should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess]","total":-1,"completed":3,"skipped":12,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 10 05:41:40.503: INFO: Driver local doesn't support ext4 -- skipping
... skipping 97 lines ...
Sep 10 05:41:29.814: INFO: Waiting for pod aws-client to disappear
Sep 10 05:41:29.876: INFO: Pod aws-client no longer exists
STEP: cleaning the environment after aws
STEP: Deleting pv and pvc
Sep 10 05:41:29.876: INFO: Deleting PersistentVolumeClaim "pvc-7dqs8"
Sep 10 05:41:29.940: INFO: Deleting PersistentVolume "aws-5mqr5"
Sep 10 05:41:30.222: INFO: Couldn't delete PD "aws://us-west-2a/vol-09fb7fa6f018ca698", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-09fb7fa6f018ca698 is currently attached to i-0a0358b25cc8b1f01
	status code: 400, request id: ed7b1374-edb9-4fd9-9216-753ffe300ae1
Sep 10 05:41:35.650: INFO: Couldn't delete PD "aws://us-west-2a/vol-09fb7fa6f018ca698", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-09fb7fa6f018ca698 is currently attached to i-0a0358b25cc8b1f01
	status code: 400, request id: 859b18a0-6faf-4f80-a757-59e29362a397
Sep 10 05:41:41.100: INFO: Successfully deleted PD "aws://us-west-2a/vol-09fb7fa6f018ca698".
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 10 05:41:41.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-8883" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (ext4)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data","total":-1,"completed":4,"skipped":18,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 15 lines ...
• [SLOW TEST:6.941 seconds]
[sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":71,"failed":0}

S
------------------------------
[BeforeEach] [sig-node] Container Lifecycle Hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 34 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when create a pod with lifecycle hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:43
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":82,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 10 05:41:41.520: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 55 lines ...
• [SLOW TEST:14.397 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing validating webhooks should work [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":-1,"completed":7,"skipped":60,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 10 05:41:41.564: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 53 lines ...
• [SLOW TEST:26.858 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to change the type from NodePort to ExternalName [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":-1,"completed":7,"skipped":64,"failed":0}

SS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 4 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241
[It] should check if cluster-info dump succeeds
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1078
STEP: running cluster-info dump
Sep 10 05:41:41.659: INFO: Running '/tmp/kubectl2591606010/kubectl --server=https://api.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7393 cluster-info dump'
Sep 10 05:41:44.434: INFO: stderr: ""
Sep 10 05:41:44.436: INFO: stdout: "{\n    \"kind\": \"NodeList\",\n    \"apiVersion\": \"v1\",\n    \"metadata\": {\n        \"resourceVersion\": \"6111\"\n    },\n    \"items\": [\n        {\n            \"metadata\": {\n                \"name\": \"ip-172-20-34-221.us-west-2.compute.internal\",\n                \"uid\": \"48eb8181-5534-4afc-b70e-d97953bc8a4a\",\n                \"resourceVersion\": \"5874\",\n                \"creationTimestamp\": \"2021-09-10T05:36:56Z\",\n                \"labels\": {\n                    \"beta.kubernetes.io/arch\": \"amd64\",\n                    \"beta.kubernetes.io/instance-type\": \"t3.medium\",\n                    \"beta.kubernetes.io/os\": \"linux\",\n                    \"failure-domain.beta.kubernetes.io/region\": \"us-west-2\",\n                    \"failure-domain.beta.kubernetes.io/zone\": \"us-west-2a\",\n                    \"kops.k8s.io/instancegroup\": \"nodes-us-west-2a\",\n                    \"kubernetes.io/arch\": \"amd64\",\n                    \"kubernetes.io/hostname\": \"ip-172-20-34-221.us-west-2.compute.internal\",\n                    \"kubernetes.io/os\": \"linux\",\n                    \"kubernetes.io/role\": \"node\",\n                    \"mounted_volume_expand\": \"mounted-volume-expand-312\",\n                    \"node-role.kubernetes.io/node\": \"\",\n                    \"node.kubernetes.io/instance-type\": \"t3.medium\",\n                    \"topology.hostpath.csi/node\": \"ip-172-20-34-221.us-west-2.compute.internal\",\n                    \"topology.kubernetes.io/region\": \"us-west-2\",\n                    \"topology.kubernetes.io/zone\": \"us-west-2a\"\n                },\n                \"annotations\": {\n                    \"csi.volume.kubernetes.io/nodeid\": \"{\\\"csi-hostpath-volume-expand-9966\\\":\\\"ip-172-20-34-221.us-west-2.compute.internal\\\"}\",\n                    \"node.alpha.kubernetes.io/ttl\": \"0\",\n                    \"volumes.kubernetes.io/controller-managed-attach-detach\": \"true\"\n                }\n            },\n            \"spec\": {\n                \"podCIDR\": \"100.96.3.0/24\",\n                \"podCIDRs\": [\n                    \"100.96.3.0/24\"\n                ],\n                \"providerID\": \"aws:///us-west-2a/i-02fcd4692ca637b53\"\n            },\n            \"status\": {\n                \"capacity\": {\n                    \"attachable-volumes-aws-ebs\": \"25\",\n                    \"cpu\": \"2\",\n                    \"ephemeral-storage\": \"48725632Ki\",\n                    \"hugepages-1Gi\": \"0\",\n                    \"hugepages-2Mi\": \"0\",\n                    \"memory\": \"3964584Ki\",\n                    \"pods\": \"110\"\n                },\n                \"allocatable\": {\n                    \"attachable-volumes-aws-ebs\": \"25\",\n                    \"cpu\": \"2\",\n                    \"ephemeral-storage\": \"44905542377\",\n                    \"hugepages-1Gi\": \"0\",\n                    \"hugepages-2Mi\": \"0\",\n                    \"memory\": \"3862184Ki\",\n                    \"pods\": \"110\"\n                },\n                \"conditions\": [\n                    {\n                        \"type\": \"NetworkUnavailable\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-09-10T05:37:00Z\",\n                        \"lastTransitionTime\": \"2021-09-10T05:37:00Z\",\n                        \"reason\": \"RouteCreated\",\n                        \"message\": 
\"RouteController created a route\"\n                    },\n                    {\n                        \"type\": \"MemoryPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-09-10T05:41:37Z\",\n                        \"lastTransitionTime\": \"2021-09-10T05:36:56Z\",\n                        \"reason\": \"KubeletHasSufficientMemory\",\n                        \"message\": \"kubelet has sufficient memory available\"\n                    },\n                    {\n                        \"type\": \"DiskPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-09-10T05:41:37Z\",\n                        \"lastTransitionTime\": \"2021-09-10T05:36:56Z\",\n                        \"reason\": \"KubeletHasNoDiskPressure\",\n                        \"message\": \"kubelet has no disk pressure\"\n                    },\n                    {\n                        \"type\": \"PIDPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-09-10T05:41:37Z\",\n                        \"lastTransitionTime\": \"2021-09-10T05:36:56Z\",\n                        \"reason\": \"KubeletHasSufficientPID\",\n                        \"message\": \"kubelet has sufficient PID available\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastHeartbeatTime\": \"2021-09-10T05:41:37Z\",\n                        \"lastTransitionTime\": \"2021-09-10T05:37:06Z\",\n                        \"reason\": \"KubeletReady\",\n                        \"message\": \"kubelet is posting ready status. 
AppArmor enabled\"\n                    }\n                ],\n                \"addresses\": [\n                    {\n                        \"type\": \"InternalIP\",\n                        \"address\": \"172.20.34.221\"\n                    },\n                    {\n                        \"type\": \"ExternalIP\",\n                        \"address\": \"34.220.79.145\"\n                    },\n                    {\n                        \"type\": \"Hostname\",\n                        \"address\": \"ip-172-20-34-221.us-west-2.compute.internal\"\n                    },\n                    {\n                        \"type\": \"InternalDNS\",\n                        \"address\": \"ip-172-20-34-221.us-west-2.compute.internal\"\n                    },\n                    {\n                        \"type\": \"ExternalDNS\",\n                        \"address\": \"ec2-34-220-79-145.us-west-2.compute.amazonaws.com\"\n                    }\n                ],\n                \"daemonEndpoints\": {\n                    \"kubeletEndpoint\": {\n                        \"Port\": 10250\n                    }\n                },\n                \"nodeInfo\": {\n                    \"machineID\": \"ec2b68a792368da1210e402d517c9272\",\n                    \"systemUUID\": \"ec2b68a7-9236-8da1-210e-402d517c9272\",\n                    \"bootID\": \"133fe969-966b-4eb5-ad47-06abdfd5735a\",\n                    \"kernelVersion\": \"5.11.0-1016-aws\",\n                    \"osImage\": \"Ubuntu 20.04.3 LTS\",\n                    \"containerRuntimeVersion\": \"containerd://1.4.9\",\n                    \"kubeletVersion\": \"v1.21.4\",\n                    \"kubeProxyVersion\": \"v1.21.4\",\n                    \"operatingSystem\": \"linux\",\n                    \"architecture\": \"amd64\"\n                },\n                \"images\": [\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kube-proxy-amd64:v1.21.4\"\n                        ],\n                        \"sizeBytes\": 105127625\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1\",\n                            \"k8s.gcr.io/e2e-test-images/agnhost:2.32\"\n                        ],\n                        \"sizeBytes\": 50002177\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-provisioner@sha256:695505fcfcc69f1cf35665dce487aad447adbb9af69b796d6437f869015d1157\",\n                            \"k8s.gcr.io/sig-storage/csi-provisioner:v2.1.1\"\n                        ],\n                        \"sizeBytes\": 21212251\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2\",\n                            \"k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0\"\n                        ],\n                        \"sizeBytes\": 21205045\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-snapshotter@sha256:51f2dfde5bccac7854b3704689506aeecfb793328427b91115ba253a93e60782\",\n                            \"k8s.gcr.io/sig-storage/csi-snapshotter:v4.0.0\"\n                       
 ],\n                        \"sizeBytes\": 20194320\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-attacher@sha256:50c3cfd458fc8e0bf3c8c521eac39172009382fc66dc5044a330d137c6ed0b09\",\n                            \"k8s.gcr.io/sig-storage/csi-attacher:v3.1.0\"\n                        ],\n                        \"sizeBytes\": 20103959\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-resizer@sha256:7a5ba58a44e0d749e0767e4e37315bcf6a61f33ce3185c1991848af4db0fb70a\",\n                            \"k8s.gcr.io/sig-storage/csi-resizer:v1.1.0\"\n                        ],\n                        \"sizeBytes\": 20096832\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab\",\n                            \"k8s.gcr.io/sig-storage/csi-attacher:v2.2.0\"\n                        ],\n                        \"sizeBytes\": 18451536\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c\",\n                            \"k8s.gcr.io/sig-storage/csi-resizer:v0.5.0\"\n                        ],\n                        \"sizeBytes\": 18412631\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/hostpathplugin@sha256:d2b357bb02430fee9eaa43b16083981463d260419fe3acb2f560ede5c129f6f5\",\n                            \"k8s.gcr.io/sig-storage/hostpathplugin:v1.4.0\"\n                        ],\n                        \"sizeBytes\": 13995876\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/coredns/coredns@sha256:6e5a02c21641597998b4be7cb5eb1e7b02c0d8d23cce4dd09f4682d463798890\",\n                            \"k8s.gcr.io/coredns/coredns:v1.8.4\"\n                        ],\n                        \"sizeBytes\": 13707249\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f\",\n                            \"k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0\"\n                        ],\n                        \"sizeBytes\": 9068367\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:e07f914c32f0505e4c470a62a40ee43f84cbf8dc46ff861f31b14457ccbad108\",\n                            \"k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.0.1\"\n                        ],\n                        \"sizeBytes\": 8415088\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/livenessprobe@sha256:48da0e4ed7238ad461ea05f68c25921783c37b315f21a5c5a2780157a6460994\",\n                            \"k8s.gcr.io/sig-storage/livenessprobe:v2.2.0\"\n                        ],\n                        \"sizeBytes\": 8279778\n                    },\n                    {\n 
                       \"names\": [\n                            \"k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793\",\n                            \"k8s.gcr.io/sig-storage/mock-driver:v4.1.0\"\n                        ],\n                        \"sizeBytes\": 8223849\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592\",\n                            \"k8s.gcr.io/e2e-test-images/busybox:1.29-1\"\n                        ],\n                        \"sizeBytes\": 732746\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810\",\n                            \"k8s.gcr.io/pause:3.4.1\"\n                        ],\n                        \"sizeBytes\": 301268\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f\",\n                            \"k8s.gcr.io/pause:3.2\"\n                        ],\n                        \"sizeBytes\": 299513\n                    }\n                ],\n                \"volumesInUse\": [\n                    \"kubernetes.io/aws-ebs/aws://us-west-2a/vol-09a892ffafeb5e1a4\",\n                    \"kubernetes.io/csi/csi-hostpath-volume-expand-9966^ac8a24ed-11f9-11ec-b2c0-4ec12f1cc07d\"\n                ],\n                \"volumesAttached\": [\n                    {\n                        \"name\": \"kubernetes.io/csi/csi-hostpath-volume-expand-9966^ac8a24ed-11f9-11ec-b2c0-4ec12f1cc07d\",\n                        \"devicePath\": \"\"\n                    },\n                    {\n                        \"name\": \"kubernetes.io/aws-ebs/aws://us-west-2a/vol-09a892ffafeb5e1a4\",\n                        \"devicePath\": \"/dev/xvdbz\"\n                    }\n                ]\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"ip-172-20-37-129.us-west-2.compute.internal\",\n                \"uid\": \"2e6b63be-8000-41f1-a765-bdf5cceacb43\",\n                \"resourceVersion\": \"5051\",\n                \"creationTimestamp\": \"2021-09-10T05:35:48Z\",\n                \"labels\": {\n                    \"beta.kubernetes.io/arch\": \"amd64\",\n                    \"beta.kubernetes.io/instance-type\": \"c5.large\",\n                    \"beta.kubernetes.io/os\": \"linux\",\n                    \"failure-domain.beta.kubernetes.io/region\": \"us-west-2\",\n                    \"failure-domain.beta.kubernetes.io/zone\": \"us-west-2a\",\n                    \"kops.k8s.io/instancegroup\": \"master-us-west-2a\",\n                    \"kops.k8s.io/kops-controller-pki\": \"\",\n                    \"kubernetes.io/arch\": \"amd64\",\n                    \"kubernetes.io/hostname\": \"ip-172-20-37-129.us-west-2.compute.internal\",\n                    \"kubernetes.io/os\": \"linux\",\n                    \"kubernetes.io/role\": \"master\",\n                    \"node-role.kubernetes.io/control-plane\": \"\",\n                    \"node-role.kubernetes.io/master\": \"\",\n                    \"node.kubernetes.io/exclude-from-external-load-balancers\": \"\",\n                    
\"node.kubernetes.io/instance-type\": \"c5.large\",\n                    \"topology.kubernetes.io/region\": \"us-west-2\",\n                    \"topology.kubernetes.io/zone\": \"us-west-2a\"\n                },\n                \"annotations\": {\n                    \"node.alpha.kubernetes.io/ttl\": \"0\",\n                    \"volumes.kubernetes.io/controller-managed-attach-detach\": \"true\"\n                }\n            },\n            \"spec\": {\n                \"podCIDR\": \"100.96.0.0/24\",\n                \"podCIDRs\": [\n                    \"100.96.0.0/24\"\n                ],\n                \"providerID\": \"aws:///us-west-2a/i-02f21e556c94dcd7c\",\n                \"taints\": [\n                    {\n                        \"key\": \"node-role.kubernetes.io/master\",\n                        \"effect\": \"NoSchedule\"\n                    }\n                ]\n            },\n            \"status\": {\n                \"capacity\": {\n                    \"attachable-volumes-aws-ebs\": \"25\",\n                    \"cpu\": \"2\",\n                    \"ephemeral-storage\": \"48725632Ki\",\n                    \"hugepages-1Gi\": \"0\",\n                    \"hugepages-2Mi\": \"0\",\n                    \"memory\": \"3780268Ki\",\n                    \"pods\": \"110\"\n                },\n                \"allocatable\": {\n                    \"attachable-volumes-aws-ebs\": \"25\",\n                    \"cpu\": \"2\",\n                    \"ephemeral-storage\": \"44905542377\",\n                    \"hugepages-1Gi\": \"0\",\n                    \"hugepages-2Mi\": \"0\",\n                    \"memory\": \"3677868Ki\",\n                    \"pods\": \"110\"\n                },\n                \"conditions\": [\n                    {\n                        \"type\": \"NetworkUnavailable\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-09-10T05:36:20Z\",\n                        \"lastTransitionTime\": \"2021-09-10T05:36:20Z\",\n                        \"reason\": \"RouteCreated\",\n                        \"message\": \"RouteController created a route\"\n                    },\n                    {\n                        \"type\": \"MemoryPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-09-10T05:41:19Z\",\n                        \"lastTransitionTime\": \"2021-09-10T05:35:44Z\",\n                        \"reason\": \"KubeletHasSufficientMemory\",\n                        \"message\": \"kubelet has sufficient memory available\"\n                    },\n                    {\n                        \"type\": \"DiskPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-09-10T05:41:19Z\",\n                        \"lastTransitionTime\": \"2021-09-10T05:35:44Z\",\n                        \"reason\": \"KubeletHasNoDiskPressure\",\n                        \"message\": \"kubelet has no disk pressure\"\n                    },\n                    {\n                        \"type\": \"PIDPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-09-10T05:41:19Z\",\n                        \"lastTransitionTime\": \"2021-09-10T05:35:44Z\",\n                        \"reason\": \"KubeletHasSufficientPID\",\n                        \"message\": \"kubelet has sufficient PID available\"\n                    },\n                    {\n            
            \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastHeartbeatTime\": \"2021-09-10T05:41:19Z\",\n                        \"lastTransitionTime\": \"2021-09-10T05:36:18Z\",\n                        \"reason\": \"KubeletReady\",\n                        \"message\": \"kubelet is posting ready status. AppArmor enabled\"\n                    }\n                ],\n                \"addresses\": [\n                    {\n                        \"type\": \"InternalIP\",\n                        \"address\": \"172.20.37.129\"\n                    },\n                    {\n                        \"type\": \"ExternalIP\",\n                        \"address\": \"34.215.112.57\"\n                    },\n                    {\n                        \"type\": \"Hostname\",\n                        \"address\": \"ip-172-20-37-129.us-west-2.compute.internal\"\n                    },\n                    {\n                        \"type\": \"InternalDNS\",\n                        \"address\": \"ip-172-20-37-129.us-west-2.compute.internal\"\n                    },\n                    {\n                        \"type\": \"ExternalDNS\",\n                        \"address\": \"ec2-34-215-112-57.us-west-2.compute.amazonaws.com\"\n                    }\n                ],\n                \"daemonEndpoints\": {\n                    \"kubeletEndpoint\": {\n                        \"Port\": 10250\n                    }\n                },\n                \"nodeInfo\": {\n                    \"machineID\": \"ec23a3530d6c5600b38062777f3e55ab\",\n                    \"systemUUID\": \"ec23a353-0d6c-5600-b380-62777f3e55ab\",\n                    \"bootID\": \"0c6aa3f2-7024-4242-b92d-6d014c0ef509\",\n                    \"kernelVersion\": \"5.11.0-1016-aws\",\n                    \"osImage\": \"Ubuntu 20.04.3 LTS\",\n                    \"containerRuntimeVersion\": \"containerd://1.4.9\",\n                    \"kubeletVersion\": \"v1.21.4\",\n                    \"kubeProxyVersion\": \"v1.21.4\",\n                    \"operatingSystem\": \"linux\",\n                    \"architecture\": \"amd64\"\n                },\n                \"images\": [\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/etcdadm/etcd-manager@sha256:17c07a22ebd996b93f6484437c684244219e325abeb70611cbaceb78c0f2d5d4\",\n                            \"k8s.gcr.io/etcdadm/etcd-manager:3.0.20210707\"\n                        ],\n                        \"sizeBytes\": 172004323\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kube-apiserver-amd64@sha256:f29008c0c91003edb5e5d87c6e7242e31f7bb814af98c7b885e75aa96f5c37de\",\n                            \"k8s.gcr.io/kube-apiserver-amd64:v1.21.4\"\n                        ],\n                        \"sizeBytes\": 126880211\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kube-controller-manager-amd64@sha256:6980d349c4d2c4f0c41ca052ed5532f7b947f9ef0d59f0cefb2e4f99feff2070\",\n                            \"k8s.gcr.io/kube-controller-manager-amd64:v1.21.4\"\n                        ],\n                        \"sizeBytes\": 121092409\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kops/dns-controller:1.22.0-beta.1\"\n                        ],\n                   
     \"sizeBytes\": 114167313\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kops/kops-controller:1.22.0-beta.1\"\n                        ],\n                        \"sizeBytes\": 113235474\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kube-proxy-amd64:v1.21.4\"\n                        ],\n                        \"sizeBytes\": 105127625\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kube-scheduler-amd64:v1.21.4\"\n                        ],\n                        \"sizeBytes\": 51890488\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kops/kube-apiserver-healthcheck:1.22.0-beta.1\"\n                        ],\n                        \"sizeBytes\": 25622039\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f\",\n                            \"k8s.gcr.io/pause:3.2\"\n                        ],\n                        \"sizeBytes\": 299513\n                    }\n                ]\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"ip-172-20-37-76.us-west-2.compute.internal\",\n                \"uid\": \"e0884f2d-6742-49c6-812e-d3f37a8934b8\",\n                \"resourceVersion\": \"5738\",\n                \"creationTimestamp\": \"2021-09-10T05:36:51Z\",\n                \"labels\": {\n                    \"beta.kubernetes.io/arch\": \"amd64\",\n                    \"beta.kubernetes.io/instance-type\": \"t3.medium\",\n                    \"beta.kubernetes.io/os\": \"linux\",\n                    \"failure-domain.beta.kubernetes.io/region\": \"us-west-2\",\n                    \"failure-domain.beta.kubernetes.io/zone\": \"us-west-2a\",\n                    \"kops.k8s.io/instancegroup\": \"nodes-us-west-2a\",\n                    \"kubernetes.io/arch\": \"amd64\",\n                    \"kubernetes.io/hostname\": \"ip-172-20-37-76.us-west-2.compute.internal\",\n                    \"kubernetes.io/os\": \"linux\",\n                    \"kubernetes.io/role\": \"node\",\n                    \"node-role.kubernetes.io/node\": \"\",\n                    \"node.kubernetes.io/instance-type\": \"t3.medium\",\n                    \"topology.kubernetes.io/region\": \"us-west-2\",\n                    \"topology.kubernetes.io/zone\": \"us-west-2a\"\n                },\n                \"annotations\": {\n                    \"csi.volume.kubernetes.io/nodeid\": \"{\\\"csi-mock-csi-mock-volumes-2213\\\":\\\"csi-mock-csi-mock-volumes-2213\\\"}\",\n                    \"node.alpha.kubernetes.io/ttl\": \"0\",\n                    \"volumes.kubernetes.io/controller-managed-attach-detach\": \"true\"\n                }\n            },\n            \"spec\": {\n                \"podCIDR\": \"100.96.1.0/24\",\n                \"podCIDRs\": [\n                    \"100.96.1.0/24\"\n                ],\n                \"providerID\": \"aws:///us-west-2a/i-0a0358b25cc8b1f01\"\n            },\n            \"status\": {\n                \"capacity\": {\n                    \"attachable-volumes-aws-ebs\": \"25\",\n                    \"cpu\": \"2\",\n                    \"ephemeral-storage\": 
\"48725632Ki\",\n                    \"hugepages-1Gi\": \"0\",\n                    \"hugepages-2Mi\": \"0\",\n                    \"memory\": \"3964584Ki\",\n                    \"pods\": \"110\"\n                },\n                \"allocatable\": {\n                    \"attachable-volumes-aws-ebs\": \"25\",\n                    \"cpu\": \"2\",\n                    \"ephemeral-storage\": \"44905542377\",\n                    \"hugepages-1Gi\": \"0\",\n                    \"hugepages-2Mi\": \"0\",\n                    \"memory\": \"3862184Ki\",\n                    \"pods\": \"110\"\n                },\n                \"conditions\": [\n                    {\n                        \"type\": \"NetworkUnavailable\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-09-10T05:37:00Z\",\n                        \"lastTransitionTime\": \"2021-09-10T05:37:00Z\",\n                        \"reason\": \"RouteCreated\",\n                        \"message\": \"RouteController created a route\"\n                    },\n                    {\n                        \"type\": \"MemoryPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-09-10T05:41:31Z\",\n                        \"lastTransitionTime\": \"2021-09-10T05:36:51Z\",\n                        \"reason\": \"KubeletHasSufficientMemory\",\n                        \"message\": \"kubelet has sufficient memory available\"\n                    },\n                    {\n                        \"type\": \"DiskPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-09-10T05:41:31Z\",\n                        \"lastTransitionTime\": \"2021-09-10T05:36:51Z\",\n                        \"reason\": \"KubeletHasNoDiskPressure\",\n                        \"message\": \"kubelet has no disk pressure\"\n                    },\n                    {\n                        \"type\": \"PIDPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-09-10T05:41:31Z\",\n                        \"lastTransitionTime\": \"2021-09-10T05:36:51Z\",\n                        \"reason\": \"KubeletHasSufficientPID\",\n                        \"message\": \"kubelet has sufficient PID available\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastHeartbeatTime\": \"2021-09-10T05:41:31Z\",\n                        \"lastTransitionTime\": \"2021-09-10T05:37:01Z\",\n                        \"reason\": \"KubeletReady\",\n                        \"message\": \"kubelet is posting ready status. 
AppArmor enabled\"\n                    }\n                ],\n                \"addresses\": [\n                    {\n                        \"type\": \"InternalIP\",\n                        \"address\": \"172.20.37.76\"\n                    },\n                    {\n                        \"type\": \"ExternalIP\",\n                        \"address\": \"34.220.213.19\"\n                    },\n                    {\n                        \"type\": \"Hostname\",\n                        \"address\": \"ip-172-20-37-76.us-west-2.compute.internal\"\n                    },\n                    {\n                        \"type\": \"InternalDNS\",\n                        \"address\": \"ip-172-20-37-76.us-west-2.compute.internal\"\n                    },\n                    {\n                        \"type\": \"ExternalDNS\",\n                        \"address\": \"ec2-34-220-213-19.us-west-2.compute.amazonaws.com\"\n                    }\n                ],\n                \"daemonEndpoints\": {\n                    \"kubeletEndpoint\": {\n                        \"Port\": 10250\n                    }\n                },\n                \"nodeInfo\": {\n                    \"machineID\": \"ec2b3c95b8193ae60f38393c75a5055e\",\n                    \"systemUUID\": \"ec2b3c95-b819-3ae6-0f38-393c75a5055e\",\n                    \"bootID\": \"879a5f1a-0fd7-48bb-a849-002e3924b720\",\n                    \"kernelVersion\": \"5.11.0-1016-aws\",\n                    \"osImage\": \"Ubuntu 20.04.3 LTS\",\n                    \"containerRuntimeVersion\": \"containerd://1.4.9\",\n                    \"kubeletVersion\": \"v1.21.4\",\n                    \"kubeProxyVersion\": \"v1.21.4\",\n                    \"operatingSystem\": \"linux\",\n                    \"architecture\": \"amd64\"\n                },\n                \"images\": [\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kube-proxy-amd64:v1.21.4\"\n                        ],\n                        \"sizeBytes\": 105127625\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/volume/nfs@sha256:124a375b4f930627c65b2f84c0d0f09229a96bc527eec18ad0eeac150b96d1c2\",\n                            \"k8s.gcr.io/e2e-test-images/volume/nfs:1.2\"\n                        ],\n                        \"sizeBytes\": 95843946\n                    },\n                    {\n                        \"names\": [\n                            \"docker.io/library/nginx@sha256:853b221d3341add7aaadf5f81dd088ea943ab9c918766e295321294b035f3f3e\",\n                            \"docker.io/library/nginx:latest\"\n                        ],\n                        \"sizeBytes\": 53799391\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1\",\n                            \"k8s.gcr.io/e2e-test-images/agnhost:2.32\"\n                        ],\n                        \"sizeBytes\": 50002177\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50\",\n                            \"k8s.gcr.io/e2e-test-images/httpd:2.4.38-1\"\n                        ],\n                        \"sizeBytes\": 
40765006\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2\",\n                            \"k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0\"\n                        ],\n                        \"sizeBytes\": 21205045\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab\",\n                            \"k8s.gcr.io/sig-storage/csi-attacher:v2.2.0\"\n                        ],\n                        \"sizeBytes\": 18451536\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f\",\n                            \"k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0\"\n                        ],\n                        \"sizeBytes\": 9068367\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793\",\n                            \"k8s.gcr.io/sig-storage/mock-driver:v4.1.0\"\n                        ],\n                        \"sizeBytes\": 8223849\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b\",\n                            \"k8s.gcr.io/e2e-test-images/nginx:1.14-1\"\n                        ],\n                        \"sizeBytes\": 6979365\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592\",\n                            \"k8s.gcr.io/e2e-test-images/busybox:1.29-1\"\n                        ],\n                        \"sizeBytes\": 732746\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810\",\n                            \"k8s.gcr.io/pause:3.4.1\"\n                        ],\n                        \"sizeBytes\": 301268\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f\",\n                            \"k8s.gcr.io/pause:3.2\"\n                        ],\n                        \"sizeBytes\": 299513\n                    }\n                ],\n                \"volumesInUse\": [\n                    \"kubernetes.io/csi/csi-mock-csi-mock-volumes-2213^4\"\n                ],\n                \"volumesAttached\": [\n                    {\n                        \"name\": \"kubernetes.io/csi/csi-mock-csi-mock-volumes-2213^4\",\n                        \"devicePath\": \"\"\n                    }\n                ]\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"ip-172-20-38-104.us-west-2.compute.internal\",\n           
     \"uid\": \"df456c33-cb9d-4931-b6d4-00f79370f631\",\n                \"resourceVersion\": \"6043\",\n                \"creationTimestamp\": \"2021-09-10T05:37:00Z\",\n                \"labels\": {\n                    \"beta.kubernetes.io/arch\": \"amd64\",\n                    \"beta.kubernetes.io/instance-type\": \"t3.medium\",\n                    \"beta.kubernetes.io/os\": \"linux\",\n                    \"failure-domain.beta.kubernetes.io/region\": \"us-west-2\",\n                    \"failure-domain.beta.kubernetes.io/zone\": \"us-west-2a\",\n                    \"kops.k8s.io/instancegroup\": \"nodes-us-west-2a\",\n                    \"kubernetes.io/arch\": \"amd64\",\n                    \"kubernetes.io/hostname\": \"ip-172-20-38-104.us-west-2.compute.internal\",\n                    \"kubernetes.io/os\": \"linux\",\n                    \"kubernetes.io/role\": \"node\",\n                    \"node-role.kubernetes.io/node\": \"\",\n                    \"node.kubernetes.io/instance-type\": \"t3.medium\",\n                    \"topology.kubernetes.io/region\": \"us-west-2\",\n                    \"topology.kubernetes.io/zone\": \"us-west-2a\"\n                },\n                \"annotations\": {\n                    \"csi.volume.kubernetes.io/nodeid\": \"{\\\"csi-mock-csi-mock-volumes-3165\\\":\\\"csi-mock-csi-mock-volumes-3165\\\",\\\"csi-mock-csi-mock-volumes-5388\\\":\\\"csi-mock-csi-mock-volumes-5388\\\"}\",\n                    \"node.alpha.kubernetes.io/ttl\": \"0\",\n                    \"volumes.kubernetes.io/controller-managed-attach-detach\": \"true\"\n                }\n            },\n            \"spec\": {\n                \"podCIDR\": \"100.96.5.0/24\",\n                \"podCIDRs\": [\n                    \"100.96.5.0/24\"\n                ],\n                \"providerID\": \"aws:///us-west-2a/i-09f22ff9c6c5e5d8e\"\n            },\n            \"status\": {\n                \"capacity\": {\n                    \"attachable-volumes-aws-ebs\": \"25\",\n                    \"cpu\": \"2\",\n                    \"ephemeral-storage\": \"48725632Ki\",\n                    \"hugepages-1Gi\": \"0\",\n                    \"hugepages-2Mi\": \"0\",\n                    \"memory\": \"3964592Ki\",\n                    \"pods\": \"110\"\n                },\n                \"allocatable\": {\n                    \"attachable-volumes-aws-ebs\": \"25\",\n                    \"cpu\": \"2\",\n                    \"ephemeral-storage\": \"44905542377\",\n                    \"hugepages-1Gi\": \"0\",\n                    \"hugepages-2Mi\": \"0\",\n                    \"memory\": \"3862192Ki\",\n                    \"pods\": \"110\"\n                },\n                \"conditions\": [\n                    {\n                        \"type\": \"NetworkUnavailable\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-09-10T05:37:10Z\",\n                        \"lastTransitionTime\": \"2021-09-10T05:37:10Z\",\n                        \"reason\": \"RouteCreated\",\n                        \"message\": \"RouteController created a route\"\n                    },\n                    {\n                        \"type\": \"MemoryPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-09-10T05:41:30Z\",\n                        \"lastTransitionTime\": \"2021-09-10T05:37:00Z\",\n                        \"reason\": \"KubeletHasSufficientMemory\",\n                        \"message\": 
\"kubelet has sufficient memory available\"\n                    },\n                    {\n                        \"type\": \"DiskPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-09-10T05:41:30Z\",\n                        \"lastTransitionTime\": \"2021-09-10T05:37:00Z\",\n                        \"reason\": \"KubeletHasNoDiskPressure\",\n                        \"message\": \"kubelet has no disk pressure\"\n                    },\n                    {\n                        \"type\": \"PIDPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-09-10T05:41:30Z\",\n                        \"lastTransitionTime\": \"2021-09-10T05:37:00Z\",\n                        \"reason\": \"KubeletHasSufficientPID\",\n                        \"message\": \"kubelet has sufficient PID available\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastHeartbeatTime\": \"2021-09-10T05:41:30Z\",\n                        \"lastTransitionTime\": \"2021-09-10T05:37:00Z\",\n                        \"reason\": \"KubeletReady\",\n                        \"message\": \"kubelet is posting ready status. AppArmor enabled\"\n                    }\n                ],\n                \"addresses\": [\n                    {\n                        \"type\": \"InternalIP\",\n                        \"address\": \"172.20.38.104\"\n                    },\n                    {\n                        \"type\": \"ExternalIP\",\n                        \"address\": \"35.167.199.191\"\n                    },\n                    {\n                        \"type\": \"Hostname\",\n                        \"address\": \"ip-172-20-38-104.us-west-2.compute.internal\"\n                    },\n                    {\n                        \"type\": \"InternalDNS\",\n                        \"address\": \"ip-172-20-38-104.us-west-2.compute.internal\"\n                    },\n                    {\n                        \"type\": \"ExternalDNS\",\n                        \"address\": \"ec2-35-167-199-191.us-west-2.compute.amazonaws.com\"\n                    }\n                ],\n                \"daemonEndpoints\": {\n                    \"kubeletEndpoint\": {\n                        \"Port\": 10250\n                    }\n                },\n                \"nodeInfo\": {\n                    \"machineID\": \"ec2a7d78036a3d13b3b9ef698ebde3dc\",\n                    \"systemUUID\": \"ec2a7d78-036a-3d13-b3b9-ef698ebde3dc\",\n                    \"bootID\": \"4c240c00-4941-4417-b480-724eda17a4c9\",\n                    \"kernelVersion\": \"5.11.0-1016-aws\",\n                    \"osImage\": \"Ubuntu 20.04.3 LTS\",\n                    \"containerRuntimeVersion\": \"containerd://1.4.9\",\n                    \"kubeletVersion\": \"v1.21.4\",\n                    \"kubeProxyVersion\": \"v1.21.4\",\n                    \"operatingSystem\": \"linux\",\n                    \"architecture\": \"amd64\"\n                },\n                \"images\": [\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89\",\n                            \"k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4\"\n                        ],\n                        
\"sizeBytes\": 112029652\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kube-proxy-amd64:v1.21.4\"\n                        ],\n                        \"sizeBytes\": 105127625\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1\",\n                            \"k8s.gcr.io/e2e-test-images/agnhost:2.32\"\n                        ],\n                        \"sizeBytes\": 50002177\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2\",\n                            \"k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0\"\n                        ],\n                        \"sizeBytes\": 21205045\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab\",\n                            \"k8s.gcr.io/sig-storage/csi-attacher:v2.2.0\"\n                        ],\n                        \"sizeBytes\": 18451536\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c\",\n                            \"k8s.gcr.io/sig-storage/csi-resizer:v0.5.0\"\n                        ],\n                        \"sizeBytes\": 18412631\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/coredns/coredns@sha256:6e5a02c21641597998b4be7cb5eb1e7b02c0d8d23cce4dd09f4682d463798890\",\n                            \"k8s.gcr.io/coredns/coredns:v1.8.4\"\n                        ],\n                        \"sizeBytes\": 13707249\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f\",\n                            \"k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0\"\n                        ],\n                        \"sizeBytes\": 9068367\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793\",\n                            \"k8s.gcr.io/sig-storage/mock-driver:v4.1.0\"\n                        ],\n                        \"sizeBytes\": 8223849\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b\",\n                            \"k8s.gcr.io/e2e-test-images/nginx:1.14-1\"\n                        ],\n                        \"sizeBytes\": 6979365\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592\",\n                            
\"k8s.gcr.io/e2e-test-images/busybox:1.29-1\"\n                        ],\n                        \"sizeBytes\": 732746\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810\",\n                            \"k8s.gcr.io/pause:3.4.1\"\n                        ],\n                        \"sizeBytes\": 301268\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f\",\n                            \"k8s.gcr.io/pause:3.2\"\n                        ],\n                        \"sizeBytes\": 299513\n                    }\n                ]\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"ip-172-20-56-165.us-west-2.compute.internal\",\n                \"uid\": \"ef4937ab-b3eb-4f0e-acb5-3043de5c4632\",\n                \"resourceVersion\": \"5547\",\n                \"creationTimestamp\": \"2021-09-10T05:36:56Z\",\n                \"labels\": {\n                    \"beta.kubernetes.io/arch\": \"amd64\",\n                    \"beta.kubernetes.io/instance-type\": \"t3.medium\",\n                    \"beta.kubernetes.io/os\": \"linux\",\n                    \"failure-domain.beta.kubernetes.io/region\": \"us-west-2\",\n                    \"failure-domain.beta.kubernetes.io/zone\": \"us-west-2a\",\n                    \"kops.k8s.io/instancegroup\": \"nodes-us-west-2a\",\n                    \"kubernetes.io/arch\": \"amd64\",\n                    \"kubernetes.io/hostname\": \"ip-172-20-56-165.us-west-2.compute.internal\",\n                    \"kubernetes.io/os\": \"linux\",\n                    \"kubernetes.io/role\": \"node\",\n                    \"node-role.kubernetes.io/node\": \"\",\n                    \"node.kubernetes.io/instance-type\": \"t3.medium\",\n                    \"topology.hostpath.csi/node\": \"ip-172-20-56-165.us-west-2.compute.internal\",\n                    \"topology.kubernetes.io/region\": \"us-west-2\",\n                    \"topology.kubernetes.io/zone\": \"us-west-2a\"\n                },\n                \"annotations\": {\n                    \"node.alpha.kubernetes.io/ttl\": \"0\",\n                    \"volumes.kubernetes.io/controller-managed-attach-detach\": \"true\"\n                }\n            },\n            \"spec\": {\n                \"podCIDR\": \"100.96.4.0/24\",\n                \"podCIDRs\": [\n                    \"100.96.4.0/24\"\n                ],\n                \"providerID\": \"aws:///us-west-2a/i-0df6c1f25b6689cea\"\n            },\n            \"status\": {\n                \"capacity\": {\n                    \"attachable-volumes-aws-ebs\": \"25\",\n                    \"cpu\": \"2\",\n                    \"ephemeral-storage\": \"48725632Ki\",\n                    \"hugepages-1Gi\": \"0\",\n                    \"hugepages-2Mi\": \"0\",\n                    \"memory\": \"3964584Ki\",\n                    \"pods\": \"110\"\n                },\n                \"allocatable\": {\n                    \"attachable-volumes-aws-ebs\": \"25\",\n                    \"cpu\": \"2\",\n                    \"ephemeral-storage\": \"44905542377\",\n                    \"hugepages-1Gi\": \"0\",\n                    \"hugepages-2Mi\": \"0\",\n                    \"memory\": \"3862184Ki\",\n                    \"pods\": 
\"110\"\n                },\n                \"conditions\": [\n                    {\n                        \"type\": \"NetworkUnavailable\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-09-10T05:37:00Z\",\n                        \"lastTransitionTime\": \"2021-09-10T05:37:00Z\",\n                        \"reason\": \"RouteCreated\",\n                        \"message\": \"RouteController created a route\"\n                    },\n                    {\n                        \"type\": \"MemoryPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-09-10T05:40:57Z\",\n                        \"lastTransitionTime\": \"2021-09-10T05:36:56Z\",\n                        \"reason\": \"KubeletHasSufficientMemory\",\n                        \"message\": \"kubelet has sufficient memory available\"\n                    },\n                    {\n                        \"type\": \"DiskPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-09-10T05:40:57Z\",\n                        \"lastTransitionTime\": \"2021-09-10T05:36:56Z\",\n                        \"reason\": \"KubeletHasNoDiskPressure\",\n                        \"message\": \"kubelet has no disk pressure\"\n                    },\n                    {\n                        \"type\": \"PIDPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-09-10T05:40:57Z\",\n                        \"lastTransitionTime\": \"2021-09-10T05:36:56Z\",\n                        \"reason\": \"KubeletHasSufficientPID\",\n                        \"message\": \"kubelet has sufficient PID available\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastHeartbeatTime\": \"2021-09-10T05:40:57Z\",\n                        \"lastTransitionTime\": \"2021-09-10T05:37:06Z\",\n                        \"reason\": \"KubeletReady\",\n                        \"message\": \"kubelet is posting ready status. 
AppArmor enabled\"\n                    }\n                ],\n                \"addresses\": [\n                    {\n                        \"type\": \"InternalIP\",\n                        \"address\": \"172.20.56.165\"\n                    },\n                    {\n                        \"type\": \"ExternalIP\",\n                        \"address\": \"54.185.177.200\"\n                    },\n                    {\n                        \"type\": \"Hostname\",\n                        \"address\": \"ip-172-20-56-165.us-west-2.compute.internal\"\n                    },\n                    {\n                        \"type\": \"InternalDNS\",\n                        \"address\": \"ip-172-20-56-165.us-west-2.compute.internal\"\n                    },\n                    {\n                        \"type\": \"ExternalDNS\",\n                        \"address\": \"ec2-54-185-177-200.us-west-2.compute.amazonaws.com\"\n                    }\n                ],\n                \"daemonEndpoints\": {\n                    \"kubeletEndpoint\": {\n                        \"Port\": 10250\n                    }\n                },\n                \"nodeInfo\": {\n                    \"machineID\": \"ec248f96659ca8ad1bcc15d6b78bcc78\",\n                    \"systemUUID\": \"ec248f96-659c-a8ad-1bcc-15d6b78bcc78\",\n                    \"bootID\": \"141f7f73-7cbc-4745-8aba-bb7ca57d4ed3\",\n                    \"kernelVersion\": \"5.11.0-1016-aws\",\n                    \"osImage\": \"Ubuntu 20.04.3 LTS\",\n                    \"containerRuntimeVersion\": \"containerd://1.4.9\",\n                    \"kubeletVersion\": \"v1.21.4\",\n                    \"kubeProxyVersion\": \"v1.21.4\",\n                    \"operatingSystem\": \"linux\",\n                    \"architecture\": \"amd64\"\n                },\n                \"images\": [\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kube-proxy-amd64:v1.21.4\"\n                        ],\n                        \"sizeBytes\": 105127625\n                    },\n                    {\n                        \"names\": [\n                            \"docker.io/library/nginx@sha256:853b221d3341add7aaadf5f81dd088ea943ab9c918766e295321294b035f3f3e\",\n                            \"docker.io/library/nginx:latest\"\n                        ],\n                        \"sizeBytes\": 53799391\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1\",\n                            \"k8s.gcr.io/e2e-test-images/agnhost:2.32\"\n                        ],\n                        \"sizeBytes\": 50002177\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50\",\n                            \"k8s.gcr.io/e2e-test-images/httpd:2.4.38-1\"\n                        ],\n                        \"sizeBytes\": 40765006\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-provisioner@sha256:695505fcfcc69f1cf35665dce487aad447adbb9af69b796d6437f869015d1157\",\n                            \"k8s.gcr.io/sig-storage/csi-provisioner:v2.1.1\"\n                        ],\n                        
\"sizeBytes\": 21212251\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-snapshotter@sha256:51f2dfde5bccac7854b3704689506aeecfb793328427b91115ba253a93e60782\",\n                            \"k8s.gcr.io/sig-storage/csi-snapshotter:v4.0.0\"\n                        ],\n                        \"sizeBytes\": 20194320\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-attacher@sha256:50c3cfd458fc8e0bf3c8c521eac39172009382fc66dc5044a330d137c6ed0b09\",\n                            \"k8s.gcr.io/sig-storage/csi-attacher:v3.1.0\"\n                        ],\n                        \"sizeBytes\": 20103959\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-resizer@sha256:7a5ba58a44e0d749e0767e4e37315bcf6a61f33ce3185c1991848af4db0fb70a\",\n                            \"k8s.gcr.io/sig-storage/csi-resizer:v1.1.0\"\n                        ],\n                        \"sizeBytes\": 20096832\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def\",\n                            \"k8s.gcr.io/cpa/cluster-proportional-autoscaler:1.8.4\"\n                        ],\n                        \"sizeBytes\": 15209393\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/hostpathplugin@sha256:d2b357bb02430fee9eaa43b16083981463d260419fe3acb2f560ede5c129f6f5\",\n                            \"k8s.gcr.io/sig-storage/hostpathplugin:v1.4.0\"\n                        ],\n                        \"sizeBytes\": 13995876\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:e07f914c32f0505e4c470a62a40ee43f84cbf8dc46ff861f31b14457ccbad108\",\n                            \"k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.0.1\"\n                        ],\n                        \"sizeBytes\": 8415088\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/livenessprobe@sha256:48da0e4ed7238ad461ea05f68c25921783c37b315f21a5c5a2780157a6460994\",\n                            \"k8s.gcr.io/sig-storage/livenessprobe:v2.2.0\"\n                        ],\n                        \"sizeBytes\": 8279778\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b\",\n                            \"k8s.gcr.io/e2e-test-images/nginx:1.14-1\"\n                        ],\n                        \"sizeBytes\": 6979365\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592\",\n                            \"k8s.gcr.io/e2e-test-images/busybox:1.29-1\"\n                        ],\n                        \"sizeBytes\": 732746\n                    },\n                    {\n                   
     \"names\": [\n                            \"k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810\",\n                            \"k8s.gcr.io/pause:3.4.1\"\n                        ],\n                        \"sizeBytes\": 301268\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f\",\n                            \"k8s.gcr.io/pause:3.2\"\n                        ],\n                        \"sizeBytes\": 299513\n                    }\n                ]\n            }\n        }\n    ]\n}\n{\n    \"kind\": \"EventList\",\n    \"apiVersion\": \"v1\",\n    \"metadata\": {\n        \"resourceVersion\": \"2520\"\n    },\n    \"items\": [\n        {\n            \"metadata\": {\n                \"name\": \"coredns-5dc785954d-26bxp.16a35f37341d047e\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"47a02970-2159-40a0-b0d4-c4e527735838\",\n                \"resourceVersion\": \"80\",\n                \"creationTimestamp\": \"2021-09-10T05:36:20Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-5dc785954d-26bxp\",\n                \"uid\": \"5c7acd42-89ba-46b5-883e-451e509ab0c7\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"444\"\n            },\n            \"reason\": \"FailedScheduling\",\n            \"message\": \"0/1 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2021-09-10T05:36:20Z\",\n            \"lastTimestamp\": \"2021-09-10T05:36:24Z\",\n            \"count\": 3,\n            \"type\": \"Warning\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-5dc785954d-26bxp.16a35f3e5533af92\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"1a91faea-ba55-4a5e-af49-d7742cbf6ee8\",\n                \"resourceVersion\": \"87\",\n                \"creationTimestamp\": \"2021-09-10T05:36:51Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-5dc785954d-26bxp\",\n                \"uid\": \"5c7acd42-89ba-46b5-883e-451e509ab0c7\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"450\"\n            },\n            \"reason\": \"FailedScheduling\",\n            \"message\": \"0/2 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2021-09-10T05:36:51Z\",\n            \"lastTimestamp\": \"2021-09-10T05:36:51Z\",\n            \"count\": 1,\n            \"type\": \"Warning\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            
\"metadata\": {\n                \"name\": \"coredns-5dc785954d-26bxp.16a35f406161b848\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"12c41365-bb64-4568-876c-67c0238be0ae\",\n                \"resourceVersion\": \"122\",\n                \"creationTimestamp\": \"2021-09-10T05:37:00Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-5dc785954d-26bxp\",\n                \"uid\": \"5c7acd42-89ba-46b5-883e-451e509ab0c7\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"547\"\n            },\n            \"reason\": \"FailedScheduling\",\n            \"message\": \"0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 4 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2021-09-10T05:37:00Z\",\n            \"lastTimestamp\": \"2021-09-10T05:37:00Z\",\n            \"count\": 1,\n            \"type\": \"Warning\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-5dc785954d-26bxp.16a35f42f15e2dee\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"68f16829-5104-457b-97be-f6b4d44eecf5\",\n                \"resourceVersion\": \"135\",\n                \"creationTimestamp\": \"2021-09-10T05:37:11Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-5dc785954d-26bxp\",\n                \"uid\": \"5c7acd42-89ba-46b5-883e-451e509ab0c7\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"593\"\n            },\n            \"reason\": \"Scheduled\",\n            \"message\": \"Successfully assigned kube-system/coredns-5dc785954d-26bxp to ip-172-20-38-104.us-west-2.compute.internal\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2021-09-10T05:37:11Z\",\n            \"lastTimestamp\": \"2021-09-10T05:37:11Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-5dc785954d-26bxp.16a35f43119c41df\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"ff2dd259-0d65-40cf-bef1-86dbf60502d3\",\n                \"resourceVersion\": \"136\",\n                \"creationTimestamp\": \"2021-09-10T05:37:11Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-5dc785954d-26bxp\",\n                \"uid\": \"5c7acd42-89ba-46b5-883e-451e509ab0c7\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"642\",\n                \"fieldPath\": \"spec.containers{coredns}\"\n            },\n            \"reason\": \"Pulling\",\n            \"message\": \"Pulling image \\\"k8s.gcr.io/coredns/coredns:v1.8.4\\\"\",\n       
     \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-38-104.us-west-2.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-10T05:37:11Z\",\n            \"lastTimestamp\": \"2021-09-10T05:37:11Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-5dc785954d-26bxp.16a35f43a3156dce\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"5e87b9e5-ad49-47cb-a7ca-dd7d06f62c38\",\n                \"resourceVersion\": \"144\",\n                \"creationTimestamp\": \"2021-09-10T05:37:14Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-5dc785954d-26bxp\",\n                \"uid\": \"5c7acd42-89ba-46b5-883e-451e509ab0c7\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"642\",\n                \"fieldPath\": \"spec.containers{coredns}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Successfully pulled image \\\"k8s.gcr.io/coredns/coredns:v1.8.4\\\" in 2.440620284s\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-38-104.us-west-2.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-10T05:37:14Z\",\n            \"lastTimestamp\": \"2021-09-10T05:37:14Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-5dc785954d-26bxp.16a35f43ac931361\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"36c388f1-1697-44b7-aea7-58666761b47d\",\n                \"resourceVersion\": \"145\",\n                \"creationTimestamp\": \"2021-09-10T05:37:14Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-5dc785954d-26bxp\",\n                \"uid\": \"5c7acd42-89ba-46b5-883e-451e509ab0c7\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"642\",\n                \"fieldPath\": \"spec.containers{coredns}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container coredns\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-38-104.us-west-2.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-10T05:37:14Z\",\n            \"lastTimestamp\": \"2021-09-10T05:37:14Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-5dc785954d-26bxp.16a35f43b1bd754d\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"5b604aa5-d56b-4364-b397-9ea84c90fbce\",\n                \"resourceVersion\": \"146\",\n                \"creationTimestamp\": \"2021-09-10T05:37:14Z\"\n            },\n  
          \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-5dc785954d-26bxp\",\n                \"uid\": \"5c7acd42-89ba-46b5-883e-451e509ab0c7\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"642\",\n                \"fieldPath\": \"spec.containers{coredns}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container coredns\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-38-104.us-west-2.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-10T05:37:14Z\",\n            \"lastTimestamp\": \"2021-09-10T05:37:14Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-5dc785954d-xvz58.16a35f43585f658b\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"1314df67-c55e-4c04-9e4d-90a8e41c47cb\",\n                \"resourceVersion\": \"142\",\n                \"creationTimestamp\": \"2021-09-10T05:37:12Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-5dc785954d-xvz58\",\n                \"uid\": \"54912f58-a471-4e88-b176-672eeca87a88\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"653\"\n            },\n            \"reason\": \"Scheduled\",\n            \"message\": \"Successfully assigned kube-system/coredns-5dc785954d-xvz58 to ip-172-20-34-221.us-west-2.compute.internal\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2021-09-10T05:37:12Z\",\n            \"lastTimestamp\": \"2021-09-10T05:37:12Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-5dc785954d-xvz58.16a35f4378723bcb\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"ed109a3a-f283-4025-ac03-621c8aac90b0\",\n                \"resourceVersion\": \"143\",\n                \"creationTimestamp\": \"2021-09-10T05:37:13Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-5dc785954d-xvz58\",\n                \"uid\": \"54912f58-a471-4e88-b176-672eeca87a88\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"656\",\n                \"fieldPath\": \"spec.containers{coredns}\"\n            },\n            \"reason\": \"Pulling\",\n            \"message\": \"Pulling image \\\"k8s.gcr.io/coredns/coredns:v1.8.4\\\"\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-34-221.us-west-2.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-10T05:37:13Z\",\n            \"lastTimestamp\": \"2021-09-10T05:37:13Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            
\"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-5dc785954d-xvz58.16a35f43d2881921\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"d637b9c8-7c40-4bbe-b1e9-68ff97e7bcfe\",\n                \"resourceVersion\": \"147\",\n                \"creationTimestamp\": \"2021-09-10T05:37:14Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-5dc785954d-xvz58\",\n                \"uid\": \"54912f58-a471-4e88-b176-672eeca87a88\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"656\",\n                \"fieldPath\": \"spec.containers{coredns}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Successfully pulled image \\\"k8s.gcr.io/coredns/coredns:v1.8.4\\\" in 1.511346909s\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-34-221.us-west-2.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-10T05:37:14Z\",\n            \"lastTimestamp\": \"2021-09-10T05:37:14Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-5dc785954d-xvz58.16a35f43dcc490dc\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"db7e5981-b194-4f22-8847-344ab68dc1cc\",\n                \"resourceVersion\": \"148\",\n                \"creationTimestamp\": \"2021-09-10T05:37:15Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-5dc785954d-xvz58\",\n                \"uid\": \"54912f58-a471-4e88-b176-672eeca87a88\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"656\",\n                \"fieldPath\": \"spec.containers{coredns}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container coredns\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-34-221.us-west-2.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-10T05:37:15Z\",\n            \"lastTimestamp\": \"2021-09-10T05:37:15Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-5dc785954d-xvz58.16a35f43e18eff62\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"837fee80-18f0-4ad9-9106-08a077b42ed9\",\n                \"resourceVersion\": \"149\",\n                \"creationTimestamp\": \"2021-09-10T05:37:15Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-5dc785954d-xvz58\",\n                \"uid\": \"54912f58-a471-4e88-b176-672eeca87a88\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"656\",\n                \"fieldPath\": 
\"spec.containers{coredns}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container coredns\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-34-221.us-west-2.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-10T05:37:15Z\",\n            \"lastTimestamp\": \"2021-09-10T05:37:15Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-5dc785954d.16a35f373441c9c1\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"49a5f840-306b-474d-9fbf-6ae64688f9a4\",\n                \"resourceVersion\": \"67\",\n                \"creationTimestamp\": \"2021-09-10T05:36:20Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"ReplicaSet\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-5dc785954d\",\n                \"uid\": \"70c1350d-376c-499d-84d5-75cca4d1a973\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"407\"\n            },\n            \"reason\": \"SuccessfulCreate\",\n            \"message\": \"Created pod: coredns-5dc785954d-26bxp\",\n            \"source\": {\n                \"component\": \"replicaset-controller\"\n            },\n            \"firstTimestamp\": \"2021-09-10T05:36:20Z\",\n            \"lastTimestamp\": \"2021-09-10T05:36:20Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-5dc785954d.16a35f4357a3b2e2\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"0aea2f4f-719b-43a7-af1c-2cb45731aa3a\",\n                \"resourceVersion\": \"141\",\n                \"creationTimestamp\": \"2021-09-10T05:37:12Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"ReplicaSet\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-5dc785954d\",\n                \"uid\": \"70c1350d-376c-499d-84d5-75cca4d1a973\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"651\"\n            },\n            \"reason\": \"SuccessfulCreate\",\n            \"message\": \"Created pod: coredns-5dc785954d-xvz58\",\n            \"source\": {\n                \"component\": \"replicaset-controller\"\n            },\n            \"firstTimestamp\": \"2021-09-10T05:37:12Z\",\n            \"lastTimestamp\": \"2021-09-10T05:37:12Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-autoscaler-84d4cfd89c-d4f4h.16a35f37337d9219\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"af4d019c-4ac0-4d4d-99ea-ac811084f9b9\",\n                \"resourceVersion\": \"79\",\n                \"creationTimestamp\": \"2021-09-10T05:36:20Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": 
\"kube-system\",\n                \"name\": \"coredns-autoscaler-84d4cfd89c-d4f4h\",\n                \"uid\": \"373de23e-3285-407d-8de4-cf380f1178b1\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"442\"\n            },\n            \"reason\": \"FailedScheduling\",\n            \"message\": \"0/1 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2021-09-10T05:36:20Z\",\n            \"lastTimestamp\": \"2021-09-10T05:36:24Z\",\n            \"count\": 3,\n            \"type\": \"Warning\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-autoscaler-84d4cfd89c-d4f4h.16a35f3e54bd5d0c\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"6e91c541-9248-4985-9271-693b005b1bff\",\n                \"resourceVersion\": \"85\",\n                \"creationTimestamp\": \"2021-09-10T05:36:51Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-autoscaler-84d4cfd89c-d4f4h\",\n                \"uid\": \"373de23e-3285-407d-8de4-cf380f1178b1\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"445\"\n            },\n            \"reason\": \"FailedScheduling\",\n            \"message\": \"0/2 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2021-09-10T05:36:51Z\",\n            \"lastTimestamp\": \"2021-09-10T05:36:51Z\",\n            \"count\": 1,\n            \"type\": \"Warning\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-autoscaler-84d4cfd89c-d4f4h.16a35f4060f99174\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"c264742f-c52e-4109-ab44-feed9f488fa6\",\n                \"resourceVersion\": \"121\",\n                \"creationTimestamp\": \"2021-09-10T05:37:00Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-autoscaler-84d4cfd89c-d4f4h\",\n                \"uid\": \"373de23e-3285-407d-8de4-cf380f1178b1\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"544\"\n            },\n            \"reason\": \"FailedScheduling\",\n            \"message\": \"0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 4 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2021-09-10T05:37:00Z\",\n            \"lastTimestamp\": \"2021-09-10T05:37:00Z\",\n            \"count\": 1,\n            \"type\": \"Warning\",\n            
\"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-autoscaler-84d4cfd89c-d4f4h.16a35f42b5db6351\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"d4996811-6b26-4e68-a8d6-8445aefe056d\",\n                \"resourceVersion\": \"133\",\n                \"creationTimestamp\": \"2021-09-10T05:37:10Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-autoscaler-84d4cfd89c-d4f4h\",\n                \"uid\": \"373de23e-3285-407d-8de4-cf380f1178b1\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"592\"\n            },\n            \"reason\": \"Scheduled\",\n            \"message\": \"Successfully assigned kube-system/coredns-autoscaler-84d4cfd89c-d4f4h to ip-172-20-56-165.us-west-2.compute.internal\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2021-09-10T05:37:10Z\",\n            \"lastTimestamp\": \"2021-09-10T05:37:10Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-autoscaler-84d4cfd89c-d4f4h.16a35f42d63af81c\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"c9a04208-34a5-4e90-9ed4-9c8bf9ba12b5\",\n                \"resourceVersion\": \"134\",\n                \"creationTimestamp\": \"2021-09-10T05:37:10Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-autoscaler-84d4cfd89c-d4f4h\",\n                \"uid\": \"373de23e-3285-407d-8de4-cf380f1178b1\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"634\",\n                \"fieldPath\": \"spec.containers{autoscaler}\"\n            },\n            \"reason\": \"Pulling\",\n            \"message\": \"Pulling image \\\"k8s.gcr.io/cpa/cluster-proportional-autoscaler:1.8.4\\\"\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-56-165.us-west-2.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-10T05:37:10Z\",\n            \"lastTimestamp\": \"2021-09-10T05:37:10Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-autoscaler-84d4cfd89c-d4f4h.16a35f433806c7dd\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"9b75fc3b-4d13-4cfa-93a2-35d58df33eef\",\n                \"resourceVersion\": \"137\",\n                \"creationTimestamp\": \"2021-09-10T05:37:12Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-autoscaler-84d4cfd89c-d4f4h\",\n                \"uid\": \"373de23e-3285-407d-8de4-cf380f1178b1\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": 
\"634\",\n                \"fieldPath\": \"spec.containers{autoscaler}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Successfully pulled image \\\"k8s.gcr.io/cpa/cluster-proportional-autoscaler:1.8.4\\\" in 1.640731315s\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-56-165.us-west-2.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-10T05:37:12Z\",\n            \"lastTimestamp\": \"2021-09-10T05:37:12Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-autoscaler-84d4cfd89c-d4f4h.16a35f4341e9193e\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"59f64c56-2d1a-402f-8135-e5a6aa5bd36f\",\n                \"resourceVersion\": \"138\",\n                \"creationTimestamp\": \"2021-09-10T05:37:12Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-autoscaler-84d4cfd89c-d4f4h\",\n                \"uid\": \"373de23e-3285-407d-8de4-cf380f1178b1\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"634\",\n                \"fieldPath\": \"spec.containers{autoscaler}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container autoscaler\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-56-165.us-west-2.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-10T05:37:12Z\",\n            \"lastTimestamp\": \"2021-09-10T05:37:12Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-autoscaler-84d4cfd89c-d4f4h.16a35f4346d27ab6\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"fb4b00b9-2b59-4e91-a739-c2b7ed070bca\",\n                \"resourceVersion\": \"139\",\n                \"creationTimestamp\": \"2021-09-10T05:37:12Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-autoscaler-84d4cfd89c-d4f4h\",\n                \"uid\": \"373de23e-3285-407d-8de4-cf380f1178b1\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"634\",\n                \"fieldPath\": \"spec.containers{autoscaler}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container autoscaler\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-56-165.us-west-2.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-10T05:37:12Z\",\n            \"lastTimestamp\": \"2021-09-10T05:37:12Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": 
\"coredns-autoscaler-84d4cfd89c.16a35f373382fd53\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"dbe3c9bc-ad2d-4f19-bf4e-f14bd4e2f077\",\n                \"resourceVersion\": \"65\",\n                \"creationTimestamp\": \"2021-09-10T05:36:20Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"ReplicaSet\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-autoscaler-84d4cfd89c\",\n                \"uid\": \"8c918736-93aa-4b08-8b87-cf6549bee7e8\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"405\"\n            },\n            \"reason\": \"SuccessfulCreate\",\n            \"message\": \"Created pod: coredns-autoscaler-84d4cfd89c-d4f4h\",\n            \"source\": {\n                \"component\": \"replicaset-controller\"\n            },\n            \"firstTimestamp\": \"2021-09-10T05:36:20Z\",\n            \"lastTimestamp\": \"2021-09-10T05:36:20Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-autoscaler.16a35f371599b8c4\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"9b21238c-efc1-4094-b6ef-ce9dbb8d60cd\",\n                \"resourceVersion\": \"56\",\n                \"creationTimestamp\": \"2021-09-10T05:36:20Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Deployment\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-autoscaler\",\n                \"uid\": \"7e7358a6-bbd9-4e46-ab49-c6427461ae00\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"236\"\n            },\n            \"reason\": \"ScalingReplicaSet\",\n            \"message\": \"Scaled up replica set coredns-autoscaler-84d4cfd89c to 1\",\n            \"source\": {\n                \"component\": \"deployment-controller\"\n            },\n            \"firstTimestamp\": \"2021-09-10T05:36:20Z\",\n            \"lastTimestamp\": \"2021-09-10T05:36:20Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns.16a35f3715cd03e3\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"7edc1602-da86-4631-ab80-d2c381a9bcab\",\n                \"resourceVersion\": \"57\",\n                \"creationTimestamp\": \"2021-09-10T05:36:20Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Deployment\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns\",\n                \"uid\": \"2d2ff3dc-213d-4a86-ad72-41c6b5d59fd5\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"229\"\n            },\n            \"reason\": \"ScalingReplicaSet\",\n            \"message\": \"Scaled up replica set coredns-5dc785954d to 1\",\n            \"source\": {\n                \"component\": \"deployment-controller\"\n            },\n            \"firstTimestamp\": \"2021-09-10T05:36:20Z\",\n            \"lastTimestamp\": \"2021-09-10T05:36:20Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            
\"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns.16a35f4356dee295\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"ec8cd9d0-580c-4358-843a-35014687fe40\",\n                \"resourceVersion\": \"140\",\n                \"creationTimestamp\": \"2021-09-10T05:37:12Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Deployment\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns\",\n                \"uid\": \"2d2ff3dc-213d-4a86-ad72-41c6b5d59fd5\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"650\"\n            },\n            \"reason\": \"ScalingReplicaSet\",\n            \"message\": \"Scaled up replica set coredns-5dc785954d to 2\",\n            \"source\": {\n                \"component\": \"deployment-controller\"\n            },\n            \"firstTimestamp\": \"2021-09-10T05:37:12Z\",\n            \"lastTimestamp\": \"2021-09-10T05:37:12Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"dns-controller-59b7d7865d-82ctf.16a35f373490d858\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"d9d94a41-49eb-4235-afa2-f991ad4aca2c\",\n                \"resourceVersion\": \"68\",\n                \"creationTimestamp\": \"2021-09-10T05:36:20Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"dns-controller-59b7d7865d-82ctf\",\n                \"uid\": \"8b55e143-cf3e-4b0e-9b5f-46b70546a9ee\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"443\"\n            },\n            \"reason\": \"Scheduled\",\n            \"message\": \"Successfully assigned kube-system/dns-controller-59b7d7865d-82ctf to ip-172-20-37-129.us-west-2.compute.internal\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2021-09-10T05:36:20Z\",\n            \"lastTimestamp\": \"2021-09-10T05:36:20Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"dns-controller-59b7d7865d-82ctf.16a35f3750ea8915\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"4242d88d-5042-448a-88d4-9af7ecbd8b54\",\n                \"resourceVersion\": \"70\",\n                \"creationTimestamp\": \"2021-09-10T05:36:21Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"dns-controller-59b7d7865d-82ctf\",\n                \"uid\": \"8b55e143-cf3e-4b0e-9b5f-46b70546a9ee\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"447\",\n                \"fieldPath\": \"spec.containers{dns-controller}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image 
\\\"k8s.gcr.io/kops/dns-controller:1.22.0-beta.1\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-37-129.us-west-2.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-10T05:36:21Z\",\n            \"lastTimestamp\": \"2021-09-10T05:36:21Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"dns-controller-59b7d7865d-82ctf.16a35f3754517a42\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"d04c17c5-e82e-4fa8-9d69-bab300a224d4\",\n                \"resourceVersion\": \"71\",\n                \"creationTimestamp\": \"2021-09-10T05:36:21Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"dns-controller-59b7d7865d-82ctf\",\n                \"uid\": \"8b55e143-cf3e-4b0e-9b5f-46b70546a9ee\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"447\",\n                \"fieldPath\": \"spec.containers{dns-controller}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container dns-controller\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-37-129.us-west-2.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-10T05:36:21Z\",\n            \"lastTimestamp\": \"2021-09-10T05:36:21Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"dns-controller-59b7d7865d-82ctf.16a35f37597ea6c7\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"fb47fc1e-2ab8-4202-80b8-e39fc07d63f7\",\n                \"resourceVersion\": \"72\",\n                \"creationTimestamp\": \"2021-09-10T05:36:21Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"dns-controller-59b7d7865d-82ctf\",\n                \"uid\": \"8b55e143-cf3e-4b0e-9b5f-46b70546a9ee\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"447\",\n                \"fieldPath\": \"spec.containers{dns-controller}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container dns-controller\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-37-129.us-west-2.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-10T05:36:21Z\",\n            \"lastTimestamp\": \"2021-09-10T05:36:21Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"dns-controller-59b7d7865d.16a35f373447db02\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"fda7ed40-b489-4191-acf7-7a3963873332\",\n                \"resourceVersion\": 
\"69\",\n                \"creationTimestamp\": \"2021-09-10T05:36:20Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"ReplicaSet\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"dns-controller-59b7d7865d\",\n                \"uid\": \"33c6dea9-8d7b-44c9-919d-ffb6457a3fd7\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"406\"\n            },\n            \"reason\": \"SuccessfulCreate\",\n            \"message\": \"Created pod: dns-controller-59b7d7865d-82ctf\",\n            \"source\": {\n                \"component\": \"replicaset-controller\"\n            },\n            \"firstTimestamp\": \"2021-09-10T05:36:20Z\",\n            \"lastTimestamp\": \"2021-09-10T05:36:20Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"dns-controller.16a35f3715e6a7e8\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"378888c7-c5ee-4855-b10a-316db3d141cf\",\n                \"resourceVersion\": \"58\",\n                \"creationTimestamp\": \"2021-09-10T05:36:20Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Deployment\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"dns-controller\",\n                \"uid\": \"705376f9-a7f6-4036-b8fe-051f3c945ab4\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"242\"\n            },\n            \"reason\": \"ScalingReplicaSet\",\n            \"message\": \"Scaled up replica set dns-controller-59b7d7865d to 1\",\n            \"source\": {\n                \"component\": \"deployment-controller\"\n            },\n            \"firstTimestamp\": \"2021-09-10T05:36:20Z\",\n            \"lastTimestamp\": \"2021-09-10T05:36:20Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"etcd-manager-events-ip-172-20-37-129.us-west-2.compute.internal.16a35f26836d2c71\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"ba8fa5b1-7709-45c0-bc3c-c9d73c958f7c\",\n                \"resourceVersion\": \"21\",\n                \"creationTimestamp\": \"2021-09-10T05:35:53Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"etcd-manager-events-ip-172-20-37-129.us-west-2.compute.internal\",\n                \"uid\": \"4038fc0cd1fc1c3e0997286dc4996950\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{etcd-manager}\"\n            },\n            \"reason\": \"Pulling\",\n            \"message\": \"Pulling image \\\"k8s.gcr.io/etcdadm/etcd-manager:3.0.20210707\\\"\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-37-129.us-west-2.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-10T05:35:09Z\",\n            \"lastTimestamp\": \"2021-09-10T05:35:09Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            
\"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"etcd-manager-events-ip-172-20-37-129.us-west-2.compute.internal.16a35f29f4186d34\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"585b23d7-ce36-4c52-9f60-5ea259fd1cc3\",\n                \"resourceVersion\": \"34\",\n                \"creationTimestamp\": \"2021-09-10T05:35:56Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"etcd-manager-events-ip-172-20-37-129.us-west-2.compute.internal\",\n                \"uid\": \"4038fc0cd1fc1c3e0997286dc4996950\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{etcd-manager}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Successfully pulled image \\\"k8s.gcr.io/etcdadm/etcd-manager:3.0.20210707\\\" in 14.775155341s\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-37-129.us-west-2.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-10T05:35:23Z\",\n            \"lastTimestamp\": \"2021-09-10T05:35:23Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"etcd-manager-events-ip-172-20-37-129.us-west-2.compute.internal.16a35f2a19f2c177\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"9aeb7510-a65d-420f-a1d9-e20092ebce59\",\n                \"resourceVersion\": \"36\",\n                \"creationTimestamp\": \"2021-09-10T05:35:56Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"etcd-manager-events-ip-172-20-37-129.us-west-2.compute.internal\",\n                \"uid\": \"4038fc0cd1fc1c3e0997286dc4996950\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{etcd-manager}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container etcd-manager\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-37-129.us-west-2.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-10T05:35:24Z\",\n            \"lastTimestamp\": \"2021-09-10T05:35:24Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"etcd-manager-events-ip-172-20-37-129.us-west-2.compute.internal.16a35f2a21d691e9\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"12ff409d-99ef-4765-9d03-ae99e0a6eccd\",\n                \"resourceVersion\": \"38\",\n                \"creationTimestamp\": \"2021-09-10T05:35:56Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"etcd-manager-events-ip-172-20-37-129.us-west-2.compute.internal\",\n                \"uid\": 
\"4038fc0cd1fc1c3e0997286dc4996950\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{etcd-manager}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container etcd-manager\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-37-129.us-west-2.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-10T05:35:24Z\",\n            \"lastTimestamp\": \"2021-09-10T05:35:24Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"etcd-manager-main-ip-172-20-37-129.us-west-2.compute.internal.16a35f268be81848\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"99783b0a-0bc3-4f8d-979e-5a7a9fa9dc5f\",\n                \"resourceVersion\": \"23\",\n                \"creationTimestamp\": \"2021-09-10T05:35:53Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"etcd-manager-main-ip-172-20-37-129.us-west-2.compute.internal\",\n                \"uid\": \"5e2779cbe16febbdbd0382471494f20a\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{etcd-manager}\"\n            },\n            \"reason\": \"Pulling\",\n            \"message\": \"Pulling image \\\"k8s.gcr.io/etcdadm/etcd-manager:3.0.20210707\\\"\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-37-129.us-west-2.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-10T05:35:09Z\",\n            \"lastTimestamp\": \"2021-09-10T05:35:09Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"etcd-manager-main-ip-172-20-37-129.us-west-2.compute.internal.16a35f2a2f6e6ccb\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"3e95b6bd-128f-4cc2-bbd9-ec1cb684cdbd\",\n                \"resourceVersion\": \"40\",\n                \"creationTimestamp\": \"2021-09-10T05:35:57Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"etcd-manager-main-ip-172-20-37-129.us-west-2.compute.internal\",\n                \"uid\": \"5e2779cbe16febbdbd0382471494f20a\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{etcd-manager}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Successfully pulled image \\\"k8s.gcr.io/etcdadm/etcd-manager:3.0.20210707\\\" in 15.628377158s\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-37-129.us-west-2.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-10T05:35:24Z\",\n            \"lastTimestamp\": \"2021-09-10T05:35:24Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": 
\"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"etcd-manager-main-ip-172-20-37-129.us-west-2.compute.internal.16a35f2a339ad0e1\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"0e88f75f-6a62-45a9-a4d8-a30b6e455b82\",\n                \"resourceVersion\": \"41\",\n                \"creationTimestamp\": \"2021-09-10T05:35:57Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"etcd-manager-main-ip-172-20-37-129.us-west-2.compute.internal\",\n                \"uid\": \"5e2779cbe16febbdbd0382471494f20a\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{etcd-manager}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container etcd-manager\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-37-129.us-west-2.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-10T05:35:24Z\",\n            \"lastTimestamp\": \"2021-09-10T05:35:24Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"etcd-manager-main-ip-172-20-37-129.us-west-2.compute.internal.16a35f2a3cae4318\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"53ce1712-0231-4f03-8639-ab51e2d9e7ce\",\n                \"resourceVersion\": \"42\",\n                \"creationTimestamp\": \"2021-09-10T05:35:57Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"etcd-manager-main-ip-172-20-37-129.us-west-2.compute.internal\",\n                \"uid\": \"5e2779cbe16febbdbd0382471494f20a\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{etcd-manager}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container etcd-manager\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-37-129.us-west-2.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-10T05:35:25Z\",\n            \"lastTimestamp\": \"2021-09-10T05:35:25Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kops-controller-js47d.16a35f3722271235\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"8cd381e6-54c9-4565-a7ce-f446e6c96476\",\n                \"resourceVersion\": \"62\",\n                \"creationTimestamp\": \"2021-09-10T05:36:20Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kops-controller-js47d\",\n                \"uid\": \"91862b5f-e915-4710-9eb4-38aa351853ca\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"429\"\n            },\n            \"reason\": \"Scheduled\",\n            \"message\": \"Successfully assigned 
kube-system/kops-controller-js47d to ip-172-20-37-129.us-west-2.compute.internal\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2021-09-10T05:36:20Z\",\n            \"lastTimestamp\": \"2021-09-10T05:36:20Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kops-controller-js47d.16a35f37306a324b\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"e7004e97-4442-4c55-9019-2c3011b42786\",\n                \"resourceVersion\": \"63\",\n                \"creationTimestamp\": \"2021-09-10T05:36:20Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kops-controller-js47d\",\n                \"uid\": \"91862b5f-e915-4710-9eb4-38aa351853ca\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"432\"\n            },\n            \"reason\": \"FailedMount\",\n            \"message\": \"MountVolume.SetUp failed for volume \\\"kube-api-access-x8kxj\\\" : configmap \\\"kube-root-ca.crt\\\" not found\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-37-129.us-west-2.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-10T05:36:20Z\",\n            \"lastTimestamp\": \"2021-09-10T05:36:20Z\",\n            \"count\": 1,\n            \"type\": \"Warning\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kops-controller-js47d.16a35f37618896b2\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"ea1cb997-ac97-4997-84b7-10fed81ed0ad\",\n                \"resourceVersion\": \"73\",\n                \"creationTimestamp\": \"2021-09-10T05:36:21Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kops-controller-js47d\",\n                \"uid\": \"91862b5f-e915-4710-9eb4-38aa351853ca\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"432\",\n                \"fieldPath\": \"spec.containers{kops-controller}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"k8s.gcr.io/kops/kops-controller:1.22.0-beta.1\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-37-129.us-west-2.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-10T05:36:21Z\",\n            \"lastTimestamp\": \"2021-09-10T05:36:21Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kops-controller-js47d.16a35f3764778e17\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"7d00d696-2539-4372-85e3-c2ab0d27a1a0\",\n                \"resourceVersion\": \"74\",\n                
\"creationTimestamp\": \"2021-09-10T05:36:21Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kops-controller-js47d\",\n                \"uid\": \"91862b5f-e915-4710-9eb4-38aa351853ca\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"432\",\n                \"fieldPath\": \"spec.containers{kops-controller}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container kops-controller\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-37-129.us-west-2.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-10T05:36:21Z\",\n            \"lastTimestamp\": \"2021-09-10T05:36:21Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kops-controller-js47d.16a35f37695dd40d\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"f6fcb161-b63e-4510-b61e-bf6826ebee51\",\n                \"resourceVersion\": \"75\",\n                \"creationTimestamp\": \"2021-09-10T05:36:21Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kops-controller-js47d\",\n                \"uid\": \"91862b5f-e915-4710-9eb4-38aa351853ca\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"432\",\n                \"fieldPath\": \"spec.containers{kops-controller}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container kops-controller\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-37-129.us-west-2.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-10T05:36:21Z\",\n            \"lastTimestamp\": \"2021-09-10T05:36:21Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kops-controller-leader.16a35f378fa1906e\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"6eefbdd4-aa50-4503-98d5-5d70958375d9\",\n                \"resourceVersion\": \"78\",\n                \"creationTimestamp\": \"2021-09-10T05:36:22Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"ConfigMap\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kops-controller-leader\",\n                \"uid\": \"5a5cc862-d5f5-4a69-bcf7-d36ad7fba4fe\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"466\"\n            },\n            \"reason\": \"LeaderElection\",\n            \"message\": \"ip-172-20-37-129_a8a12756-c243-4704-b56b-3decca0574c7 became leader\",\n            \"source\": {\n                \"component\": \"ip-172-20-37-129_a8a12756-c243-4704-b56b-3decca0574c7\"\n            },\n            \"firstTimestamp\": \"2021-09-10T05:36:22Z\",\n            \"lastTimestamp\": \"2021-09-10T05:36:22Z\",\n            \"count\": 1,\n            \"type\": 
\"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kops-controller.16a35f37219dc9af\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"0ea59d77-b17a-4d81-848f-1286c3fc570b\",\n                \"resourceVersion\": \"61\",\n                \"creationTimestamp\": \"2021-09-10T05:36:20Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"DaemonSet\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kops-controller\",\n                \"uid\": \"ddabc270-f2b0-4841-9ce7-4787619aa879\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"255\"\n            },\n            \"reason\": \"SuccessfulCreate\",\n            \"message\": \"Created pod: kops-controller-js47d\",\n            \"source\": {\n                \"component\": \"daemonset-controller\"\n            },\n            \"firstTimestamp\": \"2021-09-10T05:36:20Z\",\n            \"lastTimestamp\": \"2021-09-10T05:36:20Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-apiserver-ip-172-20-37-129.us-west-2.compute.internal.16a35f26636abbdb\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"0c605891-39b7-4182-98fb-17aad18684b2\",\n                \"resourceVersion\": \"19\",\n                \"creationTimestamp\": \"2021-09-10T05:35:53Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-apiserver-ip-172-20-37-129.us-west-2.compute.internal\",\n                \"uid\": \"543c03fb9e1b028c189eaf78f27df9c9\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-apiserver}\"\n            },\n            \"reason\": \"Pulling\",\n            \"message\": \"Pulling image \\\"k8s.gcr.io/kube-apiserver-amd64:v1.21.4\\\"\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-37-129.us-west-2.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-10T05:35:08Z\",\n            \"lastTimestamp\": \"2021-09-10T05:35:08Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-apiserver-ip-172-20-37-129.us-west-2.compute.internal.16a35f27b223072f\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"26cdd0ad-4767-4e4a-b49e-adf15ba5a6ad\",\n                \"resourceVersion\": \"24\",\n                \"creationTimestamp\": \"2021-09-10T05:35:54Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-apiserver-ip-172-20-37-129.us-west-2.compute.internal\",\n                \"uid\": \"543c03fb9e1b028c189eaf78f27df9c9\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-apiserver}\"\n            },\n    
        \"reason\": \"Pulled\",\n            \"message\": \"Successfully pulled image \\\"k8s.gcr.io/kube-apiserver-amd64:v1.21.4\\\" in 5.61565329s\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-37-129.us-west-2.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-10T05:35:14Z\",\n            \"lastTimestamp\": \"2021-09-10T05:35:14Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-apiserver-ip-172-20-37-129.us-west-2.compute.internal.16a35f283b75241b\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"95dd624e-adf6-4d9d-9e20-ec04e6253cdd\",\n                \"resourceVersion\": \"47\",\n                \"creationTimestamp\": \"2021-09-10T05:35:54Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-apiserver-ip-172-20-37-129.us-west-2.compute.internal\",\n                \"uid\": \"543c03fb9e1b028c189eaf78f27df9c9\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-apiserver}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container kube-apiserver\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-37-129.us-west-2.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-10T05:35:16Z\",\n            \"lastTimestamp\": \"2021-09-10T05:35:38Z\",\n            \"count\": 2,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-apiserver-ip-172-20-37-129.us-west-2.compute.internal.16a35f285ee4cb7d\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"35b82d04-3a13-4aed-b25f-abd9a4b670a3\",\n                \"resourceVersion\": \"48\",\n                \"creationTimestamp\": \"2021-09-10T05:35:55Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-apiserver-ip-172-20-37-129.us-west-2.compute.internal\",\n                \"uid\": \"543c03fb9e1b028c189eaf78f27df9c9\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-apiserver}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container kube-apiserver\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-37-129.us-west-2.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-10T05:35:17Z\",\n            \"lastTimestamp\": \"2021-09-10T05:35:38Z\",\n            \"count\": 2,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-apiserver-ip-172-20-37-129.us-west-2.compute.internal.16a35f2869260495\",\n                \"namespace\": \"kube-system\",\n 
               \"uid\": \"deed5f1d-1fbb-4a70-8463-25787a1a5e61\",\n                \"resourceVersion\": \"31\",\n                \"creationTimestamp\": \"2021-09-10T05:35:55Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-apiserver-ip-172-20-37-129.us-west-2.compute.internal\",\n                \"uid\": \"543c03fb9e1b028c189eaf78f27df9c9\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{healthcheck}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"k8s.gcr.io/kops/kube-apiserver-healthcheck:1.22.0-beta.1\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-37-129.us-west-2.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-10T05:35:17Z\",\n            \"lastTimestamp\": \"2021-09-10T05:35:17Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-apiserver-ip-172-20-37-129.us-west-2.compute.internal.16a35f28ba1b7006\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"3a400c9e-7cb9-4656-bfe4-b0f9df605e05\",\n                \"resourceVersion\": \"32\",\n                \"creationTimestamp\": \"2021-09-10T05:35:55Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-apiserver-ip-172-20-37-129.us-west-2.compute.internal\",\n                \"uid\": \"543c03fb9e1b028c189eaf78f27df9c9\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{healthcheck}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container healthcheck\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-37-129.us-west-2.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-10T05:35:18Z\",\n            \"lastTimestamp\": \"2021-09-10T05:35:18Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-apiserver-ip-172-20-37-129.us-west-2.compute.internal.16a35f28c12908f0\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"633b4f5e-a1e7-4776-9cd7-8f8dc3a202e4\",\n                \"resourceVersion\": \"33\",\n                \"creationTimestamp\": \"2021-09-10T05:35:55Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-apiserver-ip-172-20-37-129.us-west-2.compute.internal\",\n                \"uid\": \"543c03fb9e1b028c189eaf78f27df9c9\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{healthcheck}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container healthcheck\",\n            \"source\": {\n                \"component\": 
\"kubelet\",\n                \"host\": \"ip-172-20-37-129.us-west-2.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-10T05:35:18Z\",\n            \"lastTimestamp\": \"2021-09-10T05:35:18Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-apiserver-ip-172-20-37-129.us-west-2.compute.internal.16a35f2d444163e1\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"82278a4b-9afa-4fd1-9487-9c1d03ba7289\",\n                \"resourceVersion\": \"46\",\n                \"creationTimestamp\": \"2021-09-10T05:35:58Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-apiserver-ip-172-20-37-129.us-west-2.compute.internal\",\n                \"uid\": \"543c03fb9e1b028c189eaf78f27df9c9\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-apiserver}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"k8s.gcr.io/kube-apiserver-amd64:v1.21.4\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-37-129.us-west-2.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-10T05:35:38Z\",\n            \"lastTimestamp\": \"2021-09-10T05:35:38Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-controller-manager-ip-172-20-37-129.us-west-2.compute.internal.16a35f26836da8d9\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"1f7ec592-0fd6-4706-b0c4-b57a9c4387b4\",\n                \"resourceVersion\": \"22\",\n                \"creationTimestamp\": \"2021-09-10T05:35:53Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-controller-manager-ip-172-20-37-129.us-west-2.compute.internal\",\n                \"uid\": \"8393b35378800da1b1f1ee218f8046ac\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-controller-manager}\"\n            },\n            \"reason\": \"Pulling\",\n            \"message\": \"Pulling image \\\"k8s.gcr.io/kube-controller-manager-amd64:v1.21.4\\\"\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-37-129.us-west-2.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-10T05:35:09Z\",\n            \"lastTimestamp\": \"2021-09-10T05:35:09Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-controller-manager-ip-172-20-37-129.us-west-2.compute.internal.16a35f2a17a48b22\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"c460aaed-85ba-40a8-acc1-9dd81ba28e28\",\n             
   \"resourceVersion\": \"35\",\n                \"creationTimestamp\": \"2021-09-10T05:35:56Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-controller-manager-ip-172-20-37-129.us-west-2.compute.internal\",\n                \"uid\": \"8393b35378800da1b1f1ee218f8046ac\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-controller-manager}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Successfully pulled image \\\"k8s.gcr.io/kube-controller-manager-amd64:v1.21.4\\\" in 15.371511874s\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-37-129.us-west-2.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-10T05:35:24Z\",\n            \"lastTimestamp\": \"2021-09-10T05:35:24Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-controller-manager-ip-172-20-37-129.us-west-2.compute.internal.16a35f2a1b74bdbe\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"934215c9-63a2-44d6-baf3-f30d7c092493\",\n                \"resourceVersion\": \"52\",\n                \"creationTimestamp\": \"2021-09-10T05:35:56Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-controller-manager-ip-172-20-37-129.us-west-2.compute.internal\",\n                \"uid\": \"8393b35378800da1b1f1ee218f8046ac\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-controller-manager}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container kube-controller-manager\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-37-129.us-west-2.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-10T05:35:24Z\",\n            \"lastTimestamp\": \"2021-09-10T05:36:05Z\",\n            \"count\": 3,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-controller-manager-ip-172-20-37-129.us-west-2.compute.internal.16a35f2a21faa278\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"5f7f2576-66e6-454d-a805-8503a169758c\",\n                \"resourceVersion\": \"53\",\n                \"creationTimestamp\": \"2021-09-10T05:35:57Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-controller-manager-ip-172-20-37-129.us-west-2.compute.internal\",\n                \"uid\": \"8393b35378800da1b1f1ee218f8046ac\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-controller-manager}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container kube-controller-manager\",\n            \"source\": {\n                
\"component\": \"kubelet\",\n                \"host\": \"ip-172-20-37-129.us-west-2.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-10T05:35:24Z\",\n            \"lastTimestamp\": \"2021-09-10T05:36:05Z\",\n            \"count\": 3,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-controller-manager-ip-172-20-37-129.us-west-2.compute.internal.16a35f2cccb42d76\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"5465da6e-74e8-4f1b-a012-4cb20c5c5d76\",\n                \"resourceVersion\": \"51\",\n                \"creationTimestamp\": \"2021-09-10T05:35:57Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-controller-manager-ip-172-20-37-129.us-west-2.compute.internal\",\n                \"uid\": \"8393b35378800da1b1f1ee218f8046ac\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-controller-manager}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"k8s.gcr.io/kube-controller-manager-amd64:v1.21.4\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-37-129.us-west-2.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-10T05:35:36Z\",\n            \"lastTimestamp\": \"2021-09-10T05:36:05Z\",\n            \"count\": 2,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-controller-manager-ip-172-20-37-129.us-west-2.compute.internal.16a35f2fd58bf20f\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"d6732502-bc4d-4fd7-84f5-856384ec30c0\",\n                \"resourceVersion\": \"50\",\n                \"creationTimestamp\": \"2021-09-10T05:35:59Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-controller-manager-ip-172-20-37-129.us-west-2.compute.internal\",\n                \"uid\": \"8393b35378800da1b1f1ee218f8046ac\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-controller-manager}\"\n            },\n            \"reason\": \"BackOff\",\n            \"message\": \"Back-off restarting failed container\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-37-129.us-west-2.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-10T05:35:49Z\",\n            \"lastTimestamp\": \"2021-09-10T05:35:50Z\",\n            \"count\": 2,\n            \"type\": \"Warning\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-controller-manager.16a35f33d1d4b792\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"aff2d265-b11d-4207-a8e9-8dd3b623be55\",\n                \"resourceVersion\": 
\"54\",\n                \"creationTimestamp\": \"2021-09-10T05:36:06Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Lease\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-controller-manager\",\n                \"uid\": \"6c44c2d5-6537-4fbc-acef-2f7b6364dbf7\",\n                \"apiVersion\": \"coordination.k8s.io/v1\",\n                \"resourceVersion\": \"270\"\n            },\n            \"reason\": \"LeaderElection\",\n            \"message\": \"ip-172-20-37-129_bea1a36d-7987-4f09-83ef-b29629561118 became leader\",\n            \"source\": {\n                \"component\": \"kube-controller-manager\"\n            },\n            \"firstTimestamp\": \"2021-09-10T05:36:06Z\",\n            \"lastTimestamp\": \"2021-09-10T05:36:06Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-dns.16a35f3700075693\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"2ab7f448-b68a-425c-ba7d-4fdf804db24f\",\n                \"resourceVersion\": \"60\",\n                \"creationTimestamp\": \"2021-09-10T05:36:20Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"PodDisruptionBudget\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-dns\",\n                \"uid\": \"4e95dc17-9ece-4475-9989-5c52a545a827\",\n                \"apiVersion\": \"policy/v1\",\n                \"resourceVersion\": \"232\"\n            },\n            \"reason\": \"NoPods\",\n            \"message\": \"No matching pods found\",\n            \"source\": {\n                \"component\": \"controllermanager\"\n            },\n            \"firstTimestamp\": \"2021-09-10T05:36:19Z\",\n            \"lastTimestamp\": \"2021-09-10T05:36:20Z\",\n            \"count\": 2,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-ip-172-20-34-221.us-west-2.compute.internal.16a35f400dc3a011\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"517e230e-9e18-4751-a719-67893559ba8a\",\n                \"resourceVersion\": \"106\",\n                \"creationTimestamp\": \"2021-09-10T05:36:58Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-ip-172-20-34-221.us-west-2.compute.internal\",\n                \"uid\": \"5157d36fb4fd5ea7e1313150e7c96388\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"k8s.gcr.io/kube-proxy-amd64:v1.21.4\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-34-221.us-west-2.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-10T05:36:58Z\",\n            \"lastTimestamp\": \"2021-09-10T05:36:58Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            
\"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-ip-172-20-34-221.us-west-2.compute.internal.16a35f401261bf4b\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"dd6604a8-d6ad-432b-9457-75fbbd8d9799\",\n                \"resourceVersion\": \"107\",\n                \"creationTimestamp\": \"2021-09-10T05:36:58Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-ip-172-20-34-221.us-west-2.compute.internal\",\n                \"uid\": \"5157d36fb4fd5ea7e1313150e7c96388\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container kube-proxy\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-34-221.us-west-2.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-10T05:36:58Z\",\n            \"lastTimestamp\": \"2021-09-10T05:36:58Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-ip-172-20-34-221.us-west-2.compute.internal.16a35f401f696ae4\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"1fed435f-28d0-48d7-8b68-55533e7097be\",\n                \"resourceVersion\": \"110\",\n                \"creationTimestamp\": \"2021-09-10T05:36:59Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-ip-172-20-34-221.us-west-2.compute.internal\",\n                \"uid\": \"5157d36fb4fd5ea7e1313150e7c96388\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container kube-proxy\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-34-221.us-west-2.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-10T05:36:59Z\",\n            \"lastTimestamp\": \"2021-09-10T05:36:59Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-ip-172-20-37-129.us-west-2.compute.internal.16a35f265f0a66a7\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"66c578ca-518f-4f44-8271-14acee2e2518\",\n                \"resourceVersion\": \"18\",\n                \"creationTimestamp\": \"2021-09-10T05:35:52Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-ip-172-20-37-129.us-west-2.compute.internal\",\n                \"uid\": \"366a695e3ffedf02807385d36463b1f3\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n     
       },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"k8s.gcr.io/kube-proxy-amd64:v1.21.4\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-37-129.us-west-2.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-10T05:35:08Z\",\n            \"lastTimestamp\": \"2021-09-10T05:35:08Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-ip-172-20-37-129.us-west-2.compute.internal.16a35f283ae3edde\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"c442f944-6a58-43f4-9cf0-a1d140f426ba\",\n                \"resourceVersion\": \"25\",\n                \"creationTimestamp\": \"2021-09-10T05:35:54Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-ip-172-20-37-129.us-west-2.compute.internal\",\n                \"uid\": \"366a695e3ffedf02807385d36463b1f3\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container kube-proxy\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-37-129.us-west-2.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-10T05:35:16Z\",\n            \"lastTimestamp\": \"2021-09-10T05:35:16Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-ip-172-20-37-129.us-west-2.compute.internal.16a35f285154aac8\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"dd0f411d-4441-443e-bf80-8f2a50b3b82c\",\n                \"resourceVersion\": \"28\",\n                \"creationTimestamp\": \"2021-09-10T05:35:54Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-ip-172-20-37-129.us-west-2.compute.internal\",\n                \"uid\": \"366a695e3ffedf02807385d36463b1f3\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container kube-proxy\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-37-129.us-west-2.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-10T05:35:16Z\",\n            \"lastTimestamp\": \"2021-09-10T05:35:16Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-ip-172-20-37-76.us-west-2.compute.internal.16a35f3ec87f84d8\",\n                \"namespace\": \"kube-system\",\n                \"uid\": 
\"41399007-6123-4195-92f7-27b0b6a77cc8\",\n                \"resourceVersion\": \"89\",\n                \"creationTimestamp\": \"2021-09-10T05:36:53Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-ip-172-20-37-76.us-west-2.compute.internal\",\n                \"uid\": \"1521b3575ab46785b71234cf5afcfb34\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"k8s.gcr.io/kube-proxy-amd64:v1.21.4\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-37-76.us-west-2.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-10T05:36:53Z\",\n            \"lastTimestamp\": \"2021-09-10T05:36:53Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-ip-172-20-37-76.us-west-2.compute.internal.16a35f3ece9105bf\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"66db888a-c558-4f23-9f4d-b7769fc9e571\",\n                \"resourceVersion\": \"90\",\n                \"creationTimestamp\": \"2021-09-10T05:36:53Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-ip-172-20-37-76.us-west-2.compute.internal\",\n                \"uid\": \"1521b3575ab46785b71234cf5afcfb34\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container kube-proxy\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-37-76.us-west-2.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-10T05:36:53Z\",\n            \"lastTimestamp\": \"2021-09-10T05:36:53Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-ip-172-20-37-76.us-west-2.compute.internal.16a35f3ed489c77d\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"bdae872d-9936-4ce7-89a7-5fdb2b92c62e\",\n                \"resourceVersion\": \"91\",\n                \"creationTimestamp\": \"2021-09-10T05:36:53Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-ip-172-20-37-76.us-west-2.compute.internal\",\n                \"uid\": \"1521b3575ab46785b71234cf5afcfb34\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container kube-proxy\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": 
\"ip-172-20-37-76.us-west-2.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-10T05:36:53Z\",\n            \"lastTimestamp\": \"2021-09-10T05:36:53Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-ip-172-20-38-104.us-west-2.compute.internal.16a35f40c23874dc\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"8a670769-25af-4315-8b72-77c4f5ab5ebf\",\n                \"resourceVersion\": \"126\",\n                \"creationTimestamp\": \"2021-09-10T05:37:01Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-ip-172-20-38-104.us-west-2.compute.internal\",\n                \"uid\": \"9431b69ef3ca963b1d7e8e1626ff4504\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"k8s.gcr.io/kube-proxy-amd64:v1.21.4\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-38-104.us-west-2.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-10T05:37:01Z\",\n            \"lastTimestamp\": \"2021-09-10T05:37:01Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-ip-172-20-38-104.us-west-2.compute.internal.16a35f40c7b5b794\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"88fbd1ba-2f67-4475-8a21-82588750c2c9\",\n                \"resourceVersion\": \"127\",\n                \"creationTimestamp\": \"2021-09-10T05:37:01Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-ip-172-20-38-104.us-west-2.compute.internal\",\n                \"uid\": \"9431b69ef3ca963b1d7e8e1626ff4504\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container kube-proxy\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-38-104.us-west-2.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-10T05:37:01Z\",\n            \"lastTimestamp\": \"2021-09-10T05:37:01Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-ip-172-20-38-104.us-west-2.compute.internal.16a35f40cd2d6ff7\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"e7b96de2-8e6f-4e67-b61c-02e770c43c45\",\n                \"resourceVersion\": \"128\",\n                \"creationTimestamp\": \"2021-09-10T05:37:01Z\"\n            },\n            \"involvedObject\": 
{\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-ip-172-20-38-104.us-west-2.compute.internal\",\n                \"uid\": \"9431b69ef3ca963b1d7e8e1626ff4504\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container kube-proxy\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-38-104.us-west-2.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-10T05:37:01Z\",\n            \"lastTimestamp\": \"2021-09-10T05:37:01Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-ip-172-20-56-165.us-west-2.compute.internal.16a35f40153ea979\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"dde1aec4-997e-48b3-bae1-8aad53786289\",\n                \"resourceVersion\": \"108\",\n                \"creationTimestamp\": \"2021-09-10T05:36:58Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-ip-172-20-56-165.us-west-2.compute.internal\",\n                \"uid\": \"6cdecdbf2da30216f600379c43dbec48\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"k8s.gcr.io/kube-proxy-amd64:v1.21.4\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-56-165.us-west-2.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-10T05:36:58Z\",\n            \"lastTimestamp\": \"2021-09-10T05:36:58Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-ip-172-20-56-165.us-west-2.compute.internal.16a35f401a30af9a\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"b46a3f39-345e-415c-af92-0bceb03a43fa\",\n                \"resourceVersion\": \"109\",\n                \"creationTimestamp\": \"2021-09-10T05:36:58Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-ip-172-20-56-165.us-west-2.compute.internal\",\n                \"uid\": \"6cdecdbf2da30216f600379c43dbec48\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container kube-proxy\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-56-165.us-west-2.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-10T05:36:58Z\",\n            \"lastTimestamp\": \"2021-09-10T05:36:58Z\",\n            \"count\": 1,\n            \"type\": 
\"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-ip-172-20-56-165.us-west-2.compute.internal.16a35f401fe1ce3a\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"f4315107-ec08-42ed-8359-0f3e91f812a3\",\n                \"resourceVersion\": \"111\",\n                \"creationTimestamp\": \"2021-09-10T05:36:59Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-ip-172-20-56-165.us-west-2.compute.internal\",\n                \"uid\": \"6cdecdbf2da30216f600379c43dbec48\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container kube-proxy\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-56-165.us-west-2.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-10T05:36:59Z\",\n            \"lastTimestamp\": \"2021-09-10T05:36:59Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-scheduler-ip-172-20-37-129.us-west-2.compute.internal.16a35f2665e4d23a\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"a05bb25a-ea1d-4b86-a640-bcda21e4ae65\",\n                \"resourceVersion\": \"20\",\n                \"creationTimestamp\": \"2021-09-10T05:35:53Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-scheduler-ip-172-20-37-129.us-west-2.compute.internal\",\n                \"uid\": \"6c8477f7e2b4b40106c022734372d908\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-scheduler}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"k8s.gcr.io/kube-scheduler-amd64:v1.21.4\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-37-129.us-west-2.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-10T05:35:08Z\",\n            \"lastTimestamp\": \"2021-09-10T05:35:08Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-scheduler-ip-172-20-37-129.us-west-2.compute.internal.16a35f283b7392ce\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"670504f2-d406-47d4-9f3d-89686acf8ae4\",\n                \"resourceVersion\": \"26\",\n                \"creationTimestamp\": \"2021-09-10T05:35:54Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-scheduler-ip-172-20-37-129.us-west-2.compute.internal\",\n                \"uid\": 
\"6c8477f7e2b4b40106c022734372d908\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-scheduler}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container kube-scheduler\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-37-129.us-west-2.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-10T05:35:16Z\",\n            \"lastTimestamp\": \"2021-09-10T05:35:16Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-scheduler-ip-172-20-37-129.us-west-2.compute.internal.16a35f2859dd8f37\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"3caa5c07-5c4c-426f-aa13-8584003c949f\",\n                \"resourceVersion\": \"29\",\n                \"creationTimestamp\": \"2021-09-10T05:35:55Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-scheduler-ip-172-20-37-129.us-west-2.compute.internal\",\n                \"uid\": \"6c8477f7e2b4b40106c022734372d908\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-scheduler}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container kube-scheduler\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-37-129.us-west-2.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-09-10T05:35:16Z\",\n            \"lastTimestamp\": \"2021-09-10T05:35:16Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-scheduler.16a35f30893134c4\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"3365cfea-5714-429b-839d-085bc079195e\",\n                \"resourceVersion\": \"13\",\n                \"creationTimestamp\": \"2021-09-10T05:35:52Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Lease\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-scheduler\",\n                \"uid\": \"7c0f3be6-28b3-4c4d-a3bd-2a6bd80ea47d\",\n                \"apiVersion\": \"coordination.k8s.io/v1\",\n                \"resourceVersion\": \"215\"\n            },\n            \"reason\": \"LeaderElection\",\n            \"message\": \"ip-172-20-37-129_9f6b713f-c9a9-4254-94b3-bf15b9450bba became leader\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2021-09-10T05:35:52Z\",\n            \"lastTimestamp\": \"2021-09-10T05:35:52Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        }\n    ]\n}\n{\n    \"kind\": \"ReplicationControllerList\",\n    \"apiVersion\": \"v1\",\n    \"metadata\": {\n        \"resourceVersion\": \"6126\"\n    },\n    \"items\": []\n}\n{\n    
\"kind\": \"ServiceList\",\n    \"apiVersion\": \"v1\",\n    \"metadata\": {\n        \"resourceVersion\": \"6126\"\n    },\n    \"items\": [\n        {\n            \"metadata\": {\n                \"name\": \"kube-dns\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"2360ca8c-7e26-41d5-ba04-752bd68767c8\",\n                \"resourceVersion\": \"231\",\n                \"creationTimestamp\": \"2021-09-10T05:35:54Z\",\n                \"labels\": {\n                    \"addon.kops.k8s.io/name\": \"coredns.addons.k8s.io\",\n                    \"app.kubernetes.io/managed-by\": \"kops\",\n                    \"k8s-addon\": \"coredns.addons.k8s.io\",\n                    \"k8s-app\": \"kube-dns\",\n                    \"kubernetes.io/cluster-service\": \"true\",\n                    \"kubernetes.io/name\": \"CoreDNS\"\n                },\n                \"annotations\": {\n                    \"kubectl.kubernetes.io/last-applied-configuration\": \"{\\\"apiVersion\\\":\\\"v1\\\",\\\"kind\\\":\\\"Service\\\",\\\"metadata\\\":{\\\"annotations\\\":{\\\"prometheus.io/port\\\":\\\"9153\\\",\\\"prometheus.io/scrape\\\":\\\"true\\\"},\\\"creationTimestamp\\\":null,\\\"labels\\\":{\\\"addon.kops.k8s.io/name\\\":\\\"coredns.addons.k8s.io\\\",\\\"app.kubernetes.io/managed-by\\\":\\\"kops\\\",\\\"k8s-addon\\\":\\\"coredns.addons.k8s.io\\\",\\\"k8s-app\\\":\\\"kube-dns\\\",\\\"kubernetes.io/cluster-service\\\":\\\"true\\\",\\\"kubernetes.io/name\\\":\\\"CoreDNS\\\"},\\\"name\\\":\\\"kube-dns\\\",\\\"namespace\\\":\\\"kube-system\\\",\\\"resourceVersion\\\":\\\"0\\\"},\\\"spec\\\":{\\\"clusterIP\\\":\\\"100.64.0.10\\\",\\\"ports\\\":[{\\\"name\\\":\\\"dns\\\",\\\"port\\\":53,\\\"protocol\\\":\\\"UDP\\\"},{\\\"name\\\":\\\"dns-tcp\\\",\\\"port\\\":53,\\\"protocol\\\":\\\"TCP\\\"},{\\\"name\\\":\\\"metrics\\\",\\\"port\\\":9153,\\\"protocol\\\":\\\"TCP\\\"}],\\\"selector\\\":{\\\"k8s-app\\\":\\\"kube-dns\\\"}}}\\n\",\n                    \"prometheus.io/port\": \"9153\",\n                    \"prometheus.io/scrape\": \"true\"\n                }\n            },\n            \"spec\": {\n                \"ports\": [\n                    {\n                        \"name\": \"dns\",\n                        \"protocol\": \"UDP\",\n                        \"port\": 53,\n                        \"targetPort\": 53\n                    },\n                    {\n                        \"name\": \"dns-tcp\",\n                        \"protocol\": \"TCP\",\n                        \"port\": 53,\n                        \"targetPort\": 53\n                    },\n                    {\n                        \"name\": \"metrics\",\n                        \"protocol\": \"TCP\",\n                        \"port\": 9153,\n                        \"targetPort\": 9153\n                    }\n                ],\n                \"selector\": {\n                    \"k8s-app\": \"kube-dns\"\n                },\n                \"clusterIP\": \"100.64.0.10\",\n                \"clusterIPs\": [\n                    \"100.64.0.10\"\n                ],\n                \"type\": \"ClusterIP\",\n                \"sessionAffinity\": \"None\",\n                \"ipFamilies\": [\n                    \"IPv4\"\n                ],\n                \"ipFamilyPolicy\": \"SingleStack\"\n            },\n            \"status\": {\n                \"loadBalancer\": {}\n            }\n        }\n    ]\n}\n{\n    \"kind\": \"DaemonSetList\",\n    \"apiVersion\": \"apps/v1\",\n    \"metadata\": {\n        
\"resourceVersion\": \"6128\"\n    },\n    \"items\": [\n        {\n            \"metadata\": {\n                \"name\": \"kops-controller\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"ddabc270-f2b0-4841-9ce7-4787619aa879\",\n                \"resourceVersion\": \"465\",\n                \"generation\": 1,\n                \"creationTimestamp\": \"2021-09-10T05:35:57Z\",\n                \"labels\": {\n                    \"addon.kops.k8s.io/name\": \"kops-controller.addons.k8s.io\",\n                    \"app.kubernetes.io/managed-by\": \"kops\",\n                    \"k8s-addon\": \"kops-controller.addons.k8s.io\",\n                    \"k8s-app\": \"kops-controller\",\n                    \"version\": \"v1.22.0-beta.1\"\n                },\n                \"annotations\": {\n                    \"deprecated.daemonset.template.generation\": \"1\",\n                    \"kubectl.kubernetes.io/last-applied-configuration\": \"{\\\"apiVersion\\\":\\\"apps/v1\\\",\\\"kind\\\":\\\"DaemonSet\\\",\\\"metadata\\\":{\\\"annotations\\\":{},\\\"creationTimestamp\\\":null,\\\"labels\\\":{\\\"addon.kops.k8s.io/name\\\":\\\"kops-controller.addons.k8s.io\\\",\\\"app.kubernetes.io/managed-by\\\":\\\"kops\\\",\\\"k8s-addon\\\":\\\"kops-controller.addons.k8s.io\\\",\\\"k8s-app\\\":\\\"kops-controller\\\",\\\"version\\\":\\\"v1.22.0-beta.1\\\"},\\\"name\\\":\\\"kops-controller\\\",\\\"namespace\\\":\\\"kube-system\\\"},\\\"spec\\\":{\\\"selector\\\":{\\\"matchLabels\\\":{\\\"k8s-app\\\":\\\"kops-controller\\\"}},\\\"template\\\":{\\\"metadata\\\":{\\\"annotations\\\":{\\\"dns.alpha.kubernetes.io/internal\\\":\\\"kops-controller.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io\\\"},\\\"labels\\\":{\\\"k8s-addon\\\":\\\"kops-controller.addons.k8s.io\\\",\\\"k8s-app\\\":\\\"kops-controller\\\",\\\"version\\\":\\\"v1.22.0-beta.1\\\"}},\\\"spec\\\":{\\\"containers\\\":[{\\\"command\\\":[\\\"/kops-controller\\\",\\\"--v=2\\\",\\\"--conf=/etc/kubernetes/kops-controller/config/config.yaml\\\"],\\\"env\\\":[{\\\"name\\\":\\\"KUBERNETES_SERVICE_HOST\\\",\\\"value\\\":\\\"127.0.0.1\\\"}],\\\"image\\\":\\\"k8s.gcr.io/kops/kops-controller:1.22.0-beta.1\\\",\\\"name\\\":\\\"kops-controller\\\",\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"50m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"securityContext\\\":{\\\"runAsNonRoot\\\":true},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/kops-controller/config/\\\",\\\"name\\\":\\\"kops-controller-config\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/kops-controller/pki/\\\",\\\"name\\\":\\\"kops-controller-pki\\\"}]}],\\\"dnsPolicy\\\":\\\"Default\\\",\\\"hostNetwork\\\":true,\\\"nodeSelector\\\":{\\\"kops.k8s.io/kops-controller-pki\\\":\\\"\\\",\\\"node-role.kubernetes.io/master\\\":\\\"\\\"},\\\"priorityClassName\\\":\\\"system-node-critical\\\",\\\"serviceAccount\\\":\\\"kops-controller\\\",\\\"tolerations\\\":[{\\\"key\\\":\\\"node-role.kubernetes.io/master\\\",\\\"operator\\\":\\\"Exists\\\"}],\\\"volumes\\\":[{\\\"configMap\\\":{\\\"name\\\":\\\"kops-controller\\\"},\\\"name\\\":\\\"kops-controller-config\\\"},{\\\"hostPath\\\":{\\\"path\\\":\\\"/etc/kubernetes/kops-controller/\\\",\\\"type\\\":\\\"Directory\\\"},\\\"name\\\":\\\"kops-controller-pki\\\"}]}},\\\"updateStrategy\\\":{\\\"type\\\":\\\"OnDelete\\\"}}}\\n\"\n                }\n            },\n            \"spec\": {\n                \"selector\": {\n                    \"matchLabels\": {\n                        \"k8s-app\": \"kops-controller\"\n                 
   }\n                },\n                \"template\": {\n                    \"metadata\": {\n                        \"creationTimestamp\": null,\n                        \"labels\": {\n                            \"k8s-addon\": \"kops-controller.addons.k8s.io\",\n                            \"k8s-app\": \"kops-controller\",\n                            \"version\": \"v1.22.0-beta.1\"\n                        },\n                        \"annotations\": {\n                            \"dns.alpha.kubernetes.io/internal\": \"kops-controller.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io\"\n                        }\n                    },\n                    \"spec\": {\n                        \"volumes\": [\n                            {\n                                \"name\": \"kops-controller-config\",\n                                \"configMap\": {\n                                    \"name\": \"kops-controller\",\n                                    \"defaultMode\": 420\n                                }\n                            },\n                            {\n                                \"name\": \"kops-controller-pki\",\n                                \"hostPath\": {\n                                    \"path\": \"/etc/kubernetes/kops-controller/\",\n                                    \"type\": \"Directory\"\n                                }\n                            }\n                        ],\n                        \"containers\": [\n                            {\n                                \"name\": \"kops-controller\",\n                                \"image\": \"k8s.gcr.io/kops/kops-controller:1.22.0-beta.1\",\n                                \"command\": [\n                                    \"/kops-controller\",\n                                    \"--v=2\",\n                                    \"--conf=/etc/kubernetes/kops-controller/config/config.yaml\"\n                                ],\n                                \"env\": [\n                                    {\n                                        \"name\": \"KUBERNETES_SERVICE_HOST\",\n                                        \"value\": \"127.0.0.1\"\n                                    }\n                                ],\n                                \"resources\": {\n                                    \"requests\": {\n                                        \"cpu\": \"50m\",\n                                        \"memory\": \"50Mi\"\n                                    }\n                                },\n                                \"volumeMounts\": [\n                                    {\n                                        \"name\": \"kops-controller-config\",\n                                        \"mountPath\": \"/etc/kubernetes/kops-controller/config/\"\n                                    },\n                                    {\n                                        \"name\": \"kops-controller-pki\",\n                                        \"mountPath\": \"/etc/kubernetes/kops-controller/pki/\"\n                                    }\n                                ],\n                                \"terminationMessagePath\": \"/dev/termination-log\",\n                                \"terminationMessagePolicy\": \"File\",\n                                \"imagePullPolicy\": \"IfNotPresent\",\n                                \"securityContext\": {\n                                    \"runAsNonRoot\": true\n                        
        }\n                            }\n                        ],\n                        \"restartPolicy\": \"Always\",\n                        \"terminationGracePeriodSeconds\": 30,\n                        \"dnsPolicy\": \"Default\",\n                        \"nodeSelector\": {\n                            \"kops.k8s.io/kops-controller-pki\": \"\",\n                            \"node-role.kubernetes.io/master\": \"\"\n                        },\n                        \"serviceAccountName\": \"kops-controller\",\n                        \"serviceAccount\": \"kops-controller\",\n                        \"hostNetwork\": true,\n                        \"securityContext\": {},\n                        \"schedulerName\": \"default-scheduler\",\n                        \"tolerations\": [\n                            {\n                                \"key\": \"node-role.kubernetes.io/master\",\n                                \"operator\": \"Exists\"\n                            }\n                        ],\n                        \"priorityClassName\": \"system-node-critical\"\n                    }\n                },\n                \"updateStrategy\": {\n                    \"type\": \"OnDelete\"\n                },\n                \"revisionHistoryLimit\": 10\n            },\n            \"status\": {\n                \"currentNumberScheduled\": 1,\n                \"numberMisscheduled\": 0,\n                \"desiredNumberScheduled\": 1,\n                \"numberReady\": 1,\n                \"observedGeneration\": 1,\n                \"updatedNumberScheduled\": 1,\n                \"numberAvailable\": 1\n            }\n        }\n    ]\n}\n{\n    \"kind\": \"DeploymentList\",\n    \"apiVersion\": \"apps/v1\",\n    \"metadata\": {\n        \"resourceVersion\": \"6132\"\n    },\n    \"items\": [\n        {\n            \"metadata\": {\n                \"name\": \"coredns\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"2d2ff3dc-213d-4a86-ad72-41c6b5d59fd5\",\n                \"resourceVersion\": \"691\",\n                \"generation\": 2,\n                \"creationTimestamp\": \"2021-09-10T05:35:54Z\",\n                \"labels\": {\n                    \"addon.kops.k8s.io/name\": \"coredns.addons.k8s.io\",\n                    \"app.kubernetes.io/managed-by\": \"kops\",\n                    \"k8s-addon\": \"coredns.addons.k8s.io\",\n                    \"k8s-app\": \"kube-dns\",\n                    \"kubernetes.io/cluster-service\": \"true\",\n                    \"kubernetes.io/name\": \"CoreDNS\"\n                },\n                \"annotations\": {\n                    \"deployment.kubernetes.io/revision\": \"1\",\n                    \"kubectl.kubernetes.io/last-applied-configuration\": 
\"{\\\"apiVersion\\\":\\\"apps/v1\\\",\\\"kind\\\":\\\"Deployment\\\",\\\"metadata\\\":{\\\"annotations\\\":{},\\\"creationTimestamp\\\":null,\\\"labels\\\":{\\\"addon.kops.k8s.io/name\\\":\\\"coredns.addons.k8s.io\\\",\\\"app.kubernetes.io/managed-by\\\":\\\"kops\\\",\\\"k8s-addon\\\":\\\"coredns.addons.k8s.io\\\",\\\"k8s-app\\\":\\\"kube-dns\\\",\\\"kubernetes.io/cluster-service\\\":\\\"true\\\",\\\"kubernetes.io/name\\\":\\\"CoreDNS\\\"},\\\"name\\\":\\\"coredns\\\",\\\"namespace\\\":\\\"kube-system\\\"},\\\"spec\\\":{\\\"selector\\\":{\\\"matchLabels\\\":{\\\"k8s-app\\\":\\\"kube-dns\\\"}},\\\"strategy\\\":{\\\"rollingUpdate\\\":{\\\"maxSurge\\\":\\\"10%\\\",\\\"maxUnavailable\\\":1},\\\"type\\\":\\\"RollingUpdate\\\"},\\\"template\\\":{\\\"metadata\\\":{\\\"labels\\\":{\\\"k8s-app\\\":\\\"kube-dns\\\"}},\\\"spec\\\":{\\\"affinity\\\":{\\\"podAntiAffinity\\\":{\\\"preferredDuringSchedulingIgnoredDuringExecution\\\":[{\\\"podAffinityTerm\\\":{\\\"labelSelector\\\":{\\\"matchExpressions\\\":[{\\\"key\\\":\\\"k8s-app\\\",\\\"operator\\\":\\\"In\\\",\\\"values\\\":[\\\"kube-dns\\\"]}]},\\\"topologyKey\\\":\\\"kubernetes.io/hostname\\\"},\\\"weight\\\":100}]}},\\\"containers\\\":[{\\\"args\\\":[\\\"-conf\\\",\\\"/etc/coredns/Corefile\\\"],\\\"image\\\":\\\"k8s.gcr.io/coredns/coredns:v1.8.4\\\",\\\"imagePullPolicy\\\":\\\"IfNotPresent\\\",\\\"livenessProbe\\\":{\\\"failureThreshold\\\":5,\\\"httpGet\\\":{\\\"path\\\":\\\"/health\\\",\\\"port\\\":8080,\\\"scheme\\\":\\\"HTTP\\\"},\\\"initialDelaySeconds\\\":60,\\\"successThreshold\\\":1,\\\"timeoutSeconds\\\":5},\\\"name\\\":\\\"coredns\\\",\\\"ports\\\":[{\\\"containerPort\\\":53,\\\"name\\\":\\\"dns\\\",\\\"protocol\\\":\\\"UDP\\\"},{\\\"containerPort\\\":53,\\\"name\\\":\\\"dns-tcp\\\",\\\"protocol\\\":\\\"TCP\\\"},{\\\"containerPort\\\":9153,\\\"name\\\":\\\"metrics\\\",\\\"protocol\\\":\\\"TCP\\\"}],\\\"readinessProbe\\\":{\\\"httpGet\\\":{\\\"path\\\":\\\"/ready\\\",\\\"port\\\":8181,\\\"scheme\\\":\\\"HTTP\\\"}},\\\"resources\\\":{\\\"limits\\\":{\\\"memory\\\":\\\"170Mi\\\"},\\\"requests\\\":{\\\"cpu\\\":\\\"100m\\\",\\\"memory\\\":\\\"70Mi\\\"}},\\\"securityContext\\\":{\\\"allowPrivilegeEscalation\\\":false,\\\"capabilities\\\":{\\\"add\\\":[\\\"NET_BIND_SERVICE\\\"],\\\"drop\\\":[\\\"all\\\"]},\\\"readOnlyRootFilesystem\\\":true},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/coredns\\\",\\\"name\\\":\\\"config-volume\\\",\\\"readOnly\\\":true}]}],\\\"dnsPolicy\\\":\\\"Default\\\",\\\"nodeSelector\\\":{\\\"kubernetes.io/os\\\":\\\"linux\\\"},\\\"priorityClassName\\\":\\\"system-cluster-critical\\\",\\\"serviceAccountName\\\":\\\"coredns\\\",\\\"tolerations\\\":[{\\\"key\\\":\\\"CriticalAddonsOnly\\\",\\\"operator\\\":\\\"Exists\\\"}],\\\"volumes\\\":[{\\\"configMap\\\":{\\\"items\\\":[{\\\"key\\\":\\\"Corefile\\\",\\\"path\\\":\\\"Corefile\\\"}],\\\"name\\\":\\\"coredns\\\"},\\\"name\\\":\\\"config-volume\\\"}]}}}}\\n\"\n                }\n            },\n            \"spec\": {\n                \"replicas\": 2,\n                \"selector\": {\n                    \"matchLabels\": {\n                        \"k8s-app\": \"kube-dns\"\n                    }\n                },\n                \"template\": {\n                    \"metadata\": {\n                        \"creationTimestamp\": null,\n                        \"labels\": {\n                            \"k8s-app\": \"kube-dns\"\n                        }\n                    },\n                    \"spec\": {\n                        \"volumes\": [\n                 
           {\n                                \"name\": \"config-volume\",\n                                \"configMap\": {\n                                    \"name\": \"coredns\",\n                                    \"items\": [\n                                        {\n                                            \"key\": \"Corefile\",\n                                            \"path\": \"Corefile\"\n                                        }\n                                    ],\n                                    \"defaultMode\": 420\n                                }\n                            }\n                        ],\n                        \"containers\": [\n                            {\n                                \"name\": \"coredns\",\n                                \"image\": \"k8s.gcr.io/coredns/coredns:v1.8.4\",\n                                \"args\": [\n                                    \"-conf\",\n                                    \"/etc/coredns/Corefile\"\n                                ],\n                                \"ports\": [\n                                    {\n                                        \"name\": \"dns\",\n                                        \"containerPort\": 53,\n                                        \"protocol\": \"UDP\"\n                                    },\n                                    {\n                                        \"name\": \"dns-tcp\",\n                                        \"containerPort\": 53,\n                                        \"protocol\": \"TCP\"\n                                    },\n                                    {\n                                        \"name\": \"metrics\",\n                                        \"containerPort\": 9153,\n                                        \"protocol\": \"TCP\"\n                                    }\n                                ],\n                                \"resources\": {\n                                    \"limits\": {\n                                        \"memory\": \"170Mi\"\n                                    },\n                                    \"requests\": {\n                                        \"cpu\": \"100m\",\n                                        \"memory\": \"70Mi\"\n                                    }\n                                },\n                                \"volumeMounts\": [\n                                    {\n                                        \"name\": \"config-volume\",\n                                        \"readOnly\": true,\n                                        \"mountPath\": \"/etc/coredns\"\n                                    }\n                                ],\n                                \"livenessProbe\": {\n                                    \"httpGet\": {\n                                        \"path\": \"/health\",\n                                        \"port\": 8080,\n                                        \"scheme\": \"HTTP\"\n                                    },\n                                    \"initialDelaySeconds\": 60,\n                                    \"timeoutSeconds\": 5,\n                                    \"periodSeconds\": 10,\n                                    \"successThreshold\": 1,\n                                    \"failureThreshold\": 5\n                                },\n                                \"readinessProbe\": {\n                                    \"httpGet\": {\n     
                                   \"path\": \"/ready\",\n                                        \"port\": 8181,\n                                        \"scheme\": \"HTTP\"\n                                    },\n                                    \"timeoutSeconds\": 1,\n                                    \"periodSeconds\": 10,\n                                    \"successThreshold\": 1,\n                                    \"failureThreshold\": 3\n                                },\n                                \"terminationMessagePath\": \"/dev/termination-log\",\n                                \"terminationMessagePolicy\": \"File\",\n                                \"imagePullPolicy\": \"IfNotPresent\",\n                                \"securityContext\": {\n                                    \"capabilities\": {\n                                        \"add\": [\n                                            \"NET_BIND_SERVICE\"\n                                        ],\n                                        \"drop\": [\n                                            \"all\"\n                                        ]\n                                    },\n                                    \"readOnlyRootFilesystem\": true,\n                                    \"allowPrivilegeEscalation\": false\n                                }\n                            }\n                        ],\n                        \"restartPolicy\": \"Always\",\n                        \"terminationGracePeriodSeconds\": 30,\n                        \"dnsPolicy\": \"Default\",\n                        \"nodeSelector\": {\n                            \"kubernetes.io/os\": \"linux\"\n                        },\n                        \"serviceAccountName\": \"coredns\",\n                        \"serviceAccount\": \"coredns\",\n                        \"securityContext\": {},\n                        \"affinity\": {\n                            \"podAntiAffinity\": {\n                                \"preferredDuringSchedulingIgnoredDuringExecution\": [\n                                    {\n                                        \"weight\": 100,\n                                        \"podAffinityTerm\": {\n                                            \"labelSelector\": {\n                                                \"matchExpressions\": [\n                                                    {\n                                                        \"key\": \"k8s-app\",\n                                                        \"operator\": \"In\",\n                                                        \"values\": [\n                                                            \"kube-dns\"\n                                                        ]\n                                                    }\n                                                ]\n                                            },\n                                            \"topologyKey\": \"kubernetes.io/hostname\"\n                                        }\n                                    }\n                                ]\n                            }\n                        },\n                        \"schedulerName\": \"default-scheduler\",\n                        \"tolerations\": [\n                            {\n                                \"key\": \"CriticalAddonsOnly\",\n                                \"operator\": \"Exists\"\n                            }\n                        ],\n      
                  \"priorityClassName\": \"system-cluster-critical\"\n                    }\n                },\n                \"strategy\": {\n                    \"type\": \"RollingUpdate\",\n                    \"rollingUpdate\": {\n                        \"maxUnavailable\": 1,\n                        \"maxSurge\": \"10%\"\n                    }\n                },\n                \"revisionHistoryLimit\": 10,\n                \"progressDeadlineSeconds\": 600\n            },\n            \"status\": {\n                \"observedGeneration\": 2,\n                \"replicas\": 2,\n                \"updatedReplicas\": 2,\n                \"readyReplicas\": 2,\n                \"availableReplicas\": 2,\n                \"conditions\": [\n                    {\n                        \"type\": \"Available\",\n                        \"status\": \"True\",\n                        \"lastUpdateTime\": \"2021-09-10T05:37:15Z\",\n                        \"lastTransitionTime\": \"2021-09-10T05:37:15Z\",\n                        \"reason\": \"MinimumReplicasAvailable\",\n                        \"message\": \"Deployment has minimum availability.\"\n                    },\n                    {\n                        \"type\": \"Progressing\",\n                        \"status\": \"True\",\n                        \"lastUpdateTime\": \"2021-09-10T05:37:17Z\",\n                        \"lastTransitionTime\": \"2021-09-10T05:36:20Z\",\n                        \"reason\": \"NewReplicaSetAvailable\",\n                        \"message\": \"ReplicaSet \\\"coredns-5dc785954d\\\" has successfully progressed.\"\n                    }\n                ]\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-autoscaler\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"7e7358a6-bbd9-4e46-ab49-c6427461ae00\",\n                \"resourceVersion\": \"664\",\n                \"generation\": 1,\n                \"creationTimestamp\": \"2021-09-10T05:35:54Z\",\n                \"labels\": {\n                    \"addon.kops.k8s.io/name\": \"coredns.addons.k8s.io\",\n                    \"app.kubernetes.io/managed-by\": \"kops\",\n                    \"k8s-addon\": \"coredns.addons.k8s.io\",\n                    \"k8s-app\": \"coredns-autoscaler\",\n                    \"kubernetes.io/cluster-service\": \"true\"\n                },\n                \"annotations\": {\n                    \"deployment.kubernetes.io/revision\": \"1\",\n                    \"kubectl.kubernetes.io/last-applied-configuration\": 
\"{\\\"apiVersion\\\":\\\"apps/v1\\\",\\\"kind\\\":\\\"Deployment\\\",\\\"metadata\\\":{\\\"annotations\\\":{},\\\"creationTimestamp\\\":null,\\\"labels\\\":{\\\"addon.kops.k8s.io/name\\\":\\\"coredns.addons.k8s.io\\\",\\\"app.kubernetes.io/managed-by\\\":\\\"kops\\\",\\\"k8s-addon\\\":\\\"coredns.addons.k8s.io\\\",\\\"k8s-app\\\":\\\"coredns-autoscaler\\\",\\\"kubernetes.io/cluster-service\\\":\\\"true\\\"},\\\"name\\\":\\\"coredns-autoscaler\\\",\\\"namespace\\\":\\\"kube-system\\\"},\\\"spec\\\":{\\\"selector\\\":{\\\"matchLabels\\\":{\\\"k8s-app\\\":\\\"coredns-autoscaler\\\"}},\\\"template\\\":{\\\"metadata\\\":{\\\"annotations\\\":{\\\"scheduler.alpha.kubernetes.io/critical-pod\\\":\\\"\\\"},\\\"labels\\\":{\\\"k8s-app\\\":\\\"coredns-autoscaler\\\"}},\\\"spec\\\":{\\\"containers\\\":[{\\\"command\\\":[\\\"/cluster-proportional-autoscaler\\\",\\\"--namespace=kube-system\\\",\\\"--configmap=coredns-autoscaler\\\",\\\"--target=Deployment/coredns\\\",\\\"--default-params={\\\\\\\"linear\\\\\\\":{\\\\\\\"coresPerReplica\\\\\\\":256,\\\\\\\"nodesPerReplica\\\\\\\":16,\\\\\\\"preventSinglePointFailure\\\\\\\":true}}\\\",\\\"--logtostderr=true\\\",\\\"--v=2\\\"],\\\"image\\\":\\\"k8s.gcr.io/cpa/cluster-proportional-autoscaler:1.8.4\\\",\\\"name\\\":\\\"autoscaler\\\",\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"10Mi\\\"}}}],\\\"nodeSelector\\\":{\\\"kubernetes.io/os\\\":\\\"linux\\\"},\\\"priorityClassName\\\":\\\"system-cluster-critical\\\",\\\"serviceAccountName\\\":\\\"coredns-autoscaler\\\",\\\"tolerations\\\":[{\\\"key\\\":\\\"CriticalAddonsOnly\\\",\\\"operator\\\":\\\"Exists\\\"}]}}}}\\n\"\n                }\n            },\n            \"spec\": {\n                \"replicas\": 1,\n                \"selector\": {\n                    \"matchLabels\": {\n                        \"k8s-app\": \"coredns-autoscaler\"\n                    }\n                },\n                \"template\": {\n                    \"metadata\": {\n                        \"creationTimestamp\": null,\n                        \"labels\": {\n                            \"k8s-app\": \"coredns-autoscaler\"\n                        },\n                        \"annotations\": {\n                            \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n                        }\n                    },\n                    \"spec\": {\n                        \"containers\": [\n                            {\n                                \"name\": \"autoscaler\",\n                                \"image\": \"k8s.gcr.io/cpa/cluster-proportional-autoscaler:1.8.4\",\n                                \"command\": [\n                                    \"/cluster-proportional-autoscaler\",\n                                    \"--namespace=kube-system\",\n                                    \"--configmap=coredns-autoscaler\",\n                                    \"--target=Deployment/coredns\",\n                                    \"--default-params={\\\"linear\\\":{\\\"coresPerReplica\\\":256,\\\"nodesPerReplica\\\":16,\\\"preventSinglePointFailure\\\":true}}\",\n                                    \"--logtostderr=true\",\n                                    \"--v=2\"\n                                ],\n                                \"resources\": {\n                                    \"requests\": {\n                                        \"cpu\": \"20m\",\n                                        \"memory\": \"10Mi\"\n                                    }\n             
                   },\n                                \"terminationMessagePath\": \"/dev/termination-log\",\n                                \"terminationMessagePolicy\": \"File\",\n                                \"imagePullPolicy\": \"IfNotPresent\"\n                            }\n                        ],\n                        \"restartPolicy\": \"Always\",\n                        \"terminationGracePeriodSeconds\": 30,\n                        \"dnsPolicy\": \"ClusterFirst\",\n                        \"nodeSelector\": {\n                            \"kubernetes.io/os\": \"linux\"\n                        },\n                        \"serviceAccountName\": \"coredns-autoscaler\",\n                        \"serviceAccount\": \"coredns-autoscaler\",\n                        \"securityContext\": {},\n                        \"schedulerName\": \"default-scheduler\",\n                        \"tolerations\": [\n                            {\n                                \"key\": \"CriticalAddonsOnly\",\n                                \"operator\": \"Exists\"\n                            }\n                        ],\n                        \"priorityClassName\": \"system-cluster-critical\"\n                    }\n                },\n                \"strategy\": {\n                    \"type\": \"RollingUpdate\",\n                    \"rollingUpdate\": {\n                        \"maxUnavailable\": \"25%\",\n                        \"maxSurge\": \"25%\"\n                    }\n                },\n                \"revisionHistoryLimit\": 10,\n                \"progressDeadlineSeconds\": 600\n            },\n            \"status\": {\n                \"observedGeneration\": 1,\n                \"replicas\": 1,\n                \"updatedReplicas\": 1,\n                \"readyReplicas\": 1,\n                \"availableReplicas\": 1,\n                \"conditions\": [\n                    {\n                        \"type\": \"Available\",\n                        \"status\": \"True\",\n                        \"lastUpdateTime\": \"2021-09-10T05:37:13Z\",\n                        \"lastTransitionTime\": \"2021-09-10T05:37:13Z\",\n                        \"reason\": \"MinimumReplicasAvailable\",\n                        \"message\": \"Deployment has minimum availability.\"\n                    },\n                    {\n                        \"type\": \"Progressing\",\n                        \"status\": \"True\",\n                        \"lastUpdateTime\": \"2021-09-10T05:37:13Z\",\n                        \"lastTransitionTime\": \"2021-09-10T05:36:20Z\",\n                        \"reason\": \"NewReplicaSetAvailable\",\n                        \"message\": \"ReplicaSet \\\"coredns-autoscaler-84d4cfd89c\\\" has successfully progressed.\"\n                    }\n                ]\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"dns-controller\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"705376f9-a7f6-4036-b8fe-051f3c945ab4\",\n                \"resourceVersion\": \"464\",\n                \"generation\": 1,\n                \"creationTimestamp\": \"2021-09-10T05:35:55Z\",\n                \"labels\": {\n                    \"addon.kops.k8s.io/name\": \"dns-controller.addons.k8s.io\",\n                    \"app.kubernetes.io/managed-by\": \"kops\",\n                    \"k8s-addon\": \"dns-controller.addons.k8s.io\",\n                    \"k8s-app\": \"dns-controller\",\n                    \"version\": 
\"v1.22.0-beta.1\"\n                },\n                \"annotations\": {\n                    \"deployment.kubernetes.io/revision\": \"1\",\n                    \"kubectl.kubernetes.io/last-applied-configuration\": \"{\\\"apiVersion\\\":\\\"apps/v1\\\",\\\"kind\\\":\\\"Deployment\\\",\\\"metadata\\\":{\\\"annotations\\\":{},\\\"creationTimestamp\\\":null,\\\"labels\\\":{\\\"addon.kops.k8s.io/name\\\":\\\"dns-controller.addons.k8s.io\\\",\\\"app.kubernetes.io/managed-by\\\":\\\"kops\\\",\\\"k8s-addon\\\":\\\"dns-controller.addons.k8s.io\\\",\\\"k8s-app\\\":\\\"dns-controller\\\",\\\"version\\\":\\\"v1.22.0-beta.1\\\"},\\\"name\\\":\\\"dns-controller\\\",\\\"namespace\\\":\\\"kube-system\\\"},\\\"spec\\\":{\\\"replicas\\\":1,\\\"selector\\\":{\\\"matchLabels\\\":{\\\"k8s-app\\\":\\\"dns-controller\\\"}},\\\"strategy\\\":{\\\"type\\\":\\\"Recreate\\\"},\\\"template\\\":{\\\"metadata\\\":{\\\"annotations\\\":{\\\"scheduler.alpha.kubernetes.io/critical-pod\\\":\\\"\\\"},\\\"labels\\\":{\\\"k8s-addon\\\":\\\"dns-controller.addons.k8s.io\\\",\\\"k8s-app\\\":\\\"dns-controller\\\",\\\"version\\\":\\\"v1.22.0-beta.1\\\"}},\\\"spec\\\":{\\\"containers\\\":[{\\\"command\\\":[\\\"/dns-controller\\\",\\\"--watch-ingress=false\\\",\\\"--dns=aws-route53\\\",\\\"--zone=*/ZEMLNXIIWQ0RV\\\",\\\"--zone=*/*\\\",\\\"-v=2\\\"],\\\"env\\\":[{\\\"name\\\":\\\"KUBERNETES_SERVICE_HOST\\\",\\\"value\\\":\\\"127.0.0.1\\\"}],\\\"image\\\":\\\"k8s.gcr.io/kops/dns-controller:1.22.0-beta.1\\\",\\\"name\\\":\\\"dns-controller\\\",\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"50m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"securityContext\\\":{\\\"runAsNonRoot\\\":true}}],\\\"dnsPolicy\\\":\\\"Default\\\",\\\"hostNetwork\\\":true,\\\"nodeSelector\\\":{\\\"node-role.kubernetes.io/master\\\":\\\"\\\"},\\\"priorityClassName\\\":\\\"system-cluster-critical\\\",\\\"serviceAccount\\\":\\\"dns-controller\\\",\\\"tolerations\\\":[{\\\"operator\\\":\\\"Exists\\\"}]}}}}\\n\"\n                }\n            },\n            \"spec\": {\n                \"replicas\": 1,\n                \"selector\": {\n                    \"matchLabels\": {\n                        \"k8s-app\": \"dns-controller\"\n                    }\n                },\n                \"template\": {\n                    \"metadata\": {\n                        \"creationTimestamp\": null,\n                        \"labels\": {\n                            \"k8s-addon\": \"dns-controller.addons.k8s.io\",\n                            \"k8s-app\": \"dns-controller\",\n                            \"version\": \"v1.22.0-beta.1\"\n                        },\n                        \"annotations\": {\n                            \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n                        }\n                    },\n                    \"spec\": {\n                        \"containers\": [\n                            {\n                                \"name\": \"dns-controller\",\n                                \"image\": \"k8s.gcr.io/kops/dns-controller:1.22.0-beta.1\",\n                                \"command\": [\n                                    \"/dns-controller\",\n                                    \"--watch-ingress=false\",\n                                    \"--dns=aws-route53\",\n                                    \"--zone=*/ZEMLNXIIWQ0RV\",\n                                    \"--zone=*/*\",\n                                    \"-v=2\"\n                                ],\n                                \"env\": [\n           
                         {\n                                        \"name\": \"KUBERNETES_SERVICE_HOST\",\n                                        \"value\": \"127.0.0.1\"\n                                    }\n                                ],\n                                \"resources\": {\n                                    \"requests\": {\n                                        \"cpu\": \"50m\",\n                                        \"memory\": \"50Mi\"\n                                    }\n                                },\n                                \"terminationMessagePath\": \"/dev/termination-log\",\n                                \"terminationMessagePolicy\": \"File\",\n                                \"imagePullPolicy\": \"IfNotPresent\",\n                                \"securityContext\": {\n                                    \"runAsNonRoot\": true\n                                }\n                            }\n                        ],\n                        \"restartPolicy\": \"Always\",\n                        \"terminationGracePeriodSeconds\": 30,\n                        \"dnsPolicy\": \"Default\",\n                        \"nodeSelector\": {\n                            \"node-role.kubernetes.io/master\": \"\"\n                        },\n                        \"serviceAccountName\": \"dns-controller\",\n                        \"serviceAccount\": \"dns-controller\",\n                        \"hostNetwork\": true,\n                        \"securityContext\": {},\n                        \"schedulerName\": \"default-scheduler\",\n                        \"tolerations\": [\n                            {\n                                \"operator\": \"Exists\"\n                            }\n                        ],\n                        \"priorityClassName\": \"system-cluster-critical\"\n                    }\n                },\n                \"strategy\": {\n                    \"type\": \"Recreate\"\n                },\n                \"revisionHistoryLimit\": 10,\n                \"progressDeadlineSeconds\": 600\n            },\n            \"status\": {\n                \"observedGeneration\": 1,\n                \"replicas\": 1,\n                \"updatedReplicas\": 1,\n                \"readyReplicas\": 1,\n                \"availableReplicas\": 1,\n                \"conditions\": [\n                    {\n                        \"type\": \"Available\",\n                        \"status\": \"True\",\n                        \"lastUpdateTime\": \"2021-09-10T05:36:22Z\",\n                        \"lastTransitionTime\": \"2021-09-10T05:36:22Z\",\n                        \"reason\": \"MinimumReplicasAvailable\",\n                        \"message\": \"Deployment has minimum availability.\"\n                    },\n                    {\n                        \"type\": \"Progressing\",\n                        \"status\": \"True\",\n                        \"lastUpdateTime\": \"2021-09-10T05:36:22Z\",\n                        \"lastTransitionTime\": \"2021-09-10T05:36:20Z\",\n                        \"reason\": \"NewReplicaSetAvailable\",\n                        \"message\": \"ReplicaSet \\\"dns-controller-59b7d7865d\\\" has successfully progressed.\"\n                    }\n                ]\n            }\n        }\n    ]\n}\n{\n    \"kind\": \"ReplicaSetList\",\n    \"apiVersion\": \"apps/v1\",\n    \"metadata\": {\n        \"resourceVersion\": \"6136\"\n    },\n    \"items\": [\n        {\n            \"metadata\": {\n     
           \"name\": \"coredns-5dc785954d\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"70c1350d-376c-499d-84d5-75cca4d1a973\",\n                \"resourceVersion\": \"690\",\n                \"generation\": 2,\n                \"creationTimestamp\": \"2021-09-10T05:36:20Z\",\n                \"labels\": {\n                    \"k8s-app\": \"kube-dns\",\n                    \"pod-template-hash\": \"5dc785954d\"\n                },\n                \"annotations\": {\n                    \"deployment.kubernetes.io/desired-replicas\": \"2\",\n                    \"deployment.kubernetes.io/max-replicas\": \"3\",\n                    \"deployment.kubernetes.io/revision\": \"1\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"apps/v1\",\n                        \"kind\": \"Deployment\",\n                        \"name\": \"coredns\",\n                        \"uid\": \"2d2ff3dc-213d-4a86-ad72-41c6b5d59fd5\",\n                        \"controller\": true,\n                        \"blockOwnerDeletion\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"replicas\": 2,\n                \"selector\": {\n                    \"matchLabels\": {\n                        \"k8s-app\": \"kube-dns\",\n                        \"pod-template-hash\": \"5dc785954d\"\n                    }\n                },\n                \"template\": {\n                    \"metadata\": {\n                        \"creationTimestamp\": null,\n                        \"labels\": {\n                            \"k8s-app\": \"kube-dns\",\n                            \"pod-template-hash\": \"5dc785954d\"\n                        }\n                    },\n                    \"spec\": {\n                        \"volumes\": [\n                            {\n                                \"name\": \"config-volume\",\n                                \"configMap\": {\n                                    \"name\": \"coredns\",\n                                    \"items\": [\n                                        {\n                                            \"key\": \"Corefile\",\n                                            \"path\": \"Corefile\"\n                                        }\n                                    ],\n                                    \"defaultMode\": 420\n                                }\n                            }\n                        ],\n                        \"containers\": [\n                            {\n                                \"name\": \"coredns\",\n                                \"image\": \"k8s.gcr.io/coredns/coredns:v1.8.4\",\n                                \"args\": [\n                                    \"-conf\",\n                                    \"/etc/coredns/Corefile\"\n                                ],\n                                \"ports\": [\n                                    {\n                                        \"name\": \"dns\",\n                                        \"containerPort\": 53,\n                                        \"protocol\": \"UDP\"\n                                    },\n                                    {\n                                        \"name\": \"dns-tcp\",\n                                        \"containerPort\": 53,\n                                        \"protocol\": \"TCP\"\n                                    },\n            
                        {\n                                        \"name\": \"metrics\",\n                                        \"containerPort\": 9153,\n                                        \"protocol\": \"TCP\"\n                                    }\n                                ],\n                                \"resources\": {\n                                    \"limits\": {\n                                        \"memory\": \"170Mi\"\n                                    },\n                                    \"requests\": {\n                                        \"cpu\": \"100m\",\n                                        \"memory\": \"70Mi\"\n                                    }\n                                },\n                                \"volumeMounts\": [\n                                    {\n                                        \"name\": \"config-volume\",\n                                        \"readOnly\": true,\n                                        \"mountPath\": \"/etc/coredns\"\n                                    }\n                                ],\n                                \"livenessProbe\": {\n                                    \"httpGet\": {\n                                        \"path\": \"/health\",\n                                        \"port\": 8080,\n                                        \"scheme\": \"HTTP\"\n                                    },\n                                    \"initialDelaySeconds\": 60,\n                                    \"timeoutSeconds\": 5,\n                                    \"periodSeconds\": 10,\n                                    \"successThreshold\": 1,\n                                    \"failureThreshold\": 5\n                                },\n                                \"readinessProbe\": {\n                                    \"httpGet\": {\n                                        \"path\": \"/ready\",\n                                        \"port\": 8181,\n                                        \"scheme\": \"HTTP\"\n                                    },\n                                    \"timeoutSeconds\": 1,\n                                    \"periodSeconds\": 10,\n                                    \"successThreshold\": 1,\n                                    \"failureThreshold\": 3\n                                },\n                                \"terminationMessagePath\": \"/dev/termination-log\",\n                                \"terminationMessagePolicy\": \"File\",\n                                \"imagePullPolicy\": \"IfNotPresent\",\n                                \"securityContext\": {\n                                    \"capabilities\": {\n                                        \"add\": [\n                                            \"NET_BIND_SERVICE\"\n                                        ],\n                                        \"drop\": [\n                                            \"all\"\n                                        ]\n                                    },\n                                    \"readOnlyRootFilesystem\": true,\n                                    \"allowPrivilegeEscalation\": false\n                                }\n                            }\n                        ],\n                        \"restartPolicy\": \"Always\",\n                        \"terminationGracePeriodSeconds\": 30,\n                        \"dnsPolicy\": \"Default\",\n                        \"nodeSelector\": {\n 
                           \"kubernetes.io/os\": \"linux\"\n                        },\n                        \"serviceAccountName\": \"coredns\",\n                        \"serviceAccount\": \"coredns\",\n                        \"securityContext\": {},\n                        \"affinity\": {\n                            \"podAntiAffinity\": {\n                                \"preferredDuringSchedulingIgnoredDuringExecution\": [\n                                    {\n                                        \"weight\": 100,\n                                        \"podAffinityTerm\": {\n                                            \"labelSelector\": {\n                                                \"matchExpressions\": [\n                                                    {\n                                                        \"key\": \"k8s-app\",\n                                                        \"operator\": \"In\",\n                                                        \"values\": [\n                                                            \"kube-dns\"\n                                                        ]\n                                                    }\n                                                ]\n                                            },\n                                            \"topologyKey\": \"kubernetes.io/hostname\"\n                                        }\n                                    }\n                                ]\n                            }\n                        },\n                        \"schedulerName\": \"default-scheduler\",\n                        \"tolerations\": [\n                            {\n                                \"key\": \"CriticalAddonsOnly\",\n                                \"operator\": \"Exists\"\n                            }\n                        ],\n                        \"priorityClassName\": \"system-cluster-critical\"\n                    }\n                }\n            },\n            \"status\": {\n                \"replicas\": 2,\n                \"fullyLabeledReplicas\": 2,\n                \"readyReplicas\": 2,\n                \"availableReplicas\": 2,\n                \"observedGeneration\": 2\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-autoscaler-84d4cfd89c\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"8c918736-93aa-4b08-8b87-cf6549bee7e8\",\n                \"resourceVersion\": \"663\",\n                \"generation\": 1,\n                \"creationTimestamp\": \"2021-09-10T05:36:20Z\",\n                \"labels\": {\n                    \"k8s-app\": \"coredns-autoscaler\",\n                    \"pod-template-hash\": \"84d4cfd89c\"\n                },\n                \"annotations\": {\n                    \"deployment.kubernetes.io/desired-replicas\": \"1\",\n                    \"deployment.kubernetes.io/max-replicas\": \"2\",\n                    \"deployment.kubernetes.io/revision\": \"1\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"apps/v1\",\n                        \"kind\": \"Deployment\",\n                        \"name\": \"coredns-autoscaler\",\n                        \"uid\": \"7e7358a6-bbd9-4e46-ab49-c6427461ae00\",\n                        \"controller\": true,\n                        \"blockOwnerDeletion\": true\n                    }\n                ]\n            
},\n            \"spec\": {\n                \"replicas\": 1,\n                \"selector\": {\n                    \"matchLabels\": {\n                        \"k8s-app\": \"coredns-autoscaler\",\n                        \"pod-template-hash\": \"84d4cfd89c\"\n                    }\n                },\n                \"template\": {\n                    \"metadata\": {\n                        \"creationTimestamp\": null,\n                        \"labels\": {\n                            \"k8s-app\": \"coredns-autoscaler\",\n                            \"pod-template-hash\": \"84d4cfd89c\"\n                        },\n                        \"annotations\": {\n                            \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n                        }\n                    },\n                    \"spec\": {\n                        \"containers\": [\n                            {\n                                \"name\": \"autoscaler\",\n                                \"image\": \"k8s.gcr.io/cpa/cluster-proportional-autoscaler:1.8.4\",\n                                \"command\": [\n                                    \"/cluster-proportional-autoscaler\",\n                                    \"--namespace=kube-system\",\n                                    \"--configmap=coredns-autoscaler\",\n                                    \"--target=Deployment/coredns\",\n                                    \"--default-params={\\\"linear\\\":{\\\"coresPerReplica\\\":256,\\\"nodesPerReplica\\\":16,\\\"preventSinglePointFailure\\\":true}}\",\n                                    \"--logtostderr=true\",\n                                    \"--v=2\"\n                                ],\n                                \"resources\": {\n                                    \"requests\": {\n                                        \"cpu\": \"20m\",\n                                        \"memory\": \"10Mi\"\n                                    }\n                                },\n                                \"terminationMessagePath\": \"/dev/termination-log\",\n                                \"terminationMessagePolicy\": \"File\",\n                                \"imagePullPolicy\": \"IfNotPresent\"\n                            }\n                        ],\n                        \"restartPolicy\": \"Always\",\n                        \"terminationGracePeriodSeconds\": 30,\n                        \"dnsPolicy\": \"ClusterFirst\",\n                        \"nodeSelector\": {\n                            \"kubernetes.io/os\": \"linux\"\n                        },\n                        \"serviceAccountName\": \"coredns-autoscaler\",\n                        \"serviceAccount\": \"coredns-autoscaler\",\n                        \"securityContext\": {},\n                        \"schedulerName\": \"default-scheduler\",\n                        \"tolerations\": [\n                            {\n                                \"key\": \"CriticalAddonsOnly\",\n                                \"operator\": \"Exists\"\n                            }\n                        ],\n                        \"priorityClassName\": \"system-cluster-critical\"\n                    }\n                }\n            },\n            \"status\": {\n                \"replicas\": 1,\n                \"fullyLabeledReplicas\": 1,\n                \"readyReplicas\": 1,\n                \"availableReplicas\": 1,\n                \"observedGeneration\": 1\n            }\n        },\n        {\n            
\"metadata\": {\n                \"name\": \"dns-controller-59b7d7865d\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"33c6dea9-8d7b-44c9-919d-ffb6457a3fd7\",\n                \"resourceVersion\": \"462\",\n                \"generation\": 1,\n                \"creationTimestamp\": \"2021-09-10T05:36:20Z\",\n                \"labels\": {\n                    \"k8s-addon\": \"dns-controller.addons.k8s.io\",\n                    \"k8s-app\": \"dns-controller\",\n                    \"pod-template-hash\": \"59b7d7865d\",\n                    \"version\": \"v1.22.0-beta.1\"\n                },\n                \"annotations\": {\n                    \"deployment.kubernetes.io/desired-replicas\": \"1\",\n                    \"deployment.kubernetes.io/max-replicas\": \"1\",\n                    \"deployment.kubernetes.io/revision\": \"1\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"apps/v1\",\n                        \"kind\": \"Deployment\",\n                        \"name\": \"dns-controller\",\n                        \"uid\": \"705376f9-a7f6-4036-b8fe-051f3c945ab4\",\n                        \"controller\": true,\n                        \"blockOwnerDeletion\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"replicas\": 1,\n                \"selector\": {\n                    \"matchLabels\": {\n                        \"k8s-app\": \"dns-controller\",\n                        \"pod-template-hash\": \"59b7d7865d\"\n                    }\n                },\n                \"template\": {\n                    \"metadata\": {\n                        \"creationTimestamp\": null,\n                        \"labels\": {\n                            \"k8s-addon\": \"dns-controller.addons.k8s.io\",\n                            \"k8s-app\": \"dns-controller\",\n                            \"pod-template-hash\": \"59b7d7865d\",\n                            \"version\": \"v1.22.0-beta.1\"\n                        },\n                        \"annotations\": {\n                            \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n                        }\n                    },\n                    \"spec\": {\n                        \"containers\": [\n                            {\n                                \"name\": \"dns-controller\",\n                                \"image\": \"k8s.gcr.io/kops/dns-controller:1.22.0-beta.1\",\n                                \"command\": [\n                                    \"/dns-controller\",\n                                    \"--watch-ingress=false\",\n                                    \"--dns=aws-route53\",\n                                    \"--zone=*/ZEMLNXIIWQ0RV\",\n                                    \"--zone=*/*\",\n                                    \"-v=2\"\n                                ],\n                                \"env\": [\n                                    {\n                                        \"name\": \"KUBERNETES_SERVICE_HOST\",\n                                        \"value\": \"127.0.0.1\"\n                                    }\n                                ],\n                                \"resources\": {\n                                    \"requests\": {\n                                        \"cpu\": \"50m\",\n                                        \"memory\": \"50Mi\"\n                                    }\n      
                          },\n                                \"terminationMessagePath\": \"/dev/termination-log\",\n                                \"terminationMessagePolicy\": \"File\",\n                                \"imagePullPolicy\": \"IfNotPresent\",\n                                \"securityContext\": {\n                                    \"runAsNonRoot\": true\n                                }\n                            }\n                        ],\n                        \"restartPolicy\": \"Always\",\n                        \"terminationGracePeriodSeconds\": 30,\n                        \"dnsPolicy\": \"Default\",\n                        \"nodeSelector\": {\n                            \"node-role.kubernetes.io/master\": \"\"\n                        },\n                        \"serviceAccountName\": \"dns-controller\",\n                        \"serviceAccount\": \"dns-controller\",\n                        \"hostNetwork\": true,\n                        \"securityContext\": {},\n                        \"schedulerName\": \"default-scheduler\",\n                        \"tolerations\": [\n                            {\n                                \"operator\": \"Exists\"\n                            }\n                        ],\n                        \"priorityClassName\": \"system-cluster-critical\"\n                    }\n                }\n            },\n            \"status\": {\n                \"replicas\": 1,\n                \"fullyLabeledReplicas\": 1,\n                \"readyReplicas\": 1,\n                \"availableReplicas\": 1,\n                \"observedGeneration\": 1\n            }\n        }\n    ]\n}\n{\n    \"kind\": \"PodList\",\n    \"apiVersion\": \"v1\",\n    \"metadata\": {\n        \"resourceVersion\": \"6137\"\n    },\n    \"items\": [\n        {\n            \"metadata\": {\n                \"name\": \"coredns-5dc785954d-26bxp\",\n                \"generateName\": \"coredns-5dc785954d-\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"5c7acd42-89ba-46b5-883e-451e509ab0c7\",\n                \"resourceVersion\": \"673\",\n                \"creationTimestamp\": \"2021-09-10T05:36:20Z\",\n                \"labels\": {\n                    \"k8s-app\": \"kube-dns\",\n                    \"pod-template-hash\": \"5dc785954d\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"apps/v1\",\n                        \"kind\": \"ReplicaSet\",\n                        \"name\": \"coredns-5dc785954d\",\n                        \"uid\": \"70c1350d-376c-499d-84d5-75cca4d1a973\",\n                        \"controller\": true,\n                        \"blockOwnerDeletion\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"config-volume\",\n                        \"configMap\": {\n                            \"name\": \"coredns\",\n                            \"items\": [\n                                {\n                                    \"key\": \"Corefile\",\n                                    \"path\": \"Corefile\"\n                                }\n                            ],\n                            \"defaultMode\": 420\n                        }\n                    },\n                    {\n                        \"name\": \"kube-api-access-gkzts\",\n                        \"projected\": {\n 
                           \"sources\": [\n                                {\n                                    \"serviceAccountToken\": {\n                                        \"expirationSeconds\": 3607,\n                                        \"path\": \"token\"\n                                    }\n                                },\n                                {\n                                    \"configMap\": {\n                                        \"name\": \"kube-root-ca.crt\",\n                                        \"items\": [\n                                            {\n                                                \"key\": \"ca.crt\",\n                                                \"path\": \"ca.crt\"\n                                            }\n                                        ]\n                                    }\n                                },\n                                {\n                                    \"downwardAPI\": {\n                                        \"items\": [\n                                            {\n                                                \"path\": \"namespace\",\n                                                \"fieldRef\": {\n                                                    \"apiVersion\": \"v1\",\n                                                    \"fieldPath\": \"metadata.namespace\"\n                                                }\n                                            }\n                                        ]\n                                    }\n                                }\n                            ],\n                            \"defaultMode\": 420\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"coredns\",\n                        \"image\": \"k8s.gcr.io/coredns/coredns:v1.8.4\",\n                        \"args\": [\n                            \"-conf\",\n                            \"/etc/coredns/Corefile\"\n                        ],\n                        \"ports\": [\n                            {\n                                \"name\": \"dns\",\n                                \"containerPort\": 53,\n                                \"protocol\": \"UDP\"\n                            },\n                            {\n                                \"name\": \"dns-tcp\",\n                                \"containerPort\": 53,\n                                \"protocol\": \"TCP\"\n                            },\n                            {\n                                \"name\": \"metrics\",\n                                \"containerPort\": 9153,\n                                \"protocol\": \"TCP\"\n                            }\n                        ],\n                        \"resources\": {\n                            \"limits\": {\n                                \"memory\": \"170Mi\"\n                            },\n                            \"requests\": {\n                                \"cpu\": \"100m\",\n                                \"memory\": \"70Mi\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"config-volume\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/etc/coredns\"\n                            },\n         
                   {\n                                \"name\": \"kube-api-access-gkzts\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n                            }\n                        ],\n                        \"livenessProbe\": {\n                            \"httpGet\": {\n                                \"path\": \"/health\",\n                                \"port\": 8080,\n                                \"scheme\": \"HTTP\"\n                            },\n                            \"initialDelaySeconds\": 60,\n                            \"timeoutSeconds\": 5,\n                            \"periodSeconds\": 10,\n                            \"successThreshold\": 1,\n                            \"failureThreshold\": 5\n                        },\n                        \"readinessProbe\": {\n                            \"httpGet\": {\n                                \"path\": \"/ready\",\n                                \"port\": 8181,\n                                \"scheme\": \"HTTP\"\n                            },\n                            \"timeoutSeconds\": 1,\n                            \"periodSeconds\": 10,\n                            \"successThreshold\": 1,\n                            \"failureThreshold\": 3\n                        },\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\",\n                        \"securityContext\": {\n                            \"capabilities\": {\n                                \"add\": [\n                                    \"NET_BIND_SERVICE\"\n                                ],\n                                \"drop\": [\n                                    \"all\"\n                                ]\n                            },\n                            \"readOnlyRootFilesystem\": true,\n                            \"allowPrivilegeEscalation\": false\n                        }\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"Default\",\n                \"nodeSelector\": {\n                    \"kubernetes.io/os\": \"linux\"\n                },\n                \"serviceAccountName\": \"coredns\",\n                \"serviceAccount\": \"coredns\",\n                \"nodeName\": \"ip-172-20-38-104.us-west-2.compute.internal\",\n                \"securityContext\": {},\n                \"affinity\": {\n                    \"podAntiAffinity\": {\n                        \"preferredDuringSchedulingIgnoredDuringExecution\": [\n                            {\n                                \"weight\": 100,\n                                \"podAffinityTerm\": {\n                                    \"labelSelector\": {\n                                        \"matchExpressions\": [\n                                            {\n                                                \"key\": \"k8s-app\",\n                                                \"operator\": \"In\",\n                                                \"values\": [\n                                                    \"kube-dns\"\n                                                ]\n                                            }\n                                       
 ]\n                                    },\n                                    \"topologyKey\": \"kubernetes.io/hostname\"\n                                }\n                            }\n                        ]\n                    }\n                },\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"key\": \"CriticalAddonsOnly\",\n                        \"operator\": \"Exists\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/not-ready\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\",\n                        \"tolerationSeconds\": 300\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/unreachable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\",\n                        \"tolerationSeconds\": 300\n                    }\n                ],\n                \"priorityClassName\": \"system-cluster-critical\",\n                \"priority\": 2000000000,\n                \"enableServiceLinks\": true,\n                \"preemptionPolicy\": \"PreemptLowerPriority\"\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-09-10T05:37:11Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-09-10T05:37:15Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-09-10T05:37:15Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-09-10T05:37:11Z\"\n                    }\n                ],\n                \"hostIP\": \"172.20.38.104\",\n                \"podIP\": \"100.96.5.2\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"100.96.5.2\"\n                    }\n                ],\n                \"startTime\": \"2021-09-10T05:37:11Z\",\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"coredns\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2021-09-10T05:37:14Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"k8s.gcr.io/coredns/coredns:v1.8.4\",\n                        \"imageID\": \"k8s.gcr.io/coredns/coredns@sha256:6e5a02c21641597998b4be7cb5eb1e7b02c0d8d23cce4dd09f4682d463798890\",\n                        \"containerID\": 
\"containerd://a6ecb339319b6f6eaac2ceb73501665d84ccc75380a89a5f13a73ed794bd2fdb\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Burstable\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-5dc785954d-xvz58\",\n                \"generateName\": \"coredns-5dc785954d-\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"54912f58-a471-4e88-b176-672eeca87a88\",\n                \"resourceVersion\": \"686\",\n                \"creationTimestamp\": \"2021-09-10T05:37:12Z\",\n                \"labels\": {\n                    \"k8s-app\": \"kube-dns\",\n                    \"pod-template-hash\": \"5dc785954d\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"apps/v1\",\n                        \"kind\": \"ReplicaSet\",\n                        \"name\": \"coredns-5dc785954d\",\n                        \"uid\": \"70c1350d-376c-499d-84d5-75cca4d1a973\",\n                        \"controller\": true,\n                        \"blockOwnerDeletion\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"config-volume\",\n                        \"configMap\": {\n                            \"name\": \"coredns\",\n                            \"items\": [\n                                {\n                                    \"key\": \"Corefile\",\n                                    \"path\": \"Corefile\"\n                                }\n                            ],\n                            \"defaultMode\": 420\n                        }\n                    },\n                    {\n                        \"name\": \"kube-api-access-kqgxx\",\n                        \"projected\": {\n                            \"sources\": [\n                                {\n                                    \"serviceAccountToken\": {\n                                        \"expirationSeconds\": 3607,\n                                        \"path\": \"token\"\n                                    }\n                                },\n                                {\n                                    \"configMap\": {\n                                        \"name\": \"kube-root-ca.crt\",\n                                        \"items\": [\n                                            {\n                                                \"key\": \"ca.crt\",\n                                                \"path\": \"ca.crt\"\n                                            }\n                                        ]\n                                    }\n                                },\n                                {\n                                    \"downwardAPI\": {\n                                        \"items\": [\n                                            {\n                                                \"path\": \"namespace\",\n                                                \"fieldRef\": {\n                                                    \"apiVersion\": \"v1\",\n                                                    \"fieldPath\": \"metadata.namespace\"\n                                                }\n                                            }\n                                        ]\n                             
       }\n                                }\n                            ],\n                            \"defaultMode\": 420\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"coredns\",\n                        \"image\": \"k8s.gcr.io/coredns/coredns:v1.8.4\",\n                        \"args\": [\n                            \"-conf\",\n                            \"/etc/coredns/Corefile\"\n                        ],\n                        \"ports\": [\n                            {\n                                \"name\": \"dns\",\n                                \"containerPort\": 53,\n                                \"protocol\": \"UDP\"\n                            },\n                            {\n                                \"name\": \"dns-tcp\",\n                                \"containerPort\": 53,\n                                \"protocol\": \"TCP\"\n                            },\n                            {\n                                \"name\": \"metrics\",\n                                \"containerPort\": 9153,\n                                \"protocol\": \"TCP\"\n                            }\n                        ],\n                        \"resources\": {\n                            \"limits\": {\n                                \"memory\": \"170Mi\"\n                            },\n                            \"requests\": {\n                                \"cpu\": \"100m\",\n                                \"memory\": \"70Mi\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"config-volume\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/etc/coredns\"\n                            },\n                            {\n                                \"name\": \"kube-api-access-kqgxx\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n                            }\n                        ],\n                        \"livenessProbe\": {\n                            \"httpGet\": {\n                                \"path\": \"/health\",\n                                \"port\": 8080,\n                                \"scheme\": \"HTTP\"\n                            },\n                            \"initialDelaySeconds\": 60,\n                            \"timeoutSeconds\": 5,\n                            \"periodSeconds\": 10,\n                            \"successThreshold\": 1,\n                            \"failureThreshold\": 5\n                        },\n                        \"readinessProbe\": {\n                            \"httpGet\": {\n                                \"path\": \"/ready\",\n                                \"port\": 8181,\n                                \"scheme\": \"HTTP\"\n                            },\n                            \"timeoutSeconds\": 1,\n                            \"periodSeconds\": 10,\n                            \"successThreshold\": 1,\n                            \"failureThreshold\": 3\n                        },\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        
\"imagePullPolicy\": \"IfNotPresent\",\n                        \"securityContext\": {\n                            \"capabilities\": {\n                                \"add\": [\n                                    \"NET_BIND_SERVICE\"\n                                ],\n                                \"drop\": [\n                                    \"all\"\n                                ]\n                            },\n                            \"readOnlyRootFilesystem\": true,\n                            \"allowPrivilegeEscalation\": false\n                        }\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"Default\",\n                \"nodeSelector\": {\n                    \"kubernetes.io/os\": \"linux\"\n                },\n                \"serviceAccountName\": \"coredns\",\n                \"serviceAccount\": \"coredns\",\n                \"nodeName\": \"ip-172-20-34-221.us-west-2.compute.internal\",\n                \"securityContext\": {},\n                \"affinity\": {\n                    \"podAntiAffinity\": {\n                        \"preferredDuringSchedulingIgnoredDuringExecution\": [\n                            {\n                                \"weight\": 100,\n                                \"podAffinityTerm\": {\n                                    \"labelSelector\": {\n                                        \"matchExpressions\": [\n                                            {\n                                                \"key\": \"k8s-app\",\n                                                \"operator\": \"In\",\n                                                \"values\": [\n                                                    \"kube-dns\"\n                                                ]\n                                            }\n                                        ]\n                                    },\n                                    \"topologyKey\": \"kubernetes.io/hostname\"\n                                }\n                            }\n                        ]\n                    }\n                },\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"key\": \"CriticalAddonsOnly\",\n                        \"operator\": \"Exists\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/not-ready\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\",\n                        \"tolerationSeconds\": 300\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/unreachable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\",\n                        \"tolerationSeconds\": 300\n                    }\n                ],\n                \"priorityClassName\": \"system-cluster-critical\",\n                \"priority\": 2000000000,\n                \"enableServiceLinks\": true,\n                \"preemptionPolicy\": \"PreemptLowerPriority\"\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        
\"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-09-10T05:37:12Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-09-10T05:37:17Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-09-10T05:37:17Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-09-10T05:37:12Z\"\n                    }\n                ],\n                \"hostIP\": \"172.20.34.221\",\n                \"podIP\": \"100.96.3.2\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"100.96.3.2\"\n                    }\n                ],\n                \"startTime\": \"2021-09-10T05:37:12Z\",\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"coredns\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2021-09-10T05:37:15Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"k8s.gcr.io/coredns/coredns:v1.8.4\",\n                        \"imageID\": \"k8s.gcr.io/coredns/coredns@sha256:6e5a02c21641597998b4be7cb5eb1e7b02c0d8d23cce4dd09f4682d463798890\",\n                        \"containerID\": \"containerd://8de6a838bf05de68ec6b603c292b696a162da0be6b7ef88f05927718b483b5d5\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Burstable\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-autoscaler-84d4cfd89c-d4f4h\",\n                \"generateName\": \"coredns-autoscaler-84d4cfd89c-\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"373de23e-3285-407d-8de4-cf380f1178b1\",\n                \"resourceVersion\": \"662\",\n                \"creationTimestamp\": \"2021-09-10T05:36:20Z\",\n                \"labels\": {\n                    \"k8s-app\": \"coredns-autoscaler\",\n                    \"pod-template-hash\": \"84d4cfd89c\"\n                },\n                \"annotations\": {\n                    \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"apps/v1\",\n                        \"kind\": \"ReplicaSet\",\n                        \"name\": \"coredns-autoscaler-84d4cfd89c\",\n                        \"uid\": \"8c918736-93aa-4b08-8b87-cf6549bee7e8\",\n                        \"controller\": true,\n                        \"blockOwnerDeletion\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"kube-api-access-whk9x\",\n                        
\"projected\": {\n                            \"sources\": [\n                                {\n                                    \"serviceAccountToken\": {\n                                        \"expirationSeconds\": 3607,\n                                        \"path\": \"token\"\n                                    }\n                                },\n                                {\n                                    \"configMap\": {\n                                        \"name\": \"kube-root-ca.crt\",\n                                        \"items\": [\n                                            {\n                                                \"key\": \"ca.crt\",\n                                                \"path\": \"ca.crt\"\n                                            }\n                                        ]\n                                    }\n                                },\n                                {\n                                    \"downwardAPI\": {\n                                        \"items\": [\n                                            {\n                                                \"path\": \"namespace\",\n                                                \"fieldRef\": {\n                                                    \"apiVersion\": \"v1\",\n                                                    \"fieldPath\": \"metadata.namespace\"\n                                                }\n                                            }\n                                        ]\n                                    }\n                                }\n                            ],\n                            \"defaultMode\": 420\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"autoscaler\",\n                        \"image\": \"k8s.gcr.io/cpa/cluster-proportional-autoscaler:1.8.4\",\n                        \"command\": [\n                            \"/cluster-proportional-autoscaler\",\n                            \"--namespace=kube-system\",\n                            \"--configmap=coredns-autoscaler\",\n                            \"--target=Deployment/coredns\",\n                            \"--default-params={\\\"linear\\\":{\\\"coresPerReplica\\\":256,\\\"nodesPerReplica\\\":16,\\\"preventSinglePointFailure\\\":true}}\",\n                            \"--logtostderr=true\",\n                            \"--v=2\"\n                        ],\n                        \"resources\": {\n                            \"requests\": {\n                                \"cpu\": \"20m\",\n                                \"memory\": \"10Mi\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"kube-api-access-whk9x\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n                            }\n                        ],\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\"\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n              
  \"dnsPolicy\": \"ClusterFirst\",\n                \"nodeSelector\": {\n                    \"kubernetes.io/os\": \"linux\"\n                },\n                \"serviceAccountName\": \"coredns-autoscaler\",\n                \"serviceAccount\": \"coredns-autoscaler\",\n                \"nodeName\": \"ip-172-20-56-165.us-west-2.compute.internal\",\n                \"securityContext\": {},\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"key\": \"CriticalAddonsOnly\",\n                        \"operator\": \"Exists\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/not-ready\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\",\n                        \"tolerationSeconds\": 300\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/unreachable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\",\n                        \"tolerationSeconds\": 300\n                    }\n                ],\n                \"priorityClassName\": \"system-cluster-critical\",\n                \"priority\": 2000000000,\n                \"enableServiceLinks\": true,\n                \"preemptionPolicy\": \"PreemptLowerPriority\"\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-09-10T05:37:10Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-09-10T05:37:13Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-09-10T05:37:13Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-09-10T05:37:10Z\"\n                    }\n                ],\n                \"hostIP\": \"172.20.56.165\",\n                \"podIP\": \"100.96.4.2\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"100.96.4.2\"\n                    }\n                ],\n                \"startTime\": \"2021-09-10T05:37:10Z\",\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"autoscaler\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2021-09-10T05:37:12Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"k8s.gcr.io/cpa/cluster-proportional-autoscaler:1.8.4\",\n                        \"imageID\": 
\"k8s.gcr.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def\",\n                        \"containerID\": \"containerd://6f95946ce68209f8bb1e502084876b1a4916c93a82d511c3728841108db4d52a\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Burstable\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"dns-controller-59b7d7865d-82ctf\",\n                \"generateName\": \"dns-controller-59b7d7865d-\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"8b55e143-cf3e-4b0e-9b5f-46b70546a9ee\",\n                \"resourceVersion\": \"461\",\n                \"creationTimestamp\": \"2021-09-10T05:36:20Z\",\n                \"labels\": {\n                    \"k8s-addon\": \"dns-controller.addons.k8s.io\",\n                    \"k8s-app\": \"dns-controller\",\n                    \"pod-template-hash\": \"59b7d7865d\",\n                    \"version\": \"v1.22.0-beta.1\"\n                },\n                \"annotations\": {\n                    \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"apps/v1\",\n                        \"kind\": \"ReplicaSet\",\n                        \"name\": \"dns-controller-59b7d7865d\",\n                        \"uid\": \"33c6dea9-8d7b-44c9-919d-ffb6457a3fd7\",\n                        \"controller\": true,\n                        \"blockOwnerDeletion\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"kube-api-access-qnrsx\",\n                        \"projected\": {\n                            \"sources\": [\n                                {\n                                    \"serviceAccountToken\": {\n                                        \"expirationSeconds\": 3607,\n                                        \"path\": \"token\"\n                                    }\n                                },\n                                {\n                                    \"configMap\": {\n                                        \"name\": \"kube-root-ca.crt\",\n                                        \"items\": [\n                                            {\n                                                \"key\": \"ca.crt\",\n                                                \"path\": \"ca.crt\"\n                                            }\n                                        ]\n                                    }\n                                },\n                                {\n                                    \"downwardAPI\": {\n                                        \"items\": [\n                                            {\n                                                \"path\": \"namespace\",\n                                                \"fieldRef\": {\n                                                    \"apiVersion\": \"v1\",\n                                                    \"fieldPath\": \"metadata.namespace\"\n                                                }\n                                            }\n                                        ]\n                                    }\n                                }\n                            ],\n            
                \"defaultMode\": 420\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"dns-controller\",\n                        \"image\": \"k8s.gcr.io/kops/dns-controller:1.22.0-beta.1\",\n                        \"command\": [\n                            \"/dns-controller\",\n                            \"--watch-ingress=false\",\n                            \"--dns=aws-route53\",\n                            \"--zone=*/ZEMLNXIIWQ0RV\",\n                            \"--zone=*/*\",\n                            \"-v=2\"\n                        ],\n                        \"env\": [\n                            {\n                                \"name\": \"KUBERNETES_SERVICE_HOST\",\n                                \"value\": \"127.0.0.1\"\n                            }\n                        ],\n                        \"resources\": {\n                            \"requests\": {\n                                \"cpu\": \"50m\",\n                                \"memory\": \"50Mi\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"kube-api-access-qnrsx\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n                            }\n                        ],\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\",\n                        \"securityContext\": {\n                            \"runAsNonRoot\": true\n                        }\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"Default\",\n                \"nodeSelector\": {\n                    \"node-role.kubernetes.io/master\": \"\"\n                },\n                \"serviceAccountName\": \"dns-controller\",\n                \"serviceAccount\": \"dns-controller\",\n                \"nodeName\": \"ip-172-20-37-129.us-west-2.compute.internal\",\n                \"hostNetwork\": true,\n                \"securityContext\": {},\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"operator\": \"Exists\"\n                    }\n                ],\n                \"priorityClassName\": \"system-cluster-critical\",\n                \"priority\": 2000000000,\n                \"enableServiceLinks\": true,\n                \"preemptionPolicy\": \"PreemptLowerPriority\"\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-09-10T05:36:20Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-09-10T05:36:22Z\"\n                    },\n      
              {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-09-10T05:36:22Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-09-10T05:36:20Z\"\n                    }\n                ],\n                \"hostIP\": \"172.20.37.129\",\n                \"podIP\": \"172.20.37.129\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"172.20.37.129\"\n                    }\n                ],\n                \"startTime\": \"2021-09-10T05:36:20Z\",\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"dns-controller\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2021-09-10T05:36:21Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"k8s.gcr.io/kops/dns-controller:1.22.0-beta.1\",\n                        \"imageID\": \"sha256:2babc7e3f10c2c20ad7fd8cc592d6de686b248a7801de5192198db8ca008ec60\",\n                        \"containerID\": \"containerd://4648debb5d5112869e5ed3f613895be26a914ab5bcca1f64f1a5f88106419811\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Burstable\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"etcd-manager-events-ip-172-20-37-129.us-west-2.compute.internal\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"35f029f0-78f9-4468-9105-e3218a67effb\",\n                \"resourceVersion\": \"522\",\n                \"creationTimestamp\": \"2021-09-10T05:36:39Z\",\n                \"labels\": {\n                    \"k8s-app\": \"etcd-manager-events\"\n                },\n                \"annotations\": {\n                    \"kubernetes.io/config.hash\": \"4038fc0cd1fc1c3e0997286dc4996950\",\n                    \"kubernetes.io/config.mirror\": \"4038fc0cd1fc1c3e0997286dc4996950\",\n                    \"kubernetes.io/config.seen\": \"2021-09-10T05:34:52.252259014Z\",\n                    \"kubernetes.io/config.source\": \"file\",\n                    \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"v1\",\n                        \"kind\": \"Node\",\n                        \"name\": \"ip-172-20-37-129.us-west-2.compute.internal\",\n                        \"uid\": \"2e6b63be-8000-41f1-a765-bdf5cceacb43\",\n                        \"controller\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"rootfs\",\n                        \"hostPath\": {\n                            \"path\": \"/\",\n                            \"type\": \"Directory\"\n                        }\n                    },\n                    {\n                        \"name\": \"run\",\n 
                       \"hostPath\": {\n                            \"path\": \"/run\",\n                            \"type\": \"DirectoryOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": \"pki\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/kubernetes/pki/etcd-manager-events\",\n                            \"type\": \"DirectoryOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": \"varlogetcd\",\n                        \"hostPath\": {\n                            \"path\": \"/var/log/etcd-events.log\",\n                            \"type\": \"FileOrCreate\"\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"etcd-manager\",\n                        \"image\": \"k8s.gcr.io/etcdadm/etcd-manager:3.0.20210707\",\n                        \"command\": [\n                            \"/bin/sh\",\n                            \"-c\",\n                            \"mkfifo /tmp/pipe; (tee -a /var/log/etcd.log \\u003c /tmp/pipe \\u0026 ) ; exec /etcd-manager --backup-store=s3://k8s-kops-prow/e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io/backups/etcd/events --client-urls=https://__name__:4002 --cluster-name=etcd-events --containerized=true --dns-suffix=.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io --grpc-port=3997 --peer-urls=https://__name__:2381 --quarantine-client-urls=https://__name__:3995 --v=6 --volume-name-tag=k8s.io/etcd/events --volume-provider=aws --volume-tag=k8s.io/etcd/events --volume-tag=k8s.io/role/master=1 --volume-tag=kubernetes.io/cluster/e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io=owned \\u003e /tmp/pipe 2\\u003e\\u00261\"\n                        ],\n                        \"resources\": {\n                            \"requests\": {\n                                \"cpu\": \"100m\",\n                                \"memory\": \"100Mi\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"rootfs\",\n                                \"mountPath\": \"/rootfs\"\n                            },\n                            {\n                                \"name\": \"run\",\n                                \"mountPath\": \"/run\"\n                            },\n                            {\n                                \"name\": \"pki\",\n                                \"mountPath\": \"/etc/kubernetes/pki/etcd-manager\"\n                            },\n                            {\n                                \"name\": \"varlogetcd\",\n                                \"mountPath\": \"/var/log/etcd.log\"\n                            }\n                        ],\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\",\n                        \"securityContext\": {\n                            \"privileged\": true\n                        }\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"ClusterFirst\",\n                \"nodeName\": \"ip-172-20-37-129.us-west-2.compute.internal\",\n 
               \"hostNetwork\": true,\n                \"hostPID\": true,\n                \"securityContext\": {},\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"key\": \"CriticalAddonsOnly\",\n                        \"operator\": \"Exists\"\n                    },\n                    {\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    }\n                ],\n                \"priorityClassName\": \"system-cluster-critical\",\n                \"priority\": 2000000000,\n                \"enableServiceLinks\": true,\n                \"preemptionPolicy\": \"PreemptLowerPriority\"\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-09-10T05:34:52Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-09-10T05:35:25Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-09-10T05:35:25Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-09-10T05:34:52Z\"\n                    }\n                ],\n                \"hostIP\": \"172.20.37.129\",\n                \"podIP\": \"172.20.37.129\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"172.20.37.129\"\n                    }\n                ],\n                \"startTime\": \"2021-09-10T05:34:52Z\",\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"etcd-manager\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2021-09-10T05:35:24Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"k8s.gcr.io/etcdadm/etcd-manager:3.0.20210707\",\n                        \"imageID\": \"k8s.gcr.io/etcdadm/etcd-manager@sha256:17c07a22ebd996b93f6484437c684244219e325abeb70611cbaceb78c0f2d5d4\",\n                        \"containerID\": \"containerd://f439b8d4a9c27921c0ee1ce03bc8811a9de8560bcaac0b45748c42a2a54db016\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Burstable\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"etcd-manager-main-ip-172-20-37-129.us-west-2.compute.internal\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"ce664029-9365-4fc2-ad45-84a7dc4ffd9a\",\n                \"resourceVersion\": \"523\",\n 
               \"creationTimestamp\": \"2021-09-10T05:36:38Z\",\n                \"labels\": {\n                    \"k8s-app\": \"etcd-manager-main\"\n                },\n                \"annotations\": {\n                    \"kubernetes.io/config.hash\": \"5e2779cbe16febbdbd0382471494f20a\",\n                    \"kubernetes.io/config.mirror\": \"5e2779cbe16febbdbd0382471494f20a\",\n                    \"kubernetes.io/config.seen\": \"2021-09-10T05:34:52.252282121Z\",\n                    \"kubernetes.io/config.source\": \"file\",\n                    \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"v1\",\n                        \"kind\": \"Node\",\n                        \"name\": \"ip-172-20-37-129.us-west-2.compute.internal\",\n                        \"uid\": \"2e6b63be-8000-41f1-a765-bdf5cceacb43\",\n                        \"controller\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"rootfs\",\n                        \"hostPath\": {\n                            \"path\": \"/\",\n                            \"type\": \"Directory\"\n                        }\n                    },\n                    {\n                        \"name\": \"run\",\n                        \"hostPath\": {\n                            \"path\": \"/run\",\n                            \"type\": \"DirectoryOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": \"pki\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/kubernetes/pki/etcd-manager-main\",\n                            \"type\": \"DirectoryOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": \"varlogetcd\",\n                        \"hostPath\": {\n                            \"path\": \"/var/log/etcd.log\",\n                            \"type\": \"FileOrCreate\"\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"etcd-manager\",\n                        \"image\": \"k8s.gcr.io/etcdadm/etcd-manager:3.0.20210707\",\n                        \"command\": [\n                            \"/bin/sh\",\n                            \"-c\",\n                            \"mkfifo /tmp/pipe; (tee -a /var/log/etcd.log \\u003c /tmp/pipe \\u0026 ) ; exec /etcd-manager --backup-store=s3://k8s-kops-prow/e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io/backups/etcd/main --client-urls=https://__name__:4001 --cluster-name=etcd --containerized=true --dns-suffix=.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io --grpc-port=3996 --peer-urls=https://__name__:2380 --quarantine-client-urls=https://__name__:3994 --v=6 --volume-name-tag=k8s.io/etcd/main --volume-provider=aws --volume-tag=k8s.io/etcd/main --volume-tag=k8s.io/role/master=1 --volume-tag=kubernetes.io/cluster/e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io=owned \\u003e /tmp/pipe 2\\u003e\\u00261\"\n                        ],\n                        \"resources\": {\n                            \"requests\": {\n                                \"cpu\": \"200m\",\n                                \"memory\": \"100Mi\"\n                            }\n                        },\n    
                    \"volumeMounts\": [\n                            {\n                                \"name\": \"rootfs\",\n                                \"mountPath\": \"/rootfs\"\n                            },\n                            {\n                                \"name\": \"run\",\n                                \"mountPath\": \"/run\"\n                            },\n                            {\n                                \"name\": \"pki\",\n                                \"mountPath\": \"/etc/kubernetes/pki/etcd-manager\"\n                            },\n                            {\n                                \"name\": \"varlogetcd\",\n                                \"mountPath\": \"/var/log/etcd.log\"\n                            }\n                        ],\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\",\n                        \"securityContext\": {\n                            \"privileged\": true\n                        }\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"ClusterFirst\",\n                \"nodeName\": \"ip-172-20-37-129.us-west-2.compute.internal\",\n                \"hostNetwork\": true,\n                \"hostPID\": true,\n                \"securityContext\": {},\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"key\": \"CriticalAddonsOnly\",\n                        \"operator\": \"Exists\"\n                    },\n                    {\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    }\n                ],\n                \"priorityClassName\": \"system-cluster-critical\",\n                \"priority\": 2000000000,\n                \"enableServiceLinks\": true,\n                \"preemptionPolicy\": \"PreemptLowerPriority\"\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-09-10T05:34:52Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-09-10T05:35:26Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-09-10T05:35:26Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-09-10T05:34:52Z\"\n                    }\n                ],\n                \"hostIP\": \"172.20.37.129\",\n                \"podIP\": \"172.20.37.129\",\n                \"podIPs\": [\n                    {\n          
              \"ip\": \"172.20.37.129\"\n                    }\n                ],\n                \"startTime\": \"2021-09-10T05:34:52Z\",\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"etcd-manager\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2021-09-10T05:35:25Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"k8s.gcr.io/etcdadm/etcd-manager:3.0.20210707\",\n                        \"imageID\": \"k8s.gcr.io/etcdadm/etcd-manager@sha256:17c07a22ebd996b93f6484437c684244219e325abeb70611cbaceb78c0f2d5d4\",\n                        \"containerID\": \"containerd://2b07c780c548b7c8a9b0affe2fef60c2236d3746f0ddfbf1c35be033c2fd4e32\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Burstable\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kops-controller-js47d\",\n                \"generateName\": \"kops-controller-\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"91862b5f-e915-4710-9eb4-38aa351853ca\",\n                \"resourceVersion\": \"463\",\n                \"creationTimestamp\": \"2021-09-10T05:36:20Z\",\n                \"labels\": {\n                    \"controller-revision-hash\": \"5579cddf45\",\n                    \"k8s-addon\": \"kops-controller.addons.k8s.io\",\n                    \"k8s-app\": \"kops-controller\",\n                    \"pod-template-generation\": \"1\",\n                    \"version\": \"v1.22.0-beta.1\"\n                },\n                \"annotations\": {\n                    \"dns.alpha.kubernetes.io/internal\": \"kops-controller.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"apps/v1\",\n                        \"kind\": \"DaemonSet\",\n                        \"name\": \"kops-controller\",\n                        \"uid\": \"ddabc270-f2b0-4841-9ce7-4787619aa879\",\n                        \"controller\": true,\n                        \"blockOwnerDeletion\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"kops-controller-config\",\n                        \"configMap\": {\n                            \"name\": \"kops-controller\",\n                            \"defaultMode\": 420\n                        }\n                    },\n                    {\n                        \"name\": \"kops-controller-pki\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/kubernetes/kops-controller/\",\n                            \"type\": \"Directory\"\n                        }\n                    },\n                    {\n                        \"name\": \"kube-api-access-x8kxj\",\n                        \"projected\": {\n                            \"sources\": [\n                                {\n                                    \"serviceAccountToken\": {\n                                        \"expirationSeconds\": 3607,\n                                        \"path\": \"token\"\n  
                                  }\n                                },\n                                {\n                                    \"configMap\": {\n                                        \"name\": \"kube-root-ca.crt\",\n                                        \"items\": [\n                                            {\n                                                \"key\": \"ca.crt\",\n                                                \"path\": \"ca.crt\"\n                                            }\n                                        ]\n                                    }\n                                },\n                                {\n                                    \"downwardAPI\": {\n                                        \"items\": [\n                                            {\n                                                \"path\": \"namespace\",\n                                                \"fieldRef\": {\n                                                    \"apiVersion\": \"v1\",\n                                                    \"fieldPath\": \"metadata.namespace\"\n                                                }\n                                            }\n                                        ]\n                                    }\n                                }\n                            ],\n                            \"defaultMode\": 420\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"kops-controller\",\n                        \"image\": \"k8s.gcr.io/kops/kops-controller:1.22.0-beta.1\",\n                        \"command\": [\n                            \"/kops-controller\",\n                            \"--v=2\",\n                            \"--conf=/etc/kubernetes/kops-controller/config/config.yaml\"\n                        ],\n                        \"env\": [\n                            {\n                                \"name\": \"KUBERNETES_SERVICE_HOST\",\n                                \"value\": \"127.0.0.1\"\n                            }\n                        ],\n                        \"resources\": {\n                            \"requests\": {\n                                \"cpu\": \"50m\",\n                                \"memory\": \"50Mi\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"kops-controller-config\",\n                                \"mountPath\": \"/etc/kubernetes/kops-controller/config/\"\n                            },\n                            {\n                                \"name\": \"kops-controller-pki\",\n                                \"mountPath\": \"/etc/kubernetes/kops-controller/pki/\"\n                            },\n                            {\n                                \"name\": \"kube-api-access-x8kxj\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n                            }\n                        ],\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\",\n                        \"securityContext\": {\n                            
\"runAsNonRoot\": true\n                        }\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"Default\",\n                \"nodeSelector\": {\n                    \"kops.k8s.io/kops-controller-pki\": \"\",\n                    \"node-role.kubernetes.io/master\": \"\"\n                },\n                \"serviceAccountName\": \"kops-controller\",\n                \"serviceAccount\": \"kops-controller\",\n                \"nodeName\": \"ip-172-20-37-129.us-west-2.compute.internal\",\n                \"hostNetwork\": true,\n                \"securityContext\": {},\n                \"affinity\": {\n                    \"nodeAffinity\": {\n                        \"requiredDuringSchedulingIgnoredDuringExecution\": {\n                            \"nodeSelectorTerms\": [\n                                {\n                                    \"matchFields\": [\n                                        {\n                                            \"key\": \"metadata.name\",\n                                            \"operator\": \"In\",\n                                            \"values\": [\n                                                \"ip-172-20-37-129.us-west-2.compute.internal\"\n                                            ]\n                                        }\n                                    ]\n                                }\n                            ]\n                        }\n                    }\n                },\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"key\": \"node-role.kubernetes.io/master\",\n                        \"operator\": \"Exists\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/not-ready\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/unreachable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/disk-pressure\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/memory-pressure\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/pid-pressure\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/unschedulable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/network-unavailable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    }\n                ],\n                \"priorityClassName\": \"system-node-critical\",\n                \"priority\": 2000001000,\n  
              \"enableServiceLinks\": true,\n                \"preemptionPolicy\": \"PreemptLowerPriority\"\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-09-10T05:36:20Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-09-10T05:36:22Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-09-10T05:36:22Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-09-10T05:36:20Z\"\n                    }\n                ],\n                \"hostIP\": \"172.20.37.129\",\n                \"podIP\": \"172.20.37.129\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"172.20.37.129\"\n                    }\n                ],\n                \"startTime\": \"2021-09-10T05:36:20Z\",\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"kops-controller\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2021-09-10T05:36:21Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"k8s.gcr.io/kops/kops-controller:1.22.0-beta.1\",\n                        \"imageID\": \"sha256:62287707a8723aba9b071745df906504f59d9c9a340a0224903ada29be5f0d91\",\n                        \"containerID\": \"containerd://776a15cf363ded387cfd5868c639ecb2b6d69ebca8b559c001242762a37369fa\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Burstable\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-apiserver-ip-172-20-37-129.us-west-2.compute.internal\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"34c94000-a786-4b25-95b1-36e5c89d7039\",\n                \"resourceVersion\": \"524\",\n                \"creationTimestamp\": \"2021-09-10T05:36:41Z\",\n                \"labels\": {\n                    \"k8s-app\": \"kube-apiserver\"\n                },\n                \"annotations\": {\n                    \"dns.alpha.kubernetes.io/external\": \"api.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io\",\n                    \"dns.alpha.kubernetes.io/internal\": \"api.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io\",\n                    \"kubectl.kubernetes.io/default-container\": \"kube-apiserver\",\n                    \"kubernetes.io/config.hash\": \"543c03fb9e1b028c189eaf78f27df9c9\",\n                    \"kubernetes.io/config.mirror\": 
\"543c03fb9e1b028c189eaf78f27df9c9\",\n                    \"kubernetes.io/config.seen\": \"2021-09-10T05:34:52.252283809Z\",\n                    \"kubernetes.io/config.source\": \"file\",\n                    \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"v1\",\n                        \"kind\": \"Node\",\n                        \"name\": \"ip-172-20-37-129.us-west-2.compute.internal\",\n                        \"uid\": \"2e6b63be-8000-41f1-a765-bdf5cceacb43\",\n                        \"controller\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"logfile\",\n                        \"hostPath\": {\n                            \"path\": \"/var/log/kube-apiserver.log\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"etcssl\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/ssl\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"etcpkitls\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/pki/tls\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"etcpkica-trust\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/pki/ca-trust\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"usrsharessl\",\n                        \"hostPath\": {\n                            \"path\": \"/usr/share/ssl\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"usrssl\",\n                        \"hostPath\": {\n                            \"path\": \"/usr/ssl\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"usrlibssl\",\n                        \"hostPath\": {\n                            \"path\": \"/usr/lib/ssl\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"usrlocalopenssl\",\n                        \"hostPath\": {\n                            \"path\": \"/usr/local/openssl\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"varssl\",\n                        \"hostPath\": {\n                            \"path\": \"/var/ssl\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"etcopenssl\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/openssl\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"cloudconfig\",\n                        \"hostPath\": 
{\n                            \"path\": \"/etc/kubernetes/cloud.config\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"kubernetesca\",\n                        \"hostPath\": {\n                            \"path\": \"/srv/kubernetes/ca.crt\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"srvkapi\",\n                        \"hostPath\": {\n                            \"path\": \"/srv/kubernetes/kube-apiserver\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"srvsshproxy\",\n                        \"hostPath\": {\n                            \"path\": \"/srv/sshproxy\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"healthcheck-secrets\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/kubernetes/kube-apiserver-healthcheck/secrets\",\n                            \"type\": \"Directory\"\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"kube-apiserver\",\n                        \"image\": \"k8s.gcr.io/kube-apiserver-amd64:v1.21.4\",\n                        \"command\": [\n                            \"/usr/local/bin/kube-apiserver\"\n                        ],\n                        \"args\": [\n                            \"--allow-privileged=true\",\n                            \"--anonymous-auth=false\",\n                            \"--api-audiences=kubernetes.svc.default\",\n                            \"--apiserver-count=1\",\n                            \"--authorization-mode=Node,RBAC\",\n                            \"--bind-address=0.0.0.0\",\n                            \"--client-ca-file=/srv/kubernetes/ca.crt\",\n                            \"--cloud-config=/etc/kubernetes/cloud.config\",\n                            \"--cloud-provider=aws\",\n                            \"--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,NodeRestriction,ResourceQuota\",\n                            \"--etcd-cafile=/srv/kubernetes/kube-apiserver/etcd-ca.crt\",\n                            \"--etcd-certfile=/srv/kubernetes/kube-apiserver/etcd-client.crt\",\n                            \"--etcd-keyfile=/srv/kubernetes/kube-apiserver/etcd-client.key\",\n                            \"--etcd-servers-overrides=/events#https://127.0.0.1:4002\",\n                            \"--etcd-servers=https://127.0.0.1:4001\",\n                            \"--insecure-port=0\",\n                            \"--kubelet-client-certificate=/srv/kubernetes/kube-apiserver/kubelet-api.crt\",\n                            \"--kubelet-client-key=/srv/kubernetes/kube-apiserver/kubelet-api.key\",\n                            \"--kubelet-preferred-address-types=InternalIP,Hostname,ExternalIP\",\n                            \"--proxy-client-cert-file=/srv/kubernetes/kube-apiserver/apiserver-aggregator.crt\",\n                            
\"--proxy-client-key-file=/srv/kubernetes/kube-apiserver/apiserver-aggregator.key\",\n                            \"--requestheader-allowed-names=aggregator\",\n                            \"--requestheader-client-ca-file=/srv/kubernetes/kube-apiserver/apiserver-aggregator-ca.crt\",\n                            \"--requestheader-extra-headers-prefix=X-Remote-Extra-\",\n                            \"--requestheader-group-headers=X-Remote-Group\",\n                            \"--requestheader-username-headers=X-Remote-User\",\n                            \"--secure-port=443\",\n                            \"--service-account-issuer=https://api.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io\",\n                            \"--service-account-jwks-uri=https://api.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io/openid/v1/jwks\",\n                            \"--service-account-key-file=/srv/kubernetes/kube-apiserver/service-account.pub\",\n                            \"--service-account-signing-key-file=/srv/kubernetes/kube-apiserver/service-account.key\",\n                            \"--service-cluster-ip-range=100.64.0.0/13\",\n                            \"--storage-backend=etcd3\",\n                            \"--tls-cert-file=/srv/kubernetes/kube-apiserver/server.crt\",\n                            \"--tls-private-key-file=/srv/kubernetes/kube-apiserver/server.key\",\n                            \"--v=2\",\n                            \"--logtostderr=false\",\n                            \"--alsologtostderr\",\n                            \"--log-file=/var/log/kube-apiserver.log\"\n                        ],\n                        \"ports\": [\n                            {\n                                \"name\": \"https\",\n                                \"hostPort\": 443,\n                                \"containerPort\": 443,\n                                \"protocol\": \"TCP\"\n                            }\n                        ],\n                        \"resources\": {\n                            \"requests\": {\n                                \"cpu\": \"150m\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"logfile\",\n                                \"mountPath\": \"/var/log/kube-apiserver.log\"\n                            },\n                            {\n                                \"name\": \"etcssl\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/etc/ssl\"\n                            },\n                            {\n                                \"name\": \"etcpkitls\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/etc/pki/tls\"\n                            },\n                            {\n                                \"name\": \"etcpkica-trust\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/etc/pki/ca-trust\"\n                            },\n                            {\n                                \"name\": \"usrsharessl\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/usr/share/ssl\"\n                            },\n                            {\n                                \"name\": \"usrssl\",\n                                \"readOnly\": true,\n                   
             \"mountPath\": \"/usr/ssl\"\n                            },\n                            {\n                                \"name\": \"usrlibssl\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/usr/lib/ssl\"\n                            },\n                            {\n                                \"name\": \"usrlocalopenssl\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/usr/local/openssl\"\n                            },\n                            {\n                                \"name\": \"varssl\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/ssl\"\n                            },\n                            {\n                                \"name\": \"etcopenssl\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/etc/openssl\"\n                            },\n                            {\n                                \"name\": \"cloudconfig\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/etc/kubernetes/cloud.config\"\n                            },\n                            {\n                                \"name\": \"kubernetesca\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/srv/kubernetes/ca.crt\"\n                            },\n                            {\n                                \"name\": \"srvkapi\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/srv/kubernetes/kube-apiserver\"\n                            },\n                            {\n                                \"name\": \"srvsshproxy\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/srv/sshproxy\"\n                            }\n                        ],\n                        \"livenessProbe\": {\n                            \"httpGet\": {\n                                \"path\": \"/healthz\",\n                                \"port\": 3990,\n                                \"host\": \"127.0.0.1\",\n                                \"scheme\": \"HTTP\"\n                            },\n                            \"initialDelaySeconds\": 45,\n                            \"timeoutSeconds\": 15,\n                            \"periodSeconds\": 10,\n                            \"successThreshold\": 1,\n                            \"failureThreshold\": 3\n                        },\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\"\n                    },\n                    {\n                        \"name\": \"healthcheck\",\n                        \"image\": \"k8s.gcr.io/kops/kube-apiserver-healthcheck:1.22.0-beta.1\",\n                        \"command\": [\n                            \"/kube-apiserver-healthcheck\"\n                        ],\n                        \"args\": [\n                            \"--ca-cert=/secrets/ca.crt\",\n                            \"--client-cert=/secrets/client.crt\",\n                            \"--client-key=/secrets/client.key\"\n                        ],\n                        \"resources\": {},\n            
            \"volumeMounts\": [\n                            {\n                                \"name\": \"healthcheck-secrets\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/secrets\"\n                            }\n                        ],\n                        \"livenessProbe\": {\n                            \"httpGet\": {\n                                \"path\": \"/.kube-apiserver-healthcheck/healthz\",\n                                \"port\": 3990,\n                                \"host\": \"127.0.0.1\",\n                                \"scheme\": \"HTTP\"\n                            },\n                            \"initialDelaySeconds\": 5,\n                            \"timeoutSeconds\": 5,\n                            \"periodSeconds\": 10,\n                            \"successThreshold\": 1,\n                            \"failureThreshold\": 3\n                        },\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\"\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"ClusterFirst\",\n                \"nodeName\": \"ip-172-20-37-129.us-west-2.compute.internal\",\n                \"hostNetwork\": true,\n                \"securityContext\": {},\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"key\": \"CriticalAddonsOnly\",\n                        \"operator\": \"Exists\"\n                    },\n                    {\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    }\n                ],\n                \"priorityClassName\": \"system-cluster-critical\",\n                \"priority\": 2000000000,\n                \"enableServiceLinks\": true,\n                \"preemptionPolicy\": \"PreemptLowerPriority\"\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-09-10T05:34:52Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-09-10T05:35:39Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-09-10T05:35:39Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-09-10T05:34:52Z\"\n                    }\n                ],\n                \"hostIP\": \"172.20.37.129\",\n                \"podIP\": \"172.20.37.129\",\n                \"podIPs\": [\n                    {\n             
           \"ip\": \"172.20.37.129\"\n                    }\n                ],\n                \"startTime\": \"2021-09-10T05:34:52Z\",\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"healthcheck\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2021-09-10T05:35:18Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"k8s.gcr.io/kops/kube-apiserver-healthcheck:1.22.0-beta.1\",\n                        \"imageID\": \"sha256:d87f873ca639a672612d5da5e4d1a77910a7605c9c830937982f9bb05206d0c8\",\n                        \"containerID\": \"containerd://eaf425ca8862a2c9399cc542fcfbcf270d49addd41d733ccdd6564bb9cb5e46f\",\n                        \"started\": true\n                    },\n                    {\n                        \"name\": \"kube-apiserver\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2021-09-10T05:35:38Z\"\n                            }\n                        },\n                        \"lastState\": {\n                            \"terminated\": {\n                                \"exitCode\": 1,\n                                \"reason\": \"Error\",\n                                \"startedAt\": \"2021-09-10T05:35:16Z\",\n                                \"finishedAt\": \"2021-09-10T05:35:37Z\",\n                                \"containerID\": \"containerd://aae9ce3bc60f8df7aaf4d7b7a448e5ad89eda94bad6c6ddf34617c1a8c1475a4\"\n                            }\n                        },\n                        \"ready\": true,\n                        \"restartCount\": 1,\n                        \"image\": \"k8s.gcr.io/kube-apiserver-amd64:v1.21.4\",\n                        \"imageID\": \"k8s.gcr.io/kube-apiserver-amd64@sha256:f29008c0c91003edb5e5d87c6e7242e31f7bb814af98c7b885e75aa96f5c37de\",\n                        \"containerID\": \"containerd://a265403ff21e2e88d1b67c37e3e8ba70a717dbbe0eb0d25cf7b8fe690b9c6846\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Burstable\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-controller-manager-ip-172-20-37-129.us-west-2.compute.internal\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"54b8cddc-1bee-4827-81da-c75478a52cd9\",\n                \"resourceVersion\": \"762\",\n                \"creationTimestamp\": \"2021-09-10T05:37:35Z\",\n                \"labels\": {\n                    \"k8s-app\": \"kube-controller-manager\"\n                },\n                \"annotations\": {\n                    \"kubernetes.io/config.hash\": \"8393b35378800da1b1f1ee218f8046ac\",\n                    \"kubernetes.io/config.mirror\": \"8393b35378800da1b1f1ee218f8046ac\",\n                    \"kubernetes.io/config.seen\": \"2021-09-10T05:34:52.252285287Z\",\n                    \"kubernetes.io/config.source\": \"file\",\n                    \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"v1\",\n                        \"kind\": \"Node\",\n                  
      \"name\": \"ip-172-20-37-129.us-west-2.compute.internal\",\n                        \"uid\": \"2e6b63be-8000-41f1-a765-bdf5cceacb43\",\n                        \"controller\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"logfile\",\n                        \"hostPath\": {\n                            \"path\": \"/var/log/kube-controller-manager.log\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"etcssl\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/ssl\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"etcpkitls\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/pki/tls\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"etcpkica-trust\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/pki/ca-trust\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"usrsharessl\",\n                        \"hostPath\": {\n                            \"path\": \"/usr/share/ssl\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"usrssl\",\n                        \"hostPath\": {\n                            \"path\": \"/usr/ssl\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"usrlibssl\",\n                        \"hostPath\": {\n                            \"path\": \"/usr/lib/ssl\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"usrlocalopenssl\",\n                        \"hostPath\": {\n                            \"path\": \"/usr/local/openssl\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"varssl\",\n                        \"hostPath\": {\n                            \"path\": \"/var/ssl\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"etcopenssl\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/openssl\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"cloudconfig\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/kubernetes/cloud.config\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"cabundle\",\n                        \"hostPath\": {\n                            \"path\": \"/srv/kubernetes/ca.crt\",\n                            \"type\": \"\"\n                        }\n                    },\n     
               {\n                        \"name\": \"srvkcm\",\n                        \"hostPath\": {\n                            \"path\": \"/srv/kubernetes/kube-controller-manager\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"varlibkcm\",\n                        \"hostPath\": {\n                            \"path\": \"/var/lib/kube-controller-manager\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"volplugins\",\n                        \"hostPath\": {\n                            \"path\": \"/usr/libexec/kubernetes/kubelet-plugins/volume/exec/\",\n                            \"type\": \"\"\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"kube-controller-manager\",\n                        \"image\": \"k8s.gcr.io/kube-controller-manager-amd64:v1.21.4\",\n                        \"command\": [\n                            \"/usr/local/bin/kube-controller-manager\"\n                        ],\n                        \"args\": [\n                            \"--allocate-node-cidrs=true\",\n                            \"--attach-detach-reconcile-sync-period=1m0s\",\n                            \"--authentication-kubeconfig=/var/lib/kube-controller-manager/kubeconfig\",\n                            \"--authorization-kubeconfig=/var/lib/kube-controller-manager/kubeconfig\",\n                            \"--cloud-config=/etc/kubernetes/cloud.config\",\n                            \"--cloud-provider=aws\",\n                            \"--cluster-cidr=100.96.0.0/11\",\n                            \"--cluster-name=e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io\",\n                            \"--cluster-signing-cert-file=/srv/kubernetes/kube-controller-manager/ca.crt\",\n                            \"--cluster-signing-key-file=/srv/kubernetes/kube-controller-manager/ca.key\",\n                            \"--configure-cloud-routes=true\",\n                            \"--flex-volume-plugin-dir=/usr/libexec/kubernetes/kubelet-plugins/volume/exec/\",\n                            \"--kubeconfig=/var/lib/kube-controller-manager/kubeconfig\",\n                            \"--leader-elect=true\",\n                            \"--root-ca-file=/srv/kubernetes/ca.crt\",\n                            \"--service-account-private-key-file=/srv/kubernetes/kube-controller-manager/service-account.key\",\n                            \"--tls-cert-file=/srv/kubernetes/kube-controller-manager/server.crt\",\n                            \"--tls-private-key-file=/srv/kubernetes/kube-controller-manager/server.key\",\n                            \"--use-service-account-credentials=true\",\n                            \"--v=2\",\n                            \"--logtostderr=false\",\n                            \"--alsologtostderr\",\n                            \"--log-file=/var/log/kube-controller-manager.log\"\n                        ],\n                        \"resources\": {\n                            \"requests\": {\n                                \"cpu\": \"100m\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"logfile\",\n                            
    \"mountPath\": \"/var/log/kube-controller-manager.log\"\n                            },\n                            {\n                                \"name\": \"etcssl\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/etc/ssl\"\n                            },\n                            {\n                                \"name\": \"etcpkitls\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/etc/pki/tls\"\n                            },\n                            {\n                                \"name\": \"etcpkica-trust\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/etc/pki/ca-trust\"\n                            },\n                            {\n                                \"name\": \"usrsharessl\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/usr/share/ssl\"\n                            },\n                            {\n                                \"name\": \"usrssl\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/usr/ssl\"\n                            },\n                            {\n                                \"name\": \"usrlibssl\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/usr/lib/ssl\"\n                            },\n                            {\n                                \"name\": \"usrlocalopenssl\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/usr/local/openssl\"\n                            },\n                            {\n                                \"name\": \"varssl\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/ssl\"\n                            },\n                            {\n                                \"name\": \"etcopenssl\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/etc/openssl\"\n                            },\n                            {\n                                \"name\": \"cloudconfig\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/etc/kubernetes/cloud.config\"\n                            },\n                            {\n                                \"name\": \"cabundle\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/srv/kubernetes/ca.crt\"\n                            },\n                            {\n                                \"name\": \"srvkcm\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/srv/kubernetes/kube-controller-manager\"\n                            },\n                            {\n                                \"name\": \"varlibkcm\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/lib/kube-controller-manager\"\n                            },\n                            {\n                                \"name\": \"volplugins\",\n                                \"mountPath\": \"/usr/libexec/kubernetes/kubelet-plugins/volume/exec/\"\n                            }\n                        ],\n                        
\"livenessProbe\": {\n                            \"httpGet\": {\n                                \"path\": \"/healthz\",\n                                \"port\": 10257,\n                                \"host\": \"127.0.0.1\",\n                                \"scheme\": \"HTTPS\"\n                            },\n                            \"initialDelaySeconds\": 15,\n                            \"timeoutSeconds\": 15,\n                            \"periodSeconds\": 10,\n                            \"successThreshold\": 1,\n                            \"failureThreshold\": 3\n                        },\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\"\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"ClusterFirst\",\n                \"nodeName\": \"ip-172-20-37-129.us-west-2.compute.internal\",\n                \"hostNetwork\": true,\n                \"securityContext\": {},\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"key\": \"CriticalAddonsOnly\",\n                        \"operator\": \"Exists\"\n                    },\n                    {\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    }\n                ],\n                \"priorityClassName\": \"system-cluster-critical\",\n                \"priority\": 2000000000,\n                \"enableServiceLinks\": true,\n                \"preemptionPolicy\": \"PreemptLowerPriority\"\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-09-10T05:34:52Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-09-10T05:36:06Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-09-10T05:36:06Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-09-10T05:34:52Z\"\n                    }\n                ],\n                \"hostIP\": \"172.20.37.129\",\n                \"podIP\": \"172.20.37.129\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"172.20.37.129\"\n                    }\n                ],\n                \"startTime\": \"2021-09-10T05:34:52Z\",\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"kube-controller-manager\",\n                        \"state\": {\n                            \"running\": {\n   
                             \"startedAt\": \"2021-09-10T05:36:05Z\"\n                            }\n                        },\n                        \"lastState\": {\n                            \"terminated\": {\n                                \"exitCode\": 1,\n                                \"reason\": \"Error\",\n                                \"startedAt\": \"2021-09-10T05:35:36Z\",\n                                \"finishedAt\": \"2021-09-10T05:35:48Z\",\n                                \"containerID\": \"containerd://bf38e14eb2894970f87cc1ee112fe923beabdc589998bd9e25ed1880f66c56fd\"\n                            }\n                        },\n                        \"ready\": true,\n                        \"restartCount\": 2,\n                        \"image\": \"k8s.gcr.io/kube-controller-manager-amd64:v1.21.4\",\n                        \"imageID\": \"k8s.gcr.io/kube-controller-manager-amd64@sha256:6980d349c4d2c4f0c41ca052ed5532f7b947f9ef0d59f0cefb2e4f99feff2070\",\n                        \"containerID\": \"containerd://eb21ed87ac45e9fd5b7530c4b529445ed4065bb8bfcd3bdf32709f77dc25e575\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Burstable\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-ip-172-20-34-221.us-west-2.compute.internal\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"93948361-e926-419c-8eaf-28d94c4a0e4b\",\n                \"resourceVersion\": \"587\",\n                \"creationTimestamp\": \"2021-09-10T05:36:57Z\",\n                \"labels\": {\n                    \"k8s-app\": \"kube-proxy\",\n                    \"tier\": \"node\"\n                },\n                \"annotations\": {\n                    \"kubernetes.io/config.hash\": \"5157d36fb4fd5ea7e1313150e7c96388\",\n                    \"kubernetes.io/config.mirror\": \"5157d36fb4fd5ea7e1313150e7c96388\",\n                    \"kubernetes.io/config.seen\": \"2021-09-10T05:36:56.773655623Z\",\n                    \"kubernetes.io/config.source\": \"file\",\n                    \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"v1\",\n                        \"kind\": \"Node\",\n                        \"name\": \"ip-172-20-34-221.us-west-2.compute.internal\",\n                        \"uid\": \"48eb8181-5534-4afc-b70e-d97953bc8a4a\",\n                        \"controller\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"logfile\",\n                        \"hostPath\": {\n                            \"path\": \"/var/log/kube-proxy.log\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"kubeconfig\",\n                        \"hostPath\": {\n                            \"path\": \"/var/lib/kube-proxy/kubeconfig\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"modules\",\n                        \"hostPath\": {\n                            \"path\": \"/lib/modules\",\n                            \"type\": \"\"\n                        }\n                    },\n          
          {\n                        \"name\": \"ssl-certs-hosts\",\n                        \"hostPath\": {\n                            \"path\": \"/usr/share/ca-certificates\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"iptableslock\",\n                        \"hostPath\": {\n                            \"path\": \"/run/xtables.lock\",\n                            \"type\": \"FileOrCreate\"\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"kube-proxy\",\n                        \"image\": \"k8s.gcr.io/kube-proxy-amd64:v1.21.4\",\n                        \"command\": [\n                            \"/usr/local/bin/kube-proxy\"\n                        ],\n                        \"args\": [\n                            \"--cluster-cidr=100.96.0.0/11\",\n                            \"--conntrack-max-per-core=131072\",\n                            \"--hostname-override=ip-172-20-34-221.us-west-2.compute.internal\",\n                            \"--kubeconfig=/var/lib/kube-proxy/kubeconfig\",\n                            \"--master=https://api.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io\",\n                            \"--oom-score-adj=-998\",\n                            \"--v=2\",\n                            \"--logtostderr=false\",\n                            \"--alsologtostderr\",\n                            \"--log-file=/var/log/kube-proxy.log\"\n                        ],\n                        \"resources\": {\n                            \"requests\": {\n                                \"cpu\": \"100m\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"logfile\",\n                                \"mountPath\": \"/var/log/kube-proxy.log\"\n                            },\n                            {\n                                \"name\": \"kubeconfig\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/lib/kube-proxy/kubeconfig\"\n                            },\n                            {\n                                \"name\": \"modules\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/lib/modules\"\n                            },\n                            {\n                                \"name\": \"ssl-certs-hosts\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/etc/ssl/certs\"\n                            },\n                            {\n                                \"name\": \"iptableslock\",\n                                \"mountPath\": \"/run/xtables.lock\"\n                            }\n                        ],\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\",\n                        \"securityContext\": {\n                            \"privileged\": true\n                        }\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": 
\"ClusterFirst\",\n                \"nodeName\": \"ip-172-20-34-221.us-west-2.compute.internal\",\n                \"hostNetwork\": true,\n                \"securityContext\": {},\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"key\": \"CriticalAddonsOnly\",\n                        \"operator\": \"Exists\"\n                    },\n                    {\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    }\n                ],\n                \"priorityClassName\": \"system-node-critical\",\n                \"priority\": 2000001000,\n                \"enableServiceLinks\": true,\n                \"preemptionPolicy\": \"PreemptLowerPriority\"\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-09-10T05:36:57Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-09-10T05:37:00Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-09-10T05:37:00Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-09-10T05:36:57Z\"\n                    }\n                ],\n                \"hostIP\": \"172.20.34.221\",\n                \"podIP\": \"172.20.34.221\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"172.20.34.221\"\n                    }\n                ],\n                \"startTime\": \"2021-09-10T05:36:57Z\",\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"kube-proxy\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2021-09-10T05:36:59Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"k8s.gcr.io/kube-proxy-amd64:v1.21.4\",\n                        \"imageID\": \"sha256:ef4bce0a7569b4fa83a559717c608c076a2c9d30361eb059ea4e1b7a55424d68\",\n                        \"containerID\": \"containerd://3ef7f680fb6c3077e9f87cba9e668c29b03b3cdfd25fef2a52b41455159be085\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Burstable\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-ip-172-20-37-129.us-west-2.compute.internal\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"10897421-2859-4957-b949-fab7ef87f281\",\n                \"resourceVersion\": 
\"472\",\n                \"creationTimestamp\": \"2021-09-10T05:36:20Z\",\n                \"labels\": {\n                    \"k8s-app\": \"kube-proxy\",\n                    \"tier\": \"node\"\n                },\n                \"annotations\": {\n                    \"kubernetes.io/config.hash\": \"366a695e3ffedf02807385d36463b1f3\",\n                    \"kubernetes.io/config.mirror\": \"366a695e3ffedf02807385d36463b1f3\",\n                    \"kubernetes.io/config.seen\": \"2021-09-10T05:34:52.252286694Z\",\n                    \"kubernetes.io/config.source\": \"file\",\n                    \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"v1\",\n                        \"kind\": \"Node\",\n                        \"name\": \"ip-172-20-37-129.us-west-2.compute.internal\",\n                        \"uid\": \"2e6b63be-8000-41f1-a765-bdf5cceacb43\",\n                        \"controller\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"logfile\",\n                        \"hostPath\": {\n                            \"path\": \"/var/log/kube-proxy.log\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"kubeconfig\",\n                        \"hostPath\": {\n                            \"path\": \"/var/lib/kube-proxy/kubeconfig\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"modules\",\n                        \"hostPath\": {\n                            \"path\": \"/lib/modules\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"ssl-certs-hosts\",\n                        \"hostPath\": {\n                            \"path\": \"/usr/share/ca-certificates\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"iptableslock\",\n                        \"hostPath\": {\n                            \"path\": \"/run/xtables.lock\",\n                            \"type\": \"FileOrCreate\"\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"kube-proxy\",\n                        \"image\": \"k8s.gcr.io/kube-proxy-amd64:v1.21.4\",\n                        \"command\": [\n                            \"/usr/local/bin/kube-proxy\"\n                        ],\n                        \"args\": [\n                            \"--cluster-cidr=100.96.0.0/11\",\n                            \"--conntrack-max-per-core=131072\",\n                            \"--hostname-override=ip-172-20-37-129.us-west-2.compute.internal\",\n                            \"--kubeconfig=/var/lib/kube-proxy/kubeconfig\",\n                            \"--master=https://127.0.0.1\",\n                            \"--oom-score-adj=-998\",\n                            \"--v=2\",\n                            \"--logtostderr=false\",\n                            \"--alsologtostderr\",\n                            
\"--log-file=/var/log/kube-proxy.log\"\n                        ],\n                        \"resources\": {\n                            \"requests\": {\n                                \"cpu\": \"100m\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"logfile\",\n                                \"mountPath\": \"/var/log/kube-proxy.log\"\n                            },\n                            {\n                                \"name\": \"kubeconfig\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/lib/kube-proxy/kubeconfig\"\n                            },\n                            {\n                                \"name\": \"modules\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/lib/modules\"\n                            },\n                            {\n                                \"name\": \"ssl-certs-hosts\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/etc/ssl/certs\"\n                            },\n                            {\n                                \"name\": \"iptableslock\",\n                                \"mountPath\": \"/run/xtables.lock\"\n                            }\n                        ],\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\",\n                        \"securityContext\": {\n                            \"privileged\": true\n                        }\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"ClusterFirst\",\n                \"nodeName\": \"ip-172-20-37-129.us-west-2.compute.internal\",\n                \"hostNetwork\": true,\n                \"securityContext\": {},\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"key\": \"CriticalAddonsOnly\",\n                        \"operator\": \"Exists\"\n                    },\n                    {\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    }\n                ],\n                \"priorityClassName\": \"system-node-critical\",\n                \"priority\": 2000001000,\n                \"enableServiceLinks\": true,\n                \"preemptionPolicy\": \"PreemptLowerPriority\"\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-09-10T05:34:52Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-09-10T05:35:18Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        
\"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-09-10T05:35:18Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-09-10T05:34:52Z\"\n                    }\n                ],\n                \"hostIP\": \"172.20.37.129\",\n                \"podIP\": \"172.20.37.129\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"172.20.37.129\"\n                    }\n                ],\n                \"startTime\": \"2021-09-10T05:34:52Z\",\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"kube-proxy\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2021-09-10T05:35:16Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"k8s.gcr.io/kube-proxy-amd64:v1.21.4\",\n                        \"imageID\": \"sha256:ef4bce0a7569b4fa83a559717c608c076a2c9d30361eb059ea4e1b7a55424d68\",\n                        \"containerID\": \"containerd://ba815b48ff1c9a22f190402e23943617aedefb3eb4b9da935d9745e5a8307b57\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Burstable\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-ip-172-20-37-76.us-west-2.compute.internal\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"59bf182f-0d76-4ffb-91aa-f443829422cc\",\n                \"resourceVersion\": \"557\",\n                \"creationTimestamp\": \"2021-09-10T05:36:52Z\",\n                \"labels\": {\n                    \"k8s-app\": \"kube-proxy\",\n                    \"tier\": \"node\"\n                },\n                \"annotations\": {\n                    \"kubernetes.io/config.hash\": \"1521b3575ab46785b71234cf5afcfb34\",\n                    \"kubernetes.io/config.mirror\": \"1521b3575ab46785b71234cf5afcfb34\",\n                    \"kubernetes.io/config.seen\": \"2021-09-10T05:36:51.203144087Z\",\n                    \"kubernetes.io/config.source\": \"file\",\n                    \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"v1\",\n                        \"kind\": \"Node\",\n                        \"name\": \"ip-172-20-37-76.us-west-2.compute.internal\",\n                        \"uid\": \"e0884f2d-6742-49c6-812e-d3f37a8934b8\",\n                        \"controller\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"logfile\",\n                        \"hostPath\": {\n                            \"path\": \"/var/log/kube-proxy.log\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"kubeconfig\",\n                        \"hostPath\": {\n                            
\"path\": \"/var/lib/kube-proxy/kubeconfig\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"modules\",\n                        \"hostPath\": {\n                            \"path\": \"/lib/modules\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"ssl-certs-hosts\",\n                        \"hostPath\": {\n                            \"path\": \"/usr/share/ca-certificates\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"iptableslock\",\n                        \"hostPath\": {\n                            \"path\": \"/run/xtables.lock\",\n                            \"type\": \"FileOrCreate\"\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"kube-proxy\",\n                        \"image\": \"k8s.gcr.io/kube-proxy-amd64:v1.21.4\",\n                        \"command\": [\n                            \"/usr/local/bin/kube-proxy\"\n                        ],\n                        \"args\": [\n                            \"--cluster-cidr=100.96.0.0/11\",\n                            \"--conntrack-max-per-core=131072\",\n                            \"--hostname-override=ip-172-20-37-76.us-west-2.compute.internal\",\n                            \"--kubeconfig=/var/lib/kube-proxy/kubeconfig\",\n                            \"--master=https://api.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io\",\n                            \"--oom-score-adj=-998\",\n                            \"--v=2\",\n                            \"--logtostderr=false\",\n                            \"--alsologtostderr\",\n                            \"--log-file=/var/log/kube-proxy.log\"\n                        ],\n                        \"resources\": {\n                            \"requests\": {\n                                \"cpu\": \"100m\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"logfile\",\n                                \"mountPath\": \"/var/log/kube-proxy.log\"\n                            },\n                            {\n                                \"name\": \"kubeconfig\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/lib/kube-proxy/kubeconfig\"\n                            },\n                            {\n                                \"name\": \"modules\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/lib/modules\"\n                            },\n                            {\n                                \"name\": \"ssl-certs-hosts\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/etc/ssl/certs\"\n                            },\n                            {\n                                \"name\": \"iptableslock\",\n                                \"mountPath\": \"/run/xtables.lock\"\n                            }\n                        ],\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                      
  \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\",\n                        \"securityContext\": {\n                            \"privileged\": true\n                        }\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"ClusterFirst\",\n                \"nodeName\": \"ip-172-20-37-76.us-west-2.compute.internal\",\n                \"hostNetwork\": true,\n                \"securityContext\": {},\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"key\": \"CriticalAddonsOnly\",\n                        \"operator\": \"Exists\"\n                    },\n                    {\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    }\n                ],\n                \"priorityClassName\": \"system-node-critical\",\n                \"priority\": 2000001000,\n                \"enableServiceLinks\": true,\n                \"preemptionPolicy\": \"PreemptLowerPriority\"\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-09-10T05:36:51Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-09-10T05:36:54Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-09-10T05:36:54Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-09-10T05:36:51Z\"\n                    }\n                ],\n                \"hostIP\": \"172.20.37.76\",\n                \"podIP\": \"172.20.37.76\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"172.20.37.76\"\n                    }\n                ],\n                \"startTime\": \"2021-09-10T05:36:51Z\",\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"kube-proxy\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2021-09-10T05:36:53Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"k8s.gcr.io/kube-proxy-amd64:v1.21.4\",\n                        \"imageID\": \"sha256:ef4bce0a7569b4fa83a559717c608c076a2c9d30361eb059ea4e1b7a55424d68\",\n                        \"containerID\": \"containerd://f185e22c17f72d3753d07fab643d2a97d7d9a64c44c489dcb449558b3e813da6\",\n                        
\"started\": true\n                    }\n                ],\n                \"qosClass\": \"Burstable\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-ip-172-20-38-104.us-west-2.compute.internal\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"f8e8351f-e09a-46df-8a07-349fb094b29a\",\n                \"resourceVersion\": \"608\",\n                \"creationTimestamp\": \"2021-09-10T05:37:00Z\",\n                \"labels\": {\n                    \"k8s-app\": \"kube-proxy\",\n                    \"tier\": \"node\"\n                },\n                \"annotations\": {\n                    \"kubernetes.io/config.hash\": \"9431b69ef3ca963b1d7e8e1626ff4504\",\n                    \"kubernetes.io/config.mirror\": \"9431b69ef3ca963b1d7e8e1626ff4504\",\n                    \"kubernetes.io/config.seen\": \"2021-09-10T05:36:59.791809152Z\",\n                    \"kubernetes.io/config.source\": \"file\",\n                    \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"v1\",\n                        \"kind\": \"Node\",\n                        \"name\": \"ip-172-20-38-104.us-west-2.compute.internal\",\n                        \"uid\": \"df456c33-cb9d-4931-b6d4-00f79370f631\",\n                        \"controller\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"logfile\",\n                        \"hostPath\": {\n                            \"path\": \"/var/log/kube-proxy.log\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"kubeconfig\",\n                        \"hostPath\": {\n                            \"path\": \"/var/lib/kube-proxy/kubeconfig\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"modules\",\n                        \"hostPath\": {\n                            \"path\": \"/lib/modules\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"ssl-certs-hosts\",\n                        \"hostPath\": {\n                            \"path\": \"/usr/share/ca-certificates\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"iptableslock\",\n                        \"hostPath\": {\n                            \"path\": \"/run/xtables.lock\",\n                            \"type\": \"FileOrCreate\"\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"kube-proxy\",\n                        \"image\": \"k8s.gcr.io/kube-proxy-amd64:v1.21.4\",\n                        \"command\": [\n                            \"/usr/local/bin/kube-proxy\"\n                        ],\n                        \"args\": [\n                            \"--cluster-cidr=100.96.0.0/11\",\n                            \"--conntrack-max-per-core=131072\",\n                            
\"--hostname-override=ip-172-20-38-104.us-west-2.compute.internal\",\n                            \"--kubeconfig=/var/lib/kube-proxy/kubeconfig\",\n                            \"--master=https://api.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io\",\n                            \"--oom-score-adj=-998\",\n                            \"--v=2\",\n                            \"--logtostderr=false\",\n                            \"--alsologtostderr\",\n                            \"--log-file=/var/log/kube-proxy.log\"\n                        ],\n                        \"resources\": {\n                            \"requests\": {\n                                \"cpu\": \"100m\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"logfile\",\n                                \"mountPath\": \"/var/log/kube-proxy.log\"\n                            },\n                            {\n                                \"name\": \"kubeconfig\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/lib/kube-proxy/kubeconfig\"\n                            },\n                            {\n                                \"name\": \"modules\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/lib/modules\"\n                            },\n                            {\n                                \"name\": \"ssl-certs-hosts\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/etc/ssl/certs\"\n                            },\n                            {\n                                \"name\": \"iptableslock\",\n                                \"mountPath\": \"/run/xtables.lock\"\n                            }\n                        ],\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\",\n                        \"securityContext\": {\n                            \"privileged\": true\n                        }\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"ClusterFirst\",\n                \"nodeName\": \"ip-172-20-38-104.us-west-2.compute.internal\",\n                \"hostNetwork\": true,\n                \"securityContext\": {},\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"key\": \"CriticalAddonsOnly\",\n                        \"operator\": \"Exists\"\n                    },\n                    {\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    }\n                ],\n                \"priorityClassName\": \"system-node-critical\",\n                \"priority\": 2000001000,\n                \"enableServiceLinks\": true,\n                \"preemptionPolicy\": \"PreemptLowerPriority\"\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        
\"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-09-10T05:37:00Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-09-10T05:37:02Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-09-10T05:37:02Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-09-10T05:37:00Z\"\n                    }\n                ],\n                \"hostIP\": \"172.20.38.104\",\n                \"podIP\": \"172.20.38.104\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"172.20.38.104\"\n                    }\n                ],\n                \"startTime\": \"2021-09-10T05:37:00Z\",\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"kube-proxy\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2021-09-10T05:37:01Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"k8s.gcr.io/kube-proxy-amd64:v1.21.4\",\n                        \"imageID\": \"sha256:ef4bce0a7569b4fa83a559717c608c076a2c9d30361eb059ea4e1b7a55424d68\",\n                        \"containerID\": \"containerd://8d3cd78f9e6b11629245133b6cdd2a347b1389712889e812c8109256126ac793\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Burstable\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-ip-172-20-56-165.us-west-2.compute.internal\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"de12ce57-e99c-4f9d-8963-5761cf53184a\",\n                \"resourceVersion\": \"585\",\n                \"creationTimestamp\": \"2021-09-10T05:36:57Z\",\n                \"labels\": {\n                    \"k8s-app\": \"kube-proxy\",\n                    \"tier\": \"node\"\n                },\n                \"annotations\": {\n                    \"kubernetes.io/config.hash\": \"6cdecdbf2da30216f600379c43dbec48\",\n                    \"kubernetes.io/config.mirror\": \"6cdecdbf2da30216f600379c43dbec48\",\n                    \"kubernetes.io/config.seen\": \"2021-09-10T05:36:56.790673081Z\",\n                    \"kubernetes.io/config.source\": \"file\",\n                    \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"v1\",\n                        \"kind\": \"Node\",\n                        \"name\": \"ip-172-20-56-165.us-west-2.compute.internal\",\n                        \"uid\": \"ef4937ab-b3eb-4f0e-acb5-3043de5c4632\",\n                        \"controller\": true\n                    }\n                
]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"logfile\",\n                        \"hostPath\": {\n                            \"path\": \"/var/log/kube-proxy.log\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"kubeconfig\",\n                        \"hostPath\": {\n                            \"path\": \"/var/lib/kube-proxy/kubeconfig\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"modules\",\n                        \"hostPath\": {\n                            \"path\": \"/lib/modules\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"ssl-certs-hosts\",\n                        \"hostPath\": {\n                            \"path\": \"/usr/share/ca-certificates\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"iptableslock\",\n                        \"hostPath\": {\n                            \"path\": \"/run/xtables.lock\",\n                            \"type\": \"FileOrCreate\"\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"kube-proxy\",\n                        \"image\": \"k8s.gcr.io/kube-proxy-amd64:v1.21.4\",\n                        \"command\": [\n                            \"/usr/local/bin/kube-proxy\"\n                        ],\n                        \"args\": [\n                            \"--cluster-cidr=100.96.0.0/11\",\n                            \"--conntrack-max-per-core=131072\",\n                            \"--hostname-override=ip-172-20-56-165.us-west-2.compute.internal\",\n                            \"--kubeconfig=/var/lib/kube-proxy/kubeconfig\",\n                            \"--master=https://api.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io\",\n                            \"--oom-score-adj=-998\",\n                            \"--v=2\",\n                            \"--logtostderr=false\",\n                            \"--alsologtostderr\",\n                            \"--log-file=/var/log/kube-proxy.log\"\n                        ],\n                        \"resources\": {\n                            \"requests\": {\n                                \"cpu\": \"100m\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"logfile\",\n                                \"mountPath\": \"/var/log/kube-proxy.log\"\n                            },\n                            {\n                                \"name\": \"kubeconfig\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/lib/kube-proxy/kubeconfig\"\n                            },\n                            {\n                                \"name\": \"modules\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/lib/modules\"\n                            },\n                            {\n                                \"name\": 
\"ssl-certs-hosts\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/etc/ssl/certs\"\n                            },\n                            {\n                                \"name\": \"iptableslock\",\n                                \"mountPath\": \"/run/xtables.lock\"\n                            }\n                        ],\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\",\n                        \"securityContext\": {\n                            \"privileged\": true\n                        }\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"ClusterFirst\",\n                \"nodeName\": \"ip-172-20-56-165.us-west-2.compute.internal\",\n                \"hostNetwork\": true,\n                \"securityContext\": {},\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"key\": \"CriticalAddonsOnly\",\n                        \"operator\": \"Exists\"\n                    },\n                    {\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    }\n                ],\n                \"priorityClassName\": \"system-node-critical\",\n                \"priority\": 2000001000,\n                \"enableServiceLinks\": true,\n                \"preemptionPolicy\": \"PreemptLowerPriority\"\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-09-10T05:36:57Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-09-10T05:36:59Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-09-10T05:36:59Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-09-10T05:36:57Z\"\n                    }\n                ],\n                \"hostIP\": \"172.20.56.165\",\n                \"podIP\": \"172.20.56.165\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"172.20.56.165\"\n                    }\n                ],\n                \"startTime\": \"2021-09-10T05:36:57Z\",\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"kube-proxy\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2021-09-10T05:36:59Z\"\n                            }\n          
              },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"k8s.gcr.io/kube-proxy-amd64:v1.21.4\",\n                        \"imageID\": \"sha256:ef4bce0a7569b4fa83a559717c608c076a2c9d30361eb059ea4e1b7a55424d68\",\n                        \"containerID\": \"containerd://d67cee4e52f733f47c3f22a07831828c3016661211c966705686a93a3a232395\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Burstable\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-scheduler-ip-172-20-37-129.us-west-2.compute.internal\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"9e77764a-f09a-427c-9495-cd8749387f0c\",\n                \"resourceVersion\": \"521\",\n                \"creationTimestamp\": \"2021-09-10T05:36:37Z\",\n                \"labels\": {\n                    \"k8s-app\": \"kube-scheduler\"\n                },\n                \"annotations\": {\n                    \"kubernetes.io/config.hash\": \"6c8477f7e2b4b40106c022734372d908\",\n                    \"kubernetes.io/config.mirror\": \"6c8477f7e2b4b40106c022734372d908\",\n                    \"kubernetes.io/config.seen\": \"2021-09-10T05:34:52.252241756Z\",\n                    \"kubernetes.io/config.source\": \"file\",\n                    \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"v1\",\n                        \"kind\": \"Node\",\n                        \"name\": \"ip-172-20-37-129.us-west-2.compute.internal\",\n                        \"uid\": \"2e6b63be-8000-41f1-a765-bdf5cceacb43\",\n                        \"controller\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"varlibkubescheduler\",\n                        \"hostPath\": {\n                            \"path\": \"/var/lib/kube-scheduler\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"srvscheduler\",\n                        \"hostPath\": {\n                            \"path\": \"/srv/kubernetes/kube-scheduler\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"logfile\",\n                        \"hostPath\": {\n                            \"path\": \"/var/log/kube-scheduler.log\",\n                            \"type\": \"\"\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"kube-scheduler\",\n                        \"image\": \"k8s.gcr.io/kube-scheduler-amd64:v1.21.4\",\n                        \"command\": [\n                            \"/usr/local/bin/kube-scheduler\"\n                        ],\n                        \"args\": [\n                            \"--authentication-kubeconfig=/var/lib/kube-scheduler/kubeconfig\",\n                            \"--authorization-kubeconfig=/var/lib/kube-scheduler/kubeconfig\",\n                            \"--config=/var/lib/kube-scheduler/config.yaml\",\n                 
           \"--leader-elect=true\",\n                            \"--tls-cert-file=/srv/kubernetes/kube-scheduler/server.crt\",\n                            \"--tls-private-key-file=/srv/kubernetes/kube-scheduler/server.key\",\n                            \"--v=2\",\n                            \"--logtostderr=false\",\n                            \"--alsologtostderr\",\n                            \"--log-file=/var/log/kube-scheduler.log\"\n                        ],\n                        \"resources\": {\n                            \"requests\": {\n                                \"cpu\": \"100m\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"varlibkubescheduler\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/lib/kube-scheduler\"\n                            },\n                            {\n                                \"name\": \"srvscheduler\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/srv/kubernetes/kube-scheduler\"\n                            },\n                            {\n                                \"name\": \"logfile\",\n                                \"mountPath\": \"/var/log/kube-scheduler.log\"\n                            }\n                        ],\n                        \"livenessProbe\": {\n                            \"httpGet\": {\n                                \"path\": \"/healthz\",\n                                \"port\": 10251,\n                                \"host\": \"127.0.0.1\",\n                                \"scheme\": \"HTTP\"\n                            },\n                            \"initialDelaySeconds\": 15,\n                            \"timeoutSeconds\": 15,\n                            \"periodSeconds\": 10,\n                            \"successThreshold\": 1,\n                            \"failureThreshold\": 3\n                        },\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\"\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"ClusterFirst\",\n                \"nodeName\": \"ip-172-20-37-129.us-west-2.compute.internal\",\n                \"hostNetwork\": true,\n                \"securityContext\": {},\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"key\": \"CriticalAddonsOnly\",\n                        \"operator\": \"Exists\"\n                    },\n                    {\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    }\n                ],\n                \"priorityClassName\": \"system-cluster-critical\",\n                \"priority\": 2000000000,\n                \"enableServiceLinks\": true,\n                \"preemptionPolicy\": \"PreemptLowerPriority\"\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n      
                  \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-09-10T05:34:52Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-09-10T05:35:17Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-09-10T05:35:17Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-09-10T05:34:52Z\"\n                    }\n                ],\n                \"hostIP\": \"172.20.37.129\",\n                \"podIP\": \"172.20.37.129\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"172.20.37.129\"\n                    }\n                ],\n                \"startTime\": \"2021-09-10T05:34:52Z\",\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"kube-scheduler\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2021-09-10T05:35:16Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"k8s.gcr.io/kube-scheduler-amd64:v1.21.4\",\n                        \"imageID\": \"sha256:993d3ec13feb2e7b7e9bd6ac4831fb0cdae7329a8e8f1e285d9f2790004b2fe3\",\n                        \"containerID\": \"containerd://3b402048951dd063dde552b596b40c0d8b1b4dc54d1fd8c39abfe21643fe3a37\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Burstable\"\n            }\n        }\n    ]\n}\n==== START logs for container coredns of pod kube-system/coredns-5dc785954d-26bxp ====\n.:53\n[INFO] plugin/reload: Running configuration MD5 = 9e3e34ac93d9bb69126337d32f1195e3\nCoreDNS-1.8.4\nlinux/amd64, go1.16.4, 053c4d5\n==== END logs for container coredns of pod kube-system/coredns-5dc785954d-26bxp ====\n==== START logs for container coredns of pod kube-system/coredns-5dc785954d-xvz58 ====\n.:53\n[INFO] plugin/reload: Running configuration MD5 = 9e3e34ac93d9bb69126337d32f1195e3\nCoreDNS-1.8.4\nlinux/amd64, go1.16.4, 053c4d5\n==== END logs for container coredns of pod kube-system/coredns-5dc785954d-xvz58 ====\n==== START logs for container autoscaler of pod kube-system/coredns-autoscaler-84d4cfd89c-d4f4h ====\nI0910 05:37:12.589435       1 autoscaler.go:49] Scaling Namespace: kube-system, Target: deployment/coredns\nI0910 05:37:12.843384       1 autoscaler_server.go:157] ConfigMap not found: configmaps \"coredns-autoscaler\" not found, will create one with default params\nI0910 05:37:12.846066       1 k8sclient.go:147] Created ConfigMap coredns-autoscaler in namespace kube-system\nI0910 05:37:12.846097       1 plugin.go:50] Set control mode to linear\nI0910 05:37:12.846103       1 linear_controller.go:60] ConfigMap version change (old:  new: 649) - rebuilding params\nI0910 05:37:12.846107       1 
linear_controller.go:61] Params from apiserver: \n{\"coresPerReplica\":256,\"nodesPerReplica\":16,\"preventSinglePointFailure\":true}\nI0910 05:37:12.846288       1 linear_controller.go:80] Defaulting min replicas count to 1 for linear controller\nI0910 05:37:12.848403       1 k8sclient.go:272] Cluster status: SchedulableNodes[5], SchedulableCores[10]\nI0910 05:37:12.848419       1 k8sclient.go:273] Replicas are not as expected : updating replicas from 1 to 2\n==== END logs for container autoscaler of pod kube-system/coredns-autoscaler-84d4cfd89c-d4f4h ====\n==== START logs for container dns-controller of pod kube-system/dns-controller-59b7d7865d-82ctf ====\ndns-controller version 0.1\nI0910 05:36:21.375586       1 main.go:199] initializing the watch controllers, namespace: \"\"\nI0910 05:36:21.375626       1 main.go:223] Ingress controller disabled\nI0910 05:36:21.375986       1 pod.go:60] starting pod controller\nI0910 05:36:21.375999       1 dnscontroller.go:108] starting DNS controller\nI0910 05:36:21.376033       1 dnscontroller.go:170] scope not yet ready: node\nI0910 05:36:21.376080       1 service.go:60] starting service controller\nI0910 05:36:21.376342       1 node.go:60] starting node controller\nI0910 05:36:21.403234       1 dnscontroller.go:625] Update desired state: node/ip-172-20-37-129.us-west-2.compute.internal: [{A node/ip-172-20-37-129.us-west-2.compute.internal/internal 172.20.37.129 true} {A node/ip-172-20-37-129.us-west-2.compute.internal/external 34.215.112.57 true} {A node/role=master/internal 172.20.37.129 true} {A node/role=master/external 34.215.112.57 true} {A node/role=master/ ip-172-20-37-129.us-west-2.compute.internal true} {A node/role=master/ ip-172-20-37-129.us-west-2.compute.internal true} {A node/role=master/ ec2-34-215-112-57.us-west-2.compute.amazonaws.com true}]\nI0910 05:36:21.418773       1 dnscontroller.go:625] Update desired state: pod/kube-system/kops-controller-js47d: [{A kops-controller.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io. 172.20.37.129 false}]\nI0910 05:36:26.377071       1 dnscache.go:74] querying all DNS zones (no cached results)\nI0910 05:36:26.855643       1 dnscontroller.go:274] Using default TTL of 1m0s\nI0910 05:36:26.855673       1 dnscontroller.go:482] Querying all dnsprovider records for zone \"test-cncf-aws.k8s.io.\"\nI0910 05:36:28.518043       1 dnscontroller.go:585] Adding DNS changes to batch {A kops-controller.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io.} [172.20.37.129]\nI0910 05:36:28.518085       1 dnscontroller.go:323] Applying DNS changeset for zone test-cncf-aws.k8s.io.::ZEMLNXIIWQ0RV\nI0910 05:36:41.603777       1 dnscontroller.go:625] Update desired state: pod/kube-system/kube-apiserver-ip-172-20-37-129.us-west-2.compute.internal: [{_alias api.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io. node/ip-172-20-37-129.us-west-2.compute.internal/external false}]\nI0910 05:36:42.627289       1 dnscontroller.go:625] Update desired state: pod/kube-system/kube-apiserver-ip-172-20-37-129.us-west-2.compute.internal: [{_alias api.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io. node/ip-172-20-37-129.us-west-2.compute.internal/external false} {A api.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io. 
172.20.37.129 false}]\nI0910 05:36:43.778619       1 dnscontroller.go:274] Using default TTL of 1m0s\nI0910 05:36:43.778650       1 dnscontroller.go:482] Querying all dnsprovider records for zone \"test-cncf-aws.k8s.io.\"\nI0910 05:36:45.727936       1 dnscontroller.go:585] Adding DNS changes to batch {A api.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io.} [34.215.112.57]\nI0910 05:36:45.727967       1 dnscontroller.go:274] Using default TTL of 1m0s\nI0910 05:36:45.728363       1 dnscontroller.go:585] Adding DNS changes to batch {A api.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io.} [172.20.37.129]\nI0910 05:36:45.728390       1 dnscontroller.go:323] Applying DNS changeset for zone test-cncf-aws.k8s.io.::ZEMLNXIIWQ0RV\nI0910 05:36:51.350903       1 dnscontroller.go:625] Update desired state: node/ip-172-20-37-76.us-west-2.compute.internal: [{A node/ip-172-20-37-76.us-west-2.compute.internal/internal 172.20.37.76 true} {A node/ip-172-20-37-76.us-west-2.compute.internal/external 34.220.213.19 true} {A node/role=node/internal 172.20.37.76 true} {A node/role=node/external 34.220.213.19 true} {A node/role=node/ ip-172-20-37-76.us-west-2.compute.internal true} {A node/role=node/ ip-172-20-37-76.us-west-2.compute.internal true} {A node/role=node/ ec2-34-220-213-19.us-west-2.compute.amazonaws.com true}]\nI0910 05:36:56.911293       1 dnscontroller.go:625] Update desired state: node/ip-172-20-34-221.us-west-2.compute.internal: [{A node/ip-172-20-34-221.us-west-2.compute.internal/internal 172.20.34.221 true} {A node/ip-172-20-34-221.us-west-2.compute.internal/external 34.220.79.145 true} {A node/role=node/internal 172.20.34.221 true} {A node/role=node/external 34.220.79.145 true} {A node/role=node/ ip-172-20-34-221.us-west-2.compute.internal true} {A node/role=node/ ip-172-20-34-221.us-west-2.compute.internal true} {A node/role=node/ ec2-34-220-79-145.us-west-2.compute.amazonaws.com true}]\nI0910 05:36:56.960828       1 dnscontroller.go:625] Update desired state: node/ip-172-20-56-165.us-west-2.compute.internal: [{A node/ip-172-20-56-165.us-west-2.compute.internal/internal 172.20.56.165 true} {A node/ip-172-20-56-165.us-west-2.compute.internal/external 54.185.177.200 true} {A node/role=node/internal 172.20.56.165 true} {A node/role=node/external 54.185.177.200 true} {A node/role=node/ ip-172-20-56-165.us-west-2.compute.internal true} {A node/role=node/ ip-172-20-56-165.us-west-2.compute.internal true} {A node/role=node/ ec2-54-185-177-200.us-west-2.compute.amazonaws.com true}]\nI0910 05:37:00.073062       1 dnscontroller.go:625] Update desired state: node/ip-172-20-38-104.us-west-2.compute.internal: [{A node/ip-172-20-38-104.us-west-2.compute.internal/internal 172.20.38.104 true} {A node/ip-172-20-38-104.us-west-2.compute.internal/external 35.167.199.191 true} {A node/role=node/internal 172.20.38.104 true} {A node/role=node/external 35.167.199.191 true} {A node/role=node/ ip-172-20-38-104.us-west-2.compute.internal true} {A node/role=node/ ip-172-20-38-104.us-west-2.compute.internal true} {A node/role=node/ ec2-35-167-199-191.us-west-2.compute.amazonaws.com true}]\n==== END logs for container dns-controller of pod kube-system/dns-controller-59b7d7865d-82ctf ====\n==== START logs for container etcd-manager of pod kube-system/etcd-manager-events-ip-172-20-37-129.us-west-2.compute.internal ====\netcd-manager\nI0910 05:35:24.619398    5124 volumes.go:86] AWS API Request: ec2metadata/GetToken\nI0910 05:35:24.620446    5124 volumes.go:86] AWS API Request: ec2metadata/GetDynamicData\nI0910 05:35:24.621104  
  5124 volumes.go:86] AWS API Request: ec2metadata/GetMetadata\nI0910 05:35:24.621593    5124 volumes.go:86] AWS API Request: ec2metadata/GetMetadata\nI0910 05:35:24.622278    5124 volumes.go:86] AWS API Request: ec2metadata/GetMetadata\nI0910 05:35:24.622771    5124 main.go:305] Mounting available etcd volumes matching tags [k8s.io/etcd/events k8s.io/role/master=1 kubernetes.io/cluster/e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io=owned]; nameTag=k8s.io/etcd/events\nI0910 05:35:24.624529    5124 volumes.go:86] AWS API Request: ec2/DescribeVolumes\nI0910 05:35:24.780916    5124 mounter.go:304] Trying to mount master volume: \"vol-088b2024b45b7a38d\"\nI0910 05:35:24.780936    5124 volumes.go:331] Trying to attach volume \"vol-088b2024b45b7a38d\" at \"/dev/xvdu\"\nI0910 05:35:24.781068    5124 volumes.go:86] AWS API Request: ec2/AttachVolume\nI0910 05:35:25.205933    5124 volumes.go:349] AttachVolume request returned {\n  AttachTime: 2021-09-10 05:35:25.192 +0000 UTC,\n  Device: \"/dev/xvdu\",\n  InstanceId: \"i-02f21e556c94dcd7c\",\n  State: \"attaching\",\n  VolumeId: \"vol-088b2024b45b7a38d\"\n}\nI0910 05:35:25.206085    5124 volumes.go:86] AWS API Request: ec2/DescribeVolumes\nI0910 05:35:25.347895    5124 mounter.go:318] Currently attached volumes: [0xc0003be000]\nI0910 05:35:25.347914    5124 mounter.go:72] Master volume \"vol-088b2024b45b7a38d\" is attached at \"/dev/xvdu\"\nI0910 05:35:25.348673    5124 mounter.go:86] Doing safe-format-and-mount of /dev/xvdu to /mnt/master-vol-088b2024b45b7a38d\nI0910 05:35:25.348696    5124 volumes.go:234] volume vol-088b2024b45b7a38d not mounted at /rootfs/dev/xvdu\nI0910 05:35:25.348708    5124 volumes.go:263] nvme path not found \"/rootfs/dev/disk/by-id/nvme-Amazon_Elastic_Block_Store_vol088b2024b45b7a38d\"\nI0910 05:35:25.348714    5124 volumes.go:251] volume vol-088b2024b45b7a38d not mounted at nvme-Amazon_Elastic_Block_Store_vol088b2024b45b7a38d\nI0910 05:35:25.348717    5124 mounter.go:121] Waiting for volume \"vol-088b2024b45b7a38d\" to be mounted\nI0910 05:35:26.348822    5124 volumes.go:234] volume vol-088b2024b45b7a38d not mounted at /rootfs/dev/xvdu\nI0910 05:35:26.348870    5124 volumes.go:248] found nvme volume \"nvme-Amazon_Elastic_Block_Store_vol088b2024b45b7a38d\" at \"/dev/nvme1n1\"\nI0910 05:35:26.348883    5124 mounter.go:125] Found volume \"vol-088b2024b45b7a38d\" mounted at device \"/dev/nvme1n1\"\nI0910 05:35:26.349530    5124 mounter.go:171] Creating mount directory \"/rootfs/mnt/master-vol-088b2024b45b7a38d\"\nI0910 05:35:26.349617    5124 mounter.go:176] Mounting device \"/dev/nvme1n1\" on \"/mnt/master-vol-088b2024b45b7a38d\"\nI0910 05:35:26.349651    5124 mount_linux.go:446] Attempting to determine if disk \"/dev/nvme1n1\" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/nvme1n1])\nI0910 05:35:26.349684    5124 nsenter.go:132] Running nsenter command: nsenter [--mount=/rootfs/proc/1/ns/mnt -- blkid -p -s TYPE -s PTTYPE -o export /dev/nvme1n1]\nI0910 05:35:26.369359    5124 mount_linux.go:449] Output: \"\"\nI0910 05:35:26.369390    5124 mount_linux.go:408] Disk \"/dev/nvme1n1\" appears to be unformatted, attempting to format as type: \"ext4\" with options: [-F -m0 /dev/nvme1n1]\nI0910 05:35:26.369410    5124 nsenter.go:132] Running nsenter command: nsenter [--mount=/rootfs/proc/1/ns/mnt -- mkfs.ext4 -F -m0 /dev/nvme1n1]\nI0910 05:35:26.618594    5124 mount_linux.go:418] Disk successfully formatted (mkfs): ext4 - /dev/nvme1n1 /mnt/master-vol-088b2024b45b7a38d\nI0910 05:35:26.618614    5124 
mount_linux.go:436] Attempting to mount disk /dev/nvme1n1 in ext4 format at /mnt/master-vol-088b2024b45b7a38d\nI0910 05:35:26.618632    5124 nsenter.go:80] nsenter mount /dev/nvme1n1 /mnt/master-vol-088b2024b45b7a38d ext4 [defaults]\nI0910 05:35:26.618656    5124 nsenter.go:132] Running nsenter command: nsenter [--mount=/rootfs/proc/1/ns/mnt -- /bin/systemd-run --description=Kubernetes transient mount for /mnt/master-vol-088b2024b45b7a38d --scope -- /bin/mount -t ext4 -o defaults /dev/nvme1n1 /mnt/master-vol-088b2024b45b7a38d]\nI0910 05:35:26.638304    5124 nsenter.go:84] Output of mounting /dev/nvme1n1 to /mnt/master-vol-088b2024b45b7a38d: Running scope as unit: run-r4226ccbd3dab430a9b35513927b6f9c4.scope\nI0910 05:35:26.638329    5124 mount_linux.go:446] Attempting to determine if disk \"/dev/nvme1n1\" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/nvme1n1])\nI0910 05:35:26.638354    5124 nsenter.go:132] Running nsenter command: nsenter [--mount=/rootfs/proc/1/ns/mnt -- blkid -p -s TYPE -s PTTYPE -o export /dev/nvme1n1]\nI0910 05:35:26.651113    5124 mount_linux.go:449] Output: \"DEVNAME=/dev/nvme1n1\\nTYPE=ext4\\n\"\nI0910 05:35:26.651132    5124 resizefs_linux.go:55] ResizeFS.Resize - Expanding mounted volume /dev/nvme1n1\nI0910 05:35:26.651144    5124 nsenter.go:132] Running nsenter command: nsenter [--mount=/rootfs/proc/1/ns/mnt -- resize2fs /dev/nvme1n1]\nI0910 05:35:26.654756    5124 resizefs_linux.go:70] Device /dev/nvme1n1 resized successfully\nI0910 05:35:26.667298    5124 mount_linux.go:206] Detected OS with systemd\nI0910 05:35:26.668755    5124 mounter.go:224] mounting inside container: /rootfs/dev/nvme1n1 -> /rootfs/mnt/master-vol-088b2024b45b7a38d\nI0910 05:35:26.668770    5124 mount_linux.go:175] Mounting cmd (systemd-run) with arguments (--description=Kubernetes transient mount for /rootfs/mnt/master-vol-088b2024b45b7a38d --scope -- mount  /rootfs/dev/nvme1n1 /rootfs/mnt/master-vol-088b2024b45b7a38d)\nI0910 05:35:26.697956    5124 mounter.go:94] mounted master volume \"vol-088b2024b45b7a38d\" on /mnt/master-vol-088b2024b45b7a38d\nI0910 05:35:26.697986    5124 main.go:320] discovered IP address: 172.20.37.129\nI0910 05:35:26.697991    5124 main.go:325] Setting data dir to /rootfs/mnt/master-vol-088b2024b45b7a38d\nI0910 05:35:26.796854    5124 certs.go:211] generating certificate for \"etcd-manager-server-etcd-events-a\"\nI0910 05:35:27.100347    5124 certs.go:211] generating certificate for \"etcd-manager-client-etcd-events-a\"\nI0910 05:35:27.105611    5124 server.go:87] starting GRPC server using TLS, ServerName=\"etcd-manager-server-etcd-events-a\"\nI0910 05:35:27.106214    5124 main.go:473] peerClientIPs: [172.20.37.129]\nI0910 05:35:27.258665    5124 certs.go:211] generating certificate for \"etcd-manager-etcd-events-a\"\nI0910 05:35:27.260573    5124 server.go:105] GRPC server listening on \"172.20.37.129:3997\"\nI0910 05:35:27.260819    5124 volumes.go:86] AWS API Request: ec2/DescribeVolumes\nI0910 05:35:27.429010    5124 volumes.go:86] AWS API Request: ec2/DescribeInstances\nI0910 05:35:27.498969    5124 peers.go:115] found new candidate peer from discovery: etcd-events-a [{172.20.37.129 0} {172.20.37.129 0}]\nI0910 05:35:27.499016    5124 hosts.go:84] hosts update: primary=map[], fallbacks=map[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:[172.20.37.129 172.20.37.129]], final=map[172.20.37.129:[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io 
etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io]]\nI0910 05:35:27.499176    5124 peers.go:295] connecting to peer \"etcd-events-a\" with TLS policy, servername=\"etcd-manager-server-etcd-events-a\"\nI0910 05:35:29.261530    5124 controller.go:187] starting controller iteration\nI0910 05:35:29.261927    5124 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.37.129:3997\" > leadership_token:\"Wolx0lcDXnJukyfBInCS0w\" healthy:<id:\"etcd-events-a\" endpoints:\"172.20.37.129:3997\" > > \nI0910 05:35:29.262129    5124 commands.go:41] refreshing commands\nI0910 05:35:29.262293    5124 s3context.go:334] product_uuid is \"ec23a353-0d6c-5600-b380-62777f3e55ab\", assuming running on EC2\nI0910 05:35:29.264173    5124 s3context.go:166] got region from metadata: \"us-west-2\"\nI0910 05:35:29.294212    5124 s3context.go:213] found bucket in region \"us-west-1\"\nI0910 05:35:29.486894    5124 vfs.go:120] listed commands in s3://k8s-kops-prow/e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io/backups/etcd/events/control: 0 commands\nI0910 05:35:29.486917    5124 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-spec\"\nI0910 05:35:39.526963    5124 controller.go:187] starting controller iteration\nI0910 05:35:39.527021    5124 controller.go:264] Broadcasting leadership assertion with token \"Wolx0lcDXnJukyfBInCS0w\"\nI0910 05:35:39.527363    5124 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.37.129:3997\" > leadership_token:\"Wolx0lcDXnJukyfBInCS0w\" healthy:<id:\"etcd-events-a\" endpoints:\"172.20.37.129:3997\" > > \nI0910 05:35:39.527534    5124 controller.go:293] I am leader with token \"Wolx0lcDXnJukyfBInCS0w\"\nI0910 05:35:39.527804    5124 controller.go:300] etcd cluster state: etcdClusterState\n  members:\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"172.20.37.129:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:3995\" > }\nI0910 05:35:39.527864    5124 controller.go:301] etcd cluster members: map[]\nI0910 05:35:39.527876    5124 controller.go:639] sending member map to all peers: \nI0910 05:35:39.528089    5124 commands.go:38] not refreshing commands - TTL not hit\nI0910 05:35:39.528107    5124 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created\"\nI0910 05:35:39.639234    5124 controller.go:357] detected that there is no existing cluster\nI0910 05:35:39.639250    5124 commands.go:41] refreshing commands\nI0910 05:35:39.677934    5124 vfs.go:120] listed commands in s3://k8s-kops-prow/e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io/backups/etcd/events/control: 0 commands\nI0910 05:35:39.677954    5124 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-spec\"\nI0910 05:35:39.725558    5124 controller.go:639] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io\" addresses:\"172.20.37.129\" > \nI0910 05:35:39.725933    5124 
etcdserver.go:248] updating hosts: map[172.20.37.129:[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io]]\nI0910 05:35:39.726080    5124 hosts.go:84] hosts update: primary=map[172.20.37.129:[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:[172.20.37.129 172.20.37.129]], final=map[172.20.37.129:[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io]]\nI0910 05:35:39.726171    5124 hosts.go:181] skipping update of unchanged /etc/hosts\nI0910 05:35:39.726337    5124 newcluster.go:136] starting new etcd cluster with [etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"172.20.37.129:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:3995\" > }]\nI0910 05:35:39.727523    5124 newcluster.go:153] JoinClusterResponse: \nI0910 05:35:39.728326    5124 etcdserver.go:556] starting etcd with state new_cluster:true cluster:<cluster_token:\"2WocR1fQSw54vUPtu7cp-w\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > etcd_version:\"3.4.13\" quarantined:true \nI0910 05:35:39.728370    5124 etcdserver.go:565] starting etcd with datadir /rootfs/mnt/master-vol-088b2024b45b7a38d/data/2WocR1fQSw54vUPtu7cp-w\nI0910 05:35:39.729019    5124 pki.go:58] adding peerClientIPs [172.20.37.129]\nI0910 05:35:39.729043    5124 pki.go:66] generating peer keypair for etcd: {CommonName:etcd-events-a Organization:[] AltNames:{DNSNames:[etcd-events-a etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io] IPs:[172.20.37.129 127.0.0.1]} Usages:[2 1]}\nI0910 05:35:39.832861    5124 certs.go:211] generating certificate for \"etcd-events-a\"\nI0910 05:35:39.835140    5124 pki.go:108] building client-serving certificate: {CommonName:etcd-events-a Organization:[] AltNames:{DNSNames:[etcd-events-a etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io] IPs:[127.0.0.1]} Usages:[1 2]}\nI0910 05:35:40.008702    5124 certs.go:211] generating certificate for \"etcd-events-a\"\nI0910 05:35:40.144390    5124 certs.go:211] generating certificate for \"etcd-events-a\"\nI0910 05:35:40.146423    5124 etcdprocess.go:203] executing command /opt/etcd-v3.4.13-linux-amd64/etcd [/opt/etcd-v3.4.13-linux-amd64/etcd]\nI0910 05:35:40.147201    5124 newcluster.go:171] JoinClusterResponse: \nI0910 05:35:40.147278    5124 s3fs.go:199] Writing file \"s3://k8s-kops-prow/e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-spec\"\nI0910 05:35:40.147306    5124 s3context.go:241] Checking default bucket encryption for \"k8s-kops-prow\"\n2021-09-10 05:35:40.154292 I | pkg/flags: recognized and used environment variable ETCD_ADVERTISE_CLIENT_URLS=https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:3995\n2021-09-10 05:35:40.154325 I | pkg/flags: recognized and used environment variable 
ETCD_CERT_FILE=/rootfs/mnt/master-vol-088b2024b45b7a38d/pki/2WocR1fQSw54vUPtu7cp-w/clients/server.crt\n2021-09-10 05:35:40.154332 I | pkg/flags: recognized and used environment variable ETCD_CLIENT_CERT_AUTH=true\n2021-09-10 05:35:40.154342 I | pkg/flags: recognized and used environment variable ETCD_DATA_DIR=/rootfs/mnt/master-vol-088b2024b45b7a38d/data/2WocR1fQSw54vUPtu7cp-w\n2021-09-10 05:35:40.154357 I | pkg/flags: recognized and used environment variable ETCD_ENABLE_V2=false\n2021-09-10 05:35:40.154386 I | pkg/flags: recognized and used environment variable ETCD_INITIAL_ADVERTISE_PEER_URLS=https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\n2021-09-10 05:35:40.154391 I | pkg/flags: recognized and used environment variable ETCD_INITIAL_CLUSTER=etcd-events-a=https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\n2021-09-10 05:35:40.154395 I | pkg/flags: recognized and used environment variable ETCD_INITIAL_CLUSTER_STATE=new\n2021-09-10 05:35:40.154402 I | pkg/flags: recognized and used environment variable ETCD_INITIAL_CLUSTER_TOKEN=2WocR1fQSw54vUPtu7cp-w\n2021-09-10 05:35:40.154408 I | pkg/flags: recognized and used environment variable ETCD_KEY_FILE=/rootfs/mnt/master-vol-088b2024b45b7a38d/pki/2WocR1fQSw54vUPtu7cp-w/clients/server.key\n2021-09-10 05:35:40.154416 I | pkg/flags: recognized and used environment variable ETCD_LISTEN_CLIENT_URLS=https://0.0.0.0:3995\n2021-09-10 05:35:40.154426 I | pkg/flags: recognized and used environment variable ETCD_LISTEN_PEER_URLS=https://0.0.0.0:2381\n2021-09-10 05:35:40.154432 I | pkg/flags: recognized and used environment variable ETCD_LOG_OUTPUTS=stdout\n2021-09-10 05:35:40.154440 I | pkg/flags: recognized and used environment variable ETCD_LOGGER=zap\n2021-09-10 05:35:40.154450 I | pkg/flags: recognized and used environment variable ETCD_NAME=etcd-events-a\n2021-09-10 05:35:40.154457 I | pkg/flags: recognized and used environment variable ETCD_PEER_CERT_FILE=/rootfs/mnt/master-vol-088b2024b45b7a38d/pki/2WocR1fQSw54vUPtu7cp-w/peers/me.crt\n2021-09-10 05:35:40.154461 I | pkg/flags: recognized and used environment variable ETCD_PEER_CLIENT_CERT_AUTH=true\n2021-09-10 05:35:40.154467 I | pkg/flags: recognized and used environment variable ETCD_PEER_KEY_FILE=/rootfs/mnt/master-vol-088b2024b45b7a38d/pki/2WocR1fQSw54vUPtu7cp-w/peers/me.key\n2021-09-10 05:35:40.154472 I | pkg/flags: recognized and used environment variable ETCD_PEER_TRUSTED_CA_FILE=/rootfs/mnt/master-vol-088b2024b45b7a38d/pki/2WocR1fQSw54vUPtu7cp-w/peers/ca.crt\n2021-09-10 05:35:40.154492 I | pkg/flags: recognized and used environment variable ETCD_TRUSTED_CA_FILE=/rootfs/mnt/master-vol-088b2024b45b7a38d/pki/2WocR1fQSw54vUPtu7cp-w/clients/ca.crt\n2021-09-10 05:35:40.154499 W | pkg/flags: unrecognized environment variable ETCD_LISTEN_METRICS_URLS=\n{\"level\":\"info\",\"ts\":\"2021-09-10T05:35:40.154Z\",\"caller\":\"embed/etcd.go:117\",\"msg\":\"configuring peer listeners\",\"listen-peer-urls\":[\"https://0.0.0.0:2381\"]}\n{\"level\":\"info\",\"ts\":\"2021-09-10T05:35:40.154Z\",\"caller\":\"embed/etcd.go:468\",\"msg\":\"starting with peer TLS\",\"tls-info\":\"cert = /rootfs/mnt/master-vol-088b2024b45b7a38d/pki/2WocR1fQSw54vUPtu7cp-w/peers/me.crt, key = /rootfs/mnt/master-vol-088b2024b45b7a38d/pki/2WocR1fQSw54vUPtu7cp-w/peers/me.key, trusted-ca = /rootfs/mnt/master-vol-088b2024b45b7a38d/pki/2WocR1fQSw54vUPtu7cp-w/peers/ca.crt, client-cert-auth = true, crl-file = 
\",\"cipher-suites\":[]}\n{\"level\":\"info\",\"ts\":\"2021-09-10T05:35:40.155Z\",\"caller\":\"embed/etcd.go:127\",\"msg\":\"configuring client listeners\",\"listen-client-urls\":[\"https://0.0.0.0:3995\"]}\n{\"level\":\"info\",\"ts\":\"2021-09-10T05:35:40.155Z\",\"caller\":\"embed/etcd.go:302\",\"msg\":\"starting an etcd server\",\"etcd-version\":\"3.4.13\",\"git-sha\":\"ae9734ed2\",\"go-version\":\"go1.12.17\",\"go-os\":\"linux\",\"go-arch\":\"amd64\",\"max-cpu-set\":2,\"max-cpu-available\":2,\"member-initialized\":false,\"name\":\"etcd-events-a\",\"data-dir\":\"/rootfs/mnt/master-vol-088b2024b45b7a38d/data/2WocR1fQSw54vUPtu7cp-w\",\"wal-dir\":\"\",\"wal-dir-dedicated\":\"\",\"member-dir\":\"/rootfs/mnt/master-vol-088b2024b45b7a38d/data/2WocR1fQSw54vUPtu7cp-w/member\",\"force-new-cluster\":false,\"heartbeat-interval\":\"100ms\",\"election-timeout\":\"1s\",\"initial-election-tick-advance\":true,\"snapshot-count\":100000,\"snapshot-catchup-entries\":5000,\"initial-advertise-peer-urls\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\"],\"listen-peer-urls\":[\"https://0.0.0.0:2381\"],\"advertise-client-urls\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:3995\"],\"listen-client-urls\":[\"https://0.0.0.0:3995\"],\"listen-metrics-urls\":[],\"cors\":[\"*\"],\"host-whitelist\":[\"*\"],\"initial-cluster\":\"etcd-events-a=https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\",\"initial-cluster-state\":\"new\",\"initial-cluster-token\":\"2WocR1fQSw54vUPtu7cp-w\",\"quota-size-bytes\":2147483648,\"pre-vote\":false,\"initial-corrupt-check\":false,\"corrupt-check-time-interval\":\"0s\",\"auto-compaction-mode\":\"periodic\",\"auto-compaction-retention\":\"0s\",\"auto-compaction-interval\":\"0s\",\"discovery-url\":\"\",\"discovery-proxy\":\"\"}\n{\"level\":\"info\",\"ts\":\"2021-09-10T05:35:40.160Z\",\"caller\":\"etcdserver/backend.go:80\",\"msg\":\"opened backend db\",\"path\":\"/rootfs/mnt/master-vol-088b2024b45b7a38d/data/2WocR1fQSw54vUPtu7cp-w/member/snap/db\",\"took\":\"4.035435ms\"}\n{\"level\":\"info\",\"ts\":\"2021-09-10T05:35:40.160Z\",\"caller\":\"netutil/netutil.go:112\",\"msg\":\"resolved URL Host\",\"url\":\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\",\"host\":\"etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\",\"resolved-addr\":\"172.20.37.129:2381\"}\n{\"level\":\"info\",\"ts\":\"2021-09-10T05:35:40.160Z\",\"caller\":\"netutil/netutil.go:112\",\"msg\":\"resolved URL Host\",\"url\":\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\",\"host\":\"etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\",\"resolved-addr\":\"172.20.37.129:2381\"}\n{\"level\":\"info\",\"ts\":\"2021-09-10T05:35:40.165Z\",\"caller\":\"etcdserver/raft.go:486\",\"msg\":\"starting local member\",\"local-member-id\":\"32f4d0aca6cae1e1\",\"cluster-id\":\"a3e8b35e5eb17923\"}\n{\"level\":\"info\",\"ts\":\"2021-09-10T05:35:40.165Z\",\"caller\":\"raft/raft.go:1530\",\"msg\":\"32f4d0aca6cae1e1 switched to configuration voters=()\"}\n{\"level\":\"info\",\"ts\":\"2021-09-10T05:35:40.165Z\",\"caller\":\"raft/raft.go:700\",\"msg\":\"32f4d0aca6cae1e1 became follower at term 0\"}\n{\"level\":\"info\",\"ts\":\"2021-09-10T05:35:40.166Z\",\"caller\":\"raft/raft.go:383\",\"msg\":\"newRaft 32f4d0aca6cae1e1 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 
0]\"}\n{\"level\":\"info\",\"ts\":\"2021-09-10T05:35:40.166Z\",\"caller\":\"raft/raft.go:700\",\"msg\":\"32f4d0aca6cae1e1 became follower at term 1\"}\n{\"level\":\"info\",\"ts\":\"2021-09-10T05:35:40.166Z\",\"caller\":\"raft/raft.go:1530\",\"msg\":\"32f4d0aca6cae1e1 switched to configuration voters=(3671789036165063137)\"}\n{\"level\":\"warn\",\"ts\":\"2021-09-10T05:35:40.169Z\",\"caller\":\"auth/store.go:1366\",\"msg\":\"simple token is not cryptographically signed\"}\n{\"level\":\"info\",\"ts\":\"2021-09-10T05:35:40.174Z\",\"caller\":\"etcdserver/quota.go:98\",\"msg\":\"enabled backend quota with default value\",\"quota-name\":\"v3-applier\",\"quota-size-bytes\":2147483648,\"quota-size\":\"2.1 GB\"}\n{\"level\":\"info\",\"ts\":\"2021-09-10T05:35:40.176Z\",\"caller\":\"etcdserver/server.go:803\",\"msg\":\"starting etcd server\",\"local-member-id\":\"32f4d0aca6cae1e1\",\"local-server-version\":\"3.4.13\",\"cluster-version\":\"to_be_decided\"}\n{\"level\":\"info\",\"ts\":\"2021-09-10T05:35:40.177Z\",\"caller\":\"etcdserver/server.go:669\",\"msg\":\"started as single-node; fast-forwarding election ticks\",\"local-member-id\":\"32f4d0aca6cae1e1\",\"forward-ticks\":9,\"forward-duration\":\"900ms\",\"election-ticks\":10,\"election-timeout\":\"1s\"}\n{\"level\":\"info\",\"ts\":\"2021-09-10T05:35:40.177Z\",\"caller\":\"raft/raft.go:1530\",\"msg\":\"32f4d0aca6cae1e1 switched to configuration voters=(3671789036165063137)\"}\n{\"level\":\"info\",\"ts\":\"2021-09-10T05:35:40.178Z\",\"caller\":\"membership/cluster.go:392\",\"msg\":\"added member\",\"cluster-id\":\"a3e8b35e5eb17923\",\"local-member-id\":\"32f4d0aca6cae1e1\",\"added-peer-id\":\"32f4d0aca6cae1e1\",\"added-peer-peer-urls\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\"]}\n{\"level\":\"info\",\"ts\":\"2021-09-10T05:35:40.178Z\",\"caller\":\"embed/etcd.go:711\",\"msg\":\"starting with client TLS\",\"tls-info\":\"cert = /rootfs/mnt/master-vol-088b2024b45b7a38d/pki/2WocR1fQSw54vUPtu7cp-w/clients/server.crt, key = /rootfs/mnt/master-vol-088b2024b45b7a38d/pki/2WocR1fQSw54vUPtu7cp-w/clients/server.key, trusted-ca = /rootfs/mnt/master-vol-088b2024b45b7a38d/pki/2WocR1fQSw54vUPtu7cp-w/clients/ca.crt, client-cert-auth = true, crl-file = \",\"cipher-suites\":[]}\n{\"level\":\"info\",\"ts\":\"2021-09-10T05:35:40.178Z\",\"caller\":\"embed/etcd.go:244\",\"msg\":\"now serving peer/client/metrics\",\"local-member-id\":\"32f4d0aca6cae1e1\",\"initial-advertise-peer-urls\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\"],\"listen-peer-urls\":[\"https://0.0.0.0:2381\"],\"advertise-client-urls\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:3995\"],\"listen-client-urls\":[\"https://0.0.0.0:3995\"],\"listen-metrics-urls\":[]}\n{\"level\":\"info\",\"ts\":\"2021-09-10T05:35:40.178Z\",\"caller\":\"embed/etcd.go:579\",\"msg\":\"serving peer traffic\",\"address\":\"[::]:2381\"}\nI0910 05:35:40.217969    5124 s3fs.go:199] Writing file \"s3://k8s-kops-prow/e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created\"\nI0910 05:35:40.264095    5124 controller.go:187] starting controller iteration\nI0910 05:35:40.264121    5124 controller.go:264] Broadcasting leadership assertion with token \"Wolx0lcDXnJukyfBInCS0w\"\nI0910 05:35:40.264405    5124 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.37.129:3997\" > leadership_token:\"Wolx0lcDXnJukyfBInCS0w\" healthy:<id:\"etcd-events-a\" 
endpoints:\"172.20.37.129:3997\" > > \nI0910 05:35:40.264560    5124 controller.go:293] I am leader with token \"Wolx0lcDXnJukyfBInCS0w\"\nI0910 05:35:40.264985    5124 controller.go:703] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:3995]\n{\"level\":\"info\",\"ts\":\"2021-09-10T05:35:40.366Z\",\"caller\":\"raft/raft.go:923\",\"msg\":\"32f4d0aca6cae1e1 is starting a new election at term 1\"}\n{\"level\":\"info\",\"ts\":\"2021-09-10T05:35:40.366Z\",\"caller\":\"raft/raft.go:713\",\"msg\":\"32f4d0aca6cae1e1 became candidate at term 2\"}\n{\"level\":\"info\",\"ts\":\"2021-09-10T05:35:40.366Z\",\"caller\":\"raft/raft.go:824\",\"msg\":\"32f4d0aca6cae1e1 received MsgVoteResp from 32f4d0aca6cae1e1 at term 2\"}\n{\"level\":\"info\",\"ts\":\"2021-09-10T05:35:40.366Z\",\"caller\":\"raft/raft.go:765\",\"msg\":\"32f4d0aca6cae1e1 became leader at term 2\"}\n{\"level\":\"info\",\"ts\":\"2021-09-10T05:35:40.366Z\",\"caller\":\"raft/node.go:325\",\"msg\":\"raft.node: 32f4d0aca6cae1e1 elected leader 32f4d0aca6cae1e1 at term 2\"}\n{\"level\":\"info\",\"ts\":\"2021-09-10T05:35:40.366Z\",\"caller\":\"etcdserver/server.go:2528\",\"msg\":\"setting up initial cluster version\",\"cluster-version\":\"3.4\"}\n{\"level\":\"info\",\"ts\":\"2021-09-10T05:35:40.366Z\",\"caller\":\"etcdserver/server.go:2037\",\"msg\":\"published local member to cluster through raft\",\"local-member-id\":\"32f4d0aca6cae1e1\",\"local-member-attributes\":\"{Name:etcd-events-a ClientURLs:[https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:3995]}\",\"request-path\":\"/0/members/32f4d0aca6cae1e1/attributes\",\"cluster-id\":\"a3e8b35e5eb17923\",\"publish-timeout\":\"7s\"}\n{\"level\":\"info\",\"ts\":\"2021-09-10T05:35:40.367Z\",\"caller\":\"membership/cluster.go:558\",\"msg\":\"set initial cluster version\",\"cluster-id\":\"a3e8b35e5eb17923\",\"local-member-id\":\"32f4d0aca6cae1e1\",\"cluster-version\":\"3.4\"}\n{\"level\":\"info\",\"ts\":\"2021-09-10T05:35:40.367Z\",\"caller\":\"api/capability.go:76\",\"msg\":\"enabled capabilities for version\",\"cluster-version\":\"3.4\"}\n{\"level\":\"info\",\"ts\":\"2021-09-10T05:35:40.367Z\",\"caller\":\"etcdserver/server.go:2560\",\"msg\":\"cluster version is updated\",\"cluster-version\":\"3.4\"}\n{\"level\":\"info\",\"ts\":\"2021-09-10T05:35:40.368Z\",\"caller\":\"embed/serve.go:191\",\"msg\":\"serving client traffic securely\",\"address\":\"[::]:3995\"}\nI0910 05:35:40.390895    5124 controller.go:300] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:3995\"],\"ID\":\"3671789036165063137\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"172.20.37.129:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:3995\" > etcd_state:<cluster:<cluster_token:\"2WocR1fQSw54vUPtu7cp-w\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\" 
client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > etcd_version:\"3.4.13\" quarantined:true > }\nI0910 05:35:40.391041    5124 controller.go:301] etcd cluster members: map[3671789036165063137:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:3995\"],\"ID\":\"3671789036165063137\"}]\nI0910 05:35:40.391074    5124 controller.go:639] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io\" addresses:\"172.20.37.129\" > \nI0910 05:35:40.391463    5124 etcdserver.go:248] updating hosts: map[172.20.37.129:[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io]]\nI0910 05:35:40.391611    5124 hosts.go:84] hosts update: primary=map[172.20.37.129:[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:[172.20.37.129 172.20.37.129]], final=map[172.20.37.129:[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io]]\nI0910 05:35:40.391692    5124 hosts.go:181] skipping update of unchanged /etc/hosts\nI0910 05:35:40.391809    5124 commands.go:38] not refreshing commands - TTL not hit\nI0910 05:35:40.391886    5124 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created\"\nI0910 05:35:40.426520    5124 controller.go:393] spec member_count:1 etcd_version:\"3.4.13\" \nI0910 05:35:40.427209    5124 backup.go:128] performing snapshot save to /tmp/279436328/snapshot.db.gz\n{\"level\":\"info\",\"ts\":\"2021-09-10T05:35:40.432Z\",\"logger\":\"etcd-client\",\"caller\":\"v3/maintenance.go:211\",\"msg\":\"opened snapshot stream; downloading\"}\n{\"level\":\"info\",\"ts\":\"2021-09-10T05:35:40.435Z\",\"caller\":\"v3rpc/maintenance.go:139\",\"msg\":\"sending database snapshot to client\",\"total-bytes\":20480,\"size\":\"20 kB\"}\n{\"level\":\"info\",\"ts\":\"2021-09-10T05:35:40.435Z\",\"caller\":\"v3rpc/maintenance.go:177\",\"msg\":\"sending database sha256 checksum to client\",\"total-bytes\":20480,\"checksum-size\":32}\n{\"level\":\"info\",\"ts\":\"2021-09-10T05:35:40.435Z\",\"caller\":\"v3rpc/maintenance.go:191\",\"msg\":\"successfully sent database snapshot to client\",\"total-bytes\":20480,\"size\":\"20 kB\",\"took\":\"now\"}\n{\"level\":\"info\",\"ts\":\"2021-09-10T05:35:40.436Z\",\"logger\":\"etcd-client\",\"caller\":\"v3/maintenance.go:219\",\"msg\":\"completed snapshot read; closing\"}\nI0910 05:35:40.438280    5124 s3fs.go:199] Writing file \"s3://k8s-kops-prow/e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io/backups/etcd/events/2021-09-10T05:35:40Z-000001/etcd.backup.gz\"\nI0910 05:35:40.486331    5124 s3fs.go:199] Writing file \"s3://k8s-kops-prow/e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io/backups/etcd/events/2021-09-10T05:35:40Z-000001/_etcd_backup.meta\"\nI0910 05:35:40.529756    5124 backup.go:153] backup complete: name:\"2021-09-10T05:35:40Z-000001\" \nI0910 05:35:40.530413    5124 controller.go:935] backup response: name:\"2021-09-10T05:35:40Z-000001\" \nI0910 05:35:40.530449    5124 controller.go:574] took backup: name:\"2021-09-10T05:35:40Z-000001\" \nI0910 05:35:40.572046    5124 vfs.go:118] listed 
backups in s3://k8s-kops-prow/e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io/backups/etcd/events: [2021-09-10T05:35:40Z-000001]\nI0910 05:35:40.572070    5124 cleanup.go:166] retaining backup \"2021-09-10T05:35:40Z-000001\"\nI0910 05:35:40.572097    5124 restore.go:98] Setting quarantined state to false\nI0910 05:35:40.572428    5124 etcdserver.go:393] Reconfigure request: header:<leadership_token:\"Wolx0lcDXnJukyfBInCS0w\" cluster_name:\"etcd-events\" > \nI0910 05:35:40.572476    5124 etcdserver.go:436] Stopping etcd for reconfigure request: header:<leadership_token:\"Wolx0lcDXnJukyfBInCS0w\" cluster_name:\"etcd-events\" > \nI0910 05:35:40.572488    5124 etcdserver.go:640] killing etcd with datadir /rootfs/mnt/master-vol-088b2024b45b7a38d/data/2WocR1fQSw54vUPtu7cp-w\nI0910 05:35:40.572617    5124 etcdprocess.go:131] Waiting for etcd to exit\nI0910 05:35:40.672868    5124 etcdprocess.go:131] Waiting for etcd to exit\nI0910 05:35:40.673052    5124 etcdprocess.go:136] Exited etcd: signal: killed\nI0910 05:35:40.673259    5124 etcdserver.go:443] updated cluster state: cluster:<cluster_token:\"2WocR1fQSw54vUPtu7cp-w\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > etcd_version:\"3.4.13\" \nI0910 05:35:40.673538    5124 etcdserver.go:448] Starting etcd version \"3.4.13\"\nI0910 05:35:40.673645    5124 etcdserver.go:556] starting etcd with state cluster:<cluster_token:\"2WocR1fQSw54vUPtu7cp-w\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > etcd_version:\"3.4.13\" \nI0910 05:35:40.673785    5124 etcdserver.go:565] starting etcd with datadir /rootfs/mnt/master-vol-088b2024b45b7a38d/data/2WocR1fQSw54vUPtu7cp-w\nI0910 05:35:40.674103    5124 pki.go:58] adding peerClientIPs [172.20.37.129]\nI0910 05:35:40.674260    5124 pki.go:66] generating peer keypair for etcd: {CommonName:etcd-events-a Organization:[] AltNames:{DNSNames:[etcd-events-a etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io] IPs:[172.20.37.129 127.0.0.1]} Usages:[2 1]}\nI0910 05:35:40.674708    5124 certs.go:151] existing certificate not valid after 2023-09-10T05:35:39Z; will regenerate\nI0910 05:35:40.674857    5124 certs.go:211] generating certificate for \"etcd-events-a\"\nI0910 05:35:40.678582    5124 pki.go:108] building client-serving certificate: {CommonName:etcd-events-a Organization:[] AltNames:{DNSNames:[etcd-events-a etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io] IPs:[127.0.0.1]} Usages:[1 2]}\nI0910 05:35:40.678904    5124 certs.go:151] existing certificate not valid after 2023-09-10T05:35:40Z; will regenerate\nI0910 05:35:40.679012    5124 certs.go:211] generating certificate for \"etcd-events-a\"\nI0910 05:35:41.185995    5124 certs.go:211] generating certificate for \"etcd-events-a\"\nI0910 05:35:41.189171    5124 etcdprocess.go:203] executing command /opt/etcd-v3.4.13-linux-amd64/etcd [/opt/etcd-v3.4.13-linux-amd64/etcd]\nI0910 05:35:41.190326    
5124 restore.go:116] ReconfigureResponse: \nI0910 05:35:41.193494    5124 controller.go:187] starting controller iteration\nI0910 05:35:41.193516    5124 controller.go:264] Broadcasting leadership assertion with token \"Wolx0lcDXnJukyfBInCS0w\"\nI0910 05:35:41.193772    5124 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.37.129:3997\" > leadership_token:\"Wolx0lcDXnJukyfBInCS0w\" healthy:<id:\"etcd-events-a\" endpoints:\"172.20.37.129:3997\" > > \nI0910 05:35:41.193904    5124 controller.go:293] I am leader with token \"Wolx0lcDXnJukyfBInCS0w\"\nI0910 05:35:41.194359    5124 controller.go:703] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002]\n2021-09-10 05:35:41.198010 I | pkg/flags: recognized and used environment variable ETCD_ADVERTISE_CLIENT_URLS=https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\n2021-09-10 05:35:41.198208 I | pkg/flags: recognized and used environment variable ETCD_CERT_FILE=/rootfs/mnt/master-vol-088b2024b45b7a38d/pki/2WocR1fQSw54vUPtu7cp-w/clients/server.crt\n2021-09-10 05:35:41.198329 I | pkg/flags: recognized and used environment variable ETCD_CLIENT_CERT_AUTH=true\n2021-09-10 05:35:41.198419 I | pkg/flags: recognized and used environment variable ETCD_DATA_DIR=/rootfs/mnt/master-vol-088b2024b45b7a38d/data/2WocR1fQSw54vUPtu7cp-w\n2021-09-10 05:35:41.198521 I | pkg/flags: recognized and used environment variable ETCD_ENABLE_V2=false\n2021-09-10 05:35:41.198628 I | pkg/flags: recognized and used environment variable ETCD_INITIAL_ADVERTISE_PEER_URLS=https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\n2021-09-10 05:35:41.198715 I | pkg/flags: recognized and used environment variable ETCD_INITIAL_CLUSTER=etcd-events-a=https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\n2021-09-10 05:35:41.198810 I | pkg/flags: recognized and used environment variable ETCD_INITIAL_CLUSTER_STATE=existing\n2021-09-10 05:35:41.198891 I | pkg/flags: recognized and used environment variable ETCD_INITIAL_CLUSTER_TOKEN=2WocR1fQSw54vUPtu7cp-w\n2021-09-10 05:35:41.198993 I | pkg/flags: recognized and used environment variable ETCD_KEY_FILE=/rootfs/mnt/master-vol-088b2024b45b7a38d/pki/2WocR1fQSw54vUPtu7cp-w/clients/server.key\n2021-09-10 05:35:41.199073 I | pkg/flags: recognized and used environment variable ETCD_LISTEN_CLIENT_URLS=https://0.0.0.0:4002\n2021-09-10 05:35:41.199165 I | pkg/flags: recognized and used environment variable ETCD_LISTEN_PEER_URLS=https://0.0.0.0:2381\n2021-09-10 05:35:41.199242 I | pkg/flags: recognized and used environment variable ETCD_LOG_OUTPUTS=stdout\n2021-09-10 05:35:41.199333 I | pkg/flags: recognized and used environment variable ETCD_LOGGER=zap\n2021-09-10 05:35:41.199413 I | pkg/flags: recognized and used environment variable ETCD_NAME=etcd-events-a\n2021-09-10 05:35:41.199502 I | pkg/flags: recognized and used environment variable ETCD_PEER_CERT_FILE=/rootfs/mnt/master-vol-088b2024b45b7a38d/pki/2WocR1fQSw54vUPtu7cp-w/peers/me.crt\n2021-09-10 05:35:41.199584 I | pkg/flags: recognized and used environment variable ETCD_PEER_CLIENT_CERT_AUTH=true\n2021-09-10 05:35:41.199673 I | pkg/flags: recognized and used environment variable ETCD_PEER_KEY_FILE=/rootfs/mnt/master-vol-088b2024b45b7a38d/pki/2WocR1fQSw54vUPtu7cp-w/peers/me.key\n2021-09-10 05:35:41.199747 I | pkg/flags: recognized and used environment variable 
ETCD_PEER_TRUSTED_CA_FILE=/rootfs/mnt/master-vol-088b2024b45b7a38d/pki/2WocR1fQSw54vUPtu7cp-w/peers/ca.crt\n2021-09-10 05:35:41.199847 I | pkg/flags: recognized and used environment variable ETCD_TRUSTED_CA_FILE=/rootfs/mnt/master-vol-088b2024b45b7a38d/pki/2WocR1fQSw54vUPtu7cp-w/clients/ca.crt\n2021-09-10 05:35:41.199923 W | pkg/flags: unrecognized environment variable ETCD_LISTEN_METRICS_URLS=\n{\"level\":\"info\",\"ts\":\"2021-09-10T05:35:41.200Z\",\"caller\":\"etcdmain/etcd.go:134\",\"msg\":\"server has been already initialized\",\"data-dir\":\"/rootfs/mnt/master-vol-088b2024b45b7a38d/data/2WocR1fQSw54vUPtu7cp-w\",\"dir-type\":\"member\"}\n{\"level\":\"info\",\"ts\":\"2021-09-10T05:35:41.200Z\",\"caller\":\"embed/etcd.go:117\",\"msg\":\"configuring peer listeners\",\"listen-peer-urls\":[\"https://0.0.0.0:2381\"]}\n{\"level\":\"info\",\"ts\":\"2021-09-10T05:35:41.200Z\",\"caller\":\"embed/etcd.go:468\",\"msg\":\"starting with peer TLS\",\"tls-info\":\"cert = /rootfs/mnt/master-vol-088b2024b45b7a38d/pki/2WocR1fQSw54vUPtu7cp-w/peers/me.crt, key = /rootfs/mnt/master-vol-088b2024b45b7a38d/pki/2WocR1fQSw54vUPtu7cp-w/peers/me.key, trusted-ca = /rootfs/mnt/master-vol-088b2024b45b7a38d/pki/2WocR1fQSw54vUPtu7cp-w/peers/ca.crt, client-cert-auth = true, crl-file = \",\"cipher-suites\":[]}\n{\"level\":\"info\",\"ts\":\"2021-09-10T05:35:41.201Z\",\"caller\":\"embed/etcd.go:127\",\"msg\":\"configuring client listeners\",\"listen-client-urls\":[\"https://0.0.0.0:4002\"]}\n{\"level\":\"info\",\"ts\":\"2021-09-10T05:35:41.201Z\",\"caller\":\"embed/etcd.go:302\",\"msg\":\"starting an etcd server\",\"etcd-version\":\"3.4.13\",\"git-sha\":\"ae9734ed2\",\"go-version\":\"go1.12.17\",\"go-os\":\"linux\",\"go-arch\":\"amd64\",\"max-cpu-set\":2,\"max-cpu-available\":2,\"member-initialized\":true,\"name\":\"etcd-events-a\",\"data-dir\":\"/rootfs/mnt/master-vol-088b2024b45b7a38d/data/2WocR1fQSw54vUPtu7cp-w\",\"wal-dir\":\"\",\"wal-dir-dedicated\":\"\",\"member-dir\":\"/rootfs/mnt/master-vol-088b2024b45b7a38d/data/2WocR1fQSw54vUPtu7cp-w/member\",\"force-new-cluster\":false,\"heartbeat-interval\":\"100ms\",\"election-timeout\":\"1s\",\"initial-election-tick-advance\":true,\"snapshot-count\":100000,\"snapshot-catchup-entries\":5000,\"initial-advertise-peer-urls\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\"],\"listen-peer-urls\":[\"https://0.0.0.0:2381\"],\"advertise-client-urls\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\"],\"listen-client-urls\":[\"https://0.0.0.0:4002\"],\"listen-metrics-urls\":[],\"cors\":[\"*\"],\"host-whitelist\":[\"*\"],\"initial-cluster\":\"\",\"initial-cluster-state\":\"existing\",\"initial-cluster-token\":\"\",\"quota-size-bytes\":2147483648,\"pre-vote\":false,\"initial-corrupt-check\":false,\"corrupt-check-time-interval\":\"0s\",\"auto-compaction-mode\":\"periodic\",\"auto-compaction-retention\":\"0s\",\"auto-compaction-interval\":\"0s\",\"discovery-url\":\"\",\"discovery-proxy\":\"\"}\n{\"level\":\"info\",\"ts\":\"2021-09-10T05:35:41.202Z\",\"caller\":\"etcdserver/backend.go:80\",\"msg\":\"opened backend db\",\"path\":\"/rootfs/mnt/master-vol-088b2024b45b7a38d/data/2WocR1fQSw54vUPtu7cp-w/member/snap/db\",\"took\":\"131.059µs\"}\n{\"level\":\"info\",\"ts\":\"2021-09-10T05:35:41.203Z\",\"caller\":\"etcdserver/raft.go:536\",\"msg\":\"restarting local 
member\",\"cluster-id\":\"a3e8b35e5eb17923\",\"local-member-id\":\"32f4d0aca6cae1e1\",\"commit-index\":4}\n{\"level\":\"info\",\"ts\":\"2021-09-10T05:35:41.203Z\",\"caller\":\"raft/raft.go:1530\",\"msg\":\"32f4d0aca6cae1e1 switched to configuration voters=()\"}\n{\"level\":\"info\",\"ts\":\"2021-09-10T05:35:41.203Z\",\"caller\":\"raft/raft.go:700\",\"msg\":\"32f4d0aca6cae1e1 became follower at term 2\"}\n{\"level\":\"info\",\"ts\":\"2021-09-10T05:35:41.204Z\",\"caller\":\"raft/raft.go:383\",\"msg\":\"newRaft 32f4d0aca6cae1e1 [peers: [], term: 2, commit: 4, applied: 0, lastindex: 4, lastterm: 2]\"}\n{\"level\":\"warn\",\"ts\":\"2021-09-10T05:35:41.205Z\",\"caller\":\"auth/store.go:1366\",\"msg\":\"simple token is not cryptographically signed\"}\n{\"level\":\"info\",\"ts\":\"2021-09-10T05:35:41.207Z\",\"caller\":\"etcdserver/quota.go:98\",\"msg\":\"enabled backend quota with default value\",\"quota-name\":\"v3-applier\",\"quota-size-bytes\":2147483648,\"quota-size\":\"2.1 GB\"}\n{\"level\":\"info\",\"ts\":\"2021-09-10T05:35:41.209Z\",\"caller\":\"etcdserver/server.go:803\",\"msg\":\"starting etcd server\",\"local-member-id\":\"32f4d0aca6cae1e1\",\"local-server-version\":\"3.4.13\",\"cluster-version\":\"to_be_decided\"}\n{\"level\":\"info\",\"ts\":\"2021-09-10T05:35:41.209Z\",\"caller\":\"etcdserver/server.go:691\",\"msg\":\"starting initial election tick advance\",\"election-ticks\":10}\n{\"level\":\"info\",\"ts\":\"2021-09-10T05:35:41.210Z\",\"caller\":\"raft/raft.go:1530\",\"msg\":\"32f4d0aca6cae1e1 switched to configuration voters=(3671789036165063137)\"}\n{\"level\":\"info\",\"ts\":\"2021-09-10T05:35:41.210Z\",\"caller\":\"membership/cluster.go:392\",\"msg\":\"added member\",\"cluster-id\":\"a3e8b35e5eb17923\",\"local-member-id\":\"32f4d0aca6cae1e1\",\"added-peer-id\":\"32f4d0aca6cae1e1\",\"added-peer-peer-urls\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\"]}\n{\"level\":\"info\",\"ts\":\"2021-09-10T05:35:41.211Z\",\"caller\":\"membership/cluster.go:558\",\"msg\":\"set initial cluster version\",\"cluster-id\":\"a3e8b35e5eb17923\",\"local-member-id\":\"32f4d0aca6cae1e1\",\"cluster-version\":\"3.4\"}\n{\"level\":\"info\",\"ts\":\"2021-09-10T05:35:41.211Z\",\"caller\":\"api/capability.go:76\",\"msg\":\"enabled capabilities for version\",\"cluster-version\":\"3.4\"}\n{\"level\":\"info\",\"ts\":\"2021-09-10T05:35:41.218Z\",\"caller\":\"embed/etcd.go:711\",\"msg\":\"starting with client TLS\",\"tls-info\":\"cert = /rootfs/mnt/master-vol-088b2024b45b7a38d/pki/2WocR1fQSw54vUPtu7cp-w/clients/server.crt, key = /rootfs/mnt/master-vol-088b2024b45b7a38d/pki/2WocR1fQSw54vUPtu7cp-w/clients/server.key, trusted-ca = /rootfs/mnt/master-vol-088b2024b45b7a38d/pki/2WocR1fQSw54vUPtu7cp-w/clients/ca.crt, client-cert-auth = true, crl-file = \",\"cipher-suites\":[]}\n{\"level\":\"info\",\"ts\":\"2021-09-10T05:35:41.218Z\",\"caller\":\"embed/etcd.go:244\",\"msg\":\"now serving peer/client/metrics\",\"local-member-id\":\"32f4d0aca6cae1e1\",\"initial-advertise-peer-urls\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\"],\"listen-peer-urls\":[\"https://0.0.0.0:2381\"],\"advertise-client-urls\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\"],\"listen-client-urls\":[\"https://0.0.0.0:4002\"],\"listen-metrics-urls\":[]}\n{\"level\":\"info\",\"ts\":\"2021-09-10T05:35:41.218Z\",\"caller\":\"embed/etcd.go:579\",\"msg\":\"serving peer 
traffic\",\"address\":\"[::]:2381\"}\n{\"level\":\"info\",\"ts\":\"2021-09-10T05:35:42.204Z\",\"caller\":\"raft/raft.go:923\",\"msg\":\"32f4d0aca6cae1e1 is starting a new election at term 2\"}\n{\"level\":\"info\",\"ts\":\"2021-09-10T05:35:42.204Z\",\"caller\":\"raft/raft.go:713\",\"msg\":\"32f4d0aca6cae1e1 became candidate at term 3\"}\n{\"level\":\"info\",\"ts\":\"2021-09-10T05:35:42.204Z\",\"caller\":\"raft/raft.go:824\",\"msg\":\"32f4d0aca6cae1e1 received MsgVoteResp from 32f4d0aca6cae1e1 at term 3\"}\n{\"level\":\"info\",\"ts\":\"2021-09-10T05:35:42.204Z\",\"caller\":\"raft/raft.go:765\",\"msg\":\"32f4d0aca6cae1e1 became leader at term 3\"}\n{\"level\":\"info\",\"ts\":\"2021-09-10T05:35:42.204Z\",\"caller\":\"raft/node.go:325\",\"msg\":\"raft.node: 32f4d0aca6cae1e1 elected leader 32f4d0aca6cae1e1 at term 3\"}\n{\"level\":\"info\",\"ts\":\"2021-09-10T05:35:42.207Z\",\"caller\":\"etcdserver/server.go:2037\",\"msg\":\"published local member to cluster through raft\",\"local-member-id\":\"32f4d0aca6cae1e1\",\"local-member-attributes\":\"{Name:etcd-events-a ClientURLs:[https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002]}\",\"request-path\":\"/0/members/32f4d0aca6cae1e1/attributes\",\"cluster-id\":\"a3e8b35e5eb17923\",\"publish-timeout\":\"7s\"}\n{\"level\":\"info\",\"ts\":\"2021-09-10T05:35:42.209Z\",\"caller\":\"embed/serve.go:191\",\"msg\":\"serving client traffic securely\",\"address\":\"[::]:4002\"}\nI0910 05:35:42.226328    5124 controller.go:300] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\"],\"ID\":\"3671789036165063137\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"172.20.37.129:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:3995\" > etcd_state:<cluster:<cluster_token:\"2WocR1fQSw54vUPtu7cp-w\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0910 05:35:42.226443    5124 controller.go:301] etcd cluster members: map[3671789036165063137:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\"],\"ID\":\"3671789036165063137\"}]\nI0910 05:35:42.226464    5124 controller.go:639] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io\" addresses:\"172.20.37.129\" > \nI0910 05:35:42.226647    5124 etcdserver.go:248] updating hosts: map[172.20.37.129:[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io]]\nI0910 05:35:42.226662    5124 hosts.go:84] hosts update: 
primary=map[172.20.37.129:[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:[172.20.37.129 172.20.37.129]], final=map[172.20.37.129:[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io]]\nI0910 05:35:42.226719    5124 hosts.go:181] skipping update of unchanged /etc/hosts\nI0910 05:35:42.226813    5124 commands.go:38] not refreshing commands - TTL not hit\nI0910 05:35:42.226826    5124 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created\"\nI0910 05:35:42.262746    5124 controller.go:393] spec member_count:1 etcd_version:\"3.4.13\" \nI0910 05:35:42.262821    5124 controller.go:555] controller loop complete\nI0910 05:35:52.265682    5124 controller.go:187] starting controller iteration\nI0910 05:35:52.265715    5124 controller.go:264] Broadcasting leadership assertion with token \"Wolx0lcDXnJukyfBInCS0w\"\nI0910 05:35:52.266089    5124 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.37.129:3997\" > leadership_token:\"Wolx0lcDXnJukyfBInCS0w\" healthy:<id:\"etcd-events-a\" endpoints:\"172.20.37.129:3997\" > > \nI0910 05:35:52.266217    5124 controller.go:293] I am leader with token \"Wolx0lcDXnJukyfBInCS0w\"\nI0910 05:35:52.267340    5124 controller.go:703] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002]\nI0910 05:35:52.293802    5124 controller.go:300] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\"],\"ID\":\"3671789036165063137\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"172.20.37.129:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:3995\" > etcd_state:<cluster:<cluster_token:\"2WocR1fQSw54vUPtu7cp-w\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0910 05:35:52.294174    5124 controller.go:301] etcd cluster members: map[3671789036165063137:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\"],\"ID\":\"3671789036165063137\"}]\nI0910 05:35:52.294423    5124 controller.go:639] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io\" addresses:\"172.20.37.129\" > \nI0910 05:35:52.294844    5124 etcdserver.go:248] updating hosts: map[172.20.37.129:[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io]]\nI0910 05:35:52.295373    5124 
hosts.go:84] hosts update: primary=map[172.20.37.129:[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:[172.20.37.129 172.20.37.129]], final=map[172.20.37.129:[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io]]\nI0910 05:35:52.295568    5124 hosts.go:181] skipping update of unchanged /etc/hosts\nI0910 05:35:52.295792    5124 commands.go:38] not refreshing commands - TTL not hit\nI0910 05:35:52.295936    5124 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created\"\nI0910 05:35:52.409782    5124 controller.go:393] spec member_count:1 etcd_version:\"3.4.13\" \nI0910 05:35:52.409870    5124 controller.go:555] controller loop complete\nI0910 05:36:02.411334    5124 controller.go:187] starting controller iteration\nI0910 05:36:02.411368    5124 controller.go:264] Broadcasting leadership assertion with token \"Wolx0lcDXnJukyfBInCS0w\"\nI0910 05:36:02.411644    5124 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.37.129:3997\" > leadership_token:\"Wolx0lcDXnJukyfBInCS0w\" healthy:<id:\"etcd-events-a\" endpoints:\"172.20.37.129:3997\" > > \nI0910 05:36:02.411826    5124 controller.go:293] I am leader with token \"Wolx0lcDXnJukyfBInCS0w\"\nI0910 05:36:02.412553    5124 controller.go:703] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002]\nI0910 05:36:02.424660    5124 controller.go:300] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\"],\"ID\":\"3671789036165063137\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"172.20.37.129:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:3995\" > etcd_state:<cluster:<cluster_token:\"2WocR1fQSw54vUPtu7cp-w\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0910 05:36:02.424771    5124 controller.go:301] etcd cluster members: map[3671789036165063137:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\"],\"ID\":\"3671789036165063137\"}]\nI0910 05:36:02.424791    5124 controller.go:639] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io\" addresses:\"172.20.37.129\" > \nI0910 05:36:02.424957    5124 etcdserver.go:248] updating hosts: 
map[172.20.37.129:[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io]]\nI0910 05:36:02.424997    5124 hosts.go:84] hosts update: primary=map[172.20.37.129:[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:[172.20.37.129 172.20.37.129]], final=map[172.20.37.129:[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io]]\nI0910 05:36:02.425082    5124 hosts.go:181] skipping update of unchanged /etc/hosts\nI0910 05:36:02.425204    5124 commands.go:38] not refreshing commands - TTL not hit\nI0910 05:36:02.425239    5124 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created\"\nI0910 05:36:02.536210    5124 controller.go:393] spec member_count:1 etcd_version:\"3.4.13\" \nI0910 05:36:02.536293    5124 controller.go:555] controller loop complete\nI0910 05:36:12.537594    5124 controller.go:187] starting controller iteration\nI0910 05:36:12.537633    5124 controller.go:264] Broadcasting leadership assertion with token \"Wolx0lcDXnJukyfBInCS0w\"\nI0910 05:36:12.538066    5124 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.37.129:3997\" > leadership_token:\"Wolx0lcDXnJukyfBInCS0w\" healthy:<id:\"etcd-events-a\" endpoints:\"172.20.37.129:3997\" > > \nI0910 05:36:12.538296    5124 controller.go:293] I am leader with token \"Wolx0lcDXnJukyfBInCS0w\"\nI0910 05:36:12.539391    5124 controller.go:703] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002]\nI0910 05:36:12.553953    5124 controller.go:300] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\"],\"ID\":\"3671789036165063137\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"172.20.37.129:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:3995\" > etcd_state:<cluster:<cluster_token:\"2WocR1fQSw54vUPtu7cp-w\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0910 05:36:12.554029    5124 controller.go:301] etcd cluster members: map[3671789036165063137:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\"],\"ID\":\"3671789036165063137\"}]\nI0910 05:36:12.554045    5124 controller.go:639] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io\" addresses:\"172.20.37.129\" > \nI0910 05:36:12.554374    5124 
etcdserver.go:248] updating hosts: map[172.20.37.129:[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io]]\nI0910 05:36:12.554395    5124 hosts.go:84] hosts update: primary=map[172.20.37.129:[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:[172.20.37.129 172.20.37.129]], final=map[172.20.37.129:[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io]]\nI0910 05:36:12.554466    5124 hosts.go:181] skipping update of unchanged /etc/hosts\nI0910 05:36:12.554612    5124 commands.go:38] not refreshing commands - TTL not hit\nI0910 05:36:12.554631    5124 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created\"\nI0910 05:36:12.671547    5124 controller.go:393] spec member_count:1 etcd_version:\"3.4.13\" \nI0910 05:36:12.671668    5124 controller.go:555] controller loop complete\nI0910 05:36:22.673521    5124 controller.go:187] starting controller iteration\nI0910 05:36:22.673552    5124 controller.go:264] Broadcasting leadership assertion with token \"Wolx0lcDXnJukyfBInCS0w\"\nI0910 05:36:22.673903    5124 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.37.129:3997\" > leadership_token:\"Wolx0lcDXnJukyfBInCS0w\" healthy:<id:\"etcd-events-a\" endpoints:\"172.20.37.129:3997\" > > \nI0910 05:36:22.674069    5124 controller.go:293] I am leader with token \"Wolx0lcDXnJukyfBInCS0w\"\nI0910 05:36:22.674538    5124 controller.go:703] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002]\nI0910 05:36:22.688925    5124 controller.go:300] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\"],\"ID\":\"3671789036165063137\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"172.20.37.129:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:3995\" > etcd_state:<cluster:<cluster_token:\"2WocR1fQSw54vUPtu7cp-w\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0910 05:36:22.689027    5124 controller.go:301] etcd cluster members: map[3671789036165063137:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\"],\"ID\":\"3671789036165063137\"}]\nI0910 05:36:22.689043    5124 controller.go:639] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io\" addresses:\"172.20.37.129\" > 
\nI0910 05:36:22.689269    5124 etcdserver.go:248] updating hosts: map[172.20.37.129:[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io]]\nI0910 05:36:22.689287    5124 hosts.go:84] hosts update: primary=map[172.20.37.129:[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:[172.20.37.129 172.20.37.129]], final=map[172.20.37.129:[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io]]\nI0910 05:36:22.689350    5124 hosts.go:181] skipping update of unchanged /etc/hosts\nI0910 05:36:22.689445    5124 commands.go:38] not refreshing commands - TTL not hit\nI0910 05:36:22.689482    5124 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created\"\nI0910 05:36:22.801188    5124 controller.go:393] spec member_count:1 etcd_version:\"3.4.13\" \nI0910 05:36:22.801265    5124 controller.go:555] controller loop complete\nI0910 05:36:27.502644    5124 volumes.go:86] AWS API Request: ec2/DescribeVolumes\nI0910 05:36:27.585782    5124 volumes.go:86] AWS API Request: ec2/DescribeInstances\nI0910 05:36:27.630490    5124 hosts.go:84] hosts update: primary=map[172.20.37.129:[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:[172.20.37.129 172.20.37.129]], final=map[172.20.37.129:[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io]]\nI0910 05:36:27.634842    5124 hosts.go:181] skipping update of unchanged /etc/hosts\nI0910 05:36:32.802575    5124 controller.go:187] starting controller iteration\nI0910 05:36:32.802604    5124 controller.go:264] Broadcasting leadership assertion with token \"Wolx0lcDXnJukyfBInCS0w\"\nI0910 05:36:32.802895    5124 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.37.129:3997\" > leadership_token:\"Wolx0lcDXnJukyfBInCS0w\" healthy:<id:\"etcd-events-a\" endpoints:\"172.20.37.129:3997\" > > \nI0910 05:36:32.803086    5124 controller.go:293] I am leader with token \"Wolx0lcDXnJukyfBInCS0w\"\nI0910 05:36:32.803700    5124 controller.go:703] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002]\nI0910 05:36:32.816273    5124 controller.go:300] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\"],\"ID\":\"3671789036165063137\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"172.20.37.129:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:3995\" > etcd_state:<cluster:<cluster_token:\"2WocR1fQSw54vUPtu7cp-w\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\" 
quarantined_client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0910 05:36:32.816588    5124 controller.go:301] etcd cluster members: map[3671789036165063137:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\"],\"ID\":\"3671789036165063137\"}]\nI0910 05:36:32.816619    5124 controller.go:639] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io\" addresses:\"172.20.37.129\" > \nI0910 05:36:32.816913    5124 etcdserver.go:248] updating hosts: map[172.20.37.129:[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io]]\nI0910 05:36:32.816930    5124 hosts.go:84] hosts update: primary=map[172.20.37.129:[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:[172.20.37.129 172.20.37.129]], final=map[172.20.37.129:[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io]]\nI0910 05:36:32.817088    5124 hosts.go:181] skipping update of unchanged /etc/hosts\nI0910 05:36:32.817309    5124 commands.go:38] not refreshing commands - TTL not hit\nI0910 05:36:32.817325    5124 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created\"\nI0910 05:36:32.933406    5124 controller.go:393] spec member_count:1 etcd_version:\"3.4.13\" \nI0910 05:36:32.933513    5124 controller.go:555] controller loop complete\nI0910 05:36:42.934700    5124 controller.go:187] starting controller iteration\nI0910 05:36:42.934731    5124 controller.go:264] Broadcasting leadership assertion with token \"Wolx0lcDXnJukyfBInCS0w\"\nI0910 05:36:42.935149    5124 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.37.129:3997\" > leadership_token:\"Wolx0lcDXnJukyfBInCS0w\" healthy:<id:\"etcd-events-a\" endpoints:\"172.20.37.129:3997\" > > \nI0910 05:36:42.935364    5124 controller.go:293] I am leader with token \"Wolx0lcDXnJukyfBInCS0w\"\nI0910 05:36:42.936064    5124 controller.go:703] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002]\nI0910 05:36:42.951070    5124 controller.go:300] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\"],\"ID\":\"3671789036165063137\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"172.20.37.129:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:3995\" > etcd_state:<cluster:<cluster_token:\"2WocR1fQSw54vUPtu7cp-w\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\" 
client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0910 05:36:42.951153    5124 controller.go:301] etcd cluster members: map[3671789036165063137:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\"],\"ID\":\"3671789036165063137\"}]\nI0910 05:36:42.951167    5124 controller.go:639] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io\" addresses:\"172.20.37.129\" > \nI0910 05:36:42.951376    5124 etcdserver.go:248] updating hosts: map[172.20.37.129:[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io]]\nI0910 05:36:42.951394    5124 hosts.go:84] hosts update: primary=map[172.20.37.129:[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:[172.20.37.129 172.20.37.129]], final=map[172.20.37.129:[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io]]\nI0910 05:36:42.951453    5124 hosts.go:181] skipping update of unchanged /etc/hosts\nI0910 05:36:42.951534    5124 commands.go:38] not refreshing commands - TTL not hit\nI0910 05:36:42.951551    5124 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created\"\nI0910 05:36:43.062346    5124 controller.go:393] spec member_count:1 etcd_version:\"3.4.13\" \nI0910 05:36:43.062418    5124 controller.go:555] controller loop complete\nI0910 05:36:53.063608    5124 controller.go:187] starting controller iteration\nI0910 05:36:53.063639    5124 controller.go:264] Broadcasting leadership assertion with token \"Wolx0lcDXnJukyfBInCS0w\"\nI0910 05:36:53.064130    5124 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.37.129:3997\" > leadership_token:\"Wolx0lcDXnJukyfBInCS0w\" healthy:<id:\"etcd-events-a\" endpoints:\"172.20.37.129:3997\" > > \nI0910 05:36:53.064367    5124 controller.go:293] I am leader with token \"Wolx0lcDXnJukyfBInCS0w\"\nI0910 05:36:53.064898    5124 controller.go:703] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002]\nI0910 05:36:53.079572    5124 controller.go:300] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\"],\"ID\":\"3671789036165063137\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"172.20.37.129:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:3995\" > etcd_state:<cluster:<cluster_token:\"2WocR1fQSw54vUPtu7cp-w\" nodes:<name:\"etcd-events-a\" 
peer_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0910 05:36:53.079659    5124 controller.go:301] etcd cluster members: map[3671789036165063137:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\"],\"ID\":\"3671789036165063137\"}]\nI0910 05:36:53.079736    5124 controller.go:639] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io\" addresses:\"172.20.37.129\" > \nI0910 05:36:53.080066    5124 etcdserver.go:248] updating hosts: map[172.20.37.129:[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io]]\nI0910 05:36:53.080083    5124 hosts.go:84] hosts update: primary=map[172.20.37.129:[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:[172.20.37.129 172.20.37.129]], final=map[172.20.37.129:[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io]]\nI0910 05:36:53.080215    5124 hosts.go:181] skipping update of unchanged /etc/hosts\nI0910 05:36:53.080308    5124 commands.go:38] not refreshing commands - TTL not hit\nI0910 05:36:53.080319    5124 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created\"\nI0910 05:36:53.197561    5124 controller.go:393] spec member_count:1 etcd_version:\"3.4.13\" \nI0910 05:36:53.197690    5124 controller.go:555] controller loop complete\nI0910 05:37:03.198901    5124 controller.go:187] starting controller iteration\nI0910 05:37:03.198932    5124 controller.go:264] Broadcasting leadership assertion with token \"Wolx0lcDXnJukyfBInCS0w\"\nI0910 05:37:03.199362    5124 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.37.129:3997\" > leadership_token:\"Wolx0lcDXnJukyfBInCS0w\" healthy:<id:\"etcd-events-a\" endpoints:\"172.20.37.129:3997\" > > \nI0910 05:37:03.199595    5124 controller.go:293] I am leader with token \"Wolx0lcDXnJukyfBInCS0w\"\nI0910 05:37:03.200304    5124 controller.go:703] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002]\nI0910 05:37:03.213693    5124 controller.go:300] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\"],\"ID\":\"3671789036165063137\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"172.20.37.129:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:3995\" > etcd_state:<cluster:<cluster_token:\"2WocR1fQSw54vUPtu7cp-w\" 
nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0910 05:37:03.213790    5124 controller.go:301] etcd cluster members: map[3671789036165063137:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\"],\"ID\":\"3671789036165063137\"}]\nI0910 05:37:03.213809    5124 controller.go:639] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io\" addresses:\"172.20.37.129\" > \nI0910 05:37:03.214305    5124 etcdserver.go:248] updating hosts: map[172.20.37.129:[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io]]\nI0910 05:37:03.214326    5124 hosts.go:84] hosts update: primary=map[172.20.37.129:[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:[172.20.37.129 172.20.37.129]], final=map[172.20.37.129:[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io]]\nI0910 05:37:03.214380    5124 hosts.go:181] skipping update of unchanged /etc/hosts\nI0910 05:37:03.214463    5124 commands.go:38] not refreshing commands - TTL not hit\nI0910 05:37:03.214473    5124 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created\"\nI0910 05:37:03.322862    5124 controller.go:393] spec member_count:1 etcd_version:\"3.4.13\" \nI0910 05:37:03.322934    5124 controller.go:555] controller loop complete\nI0910 05:37:13.324172    5124 controller.go:187] starting controller iteration\nI0910 05:37:13.324200    5124 controller.go:264] Broadcasting leadership assertion with token \"Wolx0lcDXnJukyfBInCS0w\"\nI0910 05:37:13.324557    5124 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.37.129:3997\" > leadership_token:\"Wolx0lcDXnJukyfBInCS0w\" healthy:<id:\"etcd-events-a\" endpoints:\"172.20.37.129:3997\" > > \nI0910 05:37:13.324753    5124 controller.go:293] I am leader with token \"Wolx0lcDXnJukyfBInCS0w\"\nI0910 05:37:13.326628    5124 controller.go:703] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002]\nI0910 05:37:13.340364    5124 controller.go:300] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\"],\"ID\":\"3671789036165063137\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"172.20.37.129:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:3995\" > 
etcd_state:<cluster:<cluster_token:\"2WocR1fQSw54vUPtu7cp-w\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0910 05:37:13.340442    5124 controller.go:301] etcd cluster members: map[3671789036165063137:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\"],\"ID\":\"3671789036165063137\"}]\nI0910 05:37:13.340652    5124 controller.go:639] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io\" addresses:\"172.20.37.129\" > \nI0910 05:37:13.340928    5124 etcdserver.go:248] updating hosts: map[172.20.37.129:[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io]]\nI0910 05:37:13.340948    5124 hosts.go:84] hosts update: primary=map[172.20.37.129:[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:[172.20.37.129 172.20.37.129]], final=map[172.20.37.129:[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io]]\nI0910 05:37:13.341187    5124 hosts.go:181] skipping update of unchanged /etc/hosts\nI0910 05:37:13.341316    5124 commands.go:38] not refreshing commands - TTL not hit\nI0910 05:37:13.341358    5124 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created\"\nI0910 05:37:13.462003    5124 controller.go:393] spec member_count:1 etcd_version:\"3.4.13\" \nI0910 05:37:13.462183    5124 controller.go:555] controller loop complete\nI0910 05:37:23.463980    5124 controller.go:187] starting controller iteration\nI0910 05:37:23.464010    5124 controller.go:264] Broadcasting leadership assertion with token \"Wolx0lcDXnJukyfBInCS0w\"\nI0910 05:37:23.464468    5124 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.37.129:3997\" > leadership_token:\"Wolx0lcDXnJukyfBInCS0w\" healthy:<id:\"etcd-events-a\" endpoints:\"172.20.37.129:3997\" > > \nI0910 05:37:23.464696    5124 controller.go:293] I am leader with token \"Wolx0lcDXnJukyfBInCS0w\"\nI0910 05:37:23.465140    5124 controller.go:703] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002]\nI0910 05:37:23.479457    5124 controller.go:300] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\"],\"ID\":\"3671789036165063137\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"172.20.37.129:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\" 
quarantined_client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:3995\" > etcd_state:<cluster:<cluster_token:\"2WocR1fQSw54vUPtu7cp-w\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0910 05:37:23.479776    5124 controller.go:301] etcd cluster members: map[3671789036165063137:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\"],\"ID\":\"3671789036165063137\"}]\nI0910 05:37:23.479811    5124 controller.go:639] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io\" addresses:\"172.20.37.129\" > \nI0910 05:37:23.480194    5124 etcdserver.go:248] updating hosts: map[172.20.37.129:[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io]]\nI0910 05:37:23.480214    5124 hosts.go:84] hosts update: primary=map[172.20.37.129:[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:[172.20.37.129 172.20.37.129]], final=map[172.20.37.129:[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io]]\nI0910 05:37:23.480423    5124 hosts.go:181] skipping update of unchanged /etc/hosts\nI0910 05:37:23.480593    5124 commands.go:38] not refreshing commands - TTL not hit\nI0910 05:37:23.480689    5124 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created\"\nI0910 05:37:23.601690    5124 controller.go:393] spec member_count:1 etcd_version:\"3.4.13\" \nI0910 05:37:23.601877    5124 controller.go:555] controller loop complete\nI0910 05:37:27.635304    5124 volumes.go:86] AWS API Request: ec2/DescribeVolumes\nI0910 05:37:27.703015    5124 volumes.go:86] AWS API Request: ec2/DescribeInstances\nI0910 05:37:27.767871    5124 hosts.go:84] hosts update: primary=map[172.20.37.129:[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:[172.20.37.129 172.20.37.129]], final=map[172.20.37.129:[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io]]\nI0910 05:37:27.768212    5124 hosts.go:181] skipping update of unchanged /etc/hosts\nI0910 05:37:33.603090    5124 controller.go:187] starting controller iteration\nI0910 05:37:33.603122    5124 controller.go:264] Broadcasting leadership assertion with token \"Wolx0lcDXnJukyfBInCS0w\"\nI0910 05:37:33.603522    5124 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.37.129:3997\" > leadership_token:\"Wolx0lcDXnJukyfBInCS0w\" healthy:<id:\"etcd-events-a\" endpoints:\"172.20.37.129:3997\" > > \nI0910 05:37:33.603810    5124 controller.go:293] I am leader with token \"Wolx0lcDXnJukyfBInCS0w\"\nI0910 05:37:33.604341    5124 controller.go:703] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002]\nI0910 05:37:33.616899    5124 controller.go:300] etcd cluster state: etcdClusterState\n 
 members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\"],\"ID\":\"3671789036165063137\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"172.20.37.129:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:3995\" > etcd_state:<cluster:<cluster_token:\"2WocR1fQSw54vUPtu7cp-w\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0910 05:37:33.616986    5124 controller.go:301] etcd cluster members: map[3671789036165063137:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\"],\"ID\":\"3671789036165063137\"}]\nI0910 05:37:33.617003    5124 controller.go:639] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io\" addresses:\"172.20.37.129\" > \nI0910 05:37:33.617311    5124 etcdserver.go:248] updating hosts: map[172.20.37.129:[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io]]\nI0910 05:37:33.617329    5124 hosts.go:84] hosts update: primary=map[172.20.37.129:[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:[172.20.37.129 172.20.37.129]], final=map[172.20.37.129:[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io]]\nI0910 05:37:33.617404    5124 hosts.go:181] skipping update of unchanged /etc/hosts\nI0910 05:37:33.617552    5124 commands.go:38] not refreshing commands - TTL not hit\nI0910 05:37:33.617569    5124 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created\"\nI0910 05:37:33.726614    5124 controller.go:393] spec member_count:1 etcd_version:\"3.4.13\" \nI0910 05:37:33.726690    5124 controller.go:555] controller loop complete\nI0910 05:37:43.728163    5124 controller.go:187] starting controller iteration\nI0910 05:37:43.728195    5124 controller.go:264] Broadcasting leadership assertion with token \"Wolx0lcDXnJukyfBInCS0w\"\nI0910 05:37:43.728574    5124 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.37.129:3997\" > leadership_token:\"Wolx0lcDXnJukyfBInCS0w\" healthy:<id:\"etcd-events-a\" endpoints:\"172.20.37.129:3997\" > > \nI0910 05:37:43.728756    5124 controller.go:293] I am leader with token \"Wolx0lcDXnJukyfBInCS0w\"\nI0910 05:37:43.729556    5124 controller.go:703] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002]\nI0910 05:37:43.747979    5124 controller.go:300] etcd cluster 
state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\"],\"ID\":\"3671789036165063137\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"172.20.37.129:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:3995\" > etcd_state:<cluster:<cluster_token:\"2WocR1fQSw54vUPtu7cp-w\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0910 05:37:43.748279    5124 controller.go:301] etcd cluster members: map[3671789036165063137:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\"],\"ID\":\"3671789036165063137\"}]\nI0910 05:37:43.748310    5124 controller.go:639] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io\" addresses:\"172.20.37.129\" > \nI0910 05:37:43.748640    5124 etcdserver.go:248] updating hosts: map[172.20.37.129:[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io]]\nI0910 05:37:43.748692    5124 hosts.go:84] hosts update: primary=map[172.20.37.129:[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:[172.20.37.129 172.20.37.129]], final=map[172.20.37.129:[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io]]\nI0910 05:37:43.748808    5124 hosts.go:181] skipping update of unchanged /etc/hosts\nI0910 05:37:43.748955    5124 commands.go:38] not refreshing commands - TTL not hit\nI0910 05:37:43.748973    5124 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created\"\nI0910 05:37:43.860658    5124 controller.go:393] spec member_count:1 etcd_version:\"3.4.13\" \nI0910 05:37:43.860741    5124 controller.go:555] controller loop complete\nI0910 05:37:53.862026    5124 controller.go:187] starting controller iteration\nI0910 05:37:53.862116    5124 controller.go:264] Broadcasting leadership assertion with token \"Wolx0lcDXnJukyfBInCS0w\"\nI0910 05:37:53.862505    5124 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.37.129:3997\" > leadership_token:\"Wolx0lcDXnJukyfBInCS0w\" healthy:<id:\"etcd-events-a\" endpoints:\"172.20.37.129:3997\" > > \nI0910 05:37:53.862676    5124 controller.go:293] I am leader with token \"Wolx0lcDXnJukyfBInCS0w\"\nI0910 05:37:53.863195    5124 controller.go:703] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002]\nI0910 05:37:53.875439    5124 
controller.go:300] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\"],\"ID\":\"3671789036165063137\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"172.20.37.129:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:3995\" > etcd_state:<cluster:<cluster_token:\"2WocR1fQSw54vUPtu7cp-w\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0910 05:37:53.875520    5124 controller.go:301] etcd cluster members: map[3671789036165063137:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\"],\"ID\":\"3671789036165063137\"}]\nI0910 05:37:53.875720    5124 controller.go:639] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io\" addresses:\"172.20.37.129\" > \nI0910 05:37:53.876028    5124 etcdserver.go:248] updating hosts: map[172.20.37.129:[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io]]\nI0910 05:37:53.876044    5124 hosts.go:84] hosts update: primary=map[172.20.37.129:[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:[172.20.37.129 172.20.37.129]], final=map[172.20.37.129:[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io]]\nI0910 05:37:53.876271    5124 hosts.go:181] skipping update of unchanged /etc/hosts\nI0910 05:37:53.876411    5124 commands.go:38] not refreshing commands - TTL not hit\nI0910 05:37:53.876497    5124 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created\"\nI0910 05:37:53.999507    5124 controller.go:393] spec member_count:1 etcd_version:\"3.4.13\" \nI0910 05:37:53.999647    5124 controller.go:555] controller loop complete\nI0910 05:38:04.001217    5124 controller.go:187] starting controller iteration\nI0910 05:38:04.001249    5124 controller.go:264] Broadcasting leadership assertion with token \"Wolx0lcDXnJukyfBInCS0w\"\nI0910 05:38:04.001764    5124 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.37.129:3997\" > leadership_token:\"Wolx0lcDXnJukyfBInCS0w\" healthy:<id:\"etcd-events-a\" endpoints:\"172.20.37.129:3997\" > > \nI0910 05:38:04.001973    5124 controller.go:293] I am leader with token \"Wolx0lcDXnJukyfBInCS0w\"\nI0910 05:38:04.002415    5124 controller.go:703] base client OK for etcd for client urls 
[https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002]\nI0910 05:38:04.020495    5124 controller.go:300] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\"],\"ID\":\"3671789036165063137\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"172.20.37.129:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:3995\" > etcd_state:<cluster:<cluster_token:\"2WocR1fQSw54vUPtu7cp-w\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0910 05:38:04.020639    5124 controller.go:301] etcd cluster members: map[3671789036165063137:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\"],\"ID\":\"3671789036165063137\"}]\nI0910 05:38:04.020675    5124 controller.go:639] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io\" addresses:\"172.20.37.129\" > \nI0910 05:38:04.020941    5124 etcdserver.go:248] updating hosts: map[172.20.37.129:[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io]]\nI0910 05:38:04.020992    5124 hosts.go:84] hosts update: primary=map[172.20.37.129:[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:[172.20.37.129 172.20.37.129]], final=map[172.20.37.129:[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io]]\nI0910 05:38:04.021077    5124 hosts.go:181] skipping update of unchanged /etc/hosts\nI0910 05:38:04.021192    5124 commands.go:38] not refreshing commands - TTL not hit\nI0910 05:38:04.021228    5124 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created\"\nI0910 05:38:04.135122    5124 controller.go:393] spec member_count:1 etcd_version:\"3.4.13\" \nI0910 05:38:04.135196    5124 controller.go:555] controller loop complete\nI0910 05:38:14.136822    5124 controller.go:187] starting controller iteration\nI0910 05:38:14.136907    5124 controller.go:264] Broadcasting leadership assertion with token \"Wolx0lcDXnJukyfBInCS0w\"\nI0910 05:38:14.137321    5124 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.37.129:3997\" > leadership_token:\"Wolx0lcDXnJukyfBInCS0w\" healthy:<id:\"etcd-events-a\" endpoints:\"172.20.37.129:3997\" > > \nI0910 05:38:14.137536    5124 controller.go:293] I am leader with token \"Wolx0lcDXnJukyfBInCS0w\"\nI0910 05:38:14.138053    5124 controller.go:703] base client 
OK for etcd for client urls [https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002]\nI0910 05:38:14.149829    5124 controller.go:300] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\"],\"ID\":\"3671789036165063137\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"172.20.37.129:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:3995\" > etcd_state:<cluster:<cluster_token:\"2WocR1fQSw54vUPtu7cp-w\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0910 05:38:14.149908    5124 controller.go:301] etcd cluster members: map[3671789036165063137:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\"],\"ID\":\"3671789036165063137\"}]\nI0910 05:38:14.149926    5124 controller.go:639] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io\" addresses:\"172.20.37.129\" > \nI0910 05:38:14.150372    5124 etcdserver.go:248] updating hosts: map[172.20.37.129:[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io]]\nI0910 05:38:14.150442    5124 hosts.go:84] hosts update: primary=map[172.20.37.129:[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:[172.20.37.129 172.20.37.129]], final=map[172.20.37.129:[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io]]\nI0910 05:38:14.150550    5124 hosts.go:181] skipping update of unchanged /etc/hosts\nI0910 05:38:14.150721    5124 commands.go:38] not refreshing commands - TTL not hit\nI0910 05:38:14.150793    5124 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created\"\nI0910 05:38:14.265977    5124 controller.go:393] spec member_count:1 etcd_version:\"3.4.13\" \nI0910 05:38:14.266156    5124 controller.go:555] controller loop complete\nI0910 05:38:24.268128    5124 controller.go:187] starting controller iteration\nI0910 05:38:24.268158    5124 controller.go:264] Broadcasting leadership assertion with token \"Wolx0lcDXnJukyfBInCS0w\"\nI0910 05:38:24.268589    5124 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.37.129:3997\" > leadership_token:\"Wolx0lcDXnJukyfBInCS0w\" healthy:<id:\"etcd-events-a\" endpoints:\"172.20.37.129:3997\" > > \nI0910 05:38:24.268818    5124 controller.go:293] I am leader with token \"Wolx0lcDXnJukyfBInCS0w\"\nI0910 05:38:24.269365    5124 
controller.go:703] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002]\nI0910 05:38:24.284059    5124 controller.go:300] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\"],\"ID\":\"3671789036165063137\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"172.20.37.129:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:3995\" > etcd_state:<cluster:<cluster_token:\"2WocR1fQSw54vUPtu7cp-w\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0910 05:38:24.284275    5124 controller.go:301] etcd cluster members: map[3671789036165063137:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\"],\"ID\":\"3671789036165063137\"}]\nI0910 05:38:24.284501    5124 controller.go:639] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io\" addresses:\"172.20.37.129\" > \nI0910 05:38:24.284729    5124 etcdserver.go:248] updating hosts: map[172.20.37.129:[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io]]\nI0910 05:38:24.284759    5124 hosts.go:84] hosts update: primary=map[172.20.37.129:[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:[172.20.37.129 172.20.37.129]], final=map[172.20.37.129:[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io]]\nI0910 05:38:24.284836    5124 hosts.go:181] skipping update of unchanged /etc/hosts\nI0910 05:38:24.284972    5124 commands.go:38] not refreshing commands - TTL not hit\nI0910 05:38:24.284987    5124 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created\"\nI0910 05:38:24.413475    5124 controller.go:393] spec member_count:1 etcd_version:\"3.4.13\" \nI0910 05:38:24.413548    5124 controller.go:555] controller loop complete\nI0910 05:38:27.769210    5124 volumes.go:86] AWS API Request: ec2/DescribeVolumes\nI0910 05:38:27.910380    5124 volumes.go:86] AWS API Request: ec2/DescribeInstances\nI0910 05:38:27.955113    5124 hosts.go:84] hosts update: primary=map[172.20.37.129:[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:[172.20.37.129 172.20.37.129]], final=map[172.20.37.129:[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io]]\nI0910 05:38:27.955357    5124 
hosts.go:181] skipping update of unchanged /etc/hosts\nI0910 05:38:34.415080    5124 controller.go:187] starting controller iteration\nI0910 05:38:34.415113    5124 controller.go:264] Broadcasting leadership assertion with token \"Wolx0lcDXnJukyfBInCS0w\"\nI0910 05:38:34.415569    5124 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.37.129:3997\" > leadership_token:\"Wolx0lcDXnJukyfBInCS0w\" healthy:<id:\"etcd-events-a\" endpoints:\"172.20.37.129:3997\" > > \nI0910 05:38:34.415846    5124 controller.go:293] I am leader with token \"Wolx0lcDXnJukyfBInCS0w\"\nI0910 05:38:34.416425    5124 controller.go:703] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002]\nI0910 05:38:34.428710    5124 controller.go:300] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\"],\"ID\":\"3671789036165063137\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"172.20.37.129:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:3995\" > etcd_state:<cluster:<cluster_token:\"2WocR1fQSw54vUPtu7cp-w\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0910 05:38:34.428788    5124 controller.go:301] etcd cluster members: map[3671789036165063137:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\"],\"ID\":\"3671789036165063137\"}]\nI0910 05:38:34.428809    5124 controller.go:639] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io\" addresses:\"172.20.37.129\" > \nI0910 05:38:34.429034    5124 etcdserver.go:248] updating hosts: map[172.20.37.129:[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io]]\nI0910 05:38:34.429049    5124 hosts.go:84] hosts update: primary=map[172.20.37.129:[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:[172.20.37.129 172.20.37.129]], final=map[172.20.37.129:[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io]]\nI0910 05:38:34.429108    5124 hosts.go:181] skipping update of unchanged /etc/hosts\nI0910 05:38:34.429199    5124 commands.go:38] not refreshing commands - TTL not hit\nI0910 05:38:34.429212    5124 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created\"\nI0910 05:38:34.548551    5124 controller.go:393] spec member_count:1 
etcd_version:\"3.4.13\" \nI0910 05:38:34.548624    5124 controller.go:555] controller loop complete\nI0910 05:38:44.550694    5124 controller.go:187] starting controller iteration\nI0910 05:38:44.550726    5124 controller.go:264] Broadcasting leadership assertion with token \"Wolx0lcDXnJukyfBInCS0w\"\nI0910 05:38:44.551194    5124 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.37.129:3997\" > leadership_token:\"Wolx0lcDXnJukyfBInCS0w\" healthy:<id:\"etcd-events-a\" endpoints:\"172.20.37.129:3997\" > > \nI0910 05:38:44.551428    5124 controller.go:293] I am leader with token \"Wolx0lcDXnJukyfBInCS0w\"\nI0910 05:38:44.551981    5124 controller.go:703] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002]\nI0910 05:38:44.566815    5124 controller.go:300] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\"],\"ID\":\"3671789036165063137\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"172.20.37.129:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:3995\" > etcd_state:<cluster:<cluster_token:\"2WocR1fQSw54vUPtu7cp-w\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0910 05:38:44.567037    5124 controller.go:301] etcd cluster members: map[3671789036165063137:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\"],\"ID\":\"3671789036165063137\"}]\nI0910 05:38:44.567120    5124 controller.go:639] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io\" addresses:\"172.20.37.129\" > \nI0910 05:38:44.567399    5124 etcdserver.go:248] updating hosts: map[172.20.37.129:[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io]]\nI0910 05:38:44.567438    5124 hosts.go:84] hosts update: primary=map[172.20.37.129:[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:[172.20.37.129 172.20.37.129]], final=map[172.20.37.129:[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io]]\nI0910 05:38:44.567550    5124 hosts.go:181] skipping update of unchanged /etc/hosts\nI0910 05:38:44.567685    5124 commands.go:38] not refreshing commands - TTL not hit\nI0910 05:38:44.567717    5124 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created\"\nI0910 05:38:44.679059    5124 
controller.go:393] spec member_count:1 etcd_version:\"3.4.13\" \nI0910 05:38:44.679198    5124 controller.go:555] controller loop complete\nI0910 05:38:54.681265    5124 controller.go:187] starting controller iteration\nI0910 05:38:54.681296    5124 controller.go:264] Broadcasting leadership assertion with token \"Wolx0lcDXnJukyfBInCS0w\"\nI0910 05:38:54.681614    5124 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.37.129:3997\" > leadership_token:\"Wolx0lcDXnJukyfBInCS0w\" healthy:<id:\"etcd-events-a\" endpoints:\"172.20.37.129:3997\" > > \nI0910 05:38:54.681826    5124 controller.go:293] I am leader with token \"Wolx0lcDXnJukyfBInCS0w\"\nI0910 05:38:54.682301    5124 controller.go:703] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002]\nI0910 05:38:54.694547    5124 controller.go:300] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\"],\"ID\":\"3671789036165063137\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"172.20.37.129:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:3995\" > etcd_state:<cluster:<cluster_token:\"2WocR1fQSw54vUPtu7cp-w\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0910 05:38:54.694632    5124 controller.go:301] etcd cluster members: map[3671789036165063137:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\"],\"ID\":\"3671789036165063137\"}]\nI0910 05:38:54.694647    5124 controller.go:639] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io\" addresses:\"172.20.37.129\" > \nI0910 05:38:54.694866    5124 etcdserver.go:248] updating hosts: map[172.20.37.129:[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io]]\nI0910 05:38:54.694880    5124 hosts.go:84] hosts update: primary=map[172.20.37.129:[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:[172.20.37.129 172.20.37.129]], final=map[172.20.37.129:[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io]]\nI0910 05:38:54.694940    5124 hosts.go:181] skipping update of unchanged /etc/hosts\nI0910 05:38:54.695028    5124 commands.go:38] not refreshing commands - TTL not hit\nI0910 05:38:54.695042    5124 s3fs.go:290] Reading file 
\"s3://k8s-kops-prow/e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created\"\nI0910 05:38:54.812417    5124 controller.go:393] spec member_count:1 etcd_version:\"3.4.13\" \nI0910 05:38:54.812492    5124 controller.go:555] controller loop complete\nI0910 05:39:04.814088    5124 controller.go:187] starting controller iteration\nI0910 05:39:04.814129    5124 controller.go:264] Broadcasting leadership assertion with token \"Wolx0lcDXnJukyfBInCS0w\"\nI0910 05:39:04.814499    5124 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.37.129:3997\" > leadership_token:\"Wolx0lcDXnJukyfBInCS0w\" healthy:<id:\"etcd-events-a\" endpoints:\"172.20.37.129:3997\" > > \nI0910 05:39:04.814681    5124 controller.go:293] I am leader with token \"Wolx0lcDXnJukyfBInCS0w\"\nI0910 05:39:04.815259    5124 controller.go:703] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002]\nI0910 05:39:04.828306    5124 controller.go:300] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\"],\"ID\":\"3671789036165063137\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"172.20.37.129:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:3995\" > etcd_state:<cluster:<cluster_token:\"2WocR1fQSw54vUPtu7cp-w\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0910 05:39:04.828389    5124 controller.go:301] etcd cluster members: map[3671789036165063137:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\"],\"ID\":\"3671789036165063137\"}]\nI0910 05:39:04.828405    5124 controller.go:639] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io\" addresses:\"172.20.37.129\" > \nI0910 05:39:04.828906    5124 etcdserver.go:248] updating hosts: map[172.20.37.129:[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io]]\nI0910 05:39:04.828924    5124 hosts.go:84] hosts update: primary=map[172.20.37.129:[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:[172.20.37.129 172.20.37.129]], final=map[172.20.37.129:[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io]]\nI0910 05:39:04.828981    5124 hosts.go:181] skipping update of unchanged /etc/hosts\nI0910 05:39:04.829073    5124 commands.go:38] not refreshing commands - TTL not hit\nI0910 05:39:04.829087    5124 
s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created\"\nI0910 05:39:04.944981    5124 controller.go:393] spec member_count:1 etcd_version:\"3.4.13\" \nI0910 05:39:04.945055    5124 controller.go:555] controller loop complete\nI0910 05:39:14.946256    5124 controller.go:187] starting controller iteration\nI0910 05:39:14.946286    5124 controller.go:264] Broadcasting leadership assertion with token \"Wolx0lcDXnJukyfBInCS0w\"\nI0910 05:39:14.946753    5124 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.37.129:3997\" > leadership_token:\"Wolx0lcDXnJukyfBInCS0w\" healthy:<id:\"etcd-events-a\" endpoints:\"172.20.37.129:3997\" > > \nI0910 05:39:14.947007    5124 controller.go:293] I am leader with token \"Wolx0lcDXnJukyfBInCS0w\"\nI0910 05:39:14.948149    5124 controller.go:703] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002]\nI0910 05:39:14.962017    5124 controller.go:300] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\"],\"ID\":\"3671789036165063137\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"172.20.37.129:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:3995\" > etcd_state:<cluster:<cluster_token:\"2WocR1fQSw54vUPtu7cp-w\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0910 05:39:14.962103    5124 controller.go:301] etcd cluster members: map[3671789036165063137:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\"],\"ID\":\"3671789036165063137\"}]\nI0910 05:39:14.962392    5124 controller.go:639] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io\" addresses:\"172.20.37.129\" > \nI0910 05:39:14.962637    5124 etcdserver.go:248] updating hosts: map[172.20.37.129:[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io]]\nI0910 05:39:14.962776    5124 hosts.go:84] hosts update: primary=map[172.20.37.129:[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:[172.20.37.129 172.20.37.129]], final=map[172.20.37.129:[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io]]\nI0910 05:39:14.962840    5124 hosts.go:181] skipping update of unchanged /etc/hosts\nI0910 05:39:14.962956    5124 commands.go:38] not refreshing commands - TTL not 
hit\nI0910 05:39:14.962969    5124 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created\"\nI0910 05:39:15.081121    5124 controller.go:393] spec member_count:1 etcd_version:\"3.4.13\" \nI0910 05:39:15.081278    5124 controller.go:555] controller loop complete\nI0910 05:39:25.083208    5124 controller.go:187] starting controller iteration\nI0910 05:39:25.083380    5124 controller.go:264] Broadcasting leadership assertion with token \"Wolx0lcDXnJukyfBInCS0w\"\nI0910 05:39:25.083753    5124 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.37.129:3997\" > leadership_token:\"Wolx0lcDXnJukyfBInCS0w\" healthy:<id:\"etcd-events-a\" endpoints:\"172.20.37.129:3997\" > > \nI0910 05:39:25.083991    5124 controller.go:293] I am leader with token \"Wolx0lcDXnJukyfBInCS0w\"\nI0910 05:39:25.084547    5124 controller.go:703] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002]\nI0910 05:39:25.097115    5124 controller.go:300] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\"],\"ID\":\"3671789036165063137\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"172.20.37.129:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:3995\" > etcd_state:<cluster:<cluster_token:\"2WocR1fQSw54vUPtu7cp-w\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0910 05:39:25.097486    5124 controller.go:301] etcd cluster members: map[3671789036165063137:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\"],\"ID\":\"3671789036165063137\"}]\nI0910 05:39:25.097515    5124 controller.go:639] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io\" addresses:\"172.20.37.129\" > \nI0910 05:39:25.097818    5124 etcdserver.go:248] updating hosts: map[172.20.37.129:[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io]]\nI0910 05:39:25.097836    5124 hosts.go:84] hosts update: primary=map[172.20.37.129:[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:[172.20.37.129 172.20.37.129]], final=map[172.20.37.129:[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io]]\nI0910 05:39:25.097995    5124 hosts.go:181] skipping update of unchanged /etc/hosts\nI0910 05:39:25.098175    5124 commands.go:38] not 
refreshing commands - TTL not hit\nI0910 05:39:25.098264    5124 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created\"\nI0910 05:39:25.210840    5124 controller.go:393] spec member_count:1 etcd_version:\"3.4.13\" \nI0910 05:39:25.211051    5124 controller.go:555] controller loop complete\nI0910 05:39:27.956232    5124 volumes.go:86] AWS API Request: ec2/DescribeVolumes\nI0910 05:39:28.022223    5124 volumes.go:86] AWS API Request: ec2/DescribeInstances\nI0910 05:39:28.092897    5124 hosts.go:84] hosts update: primary=map[172.20.37.129:[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:[172.20.37.129 172.20.37.129]], final=map[172.20.37.129:[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io]]\nI0910 05:39:28.093121    5124 hosts.go:181] skipping update of unchanged /etc/hosts\nI0910 05:39:35.215712    5124 controller.go:187] starting controller iteration\nI0910 05:39:35.215751    5124 controller.go:264] Broadcasting leadership assertion with token \"Wolx0lcDXnJukyfBInCS0w\"\nI0910 05:39:35.216351    5124 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.37.129:3997\" > leadership_token:\"Wolx0lcDXnJukyfBInCS0w\" healthy:<id:\"etcd-events-a\" endpoints:\"172.20.37.129:3997\" > > \nI0910 05:39:35.216709    5124 controller.go:293] I am leader with token \"Wolx0lcDXnJukyfBInCS0w\"\nI0910 05:39:35.218018    5124 controller.go:703] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002]\nI0910 05:39:35.235210    5124 controller.go:300] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\"],\"ID\":\"3671789036165063137\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"172.20.37.129:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:3995\" > etcd_state:<cluster:<cluster_token:\"2WocR1fQSw54vUPtu7cp-w\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0910 05:39:35.235293    5124 controller.go:301] etcd cluster members: map[3671789036165063137:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\"],\"ID\":\"3671789036165063137\"}]\nI0910 05:39:35.235312    5124 controller.go:639] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io\" addresses:\"172.20.37.129\" > \nI0910 
05:39:35.235705    5124 etcdserver.go:248] updating hosts: map[172.20.37.129:[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io]]\nI0910 05:39:35.235723    5124 hosts.go:84] hosts update: primary=map[172.20.37.129:[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:[172.20.37.129 172.20.37.129]], final=map[172.20.37.129:[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io]]\nI0910 05:39:35.235798    5124 hosts.go:181] skipping update of unchanged /etc/hosts\nI0910 05:39:35.235934    5124 commands.go:38] not refreshing commands - TTL not hit\nI0910 05:39:35.235954    5124 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created\"\nI0910 05:39:35.347449    5124 controller.go:393] spec member_count:1 etcd_version:\"3.4.13\" \nI0910 05:39:35.347542    5124 controller.go:555] controller loop complete\nI0910 05:39:45.349119    5124 controller.go:187] starting controller iteration\nI0910 05:39:45.349151    5124 controller.go:264] Broadcasting leadership assertion with token \"Wolx0lcDXnJukyfBInCS0w\"\nI0910 05:39:45.350172    5124 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.37.129:3997\" > leadership_token:\"Wolx0lcDXnJukyfBInCS0w\" healthy:<id:\"etcd-events-a\" endpoints:\"172.20.37.129:3997\" > > \nI0910 05:39:45.357352    5124 controller.go:293] I am leader with token \"Wolx0lcDXnJukyfBInCS0w\"\nI0910 05:39:45.358218    5124 controller.go:703] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002]\nI0910 05:39:45.372891    5124 controller.go:300] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\"],\"ID\":\"3671789036165063137\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"172.20.37.129:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:3995\" > etcd_state:<cluster:<cluster_token:\"2WocR1fQSw54vUPtu7cp-w\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0910 05:39:45.372973    5124 controller.go:301] etcd cluster members: map[3671789036165063137:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\"],\"ID\":\"3671789036165063137\"}]\nI0910 05:39:45.372989    5124 controller.go:639] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io\" 
addresses:\"172.20.37.129\" > \nI0910 05:39:45.373339    5124 etcdserver.go:248] updating hosts: map[172.20.37.129:[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io]]\nI0910 05:39:45.373358    5124 hosts.go:84] hosts update: primary=map[172.20.37.129:[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:[172.20.37.129 172.20.37.129]], final=map[172.20.37.129:[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io]]\nI0910 05:39:45.373433    5124 hosts.go:181] skipping update of unchanged /etc/hosts\nI0910 05:39:45.373602    5124 commands.go:38] not refreshing commands - TTL not hit\nI0910 05:39:45.373614    5124 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created\"\nI0910 05:39:45.484335    5124 controller.go:393] spec member_count:1 etcd_version:\"3.4.13\" \nI0910 05:39:45.484466    5124 controller.go:555] controller loop complete\nI0910 05:39:55.488093    5124 controller.go:187] starting controller iteration\nI0910 05:39:55.488237    5124 controller.go:264] Broadcasting leadership assertion with token \"Wolx0lcDXnJukyfBInCS0w\"\nI0910 05:39:55.488606    5124 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.37.129:3997\" > leadership_token:\"Wolx0lcDXnJukyfBInCS0w\" healthy:<id:\"etcd-events-a\" endpoints:\"172.20.37.129:3997\" > > \nI0910 05:39:55.488799    5124 controller.go:293] I am leader with token \"Wolx0lcDXnJukyfBInCS0w\"\nI0910 05:39:55.489218    5124 controller.go:703] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002]\nI0910 05:39:55.510414    5124 controller.go:300] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\"],\"ID\":\"3671789036165063137\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"172.20.37.129:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:3995\" > etcd_state:<cluster:<cluster_token:\"2WocR1fQSw54vUPtu7cp-w\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0910 05:39:55.510499    5124 controller.go:301] etcd cluster members: map[3671789036165063137:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\"],\"ID\":\"3671789036165063137\"}]\nI0910 05:39:55.510516    5124 controller.go:639] sending member map to all peers: members:<name:\"etcd-events-a\" 
dns:\"etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io\" addresses:\"172.20.37.129\" > \nI0910 05:39:55.510774    5124 etcdserver.go:248] updating hosts: map[172.20.37.129:[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io]]\nI0910 05:39:55.510789    5124 hosts.go:84] hosts update: primary=map[172.20.37.129:[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:[172.20.37.129 172.20.37.129]], final=map[172.20.37.129:[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io]]\nI0910 05:39:55.510857    5124 hosts.go:181] skipping update of unchanged /etc/hosts\nI0910 05:39:55.510956    5124 commands.go:38] not refreshing commands - TTL not hit\nI0910 05:39:55.510968    5124 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created\"\nI0910 05:39:55.619395    5124 controller.go:393] spec member_count:1 etcd_version:\"3.4.13\" \nI0910 05:39:55.619510    5124 controller.go:555] controller loop complete\nI0910 05:40:05.620980    5124 controller.go:187] starting controller iteration\nI0910 05:40:05.621026    5124 controller.go:264] Broadcasting leadership assertion with token \"Wolx0lcDXnJukyfBInCS0w\"\nI0910 05:40:05.621268    5124 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.37.129:3997\" > leadership_token:\"Wolx0lcDXnJukyfBInCS0w\" healthy:<id:\"etcd-events-a\" endpoints:\"172.20.37.129:3997\" > > \nI0910 05:40:05.621403    5124 controller.go:293] I am leader with token \"Wolx0lcDXnJukyfBInCS0w\"\nI0910 05:40:05.622125    5124 controller.go:703] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002]\nI0910 05:40:05.646477    5124 controller.go:300] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\"],\"ID\":\"3671789036165063137\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"172.20.37.129:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:3995\" > etcd_state:<cluster:<cluster_token:\"2WocR1fQSw54vUPtu7cp-w\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0910 05:40:05.646569    5124 controller.go:301] etcd cluster members: map[3671789036165063137:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\"],\"ID\":\"3671789036165063137\"}]\nI0910 05:40:05.646586    5124 controller.go:639] sending member map to all peers: 
members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io\" addresses:\"172.20.37.129\" > \nI0910 05:40:05.647009    5124 etcdserver.go:248] updating hosts: map[172.20.37.129:[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io]]\nI0910 05:40:05.647028    5124 hosts.go:84] hosts update: primary=map[172.20.37.129:[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:[172.20.37.129 172.20.37.129]], final=map[172.20.37.129:[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io]]\nI0910 05:40:05.647113    5124 hosts.go:181] skipping update of unchanged /etc/hosts\nI0910 05:40:05.647244    5124 commands.go:38] not refreshing commands - TTL not hit\nI0910 05:40:05.647307    5124 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created\"\nI0910 05:40:05.759036    5124 controller.go:393] spec member_count:1 etcd_version:\"3.4.13\" \nI0910 05:40:05.759110    5124 controller.go:555] controller loop complete\nI0910 05:40:15.760648    5124 controller.go:187] starting controller iteration\nI0910 05:40:15.760681    5124 controller.go:264] Broadcasting leadership assertion with token \"Wolx0lcDXnJukyfBInCS0w\"\nI0910 05:40:15.760929    5124 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.37.129:3997\" > leadership_token:\"Wolx0lcDXnJukyfBInCS0w\" healthy:<id:\"etcd-events-a\" endpoints:\"172.20.37.129:3997\" > > \nI0910 05:40:15.761262    5124 controller.go:293] I am leader with token \"Wolx0lcDXnJukyfBInCS0w\"\nI0910 05:40:15.763450    5124 controller.go:703] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002]\nI0910 05:40:15.798916    5124 controller.go:300] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\"],\"ID\":\"3671789036165063137\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"172.20.37.129:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:3995\" > etcd_state:<cluster:<cluster_token:\"2WocR1fQSw54vUPtu7cp-w\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0910 05:40:15.799025    5124 controller.go:301] etcd cluster members: map[3671789036165063137:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\"],\"ID\":\"3671789036165063137\"}]\nI0910 05:40:15.799045    5124 controller.go:639] 
sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io\" addresses:\"172.20.37.129\" > \nI0910 05:40:15.799320    5124 etcdserver.go:248] updating hosts: map[172.20.37.129:[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io]]\nI0910 05:40:15.799334    5124 hosts.go:84] hosts update: primary=map[172.20.37.129:[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:[172.20.37.129 172.20.37.129]], final=map[172.20.37.129:[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io]]\nI0910 05:40:15.799388    5124 hosts.go:181] skipping update of unchanged /etc/hosts\nI0910 05:40:15.799462    5124 commands.go:38] not refreshing commands - TTL not hit\nI0910 05:40:15.799473    5124 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created\"\nI0910 05:40:15.914032    5124 controller.go:393] spec member_count:1 etcd_version:\"3.4.13\" \nI0910 05:40:15.914106    5124 controller.go:555] controller loop complete\nI0910 05:40:25.915640    5124 controller.go:187] starting controller iteration\nI0910 05:40:25.915672    5124 controller.go:264] Broadcasting leadership assertion with token \"Wolx0lcDXnJukyfBInCS0w\"\nI0910 05:40:25.916150    5124 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.37.129:3997\" > leadership_token:\"Wolx0lcDXnJukyfBInCS0w\" healthy:<id:\"etcd-events-a\" endpoints:\"172.20.37.129:3997\" > > \nI0910 05:40:25.916372    5124 controller.go:293] I am leader with token \"Wolx0lcDXnJukyfBInCS0w\"\nI0910 05:40:25.916884    5124 controller.go:703] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002]\nI0910 05:40:25.930061    5124 controller.go:300] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\"],\"ID\":\"3671789036165063137\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"172.20.37.129:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:3995\" > etcd_state:<cluster:<cluster_token:\"2WocR1fQSw54vUPtu7cp-w\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0910 05:40:25.930152    5124 controller.go:301] etcd cluster members: map[3671789036165063137:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\"],\"ID\":\"3671789036165063137\"}]\nI0910 05:40:25.930381  
  5124 controller.go:639] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io\" addresses:\"172.20.37.129\" > \nI0910 05:40:25.930639    5124 etcdserver.go:248] updating hosts: map[172.20.37.129:[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io]]\nI0910 05:40:25.930657    5124 hosts.go:84] hosts update: primary=map[172.20.37.129:[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:[172.20.37.129 172.20.37.129]], final=map[172.20.37.129:[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io]]\nI0910 05:40:25.930763    5124 hosts.go:181] skipping update of unchanged /etc/hosts\nI0910 05:40:25.930923    5124 commands.go:38] not refreshing commands - TTL not hit\nI0910 05:40:25.930983    5124 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created\"\nI0910 05:40:26.049037    5124 controller.go:393] spec member_count:1 etcd_version:\"3.4.13\" \nI0910 05:40:26.049131    5124 controller.go:555] controller loop complete\nI0910 05:40:28.093814    5124 volumes.go:86] AWS API Request: ec2/DescribeVolumes\nI0910 05:40:28.229865    5124 volumes.go:86] AWS API Request: ec2/DescribeInstances\nI0910 05:40:28.276210    5124 hosts.go:84] hosts update: primary=map[172.20.37.129:[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:[172.20.37.129 172.20.37.129]], final=map[172.20.37.129:[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io]]\nI0910 05:40:28.276311    5124 hosts.go:181] skipping update of unchanged /etc/hosts\nI0910 05:40:36.050591    5124 controller.go:187] starting controller iteration\nI0910 05:40:36.050636    5124 controller.go:264] Broadcasting leadership assertion with token \"Wolx0lcDXnJukyfBInCS0w\"\nI0910 05:40:36.051056    5124 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.37.129:3997\" > leadership_token:\"Wolx0lcDXnJukyfBInCS0w\" healthy:<id:\"etcd-events-a\" endpoints:\"172.20.37.129:3997\" > > \nI0910 05:40:36.051348    5124 controller.go:293] I am leader with token \"Wolx0lcDXnJukyfBInCS0w\"\nI0910 05:40:36.052875    5124 controller.go:703] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002]\nI0910 05:40:36.083518    5124 controller.go:300] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\"],\"ID\":\"3671789036165063137\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"172.20.37.129:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:3995\" > etcd_state:<cluster:<cluster_token:\"2WocR1fQSw54vUPtu7cp-w\" nodes:<name:\"etcd-events-a\" 
peer_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0910 05:40:36.083656    5124 controller.go:301] etcd cluster members: map[3671789036165063137:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\"],\"ID\":\"3671789036165063137\"}]\nI0910 05:40:36.085328    5124 controller.go:639] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io\" addresses:\"172.20.37.129\" > \nI0910 05:40:36.085793    5124 etcdserver.go:248] updating hosts: map[172.20.37.129:[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io]]\nI0910 05:40:36.085878    5124 hosts.go:84] hosts update: primary=map[172.20.37.129:[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:[172.20.37.129 172.20.37.129]], final=map[172.20.37.129:[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io]]\nI0910 05:40:36.086060    5124 hosts.go:181] skipping update of unchanged /etc/hosts\nI0910 05:40:36.087157    5124 commands.go:38] not refreshing commands - TTL not hit\nI0910 05:40:36.087247    5124 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created\"\nI0910 05:40:36.214094    5124 controller.go:393] spec member_count:1 etcd_version:\"3.4.13\" \nI0910 05:40:36.214175    5124 controller.go:555] controller loop complete\nI0910 05:40:46.216168    5124 controller.go:187] starting controller iteration\nI0910 05:40:46.216549    5124 controller.go:264] Broadcasting leadership assertion with token \"Wolx0lcDXnJukyfBInCS0w\"\nI0910 05:40:46.216848    5124 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.37.129:3997\" > leadership_token:\"Wolx0lcDXnJukyfBInCS0w\" healthy:<id:\"etcd-events-a\" endpoints:\"172.20.37.129:3997\" > > \nI0910 05:40:46.216997    5124 controller.go:293] I am leader with token \"Wolx0lcDXnJukyfBInCS0w\"\nI0910 05:40:46.217843    5124 controller.go:703] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002]\nI0910 05:40:46.259659    5124 controller.go:300] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\"],\"ID\":\"3671789036165063137\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"172.20.37.129:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:3995\" > etcd_state:<cluster:<cluster_token:\"2WocR1fQSw54vUPtu7cp-w\" 
nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0910 05:40:46.259767    5124 controller.go:301] etcd cluster members: map[3671789036165063137:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\"],\"ID\":\"3671789036165063137\"}]\nI0910 05:40:46.259788    5124 controller.go:639] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io\" addresses:\"172.20.37.129\" > \nI0910 05:40:46.260010    5124 etcdserver.go:248] updating hosts: map[172.20.37.129:[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io]]\nI0910 05:40:46.260025    5124 hosts.go:84] hosts update: primary=map[172.20.37.129:[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:[172.20.37.129 172.20.37.129]], final=map[172.20.37.129:[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io]]\nI0910 05:40:46.260081    5124 hosts.go:181] skipping update of unchanged /etc/hosts\nI0910 05:40:46.263116    5124 commands.go:38] not refreshing commands - TTL not hit\nI0910 05:40:46.263133    5124 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created\"\nI0910 05:40:46.377175    5124 controller.go:393] spec member_count:1 etcd_version:\"3.4.13\" \nI0910 05:40:46.377437    5124 controller.go:555] controller loop complete\nI0910 05:40:56.379623    5124 controller.go:187] starting controller iteration\nI0910 05:40:56.379652    5124 controller.go:264] Broadcasting leadership assertion with token \"Wolx0lcDXnJukyfBInCS0w\"\nI0910 05:40:56.380073    5124 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.37.129:3997\" > leadership_token:\"Wolx0lcDXnJukyfBInCS0w\" healthy:<id:\"etcd-events-a\" endpoints:\"172.20.37.129:3997\" > > \nI0910 05:40:56.380306    5124 controller.go:293] I am leader with token \"Wolx0lcDXnJukyfBInCS0w\"\nI0910 05:40:56.381400    5124 controller.go:703] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002]\nI0910 05:40:56.402767    5124 controller.go:300] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\"],\"ID\":\"3671789036165063137\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"172.20.37.129:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:3995\" > 
etcd_state:<cluster:<cluster_token:\"2WocR1fQSw54vUPtu7cp-w\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0910 05:40:56.402872    5124 controller.go:301] etcd cluster members: map[3671789036165063137:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\"],\"ID\":\"3671789036165063137\"}]\nI0910 05:40:56.402892    5124 controller.go:639] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io\" addresses:\"172.20.37.129\" > \nI0910 05:40:56.403122    5124 etcdserver.go:248] updating hosts: map[172.20.37.129:[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io]]\nI0910 05:40:56.403138    5124 hosts.go:84] hosts update: primary=map[172.20.37.129:[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:[172.20.37.129 172.20.37.129]], final=map[172.20.37.129:[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io]]\nI0910 05:40:56.403206    5124 hosts.go:181] skipping update of unchanged /etc/hosts\nI0910 05:40:56.403311    5124 commands.go:38] not refreshing commands - TTL not hit\nI0910 05:40:56.403323    5124 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created\"\nI0910 05:40:56.527787    5124 controller.go:393] spec member_count:1 etcd_version:\"3.4.13\" \nI0910 05:40:56.528002    5124 controller.go:555] controller loop complete\nI0910 05:41:06.529811    5124 controller.go:187] starting controller iteration\nI0910 05:41:06.529841    5124 controller.go:264] Broadcasting leadership assertion with token \"Wolx0lcDXnJukyfBInCS0w\"\nI0910 05:41:06.530320    5124 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.37.129:3997\" > leadership_token:\"Wolx0lcDXnJukyfBInCS0w\" healthy:<id:\"etcd-events-a\" endpoints:\"172.20.37.129:3997\" > > \nI0910 05:41:06.530630    5124 controller.go:293] I am leader with token \"Wolx0lcDXnJukyfBInCS0w\"\nI0910 05:41:06.531108    5124 controller.go:703] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002]\nI0910 05:41:06.557619    5124 controller.go:300] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\"],\"ID\":\"3671789036165063137\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"172.20.37.129:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\" 
quarantined_client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:3995\" > etcd_state:<cluster:<cluster_token:\"2WocR1fQSw54vUPtu7cp-w\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0910 05:41:06.557699    5124 controller.go:301] etcd cluster members: map[3671789036165063137:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\"],\"ID\":\"3671789036165063137\"}]\nI0910 05:41:06.557723    5124 controller.go:639] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io\" addresses:\"172.20.37.129\" > \nI0910 05:41:06.558039    5124 etcdserver.go:248] updating hosts: map[172.20.37.129:[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io]]\nI0910 05:41:06.558176    5124 hosts.go:84] hosts update: primary=map[172.20.37.129:[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:[172.20.37.129 172.20.37.129]], final=map[172.20.37.129:[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io]]\nI0910 05:41:06.558722    5124 hosts.go:181] skipping update of unchanged /etc/hosts\nI0910 05:41:06.559377    5124 commands.go:38] not refreshing commands - TTL not hit\nI0910 05:41:06.559503    5124 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created\"\nI0910 05:41:06.681358    5124 controller.go:393] spec member_count:1 etcd_version:\"3.4.13\" \nI0910 05:41:06.681448    5124 controller.go:555] controller loop complete\nI0910 05:41:16.683568    5124 controller.go:187] starting controller iteration\nI0910 05:41:16.683599    5124 controller.go:264] Broadcasting leadership assertion with token \"Wolx0lcDXnJukyfBInCS0w\"\nI0910 05:41:16.684054    5124 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.37.129:3997\" > leadership_token:\"Wolx0lcDXnJukyfBInCS0w\" healthy:<id:\"etcd-events-a\" endpoints:\"172.20.37.129:3997\" > > \nI0910 05:41:16.684328    5124 controller.go:293] I am leader with token \"Wolx0lcDXnJukyfBInCS0w\"\nI0910 05:41:16.685375    5124 controller.go:703] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002]\nI0910 05:41:16.706106    5124 controller.go:300] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\"],\"ID\":\"3671789036165063137\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"172.20.37.129:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\" 
client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:3995\" > etcd_state:<cluster:<cluster_token:\"2WocR1fQSw54vUPtu7cp-w\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0910 05:41:16.706205    5124 controller.go:301] etcd cluster members: map[3671789036165063137:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\"],\"ID\":\"3671789036165063137\"}]\nI0910 05:41:16.706224    5124 controller.go:639] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io\" addresses:\"172.20.37.129\" > \nI0910 05:41:16.706595    5124 etcdserver.go:248] updating hosts: map[172.20.37.129:[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io]]\nI0910 05:41:16.706677    5124 hosts.go:84] hosts update: primary=map[172.20.37.129:[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:[172.20.37.129 172.20.37.129]], final=map[172.20.37.129:[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io]]\nI0910 05:41:16.706785    5124 hosts.go:181] skipping update of unchanged /etc/hosts\nI0910 05:41:16.706946    5124 commands.go:38] not refreshing commands - TTL not hit\nI0910 05:41:16.706962    5124 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created\"\nI0910 05:41:16.813651    5124 controller.go:393] spec member_count:1 etcd_version:\"3.4.13\" \nI0910 05:41:16.813874    5124 controller.go:555] controller loop complete\nI0910 05:41:26.817534    5124 controller.go:187] starting controller iteration\nI0910 05:41:26.817585    5124 controller.go:264] Broadcasting leadership assertion with token \"Wolx0lcDXnJukyfBInCS0w\"\nI0910 05:41:26.818264    5124 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.37.129:3997\" > leadership_token:\"Wolx0lcDXnJukyfBInCS0w\" healthy:<id:\"etcd-events-a\" endpoints:\"172.20.37.129:3997\" > > \nI0910 05:41:26.818423    5124 controller.go:293] I am leader with token \"Wolx0lcDXnJukyfBInCS0w\"\nI0910 05:41:26.818865    5124 controller.go:703] base client OK for etcd for client urls [https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002]\nI0910 05:41:26.848372    5124 controller.go:300] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\"],\"ID\":\"3671789036165063137\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"172.20.37.129:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" 
peer_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:3995\" > etcd_state:<cluster:<cluster_token:\"2WocR1fQSw54vUPtu7cp-w\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0910 05:41:26.848968    5124 controller.go:301] etcd cluster members: map[3671789036165063137:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\"],\"ID\":\"3671789036165063137\"}]\nI0910 05:41:26.849160    5124 controller.go:639] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io\" addresses:\"172.20.37.129\" > \nI0910 05:41:26.849788    5124 etcdserver.go:248] updating hosts: map[172.20.37.129:[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io]]\nI0910 05:41:26.850435    5124 hosts.go:84] hosts update: primary=map[172.20.37.129:[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:[172.20.37.129 172.20.37.129]], final=map[172.20.37.129:[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io]]\nI0910 05:41:26.850751    5124 hosts.go:181] skipping update of unchanged /etc/hosts\nI0910 05:41:26.851007    5124 commands.go:38] not refreshing commands - TTL not hit\nI0910 05:41:26.851231    5124 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created\"\nI0910 05:41:26.969874    5124 controller.go:393] spec member_count:1 etcd_version:\"3.4.13\" \nI0910 05:41:26.969955    5124 controller.go:555] controller loop complete\nI0910 05:41:28.276911    5124 volumes.go:86] AWS API Request: ec2/DescribeVolumes\nI0910 05:41:28.426420    5124 volumes.go:86] AWS API Request: ec2/DescribeInstances\nI0910 05:41:28.476976    5124 hosts.go:84] hosts update: primary=map[172.20.37.129:[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:[172.20.37.129 172.20.37.129]], final=map[172.20.37.129:[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io]]\nI0910 05:41:28.477100    5124 hosts.go:181] skipping update of unchanged /etc/hosts\nI0910 05:41:36.971513    5124 controller.go:187] starting controller iteration\nI0910 05:41:36.971598    5124 controller.go:264] Broadcasting leadership assertion with token \"Wolx0lcDXnJukyfBInCS0w\"\nI0910 05:41:36.971978    5124 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.37.129:3997\" > leadership_token:\"Wolx0lcDXnJukyfBInCS0w\" healthy:<id:\"etcd-events-a\" endpoints:\"172.20.37.129:3997\" > > \nI0910 05:41:36.972118    5124 controller.go:293] I am leader with token \"Wolx0lcDXnJukyfBInCS0w\"\nI0910 05:41:36.972640    5124 controller.go:703] base client OK for 
etcd for client urls [https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002]\nI0910 05:41:36.985194    5124 controller.go:300] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\"],\"ID\":\"3671789036165063137\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"172.20.37.129:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:3995\" > etcd_state:<cluster:<cluster_token:\"2WocR1fQSw54vUPtu7cp-w\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0910 05:41:36.985291    5124 controller.go:301] etcd cluster members: map[3671789036165063137:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4002\"],\"ID\":\"3671789036165063137\"}]\nI0910 05:41:36.985325    5124 controller.go:639] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io\" addresses:\"172.20.37.129\" > \nI0910 05:41:36.985730    5124 etcdserver.go:248] updating hosts: map[172.20.37.129:[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io]]\nI0910 05:41:36.985750    5124 hosts.go:84] hosts update: primary=map[172.20.37.129:[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:[172.20.37.129 172.20.37.129]], final=map[172.20.37.129:[etcd-events-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io]]\nI0910 05:41:36.985812    5124 hosts.go:181] skipping update of unchanged /etc/hosts\nI0910 05:41:36.986013    5124 commands.go:38] not refreshing commands - TTL not hit\nI0910 05:41:36.986084    5124 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created\"\nI0910 05:41:37.096397    5124 controller.go:393] spec member_count:1 etcd_version:\"3.4.13\" \nI0910 05:41:37.096483    5124 controller.go:555] controller loop complete\n==== END logs for container etcd-manager of pod kube-system/etcd-manager-events-ip-172-20-37-129.us-west-2.compute.internal ====\n==== START logs for container etcd-manager of pod kube-system/etcd-manager-main-ip-172-20-37-129.us-west-2.compute.internal ====\netcd-manager\nI0910 05:35:25.049942    5180 volumes.go:86] AWS API Request: ec2metadata/GetToken\nI0910 05:35:25.053611    5180 volumes.go:86] AWS API Request: ec2metadata/GetDynamicData\nI0910 05:35:25.054203    5180 volumes.go:86] AWS API Request: ec2metadata/GetMetadata\nI0910 05:35:25.054616    5180 volumes.go:86] AWS API Request: 
ec2metadata/GetMetadata\nI0910 05:35:25.055002    5180 volumes.go:86] AWS API Request: ec2metadata/GetMetadata\nI0910 05:35:25.055385    5180 main.go:305] Mounting available etcd volumes matching tags [k8s.io/etcd/main k8s.io/role/master=1 kubernetes.io/cluster/e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io=owned]; nameTag=k8s.io/etcd/main\nI0910 05:35:25.056761    5180 volumes.go:86] AWS API Request: ec2/DescribeVolumes\nI0910 05:35:25.195238    5180 mounter.go:304] Trying to mount master volume: \"vol-0dc8c1c00f03317ac\"\nI0910 05:35:25.195469    5180 volumes.go:331] Trying to attach volume \"vol-0dc8c1c00f03317ac\" at \"/dev/xvdu\"\nI0910 05:35:25.195773    5180 volumes.go:86] AWS API Request: ec2/AttachVolume\nW0910 05:35:25.532346    5180 volumes.go:343] Invalid value '/dev/xvdu' for unixDevice. Attachment point /dev/xvdu is already in use\nI0910 05:35:25.532364    5180 volumes.go:331] Trying to attach volume \"vol-0dc8c1c00f03317ac\" at \"/dev/xvdv\"\nI0910 05:35:25.532476    5180 volumes.go:86] AWS API Request: ec2/AttachVolume\nI0910 05:35:25.936365    5180 volumes.go:349] AttachVolume request returned {\n  AttachTime: 2021-09-10 05:35:25.924 +0000 UTC,\n  Device: \"/dev/xvdv\",\n  InstanceId: \"i-02f21e556c94dcd7c\",\n  State: \"attaching\",\n  VolumeId: \"vol-0dc8c1c00f03317ac\"\n}\nI0910 05:35:25.936678    5180 volumes.go:86] AWS API Request: ec2/DescribeVolumes\nI0910 05:35:26.103131    5180 mounter.go:318] Currently attached volumes: [0xc0005efd00]\nI0910 05:35:26.103212    5180 mounter.go:72] Master volume \"vol-0dc8c1c00f03317ac\" is attached at \"/dev/xvdv\"\nI0910 05:35:26.103242    5180 mounter.go:86] Doing safe-format-and-mount of /dev/xvdv to /mnt/master-vol-0dc8c1c00f03317ac\nI0910 05:35:26.103285    5180 volumes.go:234] volume vol-0dc8c1c00f03317ac not mounted at /rootfs/dev/xvdv\nI0910 05:35:26.103316    5180 volumes.go:263] nvme path not found \"/rootfs/dev/disk/by-id/nvme-Amazon_Elastic_Block_Store_vol0dc8c1c00f03317ac\"\nI0910 05:35:26.103338    5180 volumes.go:251] volume vol-0dc8c1c00f03317ac not mounted at nvme-Amazon_Elastic_Block_Store_vol0dc8c1c00f03317ac\nI0910 05:35:26.103370    5180 mounter.go:121] Waiting for volume \"vol-0dc8c1c00f03317ac\" to be mounted\nI0910 05:35:27.103467    5180 volumes.go:234] volume vol-0dc8c1c00f03317ac not mounted at /rootfs/dev/xvdv\nI0910 05:35:27.103574    5180 volumes.go:248] found nvme volume \"nvme-Amazon_Elastic_Block_Store_vol0dc8c1c00f03317ac\" at \"/dev/nvme2n1\"\nI0910 05:35:27.103588    5180 mounter.go:125] Found volume \"vol-0dc8c1c00f03317ac\" mounted at device \"/dev/nvme2n1\"\nI0910 05:35:27.104394    5180 mounter.go:171] Creating mount directory \"/rootfs/mnt/master-vol-0dc8c1c00f03317ac\"\nI0910 05:35:27.104474    5180 mounter.go:176] Mounting device \"/dev/nvme2n1\" on \"/mnt/master-vol-0dc8c1c00f03317ac\"\nI0910 05:35:27.104485    5180 mount_linux.go:446] Attempting to determine if disk \"/dev/nvme2n1\" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/nvme2n1])\nI0910 05:35:27.104501    5180 nsenter.go:132] Running nsenter command: nsenter [--mount=/rootfs/proc/1/ns/mnt -- blkid -p -s TYPE -s PTTYPE -o export /dev/nvme2n1]\nI0910 05:35:27.125539    5180 mount_linux.go:449] Output: \"\"\nI0910 05:35:27.125560    5180 mount_linux.go:408] Disk \"/dev/nvme2n1\" appears to be unformatted, attempting to format as type: \"ext4\" with options: [-F -m0 /dev/nvme2n1]\nI0910 05:35:27.125578    5180 nsenter.go:132] Running nsenter command: nsenter [--mount=/rootfs/proc/1/ns/mnt -- mkfs.ext4 -F -m0 
/dev/nvme2n1]\nI0910 05:35:27.425678    5180 mount_linux.go:418] Disk successfully formatted (mkfs): ext4 - /dev/nvme2n1 /mnt/master-vol-0dc8c1c00f03317ac\nI0910 05:35:27.425698    5180 mount_linux.go:436] Attempting to mount disk /dev/nvme2n1 in ext4 format at /mnt/master-vol-0dc8c1c00f03317ac\nI0910 05:35:27.425730    5180 nsenter.go:80] nsenter mount /dev/nvme2n1 /mnt/master-vol-0dc8c1c00f03317ac ext4 [defaults]\nI0910 05:35:27.425755    5180 nsenter.go:132] Running nsenter command: nsenter [--mount=/rootfs/proc/1/ns/mnt -- /bin/systemd-run --description=Kubernetes transient mount for /mnt/master-vol-0dc8c1c00f03317ac --scope -- /bin/mount -t ext4 -o defaults /dev/nvme2n1 /mnt/master-vol-0dc8c1c00f03317ac]\nI0910 05:35:27.444434    5180 nsenter.go:84] Output of mounting /dev/nvme2n1 to /mnt/master-vol-0dc8c1c00f03317ac: Running scope as unit: run-r9b91cfa32d9e456db1d55a9e9739f58e.scope\nI0910 05:35:27.444455    5180 mount_linux.go:446] Attempting to determine if disk \"/dev/nvme2n1\" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/nvme2n1])\nI0910 05:35:27.444477    5180 nsenter.go:132] Running nsenter command: nsenter [--mount=/rootfs/proc/1/ns/mnt -- blkid -p -s TYPE -s PTTYPE -o export /dev/nvme2n1]\nI0910 05:35:27.456819    5180 mount_linux.go:449] Output: \"DEVNAME=/dev/nvme2n1\\nTYPE=ext4\\n\"\nI0910 05:35:27.456837    5180 resizefs_linux.go:55] ResizeFS.Resize - Expanding mounted volume /dev/nvme2n1\nI0910 05:35:27.456849    5180 nsenter.go:132] Running nsenter command: nsenter [--mount=/rootfs/proc/1/ns/mnt -- resize2fs /dev/nvme2n1]\nI0910 05:35:27.459263    5180 resizefs_linux.go:70] Device /dev/nvme2n1 resized successfully\nI0910 05:35:27.471890    5180 mount_linux.go:206] Detected OS with systemd\nI0910 05:35:27.474211    5180 mounter.go:224] mounting inside container: /rootfs/dev/nvme2n1 -> /rootfs/mnt/master-vol-0dc8c1c00f03317ac\nI0910 05:35:27.474242    5180 mount_linux.go:175] Mounting cmd (systemd-run) with arguments (--description=Kubernetes transient mount for /rootfs/mnt/master-vol-0dc8c1c00f03317ac --scope -- mount  /rootfs/dev/nvme2n1 /rootfs/mnt/master-vol-0dc8c1c00f03317ac)\nI0910 05:35:27.486461    5180 mounter.go:94] mounted master volume \"vol-0dc8c1c00f03317ac\" on /mnt/master-vol-0dc8c1c00f03317ac\nI0910 05:35:27.486491    5180 main.go:320] discovered IP address: 172.20.37.129\nI0910 05:35:27.486496    5180 main.go:325] Setting data dir to /rootfs/mnt/master-vol-0dc8c1c00f03317ac\nI0910 05:35:27.770010    5180 certs.go:211] generating certificate for \"etcd-manager-server-etcd-a\"\nI0910 05:35:27.928839    5180 certs.go:211] generating certificate for \"etcd-manager-client-etcd-a\"\nI0910 05:35:27.933644    5180 server.go:87] starting GRPC server using TLS, ServerName=\"etcd-manager-server-etcd-a\"\nI0910 05:35:27.934356    5180 main.go:473] peerClientIPs: [172.20.37.129]\nI0910 05:35:28.108998    5180 certs.go:211] generating certificate for \"etcd-manager-etcd-a\"\nI0910 05:35:28.110908    5180 server.go:105] GRPC server listening on \"172.20.37.129:3996\"\nI0910 05:35:28.111328    5180 volumes.go:86] AWS API Request: ec2/DescribeVolumes\nI0910 05:35:28.221133    5180 volumes.go:86] AWS API Request: ec2/DescribeInstances\nI0910 05:35:28.266426    5180 peers.go:115] found new candidate peer from discovery: etcd-a [{172.20.37.129 0} {172.20.37.129 0}]\nI0910 05:35:28.266482    5180 hosts.go:84] hosts update: primary=map[], fallbacks=map[etcd-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:[172.20.37.129 172.20.37.129]], 
final=map[172.20.37.129:[etcd-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io etcd-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io]]\nI0910 05:35:28.266650    5180 peers.go:295] connecting to peer \"etcd-a\" with TLS policy, servername=\"etcd-manager-server-etcd-a\"\nI0910 05:35:30.111037    5180 controller.go:187] starting controller iteration\nI0910 05:35:30.111442    5180 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-a\" endpoints:\"172.20.37.129:3996\" > leadership_token:\"cZvv7xkDc8zBOLAVfD-VEA\" healthy:<id:\"etcd-a\" endpoints:\"172.20.37.129:3996\" > > \nI0910 05:35:30.111684    5180 commands.go:41] refreshing commands\nI0910 05:35:30.111865    5180 s3context.go:334] product_uuid is \"ec23a353-0d6c-5600-b380-62777f3e55ab\", assuming running on EC2\nI0910 05:35:30.113383    5180 s3context.go:166] got region from metadata: \"us-west-2\"\nI0910 05:35:30.142798    5180 s3context.go:213] found bucket in region \"us-west-1\"\nI0910 05:35:30.335061    5180 vfs.go:120] listed commands in s3://k8s-kops-prow/e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io/backups/etcd/main/control: 0 commands\nI0910 05:35:30.335087    5180 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io/backups/etcd/main/control/etcd-cluster-spec\"\nI0910 05:35:40.375315    5180 controller.go:187] starting controller iteration\nI0910 05:35:40.375498    5180 controller.go:264] Broadcasting leadership assertion with token \"cZvv7xkDc8zBOLAVfD-VEA\"\nI0910 05:35:40.375860    5180 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-a\" endpoints:\"172.20.37.129:3996\" > leadership_token:\"cZvv7xkDc8zBOLAVfD-VEA\" healthy:<id:\"etcd-a\" endpoints:\"172.20.37.129:3996\" > > \nI0910 05:35:40.376116    5180 controller.go:293] I am leader with token \"cZvv7xkDc8zBOLAVfD-VEA\"\nI0910 05:35:40.376539    5180 controller.go:300] etcd cluster state: etcdClusterState\n  members:\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-a\" endpoints:\"172.20.37.129:3996\" }, info=cluster_name:\"etcd\" node_configuration:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2380\" client_urls:\"https://etcd-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:3994\" > }\nI0910 05:35:40.376778    5180 controller.go:301] etcd cluster members: map[]\nI0910 05:35:40.376875    5180 controller.go:639] sending member map to all peers: \nI0910 05:35:40.377265    5180 commands.go:38] not refreshing commands - TTL not hit\nI0910 05:35:40.377402    5180 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io/backups/etcd/main/control/etcd-cluster-created\"\nI0910 05:35:40.490909    5180 controller.go:357] detected that there is no existing cluster\nI0910 05:35:40.490924    5180 commands.go:41] refreshing commands\nI0910 05:35:40.598060    5180 vfs.go:120] listed commands in s3://k8s-kops-prow/e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io/backups/etcd/main/control: 0 commands\nI0910 05:35:40.598081    5180 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io/backups/etcd/main/control/etcd-cluster-spec\"\nI0910 05:35:40.632142    5180 controller.go:639] sending member map to all peers: members:<name:\"etcd-a\" dns:\"etcd-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io\" addresses:\"172.20.37.129\" > \nI0910 05:35:40.632509    5180 etcdserver.go:248] updating hosts: 
map[172.20.37.129:[etcd-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io]]\nI0910 05:35:40.632549    5180 hosts.go:84] hosts update: primary=map[172.20.37.129:[etcd-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io]], fallbacks=map[etcd-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:[172.20.37.129 172.20.37.129]], final=map[172.20.37.129:[etcd-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io]]\nI0910 05:35:40.632636    5180 hosts.go:181] skipping update of unchanged /etc/hosts\nI0910 05:35:40.632834    5180 newcluster.go:136] starting new etcd cluster with [etcdClusterPeerInfo{peer=peer{id:\"etcd-a\" endpoints:\"172.20.37.129:3996\" }, info=cluster_name:\"etcd\" node_configuration:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2380\" client_urls:\"https://etcd-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:3994\" > }]\nI0910 05:35:40.633320    5180 newcluster.go:153] JoinClusterResponse: \nI0910 05:35:40.634293    5180 etcdserver.go:556] starting etcd with state new_cluster:true cluster:<cluster_token:\"67Dy5dLv9XnowRTW46bRNA\" nodes:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2380\" client_urls:\"https://etcd-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:3994\" tls_enabled:true > > etcd_version:\"3.4.13\" quarantined:true \nI0910 05:35:40.634328    5180 etcdserver.go:565] starting etcd with datadir /rootfs/mnt/master-vol-0dc8c1c00f03317ac/data/67Dy5dLv9XnowRTW46bRNA\nI0910 05:35:40.634920    5180 pki.go:58] adding peerClientIPs [172.20.37.129]\nI0910 05:35:40.634944    5180 pki.go:66] generating peer keypair for etcd: {CommonName:etcd-a Organization:[] AltNames:{DNSNames:[etcd-a etcd-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io] IPs:[172.20.37.129 127.0.0.1]} Usages:[2 1]}\nI0910 05:35:41.268279    5180 certs.go:211] generating certificate for \"etcd-a\"\nI0910 05:35:41.270690    5180 pki.go:108] building client-serving certificate: {CommonName:etcd-a Organization:[] AltNames:{DNSNames:[etcd-a etcd-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io etcd-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io] IPs:[127.0.0.1]} Usages:[1 2]}\nI0910 05:35:41.563390    5180 certs.go:211] generating certificate for \"etcd-a\"\nI0910 05:35:41.814957    5180 certs.go:211] generating certificate for \"etcd-a\"\nI0910 05:35:41.816856    5180 etcdprocess.go:203] executing command /opt/etcd-v3.4.13-linux-amd64/etcd [/opt/etcd-v3.4.13-linux-amd64/etcd]\nI0910 05:35:41.817546    5180 newcluster.go:171] JoinClusterResponse: \nI0910 05:35:41.817642    5180 s3fs.go:199] Writing file \"s3://k8s-kops-prow/e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io/backups/etcd/main/control/etcd-cluster-spec\"\nI0910 05:35:41.817679    5180 s3context.go:241] Checking default bucket encryption for \"k8s-kops-prow\"\n2021-09-10 05:35:41.824541 I | pkg/flags: recognized and used environment variable ETCD_ADVERTISE_CLIENT_URLS=https://etcd-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:3994\n2021-09-10 05:35:41.824573 I | pkg/flags: recognized and used environment variable ETCD_CERT_FILE=/rootfs/mnt/master-vol-0dc8c1c00f03317ac/pki/67Dy5dLv9XnowRTW46bRNA/clients/server.crt\n2021-09-10 05:35:41.824584 I | pkg/flags: recognized and used environment variable ETCD_CLIENT_CERT_AUTH=true\n2021-09-10 
05:35:41.824596 I | pkg/flags: recognized and used environment variable ETCD_DATA_DIR=/rootfs/mnt/master-vol-0dc8c1c00f03317ac/data/67Dy5dLv9XnowRTW46bRNA\n2021-09-10 05:35:41.824606 I | pkg/flags: recognized and used environment variable ETCD_ENABLE_V2=false\n2021-09-10 05:35:41.824634 I | pkg/flags: recognized and used environment variable ETCD_INITIAL_ADVERTISE_PEER_URLS=https://etcd-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2380\n2021-09-10 05:35:41.824639 I | pkg/flags: recognized and used environment variable ETCD_INITIAL_CLUSTER=etcd-a=https://etcd-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2380\n2021-09-10 05:35:41.824643 I | pkg/flags: recognized and used environment variable ETCD_INITIAL_CLUSTER_STATE=new\n2021-09-10 05:35:41.824649 I | pkg/flags: recognized and used environment variable ETCD_INITIAL_CLUSTER_TOKEN=67Dy5dLv9XnowRTW46bRNA\n2021-09-10 05:35:41.824653 I | pkg/flags: recognized and used environment variable ETCD_KEY_FILE=/rootfs/mnt/master-vol-0dc8c1c00f03317ac/pki/67Dy5dLv9XnowRTW46bRNA/clients/server.key\n2021-09-10 05:35:41.824660 I | pkg/flags: recognized and used environment variable ETCD_LISTEN_CLIENT_URLS=https://0.0.0.0:3994\n2021-09-10 05:35:41.824668 I | pkg/flags: recognized and used environment variable ETCD_LISTEN_PEER_URLS=https://0.0.0.0:2380\n2021-09-10 05:35:41.824676 I | pkg/flags: recognized and used environment variable ETCD_LOG_OUTPUTS=stdout\n2021-09-10 05:35:41.824683 I | pkg/flags: recognized and used environment variable ETCD_LOGGER=zap\n2021-09-10 05:35:41.824693 I | pkg/flags: recognized and used environment variable ETCD_NAME=etcd-a\n2021-09-10 05:35:41.824700 I | pkg/flags: recognized and used environment variable ETCD_PEER_CERT_FILE=/rootfs/mnt/master-vol-0dc8c1c00f03317ac/pki/67Dy5dLv9XnowRTW46bRNA/peers/me.crt\n2021-09-10 05:35:41.824705 I | pkg/flags: recognized and used environment variable ETCD_PEER_CLIENT_CERT_AUTH=true\n2021-09-10 05:35:41.824711 I | pkg/flags: recognized and used environment variable ETCD_PEER_KEY_FILE=/rootfs/mnt/master-vol-0dc8c1c00f03317ac/pki/67Dy5dLv9XnowRTW46bRNA/peers/me.key\n2021-09-10 05:35:41.824716 I | pkg/flags: recognized and used environment variable ETCD_PEER_TRUSTED_CA_FILE=/rootfs/mnt/master-vol-0dc8c1c00f03317ac/pki/67Dy5dLv9XnowRTW46bRNA/peers/ca.crt\n2021-09-10 05:35:41.824742 I | pkg/flags: recognized and used environment variable ETCD_TRUSTED_CA_FILE=/rootfs/mnt/master-vol-0dc8c1c00f03317ac/pki/67Dy5dLv9XnowRTW46bRNA/clients/ca.crt\n2021-09-10 05:35:41.824752 W | pkg/flags: unrecognized environment variable ETCD_LISTEN_METRICS_URLS=\n{\"level\":\"info\",\"ts\":\"2021-09-10T05:35:41.824Z\",\"caller\":\"embed/etcd.go:117\",\"msg\":\"configuring peer listeners\",\"listen-peer-urls\":[\"https://0.0.0.0:2380\"]}\n{\"level\":\"info\",\"ts\":\"2021-09-10T05:35:41.824Z\",\"caller\":\"embed/etcd.go:468\",\"msg\":\"starting with peer TLS\",\"tls-info\":\"cert = /rootfs/mnt/master-vol-0dc8c1c00f03317ac/pki/67Dy5dLv9XnowRTW46bRNA/peers/me.crt, key = /rootfs/mnt/master-vol-0dc8c1c00f03317ac/pki/67Dy5dLv9XnowRTW46bRNA/peers/me.key, trusted-ca = /rootfs/mnt/master-vol-0dc8c1c00f03317ac/pki/67Dy5dLv9XnowRTW46bRNA/peers/ca.crt, client-cert-auth = true, crl-file = \",\"cipher-suites\":[]}\n{\"level\":\"info\",\"ts\":\"2021-09-10T05:35:41.825Z\",\"caller\":\"embed/etcd.go:127\",\"msg\":\"configuring client listeners\",\"listen-client-urls\":[\"https://0.0.0.0:3994\"]}\n{\"level\":\"info\",\"ts\":\"2021-09-10T05:35:41.825Z\",\"caller\":\"embed/etcd.go:302\",\"msg\":\"starting an etcd 
server\",\"etcd-version\":\"3.4.13\",\"git-sha\":\"ae9734ed2\",\"go-version\":\"go1.12.17\",\"go-os\":\"linux\",\"go-arch\":\"amd64\",\"max-cpu-set\":2,\"max-cpu-available\":2,\"member-initialized\":false,\"name\":\"etcd-a\",\"data-dir\":\"/rootfs/mnt/master-vol-0dc8c1c00f03317ac/data/67Dy5dLv9XnowRTW46bRNA\",\"wal-dir\":\"\",\"wal-dir-dedicated\":\"\",\"member-dir\":\"/rootfs/mnt/master-vol-0dc8c1c00f03317ac/data/67Dy5dLv9XnowRTW46bRNA/member\",\"force-new-cluster\":false,\"heartbeat-interval\":\"100ms\",\"election-timeout\":\"1s\",\"initial-election-tick-advance\":true,\"snapshot-count\":100000,\"snapshot-catchup-entries\":5000,\"initial-advertise-peer-urls\":[\"https://etcd-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2380\"],\"listen-peer-urls\":[\"https://0.0.0.0:2380\"],\"advertise-client-urls\":[\"https://etcd-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:3994\"],\"listen-client-urls\":[\"https://0.0.0.0:3994\"],\"listen-metrics-urls\":[],\"cors\":[\"*\"],\"host-whitelist\":[\"*\"],\"initial-cluster\":\"etcd-a=https://etcd-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2380\",\"initial-cluster-state\":\"new\",\"initial-cluster-token\":\"67Dy5dLv9XnowRTW46bRNA\",\"quota-size-bytes\":2147483648,\"pre-vote\":false,\"initial-corrupt-check\":false,\"corrupt-check-time-interval\":\"0s\",\"auto-compaction-mode\":\"periodic\",\"auto-compaction-retention\":\"0s\",\"auto-compaction-interval\":\"0s\",\"discovery-url\":\"\",\"discovery-proxy\":\"\"}\n{\"level\":\"info\",\"ts\":\"2021-09-10T05:35:41.830Z\",\"caller\":\"etcdserver/backend.go:80\",\"msg\":\"opened backend db\",\"path\":\"/rootfs/mnt/master-vol-0dc8c1c00f03317ac/data/67Dy5dLv9XnowRTW46bRNA/member/snap/db\",\"took\":\"4.207895ms\"}\n{\"level\":\"info\",\"ts\":\"2021-09-10T05:35:41.831Z\",\"caller\":\"netutil/netutil.go:112\",\"msg\":\"resolved URL Host\",\"url\":\"https://etcd-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2380\",\"host\":\"etcd-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2380\",\"resolved-addr\":\"172.20.37.129:2380\"}\n{\"level\":\"info\",\"ts\":\"2021-09-10T05:35:41.831Z\",\"caller\":\"netutil/netutil.go:112\",\"msg\":\"resolved URL Host\",\"url\":\"https://etcd-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2380\",\"host\":\"etcd-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2380\",\"resolved-addr\":\"172.20.37.129:2380\"}\n{\"level\":\"info\",\"ts\":\"2021-09-10T05:35:41.836Z\",\"caller\":\"etcdserver/raft.go:486\",\"msg\":\"starting local member\",\"local-member-id\":\"73a59233206550e4\",\"cluster-id\":\"dd1aa8abe2fb79bc\"}\n{\"level\":\"info\",\"ts\":\"2021-09-10T05:35:41.837Z\",\"caller\":\"raft/raft.go:1530\",\"msg\":\"73a59233206550e4 switched to configuration voters=()\"}\n{\"level\":\"info\",\"ts\":\"2021-09-10T05:35:41.837Z\",\"caller\":\"raft/raft.go:700\",\"msg\":\"73a59233206550e4 became follower at term 0\"}\n{\"level\":\"info\",\"ts\":\"2021-09-10T05:35:41.837Z\",\"caller\":\"raft/raft.go:383\",\"msg\":\"newRaft 73a59233206550e4 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]\"}\n{\"level\":\"info\",\"ts\":\"2021-09-10T05:35:41.837Z\",\"caller\":\"raft/raft.go:700\",\"msg\":\"73a59233206550e4 became follower at term 1\"}\n{\"level\":\"info\",\"ts\":\"2021-09-10T05:35:41.837Z\",\"caller\":\"raft/raft.go:1530\",\"msg\":\"73a59233206550e4 switched to configuration voters=(8333227433803469028)\"}\n{\"level\":\"warn\",\"ts\":\"2021-09-10T05:35:41.840Z\",\"caller\":\"auth/store.go:1366\",\"msg\":\"simple token is not 
cryptographically signed\"}\n{\"level\":\"info\",\"ts\":\"2021-09-10T05:35:41.845Z\",\"caller\":\"etcdserver/quota.go:98\",\"msg\":\"enabled backend quota with default value\",\"quota-name\":\"v3-applier\",\"quota-size-bytes\":2147483648,\"quota-size\":\"2.1 GB\"}\n{\"level\":\"info\",\"ts\":\"2021-09-10T05:35:41.847Z\",\"caller\":\"etcdserver/server.go:803\",\"msg\":\"starting etcd server\",\"local-member-id\":\"73a59233206550e4\",\"local-server-version\":\"3.4.13\",\"cluster-version\":\"to_be_decided\"}\n{\"level\":\"info\",\"ts\":\"2021-09-10T05:35:41.848Z\",\"caller\":\"etcdserver/server.go:669\",\"msg\":\"started as single-node; fast-forwarding election ticks\",\"local-member-id\":\"73a59233206550e4\",\"forward-ticks\":9,\"forward-duration\":\"900ms\",\"election-ticks\":10,\"election-timeout\":\"1s\"}\n{\"level\":\"info\",\"ts\":\"2021-09-10T05:35:41.848Z\",\"caller\":\"raft/raft.go:1530\",\"msg\":\"73a59233206550e4 switched to configuration voters=(8333227433803469028)\"}\n{\"level\":\"info\",\"ts\":\"2021-09-10T05:35:41.848Z\",\"caller\":\"membership/cluster.go:392\",\"msg\":\"added member\",\"cluster-id\":\"dd1aa8abe2fb79bc\",\"local-member-id\":\"73a59233206550e4\",\"added-peer-id\":\"73a59233206550e4\",\"added-peer-peer-urls\":[\"https://etcd-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2380\"]}\n{\"level\":\"info\",\"ts\":\"2021-09-10T05:35:41.849Z\",\"caller\":\"embed/etcd.go:711\",\"msg\":\"starting with client TLS\",\"tls-info\":\"cert = /rootfs/mnt/master-vol-0dc8c1c00f03317ac/pki/67Dy5dLv9XnowRTW46bRNA/clients/server.crt, key = /rootfs/mnt/master-vol-0dc8c1c00f03317ac/pki/67Dy5dLv9XnowRTW46bRNA/clients/server.key, trusted-ca = /rootfs/mnt/master-vol-0dc8c1c00f03317ac/pki/67Dy5dLv9XnowRTW46bRNA/clients/ca.crt, client-cert-auth = true, crl-file = \",\"cipher-suites\":[]}\n{\"level\":\"info\",\"ts\":\"2021-09-10T05:35:41.849Z\",\"caller\":\"embed/etcd.go:244\",\"msg\":\"now serving peer/client/metrics\",\"local-member-id\":\"73a59233206550e4\",\"initial-advertise-peer-urls\":[\"https://etcd-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2380\"],\"listen-peer-urls\":[\"https://0.0.0.0:2380\"],\"advertise-client-urls\":[\"https://etcd-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:3994\"],\"listen-client-urls\":[\"https://0.0.0.0:3994\"],\"listen-metrics-urls\":[]}\n{\"level\":\"info\",\"ts\":\"2021-09-10T05:35:41.849Z\",\"caller\":\"embed/etcd.go:579\",\"msg\":\"serving peer traffic\",\"address\":\"[::]:2380\"}\nI0910 05:35:41.889029    5180 s3fs.go:199] Writing file \"s3://k8s-kops-prow/e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io/backups/etcd/main/control/etcd-cluster-created\"\nI0910 05:35:41.937375    5180 controller.go:187] starting controller iteration\nI0910 05:35:41.937403    5180 controller.go:264] Broadcasting leadership assertion with token \"cZvv7xkDc8zBOLAVfD-VEA\"\nI0910 05:35:41.937863    5180 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-a\" endpoints:\"172.20.37.129:3996\" > leadership_token:\"cZvv7xkDc8zBOLAVfD-VEA\" healthy:<id:\"etcd-a\" endpoints:\"172.20.37.129:3996\" > > \nI0910 05:35:41.938112    5180 controller.go:293] I am leader with token \"cZvv7xkDc8zBOLAVfD-VEA\"\nI0910 05:35:41.938856    5180 controller.go:703] base client OK for etcd for client urls [https://etcd-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:3994]\n{\"level\":\"info\",\"ts\":\"2021-09-10T05:35:42.637Z\",\"caller\":\"raft/raft.go:923\",\"msg\":\"73a59233206550e4 is starting a new election at term 
1\"}\n{\"level\":\"info\",\"ts\":\"2021-09-10T05:35:42.637Z\",\"caller\":\"raft/raft.go:713\",\"msg\":\"73a59233206550e4 became candidate at term 2\"}\n{\"level\":\"info\",\"ts\":\"2021-09-10T05:35:42.637Z\",\"caller\":\"raft/raft.go:824\",\"msg\":\"73a59233206550e4 received MsgVoteResp from 73a59233206550e4 at term 2\"}\n{\"level\":\"info\",\"ts\":\"2021-09-10T05:35:42.637Z\",\"caller\":\"raft/raft.go:765\",\"msg\":\"73a59233206550e4 became leader at term 2\"}\n{\"level\":\"info\",\"ts\":\"2021-09-10T05:35:42.637Z\",\"caller\":\"raft/node.go:325\",\"msg\":\"raft.node: 73a59233206550e4 elected leader 73a59233206550e4 at term 2\"}\n{\"level\":\"info\",\"ts\":\"2021-09-10T05:35:42.637Z\",\"caller\":\"etcdserver/server.go:2037\",\"msg\":\"published local member to cluster through raft\",\"local-member-id\":\"73a59233206550e4\",\"local-member-attributes\":\"{Name:etcd-a ClientURLs:[https://etcd-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:3994]}\",\"request-path\":\"/0/members/73a59233206550e4/attributes\",\"cluster-id\":\"dd1aa8abe2fb79bc\",\"publish-timeout\":\"7s\"}\n{\"level\":\"info\",\"ts\":\"2021-09-10T05:35:42.638Z\",\"caller\":\"etcdserver/server.go:2528\",\"msg\":\"setting up initial cluster version\",\"cluster-version\":\"3.4\"}\n{\"level\":\"info\",\"ts\":\"2021-09-10T05:35:42.638Z\",\"caller\":\"membership/cluster.go:558\",\"msg\":\"set initial cluster version\",\"cluster-id\":\"dd1aa8abe2fb79bc\",\"local-member-id\":\"73a59233206550e4\",\"cluster-version\":\"3.4\"}\n{\"level\":\"info\",\"ts\":\"2021-09-10T05:35:42.638Z\",\"caller\":\"api/capability.go:76\",\"msg\":\"enabled capabilities for version\",\"cluster-version\":\"3.4\"}\n{\"level\":\"info\",\"ts\":\"2021-09-10T05:35:42.638Z\",\"caller\":\"etcdserver/server.go:2560\",\"msg\":\"cluster version is updated\",\"cluster-version\":\"3.4\"}\n{\"level\":\"info\",\"ts\":\"2021-09-10T05:35:42.639Z\",\"caller\":\"embed/serve.go:191\",\"msg\":\"serving client traffic securely\",\"address\":\"[::]:3994\"}\nI0910 05:35:42.657934    5180 controller.go:300] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-a\",\"peerURLs\":[\"https://etcd-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2380\"],\"endpoints\":[\"https://etcd-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:3994\"],\"ID\":\"8333227433803469028\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-a\" endpoints:\"172.20.37.129:3996\" }, info=cluster_name:\"etcd\" node_configuration:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2380\" client_urls:\"https://etcd-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:3994\" > etcd_state:<cluster:<cluster_token:\"67Dy5dLv9XnowRTW46bRNA\" nodes:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2380\" client_urls:\"https://etcd-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:3994\" tls_enabled:true > > etcd_version:\"3.4.13\" quarantined:true > }\nI0910 05:35:42.658295    5180 controller.go:301] etcd cluster members: map[8333227433803469028:{\"name\":\"etcd-a\",\"peerURLs\":[\"https://etcd-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2380\"],\"endpoints\":[\"https://etcd-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:3994\"],\"ID\":\"8333227433803469028\"}]\nI0910 05:35:42.658383    
5180 controller.go:639] sending member map to all peers: members:<name:\"etcd-a\" dns:\"etcd-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io\" addresses:\"172.20.37.129\" > \nI0910 05:35:42.658649    5180 etcdserver.go:248] updating hosts: map[172.20.37.129:[etcd-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io]]\nI0910 05:35:42.658667    5180 hosts.go:84] hosts update: primary=map[172.20.37.129:[etcd-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io]], fallbacks=map[etcd-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:[172.20.37.129 172.20.37.129]], final=map[172.20.37.129:[etcd-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io]]\nI0910 05:35:42.658750    5180 hosts.go:181] skipping update of unchanged /etc/hosts\nI0910 05:35:42.658926    5180 commands.go:38] not refreshing commands - TTL not hit\nI0910 05:35:42.659028    5180 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io/backups/etcd/main/control/etcd-cluster-created\"\nI0910 05:35:42.695067    5180 controller.go:393] spec member_count:1 etcd_version:\"3.4.13\" \nI0910 05:35:42.695856    5180 backup.go:128] performing snapshot save to /tmp/714183436/snapshot.db.gz\n{\"level\":\"info\",\"ts\":\"2021-09-10T05:35:42.701Z\",\"logger\":\"etcd-client\",\"caller\":\"v3/maintenance.go:211\",\"msg\":\"opened snapshot stream; downloading\"}\n{\"level\":\"info\",\"ts\":\"2021-09-10T05:35:42.703Z\",\"caller\":\"v3rpc/maintenance.go:139\",\"msg\":\"sending database snapshot to client\",\"total-bytes\":20480,\"size\":\"20 kB\"}\n{\"level\":\"info\",\"ts\":\"2021-09-10T05:35:42.703Z\",\"caller\":\"v3rpc/maintenance.go:177\",\"msg\":\"sending database sha256 checksum to client\",\"total-bytes\":20480,\"checksum-size\":32}\n{\"level\":\"info\",\"ts\":\"2021-09-10T05:35:42.704Z\",\"caller\":\"v3rpc/maintenance.go:191\",\"msg\":\"successfully sent database snapshot to client\",\"total-bytes\":20480,\"size\":\"20 kB\",\"took\":\"now\"}\n{\"level\":\"info\",\"ts\":\"2021-09-10T05:35:42.705Z\",\"logger\":\"etcd-client\",\"caller\":\"v3/maintenance.go:219\",\"msg\":\"completed snapshot read; closing\"}\nI0910 05:35:42.705358    5180 s3fs.go:199] Writing file \"s3://k8s-kops-prow/e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io/backups/etcd/main/2021-09-10T05:35:42Z-000001/etcd.backup.gz\"\nI0910 05:35:42.799719    5180 s3fs.go:199] Writing file \"s3://k8s-kops-prow/e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io/backups/etcd/main/2021-09-10T05:35:42Z-000001/_etcd_backup.meta\"\nI0910 05:35:42.843922    5180 backup.go:153] backup complete: name:\"2021-09-10T05:35:42Z-000001\" \nI0910 05:35:42.844454    5180 controller.go:935] backup response: name:\"2021-09-10T05:35:42Z-000001\" \nI0910 05:35:42.844492    5180 controller.go:574] took backup: name:\"2021-09-10T05:35:42Z-000001\" \nI0910 05:35:42.885239    5180 vfs.go:118] listed backups in s3://k8s-kops-prow/e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io/backups/etcd/main: [2021-09-10T05:35:42Z-000001]\nI0910 05:35:42.885261    5180 cleanup.go:166] retaining backup \"2021-09-10T05:35:42Z-000001\"\nI0910 05:35:42.885288    5180 restore.go:98] Setting quarantined state to false\nI0910 05:35:42.885644    5180 etcdserver.go:393] Reconfigure request: header:<leadership_token:\"cZvv7xkDc8zBOLAVfD-VEA\" cluster_name:\"etcd\" > \nI0910 05:35:42.885690    5180 etcdserver.go:436] Stopping etcd for reconfigure request: header:<leadership_token:\"cZvv7xkDc8zBOLAVfD-VEA\" cluster_name:\"etcd\" > \nI0910 05:35:42.885701    5180 etcdserver.go:640] killing etcd with datadir 
/rootfs/mnt/master-vol-0dc8c1c00f03317ac/data/67Dy5dLv9XnowRTW46bRNA\nI0910 05:35:42.885955    5180 etcdprocess.go:131] Waiting for etcd to exit\nI0910 05:35:42.986588    5180 etcdprocess.go:131] Waiting for etcd to exit\nI0910 05:35:42.986611    5180 etcdprocess.go:136] Exited etcd: signal: killed\nI0910 05:35:42.986678    5180 etcdserver.go:443] updated cluster state: cluster:<cluster_token:\"67Dy5dLv9XnowRTW46bRNA\" nodes:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2380\" client_urls:\"https://etcd-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:3994\" tls_enabled:true > > etcd_version:\"3.4.13\" \nI0910 05:35:42.986842    5180 etcdserver.go:448] Starting etcd version \"3.4.13\"\nI0910 05:35:42.986852    5180 etcdserver.go:556] starting etcd with state cluster:<cluster_token:\"67Dy5dLv9XnowRTW46bRNA\" nodes:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2380\" client_urls:\"https://etcd-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:3994\" tls_enabled:true > > etcd_version:\"3.4.13\" \nI0910 05:35:42.986885    5180 etcdserver.go:565] starting etcd with datadir /rootfs/mnt/master-vol-0dc8c1c00f03317ac/data/67Dy5dLv9XnowRTW46bRNA\nI0910 05:35:42.987083    5180 pki.go:58] adding peerClientIPs [172.20.37.129]\nI0910 05:35:42.987131    5180 pki.go:66] generating peer keypair for etcd: {CommonName:etcd-a Organization:[] AltNames:{DNSNames:[etcd-a etcd-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io] IPs:[172.20.37.129 127.0.0.1]} Usages:[2 1]}\nI0910 05:35:42.987400    5180 certs.go:151] existing certificate not valid after 2023-09-10T05:35:41Z; will regenerate\nI0910 05:35:42.987413    5180 certs.go:211] generating certificate for \"etcd-a\"\nI0910 05:35:42.989730    5180 pki.go:108] building client-serving certificate: {CommonName:etcd-a Organization:[] AltNames:{DNSNames:[etcd-a etcd-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io etcd-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io] IPs:[127.0.0.1]} Usages:[1 2]}\nI0910 05:35:42.989927    5180 certs.go:151] existing certificate not valid after 2023-09-10T05:35:41Z; will regenerate\nI0910 05:35:42.989941    5180 certs.go:211] generating certificate for \"etcd-a\"\nI0910 05:35:43.195911    5180 certs.go:211] generating certificate for \"etcd-a\"\nI0910 05:35:43.197878    5180 etcdprocess.go:203] executing command /opt/etcd-v3.4.13-linux-amd64/etcd [/opt/etcd-v3.4.13-linux-amd64/etcd]\nI0910 05:35:43.198419    5180 restore.go:116] ReconfigureResponse: \nI0910 05:35:43.199647    5180 controller.go:187] starting controller iteration\nI0910 05:35:43.199671    5180 controller.go:264] Broadcasting leadership assertion with token \"cZvv7xkDc8zBOLAVfD-VEA\"\nI0910 05:35:43.199908    5180 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-a\" endpoints:\"172.20.37.129:3996\" > leadership_token:\"cZvv7xkDc8zBOLAVfD-VEA\" healthy:<id:\"etcd-a\" endpoints:\"172.20.37.129:3996\" > > \nI0910 05:35:43.200026    5180 controller.go:293] I am leader with token \"cZvv7xkDc8zBOLAVfD-VEA\"\nI0910 05:35:43.200459    5180 controller.go:703] base client OK for etcd for client urls [https://etcd-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4001]\n2021-09-10 05:35:43.205838 I | pkg/flags: recognized and used environment 
variable ETCD_ADVERTISE_CLIENT_URLS=https://etcd-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4001\n2021-09-10 05:35:43.205874 I | pkg/flags: recognized and used environment variable ETCD_CERT_FILE=/rootfs/mnt/master-vol-0dc8c1c00f03317ac/pki/67Dy5dLv9XnowRTW46bRNA/clients/server.crt\n2021-09-10 05:35:43.205881 I | pkg/flags: recognized and used environment variable ETCD_CLIENT_CERT_AUTH=true\n2021-09-10 05:35:43.205890 I | pkg/flags: recognized and used environment variable ETCD_DATA_DIR=/rootfs/mnt/master-vol-0dc8c1c00f03317ac/data/67Dy5dLv9XnowRTW46bRNA\n2021-09-10 05:35:43.205901 I | pkg/flags: recognized and used environment variable ETCD_ENABLE_V2=false\n2021-09-10 05:35:43.206095 I | pkg/flags: recognized and used environment variable ETCD_INITIAL_ADVERTISE_PEER_URLS=https://etcd-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2380\n2021-09-10 05:35:43.206106 I | pkg/flags: recognized and used environment variable ETCD_INITIAL_CLUSTER=etcd-a=https://etcd-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2380\n2021-09-10 05:35:43.206112 I | pkg/flags: recognized and used environment variable ETCD_INITIAL_CLUSTER_STATE=existing\n2021-09-10 05:35:43.206194 I | pkg/flags: recognized and used environment variable ETCD_INITIAL_CLUSTER_TOKEN=67Dy5dLv9XnowRTW46bRNA\n2021-09-10 05:35:43.206206 I | pkg/flags: recognized and used environment variable ETCD_KEY_FILE=/rootfs/mnt/master-vol-0dc8c1c00f03317ac/pki/67Dy5dLv9XnowRTW46bRNA/clients/server.key\n2021-09-10 05:35:43.206294 I | pkg/flags: recognized and used environment variable ETCD_LISTEN_CLIENT_URLS=https://0.0.0.0:4001\n2021-09-10 05:35:43.206308 I | pkg/flags: recognized and used environment variable ETCD_LISTEN_PEER_URLS=https://0.0.0.0:2380\n2021-09-10 05:35:43.206316 I | pkg/flags: recognized and used environment variable ETCD_LOG_OUTPUTS=stdout\n2021-09-10 05:35:43.206408 I | pkg/flags: recognized and used environment variable ETCD_LOGGER=zap\n2021-09-10 05:35:43.206418 I | pkg/flags: recognized and used environment variable ETCD_NAME=etcd-a\n2021-09-10 05:35:43.206425 I | pkg/flags: recognized and used environment variable ETCD_PEER_CERT_FILE=/rootfs/mnt/master-vol-0dc8c1c00f03317ac/pki/67Dy5dLv9XnowRTW46bRNA/peers/me.crt\n2021-09-10 05:35:43.206458 I | pkg/flags: recognized and used environment variable ETCD_PEER_CLIENT_CERT_AUTH=true\n2021-09-10 05:35:43.206465 I | pkg/flags: recognized and used environment variable ETCD_PEER_KEY_FILE=/rootfs/mnt/master-vol-0dc8c1c00f03317ac/pki/67Dy5dLv9XnowRTW46bRNA/peers/me.key\n2021-09-10 05:35:43.206477 I | pkg/flags: recognized and used environment variable ETCD_PEER_TRUSTED_CA_FILE=/rootfs/mnt/master-vol-0dc8c1c00f03317ac/pki/67Dy5dLv9XnowRTW46bRNA/peers/ca.crt\n2021-09-10 05:35:43.206492 I | pkg/flags: recognized and used environment variable ETCD_TRUSTED_CA_FILE=/rootfs/mnt/master-vol-0dc8c1c00f03317ac/pki/67Dy5dLv9XnowRTW46bRNA/clients/ca.crt\n2021-09-10 05:35:43.206535 W | pkg/flags: unrecognized environment variable ETCD_LISTEN_METRICS_URLS=\n{\"level\":\"info\",\"ts\":\"2021-09-10T05:35:43.206Z\",\"caller\":\"etcdmain/etcd.go:134\",\"msg\":\"server has been already initialized\",\"data-dir\":\"/rootfs/mnt/master-vol-0dc8c1c00f03317ac/data/67Dy5dLv9XnowRTW46bRNA\",\"dir-type\":\"member\"}\n{\"level\":\"info\",\"ts\":\"2021-09-10T05:35:43.206Z\",\"caller\":\"embed/etcd.go:117\",\"msg\":\"configuring peer 
listeners\",\"listen-peer-urls\":[\"https://0.0.0.0:2380\"]}\n{\"level\":\"info\",\"ts\":\"2021-09-10T05:35:43.206Z\",\"caller\":\"embed/etcd.go:468\",\"msg\":\"starting with peer TLS\",\"tls-info\":\"cert = /rootfs/mnt/master-vol-0dc8c1c00f03317ac/pki/67Dy5dLv9XnowRTW46bRNA/peers/me.crt, key = /rootfs/mnt/master-vol-0dc8c1c00f03317ac/pki/67Dy5dLv9XnowRTW46bRNA/peers/me.key, trusted-ca = /rootfs/mnt/master-vol-0dc8c1c00f03317ac/pki/67Dy5dLv9XnowRTW46bRNA/peers/ca.crt, client-cert-auth = true, crl-file = \",\"cipher-suites\":[]}\n{\"level\":\"info\",\"ts\":\"2021-09-10T05:35:43.207Z\",\"caller\":\"embed/etcd.go:127\",\"msg\":\"configuring client listeners\",\"listen-client-urls\":[\"https://0.0.0.0:4001\"]}\n{\"level\":\"info\",\"ts\":\"2021-09-10T05:35:43.207Z\",\"caller\":\"embed/etcd.go:302\",\"msg\":\"starting an etcd server\",\"etcd-version\":\"3.4.13\",\"git-sha\":\"ae9734ed2\",\"go-version\":\"go1.12.17\",\"go-os\":\"linux\",\"go-arch\":\"amd64\",\"max-cpu-set\":2,\"max-cpu-available\":2,\"member-initialized\":true,\"name\":\"etcd-a\",\"data-dir\":\"/rootfs/mnt/master-vol-0dc8c1c00f03317ac/data/67Dy5dLv9XnowRTW46bRNA\",\"wal-dir\":\"\",\"wal-dir-dedicated\":\"\",\"member-dir\":\"/rootfs/mnt/master-vol-0dc8c1c00f03317ac/data/67Dy5dLv9XnowRTW46bRNA/member\",\"force-new-cluster\":false,\"heartbeat-interval\":\"100ms\",\"election-timeout\":\"1s\",\"initial-election-tick-advance\":true,\"snapshot-count\":100000,\"snapshot-catchup-entries\":5000,\"initial-advertise-peer-urls\":[\"https://etcd-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2380\"],\"listen-peer-urls\":[\"https://0.0.0.0:2380\"],\"advertise-client-urls\":[\"https://etcd-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4001\"],\"listen-client-urls\":[\"https://0.0.0.0:4001\"],\"listen-metrics-urls\":[],\"cors\":[\"*\"],\"host-whitelist\":[\"*\"],\"initial-cluster\":\"\",\"initial-cluster-state\":\"existing\",\"initial-cluster-token\":\"\",\"quota-size-bytes\":2147483648,\"pre-vote\":false,\"initial-corrupt-check\":false,\"corrupt-check-time-interval\":\"0s\",\"auto-compaction-mode\":\"periodic\",\"auto-compaction-retention\":\"0s\",\"auto-compaction-interval\":\"0s\",\"discovery-url\":\"\",\"discovery-proxy\":\"\"}\n{\"level\":\"info\",\"ts\":\"2021-09-10T05:35:43.207Z\",\"caller\":\"etcdserver/backend.go:80\",\"msg\":\"opened backend db\",\"path\":\"/rootfs/mnt/master-vol-0dc8c1c00f03317ac/data/67Dy5dLv9XnowRTW46bRNA/member/snap/db\",\"took\":\"108.237µs\"}\n{\"level\":\"info\",\"ts\":\"2021-09-10T05:35:43.208Z\",\"caller\":\"etcdserver/raft.go:536\",\"msg\":\"restarting local member\",\"cluster-id\":\"dd1aa8abe2fb79bc\",\"local-member-id\":\"73a59233206550e4\",\"commit-index\":4}\n{\"level\":\"info\",\"ts\":\"2021-09-10T05:35:43.209Z\",\"caller\":\"raft/raft.go:1530\",\"msg\":\"73a59233206550e4 switched to configuration voters=()\"}\n{\"level\":\"info\",\"ts\":\"2021-09-10T05:35:43.209Z\",\"caller\":\"raft/raft.go:700\",\"msg\":\"73a59233206550e4 became follower at term 2\"}\n{\"level\":\"info\",\"ts\":\"2021-09-10T05:35:43.209Z\",\"caller\":\"raft/raft.go:383\",\"msg\":\"newRaft 73a59233206550e4 [peers: [], term: 2, commit: 4, applied: 0, lastindex: 4, lastterm: 2]\"}\n{\"level\":\"warn\",\"ts\":\"2021-09-10T05:35:43.210Z\",\"caller\":\"auth/store.go:1366\",\"msg\":\"simple token is not cryptographically signed\"}\n{\"level\":\"info\",\"ts\":\"2021-09-10T05:35:43.211Z\",\"caller\":\"etcdserver/quota.go:98\",\"msg\":\"enabled backend quota with default 
value\",\"quota-name\":\"v3-applier\",\"quota-size-bytes\":2147483648,\"quota-size\":\"2.1 GB\"}\n{\"level\":\"info\",\"ts\":\"2021-09-10T05:35:43.213Z\",\"caller\":\"etcdserver/server.go:803\",\"msg\":\"starting etcd server\",\"local-member-id\":\"73a59233206550e4\",\"local-server-version\":\"3.4.13\",\"cluster-version\":\"to_be_decided\"}\n{\"level\":\"info\",\"ts\":\"2021-09-10T05:35:43.213Z\",\"caller\":\"etcdserver/server.go:691\",\"msg\":\"starting initial election tick advance\",\"election-ticks\":10}\n{\"level\":\"info\",\"ts\":\"2021-09-10T05:35:43.214Z\",\"caller\":\"raft/raft.go:1530\",\"msg\":\"73a59233206550e4 switched to configuration voters=(8333227433803469028)\"}\n{\"level\":\"info\",\"ts\":\"2021-09-10T05:35:43.215Z\",\"caller\":\"membership/cluster.go:392\",\"msg\":\"added member\",\"cluster-id\":\"dd1aa8abe2fb79bc\",\"local-member-id\":\"73a59233206550e4\",\"added-peer-id\":\"73a59233206550e4\",\"added-peer-peer-urls\":[\"https://etcd-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2380\"]}\n{\"level\":\"info\",\"ts\":\"2021-09-10T05:35:43.215Z\",\"caller\":\"membership/cluster.go:558\",\"msg\":\"set initial cluster version\",\"cluster-id\":\"dd1aa8abe2fb79bc\",\"local-member-id\":\"73a59233206550e4\",\"cluster-version\":\"3.4\"}\n{\"level\":\"info\",\"ts\":\"2021-09-10T05:35:43.215Z\",\"caller\":\"api/capability.go:76\",\"msg\":\"enabled capabilities for version\",\"cluster-version\":\"3.4\"}\n{\"level\":\"info\",\"ts\":\"2021-09-10T05:35:43.217Z\",\"caller\":\"embed/etcd.go:711\",\"msg\":\"starting with client TLS\",\"tls-info\":\"cert = /rootfs/mnt/master-vol-0dc8c1c00f03317ac/pki/67Dy5dLv9XnowRTW46bRNA/clients/server.crt, key = /rootfs/mnt/master-vol-0dc8c1c00f03317ac/pki/67Dy5dLv9XnowRTW46bRNA/clients/server.key, trusted-ca = /rootfs/mnt/master-vol-0dc8c1c00f03317ac/pki/67Dy5dLv9XnowRTW46bRNA/clients/ca.crt, client-cert-auth = true, crl-file = \",\"cipher-suites\":[]}\n{\"level\":\"info\",\"ts\":\"2021-09-10T05:35:43.217Z\",\"caller\":\"embed/etcd.go:244\",\"msg\":\"now serving peer/client/metrics\",\"local-member-id\":\"73a59233206550e4\",\"initial-advertise-peer-urls\":[\"https://etcd-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2380\"],\"listen-peer-urls\":[\"https://0.0.0.0:2380\"],\"advertise-client-urls\":[\"https://etcd-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4001\"],\"listen-client-urls\":[\"https://0.0.0.0:4001\"],\"listen-metrics-urls\":[]}\n{\"level\":\"info\",\"ts\":\"2021-09-10T05:35:43.217Z\",\"caller\":\"embed/etcd.go:579\",\"msg\":\"serving peer traffic\",\"address\":\"[::]:2380\"}\n{\"level\":\"info\",\"ts\":\"2021-09-10T05:35:44.909Z\",\"caller\":\"raft/raft.go:923\",\"msg\":\"73a59233206550e4 is starting a new election at term 2\"}\n{\"level\":\"info\",\"ts\":\"2021-09-10T05:35:44.909Z\",\"caller\":\"raft/raft.go:713\",\"msg\":\"73a59233206550e4 became candidate at term 3\"}\n{\"level\":\"info\",\"ts\":\"2021-09-10T05:35:44.909Z\",\"caller\":\"raft/raft.go:824\",\"msg\":\"73a59233206550e4 received MsgVoteResp from 73a59233206550e4 at term 3\"}\n{\"level\":\"info\",\"ts\":\"2021-09-10T05:35:44.909Z\",\"caller\":\"raft/raft.go:765\",\"msg\":\"73a59233206550e4 became leader at term 3\"}\n{\"level\":\"info\",\"ts\":\"2021-09-10T05:35:44.909Z\",\"caller\":\"raft/node.go:325\",\"msg\":\"raft.node: 73a59233206550e4 elected leader 73a59233206550e4 at term 3\"}\n{\"level\":\"info\",\"ts\":\"2021-09-10T05:35:44.910Z\",\"caller\":\"etcdserver/server.go:2037\",\"msg\":\"published local member to cluster through 
raft\",\"local-member-id\":\"73a59233206550e4\",\"local-member-attributes\":\"{Name:etcd-a ClientURLs:[https://etcd-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4001]}\",\"request-path\":\"/0/members/73a59233206550e4/attributes\",\"cluster-id\":\"dd1aa8abe2fb79bc\",\"publish-timeout\":\"7s\"}\n{\"level\":\"info\",\"ts\":\"2021-09-10T05:35:44.912Z\",\"caller\":\"embed/serve.go:191\",\"msg\":\"serving client traffic securely\",\"address\":\"[::]:4001\"}\nI0910 05:35:44.939087    5180 controller.go:300] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-a\",\"peerURLs\":[\"https://etcd-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2380\"],\"endpoints\":[\"https://etcd-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4001\"],\"ID\":\"8333227433803469028\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-a\" endpoints:\"172.20.37.129:3996\" }, info=cluster_name:\"etcd\" node_configuration:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2380\" client_urls:\"https://etcd-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:3994\" > etcd_state:<cluster:<cluster_token:\"67Dy5dLv9XnowRTW46bRNA\" nodes:<name:\"etcd-a\" peer_urls:\"https://etcd-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2380\" client_urls:\"https://etcd-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4001\" quarantined_client_urls:\"https://etcd-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:3994\" tls_enabled:true > > etcd_version:\"3.4.13\" > }\nI0910 05:35:44.939185    5180 controller.go:301] etcd cluster members: map[8333227433803469028:{\"name\":\"etcd-a\",\"peerURLs\":[\"https://etcd-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:2380\"],\"endpoints\":[\"https://etcd-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:4001\"],\"ID\":\"8333227433803469028\"}]\nI0910 05:35:44.939201    5180 controller.go:639] sending member map to all peers: members:<name:\"etcd-a\" dns:\"etcd-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io\" addresses:\"172.20.37.129\" > \nI0910 05:35:44.939413    5180 etcdserver.go:248] updating hosts: map[172.20.37.129:[etcd-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io]]\nI0910 05:35:44.939427    5180 hosts.go:84] hosts update: primary=map[172.20.37.129:[etcd-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io]], fallbacks=map[etcd-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io:[172.20.37.129 172.20.37.129]], final=map[172.20.37.129:[etcd-a.internal.e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io]]\nI0910 05:35:44.939481    5180 hosts.go:181] skipping update of unchanged /etc/hosts\nI0910 05:35:44.939563    5180 commands.go:38] not refreshing commands - TTL not hit\nI0910 05:35:44.939576    5180 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-bb46e46694-4f6f7.test-cncf-aws.k8s.io/backups/etcd/main/control/etcd-cluster-created\"\nI0910 05:35:44.976900    5180 controller.go:393] spec member_count:1 etcd_version:\"3.4.13\" \nI0910 05:35:44.976977    5180 controller.go:555] controller loop complete\nI0910 05:35:54.978592    5180 controller.go:187] starting controller iteration\nI0910 05:35:54.978764    5180 controller.go:264] Broadcasting leadership assertion with token \"cZvv7xkDc8zBOLAVfD-VEA\"\nI0910 05:35:54.979111    5180 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-a\" endpoints:\"172.20.37.129:3996\" > leadership_token:\"cZvv7xkDc8zBOLAVfD-VEA\" 
healthy:<id:\"etcd-a\" endpoints:\"172.20.37.129:3996\" > > \nI0910 05:3